
Re: [ontolog-forum] Foundation ontology, CYC, and Mapping

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Thu, 4 Mar 2010 02:19:54 -0500
Message-id: <039701cabb6b$1665de30$43319a90$@com>

A response to Ali’s note (Wed March 3, 11 PM EST):

  Thanks for presenting the contrary position.  Specific answers are in-line, but here is one general comment.

 

The process of creating mappings and translations between different ontologies, as I understand your approach, is not disjoint with the use of an FO.  I completely agree with your point that expressiveness at least equal to FOL is needed for accurate mapping.  And if specific mappings are created between two particular ontologies, those mappings may be as accurate as, and more efficient than, mappings generated by means of a common FO.  So specific pre-existing mappings can be used where they exist, and mapping via the FO can be used where they don't: if you have some direct mappings, the FO serves as a backup where direct mappings (or transitive equivalence relations) are missing.  If some set of ontologies is mapped to an FO in such a way that one can describe every element in every ontology using only the elements of the FO, then whatever you choose to call the FO might function as I would hope the FO to function.  And mapping every ontology to the FO seems to me to take a lot less effort than generating multiple pairwise mappings among the ontologies of that set.  Moreover, the transitive mappings available by routing individual ontologies through a common ontology would be accurate only for equivalence relations, which seem unlikely to hold for more than 70% of the types in any two ontologies, and fewer still when there is an intermediate ontology.  But even that only deals with the types (classes).  The more important mappings are for the semantic relations, and these are very difficult to map, since their meanings (if domain and range are used) depend on the hierarchy and on logical inferences; if the hierarchies are not exactly the same, it will be very hard to translate the semantic relations.  It is still hard to translate semantic relations using an FO, but if all semantic relations are expressible using the FO terms, it at least becomes possible.
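
As a rough illustration of mapping through a common FO, here is a minimal Python sketch in which each domain type is recorded as a definition over FO primitives and two types are matched when their FO-level expansions coincide.  The ontologies, terms, and primitives are all hypothetical, and real definitions would be logical formulas rather than sets of primitives:

    # Each domain type is recorded by the FO primitives its definition uses
    # (real definitions would be formulas; the lookup/compare structure is the same).
    ONTOLOGY_A = {
        "A:Employee": frozenset({"fo:Person", "fo:hasRole", "fo:WorkRole"}),
        "A:Machine":  frozenset({"fo:Artifact", "fo:hasFunction"}),
    }
    ONTOLOGY_B = {
        "B:StaffMember": frozenset({"fo:Person", "fo:hasRole", "fo:WorkRole"}),
        "B:Device":      frozenset({"fo:Artifact", "fo:hasFunction"}),
    }

    def map_via_fo(src, tgt):
        """Pair up terms whose FO-level expansions are identical."""
        by_expansion = {expansion: term for term, expansion in tgt.items()}
        return {term: by_expansion.get(expansion) for term, expansion in src.items()}

    print(map_via_fo(ONTOLOGY_A, ONTOLOGY_B))
    # {'A:Employee': 'B:StaffMember', 'A:Machine': 'B:Device'}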

 

I emphasize that the goals of mapping between domain ontologies and using the FO are not incompatible; I just believe that there are things you can do with an FO that are not possible, or not practical, when trying to map among multiple ontologies without generating an FO.  You seem to think that specific mappings will be adequate, but I am especially concerned with information on the internet whose ontology has not been mapped to yours.  The FO covers that case automatically; I can't see how specific mappings can.

 

More details on specifics below.

 

Patrick Cassidy

MICRA, Inc.

908-561-3416

cell: 908-565-4053

cassidy@xxxxxxxxx

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Ali Hashemi
Sent: Wednesday, March 03, 2010 10:11 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping

 

Dear Pat C,


I do not doubt that everyone on this list can think of some additional
*potential* problems with this process.  But the first question to be
answered is: is there a better process to support very broad semantic
interoperability with the same accuracy?


There are a number of other ways this interoperability may be achieved, without recourse to a single FO.

First, if the theories in a repository are suitably modular, each module on its own (a set of axioms) is an ontology. Additionally, every combination of modules (that is consistent and makes sense) is also a unique ontology.

[[PC]]  The cases I have seen where pluggable modules exist, and do not depend on some ontology of more fundamental concepts to specify the meanings in the modules, cover a tiny, **tiny** fraction of what one would need in an ontology useful for practical purposes.  Perhaps I have missed some important developments.  Can you provide me with examples of applications using ontologies developed from pluggable modules?
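
For concreteness, a minimal Python sketch (hypothetical modules and axioms) of the module-combination picture described above, in which an ontology is simply the union of the modules it imports; nothing in the sketch settles whether the modules would still need a shared layer of more fundamental concepts:

    from itertools import combinations

    # Modules are just named axiom sets (axioms shown as strings for brevity).
    MODULES = {
        "time":    {"forall t. Instant(t) -> TimePoint(t)"},
        "process": {"forall p. Process(p) -> exists t. occursAt(p, t)"},
        "agent":   {"forall a. Agent(a) -> exists p. performs(a, p)"},
    }

    def ontology_from(module_names):
        """An ontology is the union of the axioms of the modules it imports."""
        axioms = set()
        for name in module_names:
            axioms |= MODULES[name]
        return axioms

    # Every non-empty selection of modules yields its own ontology.
    for k in range(1, len(MODULES) + 1):
        for combo in combinations(MODULES, k):
            print(combo, "->", len(ontology_from(combo)), "axiom(s)")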


Moreover, one need not agree on a unique set of primitives. There might be 7-8 different sets with mappings between them, but no real "join" or generalization of the primitives; i.e., 7-8 different FOs that all efforts have been unable to merge.

That is, instead of a single hub and its spokes, you have 7-8 hubs and their ensuing spokes, with the hubs connected to one another.

[[PC]] How would the hubs be connected to each other?  If the mappings are not one-to-one, what kinds of relations would have to be defined?  Would you introduce new, more basic elements to create the translations among the 7-8 hubs?  Would accurate translations be possible?  (If the mappings are one-to-one, the ontologies are essentially identical, a case I have never seen.)
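
A hedged sketch of the multi-hub picture: with several hubs connected by pairwise mapping modules, translating between spokes of different hubs means composing a chain of hub-to-hub mappings, and the length of that chain is a rough proxy for how much fidelity is at risk when the mappings are not exact one-to-one equivalences.  Hub names are invented:

    from collections import deque

    # Edges = recorded mapping modules between hubs (hypothetical hub names).
    HUB_MAPPINGS = {
        ("FO-1", "FO-2"), ("FO-2", "FO-3"), ("FO-3", "FO-4"),
        ("FO-1", "FO-5"), ("FO-5", "FO-6"),
    }

    def mapping_path(src, dst):
        """Shortest chain of mapping modules between two hubs (BFS)."""
        graph = {}
        for a, b in HUB_MAPPINGS:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no chain of mappings connects the two hubs

    print(mapping_path("FO-4", "FO-6"))
    # ['FO-4', 'FO-3', 'FO-2', 'FO-1', 'FO-5', 'FO-6']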

 

Here's a dichotomy:  (1) Two candidates for an FO are logically contradictory.  Attempting to map between them without a stringent effort to isolate and avoid the inconsistency may cause disastrous errors in translation.

(2) The two candidates are not logically contradictory.  In that case it should be possible to merge them.  It may take considerable effort, but if the original ontology developers are around to resolve ambiguities as the mapping proceeds, it should be feasible.  Without the originators available, the ambiguities will in general leave considerable uncertainty and make mapping highly error-prone.  Except for CYC, the ontologies I have seen generally have many ambiguous types and even more ambiguous semantic relations.

 

And the best way to avoid ambiguities is to have the domain ontologies specified using a common set of primitives that have been thoroughly reviewed, tested, and documented to be sure that their meanings are clear.
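
The first horn of the dichotomy can at least be detected mechanically.  A minimal sketch using the Z3 solver (pip install z3-solver) that checks whether two candidate FO fragments are jointly satisfiable; the axioms are invented placeholders, not fragments of any actual FO:

    from z3 import Bool, Implies, Not, Solver, sat

    ContinuantsExist = Bool("ContinuantsExist")
    OnlyProcessesExist = Bool("OnlyProcessesExist")

    fo_candidate_1 = [ContinuantsExist]
    fo_candidate_2 = [OnlyProcessesExist,
                      Implies(OnlyProcessesExist, Not(ContinuantsExist))]

    solver = Solver()
    solver.add(*fo_candidate_1, *fo_candidate_2)
    if solver.check() == sat:
        print("jointly satisfiable: a merge is at least logically possible")
    else:
        print("contradictory: merging requires isolating the conflicting axioms")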


Anyway, the basic idea re mapping is still the same. That merge engine which you allude to can manifest in a number of ways. If the relationships/mappings between ontologies are already recorded in the metadata (i.e., T1 faithfully interprets T2 given mapping module T1-2), a simple reading of the metadata should be able to specify a mapping. Note that all this is quite independent of requiring anything to go through an FO.

[[PC]]  Sure, if you have enough metadata; but the point I was making was that creating the metadata to do such mappings is going to be a lot faster if you develop an FO with primitives.  If you have a merge engine that can properly relate ontology elements in independent ontologies, it will in effect be using an FO with primitives, except in the minority of cases where accurate one-to-one maps exist for all the ontologies in the mix.
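
A minimal sketch of the combined policy under discussion, assuming hypothetical registries: use a recorded direct mapping module when one exists, and fall back to comparing FO-level definitions when it does not:

    DIRECT_MAPPINGS = {
        ("OntA", "OntB"): {"A:Employee": "B:StaffMember"},  # hand-built T1-to-T2 module
    }
    FO_DEFINITIONS = {
        "OntA": {"A:Machine": "fo:Artifact & fo:hasFunction"},
        "OntC": {"C:Device":  "fo:Artifact & fo:hasFunction"},
    }

    def translate(term, src, dst):
        """Prefer a recorded direct mapping; otherwise compare FO-level definitions."""
        direct = DIRECT_MAPPINGS.get((src, dst), {})
        if term in direct:
            return direct[term], "direct mapping module"
        expansion = FO_DEFINITIONS.get(src, {}).get(term)
        if expansion is not None:
            for dst_term, dst_expansion in FO_DEFINITIONS.get(dst, {}).items():
                if expansion == dst_expansion:
                    return dst_term, "via common FO definitions"
        return None, "no mapping found"

    print(translate("A:Employee", "OntA", "OntB"))  # ('B:StaffMember', 'direct mapping module')
    print(translate("A:Machine", "OntA", "OntC"))   # ('C:Device', 'via common FO definitions')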


If mappings don't exist, we're still in luck. There's a semantic mapping tool (prototype in my MASc thesis, updated version coming out soon) which does exactly what you wanted. It finds where two ontologies agree, where they differ, and on what terms. It can do this wholly automatically, but it is not very efficient (it would test all possible relations between terms, whereas a human could suggest which term-term relations to explore). It does this using any referent ontology; it need not be an FO, but it could very well be an FO...

[[PC]] I will certainly be interested in seeing the concrete results of any mapping tool that appears to perform well enough to sustain some level of accuracy over an inference chain of, say, 6 to 12 steps.  The paper you referenced by Euzenat surveying ontology mapping indicates the problem that makes me despair: an F-measure of less than 80% may be interesting and potentially useful for generating search results to present to humans, but is utterly hopeless for logical inferencing.  And what I am aware of addresses only the easy part.  Since all of the meaning in an ontology comes from the logical inferences generated by the semantic relations, the most important matching would be for the semantic relations.  Can you point me to a specific paper that shows promising results for that task?  Is your thesis accessible online?
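
Back-of-envelope arithmetic behind the inference-chain worry, assuming each mapping step is correct with probability p and errors compound independently:

    # If each mapping step is right with probability p and errors are treated as
    # independent, a chain of n inference steps is right only about p**n of the time.
    for p in (0.80, 0.95, 0.99):
        for n in (6, 12):
            print(f"per-step accuracy {p:.2f}, chain length {n:2d}: {p ** n:.2f}")
    # At p = 0.80 this is roughly 0.26 for a chain of 6 and 0.07 for a chain of 12,
    # which is why an F-measure near 80% is unusable for long inference chains.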

 

 

The above comments distinguish two different issues that can be discussed
separately: (Issue1) do we actually *want* every meaning in the FO itself to
change when a new element is added, or do we have criteria of performance
for the FO that guard against unintended changes? (I would use a test
application suite to detect and avoid unintended changes.)  And
(Issue2) given that the meanings of domain types will differ when different
specialized types are created in domain ontologies, how will this affect the
performance of the FO as a support for interoperability?  (There will be no
contradictions, but less certainty as to whether inferences about a given
subtype in one domain do or do not apply to the parent type or a different
subtype in another domain).


This is a question of how people want to use their given ontology. Often, the terms defined in an FO are wholly unnecessary for the application domain.

Say I have a manufacturing plant, and I have some machines, employees and a process. I might not really care about the philosophical implications of whether number is a subtype of abstract entity. This suggests to me that it might be limiting to commit to a particular FO. But whatever, let's work out an example within the context of a single FO.

I care about processes, actions, and the agents that perform them. If I want to communicate with another plant, we still don't really need to agree on an FO in its entirety. All we need to figure out is what our systems are committing themselves to. At most (if at all), we would only plug into a sub-component of the FO.

Now, if any such FO is suitably modular, then the addition of a novel element need not change the meaning across the board.

In fact, I suggest it is misleading to refer to the FO as a single ontology. It is really a collection of ontologies, and depending on which modules you decide to include, you have a different ontology each time. So were I to plug into the FO, I would care about the modules pertaining to Processes (which might take me to activities, time, duration, etc.), and another module about agents, taking me to people, robots, etc.

[[PC]] There may well be parts of the FO that can be segregated out as modules (I have separable modules in the COSMO, such as one for mapping the ontology to databases, with database-specific types and relations), but there will have to be an indispensable framework that integrates the modules, and my experience with COSMO suggests that that would be well over half the ontology.  I also anticipate that users will want a tool that takes the FO and some domain ontology specified using the FO, and extracts from the FO only those elements required to create the domain ontology.  This would be useful to minimize the resulting ontology for computational efficiency, and would not require that the FO be modularized (though that wouldn't hurt).  In general it will be unpredictable what modules are needed for the next ontology you encounter on the internet.  See the next comment.
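
A minimal sketch of the kind of extraction tool described above, assuming a hypothetical table recording which FO elements each element depends on; extraction is just the transitive closure of those dependencies, starting from the terms the domain ontology actually uses:

    FO_DEPENDS_ON = {
        "fo:Employee":       {"fo:Person", "fo:hasRole"},
        "fo:Person":         {"fo:Agent"},
        "fo:hasRole":        {"fo:Role"},
        "fo:Agent":          set(),
        "fo:Role":           set(),
        "fo:Number":         {"fo:AbstractEntity"},  # untouched if nothing needs it
        "fo:AbstractEntity": set(),
    }

    def extract(seed_terms):
        """Transitive closure of the FO elements the seed terms depend on."""
        needed, frontier = set(), set(seed_terms)
        while frontier:
            term = frontier.pop()
            if term not in needed:
                needed.add(term)
                frontier |= FO_DEPENDS_ON.get(term, set())
        return needed

    print(sorted(extract({"fo:Employee"})))
    # ['fo:Agent', 'fo:Employee', 'fo:Person', 'fo:Role', 'fo:hasRole']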


I need not commit to the whole FO, just parts of it. A change in a module that I'm not plugged into (or one more general than it) would not change the ontology.

[[PC]] Fine for any two ontologies that are directly hand-mapped.  What do you do for information posted on the internet, assuming that you have the ontology the posting group uses?  How accurate would an automatic mapping be under those circumstances?  The merging process for different ontologies can be automatic and accurate if the elements of domain ontologies are logically specified only with elements from a common FO.  But I cannot visualize how any automatic mapping can achieve any level of accuracy without the use of a common standard of meaning that can express all the elements in the ontologies to be mapped.  I would be fascinated if you could show an example of a mapping between independently developed, practical-sized ontologies (non-toy ontologies, used in some application) that can give a mapping accuracy of at least 99% for both classes and relations.  Or at least outline a procedure that you think can do that.

 

End of comments.

 

PatC


Even if it were, the distinction between conservative and nonconservative extension plays a role too. In Pat Hayes' example, if the previous definition of human was silent on these novel properties, adding them would be a conservative extension, and the addition of M's axioms would essentially be orthogonal to my original reasoning process.

Now, in terms of intended meaning it makes a huge difference. In terms of reasoning, it can be neatly delineated. However, if the extension were nonconservative, we would still have a change in intended meaning, and the change in logical meaning would be complicated as well (though still manageable if modules are used).
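
A small worked example of the conservative/nonconservative distinction, using the Z3 solver with invented predicates and constants: the base theory knows only Human(socrates); extension E1 adds facts about a new property and entails nothing new in the old vocabulary (conservative), while extension E2 lets a new fact in the old vocabulary, Human(plato), be derived (nonconservative):

    from z3 import (DeclareSort, Const, Function, BoolSort,
                    ForAll, Implies, Not, Solver, unsat)

    Thing = DeclareSort("Thing")
    Human = Function("Human", Thing, BoolSort())
    Featherless = Function("Featherless", Thing, BoolSort())
    socrates, plato, x = (Const("socrates", Thing),
                          Const("plato", Thing), Const("x", Thing))

    base = [Human(socrates)]
    e1 = [ForAll([x], Implies(Human(x), Featherless(x)))]                      # conservative
    e2 = [Featherless(plato), ForAll([x], Implies(Featherless(x), Human(x)))]  # nonconservative

    def entails(axioms, goal):
        """True when the axioms logically entail the goal (refutation check)."""
        s = Solver()
        s.add(*axioms, Not(goal))
        return s.check() == unsat

    print(entails(base + e1, Human(plato)))  # False: E1 adds nothing about Human
    print(entails(base + e2, Human(plato)))  # True:  E2 forces a new Human fact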

Cheers,
Ali




--
(•`'·.¸(`'·.¸(•)¸.·'´)¸.·'´•) .,.,


