
Re: [ontolog-forum] Ontology similarity and accurate communication

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Tue, 11 Mar 2008 15:04:14 -0400
Message-id: <01fc01c883aa$b324ee00$196eca00$@com>

Matthew,

    I think it would not be particularly difficult to include both 3D and 4D in an ontology, but it depends on a specific interpretation of each, which may or may not suit the intuitions of those who prefer one or the other.  I do not believe it is necessary to consider that a physical object “is” a 4D entity or “is” a 3D entity; rather, those are two different ways to view and represent physical objects.  As I have mentioned before, I find both 3D and 4D views of objects to be useful, and would prefer to have both in my ontology, as below.

 

   One mapping is to interpret a 3D object as identical to a 4D timeslice of the corresponding 4D object that extends over a zero-length time interval (its start and end times are identical).   In this view, a 3D entity is also a 4D entity, but one with a special property: it is a 4D object whose time-slice is restricted to zero time extension.  The way to accommodate both 3D and 4D formalisms in this manner is to allow time-explicit assertions with 3D objects that would not be meaningful for 4D extended objects.  Then:

 

   (and

       (instance OBJ3D101 Object3D)

       (hasAttributeInInterval OBJ3D101 RedColor (interval 20080101 20080130)))

 

      This asserts that OBJ3D101 is red from Jan 1 to Jan 30, 2008, at every time point in that interval.

 

   In 4D one can assert:

     (and

         (startsAt OBJ4D101 20080101)

         (endsAt OBJ4D101 20080130)

          (hasAttribute OBJ4D101 RedColor))

  Here ‘hasAttribute’ means that the attribute holds throughout the time extension of the object.
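
  To make that reading concrete, here is a minimal sketch of how it might be axiomatized.  The relations ‘hasAttributeAtTime’ and ‘timePointIn’ are hypothetical helpers introduced only for this illustration (they are not part of the formalization above); the same point-wise reading is intended for ‘hasAttributeInInterval’ on 3D objects.

   ;; Sketch only: 'hasAttributeAtTime' and 'timePointIn' are hypothetical
   ;; relations used to spell out "holds throughout the time extension".
   (=>
      (and
          (hasAttribute ?OBJ4 ?ATTR)
          (startsAt ?OBJ4 ?T1)
          (endsAt ?OBJ4 ?T2)
          (timePointIn ?T (interval ?T1 ?T2)))
      (hasAttributeAtTime ?OBJ4 ?ATTR ?T))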

 

 ===========  To relate these two  ==================================

 

 Then, if one identifies the type Object3D as a subtype of Object4D, the relation between the two will be:

   (=>

      (and

          (instance ?OBJ3 Object3D)

          (instance ?OBJ4 Object4D)

          (isaTimeSliceOf ?OBJ4 ?OBJ4Whole)

          (startsAt ?OBJ4  ?T1)

          (endsAt ?OBJ4 ?T2)

          (isa3DProjectionOf ?OBJ3 ?OBJ4Whole))

     (<=>

         (hasAttributeInInterval ?OBJ3 ?ATTR (interval ?T1 ?T2))

          (hasAttribute ?OBJ4 ?ATTR)))

 

   *** The ‘isa3DProjectionOf’ relation relates the 3D and 4D objects and allows both to inhabit the same ontology, if the above axiom is accepted. ***

  This allows assertions using the time-explicit relations on 3D objects to be translated into 4D-speak and vice versa.

   The above example only maps the ‘attribute’ property, but if the logic allows, one could replace the explicit relation with a variable (possibly needing related pairs of relations, one for the 3D form and one for the 4D form), as sketched below.
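
 As a rough illustration, assuming the logic permits variables in the relation position (as KIF does) and that the paired relations share the argument pattern of the ‘attribute’ relations above, one such schema might look like the following.  The pairing relation ‘timeExplicitCounterpart’ is a hypothetical name introduced only for this sketch; it would relate each time-explicit 3D relation to its time-implicit 4D counterpart, e.g. (timeExplicitCounterpart hasAttributeInInterval hasAttribute).

   ;; Sketch only: 'timeExplicitCounterpart' is a hypothetical pairing relation.
   (=>
      (and
          (instance ?OBJ3 Object3D)
          (instance ?OBJ4 Object4D)
          (isaTimeSliceOf ?OBJ4 ?OBJ4Whole)
          (startsAt ?OBJ4 ?T1)
          (endsAt ?OBJ4 ?T2)
          (isa3DProjectionOf ?OBJ3 ?OBJ4Whole)
          (timeExplicitCounterpart ?REL3D ?REL4D))
      (<=>
          (?REL3D ?OBJ3 ?ATTR (interval ?T1 ?T2))
          (?REL4D ?OBJ4 ?ATTR)))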

 

============================================

An alternative method is to define a class ‘TimeSlice’ whose instances have a begin time point and an end time point (somewhat like the Cyc TemporalThing, except that Cyc makes things like physical objects necessary instances, which I prefer to avoid).  In this formalization a PhysicalObject is neither necessarily 3D nor necessarily 4D (it is dimension-neutral).  One can use the same 3D relation as above on a PhysicalObject in this formalism:

 

   (and

       (instance OBJ101 PhysicalObject)

       (hasAttributeInInterval OBJ101 RedColor (interval 20080101 20080130)))

 

 In order to create a 4D timeslice and assert a property on it, one creates an object that is an instance of both PhysicalObject and TimeSlice:

  (and

    (instance OBJ101TS  PhysicalObject)

    (instance OBJ101TS  TimeSlice)

    (startsAt OBJ101TS  20080101)

    (endsAt OBJ101TS  20080130)

    (hasAttribute OBJ101TS  RedColor))

  

This has the virtue of requiring only binary relations, which suits those who want to confine themselves to OWL.  Again, the attribute is interpreted as holding throughout the time interval.

 

The two ‘attribute’ relations are related by an axiom similar to the one above:

 

   (=>

      (and

          (instance ?OBJ-TS  PhysicalObject)

          (instance ?OBJ-TS  TimeSlice)

          (startsAt ?OBJ-TS  ?T1)

          (endsAt ?OBJ-TS  ?T2))

     (<=>

         (hasAttributeInInterval ?OBJ-TS ?ATTR  (interval ?T1 ?T2))

         (hasAttribute ?OBJ-TS ?ATTR)))

 

  In this case, both ‘attribute’ relations (time-explicit and time-implicit) can take ?OBJ-TS as an argument because it is an instance of both PhysicalObject and TimeSlice.  Here, one generates 4D objects when creating instances, by asserting them to be instances of TimeSlice.

 

     In neither case is it necessary to handle physical Events as anything other than temporally extended entities (even if they are instantaneous), so no bridging axioms are required there.  A physical event does not need to be asserted as an instance of TimeSlice, since it already has begin and end points as part of its own specification.

      An interesting question is whether anything other than a PhysicalObject could reasonably be asserted to be an instance of TimeSlice.  Roles can also have a beginning and an end, but I would prefer to have the ‘Role’ type as a subtype of TimeSlice, where it simply inherits the begin and end time points.  Then asserting something to be an instance of Role also makes it a TimeSlice, as in the sketch below.
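
     A minimal sketch of that arrangement follows; the existential axiom is only a guess at how TimeSlice might be axiomatized so that Role inherits its begin and end time points.

   ;; Sketch only: one possible axiomatization of Role as a subtype of TimeSlice.
   (subclass Role TimeSlice)

   (=>
      (instance ?X TimeSlice)
      (exists (?T1 ?T2)
          (and
              (startsAt ?X ?T1)
              (endsAt ?X ?T2))))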

     If one wants to work only with 3D or only with 4D, one can simply eliminate the parts one doesn’t need.  If later one wants to interact with an ontology having the alternative view, one can do a translation or just add back the bridging axiom(s).

 

   I don’t know if either of the above formulations would conflict with any axioms in your 4D system.  If they do, perhaps you might check to see if those axioms are actually necessary, or perhaps were added to create distinctions that do not actually factor into the performance of the system.

 

  Pat

 

Patrick Cassidy

MICRA, Inc.

908-561-3416

cell: 908-565-4053

cassidy@xxxxxxxxx

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of matthew.west@xxxxxxxxx
Sent: Tuesday, March 11, 2008 10:55 AM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] Ontology similarity and accurate communication

 

Dear Pat C.

 

I've been following this loosely, and I mostly agree with Pat H, even if he has been giving you rather a hard time with it. However, I think you are probably right when you say that the case you are trying to make has neither been definitively proved nor disproved. An experiment is therefore worthwhile.

 

I suggest that the 3D/4D interpretation of continuant and occurrent mentioned below is a well-known example of contradiction between upper ontologies. Whilst I am confident about how to map between them, I do not see how they can co-exist in a canonical ontology, i.e. one in which each object in the real world is represented by just one object in the ontology. If you can show otherwise for just this case, you would have gone a long way towards proving your thesis. I suggest trying this experiment.

 

Regards

Matthew West
Reference Data Architecture and Standards Manager
Shell International Petroleum Company Limited
Registered in England and Wales
Registered number: 621148
Registered office: Shell Centre, London SE1 7NA, United Kingdom

Tel: +44 20 7934 4490 Mobile: +44 7796 336538
Email: matthew.west@xxxxxxxxx
http://www.shell.com
http://www.matthew-west.org.uk/

-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx]On Behalf Of Patrick Cassidy
Sent: 10 March 2008 23:34
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Ontology similarity and accurate communication

Responses to selected comments:

 

[[[1]]]

[PH] >> Second, even a conceptual defining vocabulary is not a formal foundation ontology, since you are using 'defining' in the dictionary sense (which is of limited, if any, use when describing formal ontologies).

       I have tried to explain several times that the word “defining” in the phrase “conceptual defining vocabulary” is used in an analogical way, and that the foundation ontology will consist mostly of concept specifications that use necessary conditions, not necessary and sufficient conditions.

 

Then it is not analogous to the dictionary case (which as far as I can see, is the only motivation for your entire proposal.) You can't have it both ways: if you are making an analogy based on definitions, but not meaning actual definitions, then you need to either elucidate the analogy more carefully or abandon it (I'd suggest the latter).

 

A definition is a description of the meaning of a term, whether it is a linguistic definition or a logical specification. That is the analogy.  The

 

 

[[[2]]]<snip>

[PC] >>  Of course, the process of language learning involves multiple clues, including possibly an innate ability to exclude certain combinatorially possible syntactic constructions from the grammar.  My point was that the language learning process is sufficiently *similar* in different people learning the same native language, that the process supports the ability of learners  to develop a common internal ontology (of unknown structure) that is very close.

 

[PH] > Again, that simply does not follow. You are begging the question entirely. I didn't bring up this whole topic of language learning, note: you did in order to justify this idea of a 99.9% common ontology, and I repeat, regardless of the psycholinguistic data, in fact, that is poppycock: nothing about language learning supports the claim that we all have 99.9% agreement on our mental models.

 

    My opinion that our mental models for the basic terms are over 99.9% in agreement is based on personal observation of the high accuracy of communication when using the basic words.  I believe that that level of accuracy requires common mental models for the concepts represented by those basic words.  You don’t.  Fine.  But you shouldn’t assert that it has been disproven unless you can cite an accessible reference and point to the passage where the data is summarized.

    My mentioning elements of language learning merely provides a possible mechanism for the achievement of such commonality.  It was never advanced as more than that.  If you think that commonality can’t be achieved that way, fine.  But you shouldn’t assert it has been proven to be impossible unless you can cite an accessible reference and point to the passage where the data is summarized.

  

[[[3]]]  [PC] >> The question of whether people actually use an innate common ontology is a scientific question, but the methods for investigating that are likely to be horrendously complex, and I do not think that past efforts to create a foundation ontology at Cyc actually address this specific question.

 

[PH] I agree, and it would have saved a lot of time if you had not brought it up.

     Now, this puzzles me.  It was you who brought up the experience at Cyc as evidence that we don’t have common mental models.  Now you say that the Cyc experience doesn’t address the question.  Can you reconcile these two assertions?  I can’t find the bridging axioms.

 

    On the other hand, I think that trying to develop a foundation ontology *explicitly* as a Conceptual Defining Vocabulary *would* provide evidence for or against this hypothesis, and in the process may help develop a common foundation ontology that is more widely used than the existing ones.   You don’t think the answer is worth the effort?  Fine.  We just disagree.

 

[[[4]]] [PC] >>  If that process results in a **logical contradiction**, it is my expectation that one or more of the formalizations specifies a concept that is not primitive

 

[PH] > OK, let me immediately give you a counterexample. DOLCE and BFO both require the categories of continuant and occurrent to be disjoint: nothing can possibly be both a continuant and an occurrent in these ontologies. Other ontologies (I have one, and I think the same is true of Cyc) allow both categories with pretty much the same properties of the respective types, but allow them to overlap. These two categories of ontology are logically incompatible: adding a Cyc axiom to DOLCE will immediately produce inconsistencies. And yet this is all concerned with very basic topics of how to describe time and change, without which a nontrivial ontology can hardly be said to be possible.

 

Good example.  Here is a case where it seems that the disputants are using the terms “continuant” and “occurrent” in different senses.  Clearly, if some entity is an instance of “occurrent” in one ontology but not an instance of “occurrent” in the other, the meanings differ: they are using the terms in different senses.  Once this is recognized, it is necessary to explore the intended meanings of those terms in more detail in order to decide why one ontologist believes that ?X is an instance but the other does not.  I would be delighted to explore any such example in detail, and believe it likely that the problem will be shown to be, at base, a terminology dispute.  In order to discover the problem, one needs to analyze the intended meanings of those terms in more detail than can be achieved by merely pointing to subclasses or instances; the subclass/instance tactic clearly fails in this case to resolve the intended meanings.  It will be necessary to continue dissecting the intended meanings into increasingly finer parts and formalizing them, until the part of the intended meaning that differs between the two ontologists becomes clear.  Then it will be possible to recompose the basic categories (which may have to change, but now for a good reason) so that each ontologist can specify the intended meanings without causing a logical contradiction.  Such a process would in fact help discover what the true fundamental elements of meaning are.  My expectation is that if the analysis reveals that there are things that can reasonably be labeled “continuant” and “occurrent” and that the two are truly disjoint, then the ontology that has instances considered to be of both types may in fact be a merger of two different views of the same entity.  We would need to look at the specifics of the representations to determine what the problem is.  Perhaps not something that can be done in an on-line discussion, but maybe worth trying.

 

     You may not think that such a process is practical, but I think it is the most plausible path to achieving true semantic interoperability.     BFO is structured as a single-inheritance hierarchy.  If it is considered immutable, it will be quite impossible to include it, without change, into a common ontology.  But I think the required changes will not be too many (I have identified some required changes).  Whether any ontologist will be willing to make any change depends totally on the motivation.  I expect that a substantial project to develop a common foundation ontology, with enough participants to guarantee that it will have wide usage, would be a powerful motivation.

 

    Question: do you think that equating a zero-time-interval timeslice of a person with a 3D “Continuant” person would lead to any logical incompatibility?  It seems to me that the only difference would be in the way the assertions include the time element.  Here I can imagine the bridging axioms.  Yet 3D/4D  is often cited as an incompatibility.

 

PatC

 

Patrick Cassidy

MICRA, Inc.

908-561-3416

cell: 908-565-4053

cassidy@xxxxxxxxx

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
Sent: Monday, March 10, 2008 3:02 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Ontology similarity and accurate communication

 

At 1:36 PM -0400 3/10/08, Patrick Cassidy wrote:


To clear up misinterpretations:

 

[[[1]]]

[PH] >> Second, even a conceptual defining vocabulary is not a formal foundation ontology, since you are using 'defining' in the dictionary sense (which is of limited, if any, use when describing formal ontologies).

       I have tried to explain several times that the word “defining” in the phrase “conceptual defining vocabulary” is used in an analogical way, and that the foundation ontology will consist mostly of concept specifications that use necessary conditions, not necessary and sufficient conditions.

 

Then it is not analogous to the dictionary case (which as far as I can see, is the only motivation for your entire proposal.) You can't have it both ways: if you are making an analogy based on definitions, but not meaning actual definitions, then you need to either elucidate the analogy more carefully or abandon it (I'd suggest the latter).

 

 But if you believe that only an ontology with all N&S definitions will be useful, then I can only disagree.

 

Of course not, but then I don't use the term "definition".

 

Too many useful basic notions are not N&S definable.  If you are merely objecting to the use of the term “defining” in that phrase, then substitute any other word you consider less problematic for your own use.


No, I want to understand what YOU mean.

 

[[[2]]]

[PC] >> You say that psycholinguistic research has already disproven the notion that children can learn to communicate with a common vocabulary by the experience of seeing words used in context to refer to objects and events.

 

[PH]  > No, I did not say that. At some level this is clearly true, as this is simply a description of what children in fact actually do (if you substitute 'hearing' for 'seeing', and avoid the question-begging use of 'refer to' in the last phrase.) The question is, how do they do it? And what I heard you saying was that they did it by a process of association: that when a word is used 'near' an object or event, that proximity is enough to induce an association between words and their meanings which constitutes the word-meaning mapping underlying linguistic competence.

 

     And no, I did not say that mere proximity of a word and object is enough to associate a term with a meaning.  I never used the word ‘near’.  What I said was “words are associated with their meanings by experience in context”.

 

I used scare quotes to acknowledge that you did not use the word, but "experience in context" implies near, I presume. Or if it does not, perhaps you could tell us what YOU mean by 'context'.

 

  Of course, the process of language learning involves multiple clues, including possibly an innate ability to exclude certain combinatorially possible syntactic constructions from the grammar.  My point was that the language learning process is sufficiently *similar* in different people learning the same native language, that the process supports the ability of learners  to develop a common internal ontology (of unknown structure) that is very close.

 

Again, that simply does not follow. You are begging the question entirely. I didn't bring up this whole topic of language learning, note: you did in order to justify this idea of a 99.9% common ontology, and I repeat, regardless of the psycholinguistic data, in fact, that is poppycock: nothing about language learning supports the claim that we all have 99.9% agreement on our mental models.

 

   I merely alluded to a couple of components of the learning process as examples of things that would be similar among learners.  Of course, saying this doesn’t prove it, and your doubting it doesn’t disprove it either.

 

Of course not, but the burden of proof is on you, seems to me.

 

    You correctly point out the disputes among adults when trying to formalize a common ontology.  But in those cases the terms used need to acquire a much more precise meaning than the terms used in normal communication.

 

Exactly. It follows, then, that agreement on word meanings in normal usage does NOT imply agreement on meanings at the level of precision needed to support a formal ontology. Which is exactly what I have been arguing through this whole thread, and you have until now been denying.

 

 Even though they consciously try not to get hung up on terminology, it appears to me that some disagreements still are caused by a desire to fix a specific meaning to a term, where different ontologists have a different notion of how that term should be formalized.  My suggestion is to formalize all of the different notions, and give them different names.

 

In what POSSIBLE sense can that be said to amount to agreement to within 99.9% ? I agree this is an interesting, though naive, engineering strategy, but it can't possibly be aligned with your arguments about common mental ontologies and language learning.

 

 If that process results in a **logical contradiction**, it is my expectation that one or more of the formalizations specifies a concept that is not primitive

 

OK, let me immediately give you a counterexample. DOLCE and BFO both require the categories of continuant and occurrent to be disjoint: nothing can possibly be both a continuant and an occurrent in these ontologies. Other ontologies (I have one, and I think the same is true of Cyc) allow both categories with pretty much the same properties of the respective types, but allow them to overlap. These two categories of ontology are logically incompatible: adding a Cyc axiom to DOLCE will immediately produce inconsistencies. And yet this is all concerned with very basic topics of how to describe time and change, without which a nontrivial ontology can hardly be said to be possible.

 

 This latter expectation is part of the “conceptual defining vocabulary (CDV) hypothesis”.  But this can only be proven by attempting specifically to create a foundation ontology as a CDV.


As is often the case, it is much easier to refute than to prove.

 The question of whether people actually use an innate common ontology is a scientific question, but the methods for investigating that are likely to be horrendously complex, and I do not think that past efforts to create a foundation ontology at Cyc actually address this specific question.

 

I agree, and it would have saved a lot of time if you had not brought it up.

 

 The development of a foundation ontology as a CDV would not itself prove that the internal ontologies of sixteen-year-olds are similar, but it would at least show that it is possible to have some reasonably small set of concepts that can be used in combination to describe the meanings of a very large number of other more specialized concepts.  That knowledge would be very useful, in my opinion.

 

    The common terms in English can have multiple meanings.  Yet, it appears to me that assertions using those terms by people trying to be clear are almost always (> 99%) unambiguous, taken in the full context of the text.   So I believe that there is a disambiguation process that results in the selection of the proper sense, at least 99% of the time.

 

That begs the question by assuming that there is a single sense. But you may work with your sense and I with my sense, and most of the time our conclusions agree well enough for us to cooperate. We say that we both understand 'the' meaning of the word, when there is no single meaning.

 

 Three amplifications:

(1)    That statistic represents a use-weighted accuracy.  There may be some basic terms that are misinterpreted with a lot higher frequency, but those terms, if they exist, appear to be used infrequently.  This accuracy is intended to refer to accuracy in interpreting written text, read in a context in which the background of the communication is already known.  Spoken language may be more accurate, by having more situational referents and an opportunity for feedback; it may be less accurate if too much knowledge in the hearer is assumed.

(2)    The senses in which terms are interpreted may still be quite vague.  That would leave room for misinterpretation in the case where the vagueness masks an important distinction.  But I believe that where there are important distinctions, people trying to be clear will use more specific terms or phrases to avoid significant misinterpretation.

(3)    The senses that are represented by the basic linguistic defining vocabulary may not be identical to some enumerated sense in a given dictionary.

 

[[[3]]] [PC]  >> None of this is a theory of language acquisition, but it is a description of how people can acquire the *same* meanings

         [PH] WHY is it that? You keep saying that, but even your own account does not support this claim.

           I didn’t think that suggesting a hypothesis, stated as such, required an actual proof of the hypothesis.  But I did say that my own observations suggest that communication using the basic vocabulary is highly accurate, and that that implies a common set of meanings that can be attached to the words.  This estimate of accuracy is a non-systematic observation that would need proof if one wanted to investigate the hypothesis.  I think it would be the first thing that needs proof, if one were to properly investigate the notion of a common ontology used in language understanding.  I am not aware of whether the accuracy of basic communication has been formally investigated (I didn’t do a literature search; Googling some relevant phrases has not led to anything on point).  If you don’t think that linguistic communication using the basic words (Longman’s, e.g.) is accurate, perhaps you have some reason for that skepticism, which I would like to hear.  But the ambiguity of individual words is not at issue.  The question is, if one person tries to explain something to another, using the basic words in their basic senses, how often is the explanation misunderstood?

 

Suppose the answer is, almost never. It STILL does not follow that people all have the same internal ontology.

 

PatH

 

-- 

---------------------------------------------------------------------
IHMC               (850)434 8903 or (650)494 3973   home
40 South Alcaniz St.       (850)202 4416   office
Pensacola                 (850)202 4440   fax
FL 32502                     (850)291 0667    cell
http://www.ihmc.us/users/phayes      phayesAT-SIGNihmc.us
http://www.flickr.com/pathayes/collections


