
Re: [ontolog-forum] Fw: Context in a sentence

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Wed, 27 Jan 2010 16:28:01 -0500
Message-id: <060401ca9f97$9ae57ed0$d0b07c70$@com>


   There are a few points where it seems you have misunderstood my previous message – it was short, and a full exposition would take a lot more verbiage, so no fault is suggested here.  But to clarify:


[SB]     I suspect that the claim "we need a common foundational ontology" is exactly equivalent to David's quotation "(1) the entire meaning of a message is self-contained in said message", since if we have a common foundational ontology we should be able to make statements in the ontology that are true irrespective of context.


[PC] Actually, what I was saying was exactly the opposite – that the meaning of a message cannot be properly interpreted unless both sender and receiver have a lot of common knowledge outside the message itself that gives meaning to the message.  This common prior knowledge can include knowledge of contexts, which can be explicitly referenced in the message.


[SB] So, no context free language, no common foundational ontologies

[PC] Sure – the grammar is context-free.  The context is supplied by the agreement on the basic vocabulary (elements in the Foundation Ontology), and explicitly referenced where needed in any communication.


[SB]  However, more broadly, I would reject the idea that there is only one way to talk about the world.


[PC] I never said that, and I don’t believe it either.  But regardless of how one chooses to talk about the world, any two communicating agents must talk about it **in the same language**, or else fail to communicate accurately.


[SB] >> One might propose that, because we are all the same type of creature (human) that we must therefore all use the same mechanisms for thought, and this must lead to the same foundational concepts


[PC] Not necessarily, though see the next paragraph.  I am sure that different people do have different fundamental assumptions and different beliefs, and use words in different ways, and all of that creates a great risk of faulty communication, as one can observe in many situations such as this forum.  In fact, it is probably impossible for people to have **exactly** the same internal states, though with effort we can get close enough to each other for communication accurate enough for most practical purposes (at least when they are not trying to score debating points).  But computers **can** have identical sets of theories (the computer version of beliefs), since the computer owners are in complete control and only have to choose to use the same set of theories in order to communicate accurately.

My point was that since we do have control over our computers' theories, we can get them to communicate accurately by using the same sets of theories.  That doesn't mean that there is only **one** true set of theories; it does mean that any group that agrees that **some** particular set of theories is adequate to express what they want their computers to communicate can use that set to enable accurate computer communication.  If there are some who feel that the theories are not adequate for their purposes, they can choose not to communicate accurately with the community that does use the common language – or make some adjustments to get an approximate interpretation – or, better yet, try to collaborate with the others to find some set of theories that includes their needs as well.  But once there is **some** community that uses a common foundation ontology as the basis for accurate computer communication among useful programs, it is likely that one such foundation ontology will become the most commonly used, and therefore will provide the greatest audience.
If a different foundation ontology is used in some specialized community, it can become the preferred basis for communication there, but that community will then not communicate accurately with the other, larger audience.  I expect that one common foundation ontology will eventually dominate the computer communication media for the same reason that English dominates in international scientific conferences – it gives the greatest value per unit of effort expended.  That situation may not last forever – English may be replaced by, say, Chinese . . .  that depends on unpredictable factors.


However, the variation among humans in the **basic** concepts they use may in fact be quite small.  Most differences I have observed depend on different beliefs (or theories) or even mere preferences, but those different beliefs can actually be expressed using a common set of accepted beliefs – what I refer to as the "defining vocabulary".  (Once you have defined what God is, you can then decide whether to believe that S/he exists, and express those beliefs as different theories, described by the same basic language.)  In fact, if different beliefs could not both be expressed in some more basic language, we would have no way of knowing that they are different.  Likewise, with ontologies, different ontological theories can probably all be expressed using the same set of fundamental ontological elements – which is what I think of as the foundation ontology: the set of ontological elements sufficient to logically describe all of the ontology elements in any of the domain ontologies that use the FO to communicate.  This is in a way analogous to the linguistic "defining vocabulary", but one can totally ignore the analogy, as it is not essential to the point being made and seems to cause at least as much disagreement as it helps some to understand the point.  Human languages aren't ontologies, and analogies are only made to help those who find references to well-known things helpful.  Others can ignore the analogy – please!


A little more about the much-derided notion of "one true ontology".  For accurate communication **it doesn't matter** whether an ontology expresses everything that could in some Platonic universe be expressed; it only has to express what the **users** want to express.  So, no matter how "basic" one believes the elements of a foundation ontology to be, there may yet be even more basic concepts that could be but aren't represented in that ontology.  As long as, at any given point, those using the foundation ontology find it adequate to express the intended meanings of their ontology elements, it will suffice to support accurate communication among those ontologies (or database systems) mapped to the FO.  So what happens if another user comes along and concludes that there are some basic concepts needed that are not in the existing FO?  They can be added, so that at all time points the FO has what its users need.  Obviously, a need for maintenance is included in this notion of an FO.


The experience of CYC was that, after some initial period of development, the need to add new concepts to the **BaseKB** (common to all theories) so as to properly represent new domains dwindled to about zero.  We don't know, for the case of a common foundation ontology used by a large number of independent groups, how long it would take to stabilize so that for some period – say a year – no new elements need to be added to represent new domains.  But even if the Foundation Ontology never completely stabilizes, at any given point in time it will represent the best (IMHO by far the best) means to enable accurate and broad general semantic interoperability.


My point has been that the suitability of a common FO to represent a very large number of domains is a very important question – IMHO currently more important than any other issue in knowledge representation.  And it is an issue susceptible to experimental investigation and verification.  But it will take a large community to properly test it (over 100 users), and the funding for such an effort is not trivial – probably at least $30 million for a three-year test.  This is not outside the range of efforts in AI (CALO over 200 million, LarKC about 150 million).  So it should be considered seriously.  But it can't be done by any one person, or even by one large group like CYC.


Anyone interested in setting up a consortium to make such a proposal?





Patrick Cassidy



cell: 908-565-4053



From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of sean barker
Sent: Tuesday, January 26, 2010 3:12 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Cc: "Patrick Cassidy [pat@xxxxxxxxx]"@mccarthy.cim3.com
Subject: [ontolog-forum] Fw: Context in a sentence





    I suspect that the claim "we need a common foundational ontology" is exactly equivalent to David's quotation "(1) the entire meaning of a message is self-contained in said message", since if we have a common foundational ontology we should be able to make statements in the ontology that are true irrespective of context.


I would interpret C.S. Peirce's definition as saying that communication happens when an agent sends symbol A and it invokes a knowledge-based procedure leading to symbol B in a second agent, and both A and B refer to the same (concept) C.


Caveat - I do not claim that this is Peirce's interpretation, or even that he would agree with it, but it's my B to his A.


The point is that context (whatever that is) defines the inference task in which A is used to invoke B. Even on the Semantic Web, the context that it *is* the Semantic Web defines particular processing protocols, which invoke a system that understands OWL or RDF rather than one that only understands HTML or even EDIFACT.
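The A/B/C reading above can be sketched as a toy program.  This is purely illustrative – the agent names, symbols, and concepts are all invented, and real agents would use knowledge-based inference rather than dictionary lookup – but it makes the success condition concrete: communication succeeds only when both agents resolve the transmitted symbol to the same concept C.

```python
# Toy sketch of the A -> B -> C communication model described above.
# All names are invented for illustration; this is not Peirce's own model.

# Each agent has its own mapping from symbols to internal concepts.
sender_lexicon = {"chien": "DOG"}      # sender's symbol A -> concept C
receiver_lexicon = {"chien": "DOG"}    # receiver's symbol B -> concept C

def communicate(symbol):
    """Communication succeeds only if both agents resolve the
    transmitted symbol to the same concept."""
    concept_sent = sender_lexicon.get(symbol)
    concept_received = receiver_lexicon.get(symbol)
    return concept_sent is not None and concept_sent == concept_received

print(communicate("chien"))  # True: shared background knowledge

# Different background knowledge: the same symbol now invokes a
# different concept in the receiver, so communication fails silently.
receiver_lexicon["chien"] = "SHOE"
print(communicate("chien"))  # False
```

The second case is the dangerous one: nothing in the message itself signals the failure, which is the sense in which the meaning is not self-contained in the message.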


However, more broadly, I would reject the idea that there is only one way to talk about the world. In this context, I would say there are in fact two distinct types of ontology: those that talk about the world, and those that model the world, and that these two views of ontology are incompatible. (A foundation ontology is a model of the world.) Perhaps, following Protégé, we could distinguish them by having as TOP "word" and "thing".


This is not to say that I think common ontologies are a bad idea - they are essential for engineered applications, or rather, for applications engineered to match a particular human or business context. However, they are not a panacea, simply because different contexts will be understood through different ontologies.


One might propose that, because we are all the same type of creature (human), we must therefore all use the same mechanisms for thought, and that this must lead to the same foundational concepts. This would imply, firstly, that the variation among humans is too small to allow for different mechanisms of thought, and secondly, that the mechanisms of thought are entirely conditioned by our genetic inheritance and are not affected by environment. Both questions should be scientifically verifiable, and indeed may already have been settled; however, this is not my area of expertise, although I would strongly suspect both hypotheses to be false.


So, no context free language, no common foundational ontologies.


Sean Barker



From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Patrick Cassidy
Sent: 26 January 2010 05:52
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Context in a sentence



>> I want something--MT?  Ontology support?--that can read Fortran, Jovial, COBOL, Java, PHP, Ruby, C, etc. (oops... that's a computer language) documents & make (more) sense out of said documents.  These are textual artifacts (therefore "documents"?) which may or may not be written by humans; they're decidedly NOT edited for readability, and they are really not intended for human consumption.


I believe that current ontology technology, or extensions of it (to include procedural attachments), has the technical capability to do such things.  But non-trivial applications will be quite labor-intensive to implement.


As I see it, ontology technology is still in its infancy – or perhaps still embryonic.  I have had great difficulty finding any publicly inspectable (open-source) applications that go much beyond an advanced version of database information retrieval – adding in a little logical inference, but not using that inference to do anything conspicuously more impressive than RDBs themselves.  CYC suggests it has built applications that do that, but we do not have them available for public testing – and much of CYC is still proprietary, a big turn-off for those who need a language that can be used freely.


John Sowa has told us that he uses a combination of techniques to solve knotty problems efficiently.  I believe that is what will be very effective in general, but for that to work outside the confines of a single group – i.e., to enable multiple separately developed agents to cooperate in solving a problem – they will also need a common language to accurately communicate information.


The problem, as I perceive it, is that although up to now there has been great progress in understanding the science (mathematical properties) of inference – for which we can be grateful to the mathematicians and logicians – understanding inference only provides a **grammar** and a minimal basic **semantics** for a language that computers can understand.  What we have very little agreement on is the **vocabulary**, without which there is no useful language.  For computers to properly interpret each other's data, it is necessary to have a common vocabulary – or vocabularies that can be **accurately** translated.  Such a translation mechanism would be possible if a common foundation ontology were adopted, which would have representations of all the fundamental concepts necessary to logically describe the domain concepts of the ontologies in programs that need to communicate data.

It is a measure of the pre-scientific nature of the field that there is actually even disagreement about the need for a common foundation ontology.  To me it is blindingly obvious – one cannot communicate without a common language (including vocabulary); there are no exceptions.  But most efforts at interoperability among separately developed ontologies currently focus on developing mappings in some automated manner – which any inspection immediately reveals cannot be done with enough accuracy to allow machines to make mission-critical decisions based on such inaccurate mappings.  Accurate mappings are possible via a common foundation ontology.  But for reasons that I believe are not based on relevant technical considerations, there is little enthusiasm for developing such an ontology at present.  Past efforts have failed because they depended on voluntary commitment of a great deal of time from participants in order to find common ground among a large enough user community.
What will work is for a large development community to be **paid** to build and test a common foundation ontology and demonstrate its capability for broad general semantic interoperability.  I am certain that such an ontology will eventually be developed, because the need for it and the benefits of it are so compelling.  The only question for me is how much time and money will be wasted before such a widely used foundation ontology is developed and tested in multiple applications – and who will pay for it.
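The pivot-through-the-FO idea described above can be sketched in a few lines.  All vocabulary and FO-element names here are invented for illustration, and a real FO mapping would consist of logical axioms rather than flat dictionaries; the sketch only shows the mechanism: two independently developed vocabularies translate accurately into each other exactly when their terms are mapped to the same foundation-ontology element.

```python
# Illustrative sketch of accurate translation via a common foundation
# ontology (FO).  Term and FO-element names are hypothetical.

# Two separately developed vocabularies, each mapped to shared FO elements.
hospital_to_fo = {"Doctor": "fo:Physician", "Ward": "fo:HospitalUnit"}
insurer_to_fo = {"Provider": "fo:Physician", "Unit": "fo:HospitalUnit"}

# Invert one mapping so we can go from an FO element back to the
# insurer's local term.
fo_to_insurer = {fo: term for term, fo in insurer_to_fo.items()}

def translate(hospital_term):
    """Translate a hospital term into the insurer's vocabulary by
    pivoting through the FO element both terms are mapped to."""
    fo_element = hospital_to_fo.get(hospital_term)
    return fo_to_insurer.get(fo_element)

print(translate("Doctor"))  # Provider: both map to fo:Physician
print(translate("Nurse"))   # None: unmapped terms cannot be translated
```

The `None` case is the point of the maintenance argument: when a community needs a concept the FO lacks, the element is added to the FO and mapped, rather than guessed at by an automated matcher.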


So, I believe that current ontology technology provides the basis to tackle the problems you cite, but I don’t know of any off-the-shelf programs that can do that now.  Perhaps someone has developed one?





Patrick Cassidy



cell: 908-565-4053


Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
