And yet, John, I believe that I understood every word you said (955 of them
in the note, including the previous quotes) in exactly the sense you
intended. And some of it was not just the basic vocabulary. Is that hard
to believe? How could I have done it without a mental model very similar to yours?
Pat
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Tuesday, March 11, 2008 2:57 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] Ontology similarity and accurate
> communication
>
> Dear Matthew, Pat, Pat, and Sean,
>
> The following point raises an important question about degrees of
> closeness:
>
> PC> My point was that the language learning process is sufficiently
> > *similar* in different people learning the same native language,
> > that the process supports the ability of learners to develop a
> > common internal ontology (of unknown structure) that is very close.
>
> Children (and adults) learn a large vocabulary, but the words have
> a very flexible meaning, which results in an open-ended number of
> word senses, each of which has a different set of axioms.
>
> SB> The claim that we simply learn an ontology of fundamental concepts
> > is not one I would be comfortable with. We learn that the elements
> > of earth and water fall, while fire rises - concepts like weight,
> > and near-the-Earth's surface are secondary, and the latter quite
> > outside most people's experience.
>
> I agree. Civilization had developed to a very high degree before
> Aristotle introduced his ontology. That ontology and most of the
> others that anyone has proposed are *abstractions* from the way
> language is actually used. Those ontologies might be useful for
> many computer applications, but there is no evidence that anything
> like Cyc or other computer systems is going on inside the minds
> (or brains) of children and adults who use language.
>
> MW> I suggest that the 3D/4D interpretation of continuant and occurrent
> > mentioned below is a well-known example of contradiction between
> > upper ontologies.
>
> Yes. That illustrates the point that just agreeing on the choice of
> words in a natural language does not force agreement on any set of
> detailed axioms. As we can see from any large dictionary, there are
> many different word senses for each word, which require very different,
> and as Matthew notes, usually incompatible axioms.
>
> PC> My opinion that our mental models for the basic terms are over
> > 99.9% in agreement is based on personal observation of the high
> > accuracy of communication, when using the basic words.
>
> I have no idea where you got that percentage or what it means.
> Furthermore, communication in any NL is definitely *not* accurate
> without a great deal of explanation, clarification, and negotiation.
>
> Anyone who has ever been a teacher (at any level from Kindergarten
> to graduate school or lectures on specific topics) knows that very
> little of what the teacher says gets through to the students on the
> first try. Discussion, questions, repetition, exercises, exams,
> tutorials, and a wide variety of readings are essential. Even then,
> only a few students really master the material -- and the best ones
> are *not* the ones who memorize the teacher's words.
>
> PH> DOLCE and BFO both require the categories of continuant and
> > occurrent to be disjoint: nothing can possibly be both a continuant
> > and an occurrent in these ontologies. Other ontologies (I have one,
> > and I think the same is true of Cyc) allow both categories with
> > pretty much the same properties of the respective types, but allow
> > them to overlap.
>
> I agree. In my KR ontology, I made the point that the dividing line
> between the two is task dependent. If you're skiing on it, a glacier,
> for example, is an object (continuant). But a geologist would view
> it as a process (occurrent) that melts, flows, breaks apart, and
> acquires new layers at the top.
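
To make that concrete, here is a minimal sketch in plain Python. The class
names, the toy "strict" and "permissive" ontologies, and the glacier
assertions are my own illustrations, not anything taken from DOLCE, BFO,
or Cyc: one ontology declares Continuant and Occurrent disjoint, the other
allows overlap, so the very same classification of a glacier is a
contradiction for the first and unremarkable for the second.

    # Toy illustration only: two hand-made "ontologies" that differ on
    # whether Continuant and Occurrent may overlap.
    from dataclasses import dataclass, field

    @dataclass
    class ToyOntology:
        name: str
        disjoint_pairs: set = field(default_factory=set)   # pairs of categories declared disjoint
        assertions: dict = field(default_factory=dict)     # entity -> set of categories

        def assert_instance(self, entity, category):
            self.assertions.setdefault(entity, set()).add(category)

        def contradictions(self):
            """Entities asserted to fall under two categories declared disjoint."""
            bad = []
            for entity, cats in self.assertions.items():
                for a, b in self.disjoint_pairs:
                    if a in cats and b in cats:
                        bad.append((entity, a, b))
            return bad

    # A "strict" ontology: Continuant and Occurrent are disjoint.
    strict = ToyOntology("strict", disjoint_pairs={("Continuant", "Occurrent")})
    # A "permissive" ontology: same two categories, no disjointness axiom.
    permissive = ToyOntology("permissive")

    for onto in (strict, permissive):
        onto.assert_instance("glacier", "Continuant")   # the skier's view
        onto.assert_instance("glacier", "Occurrent")    # the geologist's view
        print(onto.name, onto.contradictions())
    # strict     [('glacier', 'Continuant', 'Occurrent')]
    # permissive []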
>
> PC> Clearly, if some entity is an instance of one "occurrent" but not
> > an instance of another "occurrent", the meanings differ -- they are
> > using the terms in different senses.
>
> That's fine. But similar issues arise with nearly every word in the
> language. The main thing that native speakers of the same language
> agree on is the basic vocabulary. The senses of those words change
> with every application. Just look at any dialog by Plato.
>
> The main point is that nearly every word (especially all the common
> ones) has an open-ended variety of senses -- which the linguist
> Alan Cruse aptly called 'microsenses'. Each of those microsenses
> can be axiomatized for a particular task, but each task requires
> a change of word senses that may involve a total restructuring
> of the axioms for all the task-related terms.
>
> Even in the same so-called field, such as medicine, basic words,
> such as heart, kidney, blood, skin, or bone, are used in very
> different ways with different axiomatizations by a patient,
> a nurse, a general practitioner, a specialist, a pharmacist,
> a microbiologist, etc. They may agree in a vague sense about
> that thumping thing in the chest, but their axioms are very
> different.
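
A rough sketch of the same point, with invented mini-"axiom" sets that are
not drawn from any real medical ontology: the roles share only a vague core
sense of "heart", and the task-specific remainders barely overlap.

    # Invented, simplified "axiom" sets for the word "heart", one per role.
    heart_for_patient = {
        "thing in my chest", "thumps", "hurts when it fails",
    }
    heart_for_cardiologist = {
        "thing in my chest", "thumps",
        "four chambers", "sinoatrial node paces it", "ejection fraction measurable",
    }
    heart_for_pharmacist = {
        "thing in my chest", "thumps",
        "target of beta-blockers", "affected by QT-prolonging drugs",
    }

    # The shared, vague core is what makes everyday communication work ...
    core = heart_for_patient & heart_for_cardiologist & heart_for_pharmacist
    print(core)  # {'thing in my chest', 'thumps'} (order may vary)

    # ... while the task-specific remainders are almost entirely different.
    print(heart_for_cardiologist - heart_for_patient)
    print(heart_for_pharmacist - heart_for_cardiologist)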
>
> Fundamental principles:
>
> 1. Communication between two agents (human, computer, or whatever)
> requires agreement at the *task* level or even the level of
> individual *messages* -- but *not* at any kind of global level.
>
> 2. Different agents talking about different tasks with other agents
> may use very different axiomatizations for the same terms.
>
> 3. No two agents *ever* require a global alignment of their
> ontologies in order to communicate effectively. The only
> agreement necessary (or even possible) is at the level of
> the task they are doing.
>
> 4. When multiple agents are cooperating on the same task,
> e.g., surgeons, nurses, anesthetists, patient, etc.,
> any given agent may use very different ontologies in
> communicating with each of the other agents.
>
> 5. Even for the same two agents, their choices of ontologies
> may differ widely when they are cooperating on different
> tasks.
>
> 6. When misunderstandings arise (as they inevitably do), the
> agents switch to a metalevel of questions, explanations,
> clarifications, and negotiations in order to align subsets
> of their ontologies for the specific task they are doing.
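
As a rough sketch of principles 1, 3, and 6 -- caricaturing each agent's
ontology as a mapping from terms to definitions, with every name and
definition below invented for illustration -- the agents compare only the
terms the current task needs, and a mismatch triggers a metalevel
clarification of just those terms rather than a global merge.

    # Caricature: an agent's ontology as a mapping from term -> definition.
    agent_a = {
        "glacier": "persistent body of ice (object)",
        "appointment": "scheduled meeting",
        "heart": "organ that pumps blood",
    }
    agent_b = {
        "glacier": "slow-moving flow of ice (process)",
        "appointment": "scheduled meeting",
        "blood": "fluid circulated by the heart",
    }

    def align_for_task(a, b, task_terms):
        """Compare only the terms the task needs; report which need negotiation."""
        agreed, to_negotiate = {}, []
        for term in task_terms:
            if term in a and a.get(term) == b.get(term):
                agreed[term] = a[term]
            else:
                to_negotiate.append(term)      # principle 6: drop to the metalevel
        return agreed, to_negotiate

    # A scheduling task never touches the glacier disagreement (principles 1 and 3).
    print(align_for_task(agent_a, agent_b, ["appointment"]))
    # ({'appointment': 'scheduled meeting'}, [])

    # A geology task forces clarification of exactly one term, not a global merge.
    print(align_for_task(agent_a, agent_b, ["glacier"]))
    # ({}, ['glacier'])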
>
> John
>
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx