Dear Matthew, Pat, Pat, and Sean, (01)
The following point raises an important question about degrees of
closeness: (02)
PC> My point was that the language learning process is sufficiently
> *similar* in different people learning the same native language,
> that the process supports the ability of learners to develop a
> common internal ontology (of unknown structure) that is very close. (03)
Children (and adults) learn a large vocabulary, but the words have
a very flexible meaning, which results in an open-ended number of
word senses, each of which has a different set of axioms. (04)
SB> The claim that we simply learn an ontology of fundamental concepts
> is not one I would be comfortable with. We learn that the elements
> of earth and water fall, while fire rises - concepts like weight,
> and near-the-Earth's surface are secondary, and the latter quite
> outside most people's experience. (05)
I agree. Civilization had developed to a very high degree before
Aristotle introduced his ontology. That ontology and most of the
others that anyone has proposed are *abstractions* from the way
language is actually used. Those ontologies might be useful for
many computer applications, but there is no evidence that anything
like Cyc or any other computer system is going on inside the minds
(or brains) of children and adults who use language.    (06)
MW> I suggest that the 3D/4D interpretation of continuant and occurrent
> mentioned below is a well known example of contradiction between upper
> ontologies. (07)
Yes. That illustrates the point that just agreeing on the choice of
words in a natural language does not force agreement on any set of
detailed axioms. As we can see from any large dictionary, there are
many different word senses for each word, which require very different
and, as Matthew notes, usually incompatible axioms.    (08)
PC> My opinion that our mental models for the basic terms are over
> 99.9% in agreement is based on personal observation of the high
> accuracy of communication, when using the basic words. (09)
I have no idea where you got that percentage or what it means.
Furthermore, communication in any NL is definitely *not* accurate
without a great deal of explanation, clarification, and negotiation. (010)
Anyone who has ever been a teacher (at any level from Kindergarten
to graduate school or lectures on specific topics) knows that very
little of what the teacher says gets through to the students on the
first try. Discussion, questions, repetition, exercises, exams,
tutorials, and a wide variety of readings are essential. Even then,
only a few students really master the material -- and the best ones
are *not* the ones who memorize the teacher's words. (011)
PH> DOLCE and BFO both require the categories of continuant and
> occurrent to be disjoint: nothing can possibly be both a continuant
> and an occurrent in these ontologies. Other ontologies (I have one,
> and I think the same is true of Cyc) allow both categories with
> pretty much the same properties of the respective types, but allow
> them to overlap. (012)
I agree. In my KR ontology, I made the point that the dividing line
between the two is task dependent. If you're skiing on it, a glacier,
for example, is an object (continuant). But a geologist would view
it as a process (occurrent) that melts, flows, breaks apart, and
acquires new layers at the top. (013)
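The task-dependence of that dividing line can be illustrated with a toy
sketch (Python; the task names and category labels are illustrative
assumptions, not any real ontology library or API):

```python
# Toy sketch: the same entity falls under different top-level categories
# depending on the task-specific ontology an agent selects.
# All names here are illustrative assumptions, not a real API.

TASK_ONTOLOGIES = {
    "skiing":  {"glacier": "Continuant"},  # a stable object you stand on
    "geology": {"glacier": "Occurrent"},   # a process that melts and flows
}

def classify(entity, task):
    """Return the category the task's mini-ontology assigns to the entity."""
    return TASK_ONTOLOGIES[task][entity]

print(classify("glacier", "skiing"))   # Continuant
print(classify("glacier", "geology"))  # Occurrent
```

Neither classification is the "right" one; each is correct relative to
the task the agent is performing.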
PC> Clearly, if some entity is an instance of one "occurrent" but not
> an instance of another "occurrent", the meanings differ -- they are
> using the terms in different senses. (014)
That's fine. But similar issues arise with nearly every word in the
language. The main thing that native speakers of the same language
agree on is the basic vocabulary. The senses of those words change
with every application. Just look at any dialog by Plato. (015)
The main point is that nearly every word (especially all the common
ones) has an open-ended variety of senses -- which the linguist
Alan Cruse aptly called 'microsenses'. Each of those microsenses
can be axiomatized for a particular task, but each task requires
a change of word senses that may involve a total restructuring
of the axioms for all the task-related terms. (016)
Even in the same so-called field, such as medicine, basic words,
such as heart, kidney, blood, skin, or bone, are used in very
different ways with different axiomatizations by a patient,
a nurse, a general practitioner, a specialist, a pharmacist,
a microbiologist, etc. They may agree in a vague sense about
that thumping thing in the chest, but their axioms are very
different. (017)
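One way to picture such microsenses is as a word-by-role table, each cell
carrying its own axiom set (a Python sketch; the roles and the "axioms"
below are fabricated for illustration only):

```python
# Toy lexicon: one word, several task-relative 'microsenses', each with
# its own axiom set. The roles and axioms are fabricated illustrations.

MICROSENSES = {
    "heart": {
        "patient":      ["the thumping thing in my chest"],
        "cardiologist": ["four-chambered muscular pump",
                         "has sinoatrial node"],
        "pharmacist":   ["target organ of beta-blockers"],
    },
}

def axioms(word, role):
    """Look up the axioms a given role associates with a word."""
    return MICROSENSES[word][role]
```

The entries overlap enough for vague agreement, but axiomatizing any one
cell in detail would be useless, or even contradictory, for the others.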
Fundamental principles: (018)
1. Communication between two agents (human, computer, or whatever)
requires agreement at the *task* level or even the level of
individual *messages* -- but *not* at any kind of global level. (019)
2. Different agents talking about different tasks with other agents
may use very different axiomatizations for the same terms. (020)
3. No two agents *ever* require a global alignment of their
ontologies in order to communicate effectively. The only
agreement necessary (or even possible) is at the level of
the task they are doing. (021)
4. When multiple agents are cooperating on the same task,
e.g., surgeons, nurses, anesthetists, patient, etc.,
any given agent may use very different ontologies in
communicating with each of the other agents. (022)
5. Even for the same two agents, their choices of ontologies
may differ widely when they are cooperating on different
tasks. (023)
6. When misunderstandings arise (as they inevitably do), the
agents switch to a metalevel of questions, explanations,
clarifications, and negotiations in order to align subsets
of their ontologies for the specific task they are doing. (024)
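Principle 6 can be sketched as a minimal alignment step (Python; the data
structures and the idea of returning a "to negotiate" list are
assumptions for illustration, not a proposal for a real protocol):

```python
# Minimal sketch of principle 6: compare two agents' definitions of the
# task-relevant terms, keep the ones that already agree, and flag the
# rest for metalevel questions, clarifications, and negotiation.

def align_for_task(ontology_a, ontology_b, task_terms):
    """Return (agreed subset, terms needing metalevel negotiation)."""
    agreed, to_negotiate = {}, []
    for term in task_terms:
        a, b = ontology_a.get(term), ontology_b.get(term)
        if a is not None and a == b:
            agreed[term] = a
        else:
            to_negotiate.append(term)
    return agreed, to_negotiate
```

Only the agreed subset is used for the task at hand; nothing in the loop
requires, or even permits, a global alignment of the two ontologies.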
John (025)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (026)