This thread started with Ed Barkmeyer's comment (3-5-08) that he thought
that at least 90% commonality of ontologies was required for error-free
communication. My comment was that I thought that over 99.9% was required,
and that the mental ontologies of **basic concepts** that people associate
with their basic vocabulary of words are that close. Pat Hayes dismisses
that notion. (01)
One difficulty in this discussion is in trying to decide what qualifies
as an "ontology" that people use in their reasoning processes. Obviously it
can bear only a vague resemblance to the formal ontologies we want to use
for computer reasoning. I also agree with your comments that spatial
representation and reasoning (direct or by analogy) form a large part of
human understanding - I yearn for a good public ontology with spatial
reasoning that is as effective as what is used in games and virtual reality
programs. But Pat Hayes doesn't even seem to think that the reasoning the
brain does when interpreting language can be properly called "inferencing".
In several emails I have been frustrated in failing to find any common
ground whatever with PatH on which we can start to discuss the topic
productively. You can see that in the previous posts in this thread. (02)
The reason I think this is an important issue is because, if we do share
a fairly close common *mental* ontology of basic concepts (up to a few
thousand perhaps), then coming to agreement on a formalization of that basic
ontology could be a very useful step in finding a method for accurate
semantic interoperability. I call that possible foundation ontology the
"Conceptual Defining vocabulary" by analogy with the linguistic defining
vocabulary used in Longman's and some other dictionaries. ("Defining" is an
analogy: in the ontology, most types will have only necessary conditions
for membership specified.) But, as I mention repeatedly, this is a
hypothesis (based on what I perceive to be highly accurate communication
when people make an attempt to explain things, using the basic terms) that
requires experimental verification; and in view of the benefits if it is
true, the effort of attempting to verify this is well justified. But as best
I can tell from PatH's comments, he thinks this is so obviously nonsensical
that it is not worth any serious discussion, let alone real investigation. (03)
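To make the "necessary conditions only" point concrete, here is a toy sketch (not any existing ontology tool; all the type names and conditions are invented for illustration) of what a Conceptual Defining Vocabulary entry might look like: failing a necessary condition rules membership out, while satisfying all of them leaves membership merely possible, since no sufficient conditions are given.

```python
# Toy sketch: each type carries only NECESSARY conditions for membership.
# Failing one rules an individual out; passing all of them does not
# prove membership (there are no sufficient conditions).

class ConceptType:
    def __init__(self, name, necessary_conditions):
        self.name = name
        self.necessary_conditions = necessary_conditions  # list of predicates

    def excludes(self, individual):
        """True if some necessary condition fails, so the individual
        certainly does NOT belong to this type."""
        return any(not cond(individual) for cond in self.necessary_conditions)

    def consistent_with(self, individual):
        """True if every necessary condition holds; membership is then
        possible, but not established."""
        return not self.excludes(individual)

# Hypothetical basic concept, characterized only by necessary conditions.
bird = ConceptType("Bird", [
    lambda x: x.get("animate", False),
    lambda x: x.get("has_feathers", False),
])

penguin = {"animate": True, "has_feathers": True, "flies": False}
rock = {"animate": False}

print(bird.consistent_with(penguin))  # True: could be a bird
print(bird.excludes(rock))            # True: certainly not a bird
```

The asymmetry is the point of the analogy with Longman's defining vocabulary: the basic terms constrain interpretation enough for accurate communication without pretending to give complete definitions.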
We all know how difficult it is for ontologists who already have their
own preferred formalizations to come to agreement when that would require a
change in how they represent things. But the methodologies that have been
used for such efforts do not, in my opinion, actually test the "conceptual
defining vocabulary" hypothesis. I think a real test would be possible,
provided that those participating agreed to certain ground rules ahead of
time. I do not know in any detail what kind of discussions were conducted
when developing ontologies at Cyc, though PatH's description of one case
makes me think that they were getting mired (at some points) in what I would
consider terminology issues. I have a better understanding of what happened
with the IEEE-SUO effort, and know that (1) there were no ground rules
agreed on by the SUO group, and the approach chosen by Teknowledge avoided any
redundancy; (2) there was a strong feeling that issues should be resolved by
consensus rather than voting, which I think will kill any such project
unless other conditions (not present in IEEE-SUO) are agreed to in advance;
(3) there was no funding for anyone outside of Teknowledge. I am sure we
can do much better than that - if there were adequate funding. (04)
The other issue, which you have previously raised and to which PatH
alludes with his comments about Cyc's ontology, is whether even succeeding in
developing such a common foundation ontology would provide us with an
artifact that would be useful. My strong feeling that it would is based on
the anticipated benefits of a greatly increased ability to share results of
reasoning using the same ontology. I think this would dramatically increase
the rate at which reusable reasoning modules could be developed and shared.
I believe that only a community much larger than the Cyc team will be able
to develop reasoning techniques that take effective advantage of the
knowledge in a large ontology, and that such a community could function
effectively and reuse results efficiently if its members shared the common
foundation ontology. The point of investigating the "Conceptual Defining
vocabulary" hypothesis is that evidence for the feasibility of such a tactic
could encourage some agency to provide adequate funding for the actual
common foundation ontology development project. (05)
I also agree that analogical reasoning and case-based reasoning are
powerful tools, but that they would be even more powerful when linked to a
common foundation ontology. I believe that the development of those
analogical and case-based tools would also benefit from increased sharing
and reuse within the research community if the knowledge resulting from such
reasoning were also represented in the common form enabled by a common
foundation ontology. (06)
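The sharing benefit for case-based tools can be sketched in a few lines. In this toy illustration (the ontology terms and cases are invented, and the similarity measure is a deliberately simple Jaccard overlap, not any particular published technique), cases built by different teams become directly comparable because their descriptions draw on one common vocabulary:

```python
# Toy illustration: case retrieval over descriptions that all use terms
# from one shared foundation ontology. Terms and cases are invented.

def similarity(case_a, case_b):
    """Jaccard overlap of the shared-vocabulary terms describing two cases."""
    a, b = set(case_a["terms"]), set(case_b["terms"])
    return len(a & b) / len(a | b)

# A case library, possibly built by different teams, in one vocabulary.
library = [
    {"name": "pump-failure",  "terms": {"Device", "FluidFlow", "Malfunction"}},
    {"name": "valve-leak",    "terms": {"Device", "FluidFlow", "Leak"}},
    {"name": "billing-error", "terms": {"Document", "Money", "Malfunction"}},
]

query = {"name": "new-problem", "terms": {"Device", "FluidFlow", "Blockage"}}

# Retrieve the most similar prior case for analogical reuse.
best = max(library, key=lambda case: similarity(query, case))
print(best["name"])
```

With independently invented vocabularies, the first step would instead be an error-prone mapping between term sets; a common foundation ontology removes that step, which is the reuse argument made above.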
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Saturday, March 08, 2008 7:12 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] Ontology similarity and accurate
> Pat and Pat,
> I've been traveling for the past week, and I haven't had the time
> to comment on (or even read) my email. But I finally have a bit
> of time to make some comments.
> PC> The core issue is whether, in the early years, people develop
> > a common ontology (obviously, not a formal one) and an associated
> > vocabulary that is so close among individuals that it allows highly
> > accurate communication *when restricted to the most basic concepts*.
> Much closer to the core is whether logic and ontology are fundamental
> to the way people think. My guess (and the guess of many psychologists
> and anthropologists) is that the prelinguistic years are spent in
> building sensory-motor mechanisms that are closer to the mechanisms
> of our fellow mammals -- especially the great apes -- than to anything
> we have been programming on a digital computer.
> I do believe that logic and ontology are important, but not for the
> most basic thinking processes of children (and adults). Following
> are the slides of three lectures I gave last week that develop some
> ideas related to that theme:
> Semantic Technology
> Logic, Ontology, and Analogy
> The Goal of Language Understanding
> Following are some excerpts from those slides that address issues
> related to this thread.
> Concluding slide from the lecture on Semantic Technology:
> Technologies to Consider
> Natural languages:
> * The ultimate knowledge representation languages.
> * Capable of representing anything in human experience.
> * Highly flexible and adaptable to changing circumstances.
> * But not easy to implement on digital computers.
> Formal logics and ontologies:
> * Precise and implementable on digital computers.
> * Can be translated to natural languages.
> * But inflexible, brittle, and uncompromising.
> Statistical methods:
> * Flexible, robust, and designed to handle uncertainty.
> * But there is an open-ended variety of different methods.
> * Not clear how to relate them to language, logic, and ontology.
> Research issue: Find suitable combinations of the above.
> Slide #5 from the lecture on Logic, Ontology, and Analogy:
> Notations for Representing Knowledge
> Writing a precise statement of knowledge in any language, even
> one's own native language, is not easy.
> As Whitehead said, "the problem is to discriminate precisely
> what we know vaguely."
> Informal notations are useful, but many steps are needed to
> convert them to a formal specification.
> Controlled English, as in CLCE, is easy for humans to read, but
> humans require considerable training before they can write it.
> Conceptual graphs are an intermediate notation that can be used
> in informal methods that can be made precise by systematic
> Concluding slide from that lecture:
> No evidence of formal logic as a prerequisite for learning,
> understanding, or speaking a natural language.
> Common logical operators -- and, or, not, if-then, some, every -- are
> present in every NL. But they are used in many different senses, which
> include classical first-order logic as an important special case.
> Reasoning by analogy is fundamental. Induction, deduction, and abduction
> are important, highly disciplined special cases.
> But analogy is a more general reasoning method, which can be used even
> with images, prior to any version of language.
> No evidence of a highly axiomatized ontology for any natural language.
> But many important commonalities result from common human nature,
> experience, and activities.
> Formal, logic-based systems with deeply axiomatized ontologies have been
> fragile and limited in their coverage of natural language texts.
> Analogy-based systems with loosely defined terminologies can be far
> more robust and efficient for many applications.
> Slide #7 from the lecture on The Goal of Language Understanding:
> Image-like Mental Models
> Modeling hypothesis by Kenneth Craik:
> If the organism carries a small-scale model of external
> reality and of its own possible actions within its head,
> it is able to carry out various alternatives, conclude
> which is the best of them, react to future situations
> before they arise, utilize the knowledge of past events
> in dealing with the present and the future, and in every
> way react in a fuller, safer, and more competent manner
> to the emergencies which face it.
> The amount of information represented in an image is much
> larger than any description in language or logic.
> And it is rarely expressed in words, even by adults.
> Mental models could be simulated as "virtual reality."
> Concluding slide from that lecture:
> Deductive methods are good when there are widely applicable theories,
> as in physics, engineering, and established accounting procedures.
> When there are no reliable theories, analogical reasoning is necessary.
> Even when good theories are available, analogical reasoning can be a
> valuable supplement for handling exceptions.
> Analogical reasoning can also be used at the metalevel to find mappings
> between different theories and ontologies.
> But we are still very far from representing the level of language and
> learning of a three-year-old child.
> Much more work is needed, especially in representing and processing
> image-like mental models.
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (010)