At 5:48 PM -0400 3/9/08, Patrick Cassidy wrote:
John,

This thread started with Ed Barkmeyer's comment (3-5-08) that he
thought that at least 90% commonality of ontologies was required for
error-free communication. My comment was that I thought that over
99.9% was required, and that the mental ontologies of **basic
concepts** that people associate with their basic vocabulary of words
are that close. Pat Hayes dismisses that notion.

I see no reason to believe it, and a great deal of evidence which
suggests it is false; and I see no way that it could have evolved in
nature, as we don't have access to our own mental models with that
degree of fidelity.

One difficulty in this discussion is in trying to decide what
qualifies as an "ontology" that people use in their reasoning
processes. Obviously it must bear only a vague resemblance to the
formal ontologies we want to use for computer reasoning.

Well, let us push on this point a little more, as it seems important.
We are all, in this forum, primarily concerned with the formal,
engineered ontologies. Talk of mental ontologies or mental models is
therefore chiefly relevant when it relates to these formal
ontologies. If all we have is a 'vague resemblance', then it is not
at all clear how we can transfer insights from one to the other.

I also agree with your comments that spatial representation and
reasoning (direct or by analogy) form a large part of human
understanding - I yearn for a good public ontology with spatial
reasoning that is as effective as what is used in games and virtual
reality programs. But Pat Hayes doesn't even seem to think that the
reasoning the brain does when interpreting language can be properly
called "inferencing".

I presume that word was intended to connote that the brain used a
logical system of some kind, which I doubt.

In several emails I have been frustrated in failing to find any
common ground whatever with PatH on which we can start to discuss
the topic productively.

Which topic, exactly? If you mean your idea about a basic defining
vocabulary, the discussion amounts to my thinking it a very bad idea
to pursue. You can see that in the previous posts in this thread.

The reason I think this is an important issue is that, if we do share
a fairly close common *mental* ontology of basic concepts (up to a
few thousand, perhaps), then coming to agreement on a formalization
of that basic ontology could be a very useful step in finding a
method for accurate semantic interoperability.

Explain how in more detail, if you would. Isn't this just the
familiar notion of having a single 'standard' conceptual vocabulary,
and requiring everyone to map their ontology into it? That is, yet
another SUO?

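[To make the engineering trade-off behind this exchange concrete:
the usual argument for a single foundation ontology is that N systems
then need only N mappings into the common hub, rather than N(N-1)/2
pairwise mappings. Below is a minimal sketch of hub-style term
translation; the ontologies, terms, and foundation identifiers are
hypothetical, not anything from either Pat's actual proposal.

    # Hypothetical local ontologies, each mapping its own term for a
    # concept to a shared foundation-ontology identifier (the "hub").
    ONTOLOGIES = {
        "A": {"Person": "foundation:Human"},
        "B": {"Human": "foundation:Human"},
        "C": {"Individual": "foundation:Human"},
    }

    def translate(term, source, target):
        """Translate a term between ontologies by routing through the hub."""
        hub_id = ONTOLOGIES[source][term]
        for target_term, concept in ONTOLOGIES[target].items():
            if concept == hub_id:
                return target_term
        return None  # target ontology has no term for this concept

    n = len(ONTOLOGIES)
    print("mappings to maintain with a hub:", n)
    print("mappings needed pairwise:", n * (n - 1) // 2)
    print(translate("Person", "A", "C"))  # -> Individual

With 3 ontologies the two counts happen to coincide (3 vs. 3); with
100 it is 100 vs. 4950, which is the scaling argument usually made
for a common foundation.]
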
I call that possible foundation ontology the "Conceptual Defining
vocabulary", by analogy with the linguistic defining vocabulary used
in Longman's and some other dictionaries. ("Defining" is an analogy -
in the ontology, most types will only have necessary conditions for
membership specified.) But, as I mention repeatedly, this is a
hypothesis (based on what I perceive to be highly accurate
communication when people make an attempt to explain things, using
the basic terms

Do they use the Longman's basic terms, in fact? Is this a viable
observation? Another question: do children learn the Longman's basic
terms earlier than other words? Either of these would be powerful
support for your proposal.

) that requires experimental verification; and in view of the
benefits if it is true, the effort of attempting to verify this is
well justified. But as best I can tell from PatH's comments, he
thinks this is so obviously nonsensical that it is not worth any
serious discussion, let alone real investigation.

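[One way to begin answering PatH's first question empirically: scan a
machine-readable dictionary and measure what fraction of the word
tokens in its definitions fall within a candidate defining
vocabulary. A minimal sketch follows, assuming hypothetical input
files (one basic word per line, and tab-separated headword/definition
pairs); a real test would also need lemmatization to handle inflected
forms.

    import re

    # Hypothetical inputs: a candidate defining vocabulary and a
    # dictionary dump of headword<TAB>definition lines.
    with open("defining_vocabulary.txt") as f:
        basic = {line.strip().lower() for line in f if line.strip()}

    covered = total = 0
    with open("definitions.tsv") as f:
        for line in f:
            headword, definition = line.rstrip("\n").split("\t", 1)
            tokens = re.findall(r"[a-z]+", definition.lower())
            total += len(tokens)
            covered += sum(t in basic for t in tokens)

    print("definition tokens drawn from the basic vocabulary:",
          f"{covered / total:.1%}")
]
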
It's not nonsensical; but I believe it has been adequately trashed as
a psycholinguistic theory; that you have given no adequate arguments
for it; and that, considered purely as an engineering proposal, it
has nothing really new and is just YASUO (Yet Another SUO, pronounced
'yasoo').

We all know how difficult it is for ontologists who already have
their own preferred formalizations to come to agreement when that
would require a change in how they represent things.

You imply that this is merely a matter of habit, but it goes much
deeper than that. One finds people reacting immediately to some
proposal along the lines of: but that is obviously wrong, because...
Danny Bobrow once told me that he was quite certain that people
weren't temporal processes; that the idea was so wildly unintuitive
that it sounded like science fiction, was almost impossible to think
about, etc. These are not mere reluctances to change a
representation: they are deeply felt intuitions about basic
ontological preferences.

But the methodologies that have been used for such efforts do not, in
my opinion, actually test the "conceptual defining vocabulary"
hypothesis. I think a real test would be possible, provided that
those participating agreed to certain ground rules ahead of time.

That is interesting. What ground rules?

I do not know in any detail what kind of discussions were conducted
when developing ontologies at Cyc, though PatH's description of one
case makes me think that they were getting mired (at some points) in
what I would consider terminology issues. I have a better
understanding of what happened with the IEEE-SUO effort, and know
that (1) there were no ground rules agreed on by the SUO group; the
approach chosen by Teknowledge avoided any redundancy; (2) there was
a strong feeling that issues should be resolved by consensus rather
than voting - which I think will kill any such project

But why? Surely, under your hypothesis of a single human common
mental core ontology, consensus is what one would expect.

unless other conditions (not present in IEEE-SUO) were agreed to in
advance; (3) there was no funding for anyone outside of Teknowledge.
I am sure we can do much better than that - if there were adequate
funding.

The other issue - which you have previously raised, and which PatH
alludes to with comments about Cyc's ontology - is whether even
succeeding in developing such a common foundation ontology would
provide us with an artifact that would be useful. My strong feeling
that it would is based on the anticipated benefits of a greatly
increased ability to share results of reasoning using the same
ontology.

But we can share results of reasoning now. The SWeb provides a common
set of communication standards for doing just that.

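[By "SWeb" PatH means the Semantic Web stack: RDF, OWL, and related
W3C standards. A minimal sketch of what such standards-based sharing
looks like, publishing one derived fact with the rdflib library and a
hypothetical namespace:

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/onto#")  # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)
    # A conclusion from some reasoning step, published as an RDF triple
    # so any RDF-aware system can consume it, whatever its own ontology.
    g.add((EX.socrates, RDF.type, EX.Mortal))

    print(g.serialize(format="turtle"))

The point of contention in this thread is not this transport layer
but whether the consuming system interprets ex:Mortal the same way
the producer did.]
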
I think this would dramatically increase the rate at which reusable
reasoning modules could be developed and shared.

Can you explain why/how, in broad outline? It would be more useful
than our simply being told what you believe.

PatH

I believe that only a community much larger than the Cyc team will be
able to develop reasoning techniques to take effective advantage of
the knowledge in a large ontology, and that such a community could
function effectively and reuse results efficiently if they did share
the common foundation ontology. The point of investigating the
"Conceptual Defining vocabulary" hypothesis is that evidence for the
feasibility of such a tactic could encourage some agency to provide
adequate funding for the actual common foundation ontology
development project.

I also agree that analogical reasoning and case-based reasoning are
powerful tools, but that they would be even more powerful when linked
to a common foundation ontology. I believe that the development of
those analogical and case-based tools would also benefit from
increased sharing and reuse within the research community if the
knowledge resulting from such reasoning were also represented in the
common form enabled by a common foundation ontology.

PatC
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx

> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
> [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Saturday, March 08, 2008 7:12 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] Ontology similarity and accurate communication
>
> Pat and Pat,
>
> I've been traveling for the past week, and I haven't had the time
> to comment on (or even read) my email. But I finally have a bit
> of time to make some comments.
>
> PC> The core issue is whether, in the early years, people develop
> > a common ontology (obviously, not a formal one) and an associated
> > vocabulary that is so close among individuals that it allows highly
> > accurate communication *when restricted to the most basic concepts*.
>
> Much closer to the core is whether logic and ontology are fundamental
> to the way people think. My guess (and the guess of many psychologists
> and anthropologists) is that the prelinguistic years are spent in
> building sensory-motor mechanisms that are closer to the mechanisms
> of our fellow mammals -- especially the great apes -- than to anything
> we have been programming on a digital computer.
>
> I do believe that logic and ontology are important, but not for the
> most basic thinking processes of children (and adults). Following
> are the slides of three lectures I gave last week that develop some
> ideas related to that theme:
>
>    http://www.jfsowa.com/talks/semtech1.pdf
>    Semantic Technology
>
>    http://www.jfsowa.com/talks/semtech2.pdf
>    Logic, Ontology, and Analogy
>
>    http://www.jfsowa.com/talks/semtech3.pdf
>    The Goal of Language Understanding
>
> Following are some excerpts from those slides that address issues
> related to this thread.
>
> John
>
> ________________________________________________________________________
>
> Concluding slide from the lecture on Semantic Technology:
>
> Technologies to Consider
>
> Natural languages:
>  * The ultimate knowledge representation languages.
>  * Capable of representing anything in human experience.
>  * Highly flexible and adaptable to changing circumstances.
>  * But not easy to implement on digital computers.
>
> Formal logics and ontologies:
>  * Precise and implementable on digital computers.
>  * Can be translated to natural languages.
>  * But inflexible, brittle, and uncompromising.
>
> Statistical methods:
>  * Flexible, robust, and designed to handle uncertainty.
>  * But there is an open-ended variety of different methods.
>  * Not clear how to relate them to language, logic, and ontology.
>
> Research issue: Find suitable combinations of the above.
>
> ________________________________________________________________________
>
> Slide #5 from the lecture on Logic, Ontology, and Analogy:
>
> Notations for Representing Knowledge
>
> Writing a precise statement of knowledge in any language, even
> one's own native language, is not easy.
>
> As Whitehead said, "the problem is to discriminate precisely
> what we know vaguely."
>
> Informal notations are useful, but many steps are needed to
> convert them to a formal specification.
>
> Controlled English, as in CLCE, is easy for humans to read, but
> humans require considerable training before they can write it.
>
> Conceptual graphs are an intermediate notation that can be used
> in informal methods that can be made precise by systematic steps.
>
> ________________________________________________________________________
>
> Concluding slide from that lecture:
>
> Conclusions
>
> No evidence of formal logic as a prerequisite for learning,
> understanding, or speaking a natural language.
>
> Common logical operators -- and, or, not, if-then, some, every -- are
> present in every NL. But they are used in many different senses, which
> include classical first-order logic as an important special case.
>
> Reasoning by analogy is fundamental. Induction, deduction, and
> abduction are important, highly disciplined special cases.
>
> But analogy is a more general reasoning method, which can be used even
> with images, prior to any version of language.
>
> No evidence of a highly axiomatized ontology for any natural language.
>
> But many important commonalities result from common human nature,
> experience, and activities.
>
> Formal, logic-based systems with deeply axiomatized ontologies have
> been fragile and limited in their coverage of natural language texts.
>
> Analogy-based systems with loosely defined terminologies can be far
> more robust and efficient for many applications.
>
> ________________________________________________________________________
>
> Slide #7 from the lecture on The Goal of Language Understanding:
>
> Image-like Mental Models
>
> Modeling hypothesis by Kenneth Craik:
>
>    If the organism carries a small-scale model of external
>    reality and of its own possible actions within its head,
>    it is able to carry out various alternatives, conclude
>    which is the best of them, react to future situations
>    before they arise, utilize the knowledge of past events
>    in dealing with the present and the future, and in every
>    way react in a fuller, safer, and more competent manner
>    to the emergencies which face it.
>
> The amount of information represented in an image is much
> larger than any description in language or logic.
>
> And it is rarely expressed in words, even by adults.
>
> Mental models could be simulated as "virtual reality."
>
> ________________________________________________________________________
>
> Concluding slide from that lecture:
>
> Conclusions
>
> Deductive methods are good when there are widely applicable theories,
> as in physics, engineering, and established accounting procedures.
>
> When there are no reliable theories, analogical reasoning is necessary.
>
> Even when good theories are available, analogical reasoning can be a
> valuable supplement for handling exceptions.
>
> Analogical reasoning can also be used at the metalevel to find mappings
> between different theories and ontologies.
>
> But we are still very far from representing the level of language and
> learning of a three-year-old child.
>
> Much more work is needed, especially in representing and processing
> image-like mental models.
>
--
---------------------------------------------------------------------
IHMC                                     (850)434 8903 or (650)494 3973 home
40 South Alcaniz St.                     (850)202 4416 office
Pensacola, FL 32502                      (850)202 4440 fax
                                         (850)291 0667 cell
http://www.ihmc.us/users/phayes          phayesAT-SIGNihmc.us
http://www.flickr.com/pathayes/collections
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx