Pat H., Mala, Chris, and Pat C., (01)
JFS>> Please don't confuse me with Pat C. I do not believe that any
>> of the current schemes is going to achieve any kind of consensus. (02)
PH> OK, sorry if I misunderstood. You seemed to be agreeing with him. (03)
My claim is the following: (04)
1. We should use the words of a natural language as a way of indexing
various ontological theories -- more or less what people do with
WordNet. But I would drop the WordNet upper levels (which very
few people actually use) and treat it (along with other additions)
as a valuable lexical resource. (05)
2. The detailed reasoning would be done with the axioms of various
specialized theories. For example, a theory of skiing might
treat a glacier as a continuant, but a theory of geology would
treat it as an occurrent. (06)
In my note to Pat C., I said that if all he was claiming is a very
sparsely axiomatized hierarchy, that would be close to my position.
However, he has now added a further comment, which I'll discuss
below. (07)
I agree with Lenat that for any serious reasoning, the middle and
lower levels are much more important than the upper levels. In fact,
I would recommend that any widely used axioms in the upper level be
moved into a library of general purpose modules that could be used
in a mix-and-match way for the various specialized ontologies. (08)
MM> However, I often find that the relationships do not fall into
> the five neat categories you have enumerated. They are showing
> more fuzzy relationships - such as similar contexts that have
> been used for defining two classes (by way of similar restrictions,
> etc.) or prototypical patterns of defining certain classes of
> axioms. The axioms are related, but in looser or more subtle senses. (09)
I agree that those relations don't cover all the interesting relations
among theories. It's possible to use those relations in defining a
lattice of theories, but a lattice has multiple branches which may
have similarities that are not captured by those operators. (010)
For example, we could have a general theory of cooking, which might
have some very general axioms about preparing food. But then it
could have many specialized branches for different cuisines, such
as French, Italian, Chinese, Indian, Mexican, etc. None of those
branches would be generalizations or specializations of one another,
but they might have common structures that are not shown in the
lattice. (011)
Sometimes those common structures are obscured by the choice of
names for the types and relations. For example, there might be
specialized ways of frying food in different cuisines. That
would lead to two possibilities: (012)
1. If the ontology had some basic primitives for defining each
of the variations, then the similarities would appear in a
least common generalization of two or more cuisines. (013)
2. But if the different branches used different names for the
types and relations, those commonalities would be hidden. (014)
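The two possibilities above can be sketched in a few lines of Python. This is a toy model with hypothetical names, not VivoMind's actual algorithms: a theory is represented as a set of axiom signatures, and the "least common generalization" of two branches is taken here as simply the axioms they share.

```python
# Illustrative sketch (all names hypothetical): a theory is modeled as a
# set of axiom signatures, and the least common generalization of two
# specialized branches is taken here as the axioms they share.

def least_common_generalization(theory_a, theory_b):
    """Axioms common to both specialized theories."""
    return theory_a & theory_b

# Case 1: both cuisines are built from the same basic primitives, so
# the shared frying pattern surfaces in the generalization.
french  = {("fry", "food", "fat"), ("simmer", "food", "liquid")}
chinese = {("fry", "food", "fat"), ("steam", "food", "vapor")}
assert least_common_generalization(french, chinese) == {("fry", "food", "fat")}

# Case 2: the same operation under branch-specific names ("sauter" vs.
# "bao") -- the commonality is hidden from a purely symbolic comparison.
french_renamed  = {("sauter", "food", "fat")}
chinese_renamed = {("bao", "food", "fat")}
assert least_common_generalization(french_renamed, chinese_renamed) == set()
```

Case 2 is exactly where analogy-finding methods are needed: the shared structure is there, but no name-based comparison can see it.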
MM> Is there a way to fit these types of observations as formal
> relationships into your lattice theory? I wonder if you have had
> such experiences with your graph matching algorithms. (015)
I believe that a lot more can be done with the analogy-finding
methods, and that is a topic that we (at VivoMind) are eager to
pursue. However, it takes time and effort, and we currently have
just a small group and not enough resources to do the necessary
work. (016)
For an example of how the analogy methods can be used to align
ontologies and find hidden patterns between different branches,
see slides 9 to 16 of the following talk: (017)
http://www.jfsowa.com/talks/semtech3.pdf
The Goal of Language Understanding (018)
JFS>>> My ideal for an upper-level ontology would be the barest minimum
>>> number of axioms -- and the ideal number is 0.
>>
PH>> ... I guess I don't understand this. All an ontology is, is
>> axioms. What does it mean to have zero axioms?
>
CM> I took him to mean no *proper* axioms, i.e., the ideal upper-level
> ontology is predicate logic (or whatever your base logic is). I'm
> not really sure what it means to say that. (019)
I should have been clearer about that. Earlier I had mentioned that
WordNet and related terminologies have very few axioms beyond just
the statements of is-a and part-of. Each is-a link or part-of link
would count as one axiom. I should have said that I don't believe
that the upper levels should have any additional axioms beyond those
very minimal statements. (020)
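To make "one link, one axiom" concrete, here is a sketch in first-order logic (the type names Heart, Organ, and Body are illustrative, not drawn from WordNet): each is-a link contributes a single conditional, and each part-of link a single existential conditional.

```latex
% One is-a link = one axiom:
\forall x\,\bigl(\mathrm{Heart}(x) \rightarrow \mathrm{Organ}(x)\bigr)

% One part-of link = one axiom:
\forall x\,\bigl(\mathrm{Heart}(x) \rightarrow
    \exists y\,(\mathrm{Body}(y) \wedge \mathrm{partOf}(x,y))\bigr)
```

Under this counting, a resource the size of WordNet already contains hundreds of thousands of axioms, but each of them is of this trivially simple form.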
All the more detailed axioms should be put in optional modules that
are used for more detailed specialties. For example, the axioms
about the heart for a heart specialist, a family doctor, a nurse,
or a patient would get progressively less detailed, more simplified,
and less rigorous. (021)
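The mix-and-match idea can be sketched as follows. The module names and axiom strings are hypothetical, purely for illustration; the point is that each specialty assembles its own theory from optional modules, while the upper level contributes nothing beyond the minimal links.

```python
# Hypothetical axiom modules at increasing levels of detail.
MODULES = {
    "heart_basic":    {"heart is-a organ", "heart part-of body"},
    "heart_clinical": {"heart pumps blood", "heart has four chambers"},
    "heart_surgical": {"coronary arteries supply the myocardium"},
}

# Each profile mixes and matches only the modules it needs.
PROFILES = {
    "patient":    ["heart_basic"],
    "nurse":      ["heart_basic", "heart_clinical"],
    "specialist": ["heart_basic", "heart_clinical", "heart_surgical"],
}

def theory_for(profile):
    """Assemble a specialty's theory as the union of its modules."""
    axioms = set()
    for name in PROFILES[profile]:
        axioms |= MODULES[name]
    return axioms

# The patient's theory is a strict subset of the specialist's.
assert theory_for("patient") < theory_for("specialist")
```

Nothing forces every user to load every module, which is the practical advantage over a monolithic, heavily axiomatized upper level.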
PC> I think we do need a common basis of at least a few thousand
> concepts. I think we differ only on the number of concepts
> about which we can find a large body of agreement (which will
> not be universal, as PatH correctly asserts). (022)
OK. That clarifies the issue. The Longman dictionary had a
basic defining vocabulary of 2,000 words, and I think you based
your estimates on that book. But I think that number is misleading
for several reasons: (023)
1. Longman's has done their best to reduce their defining vocabulary
in order to make the dictionary more accessible to readers who are
just learning English. That is indeed a useful goal. (024)
2. But they don't attempt to itemize all the knowledge that a child
would learn about the world in just trying to move around and
interact with things. Most of that knowledge is never verbalized,
but it is a prerequisite for all higher-level knowledge. (025)
3. Most dictionaries try to state definitions in the Aristotelian
style of genus + differentiae. They say "An X is a Y such that..."
That implies that Y is the genus or supertype or hypernym and the
three dots are the differentiae that distinguish X from other
things of type Y. (026)
4. However, most axioms in mathematics do not use that style.
Peano's axioms for integers, for example, don't say "An integer
is a Y such that..." They just say "There is an integer 0,
there is a function S, and any N satisfying the following
axioms is also an integer." (027)
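For reference, here is one standard first-order rendering of that style (the predicate Int, function S, and schema variable P are the conventional symbols, not from the original message; strictly, Peano's axioms characterize the natural numbers). There is no genus + differentiae anywhere; the type is characterized entirely by a constant, a function, and the constraints on them.

```latex
\begin{align*}
& \mathrm{Int}(0) \\
& \forall n\,\bigl(\mathrm{Int}(n) \rightarrow \mathrm{Int}(S(n))\bigr) \\
& \forall n\,\bigl(S(n) \neq 0\bigr) \\
& \forall m\,\forall n\,\bigl(S(m) = S(n) \rightarrow m = n\bigr) \\
& \bigl(P(0) \wedge \forall n\,(P(n) \rightarrow P(S(n)))\bigr)
    \rightarrow \forall n\,\bigl(\mathrm{Int}(n) \rightarrow P(n)\bigr)
\end{align*}
```

The last line is an axiom schema (one instance for each property P), which is why the definition is open-ended rather than a closed "An X is a Y such that...".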
Because of these points, there is no hard and fast minimum number
of primitives. In fact, it would probably be possible to distinguish
thousands of different English words with just a few abstract
primitives like 0 and S, by using them to build relational structures
that serve as abstract definitions for all the words of English. (028)
In effect, you would only need perhaps 10 primitives and one axiom
for every word (which might be arbitrarily long). But those
definitions would probably be very hard to read. (029)
PC> My point is that it is an important enough issue to warrant
> the effort required to discover that number -- a project adequately
> funded to support at least 50 people half time for a couple of years. (030)
But I don't believe that there is any number to be "discovered".
If somebody set out to do the project with 10 primitives, they
could probably succeed. But then somebody else could add one more
axiom and reduce the number of primitives to 9. And somebody else
would add 8 more axioms and reduce the number of primitives to 1. (031)
If you completed such a project, it would be pointless, since it
wouldn't prove anything. (032)
John (033)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (034)