Before proposing new ontologies and projects for developing them,
I suggest that we review past proposals and developments. People
often say that artificial intelligence is a new field. But I
remind them that the founding workshop for the field of AI was
held in 1956, nearly a decade before Alan Perlis established one of
the first computer science departments at Carnegie Tech (now CMU). (01)
The basic ideas of ontology are much older. Aristotle began the
systematic study of the categories of existence, their organization
in a hierarchy, and their definition and analysis in terms of formal
logic. I keep reminding advocates of OWL and other description
logics that the most widely used subset of OWL happens to be
identical with Aristotle's syllogisms. (02)
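To see why, consider the syllogism "Every human is an animal; every
animal is mortal; therefore, every human is mortal." In OWL terms,
that is just the transitivity of SubClassOf axioms. The following
sketch is my own illustration in Python (the class names are
hypothetical, and this is not OWL tooling), showing the pattern:

    # A minimal sketch: the Barbara syllogism as transitive closure
    # of SubClassOf axioms (hypothetical names, my own example).
    subclass_of = {
        "Human": "Animal",      # every human is an animal
        "Animal": "Mortal",     # every animal is mortal
    }

    def entails_subclass(sub, sup, axioms=subclass_of):
        """Follow SubClassOf links upward to test entailment."""
        while sub in axioms:
            sub = axioms[sub]
            if sub == sup:
                return True
        return False

    print(entails_subclass("Human", "Mortal"))   # True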
For a brief review of this history, I recommend the slides for
my talk, "The Challenge of Knowledge Soup": (03)
http://www.jfsowa.com/talks/challenge.pdf (04)
For an even shorter summary, see the slide at the end of this note.
Note the acronym SRKB (Shared Reusable Knowledge Bases) in the
projects of the 1990s. The goals of that project were similar
to those of the SUO email list and ontolog forum. Following are
the SRKB archives from 1991 to 1996: (05)
http://ksl.stanford.edu/email-archives/srkb.index.html (06)
The most discouraging observation is that most of the position papers
written in 1991 would be just as appropriate today. At the end of
this message is a copy of my position paper from 1991. (07)
Other position papers on that list are also relevant. In note #12,
Mark Fox made the following point: (08)
MF> What has been called "upper level" ontologies would of course be
> useful but the sharing of the lower level, more domain specific terms
> and instances is also necessary. The problem here is that different
> parts of the organization do not use the same terms even when
> referring to the same concept. Any attempt to standardize terminology
> fails. Secondly, enforcing the use of the same terminology can lead
> to inefficient problem solving for a particular function. Thus arises
> the distinction between the language of communication and the language
> used by a function to reason. (09)
From note #13 by Lewis Johnson, (010)
LJ> ... we must support multiple models of concepts. The form of
> represented knowledge depends upon the intended use of that knowledge,
> and Aries users inevitably use concepts for different purposes. For
> example, when monitoring the progress of a flight it is sufficient
> to model a flight plan as a sequence of flight segments, each along
> a straight line. When defining radar tracking systems, it is
> important to model aircraft maneuvers in detail, in order to predict
> the position of the aircraft at the next point in time; the overall
> flight plan of the aircraft is unimportant... Analysts will need to
> specialize general models to their particular concerns; they also
> will need to adapt reusable knowledge to remove assumptions that
> turn out to be unwarranted. (011)
From note #14 by Jintae Lee and Tom Malone, (012)
JL&TM> As applied to the problem of shared reusable KBs, or shared
> ontologies, the relevant questions are: Do we want a canonical set
> of primitives? When do we want to allow them to be customized?
> Is translation among the customized or specialized primitives
> feasible, desirable? What kinds of translation mechanism are possible
> and what are the dimensions along which tradeoffs occur?
>
> If we proceed to work on a single shared ontology, without considering
> these broader issues, then we might make the mistake of having a
> technology that solves no real problems. (013)
That cautionary remark at the end is critical. I believe that many
of the ontologies being proposed or built today do not even
recognize those issues. As a result, they are solutions in search
of nonexistent problems. (014)
From note #16 by Bill Mark, (015)
BM> My belief is that we *can* have lots of knowledge to share, but
> only if we start building it to be sharable. Knowledge can be worth
> sharing for a variety of reasons: it may be a repository of problem
> solving know-how; an integral record that supports reasoning about
> a set of decisions (e.g., that comprise some design); a medium of
> communication to be used by people cooperating to solve a problem;
> and so on. I think that we don't know much at all about how we will
> share knowledge, but I suspect that the different reasons for sharing
> knowledge will require different technologies in their support. (016)
From note #18 by Brian Williams and Danny Bobrow, (017)
BW&DB> It is easy for our knowledge sharing efforts to fall into a
> knowledge representation black hole. It's important to avoid the
> urge to represent knowledge in an absolute, use-independent manner.
> Many of us have learned this lesson painfully...
>
> The notions of domain dependent and domain independent are misguided.
> ... The concepts of domain dependence/independence are too black and
> white to be useful. Each theory has a range of applicability
> somewhere in the vast middle between case specific and all
> encompassing....
>
> Knowledge sharing should respect the different granularities of
> knowledge and reasoning about that knowledge. (018)
From note #19 by Mark Tuttle, (019)
MT> I believe that "systems which are used tend to get better".
> Therefore trying to create a path which would both allow early success
> and at the same time lay the groundwork for more ambitious success
> later should be a high priority. Thus "incrementalism" should be
> an important part of the vision. (020)
From note #20 by Pat Hayes, (021)
PH> ... to suggest that there should be a standard ontology for natural
> language and common sense is nothing short of irresponsible for anyone
> competent in AI. Cyc is the only attempt to even try this idea out
> thoroughly, and it is discovering all sorts of difficulties and
> complexities, and doesn't even claim to have got it right or even to
> know for sure whether it can really be done: it's an experiment. And
> you want to set a Standard?? Fortunately, the idea is so ridiculous
> that it hasn't a hope in hell of getting anywhere. (022)
Pat made that point in 1991, and he was right. In response, Bob Neches
made the following point: (023)
BN> Well, my personal view is that shared ontologies are possible with
> today's knowledge representation technology -- but they will *evolve*
> rather than being legislated. That is, standard shared ontologies
> will emerge out of a marketplace -- people will use and refine ones
> that seem useful, and they will become more useful in the process.
> I think they'll start small and specific, rather than broad and
> general.
>
> Thus, the only successful "standard" ontologies will be *de facto
> standards*. I don't think it will happen by fiat. (024)
In note #62, Doug Lenat and R. V. Guha made the following comments
about the search for a set of "primitives": (025)
DL&RVG> The problems... are (a) there is no small set, and (b) it's
> almost impossible to nail down the meaning of most interesting terms,
> because of the inherent ambiguity in whatever set of terms are
> "primitive."
>
> So what did we do?
>
> (1) For one thing, we insist only on local coherence. I.e., groups
> share most of the meaning of most of the terms with other groups,
> but within a group (working on a particular micro-theory) they strive
> for complete sharing.
>
> (2) For another thing, both kinds of sharing are greatly facilitated
> by the existing KB content --- i.e., if the terms involved are already
> used in many existing axioms.
>
> While (2) can be achieved through massive straightforward effort,
> (1) is more subtle, and has required certain significant extensions
> to the representation framework. More specifically, we had to
> introduce the whole machinery of contexts/micro-theories into Cyc
> (which is why "divergence" has been much less of a problem since
> 1990). (026)
All these points are important. But the most discouraging fact is
that they were stated in 1991, that they are just as relevant today,
and that people are still proposing projects that ignore these
principles. (027)
I believe that anybody with a new proposal should review those
notes from 1991 (and other related writings) and demonstrate how
their proposed system will address those issues. (028)
John Sowa
___________________________________________________________________ (029)
Proposals for a Universal Ontology (030)
* 4th century BC: Aristotle's categories and syllogisms. (031)
* 3rd century AD: The Tree of Porphyry for organizing Aristotle's
categories in a familiar tree diagram. (032)
* 17th century: Universal language schemes by Descartes, Mersenne,
Pascal, Leibniz, Newton, Wilkins, and others. (033)
* 18th century: More schemes, the Grand Academy of Lagado, Kant's
categories. (034)
* 19th century: Roget's Thesaurus, Oxford English Dictionary. (035)
* Early 20th century: Many terminologies in many different fields. (036)
* 1960s: Computerized versions of the terminologies. (037)
* 1970s: ANSI/SPARC Conceptual Schema. (038)
* 1980s: Cyc, WordNet, Japanese Electronic Dictionary project. (039)
* 1990s: SRKB, ISO Conceptual Schema, Semantic Web, many workshops. (040)
* 2000s: Many proposals, no consensus. (041)
Informal terminologies and dictionaries have been extremely successful. (042)
Formal systems are still research projects. (043)
Source: Slide 4 of http://www.jfsowa.com/talks/semtech2.pdf (044)
____________________________________________________________________ (045)
Source: http://ksl.stanford.edu/email-archives/srkb.messages/17.html (046)
Date: Wed, 20 Mar 91
From: sowa@xxxxxxxxxxxxxx
To: SRKB@xxxxxxx
Subject: Position Paper (047)
AI has been most successful on small domains: the microworlds of
early AI demos; the highly specialized expert systems for commercial
applications; and the machine translation systems like METEO, which
require no human editing, but are restricted to the very narrow topic
of weather reports. Such knowledge bases can be shared and reused, but
only for other projects that are similarly restricted. The position
taken in this paper is that such compartmentalization is inevitable:
all deep knowledge is domain dependent. Only superficial, syntactic
knowledge carries over from one domain to another. A serious question
to consider is whether such superficial knowledge can provide a
framework in which the deeper domain-dependent knowledge can be shared.
The answer given in this paper is maybe: some things can be shared,
but the research needed to support a significant amount of sharing of
knowledge representations across multiple domains is still in a
primitive stage of development. (048)
Examples of Domain-Independent Knowledge (049)
Many different projects have surface similarities that seem to suggest
that shared knowledge representations are possible. Expert systems
designed to assist automobile drivers, airplane pilots, ship captains,
and locomotive engineers, for example, would seem to have a lot in
common. All of them must deal with time, speed, and distance as well
as fuel consumption, equipment condition, and passenger safety.
Programming languages also have a great deal in common, as the
following assignment statements seem to indicate: (050)
APL: X <- A + B
FORTRAN: X = A + B
PL/I: X = A + B;
Pascal: X := A + B; (051)
Yet these surface commonalities mask serious differences in detail. A
deeper analysis indicates that the similarities are more syntactic than
semantic: the concepts required for each domain are so tightly bound
to that domain that they cannot be mapped from one to the other.
Generalizations that cover multiple domains have so little detail that
it is not clear whether they can contribute anything significant to the
development of a new knowledge base in any of the more detailed domains. (052)
First consider the possibility of common knowledge bases for
automobiles, airplanes, ships, and trains. A major difference between
these domains is the number of degrees of freedom in the motion. A
train's motion is purely one-dimensional because of the rigid tracks.
At a gross level, a car's motion is also one-dimensional, but at a
detailed level, the driver must maneuver in two dimensions to keep the
car in lane and avoid other cars and obstacles. A ship's motion is also
two-dimensional, but its greater inertia causes a change in course to
take minutes instead of the split-second changes that are possible with
a car. An airplane's motion is three-dimensional, and changes in
attitude introduce three more degrees of freedom. Besides differences
in motion, there are different kinds of signals to consider and
different ways of planning a course and following it. As a result, a
driver, a pilot, a captain, and an engineer have totally different ways
of thinking and reacting. A person who is both a driver and a pilot
would have two independent modes of thought with little or nothing in
common. Expert systems designed for each of these domains would also
have few common concepts and practically no common rules. (053)
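To make the point about degrees of freedom concrete, here is a sketch
of my own (the record types are hypothetical, not from the 1991
paper) of the state that a system for each vehicle would have to
track; the records share almost nothing:

    # Hypothetical state records for each kind of vehicle.
    # The record shapes differ because the degrees of freedom differ.
    from dataclasses import dataclass

    @dataclass
    class TrainState:      # one degree of freedom: position along track
        milepost: float

    @dataclass
    class CarState:        # two degrees of freedom for in-lane maneuvers
        x: float
        y: float

    @dataclass
    class AircraftState:   # three positional and three attitude degrees
        x: float
        y: float
        z: float
        roll: float
        pitch: float
        yaw: float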
For the programming languages, the similarities in syntax mask major
differences in semantics. If A, B, and X were all integers or all
floating-point numbers, the results would be the same for each of the
languages. But differences arise when the data types are different. (054)
FORTRAN and PL/I allow type conversions to or from integer and
floating-point, but Pascal does automatic conversion only from integer
to floating-point and would print an error message if A+B happened to
be floating-point and X were integer. APL also does automatic
conversions in evaluating A+B; but in doing the assignment, it could
change the type of X instead of converting the result of A+B to X's
previous type. PL/I does many other kinds of automatic conversions
and would even convert character strings to and from numbers. APL and
PL/I both allow A, B, and X to be arrays as well as simple scalars;
but PL/I places more restrictions on the dimensions of the arrays,
while APL has fewer restrictions and APL2 has even fewer. (055)
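Those behavioral differences can be simulated directly. The following
sketch is my own illustration in Python (the function names are
hypothetical); it contrasts a Pascal-style assignment, which rejects a
floating-point result for an integer variable, with an APL-style
assignment, which simply rebinds the variable:

    # Hypothetical simulation of two assignment policies for X = A + B.
    def assign_pascal(env, name, value):
        # Pascal converts integer to floating-point, not the reverse.
        if isinstance(env[name], int) and isinstance(value, float):
            raise TypeError("floating-point result assigned to integer "
                            + name)
        env[name] = value

    def assign_apl(env, name, value):
        env[name] = value      # X simply takes on the type of A + B

    env = {"X": 0, "A": 1, "B": 2.5}
    assign_apl(env, "X", env["A"] + env["B"])
    print(type(env["X"]))      # <class 'float'>: X changed type

    env = {"X": 0, "A": 1, "B": 2.5}
    try:
        assign_pascal(env, "X", env["A"] + env["B"])
    except TypeError as err:
        print(err)             # the Pascal-style policy rejects it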
Because of these differences, terms like 'assignment statement' can be
given a precise definition only for a single programming language. In
some cases, the language standards are so loose that the definition may
change with every compiler or even every modification of a compiler.
An ontology might include ADDITION as a concept type, but it would also
require subtypes APL-ADDITION, FORTRAN-ADDITION, and so on for every
programming language and dialect. (056)
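A sketch of such a fragment (my own illustration; the hyphenated type
names are hypothetical):

    # Hypothetical ontology fragment: one generic concept type, plus a
    # subtype for each language, and the list grows with every dialect.
    addition_subtypes = {
        "ADDITION": ["APL-ADDITION", "FORTRAN-ADDITION",
                     "PASCAL-ADDITION", "PL/I-ADDITION"],
        # Dialects force further splits, e.g. APL vs APL2 addition.
    }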
Even the same physical object may be represented in totally different
ways for different purposes. A highway, for example, is one-dimensional
on a map. For an automobile driver, it is two-dimensional. For the
workers building the roadbed, it is three-dimensional, but highly
regular. And for the surveyors who are planning a level road through
hilly terrain, it is three-dimensional with highly irregular amounts of
cut and fill. Any physical object or system can be represented at an
unlimited number of levels of detail. There is no stopping point that
is natural to the object itself; the stopping point depends entirely on
the purpose for which that object is being used. (057)
Is Natural Language Domain Independent? (058)
Natural languages can express knowledge about any topic in any domain.
But that does not make them domain independent. The syntax of language
and the constraints at the level of case frames are largely domain
independent, but the meaning of each word is highly dependent on the
domain. As an example, consider the following four sentences: (059)
Tom supported the tomato plant with a stick.
Tom supported his daughter with $8,000 per year.
Tom supported his father with a decisive argument.
Tom supported his partner with a bid of 3 spades. (060)
These sentences all use the verb 'support' in the same syntactic
pattern: (061)
A person supported NP1 with NP2. (062)
Yet each use of the verb can only be understood with respect to a
particular subject matter or domain of discourse: physical structures,
financial arrangements, intellectual debate, or the game of bridge. For
each of these domains, the concept type SUPPORT would require different
subtypes, such as PHYSICAL-SUPPORT, FINANCIAL-SUPPORT, or INTELLECTUAL-
SUPPORT. Each of those subtypes could be subdivided further: physical
support by being tied to a stick could be distinguished from support by
being propped up from below or being suspended from above; financial
support by an allowance could be distinguished from support by a trust
fund or support by payments at irregular intervals. (063)
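A lexicon along those lines might look like the following sketch (my
own illustration; the domain labels and the BIDDING-SUPPORT name are
hypothetical). The point is that the domain of discourse, not the
syntax, selects the concept type:

    # Hypothetical lexicon: one verb, one syntactic frame,
    # many concept types.
    lexicon = {
        ("support", "physical structures"): "PHYSICAL-SUPPORT",
        ("support", "finance"):             "FINANCIAL-SUPPORT",
        ("support", "debate"):              "INTELLECTUAL-SUPPORT",
        ("support", "bridge"):              "BIDDING-SUPPORT",
    }

    def concept_type(verb, domain):
        """Select a sense by domain of discourse, not by syntax."""
        return lexicon[(verb, domain)]

    print(concept_type("support", "finance"))    # FINANCIAL-SUPPORT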
Each difference in concept type makes a difference in reasoning and
behavior: a child with a regular allowance enjoys some measure of
stability, while a child who gets irregular payments must be on good
behavior, always hoping for another grant at any moment. (064)
The point of these examples is that vagueness and ambiguity do not
result from the nature of language. Instead, they result from the use
and reuse of the same words in many different domains and applications.
The same kinds of ambiguities that arise with a technical term like
'assignment statement' also arise with a common verb like 'support'. The
number of different concept types associated with a word is unlimited,
and the totality of meanings may be inconsistent. An interior
decorator, for example, may think of walls as parts of a room, while a
construction contractor may think of them as separators between rooms. (065)
Each view is correct for a certain purpose and point of view, but
the views are incompatible with one another. The word senses listed in
dictionaries represent the most common applications, and larger
dictionaries list more of them. But even the largest dictionaries fail
to distinguish such nuances as addition in APL vs. addition in FORTRAN
or support by an allowance vs. support by irregular payments. Although
the different meanings of addition, support, and wall are incompatible,
they still have something in common. It is easier for a person to learn
and use a single word for them than to learn different words that change
with every application. But that implies that the only thing that is
easily shared or reusable is the syntax, not the deeper semantics of the
knowledge base. (066)
Language Games (067)
The traditional AI approach to knowledge representation resembles the
early philosophy of Ludwig Wittgenstein, as presented in the _Tractatus
Logico-Philosophicus_. In his later philosophy, Wittgenstein presented
scathing criticisms of his earlier work -- all of which apply equally
well to the current attempts to build shared, reusable knowledge bases. (068)
Yet his later work is not totally negative; it contains the basis for a
solution. His theory of language games suggests that the way to build
large, flexible intelligent systems is to provide a framework that can
use and reuse the same syntactic tokens in different language games for
different domains. Some of the implications of these ideas for AI were
discussed in the last chapter of a book (Sowa 1984), two recent papers
(Sowa 1990, 1991), and a workshop on large knowledge bases (Silverman
and Murray 1991). (069)
References (070)
Silverman, Barry G., and Arthur J. Murray (1991) "Full-sized knowledge-
based systems research workshop," _AI Magazine_, vol. 11, no. 5,
January 1991, pp. 88-94. (071)
Sowa, J. F. (1984) _Conceptual Structures: Information Processing in
Mind and Machine_, Addison-Wesley, Reading, MA. (072)
Sowa, J. F. (1990) "Crystallizing theories out of knowledge soup," in
_Intelligent Systems: State of the Art and Future Directions_, edited
by Zbigniew W. Ras and Maria Zemankova, Ellis Horwood, New York,
pp. 456-487. (073)
Sowa, J. F. (1991) "Lexical structures and conceptual structures,"
in _Semantics in the Lexicon_, edited by James Pustejovsky, to be
published by Kluwer Academic Publishers. (074)
Wittgenstein, Ludwig (1921) _Tractatus Logico-Philosophicus_,
Routledge and Kegan Paul, London, 1961. (075)
Wittgenstein, Ludwig (1953) _Philosophical Investigations_, Basil
Blackwell, Oxford. (076)