John, Matthew,
Thanks for the thoughtful and constructive comments. I will add some
comments on a few points you made: (01)
[MW] > What (I think) Pat is proposing is to produce one ontology into which
others could be translated/mapped. Those other ontologies need not be changed
at all, so a 3D and a 4D ontology could each be mapped to the "universal"
ontology without having to give up their own commitments. (02)
[PC] Yes, well and succinctly put. In addition, however, any domain ontology
newly created using the FO primitives to specify its domain concepts would
not require any post-hoc mapping effort, and extensions of mapped ontologies
would also be born mapped. After ca. 15 years, that could be a majority of new
domain ontologies. (03)
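To make the hub-and-spoke idea concrete, here is a minimal Java sketch (all
names hypothetical, not drawn from any actual FO implementation): each
ontology registers a single mapping to and from the FO, and any pair then
interoperates by composition through the hub, so N ontologies need only N
mappings instead of N*(N-1) directed pairwise translators.

    import java.util.HashMap;
    import java.util.Map;

    // Stand-in for an FO-level FOL structure (hypothetical).
    record FoConcept(String canonicalForm) {}

    // One per local ontology: how its terms map to and from the FO.
    interface FoMapping {
        FoConcept toFo(String localTerm);
        String fromFo(FoConcept concept);
    }

    class FoHub {
        private final Map<String, FoMapping> mappings = new HashMap<>();

        void register(String ontologyId, FoMapping m) {
            mappings.put(ontologyId, m);
        }

        // Any-to-any translation is composition through the hub, so no
        // pairwise translators are ever written.
        String translate(String from, String to, String term) {
            FoConcept c = mappings.get(from).toFo(term);
            return mappings.get(to).fromFo(c);
        }
    }

In these terms, an ontology built directly from the FO primitives is one whose
two mapping functions are essentially the identity, which is the "born mapped"
case above.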
[MW] > So this leaves two problems for Pat:
>
> 1. Producing a "universal" ontology that is capable of expressing whatever
> any other ontology does or may express. (04)
OK. Creating an ontology that can serve for translating other ontologies is
indeed a substantial task, but I think a serviceable version suitable for
testing, together with some open-source utilities and example applications, can
be built by a consortium of about 100 participants over three years, at a cost
of ca. $30M. Maintenance support at a lower rate for several more years may
also be required to give it a fair test. (05)
[MW] > 2. Persuading everyone else to use this "universal" ontology as an
intermediary to map to all the others.
>
I don’t expect to persuade *everyone* to use it, and would be quite
flabbergasted if everyone just stopped investigating other approaches. It
just needs to gather a user community of sufficient size so that:
(1) those who *do* want to interoperate accurately will have at least one
widely used FO capable of supporting and translating their local knowledge
representations;
(2) the number of publicly available applications becomes sufficient to
encourage an increasing number of developers to use it;
(3) third-party vendors develop utilities that make it easier to use; (06)
and, if it does grow,
(4) its use is taught in IT departments, so that programmers don’t just give
you a blank stare when you suggest that using an ontology might improve their
programs or databases. (07)
I am not sure what the trajectory of adoption of ISO15926 has been, but I
would expect that an FO designed to be more inclusive of alternative
preferences in ontological representation could gather an even larger user
community, particularly if a natural-language interface were developed as part
of the project. I hope and expect that ISO15926 will prove mappable to any
such FO, so that anyone using it would have semantic interoperability with
other FO users. (08)
[MW] > Neither of which are exactly trivial.
Indeed not. That’s why I think substantial public funding is needed, and why
it hasn’t been done yet. (09)
[JS]
Even an expression like A+B creates problems because of all the
variations of data types in each of the languages. For a simple add of two
integers, problems arise because of different ways of handling overflow
exceptions in the two languages. An exact translation of A+B to another
language would have to supplement the code with a library of error handling
routines that would accommodate all the variations in exception handling that
are different in the two languages. (010)
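A minimal Java illustration of the point, using only the standard library:
even within a single language, the "same" A+B can be given two different
overflow semantics, and a faithful translator has to know which one the
source language assumed.

    public class OverflowDemo {
        public static void main(String[] args) {
            int a = Integer.MAX_VALUE, b = 1;

            // 1. Plain + wraps silently (two's complement):
            //    prints -2147483648.
            System.out.println(a + b);

            // 2. Math.addExact (Java 8+) raises an exception on overflow,
            //    as languages with checked arithmetic require.
            try {
                System.out.println(Math.addExact(a, b));
            } catch (ArithmeticException e) {
                System.out.println("overflow detected: " + e.getMessage());
            }
        }
    }

A translator moving this expression into a language with mandatory overflow
checking (Ada, for instance) must choose one of these behaviors, which is
exactly the library-of-error-handlers problem described above.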
[PC] Good example. As I mentioned, I think that for a computational ontology
the primitives would include those functions that depend on procedural code
for their execution (a.k.a. interpretation). To support accurate
interoperability, all of those procedure-dependent primitives would have to be
agreed on as part of the basic inventory from which more complex concepts can
be built as FOL structures. So for an addition function, the procedural code
adopted for all extensions of the FO would have to include the same provisions
for precision, overflow, and rounding (and perhaps other subtle issues). Yes,
there may be some existing ontologies with embedded primitives inconsistent
with those adopted for a common FO, so that accurate translation is not
possible, or is too computationally complex. But even FOL is far more
restricted in syntax and semantics than bit code, so I suspect that
translating ontologies would be a lot easier than translating computer
languages with arbitrary procedures. (011)
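A sketch of what agreeing on the procedural code for one primitive might look
like, assuming (hypothetically) that the FO defines addition over
arbitrary-precision integers so that overflow and rounding questions are
settled once for all extensions:

    import java.math.BigInteger;

    // Hypothetical FO addition primitive with a single agreed semantics:
    // exact, arbitrary-precision arithmetic, leaving no overflow or
    // rounding behavior for individual ontologies to decide.
    final class FoArithmetic {
        private FoArithmetic() {}

        static BigInteger add(BigInteger a, BigInteger b) {
            return a.add(b);  // always exact; overflow cannot occur
        }
    }

An ontology whose native addition wraps at 32 bits would then need an
explicit, declared conversion to this primitive; where no faithful conversion
exists, that is the kind of mismatch that can make accurate translation
impossible or too costly.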
[JS] > 2. Goddard, Wierzbicka, Ogden, LDOCE, and others *never* claim
their primitives are as precisely defined as a mathematical
theory or a programming language. In fact, their examples
show that their primitives are just as "squishy" -- i.e.,
just as vague and fuzzy as any words in any of the languages
they are trying to define. (012)
True. But Guo’s work did attempt to get at the issue of ambiguity by
determining how many senses were actually required to create the composite
concepts, and his conclusion was fewer than 3 senses per word on average
(3,800 senses for 1,400 words, i.e., about 2.7). There may well be some
residual “squishiness” even in the set of senses Guo thought necessary, so the
required number of ontology primitives may be (I am guessing) twice that high,
or even (as Lenat suspects) four times, which would put it near 15,000. I just
think that the issue of semantic interoperability (and machine intelligence
generally) has sufficient economic impact to justify more than a few efforts
in the $30M range to try to answer the question, even if the scientific
question involved is not as interesting to others as it is to me. (013)
[JS] If you demand absolute precision, you need a distinct primitive for each
microsense . . . (014)
[PC] I don’t think that conclusion follows. The ‘conceptual primitives’
hypothesis suggests that all of those microsenses will be constructible as
combinations of the primitives. That hypothesis has to be tested. (015)
[JS] . . . , and Lenat's estimate of 15,000 primitives is probably too small.
(His previous estimates about the number of concepts and axioms needed for Cyc
have always been too small.) (016)
Possibly. The story of AI for the past 50+ years has been one of frequent
upward revision of the assumed complexity as new approaches are tried. But
surely we want to keep trying until we succeed, at least as long as some
progress is being made and plausible new approaches are available. (017)
>> [PC] My suggestion was that, rather than guess, we actually conduct
>> a proper study to determine whether there is a finite inventory of
>> conceptual primitives and, if so, what the number is. (018)
[JS] > I have no objection to that as a long-term research project. It might
produce something useful. But I wouldn't expect it to solve the translation
problems for a long, long time. (019)
[PC] Yes, it is a research project, since there are unknowns and success is
not guaranteed. But as with other research projects, if it is well thought
out, the attempt will at least reduce the number of unknowns and may suggest
new approaches. An open public effort has a great advantage over a proprietary
project such as Cyc: the reasons for any success or failure will be visible
for inspection by all, allowing anyone to suggest revisions that can cure any
problems detected. (020)
Perhaps a widely used FO won’t solve the NL translation problem, but:
(1) it could solve the database interoperability problem;
(2) it could provide a tool to make NLU and other AI research more efficient
by providing a common de facto standard of meaning, for (a) transmission of
information among applications or modules of a multi-module aggregate, and (b)
comparison of results of alternative NLU approaches (a function that is now
inadequately served by WordNet); and
(3) I think it has at least as good a chance of producing *accurate*
translations as any other approach. I am assuming that there is practical
demand for translations better than one can get from Google’s current
automated translation utility. (021)
[JS] > In summary, any foundation for ontology should accommodate continuous
revision and update. That is why I have recommended a hierarchy of ontologies,
not a single, fixed standard. Let the users decide which, if any, are
appropriate for their problems. (022)
I agree, at least for the near future. If, after much testing, the set of
semantic primitives required for the FO to represent many fields does appear
to approach an asymptote, then the FO may become a candidate for a formal
standard. But to test the semantic-primitives hypothesis properly, past
experience indicates that we will need to create a user community of
substantial size, with public funding. Such a community seems to have little
chance of arising by spontaneous aggregation of interested parties; the
reasons why would be the subject of a different thread. (023)
Pat (024)
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx (025)