John,
None of the factors you mention is direct evidence against the utility of
the FO. At best they can be viewed as analogical evidence, in the same way
that the Longman defining vocabulary, the Chinese character system, or
AMESLAN can be viewed as analogical evidence *for* the semantic primitives.
And I think the linguistic analogies I use are a lot closer than the others
you cite. As for the senses of the Longman defining words, I have gone over
this several times: Guo's work shows that fewer than 2 senses per word are
used in the definitions (on average), and this is consistent with
recognizing "defining words so that anyone can understand the definitions"
as one of the "Language Games" that force the use of only the most common
and well-understood senses. But *direct* evidence can only be obtained by
carrying out the actual process of creating an FO and testing the result
directly for its ability to support general semantic interoperability.  (01)
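   To be concrete about the arithmetic behind that figure, here is a toy
sketch in Python. The data and procedure are my own illustration, with
made-up sense tags; they are not Guo's actual corpus or code:

from statistics import mean

# Hypothetical sense-tagged data: each defining word mapped to the set
# of distinct senses in which it appears across all definition texts.
senses_used = {
    "make":  {"make_1", "make_2"},
    "thing": {"thing_1"},
    "move":  {"move_1", "move_3"},
    "place": {"place_1"},
}

avg = mean(len(senses) for senses in senses_used.values())
print("average senses per defining word: %.2f" % avg)   # 1.50 here

On the real data the same average comes out below 2 senses per defining
word, which is the figure I cited above.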
   Funny that you should mention Leibniz: he was the one who proposed the
creation of a "characteristica universalis", a set of symbols in which
concepts would be represented so precisely that exact reasoning could be
done with them. I will quote from my handy "Leibniz Selections" by Wiener:
[Leibniz, from "Preface to the General Science" (1677)]:

"Whence it is manifest that if we could find characters or signs
appropriate for expressing all our thoughts as definitely and as exactly
as arithmetic expresses numbers or geometric analysis expresses lines, we
could in all subjects *insofar as they are amenable to reasoning*
accomplish what is done in arithmetic and geometry.

"For all inquiries which depend on reasoning would be performed by the
transposition of characters and by a kind of calculus, which would
immediately facilitate the discovery of beautiful results. For we should
not have to break our heads as much as is necessary today, and yet we
should be sure of accomplishing everything the given facts allow.

"Moreover, we should be able to convince the world of what we should have
found or concluded, since it would be easy to verify the calculation
either by doing it over or by trying tests similar to that of casting out
nines in arithmetic. And if someone should doubt my results, I should say
to him: 'Let us calculate, Sir,' and thus by taking to pen and ink, we
should soon settle the question.

"I still add: *in so far as the reasoning allows on the given facts*. For
although certain experiments are always necessary to serve as a basis for
reasoning, nevertheless, once these experiments are given we should derive
from them everything that anyone at all could possibly derive . . ."  (02)
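   The "casting out nines" test Leibniz mentions is worth spelling out,
since it is exactly the kind of mechanical verification he had in mind.
Here is a short Python illustration of mine:

# Casting out nines: a sum or product must satisfy the same identity
# modulo 9, because the digit sum of n is congruent to n mod 9.
# The test is necessary but not sufficient: it catches most slips,
# though an error that happens to preserve the residue gets through.

def mod9(n):
    """Residue of n mod 9; congruent to the repeated digit sum of n."""
    return n % 9

def product_checks(a, b, claimed):
    """Quick test that a * b == claimed could be correct."""
    return (mod9(a) * mod9(b)) % 9 == mod9(claimed)

print(product_checks(1234, 5678, 7006652))   # True:  1234 * 5678
print(product_checks(1234, 5678, 7006653))   # False: off-by-one caught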
   I don't doubt that Leibniz would also say that "only an infinite being
such as God could take all possible details into account," and I would say
exactly the same thing. But acknowledging that is quite different from
denying that one can reason accurately with the knowledge one does have,
and communicate it accurately to other reasoners. Both Leibniz and I
believe that accurate reasoning is possible for the broad range of topics
of interest to people, based on symbols whose meanings are precisely
specified, and we both know that no representation of the real world can
be fully complete. It just has to be complete enough for our practical
purposes. There is one difference: Leibniz seemed to think that people
could perform these logical calculations, but I don't. I think that only
computers will be able to handle the long chains of inference that are
sometimes required, and keep the meanings of all the symbols straight, in
a practical amount of time. We don't have access to the detailed neural
structures that people use, but we do have access to the detailed
structures that computers use. I am suggesting that we take advantage of
our knowledge of the details of *computer* reasoning to accomplish what
Leibniz could not with the tools available in his day.  (03)
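   To show what "taking to pen and ink" becomes on a machine, here is a
minimal forward-chaining sketch in Python. The vocabulary and the single
rule are toy assumptions of mine, not a fragment of any actual FO; the
point is that once the symbols are precisely specified, each step of the
chain is a mechanical calculation that a computer can repeat to any depth:

# Start with one asserted fact and a tiny subclass hierarchy.
facts = {("isa", "Fido", "Dog")}
subclass = {("Dog", "Mammal"), ("Mammal", "Animal")}

def forward_chain(facts, subclass):
    """Apply 'isa(x, C1) and subclass(C1, C2) => isa(x, C2)' to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel, x, c1) in list(derived):
            for (sub, c2) in subclass:
                if rel == "isa" and sub == c1 and ("isa", x, c2) not in derived:
                    derived.add(("isa", x, c2))
                    changed = True
    return derived

for fact in sorted(forward_chain(facts, subclass)):
    print(fact)   # Fido is a Dog, a Mammal, and an Animal

A real reasoner adds rule languages, indexing, and consistency checking,
but the principle is the same.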
   I think it is revealing that you quote Leibniz to suggest that he
believed something he manifestly did not believe. This indicates to me
that you are really, really stretching to find something, anything, that
you can cite as "evidence" against the FO principle. It only reinforces
my conviction that there is no meaningful evidence against the FO, just
some gut feelings on the part of some individuals.  (04)
As for: [JS]: > The claim that an FO must be perfect at release 1.0 is just
> one more proof that it is a hopelessly unrealistic fantasy.
>
That is not my claim, and I disagree that an FO must be perfect in order
to perform vastly better than the alternative methods of achieving
interoperability. You could make the same argument against mapping
ontologies.  (05)
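   One way to make "vastly better" concrete: with n systems to be
interconnected, pairwise mapping requires a translation for every ordered
pair, while mapping each system once to a common FO requires only n
mappings. The count below is my back-of-the-envelope illustration, not a
measurement:

def pairwise_mappings(n):
    return n * (n - 1)      # one directed translation per ordered pair

def hub_mappings(n):
    return n                # one mapping per system, each to the FO

for n in (5, 50, 500):
    print(n, pairwise_mappings(n), hub_mappings(n))
# 5: 20 vs 5;  50: 2450 vs 50;  500: 249500 vs 500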
You keep citing what has been done in the past. It has been done that
way precisely because there has not been any FO and there was therefore no
alternative. The potential for creating an FO has only recently (in the
past 15 years) become realistic, and in that time no effort of the kind I
have suggested has been undertaken. It's time to try the most direct route
to solving the general problem.  (06)
Pat (07)
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx (08)
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Thursday, February 25, 2010 10:03 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
>
> Pat and Sean,
>
> PC> PatH thinks that general accurate semantic interoperability
> > is a "fantasy"[[PC]] and not worth attempting. I could not find
> > any technical arguments for this position.
>
> If you want technical arguments, consider the following:
>
> 1. Philosophy. Leibniz observed that everything in the universe
> affects everything else. He said that only an infinite being
> such as God could take all possible details into account.
> Since then, Kant, Wittgenstein, and many others have extended
> and refined those arguments with abundant evidence.
>
> 2. Science and engineering. Every branch of science searches
> for the most general fundamental principles. But the general
> principles are so difficult to apply to specific examples that
> engineers must always make approximations that are inconsistent
> with those for other applications. Even physics, the most
> precise of all the "hard" sciences, is a hodge-podge of
> inconsistent approximations for each specialized subfield.
>
> 3. Computation. All the practical experience from 60+ years of
> computer applications provides abundant evidence that computer
> systems can interoperate very well on narrow applications, but
> not on broad areas, except when the axioms are underspecified.
> Example: names, dates, and points in time without any detailed
> axioms about what those data items refer to.
>
> For more examples, see my paper, "The challenge of knowledge soup":
>
> http://www.jfsowa.com/pubs/challenge.pdf
>
> If you think that you can solve these problems, go ahead and try.
> But you're making claims for which all the evidence is negative.
> The word 'fantasy' is very appropriate.
>
> PC> PatH asserts that the meanings of elements in an ontology change
> > whenever any new axiom is added. I don't dispute the fact that new
> > inferences become available, but do not believe that this mathematical
> > notion of meaning is what is relevant to the practical task of
> > building applications using ontologies.
>
> This point is not a property of mathematics, but of *every* method of
> reasoning and definition. It applies equally well to the definitions
> in your beloved Longman's dictionary. Just adding an axiom specializes
> a term. It doesn't make radical changes. The more serious changes
> are caused by the issues discussed above.
>
> PC> I assert that an important goal that can be advanced by aiming to
> > recognize primitives is the stability of the Foundation Ontology.
>
> You can assert anything you want to. But it's "hope-based reasoning"
> without a shred of evidence to support it.
>
> Just look at Longman's dictionary. I have a copy on my shelf, and I
> checked the list of defining words. Each of those words
> has a long list of different word senses, and each use of the word in
> other definitions shifts and adapts those senses to the subject matter.
> Such squishy "primitives" can be useful as rough guidelines, but not
> for precise reasoning.
>
> PH> [General accurate semantic interoperability] is not a viable goal
> > to seek. It is a fantasy, a dream.
>
> PC> Wow! PatH thinks that we will never be able to achieve a level of
> > interoperability that will "let people rely on the inferences drawn
> > by the computer"??!!
>
> That is not what he said. Computers have interoperated successfully
> for over half a century. But they only interoperate on specialized
> applications. That is exactly the same way that people interoperate.
> People have different specialties. You can't replace a chef with
> a carpenter or a plumber with an electrician.
>
> PC> I think that there is a community that wants the computers to be
> > as reliable as people in making inferences from data acquired from
> > remote systems...
>
> Computer systems do that very well for fields they are designed for.
> You wouldn't hire a gardener to make an omelet or a chef to build
> a house. Don't expect computers to surpass human flexibility for
> a long, long time.
>
> PC> I am sure I haven't seen any demonstration of (or any evidence
> > for) the level of hopelessness PatH asserts...
>
> Pat Hayes understands computer reasoning systems very, very well.
> All the evidence supports his points, and there is not a single
> shred of evidence to support your hope-based fantasies.
>
> SB>> ... if you don't get the foundational ontology right first time,
> >> it is useless, because subsequent changes will invalidate all that
> >> has gone before.
>
> PC> I do agree that it is advisable to make the FO as complete as
> > possible at the earliest time, to avoid any changes and minimize
> > the chance of reducing the accuracy of interoperability.
>
> No large system is ever "right first time." Look at all the patches
> and revisions of every major computer program. IBM used the term
> 'functionally stabilized' as a euphemism for systems that were
> obsolete and no longer being maintained.
>
> The claim that an FO must be perfect at release 1.0 is just
> one more proof that it is a hopelessly unrealistic fantasy.
>
> John
>
>
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (010)