Pat and Sean,
PC> PatH thinks that general accurate semantic interoperability
> is a "fantasy" and not worth attempting. I could not find
> any technical arguments for this position.
If you want technical arguments, consider the following:
1. Philosophy. Leibniz observed that everything in the universe
affects everything else. He said that only an infinite being
such as God could take all possible details into account.
Since then, Kant, Wittgenstein, and many others have extended
and refined those arguments with abundant evidence.
2. Science and engineering. Every branch of science searches
for the most general fundamental principles. But those general
principles are so difficult to apply to specific cases that
engineers must always make approximations, and the approximations
made for one application are inconsistent with those made for
others. Even physics, the most precise of all the "hard"
sciences, is a hodge-podge of inconsistent approximations,
one for each specialized subfield.
3. Computation. All the practical experience from 60+ years of
computer applications provides abundant evidence that computer
systems can interoperate very well on narrow applications, but
not across broad areas, except when the axioms are underspecified.
Example: names, dates, and points in time exchanged without any
detailed axioms about what those data items refer to.
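The date example can be made concrete. Here is a minimal sketch
(the date string and the two field orders are illustrative
assumptions, not taken from any particular system):

```python
from datetime import datetime

# Hypothetical scenario: two systems exchange the date string
# "03/04/2025" with no shared axiom about which field is the month.
raw = "03/04/2025"

# System A assumes US order (month/day/year).
us = datetime.strptime(raw, "%m/%d/%Y")
# System B assumes European order (day/month/year).
eu = datetime.strptime(raw, "%d/%m/%Y")

print(us.date())  # 2025-03-04
print(eu.date())  # 2025-04-03

# Both parses succeed, so the exchange "works" at the data level,
# yet the two systems now disagree by a month.
assert us != eu
```

The point is that the syntax interoperates perfectly; only the
underspecified semantics lets the two systems silently diverge.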
For more examples, see my paper, "The challenge of knowledge soup":
http://www.jfsowa.com/pubs/challenge.pdf
If you think that you can solve these problems, go ahead and try.
But you're making claims for which all the evidence is negative.
The word 'fantasy' is very appropriate.
PC> PatH asserts that the meanings of elements in an ontology change
> whenever any new axiom is added. I don't dispute the fact that new
> inferences become available, but do not believe that this mathematical
> notion of meaning is what is relevant to the practical task of building
> applications using ontologies.
This point is not a property of mathematics, but of *every* method
of reasoning and definition. It applies equally well to the
definitions in your beloved Longman's dictionary. Merely adding
an axiom specializes a term; it doesn't make radical changes.
The more serious changes are caused by the issues discussed above.
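The model-theoretic sense in which an added axiom "specializes"
a term can be shown in a few lines. A sketch, with invented
propositions (b = "is a bird", f = "can fly", p = "is a penguin"):

```python
from itertools import product

symbols = ("b", "f", "p")

def models(axioms):
    """Enumerate the truth assignments that satisfy every axiom."""
    return [dict(zip(symbols, vals))
            for vals in product([False, True], repeat=len(symbols))
            if all(ax(dict(zip(symbols, vals))) for ax in axioms)]

# Theory T1: penguins are birds (p implies b).
t1 = [lambda m: (not m["p"]) or m["b"]]
# Theory T2 adds one axiom: penguins cannot fly (p implies not f).
t2 = t1 + [lambda m: (not m["p"]) or (not m["f"])]

m1, m2 = models(t1), models(t2)
print(len(m1), len(m2))  # 6 5

# Adding the axiom only removes models: every model of T2 is
# still a model of T1. The terms are specialized, not changed
# radically.
assert all(m in m1 for m in m2)
```

New inferences become available (in T2 one can infer that a
flying thing is not a penguin), but nothing that was true of
the old models becomes false of the remaining ones.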
PC> I assert that an important goal that can be advanced by aiming to
> recognize primitives is the stability of the Foundation Ontology.
You can assert anything you want to. But it's "hope-based reasoning"
without a shred of evidence to support it.
Just look at Longman's dictionary. I have a copy on my shelf, and I
checked the list of defining words. Each of those defining words
has a long list of different senses, and each use of the word in
other definitions shifts and adapts those senses to the subject
matter. Such squishy "primitives" can be useful as rough guidelines,
but not for precise reasoning.
PH> [General accurate semantic interoperability] is not a viable goal
> to seek. It is a fantasy, a dream.
PC> Wow! PatH thinks that we will never be able to achieve a level of
> interoperability that will "let people rely on the inferences drawn
> by the computer"??!!
That is not what he said. Computers have interoperated successfully
for over half a century. But they only interoperate on specialized
applications. That is exactly the same way that people interoperate.
People have different specialties. You can't replace a chef with
a carpenter or a plumber with an electrician.
PC> I think that there is a community that wants the computers to be
> as reliable as people in making inferences from data acquired from
> remote systems...
Computer systems do that very well in the fields they are designed
for. You wouldn't hire a gardener to make an omelet or a chef to
build a house. Don't expect computers to surpass human flexibility
for a long, long time.
PC> I am sure I haven't seen any demonstration of (or any evidence
> for) the level of hopelessness PatH asserts...
Pat Hayes understands computer reasoning systems very, very well.
All the evidence supports his points, and there is not a single
shred of evidence to support your hope-based fantasies.
SB>> ... if you don't get the foundational ontology right first time,
>> it is useless, because subsequent changes will invalidate all that
>> has gone before.
PC> I do agree that it is advisable to make the FO as complete as
> possible at the earliest time, to avoid any changes and minimize
> the chance of reducing the accuracy of interoperability.
No large system is ever "right first time." Look at all the patches
and revisions of every major computer program. IBM used the term
'functionally stabilized' as a euphemism for systems that were
obsolete and no longer being maintained.
The claim that an FO must be perfect at release 1.0 is just
one more proof that it is a hopelessly unrealistic fantasy.
John
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/