
Re: [ontolog-forum] Foundation ontology, CYC, and Mapping

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Len Yabloko" <lenya@xxxxxxxxxxxxx>
Date: Fri, 26 Feb 2010 06:10:30 +0000
Message-id: <W575622379079331267164630@webmail60>
John, Pat, and Sean,
>Pat and Sean,
>
>PC> PatH thinks that general accurate semantic interoperability
> > is a "fantasy" and not worth attempting.  I could not find
> > any technical arguments for this position.    (01)

Why even bother? This position is not new, and it has not stopped progress 
toward better interoperability. In its general form it has been known since 
1969 as the "frame problem": http://en.wikipedia.org/wiki/Frame_problem    (02)

>
>If you want technical arguments, consider the following:
>
>  1. Philosophy.  Leibniz observed that everything in the universe
>     affects everything else.  He said that only an infinite being
>     such as God could take all possible details into account.
>     Since then, Kant, Wittgenstein, and many others have extended
>     and refined those arguments with abundant evidence.
>    (03)

The only evidence needed is to observe people cooperating and to compare that 
with machines. Philosophical talk about gods or demons did not stop the 
development of thermodynamics, despite the same kind of limits on what is 
knowable.    (04)

>  2. Science and engineering.  Every branch of science searches
>     for the most general fundamental principles.  But the general
>     principles are so difficult to apply to specific examples that
>     engineers must always make approximations that are inconsistent
>     with those for other applications.  Even physics, the most
>     precise of all the "hard" sciences, is a hodge-podge of
>     inconsistent approximations for each specialized subfield.
>    (05)

If computer science achieves anything remotely close to what the physical 
sciences have achieved, we will be wired into a collective intelligence the 
same way we are wired into the electrical grid. At that point we will not need 
human-level AI, because it will already have been surpassed.    (06)

>  3. Computation.  All the practical experience from 60+ years of
>     computer applications provides abundant evidence that computer
>     systems can interoperate very well on narrow applications, but
>     not on broad areas, except when the axioms are underspecified.
>     Example:  names, dates, and points in time without any detailed
>     axioms about what those data items refer to.
>    (07)

Computer cooperation is not a general goal of technological progress. The goal 
is, and always has been, to develop better tools for people and ultimately to 
extend and improve our capabilities and our control of the environment.    (08)
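
That said, to take John's own example of dates literally, here is a small 
Python sketch (my own illustration, with made-up time-zone assumptions, not 
code from any real system) of why underspecified data items interoperate so 
easily: the token crosses the wire intact, while what it refers to is settled 
by axioms that neither side ever states.

# A small illustration (my own, with made-up time-zone assumptions) of why
# underspecified data interoperates so easily: the token is exchanged
# intact, but its referent depends on axioms nobody states.

from datetime import datetime, timezone, timedelta

wire_value = "2010-02-26"          # the string the two systems exchange

# System A silently reads it as midnight UTC ...
reading_a = datetime.fromisoformat(wire_value).replace(tzinfo=timezone.utc)

# ... while System B silently reads it as midnight local time (UTC-5 here).
reading_b = datetime.fromisoformat(wire_value).replace(
    tzinfo=timezone(timedelta(hours=-5)))

print(wire_value)                  # exchanged with no loss at all
print(reading_a == reading_b)      # False: same token, different referents

The tokens interoperate perfectly; whether the referents do is decided by the 
missing axioms, which is exactly where a shared foundation ontology would have 
to do its work.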


>For more examples, see my paper, "The challenge of knowledge soup":
>
>    http://www.jfsowa.com/pubs/challenge.pdf
>
>If you think that you can solve these problems, go ahead and try.
>But you're making claims for which all the evidence is negative.
>The word 'fantasy' is very appropriate.    (09)

What claims are you talking about? If you are referring to the practicality of 
globally shared semantic primitives, then I don't see any negative evidence at 
all. People are born with language primitives wired into their brains, which 
never break our minds and afford us an amazing ability to cooperate despite 
linguistic and cultural barriers. This evidence is enough for me to pursue 
similar capabilities in artificial systems. A world populated by computers 
cooperating in perfect unison is indeed a 'fantasy'.    (010)

>
>PC> PatH asserts that the meanings of elements in an ontology change
> > whenever any new axiom is added.  I don't dispute the fact that new
> > inferences become available, but do not believe that this mathematical
> > notion of meaning is what is relevant to the practical task of building
> > applications using ontologies.
>    (011)

I agree. It is no more relevant to practice than the "many-body problem" is to 
mechanical engineering.    (012)


>This point is not a property of mathematics, but of *every* method of
>reasoning and definition.  It applies equally well to the definitions
>in your beloved Longman's dictionary.  Just adding an axiom specializes
>a term.  It doesn't make radical changes.  The more serious changes
>are caused by the issues discussed above.
>    (013)

If people can use dictionaries very effectively, then computers can mimic them 
and eventually perform the same tasks, freeing our time for more creative 
activities. I see nothing impossible in that.    (014)
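
On John's point that just adding an axiom specializes a term, a toy 
forward-chaining sketch in Python (my own illustration, with made-up facts 
about "fido", not anyone's actual reasoner) makes the behavior concrete: in a 
monotonic setting a new axiom can only add conclusions, it never retracts the 
ones already drawn.

# A toy forward chainer (my own sketch). In a monotonic logic, adding an
# axiom can only enlarge the set of conclusions: nothing already inferred
# is withdrawn, the term just becomes more specific.

def closure(facts, rules):
    """Derive everything that follows from `facts` under Horn rules,
    each rule given as (set_of_premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

base_rules = [({"Dog(fido)"}, "Mammal(fido)"),
              ({"Mammal(fido)"}, "Animal(fido)")]
before = closure({"Dog(fido)"}, base_rules)

# Add one more axiom about dogs and recompute the closure.
after = closure({"Dog(fido)"}, base_rules + [({"Dog(fido)"}, "Pet(fido)")])

assert before <= after     # monotonicity: no earlier conclusion is lost
print(after - before)      # only the new conclusion appears

This is one more reason why I do not find the "meanings change with every new 
axiom" objection fatal in practice.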

>PC> I assert that an important goal that can be advanced by aiming to
> > recognize primitives is the stability of the Foundation Ontology.
>    (015)

I agree. All the above claims about the impossibility of such stability 
contradict the reality of every human culture, which has the same kind of 
stability at its core. Stability does not exclude change - it merely avoids 
chaos. There are many different ways to achieve stability, which is where this 
discussion should really go.    (016)


>You can assert anything you want to.  But it's "hope-based reasoning"
>without a shred of evidence to support it.
>    (017)

You are looking for evidence in the wrong places. Computer science cannot 
produce any such evidence - it is not an experimental discipline like physics. 
The only place where evidence of efficient reasoning can be found today is 
neuroscience.    (018)

>Just look at Longman's dictionary.  I have a copy on my shelf, and I
>checked the list of defining words.  Each definition of those words
>has a long list of different word senses, and each use of the word in
>other definitions shifts and adapts those senses to the subject matter.
>Such squishy "primitives" can be useful as rough guidelines, but not
>for precise reasoning.
>    (019)

Again, precise reasoning is not the goal of technical progress. It is only 
useful as a tool, and we know that it has limits. That does not make the goal 
of automated agent cooperation impossible - just more difficult.    (020)

>PH> [General accurate semantic interoperability] is not a viable goal
> > to seek. It is a fantasy, a dream.
>
>PC> Wow!  PatH thinks that we will never be able to achieve a level of
> > interoperability that will "let people rely on the inferences drawn
> > by the computer"??!!
>
>That is not what he said.  Computers have interoperated successfully
>for over half a century.  But they only interoperate on specialized
>applications.  That is exactly the same way that people interoperate.
>People have different specialties.  You can't replace a chef with
>a carpenter or a plumber with an electrician.
>    (021)

Actually, you can if you need to. That is the main goal of developing adaptive 
agents.    (022)

>PC> I think that there is a community that wants the computers to be
> > as reliable as people in making inferences from data acquired from
> > remote systems...
>
>Computer systems do that very well for fields they are designed for.
>You wouldn't hire a gardener to make an omelet or a chef to build
>a house.  Don't expect computers to surpass human flexibility for
>a long, long time.
>    (023)

Computers will probably never surpass humans in creativity, but that is not 
the point of technological progress. As tools in the service of people, 
computers still have a long way to go compared to, say, cars.    (024)

>PC> I am sure I haven't seen any demonstration of (or any evidence
> > for) the level of hopelessness PatH asserts...
>
>Pat Hayes understands computer reasoning systems very, very well.
>All the evidence supports his points, and there is not a single
>shred of evidence to support your hope-based fantasies.
>    (025)

I totally disagree. Hope is always better than hopelessness. And before you 
call something a fantasy, please offer something real in its place.    (026)

>SB>> ... if you don't get the foundational ontology right first time,
> >> it is useless, because subsequent changes will invalidate all that
> >> has gone before.
>    (027)

Subsequent changes will only invalidate it temporarily, until the new 
knowledge is reconciled with what was previously there. Some form of 
non-monotonic reasoning is required for that. This fact only points to the 
limitations of the monotonic reasoning adopted by the Semantic Web.    (028)
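
To show what I mean by reconciliation, here is a minimal default-reasoning 
sketch in Python (my own toy illustration with a made-up "tweety" example, not 
an actual Semantic Web reasoner): a conclusion drawn by default is withdrawn 
when new knowledge arrives, and the theory settles again once the exception is 
recorded - behavior that monotonic RDF/OWL entailment cannot offer.

# A minimal sketch of default (non-monotonic) reasoning - my own toy
# illustration, not an actual Semantic Web reasoner. A default conclusion
# holds only while nothing contradicts it, so new knowledge temporarily
# invalidates an earlier inference until the theory is reconciled.

def conclusions(facts):
    """Apply one default rule: birds fly unless known to be abnormal."""
    inferred = set(facts)
    if "Bird(tweety)" in inferred and "Abnormal(tweety)" not in inferred:
        inferred.add("Flies(tweety)")
    return inferred

kb = {"Bird(tweety)"}
print(conclusions(kb))       # includes 'Flies(tweety)' by default

kb |= {"Penguin(tweety)", "Abnormal(tweety)"}   # new knowledge arrives
print(conclusions(kb))       # 'Flies(tweety)' is no longer concluded

# Under purely monotonic entailment the first conclusion could never be
# withdrawn; the only options would be inconsistency or a weaker axiom
# stated up front.

Nothing is broken permanently; the earlier conclusion is simply revised, which 
is what I mean by stability without stasis.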

>PC>  I do agree that it is advisable to make the FO as complete as
> > possible at the earliest time, to avoid any changes and minimize
> > the chance of reducing the accuracy of interoperability.
>
>No large system is ever "right first time."  Look at all the patches
>and revisions of every major computer program.  IBM used the term
>'functionally stabilized' as a euphemism for systems that were
>obsolete and no longer being maintained.
>    (029)

... and the second time, and the third time. You are confusing "stable" with 
"static".    (030)

>The claim that an FO must be perfect at release 1.0 is just
>one more proof that it is a hopelessly unrealistic fantasy.
>
>John
>



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (032)
