
Re: [ontology-summit] Ontology driven Data Integration using owl:equivalentClass relations

To: "'Ontology Summit 2014 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Sun, 9 Feb 2014 11:13:08 -0500
Message-id: <252401cf25b1$d24833a0$76d89ae0$@micra.com>
John,
   Comments on a few points:    (01)

> > [PC]
 >> but the point is to use a 'foundation ontology' (I prefer that term)
 >> that has **all** of the fundamental ('primitive') concept
 >> representations that are required to logically specify the meanings of
 >> all of the domain concepts in the communicating domain ontologies.
 >
[JS]  >The word 'all' is appropriate *only* in a mathematical domain. Over two
 >centuries ago, Kant explicitly said that *no empirical concept* can ever be
 >completely defined -- because new observations and discoveries are always
 >possible.
 >    (02)

         I think you missed the important qualification - "all of the domain
concepts ***in the communicating ontologies***".
For any **given set** of ontologies there will indeed be some foundation
ontology that includes all of the primitive elements that can specify all of
the concepts in those ontologies.  Of course, there may be new primitive
elements that are required when new domains are added to those that desire
to communicate.  It is an open question how rapidly the inventory of
primitives will have to increase to accommodate new domains.  To maximize
the stability of the foundation ontology, I am attempting to identify all of
the primitive concepts needed to specify any concept I can understand, and to
add them to COSMO.  I have always warmly welcomed suggestions for
primitive concepts that appear necessary and are not already in COSMO.  
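
      To make that concrete, here is a rough OWL/Turtle sketch (with made-up
names and namespaces, not actual COSMO content) of how two domain ontologies
that spell out their classes in terms of the same foundation-ontology
primitives can be aligned by a reasoner:

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix fo:  <http://example.org/foundation#> .   # hypothetical foundation ontology
    @prefix hr:  <http://example.org/hr#> .           # hypothetical domain ontology A
    @prefix pay: <http://example.org/payroll#> .      # hypothetical domain ontology B

    # Each domain ontology defines its class from the same primitives:
    # a Person who is employedBy some Organization.
    hr:Employee owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( fo:Person
                             [ a owl:Restriction ;
                               owl:onProperty fo:employedBy ;
                               owl:someValuesFrom fo:Organization ] ) ] .

    pay:StaffMember owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( fo:Person
                             [ a owl:Restriction ;
                               owl:onProperty fo:employedBy ;
                               owl:someValuesFrom fo:Organization ] ) ] .

    # Because both classes are defined from the same primitives, a DL reasoner
    # infers hr:Employee owl:equivalentClass pay:StaffMember -- the mapping
    # that the data-integration step needs.
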
      And yes, most logical specifications in the COSMO use necessary but
not sufficient conditions, so there will always be some ambiguity in the
meanings; but there has always been even more ambiguity in human use of
language.   We can aim for intercomputer communication that is more accurate
than interhuman communication, but it will still not be perfect, just good
enough for our practical purposes.    (03)
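
      In OWL terms that distinction looks roughly like the sketch below (the
names are invented for illustration, not actual COSMO terms): rdfs:subClassOf
axioms state necessary conditions only, while owl:equivalentClass also states
sufficient ones, so a reasoner can classify instances under the defined class.

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/sketch#> .      # hypothetical names

    # Necessary conditions only: every ex:Invoice is a Document that states
    # some MonetaryAmount, but nothing says that everything matching this
    # description is an Invoice, so some ambiguity remains.
    ex:Invoice rdfs:subClassOf ex:Document ,
        [ a owl:Restriction ;
          owl:onProperty ex:statesAmount ;
          owl:someValuesFrom ex:MonetaryAmount ] .

    # Necessary and sufficient conditions: a reasoner will classify anything
    # matching this description under ex:RequestForPayment.
    ex:RequestForPayment owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( ex:Document
                             [ a owl:Restriction ;
                               owl:onProperty ex:statesAmount ;
                               owl:someValuesFrom ex:MonetaryAmount ] ) ] .
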

>KI
 >> Computers cannot be left alone to mission-critical decisions for humans.    (04)

Well, high-speed traders risk a lot of money on machines that make decisions
in milliseconds.   There are in fact many applications that depend on
computers to make decisions rapidly.  There are (always?) human overrides,
but these may come only after a failure of the computer has already done much
damage.  Most or all automated systems use well-tested programs with
well-understood algorithms that are (usually) very reliable - I don't know
of any that use ontologies and logical inference.   The process of getting
machines to mimic more and more of human intelligence has been slow, but
continues, and using ontologies for automated inference is part of that
effort.   I am of the persuasion that computers will **at some point** be as
capable as humans, but when that will occur depends on how much research
money is available and how well that money is allocated; it may also require
some further increase in computational speed or storage capacity.   I view my
task as trying to advance that capability.  Getting
**accurate** communication between computers is, IMHO, part of that process
and that is why I am interested in the foundation ontology.  Of course, many
useful applications can be developed now, and will continue to be developed,
that don't require such a capability.  But I believe that approaching human-level
intelligence does require accurate communication.   If that is not part of
one's current interests, one may ignore the foundation ontology issue.   The
discussion on my end has been about **accurate** intercomputer
communication, which is a subtopic of the ontology integration issue that
has been under discussion in this thread.    (05)

Pat    (06)

Patrick Cassidy
MICRA Inc.
cassidy@xxxxxxxxx
1-908-561-3416    (07)


 >-----Original Message-----
 >From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-
 >summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F Sowa
 >Sent: Sunday, February 09, 2014 10:34 AM
 >To: ontology-summit@xxxxxxxxxxxxxxxx
 >Subject: Re: [ontology-summit] Ontology driven Data Integration using
 >owl:equivalentClass relations
 >
 >Pat C, Kingsley, and Ron,
 >
 >We have been debating the word *all* in the following claim for years:
 >
 >PC
 >> but the point is to use a 'foundation ontology' (I prefer that term)
 >> that has **all** of the fundamental ('primitive') concept
 >> representations that are required to logically specify the meanings of
 >> all of the domain concepts in the communicating domain ontologies.
 >
 >The word 'all' is appropriate *only* in a mathematical domain. Over two
 >centuries ago, Kant explicitly said that *no empirical concept* can ever be
 >completely defined -- because new observations and discoveries are always
 >possible.
 >
 >Even Aristotle made similar comments about definitions of empirical
 >concepts.  He said that a definition of biological species by genus and
 >differentiae is only possible *after* a thorough examination and description
 >of specimens (i.e., prototypes).  He also admitted that definitions may need
 >to be revised when new observations are made.
 >
 >PC
 >> getting computers with increasing ability to perform without humans
 >> is, I believe one of the goals that motivates many workers with
 >> knowledge based systems (including myself), and in other fields as well.
 >
 >I partially agree, but with Kingsley's reservations:
 >
 >KI
 >> A computer can perform autonomously, with varying degrees of
 >> intelligence, while ultimately remaining a productivity tool for human
 >> beings. A computer cannot replace a human being in ...  the realm of
 >cognition.
 >
 >Yes.  On any car I buy, I insist on a manual override.  I like many of the
 >options on new cars.  But there are horror stories about people getting trapped in
 >cars whose doors are computer controlled.
 >
 >KI
 >> Computers cannot be left alone to mission-critical decisions for humans.
 >> What they can do is perform a lot of the grunt work that makes humans
 >> beings make better decisions, more productively.
 >
 >I very strongly agree.
 >
 >RW
 >> [That] does not take into account systems like Google, Watson or the
 >> BI capabilities available today.
 >
 >Watson beat the Jeopardy! champions in a high-pressure situation.
 >If Watson were given more time, its performance would not improve very
 >much.  But even an average Jeopardy! player with access to Wikipedia could
 >beat Watson if they both took the same time.
 >
 >RW
 >> It is believed that medical errors kill over 400,000 people a year
 >> in the US...   What will be the acceptable loss rates for computers
 >> making mission-critical decisions?  It appears that highly trained
 >> professionals have a very high rate of error.
 >
 >I agree with those observations, but they're consistent with Kingsley's
 >reservations.  If I have a medical emergency, I want all the warning
systems
 >operational.  But the physicians must have a manual override for unusual
 >situations.
 >
 >A flashing light or a siren can cause people to make even worse errors.
 >We need systems that generate *explanations* that can be spoken calmly in
 >the professional's native language.  But in emergencies, it may be necessary
 >to *shout* the explanations.
 >
 >RW
 >> the relationship between concepts can best be discerned by seeking
 >> patterns in large amounts of data (BIG data)
 >
 >I would qualify the word 'best' in the same way that I qualify the word 'all'
 >in Pat's claim.  There are *always* observations that have not been recorded in
 >even the largest corpora.  There is no Big Data about landing a plane in the
 >Hudson River.  I want somebody like Sullenberger to have a manual override.
 >
 >I admit that some horror stories are the result of a novice overriding the
 >autopilot.  If Sullenberger had a heart attack, we need systems that can
 >explain the options to the co-pilot -- *and* understand the responses by the
 >co-pilot.
 >
 >Summary:  Automated systems are essential for emergency responses in
 >milliseconds.  They can be valuable assistants when there is a huge amount
 >of data.  In any situation where an immediate response is not required, the
 >human should always have the option of making the final decision after
 >getting explanations from the system.
 >
 >John
 >


_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/   
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/  
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2014/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2014  
Community Portal: http://ontolog.cim3.net/wiki/     (09)