
Re: [ontolog-forum] Ontologies as social mediators

To: "Burkett, William [USA]" <burkett_william@xxxxxxx>
Cc: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Edward Barkmeyer <edward.barkmeyer@xxxxxxxx>
Date: Thu, 03 Dec 2009 16:32:10 -0500
Message-id: <4B182E5A.5060400@xxxxxxxx>
Bill,

I think we agree, but we are talking about two different topics.

I wrote:
>> It means exactly what it says and only what it says.

Bill wrote:
> Does not make sense without considering (at least a limited) social setting.
> The ontology means absolutely nothing if there is no human (or his/her
> software surrogate) to interpret it.  "Meaning" is a human phenomenon - isn't
> it?

This is entirely correct.  But, as I said, the intended audience for an 
ontology -- a software artifact that represents knowledge in a formal 
logic language -- is not primarily human.  A human can read the ontology 
and attach all kinds of semantic load to the terms, but the reason a human 
reads the ontology is only to determine whether it is suitable for use 
by his software in performing his target application.  A reasoning 
engine, or another software tool, will work only with the formally 
specified elements, according to the formal meaning of the formal 
language.  For the software tooling, the "meaning" of the ontology is what 
inferences it can make, what derived forms it can produce, what 
decisions it will affect.  And those will not be based on what the human 
thinks the ontology terms "mean".
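The point can be illustrated with a minimal sketch in plain Python -- not any particular reasoner's implementation, and the class names below are made up for the example.  A forward-chainer computes the transitive closure of subClassOf axioms, and it derives exactly the same consequences whether the terms are English words or opaque identifiers; its "meaning" is just the set of derivable statements.

```python
def subclass_closure(axioms):
    """Forward-chain the transitivity rule over subClassOf axioms:
    if (A, B) and (B, C) are asserted or derived, derive (A, C)."""
    inferred = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(inferred):
            for (c, d) in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True
    return inferred

# The engine draws identical inferences from meaningful-looking terms...
axioms = {("Dachshund", "Dog"), ("Dog", "Mammal")}
print(subclass_closure(axioms) - axioms)   # {('Dachshund', 'Mammal')}

# ...and from opaque identifiers that carry no connotation for any human.
opaque = {("C1", "C2"), ("C2", "C3")}
print(subclass_closure(opaque) - opaque)   # {('C1', 'C3')}
```

Whatever the human reader takes "Dachshund" to connote, the tool's behavior is fixed entirely by the axioms.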

The human is judging whether the ontology is "practically consistent" 
with his intent.  And he may be wrong in his initial judgement.  He may 
get specific inferences he didn't expect.  But that is exactly the same 
learning process that two humans go through in establishing effective 
communication.  They think they have agreement on the concepts, terms 
and rules, until some outlier case demonstrates that they don't have 
quite the same concepts.

The idea here is that what the software will do with the ontology 
concepts should be predictable (within the limits of competencies and 
implementation errors), but what humans will do with any exchange is not 
predictable, because we cannot really know exactly what meaning they 
took from the exchange.  What makes the human interactions work is 
tolerance and fuzziness, but what makes the software work is the rigidity.

We do not yet know how to make software that reads natural language text 
and behaves as predictably and as usefully as software that interprets 
formal ontologies.

If your objective is to improve communication between humans, it is not 
clear that an ontology is better than a text corpus.  That said, the 
strictures of a formal language force a discipline on the author of the 
ontology that may improve the communication over what that person would 
have written in natural language.  And in fact, as some previous 
discussion on this exploder revealed, a lot of knowledge engineers have 
discovered that introducing formalism and graphical presentation of 
concepts often significantly improves the ability of a group of domain 
experts to achieve a reliable common terminology and understanding.  It 
gets them past the sloppy text that passes for communication of concepts 
in their communities and the related presumption that unspecified 
properties and constraints are part of the shared understanding.

You and I are probably in violent agreement.

My long-standing point is that knowledge engineering is an engineering 
discipline; it is not a branch of philosophy or linguistics or cognitive 
science, and it is not primarily about communications among humans.  It 
is about communicating between humans and software.  The amazing thing 
about AI software is that it doesn't have any idea what domain it 
operates on, or what any of its manipulations actually "mean", but it 
produces valuable results.  But then again, the same is true of Newton's 
calculus, and the IBM 704.

(I am reminded of a presentation I heard about 30 years ago from a 
person from Shell Oil.  He described an elaborate program for evaluating 
their software performance in various ways and making improvements in 
certain areas to reduce the total computational time.  At the end, he 
said:  "Taken together, this set of improvements (which must have cost 
several million dollars) has reduced our total computational load by 
12%."  And as the audience snickered, he followed that with: "Since the 
Shell research center has nine $5M computer systems, that saving is one 
entire computer system!"  The meaning of a result is definitely 
context-dependent!)

-Ed

-- 
Edward J. Barkmeyer                        Email: edbark@xxxxxxxx
National Institute of Standards & Technology
Manufacturing Systems Integration Division
100 Bureau Drive, Stop 8263                Tel: +1 301-975-3528
Gaithersburg, MD 20899-8263                FAX: +1 301-975-4694

"The opinions expressed above do not reflect consensus of NIST, 
 and have not been reviewed by any Government authority."


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
