
Re: [ontolog-forum] Topic maps and the "wheel" of "logical semantics": w

To: patrick@xxxxxxxxxxx, "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>, Pat Hayes <phayes@xxxxxxx>
Cc: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Kathryn Blackmond Laskey <klaskey@xxxxxxx>
Date: Wed, 02 May 2007 11:43:47 -0400
Message-id: <p06110431c25e5af510f7@[]>
Patrick -    (01)

>I attended a presentation on the use of ontologies at NASA and the
>speaker took great pains to point out that a single ontology (well,
>multiple ontologies but with mappings to a single master ontology) was a
>prerequisite to success.    (02)

A prerequisite to success of what?    (03)

The "ontology police" want to legislate that we build and then 
conform to OTEAO (the Ontology to End All Ontologies), and if it isn't 
in OTEAO, you are not allowed to say it.  Like all caricatures, this one 
has a basis in reality. There are some, unfortunately, who come all 
too close to fitting the caricature.  But most people I talk to have 
realized by now that this is a dead end -- it is a recipe for failure 
rather than a prerequisite to success, except in very limited and 
highly constrained problems.    (04)

>When I asked if it wasn't possible to have
>mappings between multiple ontologies that did not share a common basis,
>he said that was possible, but that it was a difficult problem.    (05)

Do a Google search on "ontology mapping" and you will see that it's a 
*very* active research area.    (06)
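At its simplest, a mapping between two ontologies is an alignment table 
from terms in one vocabulary to terms in the other. The sketch below is 
purely illustrative (all class names are invented, not taken from any 
real NASA ontology), and real mappings must align structure and 
constraints, not just labels:

```python
# Hypothetical alignment between two vocabularies. Every term here is
# invented for illustration; it is not drawn from any real ontology.
nasa_to_local = {
    "Spacecraft": "SpaceVehicle",
    "Instrument": "Sensor",
    "Observation": "Measurement",
}

def translate(term: str) -> str:
    """Map a term from one ontology into the other, if an alignment exists."""
    if term not in nasa_to_local:
        raise KeyError(f"No known mapping for {term!r}")
    return nasa_to_local[term]

print(translate("Instrument"))  # -> Sensor
```

The hard part, of course, is not applying such a table but constructing 
and validating it -- which is exactly why this is an active research area.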

>>  ...until that something is described in
>>  ways that we can analyze with enough mathematical precision to be the
>>  foundation of writing correct code, interoperation with TMs must
>>  always be a matter of guesswork. Which is a poor basis on which to try
>>  to build a planet-wide system of communication.
>
>Is interoperation a matter of correct code? Or is it understanding the
>semantics of what is to be communicated?    (07)

You need to understand precisely what it is you are communicating if 
you are going to write correct code.  If what you mean is not 
precisely and formally defined, I might interpret it my way when I 
write my software, and Pat might interpret it differently when he 
writes his software, and our respective systems will give different 
results when applied to the inputs you provide to us.    (08)
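A toy illustration of that failure mode (the record and both checks are 
invented for this example): a sender exchanges a "temperature" field 
whose unit is never formally defined, and two independently written 
consumers make different -- and individually reasonable -- assumptions:

```python
# Hypothetical record whose "temperature" field has no formally
# defined unit. Two consumers interpret it differently and disagree.
record = {"temperature": 30}  # units left unspecified by the sender

def my_freeze_check(rec):
    # I assume Celsius: 30 degrees C is well above freezing (0 C).
    return rec["temperature"] <= 0

def pats_freeze_check(rec):
    # Pat assumes Fahrenheit: 30 degrees F is below freezing (32 F).
    return rec["temperature"] <= 32

print(my_freeze_check(record), pats_freeze_check(record))  # -> False True
```

Both programs are "correct" with respect to their authors' readings; 
only a shared, precise semantics for the field could say which is right.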

>I think I have a better understanding of why you have placed such
>emphasis on a "common semantic base." And it would have (I think) the
>advantages that you ascribe for it, but at the cost (unknown) of
>excluding notations that don't share that "common semantic base."    (09)

If a notation does not share a semantic base that enables me to 
define what it means to process it correctly, then how can we 
implement software to process it?  You do it your way, I do it mine, 
and we get inconsistent answers.  Who adjudicates?    (010)

>>  ... 'Amplified Intelligence'.
>>  Ken calls this 'human-centered computing': the idea is to create machine
>>  systems which can act as "cognitive prostheses" or amplifiers of human
>>  abilities, so that the entire system of (person + AImachine) is
>>  capable of more than either can achieve alone. I can go on about this
>>  idea at length: too much length for this message. But the point I want
>>  to get across is that it is helpful to think of AI methods, including
>>  mechanical inference, as aids to people rather than competitors to
>>  human dominance. Forget that damn silly Turing Test
>>  (http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL%201/pdf/125.pdf), and stop
>>  worrying about the inhumanity of the machines. Backhoes and eyeglasses
>>  aren't human either, but they are very useful muscle- and
>>  vision-enhancers. What we need now are mind-enhancers :-)    (011)

This parallels the debate that raged in the decision theory community 
in the 70's about normative versus descriptive theories of decision 
making.  Ward Edwards, one of the founders of the field of decision 
analysis, viewed decision analysis as a set of cognitive tools for 
helping people to come closer to achieving the norm of logically 
coherent decisions that serve their values.  Notice that the norm is 
*not* just logically coherent!  Logical coherence for its own sake 
can be disastrous, as history will attest.  Just as you would not 
think of building a house without a hammer and saw, why would we 
think we can solve extremely challenging and complex decision 
problems without tools?    (012)

There were some in the early days of AI who disparaged decision 
analysis because it involved probability and the maximization of 
expected utility, which smacked of "number crunching" rather than the 
AI ideal of symbolic computing.  Papers on probability used to be 
rejected by AI conferences because "AI is about symbols and not 
numbers". Those days are long gone.  There is now a very active 
decision theoretic community within AI, to the benefit of both AI and 
decision analysis.    (013)
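For readers unfamiliar with the "number crunching" in question, 
maximizing expected utility just means computing, for each act, the 
probability-weighted average of the utilities of its outcomes, and 
choosing the act with the highest value. A minimal sketch, with 
invented probabilities and utilities:

```python
# Illustrative only: probabilities and utilities are made up.
# EU(act) = sum over outcomes of P(outcome) * U(act, outcome).
probs = {"rain": 0.3, "dry": 0.7}

# Utility of each (act, outcome) pair on an arbitrary 0-100 scale.
utility = {
    ("take_umbrella", "rain"): 80, ("take_umbrella", "dry"): 70,
    ("leave_umbrella", "rain"): 0, ("leave_umbrella", "dry"): 100,
}

def expected_utility(act):
    return sum(p * utility[(act, outcome)] for outcome, p in probs.items())

# EU(take_umbrella) = 0.3*80 + 0.7*70 = 73; EU(leave_umbrella) = 70.
best = max(["take_umbrella", "leave_umbrella"], key=expected_utility)
print(best)  # -> take_umbrella
```

The arithmetic is trivial; the decision-analytic work lies in eliciting 
probabilities and utilities that actually reflect the decision maker's 
beliefs and values.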

>...understood as you
>explain it, "human-centered computing" has much to offer and I really
>should do serious reading on it.    (014)

We might all do better if, when someone disparages an entire field of 
study as wrongheaded, our first reaction were to keep an open mind and 
look for the value that led thousands of people to devote their 
professional lives to it.    (015)

Kathy    (016)

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (017)
