
Re: [ontology-summit] Reasoners and the life cycle

To: Ontology Summit 2013 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
From: Fabian Neuhaus <fneuhaus@xxxxxxxx>
Date: Wed, 23 Jan 2013 15:08:26 -0500
Message-id: <E49C7FC6-A846-43BC-8405-177EEDB02FD8@xxxxxxxx>



On 28 Dec 2012, at 17:50, Fabian Neuhaus wrote:

Second, I don't see the need to explicitly talk about all inferences from the axioms as long as we are concerned with ontology languages that are based on truth-preserving deductive inference systems like Common Logic or OWL. If all the axioms in X are true, it follows that all inferences from the axioms in X are true.


The statement as given is theoretically true but seriously misleading in practice. Belief in it has led to real harm - e.g. potentially life-threatening errors in medical ontologies. If human beings could recognise all the inferences that follow from a set of axioms, we would not need reasoners. Axioms can be superficially plausible yet have unexpected consequences, especially when combined with other superficially plausible axioms. Subtle errors in axioms that are difficult to spot can have disproportionate effects.
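A toy sketch may make the point concrete (the axioms, relation names, and anatomy fragment here are entirely hypothetical, invented for illustration and not drawn from any real ontology): two rules that each look plausible in isolation - transitivity of part-of, and "an injury to a part counts as an injury to the whole" - interact under forward chaining to entail a conclusion nobody intended, and the unwanted inference only becomes visible once the closure is actually computed.

```python
from itertools import product

# Toy facts: a tiny (hypothetical) anatomy fragment plus one observation.
facts = {
    ("finger", "part_of", "hand"),
    ("hand", "part_of", "arm"),
    ("finger", "injured", "true"),
}

def closure(facts):
    """Naive forward chaining: apply both rules until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b), (c, r2, d) in product(facts, facts):
            # Rule 1 (plausible): part_of is transitive.
            if r1 == r2 == "part_of" and b == c:
                new.add((a, "part_of", d))
            # Rule 2 (superficially plausible, too strong):
            # an injury to a part is an injury to every whole it is part of.
            if r1 == "injured" and r2 == "part_of" and a == c:
                new.add((d, "injured", "true"))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

inferred = closure(facts)
# The interaction of the two rules entails that the whole arm is injured -
# an unintended consequence of axioms that each seemed harmless on its own.
print(("arm", "injured", "true") in inferred)  # True
```

Neither rule looks wrong by itself; only inspecting the inferred closure reveals that together they over-classify, which is exactly why debugging has to happen at the level of inferences rather than individual axioms.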


We can only know that a set of axioms is accurate by examining the inferences that follow from it and checking whether any are false. (Of course we cannot examine all inferences except in trivial cases, but a systematic search for unanticipated inferences is central to the QA of any ontology in which inference plays a significant role.)

I have watched top logicians spend hours trying to understand the reasoning that led to an obviously false inference from what seemed an obviously correct set of axioms, even with the help of automated theorem provers, justification finders, and similar tools.

Add to this the difficulty of formulating axioms from the work of domain experts, and, no matter how clever the tools, there is more than ample opportunity for incorrect inferences from apparently correct axioms.

If we are going to use logic, then we have to accept that logical inference and precision do not come naturally to human users, and that we have to debug the resulting inferences just as we have to debug the behaviour of seemingly correct programs.

Regards

Alan



Alan, 
I agree with you. What seems to be a disagreement is just the result of your reading my statement out of context. Matthew and I were discussing the merits of alternative definitions of "accuracy". He suggested replacing
"ontology x is accurate" iff all axioms in x are true. 
by 
“Ontology X is accurate” iff all inferences from the axioms in X are true, within the scope of the application of the ontology.

In this context I argued that *for the sake of this definition* it makes no difference whether you consider just the truth of the axioms or the truth of all their consequences (assuming we consider CL or OWL), because the definition merely characterises accuracy as a property of the ontology; it says nothing about how the accuracy of an ontology is to be measured.

For the purpose of evaluating the accuracy of an ontology, I completely agree with you that it is usually impossible to evaluate the axioms individually; one needs to consider their interactions by looking at their logical consequences. And I share the experience you describe: seemingly evidently true axioms may lead to unintended logical consequences, and it is often a non-trivial task to connect the dots.

Best
Fabian 

_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/   
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/  
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2013/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013  
Community Portal: http://ontolog.cim3.net/wiki/