
Re: [ontology-summit] Reasoners and the life cycle

To: "'Ontology Summit 2013 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Matthew West" <dr.matthew.west@xxxxxxxxx>
Date: Tue, 22 Jan 2013 13:18:16 -0000
Message-id: <50fe9198.4a63b40a.3be8.ffffb467@xxxxxxxxxxxxx>

Dear Alan,

Interesting point. So confirming that the inferences drawn from the axioms are true and intended is a way of confirming that the axioms themselves are true. Sounds like a Track A or Track B measure.


I tend to test with a number of intended models, particularly outlying intended models, to see if they fit.
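Testing axioms against intended models, including outlying ones, can be sketched in plain code. The following is a hypothetical toy illustration (the axiom, the model structure, and the names `axiom_holds`, `typical`, `outlier` are all invented for this sketch, not from any real ontology): a candidate axiom is checked against a typical intended model and against an outlying one, and the outlier exposes that the axiom is too strong.

```python
def axiom_holds(model):
    """Candidate axiom: every employee has exactly one manager."""
    return all(len(model["manager_of"].get(e, [])) == 1
               for e in model["employees"])

# A typical intended model: the axiom fits.
typical = {"employees": {"alice", "bob"},
           "manager_of": {"alice": ["carol"], "bob": ["carol"]}}

# An outlying intended model: the CEO has no manager.
outlier = {"employees": {"ceo"},
           "manager_of": {}}

print(axiom_holds(typical))  # True
print(axiom_holds(outlier))  # False -> the axiom is too strong
```

The value of the outlying model is exactly that it fails: it shows the axiom excludes an intended situation before the ontology is deployed.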


One thing I have learnt is that writing good logic is a skill, like writing good English. It can help a lot in making the inferences come out as you expect.




Matthew West                           

Information  Junction

Tel: +44 1489 880185

Mobile: +44 750 3385279

Skype: dr.matthew.west





This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.

Registered office: 2 Brookside, Meadow Way, Letchworth Garden City, Hertfordshire, SG6 3JE.




From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Alan Rector
Sent: 22 January 2013 11:07
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] Reasoners and the life cycle




On 28 Dec 2012, at 17:50, Fabian Neuhaus wrote:


Second, I don't see the need to explicitly talk about all inferences from the axioms as long as we are concerned with ontology languages that are based on truth-preserving deductive inference systems like Common Logic or OWL. If all the axioms in X are true it follows that all inferences from the axioms in X are true.



The statement as given is theoretically true but seriously misleading in practice.  Belief in it has led to serious harm - e.g. potentially life-threatening errors in medical ontologies.  If human beings could recognise all the inferences that follow from a set of axioms, we wouldn't need reasoners.  Axioms can be superficially plausible but have unexpected consequences, especially when combined with other superficially plausible axioms.   Subtle errors in axioms that are difficult to spot can have disproportionate effects.


We can only know that a set of axioms is accurate by examining the inferences that follow from it to see if any are false.  (Of course we cannot examine all inferences except in trivial cases, but systematic searches for unanticipated inferences are central to the QA of any ontology in which inference plays a significant role.)
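How individually plausible axioms combine into an unintended inference can be shown with a toy example. The sketch below is entirely hypothetical (the relation names, rules, and the amputation example are invented for illustration, loosely echoing well-known part-whole pitfalls in medical ontologies, not any real terminology): naive forward chaining computes the full set of inferences so that a human can inspect them for false conclusions.

```python
def closure(facts):
    """Naive forward chaining to a fixed point over two toy rules."""
    facts = set(facts)
    while True:
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                # Rule 1: part-of is transitive (plausible on its own).
                if r1 == r2 == "isPartOf" and b == c:
                    new.add((a, "isPartOf", d))
                # Rule 2: a procedure removing a part of X is an
                # amputation of X (also superficially plausible).
                if r1 == "removes" and r2 == "isPartOf" and b == c:
                    new.add((a, "amputationOf", d))
        if new <= facts:
            return facts
        facts |= new

axioms = {
    ("Finger", "isPartOf", "Hand"),
    ("Hand", "isPartOf", "Arm"),
    ("FingerAmputation", "removes", "Finger"),
}

inferred = closure(axioms) - axioms
for triple in sorted(inferred):
    print(triple)
# Among the inferences: ('FingerAmputation', 'amputationOf', 'Arm') --
# a clinically wrong classification that only appears when the
# individually plausible axioms interact.
```

No single axiom or rule is obviously wrong; the false conclusion emerges only from their combination, which is why the inferences themselves must be examined.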


I have watched top logicians spend hours trying to understand the reasoning that led to an obviously false inference from what seemed an obviously correct set of axioms, even with the help of automatic theorem provers, justification finders, etc. 


Add to this the difficulty of deriving axioms from work with domain experts, no matter how clever the tools, and there is more than ample opportunity for incorrect inferences from apparently correct axioms.


If we are going to use logic, then we have to accept that logical inference and precision are not natural to human users, and that we have to debug the resulting inferences just as we have to debug the performance that results from seemingly correct programs.
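Debugging inferences the way one debugs a program suggests treating expected and forbidden entailments as regression tests. A minimal hypothetical sketch (the closure function and the class names are invented for illustration): asserting that an intended inference holds and an unintended one does not, so any axiom change that breaks either is caught immediately.

```python
def subclass_closure(axioms):
    """Transitive closure of subclass axioms given as {(sub, sup), ...}."""
    closed = set(axioms)
    while True:
        new = {(a, d) for (a, b) in closed for (c, d) in closed if b == c}
        if new <= closed:
            return closed
        closed |= new

axioms = {("Finger", "Digit"), ("Digit", "BodyPart")}
entailed = subclass_closure(axioms)

# Intended inference must hold:
assert ("Finger", "BodyPart") in entailed
# Unintended (reversed) inference must NOT hold:
assert ("BodyPart", "Finger") not in entailed
print("inference regression tests passed")
```

Run against every revision of the axioms, such checks play the same role as unit tests for a seemingly correct program.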







Alan Rector

Professor of Medical Informatics

School of Computer Science

University of Manchester

Manchester M13 9PL, UK

TEL +44 (0) 161 275 6149/6188

FAX +44 (0) 161 275 6204





Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/   
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/  
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2013/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013  
Community Portal: http://ontolog.cim3.net/wiki/