Dear Alan,
Interesting point. So confirming that the inferences from the axioms are true and intended is a way of confirming that the axioms themselves are true. Sounds like a Track A or Track B measure.
I tend to test with a number of intended models, particularly outlying intended models, to see whether the axioms fit them.
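To illustrate what I mean (a toy sketch in Python, made up for this message rather than taken from my actual tooling): treat each intended model as a small finite interpretation, then evaluate every candidate axiom in every model and flag the combinations that fail. The outlying model here, in which a thing is part of itself, catches an irreflexivity axiom that may be stronger than intended.

    # Toy sketch: test candidate axioms against intended models.
    # Each model is a finite interpretation: relation names mapped to extensions.
    MODELS = {
        "typical": {
            "part_of": {("wheel", "car"), ("car", "fleet"), ("wheel", "fleet")},
        },
        "outlying (self-parthood)": {
            "part_of": {("car", "car")},
        },
    }

    # Each axiom is a closed formula, coded directly as a check over a model.
    def transitive(m):
        p = m["part_of"]
        return all((x, z) in p for x, y in p for y2, z in p if y == y2)

    def irreflexive(m):
        return all(x != y for x, y in m["part_of"])

    AXIOMS = {"part_of is transitive": transitive,
              "part_of is irreflexive": irreflexive}

    for model_name, model in MODELS.items():
        for axiom_name, axiom in AXIOMS.items():
            if not axiom(model):
                print(f"'{axiom_name}' rules out intended model '{model_name}'")

Run on these inputs, only the irreflexivity axiom fails, and only on the outlying model - which is exactly where over-strong axioms tend to show up.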
One thing I have learnt is that writing good logic is a skill, like writing good English. It helps a lot in making the inferences come out as you expect them to.
Regards
Matthew West
Information Junction
Tel: +44 1489 880185
Mobile: +44 750 3385279
Skype: dr.matthew.west
matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx
http://www.informationjunction.co.uk/
http://www.matthew-west.org.uk/
This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.
Registered office: 2 Brookside, Meadow Way, Letchworth Garden City, Hertfordshire, SG6 3JE.
From: ontology-summit-bounces@xxxxxxxxxxxxxxxx On Behalf Of Alan Rector
Sent: 22 January 2013 11:07
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] Reasoners and the life cycle
Matthew
On 28 Dec 2012, at 17:50, Fabian Neuhaus wrote:
Second, I don't see the need to explicitly talk about all inferences from the axioms as long as we are concerned with ontology languages that are based on truth-preserving deductive inference systems like Common Logic or OWL. If all the axioms in X are true, it follows that all inferences from the axioms in X are true.
The statement as given is theoretically true but seriously misleading in practice. Belief in it has led to serious harm - e.g. potentially life-threatening errors in medical ontologies. If human beings could recognise all the inferences that follow from a set of axioms, we wouldn't need reasoners. Axioms can be superficially plausible but have unexpected consequences, especially when combined with other superficially plausible axioms. Subtle errors in axioms that are difficult to spot can have disproportionate effects.
We can only know that a set of axioms is accurate by examining the inferences that follow from it and checking whether any are false. (Of course we can't examine all inferences except in trivial cases, but a systematic search for unanticipated inferences is central to the QA of any ontology in which inference plays a significant role.)
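To make that loop concrete, here is a toy, self-contained sketch in Python (the axioms are invented for illustration; in real work one would use an OWL reasoner such as HermiT or Pellet rather than hand-rolled code): compute the deductive closure of the asserted subsumptions, then put every inference that was never directly asserted in front of a reviewer. The part-whole propagation schema below is superficially plausible and is right for injuries, but the closure surfaces the false consequence that an amputation of a finger is a kind of amputation of an arm.

    # A superficially plausible axiom schema: a condition affecting a part
    # also affects the whole. True for injuries; silently false for amputations.
    PART_OF = [("finger", "hand"), ("hand", "arm")]
    ASSERTED = {(f"{kind}_of_{part}", f"{kind}_of_{whole}")
                for kind in ("injury", "amputation")
                for part, whole in PART_OF}

    def closure(pairs):
        """Transitive closure of the subsumption ('is a kind of') relation."""
        closed = set(pairs)
        while True:
            new = {(a, c) for a, b in closed for b2, c in closed
                   if b == b2 and (a, c) not in closed}
            if not new:
                return closed
            closed |= new

    # Every consequence that nobody asserted directly gets human review.
    for sub, sup in sorted(closure(ASSERTED) - ASSERTED):
        print(f"unasserted inference, please review: {sub} is a kind of {sup}")

No single axiom here looks obviously wrong on its own; only the review of the derived inferences exposes the error, which is the point.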
I have watched top logicians spend hours trying to understand the reasoning that led to an obviously false inference from what seemed an obviously correct set of axioms, even with the help of automatic theorem provers, justification finders, etc.
Add to this the difficulty of deriving axioms from the work of domain experts, no matter how clever the tools, and there is ample opportunity for incorrect inferences from apparently correct axioms.
If we are going to use logic, then we have to accept that logical inference and precision are not natural to human users, and that we have to debug the resulting inferences just as we have to debug the behaviour that results from seemingly correct programs.
Professor of Medical Informatics
School of Computer Science
TEL +44 (0) 161 275 6149/6188