
Re: [ontology-summit] Reasoners and the life cycle

To: "'Ontology Summit 2013 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Matthew West" <dr.matthew.west@xxxxxxxxx>
Date: Sat, 29 Dec 2012 10:29:44 -0000
Message-id: <50dec616.c35fb40a.5637.3c08@xxxxxxxxxxxxx>

Dear Fabian,

 

Matthew, 

Yes, we use the word "accuracy" differently. I am not married to the term, but I believe the concept that I denote with the word "accuracy" is important. Since I am not in the word-smithing business, let's just use "accuracy_F" for my notion of accuracy and "accuracy_M" for yours until somebody comes up with prettier terms. 

 

Here are my arguments why accuracy_F is more widely applicable than accuracy_M, and why I think accuracy_M, if defined by "closeness to truth", has its flaws. 

 

(1)   A test for accuracy_M requires an answer to the following question: "Is the axiom X close enough to the truth for the purpose to which it is being put?", where the purpose derives from the requirements of the application that the ontology is part of. In the absence of a given purpose it does not make sense to ask whether an axiom is accurate_M.

MW: Well, really it comes in two parts. Firstly, you can state what the accuracy is, say pi to 3 significant figures, or to 5. That does not change. When it comes to assessing the ontology for a particular purpose, you need to look at your requirement and whether it is matched.

As we have discussed before, there are ontologies that are not developed with a specific application in mind (e.g., Gene Ontology, Foundational Model of Anatomy). I would argue that in these cases the notion of accuracy_M is not applicable. But even if you think that there are some hidden requirements based on implicit assumptions about the future use of these ontologies, it would be very hard to identify requirements for these ontologies that would allow you to evaluate their accuracy_M. So even if accuracy_M in these cases is theoretically defined, there is no way to measure it. 

 

MW: I disagree rather strongly. I think there is a clear purpose for these ontologies, which is to be an integrating ontology over some scope, which is actually surprisingly narrow. So for example, you would not consider using them for engineering applications, or for product sales.

 

In contrast, at least in the case of scientific ontologies that cover established knowledge in a domain, it is pretty straightforward to test for accuracy_F: just ask domain experts whether the axioms are true.

 

MW: So is Newtonian Physics true?

 

(Of course, domain experts might be wrong, so this is not a fool-proof approach to measuring accuracy_F, but then again no measurement technique is without flaws.) 

 

MW: Actually, this is a really bad argument. Most scientists would agree that the current scientific theories are just those that have not yet been proven wrong, and particularly in the field of physics there is a constant expectation that the current set of theories may be overturned by some insight (indeed there is good evidence that they cannot be correct). Hence the well-known saying “All models are wrong, but some are useful”. That gets you back to accuracy_M, where you need to say “useful for what?”

 

(2)   For ontology reuse, accuracy_F is more important than accuracy_M. Imagine person A developed an ontology to a given set of requirements R1 and determined by thorough testing that the ontology is accurate_M with respect to R1. Now person B considers reusing the ontology within a different application with a different set of requirements R2. For person B it is completely irrelevant to know whether the ontology is accurate_M with respect to R1. What B would be interested in is whether the ontology is accurate_M with respect to R2, but that information is not available.

 

MW:  That is just not true. Requirements R2 are met if they are a subset of R1.

 

In contrast, since accuracy_F is invariant to the requirements, the information that the ontology has been tested successfully for accuracy_F is valuable to person B. Granted, it is not as good as finding out whether the content of the ontology meets the requirements of B, but it is at least something. 

 

MW: Let’s take another example. I have an ontology that says that a thing can be a part of itself. Is it true? The answer will depend on whether you are using a classical mereology or not. So the only answer you can give is “Yes or no”.
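MW: Formally, the two readings come apart like this (writing P for parthood and PP for proper parthood): classical ground mereology takes parthood to be reflexive, while proper parthood is irreflexive:

    \forall x\; P(x,x)          (ground mereology: everything is a part of itself)
    \forall x\; \neg PP(x,x)    (proper parthood: nothing is a proper part of itself)

So the axiom is true under the first reading and false under the second.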

 

(3)   While the notions of "closer to the truth" and "absolutely true" might seem to make some intuitive sense in the context of well-chosen examples, it is very hard to generalize these ideas. I am not talking about the lack of a formal theory; obviously, fuzzy logic provides a theoretical framework for it. However, I have yet to encounter any satisfying explanation of what a truth-value of 0.35628 means. And there is always the question of how one determines the truth-values. Unless you have an answer for how to determine whether "The earth is a sphere" is closer to the truth than "All birds fly", I don't think we should rely on these ideas in ontology evaluation.

 

MW: That is the wrong idea altogether. It is not a matter of truth values, and it is fine to be exactly true in accuracy_M, but being close to the truth is about distance from it, not the probability of being absolutely true.

 

(4) I believe that the thing you are ultimately interested in is whether the axioms enable the ontology to meet its requirements as a part of a given application. In other words, the important question is: does the ontology provide the functions that it needs to provide to make the whole system work? And this has nothing to do with axioms being true or "close to true", as the following thought experiment shows. Let's assume that the role of an ontology in an application is to determine whether there is a train connection between two points. (Not the route, just whether there is a connection or not.) In reality, there is a train line from A to B, from B to C, and from C to A, and no other train line. However, the ontology O contains the following axioms: 

(a) if there is a train line from x to y, x is connected to y. 

(b) if x is connected to y, and there is a train line from y to z, then x is connected to z. 

(c) There is a train line from A to C, a train line from C to B, and a train line from B to A. 

All of the axioms in (c) are false. Not "close to true", just plain false; thus these axioms are not accurate_M. Nevertheless, the ontology will perform its function in the application perfectly well. 
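
To make the thought experiment concrete, here is a small Python sketch (purely illustrative; the code is not from any ontology tool) that computes the "connected" relation licensed by axioms (a) and (b). The real train lines and the false axiom set (c) derive exactly the same relation, so the application cannot tell them apart:

def connected(lines):
    # Axiom (a): a train line from x to y makes x connected to y.
    closure = set(lines)
    # Axiom (b): if x is connected to y and there is a line from y to z,
    # then x is connected to z. Apply until nothing new is derived.
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in lines:
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

real_lines = {("A", "B"), ("B", "C"), ("C", "A")}    # how the world is
false_lines = {("A", "C"), ("C", "B"), ("B", "A")}   # the false axioms in (c)

assert connected(real_lines) == connected(false_lines)  # indistinguishable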

 

MW: I don’t think I follow this. You seem to be saying that there is a train line from A to B, but not from B to A. Not quite sure how that makes sense.

 

Regards

 

Matthew West                           

Information  Junction

Tel: +44 1489 880185

Mobile: +44 750 3385279

Skype: dr.matthew.west

matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx

http://www.informationjunction.co.uk/

http://www.matthew-west.org.uk/

 

This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.

Registered office: 2 Brookside, Meadow Way, Letchworth Garden City, Hertfordshire, SG6 3JE.

 

 

Best

Fabian  

 

 

On Dec 28, 2012, at 2:42 PM, Matthew West wrote:



Dear Fabian,

 

It seems we are thinking of different cases.

 

"ontology x is accurate" iff all axioms in x are true. 

I think closer to what you may mean is:

“Ontology X is accurate” iff all inferences from the axioms in X are true, within the scope of the application of the ontology.

 

 

No, that's not what I mean. First, I don't really understand what "true within the scope of the ontology" is supposed to express. Whether an axiom is true or false does not depend on the scope of an ontology. The axiom (part_of audi volkswagen) is out of scope for an ontology of human anatomy, but its irrelevance does not make the axiom less true. 

 

Second, I don't see the need to explicitly talk about all inferences from the axioms as long as we are concerned with ontology languages that are based on  truth-preserving deductive inference systems like Common Logic or OWL. If all the axioms in X are true it follows that all inferences from the axioms in X are true. 

 

MW: Well I am thinking of Newtonian Physics. This is good enough for most purposes (scope) but is not absolutely true. So most of the time the inferences from using Newtonian Physics will give a good enough answer, but there are some circumstances where it will not.
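
MW: To put a rough number on “good enough”, here is a quick Python sketch (kinetic energy is just a convenient quantity to compare; nothing here comes from a real ontology): the Newtonian answer is off by about one part in 10^14 for a car, but by more than half a percent at a tenth of the speed of light.

import math

C = 299_792_458.0  # speed of light in m/s

def ke_newton(m, v):
    # the Newtonian approximation 1/2 m v^2
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    # (gamma - 1) m c^2, with gamma - 1 computed stably for small v/c
    x = (v / C) ** 2
    return math.expm1(-0.5 * math.log1p(-x)) * m * C ** 2

for v in (30.0, 0.1 * C):  # a car on a motorway vs. a tenth of light speed
    n, r = ke_newton(1.0, v), ke_relativistic(1.0, v)
    print(f"v = {v:.3g} m/s   relative error = {abs(n - r) / r:.1e}")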

 

Following Vrandecic "Ontology Evaluation" in "Handbook on Ontologies, Second Edition" (with some modification), I think we need to distinguish (among other things) four questions: 

- Accuracy: Is the content of the ontology true? 

MW: In my data quality work I talk about accuracy as being “How close to the truth is the content?” So I would say that pi = 3.1415 is closer to the truth than pi = 3.14. Neither of course is absolutely true. The more useful question in my opinion is not whether an ontology is accurate absolutely, but whether it is accurate enough for the purpose to which it is being put.
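
MW: As a sketch of what I mean (illustrative Python only; the function names are my own):

import math

def accuracy_m(approx, true_value):
    # accuracy_M as distance from the truth: smaller is closer
    return abs(approx - true_value)

def accurate_enough(approx, true_value, tolerance):
    # fitness for purpose: the distance measured against a requirement
    return accuracy_m(approx, true_value) <= tolerance

print(accuracy_m(3.14, math.pi))    # ~1.6e-3
print(accuracy_m(3.1415, math.pi))  # ~9.3e-5: closer to the truth

print(accurate_enough(3.14, math.pi, 1e-2))    # True: fine for rough work
print(accurate_enough(3.1415, math.pi, 1e-6))  # False: not good enough here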

 

- Completeness: Is the domain of interest appropriately covered (this includes terminology as well as axioms)? 

- Conciseness: Does the ontology contain irrelevant classes, relations, or axioms? 

- Redundancy: Are some axioms redundant because they are already entailed by other axioms? 

 

(Vrandecic considers both redundancy and conciseness, but lumps them in the same category.) Of these four questions, two (completeness and conciseness) can be answered only with respect to a given set of requirements, because the requirements determine the scope of the ontology and the necessary depth of the axiomatization. Redundancy is independent of any requirements, but it depends on the choice of the logic (or, more specifically, its entailment relationship). Accuracy depends on neither requirements nor entailment relationship. 
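
Incidentally, redundancy is the one of the four that can be checked mechanically, and the check makes the dependence on the entailment relationship visible. A toy Python sketch for propositional axioms, testing entailment by brute-force enumeration of truth assignments (classical entailment; a real ontology would use an OWL or Common Logic reasoner, and the example axioms are mine):

from itertools import product

def entailed(axioms, candidate, variables):
    # candidate is redundant iff every truth assignment that satisfies
    # all the other axioms also satisfies the candidate (classical
    # entailment; a different logic gives a different verdict)
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(ax(env) for ax in axioms) and not candidate(env):
            return False
    return True

# The axioms p -> q and q -> r make a further axiom p -> r redundant.
axioms = [lambda e: not e["p"] or e["q"],
          lambda e: not e["q"] or e["r"]]
candidate = lambda e: not e["p"] or e["r"]

print(entailed(axioms, candidate, ["p", "q", "r"]))  # True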

 

MW: Well you will gather I have another view about accuracy.

 

The point I tried to make in my last email is that even an ontology that contains false axioms might be able to meet all requirements of a given application. This might be because the application never uses those axioms.

 

MW: OK. That is certainly true.

 

(E.g., if the application uses only the taxonomy of the ontology, it does not matter if some parthood axioms are wrong.) A more interesting case is the one I mentioned in my last email, where false information in the ontology enables the proper functioning of a larger application. This can be the case because the false axioms are "close enough" to make no difference, or because the larger application accommodates the errors in some form. 

 

MW: This is closer to my “good enough” approach. There is no harm using Newtonian physics if you have something that spots where it does not apply, and routes you to a more appropriate theory.

 

Regards

 

Matthew West                           


 

 

Best

Fabian 

 

 

 

On Dec 28, 2012, at 10:26 AM, Matthew West wrote:




Dear Fabian,

 

I don’t think this:

Obviously, this allows for the possibility that an inaccurate ontology might meet the requirements of a given application.

follows from this:

"ontology x is accurate" iff all axioms in x are true. 

I think closer to what you may mean is:

“Ontology X is accurate” iff all inferences from the axioms in X are true, within the scope of the application of the ontology.

 

Regards

 

Matthew West                           


 

 

From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Fabian Neuhaus
Sent: 27 December 2012 17:39
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] Reasoners and the life cycle

 

Michael, 

I agree that the appropriate level of detail is dependent on requirements. But I also think that there is a notion of "accuracy" that is not dependent on context or requirements. For this reason I think it is useful to distinguish two notions: accuracy and completeness. I use the terms in the following way:  

"ontology x is accurate" iff all axioms in x are true. 

"ontology x is complete (given a set of requirements)" iff 

          (a) x contains all classes and relations that are needed to represent a piece of reality sufficiently to meet the requirements, and  

          (b) x contains sufficient axioms to answer the queries that are relevant for the requirements (given an appropriate theorem prover) 

 

Obviously, this allows for the possibility that an inaccurate ontology might meet the requirements of a given application. For example, you might include the axiom (pi = 3.1415), which is inaccurate but might be sufficient for the needs of an application. 

 

Best

Fabian  

 

 

 

 

On Dec 23, 2012, at 4:57 PM, Michael F Uschold wrote:





On "accuracy"

The examples you give are good, but seem unlikely to turn up in most ontologies people are building, because the logics used are not designed to support heavy computation. The fact that different representations are all important but logically inconsistent seems a great argument that logic is not the best tool for the job of representing all three things in a unified framework.

 

I think the more important point is that "accuracy" is not absolute; there is some notion of context or purpose that is relevant. Something can be accurate enough for one purpose, but woefully inadequate for others. Ideally both representations can seamlessly live together, zooming in and out to the appropriate level of detail -- but this may not be possible, as in the cases you note.

--

 

On appealing to authority:

Perhaps you missed my smiley - it was a rhetorical comment.  

On Thu, Dec 20, 2012 at 6:32 PM, John F Sowa <sowa@xxxxxxxxxxx> wrote:

Michael and Alan,

MFU

> There is a broader issue here that reasoners are only a part of. That is:
> "Where do you derive confidence from that your ontology is accurate"?

Accuracy is important, but it's only one of many features that make
an ontology good.  Sometimes, *less* accuracy is better.

For example, relativity and quantum mechanics are known to be
more accurate than Newtonian mechanics.  But Newtonian mechanics
is preferable for large objects moving at typical speeds on earth.

As another example, the Navier-Stokes equations for fluid mechanics
are accurate, but too complex for efficient computation.  Therefore,
nearly every application uses some approximations.  Sometimes, the
*same* application may use *contradictory* approximations for
different aspects:  laminar flow, turbulent flow, subsonic flow,
supersonic flow.

MFU

> There is a nice tool called Ontology Pitfall Scanner which allows
> you to upload an ontology and it will do a bunch of things like this.

I checked the web site, and I noticed that many of the tests it performs
can also be determined by the FCA tools (Formal Concept Analysis).  But
FCA can also *generate* the hierarchy automatically in a form that is
guaranteed to avoid those pitfalls.

MFU

> Still other ways to derive confidence that the ontology is accurate
> is to have it checked by experts in the field.

Yes.  That is a good example of how one should appeal to "authority".
See my further comments on that point below.

But first, I'd like to cite Alan Rector's points on another thread:

AR
> Ambitions for global "reference terminologies" lead to artefacts built
> by committees some of whose originators - e.g. IHTSDO/SNOMED CT -
> even disclaim responsibility for how they should be used...
>
> Large scale reference ontologies - or models of any kind - can also
> be caught by conflicts of requirements from multiple potential users
> - clinical care, statistical reporting, billing, speed of use, etc.
>
> The results have not always been happy.

These are examples of "too many authorities spoil the broth".

JFS
>> Jim [Hendler] said that he liked the article very much:

>>
>>  http://www.jfsowa.com/pubs/fflogic.pdf
>>  Fads and fallacies about logic

MFU

> Hmm, the subtext here is scarily close to the fallacy of appealing
> to authority. If Jim liked it it must be good.

I cited Jim in self defense.  He is known to be one of the chief
promoters of the Semantic Web, but I've been known to criticize
many aspects of it.

In any case, citing authority is not, by itself, a fallacy.  Every
academic paper cites authorities, and any paper without such citations
is suspect.  What is wrong is a *fallacious* appeal to authority:

    Linus Pauling was a brilliant physicist.
    Linus Pauling said that megadoses of vitamin C are beneficial.
    Therefore, megadoses of vitamin C are beneficial.

This reasoning is fallacious for two reasons:  (1) being an expert
in physics does not necessarily mean that one is an expert in
medicine; and (2) even among experts in medicine, there is no
consensus that megadoses of vitamin C are beneficial.

Re Semantic Web: Jim Hendler is an acknowledged expert, and he has
been highly supportive of its development.  I cited his authority
as evidence that my points are compatible with that development.

John



 

--

Michael Uschold
   Senior Ontology Consultant, Semantic Arts
   
http://www.semanticarts.com
   LinkedIn: http://tr.im/limfu
   Skype, Twitter: UscholdM

 

 


 

