
Re: [ontology-summit] Reasoners and the life cycle

To: "'Ontology Summit 2013 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Matthew West" <dr.matthew.west@xxxxxxxxx>
Date: Thu, 3 Jan 2013 19:59:24 -0000
Message-id: <50e5e31c.c507b40a.77bd.ffffd940@xxxxxxxxxxxxx>

Dear Fabian,

 

Matthew, 

Yes, we use the word "accuracy" differently. I am not married to the term, but I believe the concept that I denote with the word "accuracy" is important. Since I am not in the word-smithing business, let's just use "accuracy_F" for my notion of accuracy and "accuracy_M" for yours until somebody comes up with prettier terms. 

 

Here are my arguments for why accuracy_F is more widely applicable than accuracy_M, and why I think accuracy_M, if defined by "closeness to truth", has its flaws.

 

(1)   A test for accuracy_M requires an answer to the following question: "Is the axiom X close enough to the truth for the purpose to which it is being put?", where the purpose derives from the requirements of the application that the ontology is part of. In the absence of a given purpose it does not make sense to ask whether an axiom is accurate_M.

MW: Well really it comes in two parts. Firstly, you can say what the accuracy is, e.g. pi to 3 significant figures, or to 5. That does not change. When it comes to assessing the ontology for a particular purpose, you need to look at your requirement and whether it is matched.

 

Okay. So what is the accuracy of "The world is a sphere"?

 

MW2: Well I would expect that to be expressed as the % difference in volume between the sphere proposed as a model of the world and the world itself, with a maximum acceptable difference between the model and reality. Of course WGS84, the oblate spheroid used by the GPS system and by most maps and charts these days, is still only an approximation. Indeed, in the oil industry, when I was last aware of what was happening, different oblate spheroids were used for different parts of the earth to give the best local approximation.
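 

For illustration, a rough sketch of that kind of figure might look like the following, taking a sphere with the equatorial radius as a deliberately crude model and the WGS84 ellipsoid as the reference; the WGS84 constants are standard, but the choice of sphere radius is just one possible modelling choice.

    # Illustrative sketch: "accuracy" of a spherical model of the earth expressed
    # as a % difference in volume against the WGS84 reference ellipsoid.
    import math

    a = 6378137.0                # WGS84 semi-major (equatorial) axis, metres
    f = 1 / 298.257223563        # WGS84 flattening
    b = a * (1 - f)              # semi-minor (polar) axis

    v_ellipsoid = 4 / 3 * math.pi * a * a * b  # volume of the WGS84 ellipsoid
    v_sphere = 4 / 3 * math.pi * a ** 3        # sphere with the equatorial radius

    pct_diff = 100 * abs(v_sphere - v_ellipsoid) / v_ellipsoid
    print(f"% difference in volume: {pct_diff:.2f}")   # about 0.34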

 

What is the accuracy of "All birds can fly"? 

 

MW2: That is untrue. I don’t think there is a way to say “Most birds can fly” in logic, which is a shame.

 

As we have discussed before, there are ontologies that are not developed with a specific application in mind (e.g., Gene Ontology, Foundational Model of Anatomy). I would argue that in these cases the notion of accuracy_M is not applicable. But even if you think that there are some hidden requirements based on implicit assumptions about the future use of these ontologies, it would be very hard to identify requirements that would allow you to evaluate the accuracy_M of these ontologies. So even if accuracy_M in these cases is theoretically defined, there is no way to measure it.

 

MW: I disagree rather strongly. I think there is a clear purpose for these ontologies, which is to be an integrating ontology over some scope, which is actually surprisingly narrow. So for example, you would not consider using them for engineering applications, or for product sales.

 

You are saying: "The purpose of the ontology X is to be an integrating ontology"? Okay, let's take a step back. The whole reason why we are talking about the purpose of an ontology is that the purpose is supposed to give us the requirements that we can use to evaluate the ontology. E.g., if somebody asks "What is the purpose of building the new road between X and Y?" a possible answer is "To reduce the traffic congestion", and one can evaluate the design of the road by studying traffic, use of existing roads, etc. But your answer is analogous to "The purpose of this road is to be a highway." That's not its purpose, that's what it is. For example, the fact that the FMA is an (integrating?) ontology of human anatomy does not tell you anything about the relationships it is supposed to include. It does not tell you anything about the requirements for representing developmental change in human anatomy, etc.

 

MW2: If an ontology is intended to be an integrating ontology, that has consequences for how the ontology is developed, and it has properties that you can examine to see whether it does that successfully.

 

In contrast, at least in the case of scientific ontologies that cover established knowledge in a domain, it is pretty straightforward to test for accuracy_F: just ask domain experts whether the axioms are true.

 

MW: So is Newtonian Physics true?

 

 

Newton thought that his results, e.g., the laws of motion, were universal laws that describe the behavior of all objects in the universe. In this sense, Newtonian Physics is false. As a theory about the gravity, mass, and weight of the slow, medium-sized objects people encounter in their daily lives, it is true.


MW2: Now you see I would rather say that it is accurate for engineering purposes provided you do not travel at speeds greater than X. You could even make that part of your ontology.

 

(Of course, domain experts might be wrong, so this is not a fool-proof approach to measure accuracy_F, but then again no measurement technique is without flaws). 

 

MW: Actually, this is a really bad argument. Most scientists would agree that the current scientific theories are just those that have not yet been proven wrong, and particularly in the field of physics there is a constant expectation that the current set of theories may be overturned by some insight (indeed there is good evidence that they cannot be correct). Hence the well-known saying "All theories are wrong, but some are useful". That gets you back to accuracy_M, where you need to say "useful for what?"

 

I don't know why you think that this is relevant to what I wrote. But what most scientists would agree to is that all scientific knowledge is falsifiable. However, this does not mean that the current scientific theories are "just those that have not been proven wrong yet", let alone that scientists assume that all theories are wrong. There is a vast difference between falsifiable and false. 

 

Anyway, I was not talking about philosophy of science, but just making a point that there is a measurement technique for accuracy_F, namely asking scientists whether the content of the ontology is true or false.

 

MW2: I suggest you are unlikely to get either of those as an answer. You’re much more likely to get an answer like “it depends...”

 

The challenge for you is to come up with a measurement for accuracy_M. According to what you wrote above, these are actually two questions: How do you measure the accuracy? And how do you measure whether the accuracy is "close enough" for a given purpose of an ontology?

 

MW2: I notice that I am talking about accuracy in a quantitative sense, and you are talking about it in a purely logical sense. For quantitative accuracy, there is a state of affairs that your model represents, and your accuracy is rather simply the difference between that state of affairs and your representation of it.
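 

For illustration, a minimal sketch of that quantitative sense, applied to the earlier pi example (the helper name relative_error is just for the sketch; it is one way, not the only way, to cash out "difference"):

    # Illustrative sketch: quantitative accuracy as the relative difference
    # between a state of affairs and its representation.
    import math

    def relative_error(state_of_affairs, representation):
        # the "distance" between reality and the model, as a fraction of reality
        return abs(representation - state_of_affairs) / abs(state_of_affairs)

    print(relative_error(math.pi, 3.14))     # pi to 3 significant figures -> about 5.1e-04
    print(relative_error(math.pi, 3.1416))   # pi to 5 significant figures -> about 2.3e-06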

 

MW2: I think logical accuracy is actually harder. You can clearly say that logical inconsistency means your axioms are inaccurate, but what is your basis for saying that they are accurate? What do you compare them to? If you have two logical theories for the same thing, which are inconsistent with each other but both work (3D vs 4D would be an example), how do you state the accuracy of these? If they are both accurate, do you accept both of them? How do you account for their being inconsistent?

 

(2)   For ontology reuse, accuracy_F is more important than accuracy_M. Imagine person A developed an ontology to a given set of requirements R1 and determined by thorough testing that the ontology is accurate_M with respect to R1. Now person B is considering reusing the ontology within a different application with a different set of requirements R2. For person B it is completely irrelevant to know whether the ontology is accurate_M with respect to R1. What B would be interested in is whether the ontology is accurate_M with respect to R2, but that information is not available.

 

MW:  That is just not true. Requirements R2 are met if they are a subset of R1.

 

Yes, in this specific case. But what about the (much more likely) case that R2 is not a subset of R1?

 

MW2: Then either all the requirements that were met were not stated, or the requirements are not met.

 

In contrast, since accuracy_F is invariant to the requirements, the information that the ontology has been tested successfully for accuracy_F is valuable to person B. Granted, it is not as good as finding out whether the content of the ontology meets the requirements of B, but it is at least something. 

 

MW: Let’s take another example. I have an ontology that says that a thing can be a part of itself. Is it true? The answer will depend on whether you are using a classical mereology or not. So the only answer you can give is “Yes or no”.

 

This is just an ambiguous use of the word "part". Axiomatic mereology was founded by Leśniewski, who was mainly interested in mereology as a substitute for set theory. Analogous to subset and proper subset, he distinguished between parthood and proper parthood, and this has become the standard terminology for all logicians and formal ontologists. This choice of terminology is confusing, since the proper parthood relationship in mereology is a better match to the various parthood relationships that we use in daily life. But if we resolve the ambiguity, there is no problem. If by "part of" you mean the relationship that people use in English to describe the relationship between the first half of a soccer game and the whole game, or the first two years of Mr. Obama's presidency and the whole first term, then the answer is: no, things cannot be part of themselves.
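 

For illustration, a minimal sketch of the distinction over a finite, explicitly listed model (the facts and names are invented purely for the example):

    # Illustrative sketch: parthood vs proper parthood, analogous to subset vs proper subset.
    parthood = {("first_half", "soccer_game"), ("first_two_years", "first_term")}

    def part_of(x, y):
        # classical mereological parthood is reflexive: everything is a part of itself
        return x == y or (x, y) in parthood

    def proper_part_of(x, y):
        # proper parthood is parthood minus identity; nothing is a proper part of itself
        return part_of(x, y) and x != y

    print(part_of("soccer_game", "soccer_game"))          # True  (the technical sense)
    print(proper_part_of("soccer_game", "soccer_game"))   # False (the everyday sense)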

 



 

(3)   While the notions of "closer to the truth" and "absolutely true" might seem to make some intuitive sense in the context of well-chosen examples, it is very hard to generalize these ideas. I am not talking about the lack of a formal theory; obviously, fuzzy logic provides a theoretical framework for it. However, I have yet to encounter any satisfying explanation of what a truth-value of 0.35628 means. And there is always the question of how one determines the truth-values. Unless you have an answer for how to determine whether "The earth is a sphere" is closer to the truth than "All birds fly", I don't think we should rely on these ideas in ontology evaluation.

 

MW: That is the wrong idea altogether. It is not a matter of truth values, and it is fine to be exactly true in accuracy_M, but being close to the truth is about distance from it, not the probability of being absolutely true.

 

Fuzzy logic has nothing to do with probability (yes, I know Wikipedia says otherwise, but that is just wrong). It is a way to formalize the intuition that you expressed: namely, that it is not sufficient to talk about true and false, but that we need to account for distance from the truth. To put it in the terminology you used: the "distance from the truth" is expressed by a value in the interval from 0 to 1, where 0 is "absolutely true" and 1 is "absolutely false".

 

 



 

(4) I believe that the thing you are ultimately interested in is whether the axioms enable the ontology to meet its requirements as part of a given application. In other words, the important question is: does the ontology provide the functions that it needs to provide to make the whole system work? And this has nothing to do with axioms being true or "close to true", as the following thought experiment shows. Let's assume that the role of an ontology in an application is to determine whether there is a train connection between two points. (Not the route, just whether there is a connection or not.) In reality, there is a train line from A to B, from B to C, and from C to A, and no other train line. However, the ontology O contains the following axioms:

(a) if there is a train line from x to y, x is connected to y. 

(b) if x is connected to y, and there is a train line from y to z, then x is connected to z. 

(c) There is a train line from A to C, a train line from C to B, and a train line from B to A. 

All of the axioms in (c) are false. Not "close to true", just plain false; thus these axioms are not accurate_M. Nevertheless, the ontology will perform its function in the application perfectly fine.
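 

For illustration, a small sketch that runs the thought experiment: chain axioms (a) and (b) over a set of train-line facts, and compare the connectedness answers produced by the true facts with those produced by the false facts in (c). The forward-chaining code is just one way of applying the axioms.

    # Illustrative sketch: forward-chain (a) "a line from x to y makes x connected to y"
    # and (b) "x connected to y plus a line from y to z makes x connected to z".
    def connected(train_lines):
        conn = set(train_lines)                    # axiom (a)
        changed = True
        while changed:                             # apply axiom (b) until a fixed point
            changed = False
            for (x, y) in list(conn):
                for (y2, z) in train_lines:
                    if y == y2 and (x, z) not in conn:
                        conn.add((x, z))
                        changed = True
        return conn

    reality = {("A", "B"), ("B", "C"), ("C", "A")}    # the true train lines
    ontology = {("A", "C"), ("C", "B"), ("B", "A")}   # the false facts of axiom (c)

    # every query "is x connected to y?" gets the same answer from both
    print(connected(reality) == connected(ontology))  # True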

 

MW: I don’t think I follow this. You seem to be saying that there is a train line from A to B, but not from B to A. Not quite sure how that makes sense. 

 

Yes, I assume for this example that train lines are one-directional. If you think this is unrealistic, just replace "train line" with "one-way street" in the example. The point of the example is that all axioms are false, but that the axiom set will respond to all queries about connectedness with true answers, and thus provides the intended functionality to the application. Hence truth (even closeness to truth) of the axioms in the ontology is not required to enable an application to work. 

 

MW: I am reminded of the observation that “The worst possible thing you can do, is the right thing for the wrong reason.”

 

Regards

 

Matthew West                           

Information  Junction

Tel: +44 1489 880185

Mobile: +44 750 3385279

Skype: dr.matthew.west

matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx

http://www.informationjunction.co.uk/

http://www.matthew-west.org.uk/

 

This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.

Registered office: 2 Brookside, Meadow Way, Letchworth Garden City, Hertfordshire, SG6 3JE.

 

 

