> So what is the accuracy of "The world is a sphere?" What is the accuracy of "All birds can fly"?
What about this: accuracy can also be a matter of perception (a roundness that is not there) and of generalization (over a group with some distinct features, leaving aside the few exceptions, e.g. the poor penguins).
Happy 2013.
Francesca
*************
PhD candidate
The Hong Kong Polytechnic University
From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [ontology-summit-bounces@xxxxxxxxxxxxxxxx] on behalf of Fabian Neuhaus [fneuhaus@xxxxxxxx]
Sent: Tuesday, January 01, 2013 2:02 AM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] Reasoners and the life cycle
Dear Matthew,
On Dec 29, 2012, at 5:29 AM, Matthew West wrote:
Dear Fabian,
Matthew,
Yes, we use the word "accuracy" differently. I am not married to the term, but I believe the concept that I denote with the word "accuracy" is important. Since I am not in the word-smithing business, let's just use "accuracy_F" for my notion of accuracy and
"accuracy_M" for yours until somebody comes up with prettier terms.
Here are my arguments why accuracy_F is more widely applicable than accuracy_M, and why I think accuracy_M, if defined by "closeness to truth", has its flaws.
(1) A test for accuracy_M requires an answer to the following question: "Is the axiom X close enough to the truth for the purpose
to which it is being put?", where the purpose derives from the requirements of the application that the ontology is part of. In the absence of a given purpose it does not make sense to ask whether an axiom is accurate_M.
MW: Well really it comes in two parts. Firstly you can say what the accuracy is, say pi to 3 significant figures, or to 5. That does not change. When it comes to assessing the ontology for a particular purpose, you need to look at your requirements
and whether they are matched.
Okay. So what is the accuracy of "The world is a sphere?" What is the accuracy of "All birds can fly"?
As we have discussed before, there are ontologies that are not developed with a specific application in mind (e.g., Gene Ontology, Foundational Model of Anatomy). I would argue that in these cases the notion of accuracy_M is not applicable. But even if you think
that there are some hidden requirements based on implicit assumptions about the future use of these ontologies, it would be very hard to identify requirements for these ontologies that would allow you to evaluate their accuracy_M. So even if
accuracy_M is theoretically defined in these cases, there is no way to measure it.
MW: I disagree rather strongly. I think there is a clear purpose for these ontologies, which is to be an integrating ontology over some scope, which is actually surprisingly
narrow. So for example, you would not consider using them for engineering applications, or for product sales.
You are saying: "The purpose of the ontology X is to be an integrating ontology"? Okay, let's take a step back. The whole reason why we are talking about the purpose of an ontology is because the purpose is supposed to give us the requirements that we
can use to evaluate the ontology. E.g., if somebody asks "What is the purpose of building the new road between X and Y?" a possible answer is "To reduce the traffic congestion", and one can evaluate the design of the road by studying traffic, use of existing
roads, etc. But your answer is analogous to "The purpose of this road is to be a highway." That's not its purpose; that's what it is. For example, the fact that the FMA is an (integrating?) ontology of human anatomy does not tell you anything about the relationships
it is supposed to include. It does not tell you anything about the requirements for representing developmental change in human anatomy etc.
In contrast, at least in the case of scientific ontologies that cover established knowledge in a domain it is pretty straightforward to test for accuracy_F: just ask domain experts whether the axioms are true.
MW: So is Newtonian Physics true?
Newton thought that his results, e.g., the laws of motion, are universal laws that describe the behavior of all objects in the universe. In this sense, Newtonian Physics is false. As a theory about the gravity, mass, and weight of the slow, medium-sized objects
that people encounter in their daily lives, it is true.
(Of course, domain experts might be wrong, so this is not a fool-proof approach to measure accuracy_F, but then again no measurement technique is without flaws).
MW: Actually, this is a really bad argument. Most scientists would agree that the current scientific theories are just those that have not yet been proven wrong, and particularly
in the field of physics there is a constant expectation that the current set of theories may be overturned by some insight (indeed there is good evidence that they cannot be correct). Hence the well known saying “All theories are wrong, but some are useful”.
That gets you back to accuracy_M where you need to say “useful for what?”
I don't know why you think that this is relevant to what I wrote. But what most scientists would agree to is that all scientific knowledge is falsifiable. However, this does not mean that the current scientific theories are "just those that have not been
proven wrong yet", let alone that scientists assume that all theories are wrong. There is a vast difference between falsifiable and false.
Anyway, I was not talking about philosophy of science, but just making a point that there is a measurement technique for accuracy_F, namely asking scientists whether the content of the ontology is true or false. The challenge for you is to come up with
a measurement for accuracy_M. According to what you wrote above, these are actually two questions: How do you measure the accuracy? And how do you measure whether the accuracy is "close enough" for a given purpose of an ontology?
(2) For ontology reuse, accuracy_F is more important than accuracy_M. Imagine person A developed an ontology to a given set
of requirements R1 and determined by thorough testing that the ontology is accurate_M with respect to R1. Now person B considers reusing the ontology within a different application with a different set of requirements R2. For person B it is completely irrelevant
to know whether the ontology is accurate_M with respect to R1. What B would be interested in is whether the ontology is accurate_M with respect to R2, but that information is not available.
MW: That is just not true. Requirements R2 are met if they are a subset of R1.
Yes, in this specific case. But what about the (much more likely) case in which R2 is not a subset of R1?
In contrast, since accuracy_F is invariant to the requirements, the information that the ontology has been tested successfully for accuracy_F is valuable to person B. Granted, it is not as good as finding out whether the content of the ontology meets the requirements
of B, but it is at least something.
MW: Let’s take another example. I have an ontology that says that a thing can be a part of itself. Is it true? The answer will depend on whether you are using a classical mereology
or not. So the only answer you can give is “Yes or no”.
This is just an ambiguous use of the word "part". Axiomatic mereology was founded by Leśniewski, who was mainly interested in mereology as a substitute for set theory. Analogous to subset and proper subset, he distinguished between parthood and proper parthood,
and this has become the standard terminology for all logicians and formal ontologists. This choice of terminology is confusing, since the proper parthood relationship in mereology is a better match for the various parthood relationships that we use in daily
life. But if we resolve the ambiguity, there is no problem. If by "part of" you mean the relationship that people use in English to describe the relationships between the first half of a soccer game and the whole game or the first two years of Mr. Obama's
presidency and the whole first term, then the answer is: no, things cannot be part of themselves.
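Leśniewski's subset analogy can be mirrored directly with Python's set operators, where `<=` plays the role of parthood (reflexive) and `<` the role of proper parthood (irreflexive). A toy sketch using the soccer example, not a formal mereology:

```python
# Toy model of the subset analogy:
# parthood ~ subset (<=), proper parthood ~ proper subset (<)
game = {"first_half", "second_half"}   # the whole soccer game
first_half = {"first_half"}            # one of its parts

assert first_half <= game    # parthood: the first half is part of the game
assert first_half < game     # proper parthood: it is a *proper* part
assert game <= game          # mereological parthood is reflexive
assert not (game < game)     # but nothing is a proper part of itself
```

This is why "can a thing be part of itself?" gets the answer "yes" for the logician's parthood but "no" for the everyday relation, which matches proper parthood.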
(3) While the notions of "closer to the truth" and "absolutely true" might seem to make some intuitive sense in the context of
well-chosen examples, it is very hard to generalize these ideas. I am not talking about the lack of a formal theory; obviously, fuzzy logic provides a theoretical framework for it. However, I have yet to encounter any satisfying explanation of what a truth-value
of 0.35628 means. And there is always the question of how one determines the truth-values. Unless you have an answer to how one determines whether "The earth is a sphere" is closer to the truth than "All birds fly", I don't think we should rely on these ideas in ontology
evaluation.
MW: That is the wrong idea altogether. It is not a matter of truth values, and it is fine to be exactly true in accuracy_M; but being close to the truth is about distance from
it, not the probability of being absolutely true.
Fuzzy logic has nothing to do with probability (yes, I know Wikipedia says otherwise, but that is just wrong). It is a way to formalize the intuition that you expressed: namely, that it is not sufficient to talk about true and false, but that we need to
account for distance from the truth. To put it in the terminology you used: the "distance from the truth" is expressed by a value in the interval from 0 to 1, where 0 is "absolutely true" and 1 is "absolutely false".
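For what it's worth, the usual convention in fuzzy logic assigns degree 1 to "absolutely true", so the "distance from the truth" in the sense above is 1 minus the truth degree. A minimal sketch of the Łukasiewicz connectives (the degree 0.95 below is an invented example value, not a claim about how such degrees should be determined):

```python
# Łukasiewicz fuzzy connectives over truth degrees in [0, 1] (1 = absolutely true)
def f_not(a: float) -> float:
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)

def f_or(a: float, b: float) -> float:
    return min(1.0, a + b)

def distance_from_truth(a: float) -> float:
    # 0 = absolutely true, 1 = absolutely false, as in the paragraph above
    return 1.0 - a

# An invented degree for "All birds fly"; how to justify such a number
# is exactly the open question raised under point (3).
birds_fly = 0.95
print(distance_from_truth(birds_fly))
```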
(4) I believe that the thing you are ultimately interested in is whether the axioms enable the ontology to meet its requirements as part of a given application. In other words, the important question is: does the ontology provide the functions that it needs
to provide to make the whole system work? And this has nothing to do with axioms being true or "close to true", as the following thought experiment shows. Let's assume that the role of an ontology in an application is to determine whether there is a train
connection between two points. (Not the route, just whether there is a connection or not.) In reality, there is a train line from A to B, from B to C, and from C to A, and no other train line. However, the ontology O contains the following axioms:
(a) if there is a train line from x to y, x is connected to y.
(b) if x is connected to y, and there is a train line from y to z, then x is connected to z.
(c) There is a train line from A to C, a train line from C to B, and a train line from B to A.
All of axioms in (c) are false. Not "close to true", just plain false; thus these axioms are not accurate_M. Nevertheless, the ontology will perform its function in the application perfectly fine.
MW: I don’t think I follow this. You seem to be saying that there is a train line from A to B, but not from B to A. Not quite sure how that makes sense.
Yes, I assume for this example that train lines are one-directional. If you think this is unrealistic, just replace "train line" with "one-way street" in the example. The point of the example is that all axioms are false, but that the axiom set will respond
to all queries about connectedness with true answers, and thus provides the intended functionality to the application. Hence truth (even closeness to truth) of the axioms in the ontology is not required to enable an application to work.
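The thought experiment can be checked mechanically. A minimal sketch, where the function implements axioms (a) and (b) as a fixed-point computation and the two edge sets encode reality and the false axioms (c):

```python
def connected(lines):
    """Compute connectedness from a set of directed train lines,
    following axiom (a) (every line connects its endpoints) and
    axiom (b) (extend a connection by one more line), to a fixed point."""
    conn = set(lines)                       # axiom (a)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(conn):
            for (y2, z) in lines:           # axiom (b)
                if y == y2 and (x, z) not in conn:
                    conn.add((x, z))
                    changed = True
    return conn

reality = {("A", "B"), ("B", "C"), ("C", "A")}    # the actual train lines
ontology = {("A", "C"), ("C", "B"), ("B", "A")}   # axioms (c): all false

# The reversed cycle has the same transitive closure as the real one,
# so every connectedness query gets the same (true) answer.
assert connected(reality) == connected(ontology)
```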
Best
Fabian
Regards
Matthew West
Information Junction
Tel: +44 1489 880185
Mobile: +44 750 3385279
Skype: dr.matthew.west
This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.
Registered office: 2 Brookside, Meadow Way, Letchworth Garden City, Hertfordshire, SG6 3JE.
On Dec 28, 2012, at 2:42 PM, Matthew West wrote:
It seems we are thinking of different cases.
"ontology x is accurate" iff all axioms in x are true.
I think closer to what you may mean is:
“Ontology X is accurate” iff all inferences from the axioms in X are true, within the scope of the application of the ontology.
No, that's not what I mean. First, I don't really understand what "true within the scope of the ontology" is supposed to express. Whether an axiom is true or false does not depend on the scope of an ontology. The axiom (part_of audi volkswagen) is out of
scope for an ontology of human anatomy, but its irrelevance does not make the axiom less true.
Second, I don't see the need to explicitly talk about all inferences from the axioms as long as we are concerned with ontology languages that are based on truth-preserving deductive inference systems like Common Logic or OWL. If all the axioms in X are true
it follows that all inferences from the axioms in X are true.
MW: Well I am thinking of Newtonian Physics. This is good enough for most purposes (scope) but is not absolutely true. So most of the time the inferences from using Newtonian
Physics will give a good enough answer, but there are some circumstances where it will not.
Following Vrandecic "Ontology Evaluation" in "Handbook on Ontologies, Second Edition" (with some modification), I think we need to distinguish (among other things) four questions:
- Accuracy: Is the content of the ontology true?
MW: In my data quality work I talk about accuracy as being “How close to the truth is the content?” So I would say that pi = 3.1415 is closer to the truth than pi= 3.14. Neither
of course is absolutely true. The more useful question in my opinion is not whether an ontology is accurate absolutely, but whether it is accurate enough for the purpose to which it is being put.
- Completeness: Is the domain of interest appropriately covered (this includes terminology as well as axioms)?
- Conciseness: Is the ontology free of irrelevant classes, relations, and axioms?
- Redundancy: Are some axioms redundant because they are already entailed by other axioms?
(Vrandecic considers both redundancy and conciseness, but lumps them in the same category.) Of these four questions two (completeness and conciseness) can be answered only with respect to a given set of requirements, because the requirements will determine
the scope of the ontology and the necessary depth of the axiomatization. Redundancy is independent of any requirements, but it depends on the choice of the logic (or, more specifically, its entailment relationship). Accuracy depends on neither requirements
nor entailment relationship.
MW: Well you will gather I have another view about accuracy.
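MW's pi example of accuracy as "closeness to the truth" can be made concrete: closeness is just the absolute error of the stated value. A small sketch:

```python
import math

# Distance from the truth for two approximations of pi
for label, value in [("3.14", 3.14), ("3.1415", 3.1415)]:
    print(label, "error:", abs(math.pi - value))

# 3.1415 is closer to the truth than 3.14, yet neither is exactly true
assert abs(math.pi - 3.1415) < abs(math.pi - 3.14)
```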
The point I tried to make in my last email is that even an ontology that contains false axioms might be able to meet all the requirements of a given application. This might be because the application never uses the axiom.
MW: OK. That is certainly true.
(E.g., if the application uses only the taxonomy of the ontology, it does not matter if some parthood axioms are wrong.) A more interesting case is the one I mentioned in my last email, where false information in the ontology enables the proper functioning of
a larger application. This can be the case because the false axioms are "close enough" to make no difference or because the larger application accommodates the errors in some form.
MW: This is closer to my “good enough” approach. There is no harm using Newtonian physics if you have something that spots where it does not apply, and routes you to a more appropriate
theory.
On Dec 28, 2012, at 10:26 AM, Matthew West wrote:
Obviously, this allows for the possibility that an inaccurate ontology might meet the requirements of a given application.
"ontology x is accurate" iff all axioms in x are true.
I think closer to what you may mean is:
“Ontology X is accurate” iff all inferences from the axioms in X are true, within the scope of the application of the ontology.
I agree that the appropriate level of detail is dependent on requirements. But I also think that there is a notion of "accuracy" that is not dependent on context or requirements. For this reason I think it is useful to distinguish two notions: accuracy and
completeness. I use the terms in the following way:
"ontology x is accurate" iff all axioms in x are true.
"ontology x is complete (given a set of requirements)" iff
(a) x contains all classes and relations that are needed to represent a piece of reality sufficiently to meet the requirements, and
(b) x contains sufficient axioms that are needed to answer queries that are relevant for the requirements (given an appropriate theorem prover)
Obviously, this allows for the possibility that an inaccurate ontology might meet the requirements of a given application. For example, you might include the axiom (pi = 3.1415), which is inaccurate but might be sufficient for the needs of an application.
On Dec 23, 2012, at 4:57 PM, Michael F Uschold wrote:
The examples you give are good, but seem unlikely to turn up in most ontologies people are building, because the logics used are not designed to support heavy computation. The fact that there are different representations
that are all important but logically inconsistent seems a great argument that logic is not the best tool for the job of representing all three things in a unified framework.
I think the more important point is that "accuracy" is not absolute; there is some notion of context or purpose that is relevant. Something can be accurate enough for one purpose, but woefully inadequate
for others. Ideally both representations can seamlessly live together, zooming in and out to the appropriate level of detail -- but this may not be possible, as in the cases you note.
On appealing to authority:
Perhaps you missed my smiley - it was a rhetorical comment.
> There is a broader issue here that reasoners are only a part of. That is:
> "Where do you derive confidence from that your ontology is accurate"?
Accuracy is important, but it's only one of many features that make
an ontology good. Sometimes, *less* accuracy is better.
For example, relativity and quantum mechanics are known to be
more accurate than Newtonian mechanics. But Newtonian mechanics
is preferable for large objects moving at typical speeds on earth.
As another example, the Navier-Stokes equations for fluid mechanics
are accurate, but too complex for efficient computation. Therefore,
nearly every application uses some approximations. Sometimes, the
*same* application may use *contradictory* approximations for
different aspects: laminar flow, turbulent flow, subsonic flow,
supersonic flow.
MFU
> There is a nice tool called Ontology Pitfall Scanner which allows
> you to upload an ontology and it will do a bunch of things like this.
I checked the web site, and I noticed that many of the tests it performs
can also be determined by the FCA tools (Formal Concept Analysis). But
FCA can also *generate* the hierarchy automatically in a form that is
guaranteed to avoid those pitfalls.
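A toy sketch of what FCA does (the object-attribute context below is invented for illustration): every formal concept is a closed (extent, intent) pair, and ordering the extents by inclusion yields the hierarchy, which by construction forms a lattice.

```python
from itertools import combinations

# Invented formal context: objects and their attributes
context = {
    "penguin": {"bird", "swims"},
    "sparrow": {"bird", "flies"},
    "bat":     {"mammal", "flies"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """All objects that have every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """All attributes shared by every object in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# Enumerate the formal concepts by closing every attribute subset.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

# E.g., ({penguin, sparrow}, {bird}) is a concept; ordering concepts by
# extent inclusion gives the derived hierarchy.
```

This brute-force enumeration is only feasible for tiny contexts; real FCA tools use algorithms such as next-closure, but the resulting concept lattice is the same.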
MFU
> Still other ways to derive confidence that the ontology is accurate
> is to have it checked by experts in the field.
Yes. That is a good example of how one should appeal to "authority".
See my further comments on that point below.
But first, I'd like to cite Alan Rector's points on another thread:
AR
> Ambitions for global "reference terminologies" lead to artefacts built
> by committees some of whose originators - e.g. IHTSDO/SNOMED CT -
> even disclaim responsibility for how they should be used...
>
> Large scale reference ontologies - or models of any kind - can also
> be caught by conflicts of requirements from multiple potential users
> - clinical care, statistical reporting, billing, speed of use, etc.
>
> The results have not always been happy.
These are examples of "too many authorities spoil the broth".
JFS
>> Jim [Hendler] said that he liked the article very much:
> Hmm, the subtext here is scarily close to the fallacy of appealing
> to authority. If Jim liked it it must be good.
I cited Jim in self defense. He is known to be one of the chief
promoters of the Semantic Web, but I've been known to criticize
many aspects of it.
In any case, citing authority is not, by itself, a fallacy. Every
academic paper cites authorities, and any paper without such citations
is suspect. What is wrong is a *fallacious* appeal to authority:
Linus Pauling was a brilliant chemist.
Linus Pauling said that megadoses of vitamin C are beneficial.
Therefore, megadoses of vitamin C are beneficial.
This reasoning is fallacious for two reasons: (1) being an expert
in chemistry does not necessarily mean that one is an expert in
medicine; and (2) even among experts in medicine, there is no
consensus that megadoses of vitamin C are beneficial.
Re Semantic Web: Jim Hendler is an acknowledged expert, and he has
been highly supportive of its development. I cited his authority
as evidence that my points are compatible with that development.
John
Michael Uschold
Senior Ontology Consultant, Semantic Arts
http://www.semanticarts.com
LinkedIn: http://tr.im/limfu
Skype, Twitter: UscholdM