
Re: [ontolog-forum] [ontology-summit] Reasoners and the life cycle

To: "Ontology Summit 2013 discussion" <ontology-summit@xxxxxxxxxxxxxxxx>, "" [ontolog-forum] "" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "doug foxvog" <doug@xxxxxxxxxx>
Date: Thu, 10 Jan 2013 17:16:13 -0500
Message-id: <d30bf29cf7e580f4f46c4ec3f073a3c4.squirrel@xxxxxxxxxxxxxxxxx>
On Wed, January 9, 2013 20:29, Obrst, Leo J. wrote:
> Fabian, Matthew, and all,
>
> I wonder whether it would be better to distinguish, perhaps under the more general
> notion of "accuracy", the notion of "granularity"? Although there
> are some obvious inter-definitional relations between the terms,
> "accuracy" is perhaps here more pejorative, i.e., your stuff is less
> accurate than my stuff. And thus possibly the term generates more
> argumentation than is needed.    (01)

Some of what is being discussed in this thread is better termed "precision"
-- for example, discussions of the number of significant figures.  Even the
shape-of-the-Earth discussion seems to fall under "precision", not "accuracy".
The terms "sphere", "oblate spheroid", and "ellipsoid" all have mathematical
definitions of things with infinite precision.  When used for a physical
object, such terms only approximate that object -- and how close the
approximation is to the true shape of the object is a matter of precision.    (02)
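
To put a rough number on how close the "sphere" approximation is, here is a
back-of-the-envelope sketch in Python (the figures are round, illustrative
values, not authoritative ones):

   # How far does the Earth's surface depart from a sphere of its mean radius?
   mean_r    = 6_371_000   # m, approximate mean radius
   equator_r = 6_378_000   # m, approximate equatorial radius
   polar_r   = 6_357_000   # m, approximate polar radius
   everest   = 8_849       # m above sea level (approx.)
   mariana   = 10_900      # m below sea level (approx.)

   worst = max(equator_r - mean_r, mean_r - polar_r, everest, mariana)
   print(f"worst departure: {worst} m ({worst / mean_r:.3%} of the radius)")
   # -> about 0.2%: "sphere" fits the Earth to roughly three significant figures.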

> I think accuracy_M largely falls out as "accuracy of X to the level of
> granularity M required for some (requirement) Y" (where granularity is a
> scale I: M to N, and call that granularity_I_of_X_for_Y). I don't know
> that the granularity scale can be easily constructed, however, except in
> terms of greater precisification or perhaps granular partitions (and their
> projective relations, see below). Or a more full-fledged
> category-theoretic formalization.
>
> Accuracy_F is perhaps a narrower notion, i.e., what is the most precise
> notion, i.e., most granular notion, of X, and call that
> granularity_N_of_X_for_Y (in which case, Y is now universally quantified).
> Domain experts, let us say in this case, scientists, will nearly always
> say that granularity_N_of_X_for_Y is the "truest" statement we know,
> but some scientists and many engineers will tolerate some
> granularity_I_of_X_for_Y where I is not N.
>
> And yes, the above should be laid out in logical axioms, so that the
> differences are more apparent (exercise for the reader ;).
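
Taking up that exercise very loosely (this is my reading of the quoted notions,
not necessarily Leo's intent), one might write something like:

   $\mathit{accurate}_M(X, Y) \leftrightarrow \mathit{gran}(X, Y) \succeq \mathit{granReq}(Y)$
   $\mathit{accurate}_F(X) \leftrightarrow \forall Y\, \mathit{accurate}_M(X, Y)$

where $\mathit{gran}(X, Y)$ is the granularity level of $X$ on the scale $I$
(from $M$ to $N$) and $\mathit{granReq}(Y)$ is the level that requirement $Y$
demands.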
>
> I think nearly everyone in the 21st century will gauge the following to be
> true (assuming also that they know "the world" is a way of referring
> to "Earth"):    (03)

> A)     The world is round.    (04)

> B)     The world is mostly round.    (05)

This may seem to be a weird claim to many.  People are used to
"round" things being rough -- and being far from spherical.  A basketball,
a baseball, an inflated balloon, and even a rugby ball are all considered
"round".  It would be strange to call them "mostly round".  "Mostly
round" suggests to me that there are large areas (say, over 10% of the
surface) that are not round.    (06)

Calling the Earth, which is more spherical than a ping-pong ball, "mostly
round" is highly misleading, imho.    (07)

> C)     The world is a sphere.    (08)

I believe that many will disagree with this, arguing that the surface
of the Earth is not smooth (mountains, etc.), and since spheres are
smooth, the Earth can't be a sphere.    (09)

Most everyone would agree to
C') The world is spherical.    (010)

> D)     The world is mostly a sphere.    (011)

Same objection here as to B.    (012)

> But perhaps not (E-H), which may in fact be true to a geo- or
> astrophysicist [1, 2, 3]:    (013)

I note that what is being discussed in [1,2,3] is NOT "the shape
of the Earth", but the shape of the "geoid", defined as
     "the surface within or around the earth that is everywhere
      normal to the direction of gravity and coincides with mean
      sea level in the oceans"
During the last ice age, the oceans were over 100 m lower than they are
now, so the geoid's radii would then have been over 100 m smaller.  If
the Antarctic and Greenland ice caps were to melt, the oceans would be
tens of meters higher than they are now, and the geoid's radii
correspondingly larger.    (014)

Does this mean that melting ice caps change the size of the Earth?    (015)

> E)      The world is an ellipsoid. (Many will not know what
> "ellipsoid" means; when given the definition, they may decide this is
> a true statement).    (016)

This is valid for both the Earth and the geoid.    (017)

> F)      The world is pear-shaped. (Many will not know, though they will
> know what "pear-shaped" means).    (018)

This is an (inaccurate) description of the geoid, not of the shape of the world.    (019)

Many will strongly disagree with this.  The Earth is far closer to the
shape of a sphere than it is to the shape of any natural pear.    (020)

Few people would know a mathematical definition for "pear-shaped".    (021)

Early satellite measurements indicated that sea level was on the order
of 10 meters further from the center of the Earth in the mid-southern
hemisphere and on the order of 10 meters nearer the center of the
Earth in the mid-northern hemisphere than an oblate spheroid.
Someone called this "pear-shaped".  Later measurements have shown
greater irregularities, so the description should be dropped.    (022)

> G)     The world is an oblate spheroid/oblate ellipsoid.    (023)

This is valid for both the Earth and the geoid.    (024)

> H)     The world is an oblate ellipsoid with equatorial radius
> 6,378,136.6 meters, polar radius 6,356,751.9 meters, inverse flattening
> 298.25642, etc. [3]    (025)

Reference 3 [http://en.wikipedia.org/wiki/IERS] does not state this.    (026)

This statement is certainly precise -- but I find it highly inaccurate.
It may be correct for the geoid, but not for the Earth.    (027)

A statement that something is some geometric figure, given to 8 significant
figures, strongly suggests to me that the accuracy of the mapping is
on the scale of the precision.  When the surface has depressions over
10,000 m deep and bumps over 8,000 m high, specifying a radius to
fractions of a meter seems inaccurate to me.    (028)

An equatorial radius would be the distance from the center of the Earth
to its surface at the Equator.  Such a distance varies by over 10 kilometers,
so anything more precise than, say, 6,378 +/- 5 km would be misleading.    (029)
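
That said, the figures quoted in (H) are at least internally consistent as a
description of a reference ellipsoid -- the question is what they are figures
of.  A quick check (an illustrative Python sketch, using the radii as quoted):

   a = 6_378_136.6   # equatorial radius in meters, as quoted in (H)
   b = 6_356_751.9   # polar radius in meters, as quoted in (H)

   print(f"1/f from the quoted radii: {a / (a - b):.4f}")
   # -> about 298.2570, close to the quoted 298.25642; the small difference
   #    is within the rounding of the quoted radii

   relief = 8_849 + 10_900   # Everest above + Mariana Trench below sea level, m (approx.)
   print(f"surface relief: ~{relief / 1000:.0f} km, against the 0.1 m quoted precision")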

Now, the 'polar radius' would be between two points, and so could be
very precise, if the identity of those points is unchanging.  Are those
points on the surface of the ice at each pole?  If so, the distance is variable.
Are they the points where the axis of the Earth leaves bedrock at each
pole?  If so, it suggests that very accurate and precise measurements
have been made through miles of ice at the South Pole and through polar
water at the North Pole.  The poles have in recent years been wobbling by
about 1.5 meters, so if the Earth's surface at the poles happens to be
relatively flat, that would not change the polar radius much.    (030)

If the 'polar radius' is measured between the points where the poles intersect
sea level, then the object whose dimension is given is not the Earth, but an
ideal figure whose shape is chosen by some "intelligent agent", according to
some rules, to approximate the shape of the Earth.  This raises the
question of whether sea-level rise changes the "Earth's radius".    (031)

> But H entails A-G, no? [You might have a quibble with (B) and (D) over the
> word "mostly", and so think these should be above (A) and (C) in any
> taxonomy.]    (032)

The inaccurate H entails A, C, E, and G.  It does not entail F.    (033)

> And no, this is not "fuzzy logic", which I think is misguided (at
> least for me, truth values are not real numbers between 0 and 1), but
> instead approximation.    (034)

> What's missing is a theory of granularity, i.e., an ontology of
> granularity of ontology.    (035)

Agreed.    (036)

> The vagueness literature does focus on this,
> i.e., what is in the positive extension of some predicate P?  With a
> notion of precisification that enables you to "zoom" in on what you
> intend to say and thereby determine its truth value. That's probably
> where we should start -- and start, I would say, from a semantic analysis of
> vagueness for that ontology, not from an ontological analysis (i.e., big-O
> formal ontology from philosophy, where objects are fuzzy), nor from a
> logical analysis of vagueness (i.e., fuzzy logic, where truth values are
> fuzzy) [4-12]. For a potential logical (mereological) rendering, see
> [9].    (037)

> For ontology evaluation purposes, it would be good in some way to gauge
> the level of granularity of the ontology, or in fact, the range of levels
> of granularity in the ontology.    (038)

> Thanks,
> Leo
>
> [1] Spherical Earth. http://en.wikipedia.org/wiki/Spherical_Earth. E.g.,
> "As the science of geodesy measured Earth more accurately, the shape of
> the geoid was first found not to be a perfect sphere but to approximate an
> oblate spheroid, a specific type of ellipsoid. More recent measurements
> have measured the geoid to unprecedented accuracy, revealing mass
> concentrations beneath Earth's surface."
> [2] Figure of the Earth. http://en.wikipedia.org/wiki/Figure_of_the_Earth.
> "An ellipsoidal model describes only the ellipsoid's geometry and a
> normal gravity field formula to go with it. Commonly an ellipsoidal model
> is part of a more encompassing geodetic datum. For example, the older
> ED-50 (European Datum 1950) is based on the Hayford or International
> Ellipsoid. WGS-84 is peculiar in that the same name is used for both the
> complete geodetic reference system and its component ellipsoidal model.
> Nevertheless the two concepts -- ellipsoidal model and geodetic reference
> system -- remain distinct.
> Note that the same ellipsoid may be known by different names. It is best
> to mention the defining constants for unambiguous identification."
> [3] International Earth Rotation and Reference Systems Service.
> http://en.wikipedia.org/wiki/IERS.
>
> Selected vagueness, granularity, approximation references:
> [4] Bittner, Thomas; Barry Smith. 2001. A unified theory of granularity,
> vagueness and approximation. In: Proc. of the 1st Workshop on Spatial
> Vagueness, Uncertainty, and Granularity (SVUG01).
> http://www.cs.northwestern.edu/~bittner/BittnerSmithSVUG01.pdf.
> [5] Bittner, Thomas and Barry Smith. 2001. Granular Partitions and
> Vagueness. In: Christopher Welty and Barry Smith (eds.), Formal Ontology
> and Information Systems, New York: ACM Press, 2001, 309–321.
> [6] Cohn, A. G. and Gotts, N. M. 1994. The `Egg-yolk' Representation of
> Regions with Indeterminate Boundaries, in P. Burrough and A. M. Frank
> (eds), Proceedings, GISDATA Specialist Meeting on Geographical Objects
> with Undetermined Boundaries, Francis Taylor.
> [7] Keefe, Rosanna; Smith, Peter, eds. 1999.  Vagueness: A Reader.
> Cambridge, MA: MIT Press.
> [8] Obrst, Leo, and Inderjeet Mani, eds. 2000.  Proceedings of the
> Workshop on Semantic Approximation, Granularity, and Vagueness, April 11,
> 2000, Seventh International Conference on Principles of Knowledge
> Representation and Reasoning (KR-2000), April 12-16, Breckenridge, CO.
> [9] Styrman, Avril, and Aapo Halko. 2012. Finitist Set Theory in Modeling
> Granular Structures. Helsinki 21.8.2012.
> 
> http://www.cs.helsinki.fi/u/astyrman/fst.pdf.
> [10] Varzi, Achille. 2001. Vagueness in Geography. Philosophy & Geography
> 4, pp. 49-65. A version of this paper appeared in the Proceedings of the
> Workshop on Semantic Approximation, Granularity, and Vagueness, Leo Obrst
> and Inderjeet Mani, co-chairs, as “Vague Names for Sharp Objects,”
> April 11, 2000, Principles of Knowledge Representation and Reasoning
> (KR-2000), April 12-16, Breckenridge, CO.
> http://www.columbia.edu/~av72/papers/P&G_2001.pdf.
> [11] Varzi, Achille. 2001. Vagueness, Logic, and Ontology. The Dialogue 1,
> pp. 135-54.  http://www.columbia.edu/~av72/papers/Dialogue_2000.pdf.
> [12] Williamson, Timothy.  1998.  Vagueness.  London, New York:
> Routledge.
>
> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Fabian
> Neuhaus
> Sent: Monday, January 07, 2013 4:22 PM
> To: Ontology Summit 2013 discussion
> Subject: Re: [ontology-summit] Reasoners and the life cycle
>
> Dear Matthew
>
>
> Yes, we use the word "accuracy" differently. I am not married to the term,
> but I believe the concept that I denote with the word "accuracy" is
> important. Since I am not in the word-smithing business, let's just use
> "accuracy_F" for my notion of accuracy and "accuracy_M" for yours until
> somebody comes up with prettier terms.
>
> Here are my arguments why accuracy_F is more widely applicable than
> accuracy_M, and why I think accuracy_M, if defined by "closeness to truth", has its flaws.
>
> (1)   A test for accuracy_M requires an answer to the following question:
> "Is the axiom X close enough to the truth for the purpose to which it is
> being put?", where the purpose derives from the requirements of the
> application that the ontology is part of. In the absence of a given purpose
> it does not make sense to ask whether an axiom is accurate_M.
> MW: Well really it comes in two parts. Firstly you can say what the
> accuracy is, so pi to 3SF, or to 5SF. That does not change. When it comes
> to assessing the ontology for a particular purpose, you need to look at
> your requirement and whether it is matched.
>
> Okay. So what is the accuracy of "The world is a sphere?"
>
> MW2: Well I would expect that to be expressed as the % difference in
> volume of the sphere proposed as a model of the world, with a maximum
> difference between the model and reality. Of course WGS84 is an oblate
> spheroid that is used by the GPS system, and most maps and charts these
> days, but it is still only an approximation. Indeed, in the Oil Industry,
> when I was last aware of what was happening, different oblate spheroids
> were used for different parts of the earth to give the best local
> approximation.
>
> It seems that you are now contradicting yourself. Above you said that
> accuracy_M does not change, and that only when it comes to assessing the
> ontology for a particular purpose does one need to consider whether the
> accuracy_M is sufficient for the requirements of that purpose.  But now
> you are saying that accuracy_M of "The world is a sphere" cannot be
> evaluated independently of a given location or purpose.
>
> MW3: I don’t see how you reach that conclusion from what I have said.
> The evaluation of the accuracy of the world as a sphere will come up with
> the same answer each time you evaluate it (using the same method). What
> may change is whether the accuracy means the model is fit for purpose.
> Probably good enough for movement of the planets, but not for drilling for
> oil in this case.
>
> Well, what you seem to say is: "the world is a sphere" has no accuracy_M
> on its own, to assign it some accuracy_M one needs to consider a specific
> sphere as model and a specific definition of accuracy. Thus, you consider
> "The world has the shape of the WGS 84 oblate sphere" instead of the
> original sentence and suggest to define accuracy_M (in this case) as %
> volume difference. Hence, according to your analysis the accuracy_M of
> "the world is a sphere" depends on two choices, the choice of a sphere and
> of a definition.
>
> But, as you point out, for different purposes different spheres are used
> to give the best local approximation. And, likely, there are ways to define
> accuracy other than the one by volume that you suggested, which
> might be more or less appropriate depending on the application.
>
> Thus, it seems to me that according to your approach "the world is a
> sphere" has no accuracy (or, if you prefer, it has as many accuracies as
> there are possible choices of models for earth and definitions) unless you
> make a certain choice which depends on your purpose.
>
>
>
>
>
> What is the accuracy of "All birds can fly"?
>
> MW2: That is untrue. I don’t think there is a way to say “Most birds
> can fly” in logic, which is a shame.
>
> Well, you cannot say it in OWL or classical first-order logic. There are
> more logics than that. As I mentioned in my last email, fuzzy logic is
> used to represent these kinds of ideas.
>
> However, if I understand you correctly, you claim that "pi = 3.14" and
> "The world is a sphere" are not false (but somewhat true, although not
> absolutely true) but "All birds can fly" is false. Where is the
> difference?
>
> MW3: Well, I would only say “pi=3.14 to 3SF” is true; without the
> statement of accuracy you might reasonably expect absolute accuracy, and
> then it would be false.
>
> MW3: As far as “All birds can fly” goes, I take words like “All” and
> “None” to have absolute meaning, certainly as used in logic. So if it
> is not all, it is inaccurate to say that it is.
>
>
> Okay, I think we are getting closer to an agreement. It seemed that before
> you were saying that some axioms in the ontology are neither (absolutely)
> true nor (absolutely) false, but something in between. But now it seems that
> we agree that all sentences in an ontology are either true or false, but
> that this leads to unintended consequences if accuracy_M is not
> represented explicitly.
>
> For example, "pi = 3.14" is false, but "pi = 3.14 to 3SF" is true.
>
> Hence, the lesson is, a good ontology includes information about accuracy
> of measurements. Note that this is not the same notion as accuracy of an
> ontology.
>
>
>
>
>
> As we have discussed before, there are ontologies that are not developed
> with a specific application in mind (e.g., Gene Ontology, Foundational
> Model of Anatomy). I would argue that in these cases the notion of
> accuracy_M is not applicable. But even if you think that there are some
> hidden requirements based on implicit assumptions on the future use of
> these ontologies, it would be very hard to identify requirements for these
> ontologies that would allow you to evaluate the accuracy_M of these
> ontologies. So even if accuracy_M in these cases is theoretically defined,
> there is no way to measure it.
>
> MW: I disagree rather strongly. I think there is a clear purpose for these
> ontologies, which is to be an integrating ontology over some scope, which
> is actually surprisingly narrow. So for example, you would not consider
> using them for engineering applications, or for product sales.
>
> You are saying: "The purpose of the ontology X is to be an integrating
> ontology"? Okay, let's take a step back. The whole reason why we are
> talking about the purpose of an ontology is because the purpose is
> supposed to give us the requirements that we can use to evaluate the
> ontology. E.g., if somebody asks "What is the purpose of building the new
> road between X and Y?" a possible answer is "To reduce the traffic
> congestion", and one can evaluate the design of the road by studying
> traffic, use of existing roads, etc. But your answer is analogous to "The
> purpose of this road is to be a highway."
>
>
> That's not its purpose, that's what it is. For example, the fact that the
> FMA is an (integrating?) ontology of human anatomy does not tell you
> anything about the relationships it is supposed to include. It does not
> tell you anything about the requirements for representing developmental
> change in human anatomy etc.
>
> MW2: If an ontology is intended to be an integrating ontology, it has some
> consequences for how that ontology is developed, and has some properties
> that you can determine to see if it does that successfully.
>
> Matthew, with all due respect, that is rather vague. My argument was
> that accuracy_F is more widely applicable than accuracy_M, because
> measuring accuracy_F does not depend on requirements, which are either absent
> or at least hard to nail down for reference ontologies. And unless you are
> able to get more specific than "some consequences" and "some properties", I
> think you are proving my point.
>
> MW3: You continue to confuse requirements with accuracy itself. The
> requirements just give rise to the properties, such as accuracy, whose
> values it is important to know in establishing whether an
> ontology is fit-for-purpose.
>
> Matthew, I am afraid we have talked past each other in the last emails.
> It seems that we agree on the following:
> - it is important to represent the accuracy of measurements or information
> in ontologies (e.g., pi = 3.14 to 3SF)
> - fit-for-purpose is an important property of ontologies; which can only
> be evaluated with respect to a given set of requirements
> - the requirements determine the necessary level of accuracy of the
> information that is represented in the ontology; if the level of accuracy
> is not met, the ontology is not fit-for-purpose
>
>
>
>
>
> Here is a concrete example: Assume I am developing a reference ontology
> for the canonical male adult human anatomy. The current version of my
> ontology contains the following axioms. The $64000 question is: Is it
> accurate?
> - T1 instance_of thoracicVertebra
> - T2 instance_of thoracicVertebra
> - T1 connected_to T2.
> -  thoracicVertebra subclass Vertebra
> - If x instance_of Vertebra, then x part_of spine.
>
> Now, I can easily tell you whether this ontology is accurate_F, and how to
> find out: all you need is a textbook on human anatomy. Is it accurate_M?
> And if so, how did you find out?
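
(For concreteness, one first-order reading of those five axioms -- my own
illustrative rendering, not necessarily the intended formalization -- is:

   $\mathit{instance\_of}(T_1, \mathit{ThoracicVertebra})$
   $\mathit{instance\_of}(T_2, \mathit{ThoracicVertebra})$
   $\mathit{connected\_to}(T_1, T_2)$
   $\forall x\,(\mathit{instance\_of}(x, \mathit{ThoracicVertebra}) \rightarrow \mathit{instance\_of}(x, \mathit{Vertebra}))$
   $\forall x\,(\mathit{instance\_of}(x, \mathit{Vertebra}) \rightarrow \mathit{part\_of}(x, \mathit{Spine}))$

The accuracy_F question is then simply whether each of these sentences is true
of canonical adult male anatomy.)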
>
> MW3: It seems to me that this is just a question of whether this is an
> accurate representation of some alleged knowledge. It does not tell you
> whether that alleged knowledge is true or not.
>
> No. The question is whether the knowledge is true or not. If I was
> all-knowing, I would not need a textbook or ask a domain expert. Sadly, I
> am not, so the best way to determine whether the content of a scientific
> ontology is true is to ask experts. And as long as we stick to established
> scientific knowledge that is a pretty reliable method. Of course, if you
> can think of a better way to evaluate the truth of the content of an
> ontology, I will be happy to adopt that.
>
>
>
>
> MW3: Is that the essence of accuracy_F? I can quite see that that is a
> useful property, but equally not everything that can be said about
> accuracy.
>
> I agree. Accuracy_F is just one notion of accuracy (and I am happy to use
> a different term for the notion), my only point was that it is a useful
> one.
>
>
>
>
> Accuracy_M is supposed to bridge from the ontology to the reality that you
> wish were a model  (model theory sense) of that ontology. I can quite see
> that that could have elements that were the accuracy of representation of
> a theory, and the accuracy of the theory as a representation of the world,
> and that would be a useful separation.
>
> I understand the distinction you are making here, but that was not what I
> intended. We need to distinguish two questions: What is the definition
> of "accuracy_F"? And how do we evaluate the "accuracy_F" of an ontology?
>
> (a) The ontology X is accurate_F iff all axioms of X are true.
> (b) If X is a reference ontology for established scientific knowledge,
> then it contains axioms that represent empirically falsifiable knowledge.
> These axioms can be verified/falsified by asking domain experts or using
> scientific literature. If the experts and the literature say the axioms
> are true, then -- for the sake of the evaluation -- we should assume they
> are true.
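
(In symbols, (a) says $\mathit{accurate}_F(O) \leftrightarrow \forall\varphi\,(\varphi \in O \rightarrow \mathit{True}(\varphi))$;
(b) describes how the right-hand side gets checked in practice.)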
>
>
>
>
>
>
> In contrast, at least in the case of scientific ontologies that cover
> established knowledge in a domain, it is pretty straightforward to test
> for accuracy_F: just ask domain experts whether the axioms are true.
>
> MW: So is Newtonian Physics true?
>
>
> Newton thought that his results, e.g., the laws of motion, were universal
> laws that describe the behavior of all objects in the universe. In this
> sense, Newtonian Physics is false. As a theory about the gravity, mass, and
> weight of the slow, medium-sized objects people encounter in their daily
> lives, it is true.
>
> MW2: Now you see I would rather say that it is accurate for engineering
> purposes provided you do not travel at speeds greater than X.
>
> (Of course, domain experts might be wrong, so this is not a fool-proof
> approach to measure accuracy_F, but then again no measurement technique is
> without flaws).
>
> MW: Actually, this is a really bad argument. Most scientists would agree
> that the current scientific theories are just those that have not yet been
> proven wrong, and particularly in the field of physics there is a constant
> expectation that the current set of theories may be overturned by some
> insight (indeed there is good evidence that they cannot be correct). Hence
> the well known saying “All theories are wrong, but some are useful”.
> That gets you back to accuracy_M where you  need to say “useful for
> what?”
>
> I don't know why you think that this is relevant to what I wrote. But what
> most scientists would agree to is that all scientific knowledge is
> falsifiable. However, this does not mean that the current scientific
> theories are "just those that have not been proven wrong yet", let alone
> that scientists assume that all theories are wrong. There is a vast
> difference between falsifiable and false.
>
> Anyway, I was not talking about   philosophy of science, but just making a
> point that there is a measurement technique for accuracy_F, namely asking
> scientists whether the content of the ontology is true or false.
>
> MW2: I suggest you are unlikely to get either of those as an answer.
> You’re much more likely to get an answer like “it depends...”
>
> This is just not true. There is tons and tons of knowledge, which is
> completely uncontroversial, e.g. about the structure of proteins or the
> physical properties of substances (collected in huge reference databases
> of material science).
>
> Scientists obviously disagree on cutting-edge stuff and argue about that,
> but even to have that argument they need to take a lot of things for
> granted.
>
> MW3: OK. I will accept that I am biased by my own  discipline,
> engineering. Perhaps in other fields knowledge is well established and
> unlikely to be challenged. However, engineering has a significant element
> of empirical theories, where it is not only well known that theories have
> limited applicability, but people’s lives can depend on the theories not
> being used outside their range of applicability. So I don’t think we
> will get a lot of credit when, say, a plane crashes because the wrong theory
> was used somewhere, and the engineers complain “But we used the Ontology
> Summit evaluation of the ontology and that said it was accurate.”
>
>
> Well, given that according to my definition accuracy_F of an ontology
> entails the truth of all axioms, I am not worried that an accurate_F
> ontology could cause planes to crash. :-) But I think we are
> actually not disagreeing; we both agree that the limits of a theory have
> to be made explicit, either by explicitly representing them in the ontology
> itself or by a meta-level description of the scope of the ontology. E.g.,
> if your ontology is about the behavior of ideal gases, either
> (i) all axioms should start with: If x is an ideal gas ... OR
> (ii) the meta-level description of the ontology should state that the
> universe of discourse of the ontology is restricted to ideal gases.
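
(Under option (i), each axiom carries its own guard. For instance, an
illustrative rendering of the ideal gas law -- my own sketch, not a quote from
any particular ontology -- would be
$\forall x\,(\mathit{IdealGas}(x) \rightarrow p(x)\,V(x) = n(x)\,R\,T(x))$.)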
>
> Assuming the content of the ontology is accurate_F, but somebody blows up
> a machine because he used the ontology for a gas that behaves
> significantly differently from an ideal gas, that's not the responsibility
> of the person who wrote the ontology.
>
>
>
>
>
> The challenge for you is to come up with a measurement for accuracy_M.
> According to what you wrote above, these are actually two questions: How
> do you measure the accuracy? And how do you measure whether the accuracy
> is "close enough" for a given purpose of an ontology?
>
> MW2: I notice that I am talking about accuracy in a quantitative sense,
> and you are talking about it in a purely logical sense. For quantitative
> accuracy, there is a state of affairs that your model represents, and your
> accuracy is rather simply the difference between that state of affairs,
> and your representation of it.
>
> MW2: I think logical accuracy is actually harder. You can clearly say that
> logical inconsistency means your axioms are inaccurate, but what is your
> basis for saying that they are accurate? What do you compare them to?
>
> Since I am not a domain expert, I ask somebody who knows. The ontologies
> that I am talking about often represent textbook knowledge, and always
> established knowledge. Now there is the possibility that one later finds
> out that the scientific consensus at the time was wrong; in that case the
> accuracy_F measure would be faulty. However, that is very rarely the case.
> The big scientific ontologies are pretty stable, and the changes that are
> made are usually because somebody made an error in coding, not because the
> science was false at the time.
>
> MW3: Again, Accuracy_F seems to be about accuracy of representation of a
> theory.
>
> No, it's about the truth of the axioms in the ontology. But regardless of what
> anybody says when they are in a philosophical mood, de facto we treat
> established scientific knowledge as the truth. We all know that scientific
> knowledge is falsifiable and, thus, that some parts of our scientific
> knowledge will be revised in the future, but that does not change the fact
> that anytime we need to get something done we have to trust the best
> available science today.
>
>
>
> If you have two logical theories for the same thing, which are inconsistent
> with each other, but both work (3D vs 4D would be an example), how do you
> state the accuracy of these? If they are both accurate, do you accept both
> of them? How do you account for their being inconsistent?
>
>
> Rudolf Carnap published in his early years a wonderful article called
> "Überwindung der Metaphysik durch logische Analyse der Sprache" (roughly:
> Overcoming metaphysics by the logical analysis of language), where he argues
> that all of ontology consists just of pseudo-problems, that ontological
> assertions are strictly speaking meaningless, and that they are at best
> expressions of an attitude towards life without any scientific relevance.
> Obviously, he overreached there a little bit, but I believe that Carnap would
> have pointed at the 3D vs. 4D debate as a picture-book example of what he was
> on about.
>
> In any case, nobody outside philosophy should care about this debate. The
> only reason why it got any traction outside philosophy is because the
> limits of the expressivity of OWL make it hard to represent change over
> time in OWL.
>
> But I don't want to dodge the underlying question: How do we evaluate the
> accuracy of top-level ontologies? Now, the answer requires some
> background:
>
> Ontologies are an inferentially interdependent network of axioms with an
> intended interpretation (which is documented by comments, natural language
> definitions, pictures, etc.). They represent a "Web of Belief", to use a
> phrase from Quine. Strictly speaking, none of these axioms can really be
> evaluated independently; the semantics of each axiom depends on the
> rest of the ontology. But some of the axioms are closer to empirically
> verifiable observations ("Obama is a U.S. citizen") than others that are
> more theoretical ("parthood is transitive"). Most of the scientific knowledge
> that we represent in ontologies is actually in the middle between
> these two examples. If we measure the accuracy_F of an ontology, then we
> compare the observation-close axioms to available empirical observations.
> Of course, we ontologists usually don't do this directly. What we really do
> is ask some expert, who has absorbed the relevant empirical data and
> summarizes the scientific consensus at the time for us. But even if we
> ontologists were to go into labs ourselves, we would not be able to "prove"
> the accuracy of an axiom. However, if the axiom does not clash with any
> observations, then that's evidence that the ontology is accurate_F. Thus, the
> theoretical parts of the ontology are not validated directly, but only
> indirectly by their connections to the observation-close parts of the
> ontology.
>
> MW3: Now this seems to be different from what I was understanding above.
> Here you are talking about how well the ontology matches the real world.
> Though I do not see why the transitivity of parthood is any harder to test
> this way than whether Obama is a US citizen.
>
>
> The problem with "parthood" is that there are many relationships in
> English that are called "parthood" and for any axiomatization one needs to
> choose one of them. Some people claim parthood is not transitive (e.g.,
> Johansson, I., 2004, ‘On the Transitivity of the Parthood Relations’,
> in H. Hochberg and K. Mulligan (eds.), Relations and Predicates,
> Frankfurt: Ontos Verlag, pp. 161–181). I don't really think that
> empirical evidence is going to help to decide the issue.
>
> But if you don't like the example, let me give you another one. If your
> ontology contains a theory of time, you have a choice between
> time-intervals and time-points as primitives. As far as I know there is no
> empirical evidence either way. It's a matter of choosing a theoretical
> primitive that is justified by the theory as a whole.
>
>
>
> So, with that said, back to the question. Top-level ontologies are
> obviously very theoretical.
>
> MW: I think I would say that they are very general, rather than very
> theoretical.
>
> Thus, it is impossible to measure their accuracy_F directly.
>
> MW3: I disagree. See comment about transitivity of parthood above. It
> strikes me this is rather easy to test empirically.
>
> The only way they can be measured is by the role they play as part of
> larger ontologies. If a top-level ontology is used successfully as part of
> many scientific ontologies that are all accurate_F, then the top-level
> ontology is accurate_F. However, this does not exclude the possibility
> that there is a rival top-level ontology that is equally accurate_F.
>
> MW3: I would quite accept that there can be multiple ontologies at any
> level that are equally accurate.
>
>
>
> (2)   For ontology reuse accuracy_F is more important than accuracy_M.
> Imagine person A developed an ontology to a given set of requirements R1
> and determined  by thorough testing that the ontology is accurate_M with
> respect to R1. Now person B considers to reuse the ontology within a
> different application with a different set of requirements R2. For person
> B it is completely irrelevant to know whether the ontology is accurate_M
> with respect to R1.  What B would be interested in is whether the ontology
> is accurate_M with respect to R2, but that information is not available.
>
> MW:  That is just not true. Requirements R2 are met if they are a subset
> of R1.
>
> Yes, in this specific case. But what about the (much more likely) case that
> R2 is not a subset of R1?
>
> MW2: Then either all the requirements that were met were not stated, or
> the requirements are not met.
>
> You are ignoring my point. The point was that accuracy_F is invariant to
> requirements. Accuracy_M is not.
>
> MW3: That is not true,  you have just asserted it without evidence.
>
>
> see comments above on "the world is a sphere"
>
>
> Thus, in cases where somebody is interested in reusing an ontology that
> was built with requirements that are not a subset of the original
> requirements, accuracy_F is useful, while accuracy_M is not.
>
>
>
>
> In contrast, since accuracy_F is invariant to the requirements, the
> information that the ontology has been tested successfully for accuracy_F
> is valuable to person B. Granted, it is not as good as finding out whether
> the content of the ontology meets the requirements of B, but it is at
> least something.
>
> MW: Let’s take another example. I have an ontology that says that a
> thing can be a part of itself. Is it true? The answer will depend on
> whether you are using a classical mereology or not. So the only answer you
> can give is “Yes or no”.
>
> This is just an ambiguous use of the word "part". Axiomatic mereology was
> founded by Leśniewski, who was mainly interested in mereology as a
> substitute for set theory. By analogy with subset and proper subset, he
> distinguished between parthood and proper parthood. And this has become
> the standard terminology for all logicians and formal ontologists. This
> choice of terminology is confusing, since the proper parthood
> relationship in mereology is a better match to the various parthood
> relationships that we use in daily life. But if we resolve the ambiguity,
> there is no problem. If by "part of" you mean the relationship that people
> use in English to describe the relationship between the first half of a
> soccer game and the whole game, or the first two years of Mr. Obama's
> presidency and his whole first term, then the answer is: no, things cannot
> be parts of themselves.
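
(In the standard axiomatic terminology Fabian describes, "part of" is reflexive
while "proper part of" excludes identity; a minimal sketch:

   $\forall x\; \mathit{part\_of}(x, x)$
   $\forall x \forall y\,(\mathit{proper\_part\_of}(x, y) \leftrightarrow \mathit{part\_of}(x, y) \wedge x \neq y)$

The everyday English "part" thus corresponds most closely to proper parthood.)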
>
>
>
>
>
>
> (3)   While the notions of "closer to the truth" and "absolutely true" might
> seem to make some intuitive sense in the context of well-chosen examples,
> it is very hard to generalize these ideas. I am not talking about the lack
> of a formal theory; obviously, fuzzy logic provides a theoretical
> framework for it. However, I have yet to encounter any satisfying explanation
> of what a truth-value of 0.35628 means. And there is always the question of
> how one determines the truth-values. Unless you have an answer for how to
> determine whether "The earth is a sphere" is closer to the truth than "All
> birds fly", I don't think we should rely on these ideas in ontology
> evaluation.
>
> MW: That is the wrong idea altogether. It is not a matter of truth values,
> and it is fine to be exactly true in Accuracy_M, but being close to the
> truth is about distance from it, not the probability of being absolutely
> true.
>
> Fuzzy logic has nothing to do with probability (yes, I know Wikipedia says
> otherwise, but that is just wrong). It is a way to formalize the intuition
> that you expressed: namely, that it is not sufficient to talk about true
> and false, but that we need to account for distance from the truth. To put
> it in the terminology you used: the "distance from the truth" is expressed
> by a value in the interval from 0 to 1, where 0 is "absolutely true" and 1 is
> "absolutely false".
> http://plato.stanford.edu/entries/logic-fuzzy/
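
(For reference: the usual presentation of fuzzy logic assigns each sentence a
degree of truth $v(\varphi) \in [0,1]$ with $1$ meaning fully true; the
"distance from the truth" described here is then simply $d(\varphi) = 1 - v(\varphi)$.)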
>
>
>
>
>
>
>
> (4) I believe that the thing you are ultimately interested in is whether
> the axioms enable the ontology to meet its requirements as a part of a
> given application. In other words, the important question is: does the
> ontology provide the functions that it needs to provide to make the whole
> system work? And this has nothing to do with axioms being true or "close
> to true", as the following thought experiment shows. Let's assume that the
> role of an ontology in an application is  to determine whether there is a
> train connection between two points. (Not the route, just whether there is
> a connection or not.) In reality, there is a train line from A to B, from
> B to C, and from C to A, and no other train line. However, the ontology O
> contains the following axioms:
> (a) if there is a train line from x to y, x is connected to y.
> (b) if x is connected to y, and there is a train line from y to z, then x
> is connected to z.
> (c) There is a train line from A to C, a train line from C to B, and a
> train line from B to A.
> All of the axioms in (c) are false. Not "close to true", just plain false;
> thus these axioms are not accurate_M. Nevertheless, the ontology will
> perform its function in the application perfectly fine.
>
> MW: I don’t think I follow this. You seem to be saying that there is a
> train line from A to B, but not from B to A. Not quite sure how that makes
> sense.
>
> Yes, I assume for this example that train lines are one-directional. If
> you think this is unrealistic, just replace "train line" with "one-way
> street" in the example. The point of the example is that all axioms are
> false, but that the axiom set will respond to all queries about
> connectedness with true answers, and thus provides the intended
> functionality to the application. Hence truth (even closeness to truth) of
> the axioms in the ontology is not required to enable an application to
> work.
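
For concreteness, here is a minimal Python sketch of that thought experiment
(the encoding of train lines as ordered pairs and of connectedness as
transitive closure is my own illustrative choice):

   def connected_pairs(lines):
       # Transitive closure of the one-directional 'train line' relation.
       pairs = set(lines)
       changed = True
       while changed:
           changed = False
           for (x, y) in list(pairs):
               for (y2, z) in list(pairs):
                   if y == y2 and (x, z) not in pairs:
                       pairs.add((x, z))
                       changed = True
       return pairs

   reality  = {("A", "B"), ("B", "C"), ("C", "A")}   # the actual (one-way) lines
   ontology = {("A", "C"), ("C", "B"), ("B", "A")}   # the (false) axioms in (c)

   # The false axioms answer every connectedness query exactly as the true
   # facts would: both relations close to the same set of connected pairs.
   print(connected_pairs(reality) == connected_pairs(ontology))   # True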
>
> MW: I am reminded of the observation that “The worst possible thing you
> can do is the right thing for the wrong reason.”
>
> I don't argue that this is a situation that one should strive for. This is
> a thought experiment that shows that the ability to function well within
> the context of an application is logically independent of the accurate
> representation of reality -- which is why we should keep the two concepts
> apart, and not muddle them in the definition of "accuracy".
>
> MW: And that is a distinction I make by keeping accuracy (a property of an
> ontology) separate from quality (fitness for some particular purpose).
>
>
> I am happy that we agree on this point. But I still don't get how you
> define accuracy_M as a property of an ontology, and what exactly
> distinguishes it from accuracy_F.
> You wrote above
> "Well, I would only say “pi=3.14 to 3SF” is true, without the
> statement of accuracy you might reasonably expect absolute accuracy, and
> then it would be false."
> So, the question of accuracy is addressed by putting all relevant
> information within the ontology. Hence, the axiom in ontology1 is false
> and the axiom in ontology2 is true.
>
> ontology1: pi = 3.14
> ontology2: pi = 3.14 to 3SF
>
> I would now claim that ontology1 is not accurate_F and ontology2 is
> accurate_F. Is there a difference to accuracy_M?
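
A trivial arithmetic check of that pair (an illustrative Python sketch):

   import math
   print(round(math.pi, 2))   # 3.14  -- so "pi = 3.14 to 3SF" is true
   print(math.pi == 3.14)     # False -- read as an exact identity, "pi = 3.14" is false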
>
>
> Best
> Fabian
>
>
> Regards
>
> Matthew West
> Information  Junction
> Tel: +44 1489 880185
> Mobile: +44 750 3385279
> Skype: dr.matthew.west
> 
> matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx
> http://www.informationjunction.co.uk/
> http://www.matthew-west.org.uk/
>
> This email originates from Information Junction Ltd. Registered in England
> and Wales No. 6632177.
> Registered office: 2 Brookside, Meadow Way, Letchworth Garden City,
> Hertfordshire, SG6 3JE.



