ontolog-forum

Re: [ontolog-forum] Foundation ontology, CYC, and Mapping

To: "'Pat Hayes'" <phayes@xxxxxxx>
Cc: "'[ontolog-forum]'" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Thu, 25 Feb 2010 12:03:11 -0500
Message-id: <01e301cab63c$69907bd0$3cb17370$@com>
In response to Pat Hayes,    (01)

> [PH] > >  Pat, we have got to get this sorted out.  We are (I hope)
talking past one another.    (02)

[PC] I do hope so, on both counts.
Once again, thanks for the careful presentation of relevant issues.
I will not directly discuss the relation of Tarskian semantics and model
theory to these issues, as I think its relevance is included in and covered
by the discussions below.    (03)

There appear to be three distinguishable issues, which I will summarize:
(1) PatH thinks that general accurate semantic interoperability is a
"fantasy" and not worth attempting.  I could not find any technical
arguments for this position.  Perhaps PatH thinks that this means "perfect"
interoperability?  It doesn't;  "accurate" does not connote "perfect", just
better than available by other approaches, and meeting a minimum criterion.
(2) PatH asserts that the meanings of elements in an ontology change
whenever any new axiom is added.  I don't dispute the fact that new
inferences become available, but do not believe that this mathematical
notion of meaning is what is relevant to the practical task of building
applications using ontologies.  In short, the mathematical meanings
represented by inference over an ontology are only *part* of what I consider
the relevant meaning of "meaning" in practical applications.  The way items
are used in applications, including the programmers' interpretation, is also
part of the meaning.
(3) I assert that an important goal that can be advanced by aiming to
recognize primitives is the stability of the Foundation Ontology. PatH
thinks that an FO would need to be changed in "a matter of minutes".  Either
he (a) is mixing up domain ontologies with the FO, or (b) doesn't believe
that multiple domain ontologies can be specified logically by the same set
of primitive ontology elements.  It is not clear which of these is the cause
of that statement.    (04)

------     (05)

More detail.  I will take some of the comments out of order:    (06)

Issue 1: is "general accurate semantic interoperability" feasible at all or
an obvious "fantasy"?
> [PC] > > Perhaps future objections could focus on genuine technical 
> problems
> > (not analogies with human language), and better yet suggest 
> > alternatives to solving the problem at hand: not just *some* level 
> > of interoperability, but accurate interoperability that would let 
> > people rely on the inferences drawn by the computer.  If not a common
FO, then what?
> 
> [PH] > Nothing. This is not a viable goal to seek. It is a fantasy, a
dream.
> One does not seek alternative ways to achieve a fantasy.
>    (07)

[PC]  Wow!  PatH thinks that we will never be able to achieve a level of
interoperability that will "let people rely on the inferences drawn by the
computer"??!!    (08)

I think that there is a community that wants the computers to be as reliable
as people in making inferences from data acquired from remote systems, and
PatH seems to think that is impossible.  Of course such things *do* happen
routinely through specialized agreements on the meanings of data elements,
but I assume that PatH's skepticism is about the general case where data is
acquired from some remote system whose only prior agreement is to use the FO
for interpretation.  I prefer not to give up on useful goals such as general
semantic interoperability until they have been demonstrated to be
impossible, and I suspect that there are others besides myself who do not
consider it quite so unattainable.  I certainly have not seen any
demonstration of (or any evidence for) the level of hopelessness PatH
asserts, and believe with fervor equal to his (and I feel certain on at
least as solid a factual basis) that any such demonstration (that general
accurate semantic interoperability is impossible) is itself impossible.
But more discussion may clarify other points.    (09)

In the same vein:
> [PC] > > I still think that a project to build and test a common FO
> > is the best bet for getting to general accurate semantic 
> > interoperability as quickly as possible.
> 
> [PH] > "General accurate semantic interoperability" is like world peace, 
> and just as unlikely to ever be achieved while human beings have the 
> cognitive limitations that we do in fact have. Even if an angel were 
> to appear and give us such a FO, it would work only for a matter of 
> minutes before needing to be changed, and no amount of human effort 
> could keep pace with the necessary changes.
>    (010)

[PC]   Certainly, domain ontologies will constantly be created and changed,
but those will not necessarily need a modified *FO* to create the logical
descriptions of their ontology elements.  The notion of a "stable" FO is
precisely the state where more than a few new domain ontologies can be
created with the intended meanings of their elements logically specified
using only elements of the FO, not requiring any change in the FO.   Since
we have nothing yet similar to the FO that I have suggested, nor any
meaningful test of the principle, and as a consequence no experience to
demonstrate that such changes to the FO would be frequently needed, it is
unclear what the basis is for PatH's assertion here.     (011)

   PatH's skepticism makes sense if one interprets "general accurate
semantic interoperability" to mean "perfect" semantic interoperability, but
I have carefully avoided that word and tried to avoid implying that.  There
are three attributes that I would consider as characterizing "general
accurate semantic interoperability": (1) it is the *most accurate* way to
enable automatic interoperability for any given set of ontologies at any
point in time; (2) it provides a level of interoperability such that the rate
of divergent logical inferences from the same data (for some test suite of
interoperability scenarios - clearly it will differ case to case) will be
below 1 in 2000; (3) it will allow the computer to perform
tasks requiring interpretation of remotely acquired data with at least the
reliability of humans doing the same information processing task (this may
need to be qualified for practical considerations: if we assign a large
number of humans and give them a week or a month to collectively do a task
that the computer does in a few seconds, they may get more accurate results.
I wouldn't consider that to be a disproof of the principle - we are
concerned with practical tasks and practical levels of resources).  As a
bonus, the FO approach appears to me to be the most efficient and therefore
the least expensive means to broad semantic interoperability, since it uses
the hub-and-spoke method for mapping and translating assertions from one
ontology to another, rather than requiring n^2 pairwise mappings.
   It is important also to understand that, if there are any direct mappings
available between ontologies or modules using the FO, these can be used in
preference to using the FO for translating specific assertions anytime they
appear more efficient.  The FO tactic does not preclude the use of direct
mappings; it supplements them for cases where there are no
direct mappings.  Once an FO has become widely adopted, and direct mappings
become unnecessary, some may still be created for efficiency purposes.  The
FO is very flexible in accommodating multiple ways to represent things and
multiple ways to interoperate.  It just makes very broad interoperability
practical while it is not practical to map every new ontology to every other
one with which one might want to interoperate.    (012)
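The n^2-versus-hub economics above can be made concrete with a back-of-the-envelope sketch (the function names here are my own invention, purely illustrative, not part of any proposal). Strictly, direct bidirectional mapping of every pair of n ontologies needs n(n-1)/2 mappings, which grows quadratically, while mapping each ontology once to a common FO hub needs only n:

```python
# Number of translation mappings needed to interconnect n ontologies.
# Function names are illustrative only.

def pairwise_mappings(n: int) -> int:
    """Direct mappings between every pair of n ontologies: n*(n-1)/2."""
    return n * (n - 1) // 2

def hub_mappings(n: int) -> int:
    """One mapping per ontology to a common FO hub: n."""
    return n

for n in (10, 100, 1000):
    print(n, pairwise_mappings(n), hub_mappings(n))
# 1000 ontologies: 499500 pairwise mappings vs. 1000 via the FO hub
```

The gap is what makes per-pair mapping impractical at scale, while still leaving room for a few hand-built direct mappings where efficiency demands them.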

On Issues 2 and 3: What is the meaning of a computational ontology and how
is it affected by changes in the ontology?
> [PC] > > So, if we want the meanings of terms in an ontology to remain
> > stable, and **don't** want the meanings to change any time some 
> > remotely related type appears in a new axiom,
> 
> [PH] > But we DO want this! Surely that is the very point of changing 
> and adding axioms.    (013)

[PC] No, we don't want this.  Let me try to illustrate by an example.  We will
likely have in the FO units of currency such as the US Dollar.  Now, the
*value* of the dollar relative to other currencies and its purchasing power
change every day, and economists will debate what mix of M1, M2, M3, etc.
would properly represent the total number of dollars in the economy, but the
*meaning* of "dollar" as understood by people and **as it determines its use
in computer programs** will not be changed by some change to the
axiomatization of a remote entity such as unicorns.  We could have, for
example, some ontology in which a unicorn's horn is made of ivory, and then
have that changed to assert that a unicorn's horn is made of gold.  Now, as
I understand PatH, this added axiom changes the meanings of everything in
the ontology, including the "dollar".
  Does anyone other than PatH actually want the meaning of "dollar" to
depend on the definition of a unicorn (or of any other unrelated entity)?
  Now, some of the logical inferences derivable in that ontology will change
when a new axiom is added.  That may well change the use, in ontology-based
applications, of entities that are closely related logically to the point of
change.  But it will not change *everything* in any practical sense: what
changes is the total set of derivable inferences, most of which will be
utterly irrelevant to the uses of most of the ontology elements.  It won't
change the way most data is used in
applications, which I take as the ultimate criterion of meaning.  In fact a
small number of changes to an FO should change very few of the intended
meanings, which (I emphasize) are what the programmers use to determine how
the data is to be manipulated, input and output.    (014)
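The unicorn/dollar point can be sketched as a toy forward-chaining exercise (all triples, rule, and names here are invented for illustration; this is not any real ontology formalism): revising a remote axiom changes the total deductive closure, but the consequences involving an unrelated term are identical before and after.

```python
# Toy "ontology": facts are triples, rules are (premises, conclusion) pairs.

def closure(facts, rules):
    """Naive forward chaining: apply rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {("Dollar", "isa", "CurrencyUnit"),
         ("Unicorn", "hornMadeOf", "Ivory")}
rules = [({("Dollar", "isa", "CurrencyUnit")},
          ("Dollar", "isa", "UnitOfMeasure"))]

dollar_before = {f for f in closure(facts, rules) if f[0] == "Dollar"}

# Revise the remote axiom: the unicorn's horn is now made of gold.
facts2 = (facts - {("Unicorn", "hornMadeOf", "Ivory")}) \
         | {("Unicorn", "hornMadeOf", "Gold")}
dollar_after = {f for f in closure(facts2, rules) if f[0] == "Dollar"}

print(dollar_before == dollar_after)  # True: Dollar consequences unchanged
```

Of course, under a model-theoretic reading the two fact sets are different theories; the sketch only illustrates the practical claim that the inferences an application draws about "Dollar" are untouched by the unicorn revision.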

>[PH] > If meanings are stable across theories, then what is  the point 
>of adding axioms to capture more meaning?    (015)

[PC] The point is just to add *more* elements to represent *more* entities
or more relations,  or change meanings of the existing elements that are
**closely related** to the point of change.   But the point is *not* to
change the meaning of *everything* in the ontology.  This doesn't strike me
as particularly controversial, except perhaps among theoretical
mathematicians.  The issue I think is crucial in how to view the meanings of
ontology elements is the way they are used in applications.  The inferences
derived from the logic place stringent constraints on the meaning and use of
the ontology (the more stringent the better), but they are not the only
factors that determine how the data represented via the ontology will be
used in an application.  The deductive closure of all axioms is *not* the
meaning that programmers or database developers will be thinking of when
developing applications.    (016)

> [PH] > Apparently,
> whatever meaning you are going to have after the addition was already 
> there before. I presume it was there when the ontology had no axioms 
> at all, in fact: for if not, which addition, in the long process of 
> growing the ontology, created or introduced it? It would seem to 
> follow that  all ontology meanings are already present in an empty 
> ontology.
[PC]  No, the only meanings in the ontology are those that are explicitly
described by the axioms, and/or examples cited in the documentation (which
can help guide the programmers), and by the way they are used in programs.
Examples of use can also be referenced in the documentation and will help
make the intended meaning clearer and provide further guidance to
programmers so that the actual uses will not be contradictory.  Nothing that
is not part of the explicit documentation, or the deductive closure of
inferences, is part of the ontology.    (017)

> [PH] > This is obvious nonsense. You are confusing the intended 
> meaning, the natural meaning we are seeking to capture in O, with the 
> *actual*, theory-justified meaning that O can be said to have by 
> virtue of its ontological structure. The former is stable, but is not 
> computationally accessible. The latter is computationally accessible, 
> and subject to precise theoretical analysis, but is particular to the 
> ontology. Change the ontology, you change the meaning. Maybe not much, 
> but you do change it. And this is not a 'problem', but on the 
> contrary, is exactly what we would expect and what makes our work 
> possible at all.
>    (018)

[PC]  No, I am not confusing the *intended meaning* and the meaning that is
explicitly represented by the ontology.  I am saying that the latter is only
one *part* of the intended meaning (ideally the most important part), and
that parts of intended meaning not explicit in the logic can still be
specified by citing instances of types and examples of use in the
documentation, and that the intended meaning is the total sum of the logical
inferences *plus* the additional elements of meaning conveyed by
documentation, and by the existence of instances whose properties may be
tested against the ontology representation by access to web information.    (019)

> [PH] > But what one cannot legitimately claim is that (by virtue of 
> being computational instead of mathematical, or by some other 
> mysterian magic) a *formal* ontology captures more meaning, or a 
> different kind of meaning, than the meaning assigned to it by the 
> Tarskian account, by virtue of its logical or computational properties.    (020)

[PC] I do not claim that the logical elements of a formal ontology by
themselves capture more meaning than a mathematical theory, since a formal
ontology is a form of mathematical theory.  What I claimed is that the
logical elements of ontologies as used in applications contain only *part*
of the meaning (ideally all, but sometimes not all) that is significant to
an application and that determines how the elements are used in the
application.  Other parts of meaning are captured by the documentation,
which guides the programmer to properly use the ontology elements and
determines their use in applications.  In practice, use comes down to the
programmers' choices - for example, which field in which input or output form
represents the value of some ontology element in some assertion.  If the
programmer misinterprets the meaning of an ontology element, the data
entered will be misused regardless of how accurate the inferencing is.  But
very importantly in this interpretation of meaning, the intended meanings,
and the way data is used in applications, will *not* change due to the
addition of some distant axiom.  Some logical inferences may change, but the
logical inferences that are important for how the data is used in
applications will not change except for elements closely related to the
point of change.
   Changes to the FO can and should be tested to see how they affect the
results of some suite of test cases.  Undesirable effects may require
further revisions to the ontology, or reverting the change.  To minimize
the need for such changes, I have reiterated that trying to find and include
all known primitives at an early stage will reduce the need to add those
primitives at a later point, when more applications may be affected by
changes.    (021)

> [PC] > >  But the most important and useful inferences are likely to 
> be
> >  those that arise from short inference chains comparable to how 
> > humans would use them, and to how programmers would imagine them 
> > being used in logical inference.
> 
> [PH] > Really? I see no reason why this would be generally true.
> 
[PC]  This is generally true because ontologists, programmers, and database
developers who are going to use ontology elements will not be able to
anticipate long inference chains (except perhaps for some intuitively clear
transitive relations); their understanding of the elements - and consequently
the elements' use and significance in programs - will rest on such short
inference chains.  And those are the inferences that
they will want the computer to make to perform the intended processing.
   Now it is clear that there will be many unintended (not consciously
planned for) inferences, and most of those will be neutral with respect to
the purpose of the program.  Some unintended inferences will also be
unwanted inferences, in which case those will be "bugs" in the ontology.
There may also be some desirable inferences that depend on long chains of
reasoning, but because of the limitations of the way ontologists and
programmers visualize the use of their products, those are likely to be a
small minority of inferences, when averaged over all uses of all ontologies.    (022)

PatC    (023)

Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx    (024)


> -----Original Message-----
> From: Pat Hayes [mailto:phayes@xxxxxxx]
> Sent: Wednesday, February 24, 2010 4:55 PM
> To: Patrick Cassidy
> Cc: [ontolog-forum]
> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
> 
> 
> Pat, we have got to get this sorted out. We are (I hope) talking past 
> one another.
> 
> First, let me clarify something about 'formal' or 'mathematical'
> notions of meaning. Tarskian semantics does not apply only to 
> 'mathematical' theories, nor does it require that all meanings be 
> "mathematical", whatever that could mean. It is a very general theory 
> of meaning, one that can be applied to a wide range of languages and 
> notations (for example, I have applied it to 2-dimensional maps) and 
> even to mental models of thought. However, it is itself a 
> mathematically expressed theory. That is, it *uses* mathematical 
> notions - of set, and mapping, and function - to state its own 
> theoretical ideas. This is something it shares with almost every other 
> precise theory of almost anything, in fact. But it does not follow 
> from this that the theory is *about* "mathematical things", any more 
> than using, say, differential equations to describe the stress forces 
> in a bridge girder makes this into a mathematical theory and therefore 
> not about real bridges.
> 
> But now, to get to the heart of the matter, Tarskian semantics is a 
> THEORY of meaning. Actual meanings in the wild, the things we 
> apparently refer to when we talk about "intended meanings" or 
> "intuitive meanings" or the like, are (we all sincerely hope) real 
> things in the world, part of our human life experience. We all believe 
> that when we think about things using our concepts of those things, 
> that our thoughts are meaningful, that they *have* real, actual 
> meanings. But in order to be scientific about this kind of talk, we 
> need some *theory* of what these natural, wild, real meanings actually 
> are. Or at least some kind of *account* of them, saying what kind of 
> entity they are, what properties they have, how they relate to other 
> things (like, to the thinkers of the ideas, or to the things they are 
> ideas of, etc..) Are meanings something linguistic or symbolic in 
> nature? Are they mental or psychological? Or platonic, in some 
> abstract realm, like numbers? Can they be written down, or captured in 
> some other way? Etc..
> 
> It is just wrong to draw the contrast between the natural things, on 
> the one hand, and the account provided of those things by a theory of 
> them, on the other, as a difference of **kind**. Take numbers. There 
> are the natural numbers, which most mathematicians agree exist in the 
> wild, as it were. And then there are various formalized arithmetics, 
> each of which is a theory of the natural numbers. And we happen to 
> know, in this case, that we cannot have a perfect such theory: any 
> theory will miss something, will have its unprovable Goedel sentence.
> But we do not say that there are two kinds of number: the natural 
> ones, and the merely **mathematical** ones, and the formalized 
> arithmetics are about the latter and not the former. They are all 
> theories of the same entities, but some theories capture more truths 
> than others. It is not a matter of chalk and cheese, but rather of 
> varieties of cheese-making.
> 
> Similarly for meanings. There are real meanings in the world, let us 
> agree. Some things out there really do mean something. And then there 
> are some theories of this, and Tarskian semantics is one such theory.
> It is somewhat narrow, it does not by any means capture all the 
> nuances of natural meanings. But, especially when extended in the 
> various ways it has been by such folk as Kripke and Scott, it does 
> cover a surprisingly wide range of examples. And, more to the point, 
> it is the *only* viable theory of meaning we have, as far as I know.
> We have philosophical critiques of it, to be sure, but we do not have 
> any alternatives to hand.
> 
> You, below, contrast meanings in a "mathematical theory" (which I 
> presume you presume I was talking about) with those in a 
> "computational ontology". But I was talking about computational 
> ontologies. Computational ontologies are artifacts, written in formal 
> logical notations. They do not simply 'have' natural meanings, 
> meanings-in-the-wild, in the way that human natural languages are said 
> to have. They do not have intended meanings. We may intend them to 
> have a meaning, but a computational ontology is just as artificial and 
> "formal" (which is to say, "mathematical" in the sense of being 
> mathematically described) as any other artifact. And in the case of 
> logically expressed formalisms, like those used in computational 
> ontologies, the Tarskian theory of meaning applies in a special way.
> Not only is it *a* theory of their meaning, indeed the only one we 
> have,  but Goedel proved that for these formal logics, it is an 
> exactly correct theory of meaning. That is the content of the 
> completeness theorem: something is provable from O precisely when what 
> it means according to the Tarskian theory of meaning is entailed 
> (according to the same theory) by the sentences in O. This is, to 
> emphasize, a provable fact about any FOL-based computational ontology.
> 
> One can say, this ontology O does not capture all my intended meaning, 
> speaking of natural meanings in the wild. Of course, this may well be 
> the case, and may well be a legitimate critique of any formal theory 
> of anything in nature. But what one cannot legitimately claim is that 
> (by virtue of being computational instead of mathematical, or by some 
> other mysterian magic) a *formal* ontology captures more meaning, or a 
> different kind of meaning, than the meaning assigned to it by the 
> Tarskian account, by virtue of its logical or computational properties.
> 
> And what I said, which you quote below, regarding primitives, is a 
> factual observation about a provable consequence of the Tarskian 
> theory of meaning applied to any ontology expressed in any formal 
> assertional logic. I was not talking about mathematical theories in 
> particular, and certainly not in contrast to 'computational ontologies'.
> 
> Your point in response, I take it, is that you include, as part of the 
> meaning-capturing machinery of an ontology the human-readable 
> commentaries which state in English the intended meanings of the 
> formally expressed concepts. Well, that is a stance one can take: but 
> then I would say in response, that your ontology is no longer 
> expressed in a formalism, so is no longer "computational". In fact, I 
> see no reason to call it an ontology at all. Why bother with logic, if 
> I can impose my will upon meanings by writing prose? I need no theory 
> of meaning in order to speak, after all. The entire process can 
> proceed without using any formalism, and the only function that  the 
> computer need play is to be a kind of public record of our 
> discussions, the minutes of the meaning-deciding meetings. But this is 
> a reductio ad absurdum of our enterprise.
> 
> On Feb 14, 2010, at 9:25 PM, Patrick Cassidy wrote:
> 
> > Concerning the meaning of Primitives in a Foundation Ontology:
> >
> > John Sowa said:
> > [JFS] >  " My objection to using 'primitives' as a foundation is 
> > that the meaning of a primitive changes with each theory in which it 
> > occurs.
> >  For example, the term 'point' is a 'primitive' in Euclidean 
> > geometry and various non-Euclidean geometries.  But the meaning of 
> > the term 'point' is specified by axioms that are different in each 
> > of those theories."
> > (and in another note):
> > [JFS] >> As soon as you add more axioms to a theory, the "meaning"
> > of the
> >>> so-called "primitives" changes.
> >
> > Pat Hayes has said something similar but more emphatic:
> >
> > [PH] > "Each theory nails down ONE set of concepts. And they are ALL 
> > 'primitive' in that theory, and they are not primitive or non- 
> > primitive in any other theory, because they aren't in any other 
> > theory AT ALL."
> >
> > Given these two interpretations of "primitive" in a **mathematical** 
> > theory, it seems that the "meanings" of terms (including primitive 
> > terms) in
> a
> > mathematical theory have little resemblance to the meanings of terms 
> > in a computational ontology that is intended to serve some useful 
> > purpose, because the meanings of the terms in the ontology do not 
> > depend solely on the total sum of all the inferences derivable from 
> > the logic, but on the **intended meanings**, which do or at least 
> > should control the way
> the
> > elements are used in applications - and the way the terms are used 
> > in applications is the ultimate arbiter of their meanings.  The 
> > intended meanings can be understood by human programmers not only 
> > from the relations on the ontology elements, but also from the 
> > linguistic documentation, which may reiterate in less formal terms 
> > the direct assertions on each element, but may also include 
> > additional clarification and examples of included or excluded 
> > instances.  It seems quite clear to me that it is a mistake to 
> > assume that the interpretation of "meaning" or "primitive" in a 
> > mathematical theorem is the same as the way that "meaning" and 
> > "primitive" are used in practical computational ontologies.
> >
> >  This discussion was prompted by my assertion that the meanings of 
> > terms in a Foundation ontology, including terms representing 
> > primitive elements, should be as stable as possible so that the FO 
> > can be relied on to produce accurate translations among the domain 
> > ontologies that use the FO as its standard of meaning.  Given the 
> > agreement by JS and PH that each change to an ontology constitutes a 
> > different theory
> 
> This is not an "agreement". It is simply an elementary fact about 
> logical theories. In fact, in logic textbooks, the very word "theory"
> is used to refer to a set of sentences. So OF COURSE different sets 
> constitute different theories.
> 
> > , and the meanings of terms in any
> > one theory are independent of the meanings in any different theory, 
> > I believe that we need to look for a meanings of "meaning" and 
> > "primitive"
> > that is not conflated with the mathematical senses of those words as 
> > expressed by JS and PH.
> 
> They are not "mathematical" senses. And good luck finding an 
> alternative theory of truth.
> >
> > I suggested that a useful part of the interpretation of "meaning" 
> > for practical ontologies would include the "Procedural Semantics" of 
> > Woods.
> > John Sowa replied (Feb. 13, 2010) that:
> > [JFS]> " In short, procedural semantics in WW's definition is 
> > exactly what any programmer does in mapping a formal specification 
> > into a
> program."
> >
> > I agree that computer programmers interpret meanings of ontology 
> > elements using their internal understanding as a wetware 
> > implementation of "procedural semantics".  This does not exhaust the 
> > matter, though, because computers now can perform some grounding by 
> > interaction with the
> world
> > independent of computer programmers in this sense: though their 
> > procedures may be specified by programmers, those procedures can 
> > include functions that themselves perform some of the "procedural 
> > semantics" processes, and the computers therefore are not entirely 
> > dependent on the semantic interpretations of programmers, at least 
> > in theory.  For the present there are probably few programs that 
> > have the capability of independently determining the intended 
> > meanings of the terms in the ontology, but that is likely to change, 
> > though we don't know how fast.  I will provide one example of how 
> > this can happen: an ontologist may assert that the type "Book"
> > includes some real-world instance such as the "Book of Kells" (an 
> > illuminated Irish manuscript from the middle ages, kept at a library 
> > in Dublin).  With an internet connection, the program could (in 
> > theory) check the internet for information about that instance, and 
> > to the extent that the computer can interpret text and images, 
> > perform its own test to determine if the Book of Kells actually fits 
> > the logical specification in the Ontology.
> > The same can be done for instances of any type that are likely to be 
> > discussed on the internet.
> >
> > So, if we want the meanings of terms in an ontology to remain 
> > stable, and
> > **don't** want the meanings to change any time some remotely related 
> > type appears in a new axiom,
> 
> But we DO want this! Surely that is the very point of changing and 
> adding axioms. If meanings are stable across theories, then what is 
> the point of adding axioms to capture more meaning? Apparently, 
> whatever meaning you are going to have after the addition was already 
> there before. I presume it was there when the ontology had no axioms 
> at all, in fact: for if not, which addition, in the long process of 
> growing the ontology, created or introduced it? It would seem to 
> follow that  all ontology meanings are already present in an empty 
> ontology.
> 
> This is obvious nonsense. You are confusing the intended meaning, the 
> natural meaning we are seeking to capture in O, with the *actual*, 
> theory-justified meaning that O can be said to have by virtue of its 
> ontological structure. The former is stable, but is not 
> computationally accessible. The latter is computationally accessible, 
> and subject to precise theoretical analysis, but is particular to the 
> ontology. Change the ontology, you change the meaning. Maybe not much, 
> but you do change it. And this is not a 'problem', but on the 
> contrary, is exactly what we would expect and what makes our work 
> possible at all.
> 
> > what can we do?  Perhaps we can require that the meanings across 
> > *different* versions of the FO can only be relied on to produce 
> > *exactly* the same inferences if the chain of inferences is kept to 
> > some small number, say 5 or 6 links (except for those elements 
> > inferentially close to the changed elements, which can be identified 
> > in the FO version documentation, and whose meanings in this sense 
> > may change).  This is not, IMHO, an onerous condition for several 
> > reasons:
> > (1) no added axiom will be accepted if it produces a logical 
> > inconsistency in the FO;
> > (2) the programmers whose understanding determines how the elements 
> > are used will not in general look beyond a few logical links for 
> > their own interpretation, so only elements very closely linked to 
> > the one changed by new axioms will be used differently in programs.
> > (3) as long as the same version of the FO is used, the inferences 
> > for the same data should be identical regardless of the number of 
> > links in the chain of inference;
> > (4) if there are only a few axioms that change (out of say 10,000) 
> > between two versions of the FO, then the likelihood of getting 
> > conflicting results will be very small for reasonable chains of 
> > inference; if the similarity of two versions of the FO is 99.9% (10 
> > different elements out of 10,000), then a chain of inference would 
> > produce non-identical sets of results on average for an inference 
> > chain 10 long at 0.999^10 = 0.99 (99% of the time), and for a chain 100 
> > long at 0.999^100 = 0.90, or 90% of the time.  But the most 
> > important and useful inferences are likely to be those that arise 
> > from short inference chains
> 
> Really? I see no reason why this would be generally true.
> 
> > , comparable to how humans would use them, and to how programmers 
> > would imagine them being used in logical inference.
> >
> >  The potential for changed inferences as new axioms are added, even 
> > though not changing the intended meanings of the elements of the FO, 
> > does argue for an effort to include as many of the primitive 
> > elements as can be identified at the earliest stages.  This is in 
> > fact the purpose of performing
> the
> > consortium project, to get a common FO thoroughly tested as quickly
> as
> > possible, and minimize the need for new axioms.
> >
> >  In these discussions of the principles of an FO and a proposed FO 
> > project, not only has there been no technical objection to the 
> > feasibility of an FO to serve its purpose (just gut skepticism), but 
> > there has also been a notable lack of suggestions for alternative 
> > approaches that would achieve the goal of general accurate semantic 
> > interoperability (not just interoperability in some narrow sphere or 
> > within some small group.)
> 
> There have also been very few suggestions on how to make a perpetual 
> motion machine. Has it not dawned on you yet that this goal you are 
> seeking might well be impossible? That if it were possible, it would 
> have been done eons ago?
> 
> >  I am
> > quite aware that my discussions do not **prove** that the FO 
> > approach would work, in fact the only way I can conceive of proving 
> > it is to perform a test such as the FO project and see if it does 
> > work.  But neither has anyone suggested a means to **prove** that 
> > other approaches will work, and this hasn't stopped other approaches 
> > (notably mapping) from being funded.
> 
> Ontology mapping has immediate benefits in the real world, which is 
> why it gets funded.
> 
> >
> > In fact in my estimate, no other approach has the potential for 
> > coming as close to accurate interoperability as an FO, and no 
> > approach other than the kind of FO project I have suggested can 
> > possibly achieve that result as quickly.
> 
> You ignore the likely possibility that it will be impossible to create 
> an FO, and that the attempt will cost a great deal of money and 
> effort. And you would never accept a negative outcome in any case, 
> right?
> 
> >
> >  I am grateful for the discussions that have helped sharpen my 
> > understanding of how such a project can be perceived by others, and 
> > improved my understanding of some of the nuances.  But after 
> > considering these points,
> >
> PatH    (025)


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (026)
