Pat, (01)
Agree with a lot of what you say below, but I am having trouble working out
exactly what you mean by this.    (02)
> But I
> was talking about computational ontologies. Computational ontologies are
> artifacts, written in formal logical notations. They do not simply 'have'
> natural meanings, meanings-in-the-wild, in the way that human natural
> languages are said to have. They do not have intended meanings. We may
> intend them to have a meaning, but a computational ontology is just as
> artificial and "formal" (which is to say, "mathematical" in the sense of
> being mathematically described) as any other artifact. And in the case of
> logically expressed formalisms, like those used in computational
> ontologies, the Tarskian theory of meaning applies in a special way.    (03)
My problem is with the claim that computational ontologies do not have
intended meanings, as it looks like you are saying above.
OK, it is plain that when I send someone an OWL file, they cannot read and
check the intended interpretation (intended meaning?) in the same formal way
as they can read/check its formal semantics.
(Leaving aside mathematical objects, which can be problematic.) Surely for
most practical uses (e.g. ATM banking, missile defence, etc.), unless the
artefact does have an intended interpretation, it is not of much practical
use. Nor can one argue that the intended interpretation is something private
and subjective: it tends to concern things that are public and, from a
practical point of view, decidable - whether the ATM dispenses £10 when asked
to, whether the radar deciphers the signal, etc. Furthermore, intended
interpretations are not alien to Tarski - did he not mainly consider
interpreted languages in his 1933 paper? So (as you well know) the notion of
an intended interpretation is not novel.    (04)
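To make that concrete, here is a minimal, hypothetical sketch (the namespace
and the class names are invented for illustration, not taken from any real
ontology) of how the formal part of an OWL file sits alongside the
documentation that carries its intended interpretation:

    @prefix :     <http://example.org/bank#> .    # hypothetical namespace
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Formal part: the only thing the reasoner's (Tarskian) semantics sees.
    :CashWithdrawal a owl:Class ;
        rdfs:subClassOf :AccountTransaction .

    # Intended interpretation: invisible to the model theory, readable only
    # by a human, yet it is what makes the artefact usable in practice.
    :CashWithdrawal rdfs:comment
        "A transaction in which an ATM dispenses banknotes (e.g. £10) to the account holder." .

A reasoner can check the subclass axiom; whether the class is really about
ATMs handing over £10 is settled only by how people and applications use it.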
So, it would seem to me that for the artefact to be useful it needs to have
both a formal semantics and an intended interpretation. That does not stop
model-theoretic reasoners from ignoring the intended interpretation for their
formal work. But if every application of the ontology ignored the intended
interpretation, I cannot see what use it would be. I can see that tools for
dealing with the intended interpretation may not be as mathematical as a
logician would like, but if you want something practical ... (05)
So, what puzzles me is that if you insist on the PatH Theory of
Computational Ontology Meaning you end up with practically useless artefacts
- something I am sure you did not intend. (06)
Chris (07)
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
> Sent: 24 February 2010 21:55
> To: Patrick Cassidy
> Cc: [ontolog-forum]
> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
>
>
> Pat, we have got to get this sorted out. We are (I hope) talking past one
> another.
>
> First, let me clarify something about 'formal' or 'mathematical' notions
> of meaning. Tarskian semantics does not apply only to 'mathematical'
> theories, nor does it require that all meanings be "mathematical", whatever
> that could mean. It is a very general theory of meaning, one that can be
> applied to a wide range of languages and notations (for example, I have
> applied it to 2-dimensional maps) and even to mental models of thought.
> However, it is itself a mathematically expressed theory. That is, it *uses*
> mathematical notions - of set, and mapping, and function - to state its own
> theoretical ideas. This is something it shares with almost every other
> precise theory of almost anything, in fact. But it does not follow from
> this that the theory is *about* "mathematical things", any more than using,
> say, differential equations to describe the stress forces in a bridge
> girder makes this into a mathematical theory and therefore not about real
> bridges.
>
> But now, to get to the heart of the matter, Tarskian semantics is a THEORY
> of meaning. Actual meanings in the wild, the things we apparently refer to
> when we talk about "intended meanings" or "intuitive meanings" or the like,
> are (we all sincerely hope) real things in the world, part of our human
> life experience.
> We all believe that when we think about things using our concepts of those
> things, that our thoughts are meaningful, that they *have* real, actual
> meanings. But in order to be scientific about this kind of talk, we need
> some *theory* of what these natural, wild, real meanings actually are. Or
> at least some kind of *account* of them, saying what kind of entity they
> are, what properties they have, how they relate to other things (like, to
> the thinkers of the ideas, or to the things they are ideas of, etc.). Are
> meanings something linguistic or symbolic in nature? Are they mental or
> psychological? Or platonic, in some abstract realm, like numbers? Can they
> be written down, or captured in some other way? Etc.
>
> It is just wrong to draw the contrast between the natural things, on the
> one hand, and the account provided of those things by a theory of them, on
> the other, as a difference of **kind**. Take numbers. There are the natural
> numbers, which most mathematicians agree exist in the wild, as it were. And
> then there are various formalized arithmetics, each of which is a theory of
> the natural numbers. And we happen to know, in this case, that we cannot
> have a perfect such theory: any theory will miss something, will have its
> unprovable Goedel sentence.
> But we do not say that there are two kinds of number: the natural ones, and
> the merely **mathematical** ones, and the formalized arithmetics are about
> the latter and not the former. They are all theories of the same entities,
> but some theories capture more truths than others. It is not a matter of
> chalk and cheese, but rather of varieties of cheese-making.
>
> Similarly for meanings. There are real meanings in the world, let us
> agree. Some things out there really do mean something. And then there are
> some theories of this, and Tarskian semantics is one such theory.
> It is somewhat narrow, it does not by any means capture all the nuances of
> natural meanings. But, especially when extended in the various ways it has
> been by such folk as Kripke and Scott, it does cover a surprisingly wide
> range of examples. And, more to the point, it is the *only* viable theory
> of meaning we have, as far as I know.
> We have philosophical critiques of it, to be sure, but we do not have any
> alternatives to hand.
>
> You, below, contrast meanings in a "mathematical theory" (which I presume
> you presume I was talking about) with those in a "computational ontology".
> But I was talking about computational ontologies. Computational ontologies
> are artifacts, written in formal logical notations. They do not simply
> 'have' natural meanings, meanings-in-the-wild, in the way that human
> natural languages are said to have. They do not have intended meanings. We
> may intend them to have a meaning, but a computational ontology is just as
> artificial and "formal" (which is to say, "mathematical" in the sense of
> being mathematically described) as any other artifact. And in the case of
> logically expressed formalisms, like those used in computational
> ontologies, the Tarskian theory of meaning applies in a special way.
> Not only is it *a* theory of their meaning, indeed the only one we have,
> but Goedel proved that for these formal logics, it is an exactly correct
> theory of meaning. That is the content of the completeness theorem:
> something is provable from O precisely when what it means according to the
> Tarskian theory of meaning is entailed (according to the same theory) by
> the sentences in O. This is, to emphasize, a provable fact about any
> FOL-based computational ontology.
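For reference, the completeness property appealed to in the paragraph above
is the standard one for first-order logic (O a set of first-order sentences,
phi any sentence of its language); in LaTeX notation:

    % Goedel's completeness theorem for first-order logic: provability
    % from O coincides with Tarskian (model-theoretic) entailment.
    \[
      O \vdash \varphi \quad\Longleftrightarrow\quad O \models \varphi
    \]
    % i.e. phi is derivable from the axioms in O exactly when every Tarskian
    % interpretation satisfying all of O also satisfies phi.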
>
> One can say, this ontology O does not capture all my intended meaning,
> speaking of natural meanings in the wild. Of course, this may well be the
> case, and may well be a legitimate critique of any formal theory of
> anything in nature.
> But what one cannot legitimately claim is that (by virtue of being
> computational instead of mathematical, or by some other mysterian magic) a
> *formal* ontology captures more meaning, or a different kind of meaning,
> than the meaning assigned to it by the Tarskian account, by virtue of its
> logical or computational properties.
>
> And what I said, which you quote below, regarding primitives, is a factual
> observation about a provable consequence of the Tarskian theory of meaning
> applied to any ontology expressed in any formal assertional logic. I was
> not talking about mathematical theories in particular, and certainly not in
> contrast to 'computational ontologies'.
>
> Your point in response, I take it, is that you include, as part of the
> meaning-capturing machinery of an ontology, the human-readable commentaries
> which state in English the intended meanings of the formally expressed
> concepts. Well, that is a stance one can take: but then I would say in
> response, that your ontology is no longer expressed in a formalism, so is
> no longer "computational".
> In fact, I see no reason to call it an ontology at all. Why bother with
> logic, if I can impose my will upon meanings by writing prose? I need no
> theory of meaning in order to speak, after all. The entire process can
> proceed without using any formalism, and the only function that the
> computer need play is to be a kind of public record of our discussions, the
> minutes of the meaning-deciding meetings. But this is a reductio ad
> absurdum of our enterprise.
>
> On Feb 14, 2010, at 9:25 PM, Patrick Cassidy wrote:
>
> > Concerning the meaning of Primitives in a Foundation Ontology:
> >
> > John Sowa said:
> > [JFS] > " My objection to using 'primitives' as a foundation is that
> > the meaning of a primitive changes with each theory in which it
> > occurs.
> > For example, the term 'point' is a 'primitive' in Euclidean geometry
> > and various non-Euclidean geometries. But the meaning of the term
> > 'point' is specified by axioms that are different in each of those
> > theories."
> > (and in another note):
> > [JFS] >> As soon as you add more axioms to a theory, the "meaning" of the
> > >> so-called "primitives" changes.
> >
> > Pat Hayes has said something similar but more emphatic:
> >
> > [PH] > "Each theory nails down ONE set of concepts. And they are ALL
> > 'primitive' in that theory, and they are not primitive or non-
> > primitive in any other theory, because they aren't in any other theory
> > AT ALL."
> >
> > Given these two interpretations of "primitive" in a **mathematical**
> > theory, it seems that the "meanings" of terms (including primitive
> > terms) in a mathematical theory have little resemblance to the
> > meanings of terms in a computational ontology that is intended to
> > serve some useful purpose, because the meanings of the terms in the
> > ontology do not depend solely on the total sum of all the inferences
> > derivable from the logic, but on the **intended meanings**, which do
> > or at least should control the way the elements are used in
> > applications - and the way the terms are used in applications is the
> > ultimate arbiter of their meanings. The intended meanings can be
> > understood by human programmers not only from the relations on the
> > ontology elements, but also from the linguistic documentation, which
> > may reiterate in less formal terms the direct assertions on each
> > element, but may also include additional clarification and examples of
> > included or excluded instances. It seems quite clear to me that it is
> > a mistake to assume that the interpretation of "meaning" or
> > "primitive" in a mathematical theorem is the same as the way that
> > "meaning" and "primitive" are used in practical computational
> > ontologies.
> >
> > This discussion was prompted by my assertion that the meanings of
> > terms in a Foundation ontology, including terms representing primitive
> > elements, should be as stable as possible so that the FO can be relied
> > on to produce accurate translations among the domain ontologies that
> > use the FO as its standard of meaning. Given the agreement by JS and
> > PH that each change to an ontology constitutes a different theory
>
> This is not an "agreement". It is simply an elementary fact about logical
> theories. In fact, in logic textbooks, the very word "theory" is used to
> refer to a set of sentences. So OF COURSE different sets constitute
> different theories.
>
> > , and the meanings of terms in any
> > one theory are independent of the meanings in any different theory, I
> > believe that we need to look for meanings of "meaning" and "primitive"
> > that are not conflated with the mathematical senses of those words as
> > expressed by JS and PH.
>
> They are not "mathematical" senses. And good luck finding an alternative
> theory of truth.
> >
> > I suggested that a useful part of the interpretation of "meaning" for
> > practical ontologies would include the "Procedural Semantics" of
> > Woods.
> > John Sowa replied (Feb. 13, 2010) that:
> > [JFS]> " In short, procedural semantics in WW's definition is exactly
> > what any programmer does in mapping a formal specification into a
> > program."
> >
> > I agree that computer programmers interpret meanings of ontology
> > elements using their internal understanding as a wetware
> > implementation of "procedural semantics". This does not exhaust the
> > matter, though, because computers now can perform some grounding by
> > interaction with the world independent of computer programmers in this
> > sense: though their procedures may be specified by programmers, those
> > procedures can include functions that themselves perform some of the
> > "procedural semantics" processes, and the computers therefore are not
> > entirely dependent on the semantic interpretations of programmers, at
> > least in theory. For the present there are probably few programs that
> > have the capability of independently determining the intended meanings
> > of the terms in the ontology, but that is likely to change, though we
> > don't know how fast. I will provide one example of how this can
> > happen: an ontologist may assert that the type "Book"
> > includes some real-world instance such as the "Book of Kells" (an
> > illuminated Irish manuscript from the middle ages, kept at a library
> > in Dublin). With an internet connection, the program could (in
> > theory) check the internet for information about that instance, and to
> > the extent that the computer can interpret text and images, perform
> > its own test to determine if the Book of Kells actually fits the
> > logical specification in the Ontology.
> > The same can be done for instances of any type that are likely to be
> > discussed on the internet.
> >
> > So, if we want the meanings of terms in an ontology to remain stable,
> > and
> > **don't** want the meanings to change any time some remotely related
> > type appears in a new axiom,
>
> But we DO want this! Surely that is the very point of changing and adding
> axioms. If meanings are stable across theories, then what is the point of
> adding axioms to capture more meaning? Apparently, whatever meaning you are
> going to have after the addition was already there before. I presume it was
> there when the ontology had no axioms at all, in fact: for if not, which
> addition, in the long process of growing the ontology, created or
> introduced it? It would seem to follow that all ontology meanings are
> already present in an empty ontology.
>
> This is obvious nonsense. You are confusing the intended meaning, the
> natural meaning we are seeking to capture in O, with the *actual*,
> theory-justified meaning that O can be said to have by virtue of its
> ontological structure. The former is stable, but is not computationally
> accessible. The latter is computationally accessible, and subject to
> precise theoretical analysis, but is particular to the ontology. Change the
> ontology, you change the meaning. Maybe not much, but you do change it. And
> this is not a 'problem', but on the contrary, is exactly what we would
> expect and what makes our work possible at all.
>
> > what can we do? Perhaps we can require that the meanings across
> > *different* versions of the FO can only be relied on to produce
> > *exactly* the same inferences if the chain of inferences is kept to
> > some small number, say 5 or 6 links (except for those elements
> > inferentially close to the changed elements, which can be identified
> > in the FO version documentation, and whose meanings in this sense may
> > change). This is not, IMHO, an onerous condition for several reasons:
> > (1) no added axiom will be accepted if it produces a logical
> > inconsistency in the FO;
> > (2) the programmers whose understanding determines how the elements
> > are used will not in general look beyond a few logical links for their
> > own interpretation, so only elements very closely linked to the one
> > changed by new axioms will be used differently in programs.
> > (3) as long as the same version of the FO is used, the inferences for
> > the same data should be identical regardless of the number of links in
> > the chain of inference;
> > (4) if there are only a few axioms that change (out of say 10,000)
> > between two versions of the FO, then the likelihood of getting
> > conflicting results will be very small for reasonable chains of
> > inference; if the similarity of two versions of the FO is 99.9% (10
> > different elements out of 10,000), then a chain of inference would
> > produce identical sets of results, on average, about 0.999^10 = 0.99
> > (99%) of the time for a chain 10 steps long, and about 0.999^100 = 0.90
> > (90%) of the time for a chain 100 steps long (see the sketch below). But
> > the most important and useful inferences are likely to be those that
> > arise from short inference chains
>
> Really? I see no reason why this would be generally true.
>
> > , comparable to how humans would use them, and to how programmers
> > would imagine them being used in logical inference.
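The arithmetic in point (4) can be made explicit as follows (a minimal
sketch, assuming each step of an inference chain independently draws on a
changed element with probability p):

    % Probability that an n-step chain of inference touches no element that
    % changed between versions, when a fraction p of the ontology differs:
    \[
      P_{\mathrm{same}}(n) = (1 - p)^{n}
    \]
    % With p = 0.001 (10 changed elements out of 10,000):
    %   n = 10  :  0.999^{10}  \approx 0.990   (about 99% identical results)
    %   n = 100 :  0.999^{100} \approx 0.905   (about 90% identical results)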
> >
> > The potential for changed inferences as new axioms are added, even
> > though not changing the intended meanings of the elements of the FO,
> > does argue for an effort to include as many of the primitive elements
> > as can be identified at the earliest stages. This is in fact the
> > purpose of performing the consortium project, to get a common FO
> > thoroughly tested as quickly as possible, and minimize the need for
> > new axioms.
> >
> > In these discussions of the principles of an FO and a proposed FO
> > project, not only has there been no technical objection to the
> > feasibility of an FO to serve its purpose (just gut skepticism), but
> > there has also been a notable lack of suggestions for alternative
> > approaches that would achieve the goal of general accurate semantic
> > interoperability (not just interoperability in some narrow sphere or
> > within some small group.)
>
> There have also been very few suggestions on how to make a perpetual
> motion machine. Has it not dawned on you yet that this goal you are seeking
> might well be impossible? That if it were possible, it would have been done
> eons ago?
>
> > I am
> > quite aware that my discussions do not **prove** that the FO approach
> > would work, in fact the only way I can conceive of proving it is to
> > perform a test such as the FO project and see if it does work. But
> > neither has anyone suggested a means to **prove** that other
> > approaches will work, and this hasn't stopped other approaches
> > (notably mapping) from being funded.
>
> Ontology mapping has immediate benefits in the real world, which is why it
> gets funded.
>
> >
> > In fact in my estimate, no other approach has the potential for
> > coming as
> > close to accurate interoperability as an FO, and no approach other
> > than the
> > kind of FO project I have suggested can possibly achieve that result
> > as
> > quickly.
>
> You ignore the likely possibility that it will be impossible to create
> an FO, and that the attempt will cost a great deal of money and
> effort. And you would never accept a negative outcome in any case,
> right?
>
> >
> > I am grateful for the discussions that have helped sharpen my
> > understanding of how such a project can be perceived by others, and
> > improved
> > my understanding of some of the nuances. But after considering these
> > points, I still think that a project to build and test a common FO
> > is the
> > best bet for getting to general accurate semantic interoperability as
> > quickly as possible.
>
> "General accurate semantic operability" is like world peace, and just
> as unlikely to ever be achieved while human beings have the cognitive
> limitations that we do in fact have. Even if an angel were to appear
> and give us such a FO, it would work only for a matter of minutes
> before needing to be changed, and no amount of human effort could keep
> pace with the necessary changes.
>
> >
> > Perhaps future objections could focus on genuine technical problems
> > (not
> > analogies with human language), and better yet suggest alternatives to
> > solving the problem at hand: not just *some* level of
> > interoperability, but
> > accurate interoperability that would let people rely on the
> > inferences drawn
> > by the computer. If not a common FO, then what?
>
> Nothing. This is not a viable goal to seek. It is a fantasy, a dream.
> One does not seek alternative ways to achieve a fantasy.
>
> PatH
>
>
> >
> > Pat
> >
> > Patrick Cassidy
> > MICRA, Inc.
> > 908-561-3416
> > cell: 908-565-4053
> > cassidy@xxxxxxxxx
> >
> >
> >> -----Original Message-----
> >> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> >> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> >> Sent: Saturday, February 13, 2010 12:31 PM
> >> To: [ontolog-forum]
> >> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
> >>
> >> Cory,
> >>
> >> I'd like to give some examples that may clarify that issue:
> >>
> >> CC> ... foundational concepts are very similar to the "minimally
> >>> axiomatized micro theories" I remember John-S describing as
> >>> a workable foundation, yet John does not see primitives
> >>> as workable - why the difference?
> >>
> >> My objection to using "primitives" as a foundation is that the
> >> meaning of a primitive changes with each theory in which it occurs.
> >> For example, the term 'point' is a "primitive" in Euclidean geometry
> >> and various non-Euclidean geometries. But the meaning of the term
> >> 'point' is specified by axioms that are different in each of those
> >> theories.
> >>
> >> Note that there are two kinds of specifications:
> >>
> >> 1. Some terms are defined by a *closed form* definition, such as
> >>
> >> '3' is defined as '2+1'.
> >>
> >> In a closed-form definition, any occurrence of the term on the
> >> left
> >> can be replaced by the expression on the right.
> >>
> >> 2. But every formal theory has terms that cannot be defined by a
> >> closed-form definition.
> >>
> >> For example, both Euclidean and non-Euclidean geometries use the
> >> term 'point' without giving a closed-form definition. But calling
> >> it undefined is misleading because its "meaning" is determined by
> >> the pattern of relationships in the axioms in which the term occurs.
> >>
> >> The axioms specify the "meaning". But the axioms change from one
> >> theory to another. Therefore, the same term may have different
> >> meanings in theories with different axioms.
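A small sketch of the two kinds of specification (the geometry axiom is just
one standard example of an axiom constraining 'point', not a quotation from
Euclid):

    % 1. Closed-form (eliminable) definition: every occurrence of the
    %    defined term can be rewritten away.
    \[ 3 := 2 + 1 \qquad\text{so}\qquad 3 + 3 \;\to\; (2+1) + (2+1) \]

    % 2. Axiomatically constrained primitive: 'point' has no such rewrite;
    %    its meaning comes only from the axioms it occurs in, e.g.
    %    "two distinct points lie on exactly one line":
    \[
      \forall p\,\forall q\, \bigl( \mathrm{Point}(p) \wedge \mathrm{Point}(q)
        \wedge p \neq q \rightarrow \exists!\,\ell\,( \mathrm{Line}(\ell)
        \wedge \mathrm{On}(p,\ell) \wedge \mathrm{On}(q,\ell) ) \bigr)
    \]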
> >>
> >> For example, Euclidean and non-Euclidean geometries share the
> >> same "primitives". The following web site summarizes Euclid's
> >> five "postulates" (AKA axioms):
> >>
> >> http://www.cut-the-knot.org/triangle/pythpar/Fifth.shtml
> >>
> >> The first four are true in Euclidean and most non-Euclidean
> >> geometries. By deleting the fifth postulate, you would get
> >> a theory of geometry that had exactly the same "primitives",
> >> but with fewer axioms. That theory would be a generalization
> >> of the following three:
> >>
> >> 1. Euclid specified a geometry in which the sum of the three
> >> angles of a triangle always sum to exactly 180 degrees.
> >>
> >> 2. By changing the fifth postulate, Riemann defined a geometry
> >> in which the sum is > 180 degrees.
> >>
> >> 3. By a different change to the fifth postulate, Lobachevsky
> >> defined a geometry in which the sum is < 180 degrees.
> >>
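For reference, the three cases correspond to Playfair's form of the fifth
postulate (given a line l and a point P not on l):

    \[
    \begin{array}{ll}
      \text{Euclidean:} & \text{exactly one line through } P \text{ parallel to } l \quad (\text{angle sum} = 180^\circ)\\
      \text{Hyperbolic (Lobachevsky):} & \text{more than one such parallel} \quad (\text{angle sum} < 180^\circ)\\
      \text{Elliptic (Riemann):} & \text{no such parallel} \quad (\text{angle sum} > 180^\circ)
    \end{array}
    \]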
> >> This gives us a generalization hierarchy of theories. The theories
> >> are generalized by deleting axioms, specialized by adding axioms,
> >> and revised by changing axioms (or by deleting some and replacing
> >> them with others).
> >>
> >> I have no objection to using collections of vague words, such as
> >> WordNet or Longman's, as *guidelines*. But the meanings of those
> >> words are ultimately determined by the axioms, not by the choice
> >> of primitives.
> >>
> >> Note to RF: Yes, the patterns of words in NL text impose strong
> >> constraints on the meanings of the words. That is important for
> >> NLP, but more explicit spec's are important for computer software.
> >>
> >> John
> >>
> >>
> >>
> >>
> >
> >
> >
> >
> >
>
> ------------------------------------------------------------
> IHMC (850)434 8903 or (650)494 3973
> 40 South Alcaniz St. (850)202 4416 office
> Pensacola (850)202 4440 fax
> FL 32502 (850)291 0667 mobile
> phayesAT-SIGNihmc.us http://www.ihmc.us/users/phayes
>
>
>
>
>
>
> (08)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (09)
|