Pat,
Re: Indeed, my reasons for believing that it will not work are based on
experience of trying to capture human intended meanings and the
difficulties of doing so, rather than any technical argument about the
nature of meanings.
With the deep experience on this list, I'm sure most recognize that
conceptual alignment is difficult and sometimes fails. We also
recognize that none of our abstractions are "the world" and that the
connections between our concepts and the "real thing" exist purely in
our minds. While this is old ground, it seems to continually get in our
way.
It is the conceptual alignment in our minds, flawed and hard as it may
be, that is the point of any formalism. It is our minds that we are
trying to improve and communicate between. One difference I have noted
in how people think about these things is the relation between the
concept and (perceived) reality. If we make a model of a particular
chair in your room, and we make different models with different axioms -
is it the same chair? Some seem to focus purely on the abstraction and
ignore any relation to "the thing". Others treat these different models
as "information about" (or assertions about) the "same thing". The
"thing" is still the point, even if the nature and identity of the
thing is viewpoint- or model-dependent.
So in a conceptual alignment we are worrying about multiple models of a
concept, understanding that different models could even be incompatible
(since people don't see things the same way) yet still be "aligned" in
that they are about the same thing. The problem in the formal world is
that there is no way to fully prove this alignment. But in the human
world, where we want to get value from models, alignment of the modeled
concepts with the perceived "real" concepts is required. For example,
if I have a business model for selling hamburgers, it is the same
hamburger regardless of the language used or the assertions made. Of
course, a goal should be consistency between models, but even that is
impossible in an open world.
So, recognizing that this is hard and imprecise but still valuable, how
do we do it? Perhaps we are just approaching the required maturity -
that there have been some successes and some failures does not prove
anything. Some things that I have noted seem to help such a process
are:
* Concepts, not terms. Many of the failures occur because people argue
about the meaning of a term. When concepts are identified by more
complete phrases we seem to do better.
* There are some representation issues that always seem to get in the
way, e.g. reification. The choice to reify or not should not change the
concept (one way to ensure this is to reify everything).
* Concepts are systems - you always have to look at a related "SYSTEM OF
CONCEPTS" and understand and define the concepts within these
mini-systems. E.g., Type and Instance are mutually dependent concepts
within the same system (or microtheory).
* There is no one single truth - various models express different
viewpoints or foundations that we have to relate. Some are, of course,
better or more fundamental than others, but such distinctions are a
matter of perspective. The ability to relate different viewpoints, as
peers and/or as a hierarchy, is fundamental to alignment.
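The reification point can be illustrated with a minimal sketch (all names here - Chair42, locatedIn, hasSubject, and so on - are hypothetical, not from any actual ontology): the same fact is expressed once as a direct triple and once as a reified statement, and a small helper recovers the direct form, showing the underlying concept is unchanged by the representation choice.

```python
# Minimal sketch with hypothetical names: the same fact, expressed
# directly and in reified form. The concept is the same; only the
# representation differs.

# Direct assertion: one triple.
direct = {("Chair42", "locatedIn", "Room7")}

# Reified form: the statement itself becomes an entity with
# subject/predicate/object attributes, so it can be annotated.
reified = {
    ("stmt1", "hasSubject", "Chair42"),
    ("stmt1", "hasPredicate", "locatedIn"),
    ("stmt1", "hasObject", "Room7"),
}

def dereify(triples):
    """Recover direct triples from reified statements."""
    stmts = {}
    for s, p, o in triples:
        stmts.setdefault(s, {})[p] = o
    return {
        (d["hasSubject"], d["hasPredicate"], d["hasObject"])
        for d in stmts.values()
        if {"hasSubject", "hasPredicate", "hasObject"} <= d.keys()
    }

# Reifying did not change the concept: the direct form is recoverable.
assert dereify(reified) == direct
```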
There are, of course, more such methodological rules - besides getting a
good foundation we need a process that works. It is my hope and
expectation that we are working towards such maturity that will result
in conceptual alignment.
Regards,
Cory Casanave
-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
Sent: Wednesday, March 03, 2010 1:27 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
On Feb 26, 2010, at 12:50 PM, Patrick Cassidy wrote:
> I just have to respond immediately to one comment that John made:
>
> [JS]> > What motivated my last note were your implications that Pat
> Hayes
>> didn't know what he was talking about. I'm sure that you realize
>> that he has a very strong background in the field, and any such
>> implications were inappropriate.
>>
> That was not in any way implied by what I have said. Of course I
> have the
> greatest respect for Pat Hayes, not just for his multiple past
> accomplishments but for his continuing contributions to automated
> reasoning
> and its applications. Indeed I take as virtually certain everything
> he says
> about logic, reasoning and math. That's why I quote him whenever
> appropriate.
>
> But on occasion PatH's comments seem to be at variance with what he
> has said
> before, in which case I need to get additional clarification. And on
> occasion I phrase my puzzlement in somewhat jocular phraseology. I
> don't
> believe that PatH has taken any of this as disrespectful - I haven't
> noticed
> any offense taken.
Right. I really don't give a toss for respect, guys. Just focus on the
arguments.
>
> My big problem is that PatH clearly has an intuitive feeling that
> the FO
> tactic won't work, but I have had some difficulty finding any
> technical
> reasons in his comments that argue against the proposal, which
> leaves me
> thinking that this is just an intuitive gut feeling on his part,
> which could
> derive from any of many reasons, not related to technical feasibility.
Indeed, my reasons for believing that it will not work are based on
experience of trying to capture human intended meanings and the
difficulties of doing so, rather than any technical argument about the
nature of meanings.
> The
> one technical point he has made recently is about the meanings of
> ontology
> elements changing as new axioms are added. I don't question that
> this is
> true as far as mathematical interpretations are concerned.
It is true as far as *interpretations* are concerned. This has nothing
whatever to do with mathematics.
> I do question
> that users of an ontology will *want* the meanings of their already-
> defined
> ontology elements to change as new elements are added. But PatH has
> said
> (it seems) that this is what he wants. I will be eager for
> clarification.
We must be at cross purposes.
Suppose we are developing the ontology and we notice something
missing. Perhaps we have introduced a distinction between occurrents
and continuants, but had not noticed that one of our high-level
classes now needs to be subdivided into two categories, so that an old
axiom which quantifies over the union needs to be rewritten as two
axioms using distinct styles of atomic statements involving the
temporal parameter. This involves deleting an axiom and replacing it
with two others. The set of entailments changes, fortunately, as the
axioms before this change implied (inadvertently, but they did in fact
imply) that the high-level class in question was empty. The axioms had
a bug in them, and we have now fixed that bug.
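This bug-fix scenario can be sketched with a toy forward-chaining entailment check. The atoms and rules below are invented for illustration (not drawn from any actual ontology): replacing one over-broad axiom with two finer-grained ones visibly changes the set of entailments, which is exactly the point of fixing a conceptual bug by editing axioms.

```python
# Toy sketch with invented atoms: editing the axioms changes the
# entailments, which is the desired effect of a conceptual bug fix.

def entailments(facts, rules):
    """Forward-chain Horn-style rules (premises -> conclusion) to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

facts = {"x_in_C"}

# Buggy axiom set: one rule quantifying over the undivided class C.
buggy = [(("x_in_C",), "x_is_occurrent")]

# Fixed axiom set: the old rule deleted, two finer-grained rules added
# that each also require a temporal condition.
fixed = [
    (("x_in_C", "x_persists"), "x_is_continuant"),
    (("x_in_C", "x_happens"), "x_is_occurrent"),
]

# The unwanted conclusion follows from the buggy axioms but not the fixed ones.
assert "x_is_occurrent" in entailments(facts, buggy)
assert "x_is_occurrent" not in entailments(facts, fixed)
```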
Why would anyone NOT want conceptual bugs to be fixed in this way? Why
would anyone want the meanings of terms to be fixed, regardless of
what axioms were written to establish or capture those meanings? If
this were so, there would be no purpose in writing axioms at all.
Now, I suspect that your position is that of course we want this to be
so as long as we are writing the FO, but that once the 'core' FO is
done, we want it to be stable, with all the meanings of its terms
fixed, while we write the penumbra of application ontologies that fill
in all the details of application areas. And here we get into a more
technical matter, which is how to define 'meaning' so that this will
be possible. The issue, it seems to me, is that the only available
precise sense of "meaning" that we have simply does not provide any
way to say that the meanings of some terms are fixed by some of the
assertions they occur in, but not by others. So if a term, say
'Human' (the class name for the set of human beings), occurs in the FO
and also in some application module, call it M, then when those two
are used together, there is nothing in the semantic theory of the
underlying language which distinguishes the occurrences in FO from
those in M when we consider interpretations of the combination (FO
+M). This larger set of axioms is simply a set of sentences, and they
all 'contribute' in exactly the same way to the constraints of truth
that the semantics establishes. So I simply cannot understand what is
meant by the claim that just the sentences in the FO part of (FO+M)
'fix' the meanings of the terms in this theory, while the other
sentences... do what? Use those meanings without contributing to
them? I am simply at a loss to know what is being claimed here.
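The point about (FO+M) can be made concrete with a toy Horn-rule sketch (the class names and rule format are invented for illustration): once the two axiom sets are used together, entailment is computed over one flat set of sentences, and nothing in that computation records which axioms came from the FO and which from M.

```python
# Toy sketch with invented class names: once FO and M are combined,
# the semantics sees only one flat set of sentences. Nothing marks
# which axioms "fix" the meaning of 'Human' and which merely "use" it.

def closure(axioms, facts):
    """Everything entailed from `facts` by Horn-style axioms."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in axioms:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

FO = [(("Human",), "Mammal"), (("Human",), "RationalAgent")]
M = [(("Human",), "DescendedFromAfricanHominids")]

# The combination FO+M is simply one set of sentences; every axiom
# constrains the interpretation of 'Human' in exactly the same way,
# and the order (or origin) of the axioms makes no difference.
combined = FO + M
assert closure(combined, {"Human"}) == closure(M + FO, {"Human"})
assert "DescendedFromAfricanHominids" in closure(combined, {"Human"})
```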
Take the example of "Human". The FO might establish that Humans are a
subclass of Mammals and of Rational Agents, and general stuff like
that. But maybe M is all about sociobiology, and it tells us that
human beings are descended from a race of early hominids hailing from
Africa. Surely this tells us more about what "Human" means, and changes
the meaning of 'human'. Everything we learn involving the term tells us
something new about it and changes, if only slightly or subtly,
its meaning. Where do we draw a line around the essential core of
things we know about humanity that constitutes the single, eternally
fixed, universally accepted *definition* of the term "human"?
I don't believe this can be done. All our intended meanings are
embedded in, and take their authority from, some accepted theory of
the world. And those theories are far too big, too extensive, to be
something like a FO.
Pat H
> And, importantly this is only significant if indeed the FO does
> change.
> That's why I think an FO project should strive to make the FO (not the
> domain ontologies) as complete as possible at the earliest possible
> point,
> so that changes, if they are needed, will be rare.
>
> PatC
>
> Patrick Cassidy
> MICRA, Inc.
> 908-561-3416
> cell: 908-565-4053
> cassidy@xxxxxxxxx
>
>
>
>
> _________________________________________________________________
> Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> Shared Files: http://ontolog.cim3.net/file/
> Community Wiki: http://ontolog.cim3.net/wiki/
> To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
> To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
>
>
------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 mobile
phayesAT-SIGNihmc.us http://www.ihmc.us/users/phayes