On Mar 2, 2010, at 2:31 AM, Matthew West wrote:
> Dear PatH and PatC,
>
> Let me have a go at this.
>
>>> The disconnect between PatH's view of "meaning" and mine is that
>>> he is
>>> content to believe that the meanings of the elements used in
>>> programs,
>>> databases, ontologies (e.g. time, distance, physical object, dollar,
>>> person)
>>> all change every time we add a new assertion about unicorns, and I
>>> am not.
>>
>> It is not a matter of being content to believe. I am asserting this
>> AS
>> A FACT, and you are simply in denial about elementary facts of
>> semantic theory. Now, of course, you are free to invent an
>> alternative
>> semantic theory, one that supports your intuitions about meanings
>> being fixed when axioms change, but I would like to see that theory
>> given some reasonably precise flesh before proceeding to discuss this
>> matter very much further.
>
> MW: I suspect the real problem here is that you are each looking
> through
> opposite ends of the telescope. Let me describe the different views:
>
> PatH looks at it from the theory end, and says that when you change
> an axiom
> in a theory there is a different set of models that it picks out.
> Absolutely
> right. It "means" something different.
>
> PatC looks at it from the other end. He has a particular intended
> interpretation, and his question is: if he changes this axiom does
> it still
> pick out his intended interpretation (he doesn't care about any
> unintended
> interpretations). If it does, as far as he is concerned it "means"
> the same
> thing. Also true.
>
> I think there is something to be accommodated from both sides here in
> practical ontology development.
No doubt. In my defense, I will point out that Pat C was (until
recently) referring to the meaning that the ontology primitives
*actually have*, not to intentions. Hence my insistence on the point.
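To make the two ends of the telescope concrete, here is a toy case,
echoing the unicorn example above (the predicate names are my own
invention, purely for illustration):

   T:    (forall (x) (if (Unicorn x) (Animal x)))
   T+A:  T plus the new axiom
         (forall (x) (if (Unicorn x) (hasHorn x)))

From the theory end, the set of models shrinks strictly: any
interpretation containing a hornless unicorn satisfies T but not T+A,
so T+A "means" something different from T. From the other end, an
intended interpretation in which the extension of Unicorn is empty
satisfies both T and T+A, so adding A changes nothing about what
Animal (or Person, or dollar) picks out there, and the axioms still
"mean" the same.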
As you say, both true. However, I have doubts about the notion of a
fixed intended interpretation, even if it is only an intention. First,
it is important to note that this is not (usually) quite the same
notion of interpretation as is used when we speak about formal
semantics, what you refer to above as 'models'. It can be, but it is
unusual for intentions to be that precise. It is rare to find any
concept on which humans agree about its meaning well enough to
completely and absolutely rule out every logically possible way to
distinguish alternatives. I certainly have never come across a single
example of this. At the very least, there will be areas of doubt,
areas where people simply have not thought out the consequences of
their own ideas well enough to come to a decision.

Right now, just for one example, I am engaged with a colleague in
trying to develop an
ontology of images, and we have been debating for several days about
exactly what counts as an image. If I copy a digital image, have I
made a new image, or simply a new copy of the *same* image? How exact
does the copy need to be? (JPEG compression is lossy, for example, so
some pixels may change, yet we do not usually say that the image has
changed.) Is part of an image also an image? Is a 'view' of part of
the actual world, e.g. when looking out of a window, itself an image, or
does it only become one when a camera shutter is opened? And so on
(and on...)

Now, anyone who has tried to actually develop an ontology
in a group of people will recognize this kind of situation
immediately. Two competent, even skilled, native speakers of the same
language with a shared culture, etc., can still disagree, or at least
find a lot to discuss, when they have to capture their 'intended'
meanings in a formal framework. And the final result does not exactly
conform to either of their intentions: it is a compromise. Neither of
us is wholly comfortable with the final result. If it were up to us
alone, we would have done it our way, and there would have been two
ontologies. And this is just two people, and indeed two people who
know one another well and are about as motivated as two people can be
to want to agree and for their joint project to succeed.

Longman's dictionary isn't going to be any help for us. We have
consulted every
authority we can find: existing ontologies, the Getty vocabularies,
the YAGO vocabularies in DBpedia, the press standards used for
describing news images, EXIF, the UMBEL distillation of Cyc, thesauri
developed for museum curation, the lot. Guess what: on this point of
detail (and many others), they are either silent or they disagree with
one another.
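Just to show how quickly the informal question turns into a formal
choice, here are the two positions on copying, sketched in the same
notation as before (copyOf and instanceOf are my own illustrative
relation names):

   ;; Position 1: copying creates a new image.
   (forall (x y) (if (copyOf y x)
                     (and (Image y) (not (= y x)))))

   ;; Position 2: a copy is a new token of the *same* image, so
   ;; Image names a type whose tokens are the individual copies.
   (forall (x y) (if (copyOf y x)
                     (exists (i) (and (Image i)
                                      (instanceOf x i)
                                      (instanceOf y i)))))

Both are perfectly coherent; they just disagree about what 'Image'
denotes, which is exactly the kind of question our pre-formal
intentions leave undecided.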
It is experiences like this (repeated so many times that this has now
become the object of methodologies in its own right: check out the
literature of 'knowledge extraction') which make me so convinced that
the idea that a committee is going to magically agree on a single
universal upper ontology, which is then going to be accepted with
cries of gratitude by a fair fraction of the human race, is a complete
fantasy. It is based, I suspect, on the idea that since humans manage
to communicate well enough to cooperate in a single world, they must
be thinking about that world in more or less the same way. But not
only does this not follow, the conclusion is demonstrably false. I
invite anyone to actually try it, and see for themselves.
But back to the main point. The other worry I have about 'intended
interpretation' is that, even when working alone, one finds that the
very process of writing the formal axioms sharpens and sometimes
forces one to modify one's own pre-formal intuitions. Is a thing part
of itself? Almost everyone unschooled in mathematics or formal
techniques will say, no. Almost everyone who has been exposed to
algebra or formalization techniques will say, yes. The change of mind
is not really a change in ideas, so much as a recognition that
allowing this limiting case of parthood gives so much cleaner and more
useful formal description that it is worth putting up with the slight
linguistic frisson which wants a 'part' to be something, well, smaller
or less significant than the whole (and wants a subset to be less than
the whole, and prefers < to <=, and so on.) Insisting upon fixing the
intended meanings in cases like this is in fact a symptom of bad
ontology engineering practice, a kind of naive stubbornness that
refuses to allow useful engineering optimizations. So even apart from
the fact that we all disagree, and even allowing for the fact that we
are talking about intentions, I still think Pat C is wrong :-)
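For the record, the cleanup that the inclusive reading buys is easy to
exhibit, in standard textbook mereology (same notation as above):

   ;; Reflexivity: everything is a part of itself.
   (forall (x) (part x x))

   ;; The everyday, strictly-smaller sense is recovered as 'proper part'.
   (forall (x y) (iff (properPart x y)
                      (and (part x y) (not (= x y)))))

   ;; Overlap then has a one-line definition.
   (forall (x y) (iff (overlaps x y)
                      (exists (z) (and (part z x) (part z y)))))

With only proper parthood available, that last definition breaks down
at the edges: an atom, having no proper parts, would not even overlap
itself.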
Pat H
------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 mobile
phayes@ihmc.us http://www.ihmc.us/users/phayes