
Re: [ontolog-forum] Foundation ontology, CYC, and Mapping

To: "'[ontolog-forum]'" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Matthew West" <dr.matthew.west@xxxxxxxxx>
Date: Tue, 2 Mar 2010 21:38:12 -0000
Message-id: <4b8d853f.8d0acc0a.6a03.55bd@xxxxxxxxxxxxx>
Dear Pat,    (01)

> >
> > Let me have a go at this.
> >
> >>>  The disconnect between PatH's view of "meaning" and mine is that
> >>> he is
> >>> content to believe that the meanings of the elements used in
> >> programs,
> >>> databases, ontologies (e.g. time, distance, physical object,
> dollar,
> >>> person)
> >>> all change every time we add a new assertion about unicorns, and I
> >>> am not.
> >>
> >> It is not a matter of being content to believe. I am asserting this
> >> AS
> >> A FACT, and you are simply in denial about elementary facts of
> >> semantic theory. Now, of course, you are free to invent an
> >> alternative
> >> semantic theory, one that supports your intuitions about meanings
> >> being fixed when axioms change, but I would like to see that theory
> >> given some reasonably precise flesh before proceeding to discuss
> this
> >> matter very much further.
> >
> > MW: I suspect the real problem here is that you are each looking
> > through
> > opposite ends of the telescope. Let me describe the different views:
> >
> > PatH looks at it from the theory end, and says that when you change
> > an axiom
> > in a theory there is a different set of models that it picks out.
> > Absolutely
> > right. It "means" something different.
> >
> > PatC looks at it from the other end. He has a particular intended
> > interpretation, and his question is: if he changes this axiom does
> > it still
> > pick out his intended interpretation (he doesn't care about any
> > unintended
> > interpretations). If it does, as far as he is concerned it "means"
> > the same
> > thing. Also true.
> >
> > I think there is something to be accommodated from both sides here in
> > practical ontology development
> 
> No doubt.  In my defense, I will point out that Pat C was (until
> recently) referring to the meaning that the ontology primitives
> *actually have*, not to intentions. Hence my insistence on the point.
> 
> As you say, both true. However, I have doubts about the notion of a
> fixed intended interpretation, even if it is only an intention. First,
> it is important to note that this is not (usually) quite the same
> notion of interpretation as is used when we speak about formal
> semantics, what you refer to above as 'models'. It can be, but it is
> unusual for intentions to be that precise.     (02)

MW: I quite agree, and there is always the problem a football coach (I
believe) once complained of to the press: "They wrote down what I said, not
what I meant!" And that is before we even change our minds, perhaps for the
perfectly reasonable reason of seeing that we are wrong.    (03)
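
A minimal, purely illustrative sketch of the two ends of the telescope (in
Python, with an invented two-element domain and a single made-up predicate P;
nothing here comes from anyone's actual ontology): it enumerates every
interpretation, then compares a theory T1 with the theory T2 obtained by
adding one axiom. The set of models shrinks, which is PatH's sense in which
the theory now "means" something different, while a particular intended
interpretation can remain among the models, which is PatC's sense in which it
still "means" the same thing.

from itertools import product

DOMAIN = ["a", "b"]

# An "interpretation" here is just an assignment of a truth value to the
# unary predicate P for each individual in the domain.
interpretations = [dict(zip(DOMAIN, values))
                   for values in product([True, False], repeat=len(DOMAIN))]

def models(axioms):
    """Return the interpretations that satisfy every axiom in the list."""
    return [i for i in interpretations if all(ax(i) for ax in axioms)]

# T1 says only "something is P"; T2 adds the axiom "a is P".
T1 = [lambda i: any(i.values())]
T2 = T1 + [lambda i: i["a"]]

intended = {"a": True, "b": False}       # the one interpretation PatC "has in mind"

print(len(models(T1)), len(models(T2)))  # 3 2  -> the theory's model set has changed
print(intended in models(T1))            # True
print(intended in models(T2))            # True -> the intended interpretation is still picked out

The toy domain makes the intended interpretation artificially precise, which,
as discussed above, real intentions rarely are.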

> It is rare to find any
> concept on which humans agree about its meaning well enough to
> completely and absolutely rule out every logically possible way to
> distinguish alternatives. I certainly have never come across a single
> example of this. At the very least, there will be areas of doubt,
> areas where people simply have not thought out the consequences of
> their own ideas well enough to come to a decision. Right now, just for
> one example, I am engaged with a colleague trying to develop an
> ontology of images, and we have been debating for several days about
> exactly what counts as an image. If I copy a digital image, have I
> made a new image, or simply a new copy of the *same* image? How exact
> does the copy need to be? (JPEG compression is lossy, for example, so
> some pixels may change, yet we do not usually say that the image has
> changed.) Is part of an image also an image? Is a 'view' of part of
> the actual world, e.g. when looking out of a window, itself an image, or
> does it only become one when a camera shutter is opened? And so on
> (and on...) Now, anyone who has tried to actually develop an ontology
> in a group of people will recognize this kind of situation
> immediately. Two competent, even skilled, native speakers of the same
> language with a shared culture, etc., can still disagree, or at least
> find a lot to discuss, when they have to capture their 'intended'
> meanings in a formal framework. And the final result does not exactly
> conform to either of their intentions: it is a compromise. Neither of
> us is wholly comfortable with the final result. If it were up to us,
> alone, we would have done it our way, and there would have been two
> ontologies. And this is just two people, and indeed two people who
> know one another well and are about as motivated as two people can be
> to want to agree and for their joint project to succeed.     (04)

MW: Quite. The problem with the way you put it, though, is that I suspect
most people would say you paint the case as simply hopeless, and conclude
that we should therefore not do anything. Yet you are at least developing an
ontology of images, so you clearly don't think it is really hopeless.    (05)

> Longman's
> dictionary isn't going to be any help for us. We have consulted every
> authority we can find: existing ontologies, the Getty vocabularies,
> the YAGO vocabularies in DBpedia, the press standards used for
> describing news images, EXIF, the Umbel distillation of Cyc, thesauri
> developed for museum curation, the lot. Guess what: on this (and many
> other) points of detail, they are either silent, or they disagree with
> one another.    (06)

MW: I agree with you that Longman's will not do much good for us, though I
equally doubt it will do much harm. Frankly, I'd rather PatC got on with
doing something with it, so that we had some evidence to look at rather than
debating a priori what the effect would be.
> 
> It is experiences like this (repeated so many times that this has now
> become the object of methodologies in its own right: check out the
> literature of 'knowledge extraction') which make me so convinced that
> the idea that a committee is going to magically agree on a single
> universal upper ontology, which is then going to be accepted with
> cries of gratitude by a fair fraction of the human race, is a complete
> fantasy.     (07)

MW: Again I agree. Been there, got the T-shirt. But I don't think that is
the point. There inevitably are, and will be, several upper/foundation
ontologies. I do think there is a chance to abstract something that is, as
John says, underspecified, and I do think that could be useful. This is not
really what PatC is after, but it is not incompatible with his aims either.    (08)

> It is based, I suspect, on the idea that since humans manage
> to communicate well enough to cooperate in a single world, that they
> must be thinking about that world in more or less the same way. But
> not only does this not follow, the conclusion is demonstrably false. I
> invite anyone to actually try it, and see for themselves.    (09)

MW: I agree, a common misconception. That is why I think we need to allow
multiple upper ontologies and mappings between them. Underspecified/abstract
elements would be useful to make the mapping easier. They do take on
different meanings when added to different ontologies, but that is fine.
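
To picture what such underspecified elements buy us (a rough sketch with
invented names, not a proposal for any specific upper ontology): the abstract
element carries only what the ontologies agree on, and each mapping is then a
simple bridge axiom from the more committed local term into it, for example:

\begin{align*}
  &\forall x\; (\mathrm{PhysicalObject}_{A}(x) \rightarrow \mathrm{SpatialThing}(x))\\
  &\forall x\; (\mathrm{SpatioTemporalExtent}_{B}(x) \rightarrow \mathrm{SpatialThing}(x))
\end{align*}

Each ontology still adds its own further axioms around its local term, so the
shared element does take on a different, more specific meaning in each
context, as noted above, but the translation route through the underspecified
term is left intact.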
> 
> But back to the main point. The other worry I have about 'intended
> interpretation' is that, even when working alone, one finds that the
> very process of writing the formal axioms sharpens and sometimes
> forces one to modify one's own pre-formal intuitions. Is a thing part
> of itself? Almost everyone unschooled in mathematics or formal
> techniques will say, no. Almost everyone who has been exposed to
> algebra or formalization techniques will say, yes. The change of mind
> is not really a change in ideas, so much as a recognition that
> allowing this limiting case of parthood gives such a cleaner and more
> useful formal description that it is worth putting up with the slight
> linguistic frisson which wants a 'part' to be something, well, smaller
> or less significant than the whole (and wants a subset to be less than
> the whole, and prefers < to =<, and so on.) Insisting upon fixing the
> intended meanings in cases like this is in fact a symptom of bad
> ontology engineering practice, a kind of naive stubbornness that
> refuses to allow useful engineering optimizations.     (010)

MW: I still struggle with this. I agree about the cleaner theory, but I'm
not so sure about going against intuitions of parthood: you can get
unexpected results.    (011)
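
For reference, the formal move being described amounts to treating 'part of'
as a partial order and recovering the everyday, strictly-smaller intuition as
a defined 'proper part' relation. A sketch of the standard axioms (textbook
classical mereology, not the axioms of any particular ontology in this
discussion), with P for part-of and PP for proper-part:

\begin{align*}
  &\forall x\; P(x,x) && \text{(reflexivity: everything is part of itself)}\\
  &\forall x \forall y\; (P(x,y) \land P(y,x) \rightarrow x = y) && \text{(antisymmetry)}\\
  &\forall x \forall y \forall z\; (P(x,y) \land P(y,z) \rightarrow P(x,z)) && \text{(transitivity)}\\
  &\forall x \forall y\; (PP(x,y) \leftrightarrow P(x,y) \land x \neq y) && \text{(proper part: the pre-formal notion)}
\end{align*}

On this reading the everyday intuition is carried by PP rather than P;
whether that is enough to avoid the unexpected results Matthew worries about
is exactly the kind of question the formalisation forces into the open.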

> So even apart from
> the fact that we all disagree, and even allowing for the fact that we
> are talking about intentions, I still think Pat C is wrong :-)    (012)

MW: Well sure, but so are you and so am I.     (013)

However, I do think there is a way forward that will, at worst, produce
some evidence allowing us to make a better attempt another time.    (014)

Regards    (015)

Matthew West                            
Information  Junction
Tel: +44 560 302 3685
Mobile: +44 750 3385279
matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx
http://www.informationjunction.co.uk/
http://www.matthew-west.org.uk/    (016)

This email originates from Information Junction Ltd. Registered in England
and Wales No. 6632177.
Registered office: 2 Brookside, Meadow Way, Letchworth Garden City,
Hertfordshire, SG6 3JE.    (017)





_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (018)
