Pat, (01)
I agree that it is a common fallacy to assume that the computer can
somehow read our intentions - and it is important to check that we are not
committing it ourselves (and to point it out when we see it).
And that the formal ontology we give the computer has a formal semantics
that may constrain the intended interpretation, but does not (and is not
intended to) capture it. (02)
My intention was to make (in a sense) the inverse point - that when we build
an ontology it is a similar fallacy to say we are only interested in the
formal semantics. (Does anyone on the list think this is not a fallacy? I
have met people who do.) (03)
(To reiterate your buggy software point below a little more prosaically, but
hopefully accurately.) When we construct an ontology we have an intended
interpretation - if we are building a financial ontology, then when we add
axioms about equities we have an intended interpretation for 'equities', and
we judge whether the axioms make sense in terms of this interpretation.
Similarly, when we deploy the ontology, we use it to work with the intended
interpretation - we would not normally deploy the financial ontology in,
say, an offshore oil processing system and interpret 'equities' as oil rigs. (04)
So, clearly, without an intended interpretation, the ontology is of no
practical use. (05)
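To make this concrete, here is a minimal sketch (my own toy, not taken from either message or from any real financial ontology): a single hypothetical axiom, "every equity is tradable", checked against every possible interpretation over a two-element domain. The predicate names 'equity' and 'tradable' are purely illustrative.

```python
from itertools import combinations

DOMAIN = (0, 1)  # a toy two-element domain of individuals

def subsets(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for n in range(len(xs) + 1)
            for c in combinations(xs, n)]

def satisfies(equity, tradable):
    """The single toy axiom: forall x, equity(x) -> tradable(x)."""
    return all(x not in equity or x in tradable for x in DOMAIN)

# An interpretation assigns each predicate some subset of the domain.
models = [(e, t) for e in subsets(DOMAIN) for t in subsets(DOMAIN)
          if satisfies(e, t)]

# 9 of the 16 possible interpretations satisfy the axiom - including ones
# where 'equity' could just as well denote oil rigs.
print(len(models))  # -> 9
```

The count is the point: the axiom constrains the interpretations but does not single one out - the intended interpretation has to come from somewhere other than the formal semantics.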
This seems to me to imply that when we are thinking about the process of
developing an ontology, it is useful to have some way of talking about and
sharing this intended interpretation. In the scenario you describe below,
where you are the (only) person writing the program, then "My intended
behavior is quite clear", but in larger teams, and where external QA is
required, more of a framework is needed. So the more the 'intended
interpretation' can be practically formalised the better - it is not a good
strategy to dismiss it as subjective and irrelevant. (The notion of intended
interpretation is a good starting point.) (06)
I suspect this is partly why PatC (and others) make some of the points they
do (PatC, feel free to tell me I am mistaken). (07)
If one can agree an intended interpretation for an FO (or some high-level
elements of one), then this is inherited by the sub-types - so it is a
cost-effective mechanism for tying down the interpretation. (As an aside, my
gut feeling is that PatC's strategy for doing this is not going to bear
fruit - but it is good research to explore the options.)
At the lower level, if one can agree the intended interpretation for some
terms with a high degree of certainty (which may require some work) - then
this is independent of the theory/ontology the terms are embedded in and is
portable across them. (08)
Where this works well is with proper names (especially if you assume a
direct reference theory (an intended interpretation?)).
It works less well where the terms (in the wild) come with connotation/sense. (09)
I think some of the feeling expressed in earlier posts that high-level terms
(such as 'set') can be ported across theories with contradictory axioms
arises from (mistakenly) assuming they have simply portable intended
interpretations.
That does not mean that other terms are not simply portable or that we do
not need to work with intended interpretations. Indeed, given that the
practical value of ontologies arises from the intended interpretations, in
many situations when producing ontologies these may be key and the formal
semantics merely a supporting mechanism. (010)
Regards,
Chris (011)
> -----Original Message-----
> From: Pat Hayes [mailto:phayes@xxxxxxx]
> Sent: 02 March 2010 02:05
> To: mail@xxxxxxxxxxxxxxxxxx; [ontolog-forum] ; Chris Partridge
> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
>
>
>
> On Feb 26, 2010, at 11:16 AM, Chris Partridge wrote:
>
> > Pat,
> >
> > Agree with a lot of what you say below, but I am having trouble
> > working out exactly what you mean with this.
> >
> > But I
> >> was talking about computational ontologies. Computational ontologies
> >> are artifacts, written in formal logical notations. They do not
> >> simply 'have'
> > natural
> >> meanings, meanings-in-the-wild, in the way that human natural
> >> languages
> > are
> >> said to have. They do not have intended meanings. We may intend them
> >> to
> > have
> >> a meaning, but a computational ontology is just as artificial and
> >> "formal"
> > (which
> >> is to say, "mathematical" in the sense of being mathematically
> >> described)
> > as any
> >> other artifact. And in the case of logically expressed formalisms,
> >> like
> > those used
> >> in computational ontologies, the Tarskian theory of meaning applies
> >> in a
> > special
> >> way.
> >
> > My problem is with the claim that computational ontologies do not have
> > intended meanings, as it looks like you are saying above.
>
> OK, I spoke too quickly. Of course they have intended meanings in the
> sense that whoever wrote them probably intended them to mean something.
> What I meant was, that this is not the notion of 'meaning' that can be
> analyzed by a semantic theory and, more to the point, connected to
> formal inference. The completeness theorem cuts both ways. If we want
> some conclusion to follow (because we intend it to) but it does not, in
> fact, follow, then the completeness theorem tells us that our axioms,
> whether we like this or not, do not IN FACT mean what we intended them
> to mean. What they ACTUALLY mean allows an interpretation which makes
> them true while also rendering our intended conclusion false. And it is
> foolish, and poor methodology, to just stamp our foot and insist that
> the axioms REALLY DO mean what we want them to mean. What we should do
> in such a case is fix the axioms.
>
> The situation is exactly similar to running buggy software. I write a
> program to do a task. I know exactly what it should do. My intended
> behavior is quite clear. But it does something else. Moral: I need to
> fix it. Bad strategy to insist, no, it really is doing what I intend it
> to do, because running it is just a formal theory of what it is
> supposed to do. What it is *really* doing is what I say it should be
> doing.
>
> Pat H (012)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (013)