One view that I thought relevant to this conversation, if I understand it correctly, is one developed some time ago by Nicola Guarino in "Formal Ontology and Information Systems". In it he distinguishes a conceptualization process from a subsequent formalization of that conceptualization with ontological commitments. Here is the way he presents it:
"Together with Pierdaniele Giaretta, I have discussed such a definition in .. arguing that, in order for it to have some sense, a different, intensional account of the notion of conceptualization has to be introduced. I try here to further clarify these notions, making clear the relationship between an ontology, its intended models, and a conceptualization. The problem with Genesereth and Nilsson’s notion of conceptualization is that it refers to ordinary mathematical relations on D, i.e. extensional relations. These relations reflect a particular state of affairs: for instance, in the blocks world, they may reflect a particular arrangement of blocks on the table. We need instead to focus on the meaning of these relations, independently of a state of affairs: for instance, the meaning of the “above” relation lies in the way it refers to certain couples of blocks according to their spatial arrangement. We need therefore to speak of intensional relations: we shall call them conceptual relations, reserving the simple term “relation” to ordinary mathematical relations."
After a technical discussion, omitted here for the time being, he goes on to say:
" In general, there will be no way to reconstruct the ontological commitment of a language from a set of its intended models, since a model does not necessarily reflect a particular world: in fact, since the relevant relations considered may not be enough to completely characterize a state of affairs, a model may actually describe a situation common to many states of affairs. This means that it is impossible to reconstruct the correspondence between worlds and extensional relations established by the underlying conceptualization. A set of intended models is therefore only a weak characterization of a conceptualization: it just excludes some absurd interpretations, without really describing the “meaning” of the vocabulary."
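Guarino's distinction between conceptual (intensional) and extensional relations can be made concrete in a few lines of code. The following is only a toy sketch under naming of my own (a blocks world as a dict of stack heights, an `above` function); it is not Guarino's formalism:

```python
# Toy blocks world: a "world" (state of affairs) is one particular
# arrangement of blocks, here a dict mapping each block to its height.
world1 = {"a": 2, "b": 1, "c": 0}   # a on b on c
world2 = {"a": 0, "b": 1, "c": 2}   # c on b on a

def above(world):
    """The *conceptual* (intensional) relation: a rule that yields
    the extensional 'above' relation for any given world."""
    return {(x, y) for x in world for y in world
            if world[x] > world[y]}

# The *extensional* relations are just the sets of pairs picked out
# in one particular world -- what Genesereth and Nilsson work with:
ext1 = above(world1)
ext2 = above(world2)

# The two extensions differ, but the conceptual relation -- the
# function `above` itself -- is one and the same. Knowing only ext1
# (a single intended model) would not let us recover the rule, which
# is the point of the passage above.
```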
SOCoP Executive Secretary
On Tue, Mar 2, 2010 at 8:58 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
Dear Matthew, Chris, Pat, Pat, and Gary,
I believe that Matthew, Chris P., and Gary have made some points
that can clarify and resolve the arguments between Pat and Pat.
I'd like to emphasize some points and add a few more.
MW> I suspect the real problem here is that you [Pat and Pat] are
> each looking through opposite ends of the telescope. Let me
> describe the different views:
>
> PatH looks at it from the theory end, and says that when you change
> an axiom in a theory there is a different set of models that it
> picks out. Absolutely right. It "means" something different.
>
> PatC looks at it from the other end. He has a particular intended
> interpretation, and his question is: if he changes this axiom does
> it still pick out his intended interpretation (he doesn't care
> about any unintended interpretations). If it does, as far as he
> is concerned it "means" the same thing. Also true.

Yes. And Chris emphasizes a point that is often lost in the
debates about logic:
CP> ... without an intended interpretation, the ontology is of
> no practical use. This seems to me to imply that when we are
> thinking about the process of developing an ontology, it is
> useful to have some way of talking about and sharing this
> intended interpretation.

PC> But recognizing that the goal of the FO is to support the
> purposes of the programmers and ontologists who use it in
> applications, it is the *intended meanings* of the ontology
> elements that are of primary importance, because they determine
> what goes into the ontology.

Yes, but the path from intention to implementation is never
straightforward.
PC> The users are not at the mercy of any unintended inference
> that, willy-nilly, may pop up when a change is made to the
> ontology.

The users really *are* at the mercy of unintended implications:

1. There is no such thing as a vague program. Every computer
program does something that can be very precisely specified.

2. But what it does so precisely may have no similarity to
what the programmer, system analyst, or user had intended.

3. Customers don't know what they want until they see what they
get. In other words, "Be careful what you ask for."
These points imply that the process of mapping intentions to any
kind of formalism (programs or axioms) must be an iterative process
that compares the intentions to the implications at each stage.
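That compare-and-revise loop can be sketched in miniature. Everything below (the toy forward-chainer, the tweety facts) is invented for illustration and is nobody's actual methodology:

```python
def closure(facts, rules):
    """Forward-chain to the set of all implications.
    Rules are (premises, conclusion) pairs over atomic strings."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Draft axioms: every bird flies; penguins are birds.
rules = [
    (frozenset({"bird(tweety)"}), "flies(tweety)"),
    (frozenset({"penguin(tweety)"}), "bird(tweety)"),
]
facts = {"penguin(tweety)"}

# The iteration step: compute what the axioms actually imply and
# compare it against what was intended.
intended = {"penguin(tweety)", "bird(tweety)"}
implied = closure(facts, rules)
unintended = implied - intended
print(unintended)   # {'flies(tweety)'} -- a surprise to revise away
```

The surfaced surplus (`flies(tweety)`) is exactly the kind of unintended implication that drives the next round of revision.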
PC>> The disconnect between PatH's view of "meaning" and mine is that
>> he is content to believe that the meanings of the elements used
>> in programs, databases, ontologies (e.g. time, distance, physical
>> object, dollar, person) all change every time we add a new
>> assertion about unicorns, and I am not.

PH> It is not a matter of being content to believe. I am asserting
> this AS A FACT, and you are simply in denial about elementary facts
> of semantic theory.

This exchange revolves around two vague words: 'change' and 'meaning'.
I suggest that we restate the issues in terms of concrete examples,
such as Gary's:
GBC> Since Euclidean geometry, an axiomatized system, was mentioned,
> it reminded me of Euclid's first 4 postulates (axioms), which
> involve primitives like point. When a 5th axiom is added, it
> didn't seem like the primitives changed, and any conclusion using
> just the first 4 axioms would be the same. If the 5th axiom is
> involved, then new conclusions are reached. On the face of it,
> this seems to be an intuition that supports (in part) what Pat C
> was saying.

The example of Euclidean geometry and the different varieties of
non-Euclidean geometry illustrates the issues very well.
If you consider Euclid's first four postulates by themselves, you
get a primitive, underspecified version of geometry, which is
neutral with respect to issues about parallel lines. One might
reasonably claim that this simple geometry is sufficient to capture
the basic meaning of the word 'point' and its relationships to
other words, such as 'line'.
But the 5th axiom about parallel lines is critical to the meaning
of the word 'line', which changes when different axioms are added
to the underspecified, 4-axiom theory.
This is one more reason why looking at the lattice of all possible
theories can help resolve these disputes: The theories in the
infinite lattice never change, but the finite collection of
implemented theories may change when we add new theories to it.
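For what it's worth, the lattice point can be illustrated with a toy encoding: theories as axiom sets, ordered by inclusion. The axiom names below are placeholders, not Euclid's actual postulates:

```python
# Toy "lattice of theories": a theory is a frozenset of axiom names.
euclid4 = frozenset({"ax1", "ax2", "ax3", "ax4"})      # neutral core
euclidean = euclid4 | {"parallel_postulate"}            # one extension
hyperbolic = euclid4 | {"many_parallels"}               # a rival one

def extends(stronger, weaker):
    """In this toy ordering, a theory extends another if it keeps
    all of its axioms (and so proves at least as much)."""
    return weaker <= stronger

# Both geometries sit below the neutral 4-axiom theory; neither
# extends the other.
assert extends(euclidean, euclid4)
assert extends(hyperbolic, euclid4)
assert not extends(euclidean, hyperbolic)

# Their shared core is the 4-axiom theory, and anything derivable
# from that core alone remains derivable in both extensions --
# Gary's intuition about the first four postulates.
shared_core = euclidean & hyperbolic
```

The lattice of all such axiom sets is fixed once and for all; what changes is only which nodes of it we have bothered to implement.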
CP> It may turn out that a more important consideration is to
> translate the formal structure into text to make it more readable
> for the users - ISO 19526 does this (though you may find the English
> a bit stilted). My gut instinct would be to formalise where possible
> for explanatory reasons - I think these are at least as important in
> large ontologies as inference.

I would call the "stilted" English of ISO 19526 a "controlled NL",
which I have been recommending for years. The version in ISO 19526
was written by humans, but I would urge them to test it by automated
means to verify the mapping to logic. If any errors are detected,
the controlled NL could be revised by semi-automated means.
Controlled natural languages are an important intermediate notation
that can help narrow the inevitable gaps between the intended
meaning and the implemented formalism.
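The automated round-trip check suggested above could be sketched as follows. This is a toy: the single sentence pattern and the logical notation are invented for illustration, and a real controlled NL (e.g. Attempto Controlled English) covers far more:

```python
import re

def to_logic(sentence):
    """Map one controlled-English pattern to a logical form."""
    m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
    if m is None:
        return None          # not in the controlled fragment
    x, y = m.groups()
    return f"forall v: {x}(v) -> {y}(v)"

def to_english(formula):
    """Map the logical form back to controlled English."""
    m = re.fullmatch(r"forall v: (\w+)\(v\) -> (\w+)\(v\)", formula)
    if m is None:
        return None
    x, y = m.groups()
    return f"Every {x} is a {y}."

def check(sentence):
    """Verify the sentence maps to logic and back unchanged;
    failures flag text that needs revision by human or machine."""
    formula = to_logic(sentence)
    return formula is not None and to_english(formula) == sentence

print(check("Every pump is a device."))     # True
print(check("Pumps are mostly devices."))   # False: outside the CNL
```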
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx