Dear David,
You wrote:
… In this
example, the terms as used in C' and C'' are effectively specializations (via
added constraints) of the term in C. To transmit a C' or C'' thing as a C
thing is a fair substitution; but to receive a C thing as a C' or C'' thing
does an implicit narrowing that is not necessarily valid.
…
In practice, though, such
an understanding of the differences (or that there are differences) among similar terms as used in C, C' and
C'' often comes out only after a failure has occurred. In real-world use of any
sort of language that does not have mechanical, closed-world semantics, that
potentially invalid narrowing is not only unpreventable, but is often the
"least worst" translation that can be made into the receiver's
conceptualization. Every organization and every person applies their own
semantic baggage (added constraints) to supposedly common terms; said
"local modifications" are discovered, defined and communicated only after a problem arises.
Your analysis seems promising, but I
suggest there is at least one more complication: the description of C must also
have been loaded with the “semantic baggage” of the person who
defined it, just as C' and C'' were. Therefore C seems likely also to
be a specialization of some even more abstract concept C- which may not have
contained the baggage of C, C', or C''.
So far as I have seen in our discussions, there is no pure abstraction C- in
the descriptions of concepts.
Every concept seems to have been modulated by the proposer’s semantic
baggage. Since it is always a PERSON who produces the conceptualization C
in the first place, it isn’t possible for C to be that abstract.
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Flater
Sent: Friday, March 09, 2012 10:19 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Constructs, primitives, terms
On 3/5/2012 9:08 AM, John F. Sowa wrote:
Base vocabulary V: A collection of terms defined precisely at a level
of detail sufficient for interpreting messages that use those terms
in a general context C.
System A: A computational system that imports vocabulary V and uses
the definitions designated by the URIs. But it uses the terms in
a context C' that adds further information that is consistent with C.
That info may be implicit in declarative or procedural statements.
System B: Another computational system that imports and uses terms
in V. B was developed independently of A. It may use terms in V
in a context C'' that is consistent with the general context C,
but possibly inconsistent with the context C' of System A.
Problem: During operations, Systems A and B send messages from
one to the other that use only the vocabulary defined in V.
But the "same" message, which is consistent with the general
context C, may have inconsistent implications in the more
specialized contexts C' and C''.
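[Editorial illustration: Sowa's scenario can be sketched in a few lines of Python. The term "deadline" and the specific constraints attached to it in C' and C'' are hypothetical examples, not part of any actual vocabulary V discussed on this list; the point is only that two specializations can each be consistent with C yet yield inconsistent interpretations of the same message.]

```python
# Hypothetical sketch of the V / System A / System B scenario.
# Vocabulary V (context C) defines "deadline" only as a calendar date.
# System A (context C') implicitly reads a deadline as end of business (17:00).
# System B (context C'') implicitly reads a deadline as end of day (23:59).
import datetime

def valid_in_C(msg):
    # Context C: the message is well-formed if "deadline" is a calendar date.
    return isinstance(msg.get("deadline"), datetime.date)

def interpret_in_C_prime(msg):
    # System A's added constraint: deadlines mean 17:00 on that date.
    d = msg["deadline"]
    return datetime.datetime(d.year, d.month, d.day, 17, 0)

def interpret_in_C_double_prime(msg):
    # System B's added constraint: deadlines mean 23:59 on that date.
    d = msg["deadline"]
    return datetime.datetime(d.year, d.month, d.day, 23, 59)

msg = {"deadline": datetime.date(2012, 3, 9)}  # uses only vocabulary V
assert valid_in_C(msg)                # the message is consistent with C
a = interpret_in_C_prime(msg)         # what System A understands
b = interpret_in_C_double_prime(msg)  # what System B understands
assert a != b  # the "same" message has inconsistent implications in C' and C''
```

Both readings satisfy C, and neither system has violated the shared definitions; the inconsistency lives entirely in the added constraints that C' and C'' silently impose.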
My thinking began similarly to what Patrick Cassidy wrote. In this example,
the terms as used in C' and C'' are effectively specializations (via added
constraints) of the term in C. To transmit a C' or C'' thing as a C thing
is a fair substitution; but to receive a C thing as a C' or C'' thing does an
implicit narrowing that is not necessarily valid.
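[Editorial illustration: the substitution asymmetry described above can be made concrete with a minimal, hypothetical class sketch. The names C, CPrime, and the non-negativity constraint are invented for illustration; the point is that widening (C' thing sent as a C thing) always succeeds, while narrowing (C thing received as a C' thing) can violate the added constraint.]

```python
# Hypothetical sketch: C' specializes C by adding a constraint, so every
# C' instance is a valid C instance, but not vice versa.

class C:
    """General concept: any integer quantity."""
    def __init__(self, value):
        self.value = value

class CPrime(C):
    """Same term as used in context C', with an added constraint: value >= 0."""
    def __init__(self, value):
        if value < 0:
            raise ValueError("C' requires a non-negative quantity")
        super().__init__(value)

def transmit_as_C(thing: C) -> C:
    # Widening: a C' thing transmitted as a C thing is a fair substitution.
    return C(thing.value)

def receive_as_C_prime(thing: C) -> CPrime:
    # Narrowing: a C thing received as a C' thing; the implicit added
    # constraint may not hold, so this is not necessarily valid.
    return CPrime(thing.value)

widened = transmit_as_C(CPrime(5))   # always safe
try:
    receive_as_C_prime(C(-1))        # valid in C, but violates the C' constraint
except ValueError:
    pass  # the invalid narrowing is detected only because C' happens to check
```

In real exchanges the constraint is usually implicit rather than checked, so the narrowing fails silently instead of raising an error, which is exactly why the difference surfaces only after a failure.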
In practice, though, such an understanding of the differences (or that there are differences) among similar terms as
used in C, C' and C'' often comes out only after a failure has occurred.
In real-world use of any sort of language that does not have mechanical,
closed-world semantics, that potentially invalid narrowing is not only
unpreventable, but is often the "least worst" translation that can be
made into the receiver's conceptualization. Every organization and every
person applies their own semantic baggage (added constraints) to supposedly
common terms; said "local modifications" are discovered, defined and
communicated only after a problem
arises.
Should we then blame the common model (ontology, lexicon, schema, exchange
format, whatever) for having been incomplete or wrong for the task at
hand? Nobody wants to complicate the model with the infinite number of
properties/attributes that don't matter. You just need to model exactly
the set of properties/attributes that are necessary and sufficient to prevent
all future catastrophes under all integration scenarios that will actually
happen, and none of those that won't happen. Easy! If you can predict the
future.
In digest mode,
--
David Flater, National Institute of Standards and Technology, U.S.A.