
Re: [ontolog-forum] Constructs, primitives, terms

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: David Flater <dflater@xxxxxxxx>
Date: Fri, 09 Mar 2012 13:19:19 -0500
Message-id: <4F5A49A7.3020903@xxxxxxxx>
On 3/5/2012 9:08 AM, John F. Sowa wrote:
> Base vocabulary V: A collection of terms defined precisely at a level
> of detail sufficient for interpreting messages that use those terms
> in a general context C.
>
> System A: A computational system that imports vocabulary V and uses
> the definitions designated by the URIs. But it uses the terms in
> a context C' that adds further information that is consistent with C.
> That info may be implicit in declarative or procedural statements.
>
> System B: Another computational system that imports and uses terms
> in V. B was developed independently of A. It may use terms in V
> in a context C'' that is consistent with the general context C,
> but possibly inconsistent with the context C' of System A.
>
> Problem: During operations, Systems A and B send messages from
> one to the other that use only the vocabulary defined in V.
> But the "same" message, which is consistent with the general
> context C, may have inconsistent implications in the more
> specialized contexts C' and C''.

My thinking began along lines similar to what Patrick Cassidy wrote.  In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the term in C.  To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing performs an implicit narrowing that is not necessarily valid.
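The widening/narrowing asymmetry above can be sketched in type terms.  This is only an illustrative analogy, not anything from the original systems: the class names, the "Measurement" term, and the metric/imperial constraints are all hypothetical stand-ins for a shared term in V and the extra constraints added in C' and C''.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """The term as defined in the base vocabulary V (context C): no unit constraint."""
    quantity: str
    value: float

@dataclass
class MetricMeasurement(Measurement):
    """System A's context C': the implicit added constraint that values are in meters."""
    pass

@dataclass
class ImperialMeasurement(Measurement):
    """System B's context C'': the incompatible constraint that values are in feet."""
    pass

def transmit(m: Measurement) -> Measurement:
    """Sending a C' or C'' thing as a C thing: a sound widening (upcast)."""
    return Measurement(m.quantity, m.value)

def receive_as_metric(m: Measurement) -> MetricMeasurement:
    """Receiving a C thing as a C' thing: an unchecked narrowing (downcast).
    Nothing in the message guarantees that C's constraints are satisfied."""
    return MetricMeasurement(m.quantity, m.value)

# System B sends 10 (feet); System A silently interprets it as 10 (meters).
sent = transmit(ImperialMeasurement("altitude", 10.0))
got = receive_as_metric(sent)
print(got.value)  # 10.0 -- the same number, but A and B now disagree on what it means
```

The message itself is perfectly well-formed in context C; the failure only appears when each side re-attaches its own local constraints to it.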

In practice, though, such an understanding of the differences (or even that there are differences) among similar terms as used in C, C' and C'' often emerges only after a failure has occurred.  In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization.  Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; such "local modifications" are discovered, defined and communicated only after a problem arises.

Should we then blame the common model (ontology, lexicon, schema, exchange format, whatever) for having been incomplete or wrong for the task at hand?  Nobody wants to complicate the model with the infinite number of properties/attributes that don't matter.  You just need to model exactly the set of properties/attributes that are necessary and sufficient to prevent all future catastrophes under all integration scenarios that will actually happen, and none of those that won't happen.  Easy, if you can predict the future!

In digest mode,
David Flater, National Institute of Standards and Technology, U.S.A.

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
