Agree completely, David. The problem is that people have a natural tendency to overlook their own implicit context and scope assumptions, so their pragmatic analysis of which context scope dimensions matter ignores dimensions that turn out to lead to incorrect or even disastrous outcomes when some unanticipated portion of context scope space is encountered in real-world operations. The NCOIC SCOPE model was developed to assist such an analysis by providing a fairly extensive (but not exhaustive) and extensible set of scope space dimensions, along with a process for developing additional domain-specific context scope dimensions. The drawback is that working through the model can be a lengthy and painful process, and it risks asking people to examine scope dimensions that may not be relevant to the context and purpose at hand. Still, we think it is better to be a little more exhaustive up front and consider possibilities that ultimately prove irrelevant or unpragmatic than to assume those possibilities away implicitly, only to find later in the deployment of a system that one of them was not safe to ignore.
Related to these points is the dynamic nature of the environment in which such contexts might occur. One should consider the possibility that the important context dimensions and related scope attribute values may change significantly over the lifespan of the intended purpose of the concepts or systems being defined or developed. Anticipating such changes is often cheaper than re-engineering a system that did not anticipate the changes that actually occurred. Of course, that trade-off depends on the eventual lifespan of the definition and of the context space and purposes for which it is defined.
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Flater
Sent: Friday, March 09, 2012 1:19 PM
Subject: Re: [ontolog-forum] Constructs, primitives, terms
On 3/5/2012 9:08 AM, John F. Sowa wrote:
Base vocabulary V: A collection of terms defined precisely at a level
of detail sufficient for interpreting messages that use those terms
in a general context C.
System A: A computational system that imports vocabulary V and uses
the definitions designated by the URIs. But it uses the terms in
a context C' that adds further information that is consistent with C.
That info may be implicit in declarative or procedural statements.
System B: Another computational system that imports and uses terms
in V. B was developed independently of A. It may use terms in V
in a context C'' that is consistent with the general context C,
but possibly inconsistent with the context C' of System A.
Problem: During operations, Systems A and B send messages from
one to the other that use only the vocabulary defined in V.
But the "same" message, which is consistent with the general
context C, may have inconsistent implications in the more
specialized contexts C' and C''.
My thinking began along the same lines as what Patrick Cassidy wrote. In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the terms in C. To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing does an implicit narrowing that is not necessarily valid.
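That asymmetry can be sketched in a few lines of code. This is only an illustration under assumed names: the temperature term and the Celsius/Fahrenheit constraints are hypothetical stand-ins for the implicit constraints that C' and C'' add to a term in V, not anything defined in the thread.

```python
from dataclasses import dataclass

# Context C: the base vocabulary V defines "temperature" only as a number.
@dataclass
class Temperature:
    value: float

# Context C' (System A) adds the implicit constraint "value is in Celsius".
@dataclass
class CelsiusTemperature(Temperature):
    unit: str = "C"

# Context C'' (System B) adds the incompatible constraint "value is in Fahrenheit".
@dataclass
class FahrenheitTemperature(Temperature):
    unit: str = "F"

def send(t: Temperature) -> float:
    """Widening: transmitting a C' or C'' thing as a C thing is a fair substitution."""
    return t.value

def receive_as_fahrenheit(raw: float) -> FahrenheitTemperature:
    """Narrowing: System B silently reinterprets the bare C value in its own
    context C''.  Nothing in the message says which constraint the sender used."""
    return FahrenheitTemperature(raw)

# System A sends 100 (meaning 100 degrees Celsius); System B receives 100 and,
# applying its own implicit constraint, treats it as 100 degrees Fahrenheit.
sent = send(CelsiusTemperature(100.0))
received = receive_as_fahrenheit(sent)
print(received.unit, received.value)  # → F 100.0 — the narrowing was invalid, but undetectable
```

The message itself is well-formed under C throughout; the failure lives entirely in the unstated constraints, which is why, as noted below, it typically surfaces only after something breaks.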
In practice, though, such an understanding of the differences (or that there are differences) among similar terms as used in C, C' and C'' often comes out only after a failure has occurred. In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization. Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; said "local modifications" are discovered, defined and communicated only after a problem arises.
Should we then blame the common model (ontology, lexicon, schema, exchange format, whatever) for having been incomplete or wrong for the task at hand? Nobody wants to complicate the model with the infinite number of properties/attributes that don't matter. You just need to model exactly the set of properties/attributes that are necessary and sufficient to prevent all future catastrophes under all integration scenarios that will actually happen, and none of those that won't happen. Easy, if you can predict the future.
In digest mode,
David Flater, National Institute of Standards and Technology, U.S.A.