
Re: [ontolog-forum] Constructs, primitives, terms

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rich Cooper" <rich@xxxxxxxxxxxxxxxxxxxxxx>
Date: Sat, 10 Mar 2012 11:28:04 -0800
Message-id: <912327C28AF34E8FAAD4A80FDC00E264@Gateway>

Dear Hans,

 

You wrote:

The NCOIC SCOPE model is an attempt to define such a context space and scope dimensional “scales” so that two or more systems can determine whether they can interoperate correctly for their intended purposes. Note that semantic interoperability is only a portion of the SCOPE model dimension set. Conversely, the SCOPE model is explicitly limited in scope to interactions that are possible over a network connection. It does not address physical interoperability, for example.

 

That sounds interesting.  Do you have a URL for an overview of SCOPE to get us started reading about it?

 

Thanks,

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer
Sent: Friday, March 09, 2012 1:12 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

I guess I didn’t read far enough down in the email trail to notice that C, C’, and C” were intended to refer to contexts themselves, not to concepts defined within some context, which is how I took them in my email below. Still, I think the points I make below remain pertinent once this misreading of C, etc. is taken into account.

 

Hans

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer
Sent: Friday, March 09, 2012 3:40 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Rich,

 

I think it would be better not to use terms like “semantic baggage”, which suggest some lack of objectivity on the part of whoever defined C. At the risk of getting into a discussion of Plato, the key point is that every definition of C, C’, and C” is based on some context (often assumed and implicit), some frame(s) of reference for describing entities/concepts within that context, some specific (if often implicit) scope, and some perspective on that context. Until we have a shared language for describing contexts, frames of reference, their scope, and the perspective from which a context is described, we will always have variations in the definitions of C, C’, and C”. Indeed, there will be as many variations of C as there are context dimensions, and scope values for those dimensions, that might have a material influence on the definition of C.

 

This brings up another important point, namely the purpose of the definition, or of the concept/entity being defined, modulo the above discussion. The purpose of the definition is what determines whether a context dimension is material or not. If the differences between the definitions of C and C’ do not alter the intended/desired outcome for some purpose (or set of purposes over some range of context dimension scope values), then they are functionally equivalent definitions in that context “space”. This is the pragmatic aspect of “common” semantics, which many on this forum have brought up in the past. Commonality is a meaningful concept only if one specifies the context “space” (i.e., the range of context dimensions and the scope attribute value ranges for each dimension in that “n”-space) over which the concept or entity definition is functionally equivalent among the actors intending to use that definition for some set of purposes.
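To make the notion of functional equivalence over a context “space” slightly more concrete, here is a minimal sketch in Python; the dimension names, scope ranges, and overlap rule are entirely hypothetical illustrations, not part of the SCOPE model or any actual standard:

# A context "space" sketched as named dimensions, each with a (low, high)
# scope range, plus a functional-equivalence test restricted to whichever
# dimensions a given purpose treats as material. All names and values are
# hypothetical.

def scopes_overlap(range_a, range_b):
    """Treat two (low, high) scope ranges as compatible if they overlap."""
    return range_a[0] <= range_b[1] and range_b[0] <= range_a[1]

def functionally_equivalent(def_a, def_b, material_dimensions):
    """Two definitions are equivalent *for a purpose* if their scope ranges
    are compatible on every dimension that purpose treats as material;
    differences on other dimensions are ignored."""
    return all(
        dim in def_a and dim in def_b and scopes_overlap(def_a[dim], def_b[dim])
        for dim in material_dimensions
    )

# Definition C' of "vehicle" as used by one system (hypothetical dimensions).
c_prime        = {"gross_weight_kg": (0, 3500),  "axle_count": (2, 2)}
# Definition C'' of "vehicle" as used by another, independently built system.
c_double_prime = {"gross_weight_kg": (0, 40000), "axle_count": (3, 9)}

# Equivalent for a purpose that only cares about the weight dimension...
print(functionally_equivalent(c_prime, c_double_prime, ["gross_weight_kg"]))                # True
# ...but not for a purpose that also treats axle count as material.
print(functionally_equivalent(c_prime, c_double_prime, ["gross_weight_kg", "axle_count"]))  # False

The only point of the sketch is that “common” is always relative to the dimensions a purpose treats as material; dimensions outside that set are free to differ.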

 

The NCOIC SCOPE model is an attempt to define such a context space and scope dimensional “scales” so that two or more systems can determine whether they can interoperate correctly for their intended purposes. Note that semantic interoperability is only a portion of the SCOPE model dimension set. Conversely, the SCOPE model is explicitly limited in scope to interactions that are possible over a network connection. It does not address physical interoperability, for example.

 

Hans

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Friday, March 09, 2012 1:41 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Dear David,

 

You wrote:

 

…  In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the term in C.  To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing does an implicit narrowing that is not necessarily valid.

In practice, though, such an understanding of the differences (or that there are differences) among similar terms as used in C, C' and C'' often comes out only after a failure has occurred. In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization. Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; said "local modifications" are discovered, defined and communicated only after a problem arises.

 

Your analysis seems promising, but I suggest there is at least one more complication: the description of C must also have been loaded with the “semantic baggage” of the person who defined it, just as C’ and C” were. Therefore C seems likely to be a specialization of some even more abstract concept C-, which may not have contained the baggage of C, C’, or C”.

 

There is no pure abstraction C- in most of the concept descriptions I have seen in our discussions so far. Every concept seems to have been modulated by the proposer’s semantic baggage. Since it is always a PERSON who produces the conceptualization C in the first place, it isn’t possible to be that abstract.

 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Flater
Sent: Friday, March 09, 2012 10:19 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

On 3/5/2012 9:08 AM, John F. Sowa wrote:

Base vocabulary V: A collection of terms defined precisely at a level
of detail sufficient for interpreting messages that use those terms
in a general context C.
 
System A: A computational system that imports vocabulary V and uses
the definitions designated by the URIs. But it uses the terms in
a context C' that adds further information that is consistent with C.
That info may be implicit in declarative or procedural statements.
 
System B: Another computational system that imports and uses terms
in V. B was developed independently of A. It may use terms in V
in a context C'' that is consistent with the general context C,
but possibly inconsistent with the context C' of System A.
 
Problem: During operations, Systems A and B send messages from
one to the other that use only the vocabulary defined in V.
But the "same" message, which is consistent with the general
context C, may have inconsistent implications in the more
specialized contexts C' and C''.


My thinking began similar to what Patrick Cassidy wrote.  In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the term in C.  To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing does an implicit narrowing that is not necessarily valid.
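As a rough illustration of that asymmetry in the V / C' / C'' scenario quoted above, here is a small Python sketch; the vocabulary term, the unit assumptions attributed to C' and C'', and the default used by the receiver are all made up for illustration and are not drawn from any actual exchange standard:

# Sketch of the widening/narrowing asymmetry. "weight" is a term in the
# shared vocabulary V; context C' (System A) silently assumes kilograms,
# while context C'' (System B) silently assumes pounds. All names, units,
# and defaults here are hypothetical.

KG_TO_LB = 2.20462

def system_a_transmit(weight_kg):
    """System A widens its C'-level value into a plain C-level message.
    This is a fair substitution, but the kilogram assumption it relied on
    is dropped on the way out."""
    return {"item": "pump", "weight": weight_kg}

def system_b_receive(message):
    """System B implicitly narrows the C-level message into its own context
    C'': absent unit information, it reads the value as pounds. The narrowing
    is silent, and invalid whenever the sender meant kilograms."""
    return message["weight"]   # interpreted as pounds under C''

msg = system_a_transmit(120)   # A meant 120 kg (about 264.6 lb)
print(system_b_receive(msg))   # B reads 120 lb: consistent with C, inconsistent between C' and C''

Nothing in the message itself violates V; the trouble only becomes visible when the two specializations meet, which is exactly why the differences tend to be discovered after a failure.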

In practice, though, such an understanding of the differences (or that there are differences) among similar terms as used in C, C' and C'' often comes out only after a failure has occurred.  In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization.  Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; said "local modifications" are discovered, defined and communicated only after a problem arises.

Should we then blame the common model (ontology, lexicon, schema, exchange format, whatever) for having been incomplete or wrong for the task at hand?  Nobody wants to complicate the model with the infinite number of properties/attributes that don't matter.  You just need to model exactly the set of properties/attributes that are necessary and sufficient to prevent all future catastrophes under all integration scenarios that will actually happen, and none of those that won't happen.  Easy! if you can predict the future.

In digest mode,

--
David Flater, National Institute of Standards and Technology, U.S.A.

 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
