
Re: [ontolog-forum] Constructs, primitives, terms

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rich Cooper" <rich@xxxxxxxxxxxxxxxxxxxxxx>
Date: Sat, 10 Mar 2012 13:57:16 -0800
Message-id: <4C450D6FB9A24E6AAB50497918321CB9@Gateway>

Dear Hans,

 

Thanks again for your inputs.  My replies are embedded below,

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer
Sent: Saturday, March 10, 2012 1:18 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Rich,

 

My experience may be to blame, but the military often does things that have varying degrees of “unreality”, like field exercises, command post exercises, and models of potential courses of action, or the possible use/effects of “future” weapons or capabilities. So we end up with real world systems trying to deal with hypothetical entities, locations, and activities that aren’t detectable by actual radars, sensors, etc.

 

I’m very familiar with military simulations, training exercises, doctrines, and the other plans and rules of engagement.  But remember the military saying that battle plans go out the window as soon as the first engagement happens.  That is because no military planner can foresee the actual results of an operational plan.

 

The reasoning behind these “unreal” plans is simply to let the troops and the commanders experience the kinds of situations that planners believe are likely outcomes of actions.  There is plenty of justification for those simulations, exercises and doctrines.  Because they work to bring experience to new recruits and to remind old warriors of the principles they were taught and practiced, they have made the US military highly ready for a wide range of engagements. 

 

But what that means is just that the training, simulations, doctrines are approximations of the reality that will actually occur. 

 

That’s why the SCOPE model includes dimensions on different degrees and types of coupling to reality for entities represented on the network, as well as different types of reality – because the type of reality determines what might constitute “ground truth” for that reality type. I should also have pointed out that the dimensions in the SCOPE model are traceable to actual interoperability problems encountered in real-world systems (including real-world systems that represent artificial worlds, such as Second Life – or Facebook, if you like :-)).

 

Hans

 

Thanks for the reference to SCOPE.  I hope to have time to review it in the next few days.  It sounds promising, though I again suggest that every distributed ontology needs an easy customization facility rather than enforced application of each and every detail, which is bound to fail in some context.

 

-Rich

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, March 10, 2012 3:57 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Dear Hans and Matthew,

 

I agree with that:

It’s like experiencing “society” – I interact with individuals and with products of individuals and institutions, and I abstract and generalize to society. Clearly, this abstraction and generalization will be different than someone else’s based on the different samplings of other individuals and institutions we would both have – which is what I thought your main theme has been. But we are not tied to physical reality in our concepts (although heavily influenced by it, of course). Hans

I agree with that.  We can imagine unreal concepts (unicorns, honest government, extraterrestrial visitors …) and communicate about them, but if they are unreal, they are not useful things to model with ontologies.  The purpose of communicating concepts among observers is to describe things that are real.  Concepts which abstract reality can be generated and named interminably, but they only represent concepts, not realities that can be operated upon, or interchanged among the observers’ databases.  Only representations, not concepts, can be communicated.  That is why perceptions are so deeply embedded in our conceptualizations. 

 

Again we agree! 

-Rich

 



From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Matthew Kaufman
Sent: Saturday, March 10, 2012 12:25 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

 Well written

 

It’s like experiencing “society” – I interact with individuals and with products of individuals and institutions, and I abstract and generalize to society. Clearly, this abstraction and generalization will be different than someone else’s based on the different samplings of other individuals and institutions we would both have – which is what I thought your main theme has been. But we are not tied to physical reality in our concepts (although heavily influenced by it, of course). Hans

 

 

On Sat, Mar 10, 2012 at 3:18 PM, Hans Polzer <hpolzer@xxxxxxxxxxx> wrote:

 

Rich,

 

My reference was to the Powerpoint presentation you sent out, not something you said in your emails. However, I think the human mind is capable of developing concepts that transcend physical sensory inputs and the brain’s interpretations of them. Certainly those concepts won’t be completely identical from one human to another, but we can, for example, talk about teleportation or interstellar gas clouds, even though none of us have experienced anything like them physically. We can abstract and generalize concepts from experience, but the abstractions and generalizations can go beyond anything we have experienced. And how does one experience a corporation physically? We can interact with employees of a corporation (if they choose to reveal that affiliation), and we can interact with the products or services produced by a corporation, but I don’t know what it means to experience the concept of a corporation or to experience a specific corporation. It’s like experiencing “society” – I interact with individuals and with products of individuals and institutions, and I abstract and generalize to society. Clearly, this abstraction and generalization will be different than someone else’s based on the different samplings of other individuals and institutions we would both have – which is what I thought your main theme has been. But we are not tied to physical reality in our concepts (although heavily influenced by it, of course).

 

Hans

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper

Sent: Saturday, March 10, 2012 2:49 PM

To: '[ontolog-forum] '

Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Dear Hans,

 

You wrote:

We should also be aware that there are different types of reality that we might want to share or discuss. There is physical reality, detectable and measurable by physical sensors – and our own senses. The coupling of that reality to the perceptions presented to us in our brain is the topic of Rich’s presentation. This is the domain of physics, chemistry, biology, and similar science and engineering domains. Ironically, it is also an area where there is a fairly healthy debate about whether there is such a thing as “objective” physical reality related to interpretations of quantum mechanics, string theory, and cosmology.

 

No, my claim is that our individual mental concepts are correlated to the perceptions we can individually experience.  I can’t talk about an abstract “circle” without having actually experienced what a manifested circle is.  The circle I actually experienced seemed to wrap back on itself, but other circles have little gaps between two ends (e.g., a key chain on which I can skewer keys, but it has to have an open end so I can add or remove keys from the chain). 

 

Except for the case of extremely simple concepts (circles, rectangles, triangles …), perception and experience play essential roles in how I think about the concepts in which I classify those perceptions and experiences.  Even the extremely simple Michelson-Morley experiment, which found the speed of light to be constant, was interpreted completely differently by AE, who concluded that constant light speed implied changing measures of space and velocity.  Yet in both MM’s actual experiment and AE’s gedankenexperiment there were the same objects waiting for MM and AE to perceive them as different manifestations of different conceptualizations of spacetime.

 

So the only way to experience a concept is through individual perceptions of the physical reality of the concepts.  We individually, through a lifetime of experience, develop mental concepts which are simply abstractions of our experiences, ways of mapping contexts into relationally organized concepts, but always based on the individual’s experience.  If I discuss my concept with you, it is very likely that you have a concept of your own which is different than my concept, but similar enough that we can discuss our own experiences.  But no matter how vividly you explain your experience to me, I will not experience exactly the same thing you did until I am placed in an identical situation, I perceive the situation in my own terms, and I am able to experience the reality for myself. 

 

So I insist that perceptions aggregated into concepts provide the only way of actually experiencing the concepts.

 

-Rich

 



From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer

Sent: Friday, March 09, 2012 1:53 PM

Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Rich,

 

One other important point on this issue of “semantic baggage” is that some types of “reality” and associated frames of reference are inherently institutional or social in nature. I had written on this point in response to an earlier post by Leo Obrst, but my IP address was temporarily on a spammer list used by the forum to filter emails and it never got sent (no wise-cracks, please :-) ). I’ve reproduced most of that email below.

 

We should also be aware that there are different types of reality that we might want to share or discuss. There is physical reality, detectable and measurable by physical sensors – and our own senses. The coupling of that reality to the perceptions presented to us in our brain is the topic of Rich’s presentation. This is the domain of physics, chemistry, biology, and similar science and engineering domains. Ironically, it is also an area where there is a fairly healthy debate about whether there is such a thing as “objective” physical reality related to interpretations of quantum mechanics, string theory, and cosmology.

 

Then there is conceptual reality, which includes most of the discussion in this forum, but transcends it, at least at the margins. For example, we can have the concept of a corporation or a nation independent of any particular corporation or nation, but specific corporations depend on the existence and processes of actual nations (and in the case of the US, the existence of actual states – each of which have somewhat different rules for the formation and continued existence of corporations).  Note that there are no physical sensors that can detect corporations or nations – they are a reality created by human brains and sustained by society. But there is an objective sense for conceptual reality – for example, a corporation either exists or it doesn’t, in some jurisdiction – it’s not up to our individual opinion as to whether said corporation or our driver’s license exists.

 

The third type of reality, social reality, is not grounded in any physical or conceptual reality, but rather is based on social convention and the preponderance of opinion and behavior. Most  (all?) language is really of this nature. What a word means is grounded in social reality. So is the value of real estate, both in terms of current market value and value for a particular purpose (parking lot, lab, high-rise office, etc.). There is no “ground truth” or “objective” truth associated with terms and attribute values of conversations or data elements discussing/describing social reality constructs. We have to be explicit about what contexts and what associated “anchors” we are using when capturing, representing, and communicating social reality constructs or activities. Saying that someone is lying about some social reality can be problematic, and may well be perceived as wrong by some other party with a different social frame of reference. That’s why politics and the stock market are so much fun – they tend to mix all these different types of reality together and most people have a hard time distinguishing among them (and some are motivated to avoid those distinctions).

 

The main point of all this is that the nature of the frames of reference used to claim some objective/absolute truth for a definition depend on the type of reality that the definition is attempting to describe. Some concepts and “facts” are inherently subjective in nature, and we need to be clear about which are which.

 

A last point on this topic is that we can have “alternate realities”, that don’t correspond to any physical, conceptual, or non-fictional social reality, at least not in any straightforward way. Second Life comes to mind as a great example of an alternate reality, as are the multi role player games on the Internet, but there are many possible variations on this theme. The modeling and simulation domain is mostly about creating alternate realities or partial, incomplete representations of the realities above. The key again is to be explicit about the context and frames of reference used to describe these alternate realities so as to have them properly used or interpreted by others, over the network or otherwise.

 

Hans

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer

Sent: Friday, March 09, 2012 3:40 PM

To: '[ontolog-forum] '

Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Rich,

 

I think it would be better not to use terms like “semantic baggage”, which suggest some lack of objectivity on the part of whoever defined C. At the risk of getting into a discussion of Plato, the key point is that every definition of C, C’, and C” is based on some context (often assumed and implicit), some frame(s) of reference for describing entities/concepts within that context, with specific (if often implicit) scope, and from some perspective upon that context. Until we have a shared language for describing context, frames of reference, their scope, and the perspective from which the context is described, we will always have variations in definitions of C, C’, and C”. Indeed, there will be as many variations of C as there are context dimensions and scope values for those dimensions that might have a material influence on the definition of C.

 

Which brings up another important point, namely that of purpose of the definition, or of the concept/entity being defined, modulo the above discussion. The purpose of the definition is what determines whether a context dimension is material or not. If the differences in definition of C and C’ do not alter the intended/desired outcome for some purpose (or set of purposes over some context dimension scope ranges), then they are functionally equivalent definitions in that context “space”.  This is the pragmatic aspect of “common” semantics, which many on this forum have brought up in the past. Commonality is a meaningful concept only if one specifies the context “space” (i.e., the range of context dimensions and scope attribute value ranges for each dimension in that “n”-space) over which the concept or entity definition is functionally equivalent among the actors intending to use that definition for some set of purposes.
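Hans’s notion of functional equivalence over a context “space” can be sketched computationally: two definitions count as equivalent for some purpose if they produce the same outcome at every point in the context region of interest. The dimensions and definitions below are invented purely for illustration; the sketch only fixes the idea and does not reflect any actual standard.

```python
from itertools import product

# Hypothetical context dimensions and their scope (value ranges).
context_space = {
    "jurisdiction": ["US", "EU"],
    "time_resolution": ["day", "second"],
}

# Two illustrative definitions of "delivery date": one always ignores time
# of day, the other distinguishes timestamps at fine time resolution.
def_c1 = lambda ctx: "date-only"
def_c2 = lambda ctx: "date-only" if ctx["time_resolution"] == "day" else "timestamp"

def functionally_equivalent(d1, d2, space):
    """True iff both definitions agree at every point of the context space."""
    dims = list(space)
    return all(
        d1(dict(zip(dims, point))) == d2(dict(zip(dims, point)))
        for point in product(*(space[d] for d in dims))
    )

# Over the full space the definitions differ...
print(functionally_equivalent(def_c1, def_c2, context_space))   # False
# ...but restricted to day-level resolution they are interchangeable.
narrowed = dict(context_space, time_resolution=["day"])
print(functionally_equivalent(def_c1, def_c2, narrowed))        # True
```

Restricting the scope ranges is exactly what makes two otherwise different definitions “common” for the actors involved.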

 

The NCOIC SCOPE model is an attempt to define such a context space and scope dimensional “scales” so that two or more systems can determine whether they can interoperate correctly for their intended purposes. Note that semantic interoperability is only a portion of the SCOPE model dimension set. Conversely, the SCOPE model is explicitly limited in scope to interactions that are possible over a network connection. It does not address physical interoperability, for example.
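As a rough illustration of the kind of check such a dimension set enables, two systems might compare their declared scope ranges dimension by dimension. The dimension names and values below are invented for this sketch; they are not the actual SCOPE dimensions.

```python
# Hypothetical scope declarations for two networked systems. The dimension
# names are illustrative only; the real SCOPE model defines its own set.
system_a = {
    "semantic_vocabulary": {"v-core", "v-logistics"},
    "coupling_to_reality": {"live-sensor", "simulated"},
}
system_b = {
    "semantic_vocabulary": {"v-core"},
    "coupling_to_reality": {"simulated"},
}

def interoperable_region(a, b):
    """Per-dimension overlap of the declared scopes; an empty overlap on
    any shared dimension flags a likely interoperability gap."""
    return {dim: a[dim] & b[dim] for dim in a.keys() & b.keys()}

overlap = interoperable_region(system_a, system_b)
# The two systems can interoperate only inside the intersection:
# simulated entities described with the v-core vocabulary.
print(overlap)
```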

 

Hans

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper

Sent: Friday, March 09, 2012 1:41 PM

To: '[ontolog-forum] '

Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

Dear David,

 

You wrote:

 

…  In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the term in C.  To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing does an implicit narrowing that is not necessarily valid.

In practice, though, such an understanding of the differences (or that there are differences) among similar terms as used in C, C' and C'' often comes out only after a failure has occurred. In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization. Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; said "local modifications" are discovered, defined and communicated only after a problem arises.

 

Your analysis seems promising, but I suggest there is at least one more complication: the description of C must also have been loaded with the “semantic baggage” of the person who defined it, just as C’ and C” were.  Therefore C seems likely also to be a specialization of some even more abstract concept C-, which may not have contained the baggage of C, C’, or C”.

 

There is no pure abstraction C- in most of the descriptions for concepts so far as I have seen in our discussions.  Every concept seems to have been modulated by the proposer’s semantic baggage.  Since it is always a PERSON who produces the conceptualization C in the first place, it isn’t possible to be that abstract. 

 

-Rich

 



From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Flater

Sent: Friday, March 09, 2012 10:19 AM

To: [ontolog-forum]

Subject: Re: [ontolog-forum] Constructs, primitives, terms

 

On 3/5/2012 9:08 AM, John F. Sowa wrote:

Base vocabulary V: A collection of terms defined precisely at a level
of detail sufficient for interpreting messages that use those terms
in a general context C.
 
System A: A computational system that imports vocabulary V and uses
the definitions designated by the URIs. But it uses the terms in
a context C' that adds further information that is consistent with C.
That info may be implicit in declarative or procedural statements.
 
System B: Another computational system that imports and uses terms
in V. B was developed independently of A. It may use terms in V
in a context C'' that is consistent with the general context C,
but possibly inconsistent with the context C' of System A.
 
Problem: During operations, Systems A and B send messages from
one to the other that use only the vocabulary defined in V.
But the "same" message, which is consistent with the general
context C, may have inconsistent implications in the more
specialized contexts C' and C''.

My thinking began similar to what Patrick Cassidy wrote.  In this example, the terms as used in C' and C'' are effectively specializations (via added constraints) of the term in C.  To transmit a C' or C'' thing as a C thing is a fair substitution; but to receive a C thing as a C' or C'' thing does an implicit narrowing that is not necessarily valid.
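The asymmetry described here mirrors subtype variance in programming: treating the more-constrained C' thing as a C thing (widening) is always safe, while treating a C thing as a C' thing (narrowing) silently assumes the added constraint holds. A minimal sketch, with the class names and the 100 kg limit invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Shipment:                    # the general term, as defined in context C
    weight_kg: float

@dataclass
class AirShipment(Shipment):       # C': same term plus a locally added constraint
    def __post_init__(self):
        if self.weight_kg > 100:
            raise ValueError("air freight limited to 100 kg")

# Widening: sending a C' thing as a C thing is a fair substitution.
as_general: Shipment = AirShipment(weight_kg=40)

# Narrowing: receiving a C thing as a C' thing assumes the hidden constraint.
def receive_as_air(s: Shipment) -> AirShipment:
    return AirShipment(weight_kg=s.weight_kg)   # may raise at runtime

receive_as_air(Shipment(weight_kg=40))          # happens to satisfy C'
try:
    receive_as_air(Shipment(weight_kg=250))     # the invalid narrowing
except ValueError as e:
    print("narrowing failed:", e)
```

Note that the narrowing goes wrong only when a message actually violates the receiver's implicit constraint, which is why the mismatch tends to surface as a runtime failure rather than at design time.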

 

In practice, though, such an understanding of the differences (or that there are differences) among similar terms as used in C, C' and C'' often comes out only after a failure has occurred.  In real-world use of any sort of language that does not have mechanical, closed-world semantics, that potentially invalid narrowing is not only unpreventable, but is often the "least worst" translation that can be made into the receiver's conceptualization.  Every organization and every person applies their own semantic baggage (added constraints) to supposedly common terms; said "local modifications" are discovered, defined and communicated only after a problem arises.

 

Should we then blame the common model (ontology, lexicon, schema, exchange format, whatever) for having been incomplete or wrong for the task at hand?  Nobody wants to complicate the model with the infinite number of properties/attributes that don't matter.  You just need to model exactly the set of properties/attributes that are necessary and sufficient to prevent all future catastrophes under all integration scenarios that will actually happen, and none of those that won't happen.  Easy! If you can predict the future.

 

In digest mode,

--

David Flater, National Institute of Standards and Technology, U.S.A.

 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
