Dear Hans,
My comments are embedded below,
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer
Sent: Saturday, March 10, 2012 12:59 PM
To: '[ontolog-forum]'
Subject: Re: [ontolog-forum] Constructs, primitives, terms
Yes, Rich. I was
only trying to point out that using the term “semantic baggage”
actually encourages the thought that the answer is simply to get some smart
people with no semantic baggage to run the world.
My belief is that, while there are tons of smart people, there are NO people who are free of semantic baggage. We have all generated our individual conceptualizations based on our individual experiences, self-interests, and realistic constraints. Instead, I think ALL people should run the world, precisely because there are NO smart people free of semantic baggage.
That is why I believe that so-called progressive and conservative approaches are both inherently wrong; they do not take into account the other points of view, and they don’t leave room for choice-making as described by Milton Friedman.
In describing their regulatees as “actors” or “users” or some other classification name, they do not consider individuals as empowered, self-regulating creatures capable of their own individual pursuits of happiness. Instead, small groups construct and impose social constraints on ALL people of certain “classes” (citizens, employees, patients, consumers, taxpayers, immigrants, drivers, insurance companies, …).
The belief that a small
group can dictate and enforce a complex ontology on ALL players is what is
wrong here. Reality is so much more complex than any one of us, or any
group of us, can possibly describe and write rules for.
Your earlier point, that failures are needed to expand rules which are inadequate, is right on the mark in that regard. It seems to me that there ought to be a way to organize ontologies so that each individual can elaborate or compress the concepts and rules. That makes concepts and rules more like suggestions than singularities of meaning. Each person will conceptualize his own view of reality, so the most appropriate thing to do with ontologies is to allow for individual customization and choices among whatever portions of the ontology make sense to each individual.
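To make that idea concrete, here is a minimal Python sketch of what individual customization of a shared ontology might look like. Everything in it (the Concept and PersonalOntology classes, the "Injury" example) is hypothetical, invented only to illustrate elaborating or compressing shared concepts per individual; it is not any existing ontology API.

# A base ontology whose concepts each individual can elaborate (add
# constraints) or compress (drop distinctions) without touching the
# shared base.

class Concept:
    def __init__(self, name, constraints=None):
        self.name = name
        self.constraints = list(constraints or [])  # predicates over instances

    def satisfied_by(self, instance):
        return all(check(instance) for check in self.constraints)

class PersonalOntology:
    """One individual's overlay on a shared base ontology."""
    def __init__(self, base):
        self.base = dict(base)   # shared concepts: name -> Concept
        self.overrides = {}      # this individual's local versions

    def elaborate(self, name, extra_constraint):
        # Specialize a shared concept with a personal constraint.
        shared = self.base[name]
        self.overrides[name] = Concept(name, shared.constraints + [extra_constraint])

    def compress(self, name):
        # Treat a shared concept as a bare suggestion: ignore its constraints.
        self.overrides[name] = Concept(name)

    def concept(self, name):
        return self.overrides.get(name, self.base[name])

# Two people share the base but diverge locally.
base = {"Injury": Concept("Injury", [lambda x: x.get("harm", 0) > 0])}
alice, bob = PersonalOntology(base), PersonalOntology(base)
alice.elaborate("Injury", lambda x: bool(x.get("reported")))  # Alice requires a report
bob.compress("Injury")                                        # Bob keeps it loose
case = {"harm": 1}
print(alice.concept("Injury").satisfied_by(case))  # False under Alice's view
print(bob.concept("Injury").satisfied_by(case))    # True under Bob's view

The shared base stays a common reference, while each person's overlay records only the local differences, so concepts really do function more like suggestions than fixed singularities of meaning.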
My point was similar
to the situation I have run into repeatedly in the area of interoperability
– it’s an easy problem to solve: simply put me or my company or my
government in charge and we’ll specify the standard/definition that everyone
else (i.e., those with semantic baggage) should follow/use/comply with. My
experience has been that people don’t like to think that they have
semantic baggage, and have difficulty accepting the legitimacy of alternative
views and purposes for a choice of meaning or representation of some term or
concept.
I agree completely.
More often than not
these alternate views are seen as evidence of narrow-mindedness,
short-sightedness, self-interest (e.g., proprietary representation), or just
plain stupidity (in the most uncharitable sense). The problem is that
some of these pejorative factors do come into play, but my experience has been
that even if you eliminate all these negative factors, you still end up with
lots of legitimate alternate perspectives.
Again I agree. The pejorative factors are likely motivated by people’s self-interests, as perceived by those people. If A’s ontology badly expresses B’s conceptualization, B’s purpose is impaired by sticking with that bad expression. Instead, B should be able to modify the ontology to fit B’s needs, while A continues using A’s ontology if it is satisfactory for A. For that reason, I don’t expect interoperability to be 100% acceptable without such a capability for individualization of ontologies.
So I think we need
to counter the thinking that these different perspectives are simply the result
of negative behaviors and stupidity – because it’s such a simple
and easy excuse for dismissing diversity of perspectives that are really
inherent in what different entities have experienced and are trying to
accomplish. I have yet to read a published report in the news media on
interoperability problems that doesn’t cast one party or the other (or
both) as somehow incompetent or short-sighted.
The media is motivated by
its advertising base. When Limbaugh recently apologized, it was because
his advertisers were dropping out. When Beck’s and Napolitano’s
shows were cancelled, it was because they couldn’t keep advertising
revenues up even though their shows were popular with viewers. Like all
the rest of us, the media (each individual in it) has a purpose of survival in
the current context, and will act to preserve their individual values related
to that purpose.
The message is
consistently that if only these parties had been a bit smarter or forward
looking (i.e., if only they didn’t have all this semantic baggage), this
problem would not have happened. Such thinking might be satisfying to
some, but it does nothing to address the real underlying cause of the problem,
nor does it help the public understand that in many cases both parties
built/did whatever they did for very good reasons.
But the same thing is true of other groups, not just the media. The political parties have agendas that they are pursuing, and most politicians are swept up in the party system. For survivability, they form a consensus even when they disagree with major parts of bills that have implications for their funding sources. That doesn’t make politicians bad, stupid, evil, or other pejoratives; it makes them human like the rest of us.
An ontology becomes a
social construct when people are constrained to abide by its rules,
classifications, operations and logic. But in reality, every one of us
has some unique ontology organized into our behaviors and belief systems.
That ontology is built on our unique perceptions of the totality around
us.
I co-authored a
paper on unanticipated context shifts as the root cause of most
interoperability problems that were not simply errors in implementing an agreed
definition/standard. One can argue that such context shifts should have been
anticipated – and the SCOPE model is one way to do that
gedankenexperiment in a more disciplined and exhaustive way – but if you
look at actual cases, my experience has been that it often would have been
unlikely or even impossible for the parties to have made different decisions at
the time they made them.
Agreed.
At the time the
decisions were made, the situation that eventually occurred would have been
considered “safe to ignore” or “we can’t boil the
ocean”. We should still strive to raise awareness of possible context
shifts/scope expansions to make sure that people/sponsors aren’t simply
unaware of their possibility, or have implicitly assumed them away, as opposed
to consciously considering them and explicitly deciding not to address those
possibilities.
Hans
What we need, IMHO, is an ontology that is easy to modify, tailor, customize, and maintain at the individual user level. But trying to force-fit an ontology onto previously uncharted applications will always result in errors, due to lack of coverage of real contexts and due to individuals experiencing those contexts in different ways.
-Rich
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, March 10, 2012 3:06 PM
To: '[ontolog-forum]'
Subject: Re: [ontolog-forum] Constructs, primitives, terms
Dear Hans,
You summarized:
Your comments and example from the healthcare domain just underscore my point. I was simply trying to point out that subjectivity doesn’t have to be viewed as a pejorative term. Subjectivity, based on personal or institutional perspectives and objectives, is unavoidable and is the basis for interpreting communications from/with others in the environment. People often assume that subjectivity is motivated by selfishness, but it can equally be motivated by altruism or simply by things that interest an individual or organization.
Then we agree! Subjectivity is
unavoidable, the basis for interpreting language acts, inevitable in developing
any models including ontologies, and not only a selfish pastime.
It’s good to end on a note of
agreement!
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
Rich,
Your comments and example from the healthcare domain just underscore my point. I was simply trying to point out that subjectivity doesn’t have to be viewed as a pejorative term. Subjectivity, based on personal or institutional perspectives and objectives, is unavoidable and is the basis for interpreting communications from/with others in the environment. People often assume that subjectivity is motivated by selfishness, but it can equally be motivated by altruism or simply by things that interest an individual or organization.
I do need to
emphasize that net-centric operations are not focused on communications or the
network, but rather on the entities and resources one can interact with via the
network (i.e., “cyberspace”). Thus net-centric operations entail
all of the personal and organizational perspectives and purposes/motivations,
and associated semantic interoperability issues being discussed on this forum.
Representing and being aware of context and scope over a network connection is
in some ways more challenging than doing so face to face because of the lack of
non-verbal cues we use to assess context and meaning/motivations.
Hans
Dear Hans,
Thanks for your reply. Please see my
responses embedded into your email below,
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
Rich,
I think it would be
better not to use terms like “semantic baggage”, which suggest some
lack of objectivity on the part of whoever defined C.
I can’t take credit
for that term; David was the first to use it, but I heartily agree with the
term because I firmly believe that the originator(s) of any ontology
necessarily exercised their subjective cache of beliefs to create the
ontology. That is really my point in this discussion. People are
always subjective, even when pretending toward the objective view. We see
our world through the glasses of a subjective experience from birth to
now. Objectivity is what we call it when people don’t disagree much
on a concept. That is why people can agree on very simple ontologies such
as Dublin Core, but not on more complex ontologies.
For example, the new health coding system that doctors must use in seeking reimbursement has 140,000 codes, each with an English description. Here is an article describing one point of view on the change from the present 18,000 codes to the new 140,000 codes, which distinguish more detail on health conditions. For example, according to this author, there are 36 different codes for treating a snake bite, depending on the type of snake, its geographical region, and whether the incident was accidental, intentional self-harm, assault, or undetermined. The new codes also thoroughly differentiate between nine different types of hang-gliding injuries, four different types of alligator attacks, and the important difference between injuries sustained by walking into a wall and those resulting from walking into a lamppost:
http://www.amazon.com/forum/politics/ref=cm_cd_dp_rft_tft_tp?_encoding=UTF8&cdForum=Fx1S3QSZRUL93V8&cdThread=TxBWPF3HLJGNLL
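To see why the code count balloons, here is a toy Python sketch of how combinations of context dimensions multiply. The dimensions and values below are made up to mirror the snake-bite example; they are not the actual ICD-10 dimensions or codes.

# Toy illustration of how a coding system's size multiplies with context
# dimensions. Hypothetical values only, not the actual ICD-10 tables.
from itertools import product

dimensions = {
    "condition": ["snake bite", "hang-gliding injury", "alligator attack",
                  "walked into wall", "walked into lamppost"],
    "intent":    ["accidental", "intentional self-harm", "assault", "undetermined"],
    "encounter": ["initial", "subsequent", "sequela"],
}

codes = {
    "X%05d" % n: dict(zip(dimensions, combo))
    for n, combo in enumerate(product(*dimensions.values()))
}

print(len(codes))       # 5 * 4 * 3 = 60 codes from just three small dimensions
print(codes["X00000"])  # {'condition': 'snake bite', 'intent': 'accidental', ...}

Every added dimension, or added value on an existing dimension, multiplies the count again, which is how a handful of plausible-looking distinctions can reach 140,000 codes.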
At the risk of getting into a discussion of Plato, the key point is that every definition of C, C’, and C” is based on some context (often assumed and implicit), some frame(s) of reference for describing entities/concepts within that context, with specific (if often implicit) scope, and from some perspective upon that context. Until we have a shared language for describing context, frames of reference, their scope, and the perspective from which the context is described, we will always have variations in definitions of C, C’, and C”. Indeed, there will be as many variations of C as there are context dimensions and scope values for those dimensions that might have a material influence on the definition of C.
What is the cost of
developing these shared codes, i.e., training every physician to use the
“proper” code for each condition of the 140,000 distinct
codes? The context is captured in more detail than ever before, but will
the codes be “properly” assigned? I doubt it. What is
the motivation for the physician to distinguish states of mind, such as the
difference between assault with a snake, versus self-injury with a snake,
versus accidental discovery of a snake engaged in biting the patient?
Let’s call this ontology of 140,000 codes, each code j corresponding to one context C[j], the health code ontology. But which physicians will actually memorize, discriminate, and record health conditions correctly within this forest of 140,000 health codes? The subjectivity of the reporting physician will be superimposed on this ontology, and the data that is actually recorded will not truly match the code. That mismatch is the “semantic baggage” mentioned above.
Furthermore, the original
developers (probably a committee) who created the 140,000 code health ontology
must have debated and reconsidered their codes many times to reach the
complexity of 140,000 codes. But why stop there? I am sure there
are other, more specialized contexts which could be coded, and also more
general, aggregated contexts that could be matched against a vector of those
codes. The choices they made to reach the specific ontology of 140,000
codes were due simply to their collectively subjective judgments leading to a
result by the project deadline.
Which brings up
another important point, namely that of purpose of the definition, or of the
concept/entity being defined, modulo the above discussion. The purpose of the
definition is what determines whether a context dimension is material or not.
If the differences in definition of C and C’ do not alter the
intended/desired outcome for some purpose (or set of purposes over some context
dimension scope ranges), then they are functionally equivalent definitions in
that context “space”.
A “purpose” is by definition subjective. I suspect that the committee making up the 140,000 codes in the health ontology considered some attributes of the health care situation, though even at that rich level of predication, the considered attributes couldn’t possibly describe every situation in which a patient can find herself. Yet I doubt very much whether the committee members all truly agreed with the attributions made on health contexts. More likely, the chairman, or manager, or director, or pick some other title for the alpha leader, overruled some attributes which she considered outlandish, added some on which she alone insisted, and broke ties among committee members to reach a politically acceptable consensus for her own context of working on the project to get results that satisfy her and her bankers.
So I still believe that there is a C- context, not just a C, C’ and C” context, which has to be considered. Large ontologies such as the health ontology above absolutely require politically acceptable contexts in which to operate. That context is the C- (if only it could be described by a perfectly objective, uninvolved, and unconflicted observer, who would probably fall asleep designing the ontology, since there would be no motivation for such an objective, uninvolved, unimpacted, and unaffected observer). That is why those observers don’t exist.
This is the
pragmatic aspect of “common” semantics, which many on this forum
have brought up in the past. Commonality is a meaningful concept only if one
specifies the context “space” (i.e., the range of context
dimensions and scope attribute value ranges for each dimension in that
“n”-space) over which the concept or entity definition is
functionally equivalent among the actors intending to use that definition for
some set of purposes.
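One rough way to picture "functionally equivalent over a context space" is the sketch below. It is only an illustration of the idea, not the SCOPE model itself; the function name, the "urgent case" definitions, and the dimensions are invented for the example.

# Two parties' definitions are "functionally equivalent" over a context
# space if they yield the same outcome at every point of that space that
# matters for the intended purpose. Illustrative sketch only.
from itertools import product

def functionally_equivalent(def_a, def_b, context_space, matters):
    """def_a, def_b: functions from a context point to an outcome.
    context_space: dict of dimension name -> iterable of scope values.
    matters: predicate selecting the points relevant to the purpose."""
    dims = list(context_space)
    for values in product(*(context_space[d] for d in dims)):
        point = dict(zip(dims, values))
        if matters(point) and def_a(point) != def_b(point):
            return False, point   # first point of material disagreement
    return True, None

# Two definitions of "urgent case" that differ only outside one purpose's scope.
def_a = lambda c: c["severity"] >= 3
def_b = lambda c: c["severity"] >= 3 or c["region"] == "remote"
space = {"severity": range(1, 6), "region": ["urban", "remote"]}
print(functionally_equivalent(def_a, def_b, space, lambda c: c["region"] == "urban"))  # (True, None)
print(functionally_equivalent(def_a, def_b, space, lambda c: True))                    # (False, first disagreeing point)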
Again, those aren’t “actors”; those are subjectively motivated, peripherally inspired, politically directed participants seeking the implementation of their own aspects of the ontology, which they individually feel are important to them and perhaps to the people they represent. As for the rest of the ontology, each “actor” must feel, others can do whatever they want with it. That doesn’t make all committee members motivated to apply the 140,000 codes in toto, just in the areas they want to measure.
The NCOIC SCOPE
model is an attempt to define such a context space and scope dimensional
“scales” so that two or more systems can determine whether they can
interoperate correctly for their intended purposes. Note that semantic
interoperability is only a portion of the SCOPE model dimension
set. Conversely, the SCOPE model is explicitly limited in scope to interactions
that are possible over a network connection. It does not address physical
interoperability, for example.
It doesn’t seem to me that network communications are as significant as representational divergence. The 140,000 codes will not be interpreted in the same way by all physicians, most of whom will only worry about what has to be reported so they can get reimbursed. Physicians are already overmanaged and overregulated; they don’t even have time to talk to patients much anymore. At most, fifteen minutes goes toward listening to the patient and giving a prescription or a referral.
Analysis of the data force-fitted into this 140,000 code ontology will be based on what little familiarity each physician happens to gain with the codes in his specialty area. Yet all kinds of statistical analyses, classifications, inferences, and abductions will be drawn from databases containing signs entered purportedly in compliance with the ontology.
Hans
-Rich
Dear David,
You wrote:
… In this
example, the terms as used in C' and C'' are effectively specializations (via
added constraints) of the term in C. To transmit a C' or C'' thing as a C
thing is a fair substitution; but to receive a C thing as a C' or C'' thing
does an implicit narrowing that is not necessarily valid.
…
In practice, though, such
an understanding of the differences (or that there are differences) among similar terms as used in C, C' and
C'' often comes out only after a failure has occurred. In real-world use of any
sort of language that does not have mechanical, closed-world semantics, that
potentially invalid narrowing is not only unpreventable, but is often the
"least worst" translation that can be made into the receiver's
conceptualization. Every organization and every person applies their own semantic
baggage (added constraints) to supposedly common terms; said "local
modifications" are discovered, defined and communicated only after a problem arises.
Your analysis seems promising, but I suggest there is at least one more complication: the description of C must also have been loaded with the “semantic baggage” of the person who defined it, just as C’ and C” were. Therefore C seems likely to also be a specialization of some even more abstract concept C-, which may not have contained the baggage of C, C’ or C”.
There is no pure abstraction C- in most of
the descriptions for concepts so far as I have seen in our discussions.
Every concept seems to have been modulated by the proposer’s semantic
baggage. Since it is always a PERSON who produces the conceptualization C
in the first place, it isn’t possible to be that abstract.
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
On 3/5/2012 9:08 AM, John F. Sowa wrote:
Base vocabulary V: A collection of terms defined precisely at a level
of detail sufficient for interpreting messages that use those terms
in a general context C.
System A: A computational system that imports vocabulary V and uses
the definitions designated by the URIs. But it uses the terms in
a context C' that adds further information that is consistent with C.
That info may be implicit in declarative or procedural statements.
System B: Another computational system that imports and uses terms
in V. B was developed independently of A. It may use terms in V
in a context C'' that is consistent with the general context C,
but possibly inconsistent with the context C' of System A.
Problem: During operations, Systems A and B send messages from
one to the other that use only the vocabulary defined in V.
But the "same" message, which is consistent with the general
context C, may have inconsistent implications in the more
specialized contexts C' and C''.
My thinking began similar to what Patrick Cassidy wrote. In this example,
the terms as used in C' and C'' are effectively specializations (via added
constraints) of the term in C. To transmit a C' or C'' thing as a C thing
is a fair substitution; but to receive a C thing as a C' or C'' thing does an
implicit narrowing that is not necessarily valid.
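A small Python sketch of that asymmetry follows. The constraints are hypothetical, chosen only to illustrate the point: a C'-conformant thing is always an acceptable C thing, but reading an arbitrary C thing under the added constraints of C'' can fail.

# C is the shared definition from vocabulary V; C' and C'' are the local
# specializations (added constraints) used by Systems A and B.

def conforms(thing, constraints):
    return all(check(thing) for check in constraints)

C_constraints  = [lambda t: "id" in t]                               # shared context C
C1_constraints = C_constraints + [lambda t: t.get("units") == "kg"]  # System A's C'
C2_constraints = C_constraints + [lambda t: t.get("units") == "lb"]  # System B's C''

msg_from_A = {"id": 42, "units": "kg"}  # built under C', transmitted "as a C thing"

# Widening: a C' thing is always an acceptable C thing.
assert conforms(msg_from_A, C_constraints)

# Narrowing: System B receives the C thing and implicitly reads it as a C'' thing.
# The message is perfectly valid in C, yet violates B's added constraint.
print(conforms(msg_from_A, C2_constraints))  # False: the implicit narrowing was invalid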
In practice, though, such an understanding of the differences (or that there are differences) among similar terms as
used in C, C' and C'' often comes out only after a failure has occurred.
In real-world use of any sort of language that does not have mechanical,
closed-world semantics, that potentially invalid narrowing is not only
unpreventable, but is often the "least worst" translation that can be
made into the receiver's conceptualization. Every organization and every
person applies their own semantic baggage (added constraints) to supposedly
common terms; said "local modifications" are discovered, defined and
communicated only after a problem
arises.
Should we then blame the common model (ontology, lexicon, schema, exchange
format, whatever) for having been incomplete or wrong for the task at
hand? Nobody wants to complicate the model with the infinite number of
properties/attributes that don't matter. You just need to model exactly
the set of properties/attributes that are necessary and sufficient to prevent
all future catastrophes under all integration scenarios that will actually
happen, and none of those that won't happen. Easy! If you can predict the future.
In digest mode,
--
David Flater, National Institute of Standards and Technology,
U.S.A.