
Re: [ontolog-forum] Linked Data meme revisited

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Hans Polzer" <hpolzer@xxxxxxxxxxx>
Date: Thu, 12 Dec 2013 21:32:10 -0500
Message-id: <028e01cef7ab$86c94a10$945bde30$@verizon.net>

The reason for bringing in the notion of context is that while possible context attributes are indeed open-ended, the operative context dimensions in any given situation are usually enumerable and discoverable. But they require participants to “zoom out” from their interaction focus in order to become aware of them and understand their impact on the interchange. That’s something that humans don’t do very well, which is why we have interoperability issues in the first place – we tend to assume other entities see the world the same way we do. It’s an unnatural act to explore one’s context assumptions – until confronted by some other entity which clearly doesn’t share them.


But we don’t want to wait until we attempt to connect our systems and associated data and services to someone else’s and then “discover” what doesn’t work or align. We’d like to explore context assumptions at system or ontology design time and make as many of them explicit and discoverable by external entities as is possible/pragmatic, so as to permit more dynamic rendezvous in cyberspace. It is possible to identify the major context elements/dimensions that are likely to affect the feasibility of any two or more diverse/autonomous entities interacting with each other – although it gets more difficult as the number of such entities increases and as their operational domains and scopes diverge. Any large-scale integration, “federation”, “system of systems”, “virtual enterprise”, or “globalization” project already does this, albeit in an ad hoc manner. The NCOIC SCOPE model is an attempt to identify the major context/scope dimensions that drive data representation and interoperability issues for network-connected systems from diverse domains representing autonomous entities. It’s based on generalizing/abstracting actual problems encountered when trying to connect a sizable number of real-world systems representing different domains and institutions.
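To make the design-time idea concrete, here is a minimal sketch of what declaring context dimensions up front might look like. This is purely illustrative: the dimension names, the dictionary layout, and the `context_mismatches` helper are hypothetical inventions, not the actual NCOIC SCOPE schema. The point is only that once each system publishes its operative context assumptions, mismatches can be detected by simple comparison before integration rather than “discovered” at run time.

```python
# Hypothetical design-time context descriptors (NOT the real SCOPE model):
# each system declares the context assumptions it operates under.
system_a = {
    "units": "metric",
    "coordinate_system": "WGS84",
    "time_reference": "UTC",
}
system_b = {
    "units": "imperial",
    "coordinate_system": "WGS84",
    "time_reference": "UTC",
}

def context_mismatches(a, b):
    """Compare two declared contexts dimension by dimension and
    return the set of dimensions on which they disagree."""
    return {dim for dim in a.keys() & b.keys() if a[dim] != b[dim]}

# The units mismatch surfaces at design time, before any data flows.
print(context_mismatches(system_a, system_b))  # {'units'}
```

Of course, the hard part the paragraph above describes is enumerating the right dimensions in the first place; the comparison itself is trivial once they are explicit.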


In other words, we’d like to move both the “detection process” and the “discovery process” upstream in the system life-cycle timeline, so that any run-time code for detection and discovery is able to deal with the mismatches likely to be encountered (ideally), or requires only limited/local modification (rather than major redesign/re-architecting because anticipatable context/scope differences were not anticipated). This is not much of an issue in human-to-human dialog, because dissonance detection and discovery dialog come fairly naturally to most people. But information system dynamism is a lot more complicated, expensive, risky, fraught with uncertainty, and time consuming. Will you trust your system to work correctly (from your perspective) when it adapts dynamically to discovered context/scope assumptions in some other system you asked it to connect with, i.e., executes the detection and discovery process on your behalf? Heaven forbid it should encounter some system or organization in cyberspace you weren’t even aware of and decide to leverage its capabilities to your advantage, but without your knowledge.




From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Black
Sent: Thursday, December 12, 2013 7:21 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Linked Data meme revisited





On Dec 11, 2013, at 8:50 PM, "Hans Polzer" <hpolzer@xxxxxxxxxxx> wrote:



I agree that it is the execution of a process that allows convergence on the sense of a particular word in a particular exchange.


How about this: the "sense" or "meaning" of a term is the execution of a process, in more than one agent, that continuously increases coordination between those agents around that term. What I am trying to get at is that the process increases coordination so long as the process continues, but it never finally converges on anything. Or perhaps that thing is the coordination between the agents, not a representation of anything. In contrast to a zero-sum game-theoretic semantics, we would have a cooperative game that continues as long as the players want to play.

However, I would suggest that the additional “inputs” you mention as being part of that process are in fact the context attributes/parameters and values that are most germane to selecting the appropriate sense of the word for the exchange in question.


Unless we can say that there is something more than the inputs to a process, why bring in the idea of context? A set of inputs implies a finite, or at least countable, number of discrete data points. As Pat Hayes and others have argued at times, context ends up covering everything. I agree, however, that the set of inputs is very much like what we have all called context.

I further believe there are two distinct processes at play here.


The two processes you name are the kind of thing I am thinking of. But I think there are many more such sub-processes. I also think that there are very different processes for different kinds of terms. That is why, I believe, we need to think of the coordination that is achieved, which is truly remarkable, and not how closely we arrive at an identical representation, which is impossible.



One is what I call the “cognitive dissonance detection” process, which allows humans to detect that the word sense the “sender” or “publisher” is attempting to convey isn’t what the receiver thinks is being sent. This process is two-way in dialog but only one-way in reading published material (though a good author attempts to anticipate this, or plays on it deliberately, depending on the nature of the publication). The second process is the “discovery dialog” process, in which the exchange participants, once any of them detects cognitive dissonance, launch into additional exchanges that attempt to clarify each other’s word senses and either converge on an appropriate word sense or “agree to disagree”, depending on the nature of the triggering exchange.
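The two processes described above can be sketched, very loosely, in code. Everything here is hypothetical and deliberately toy-sized: `detect_dissonance` stands in for the first process (noticing the mismatch) and `discovery_dialog` for the second (exchanging candidate senses until one is shared, or agreeing to disagree).

```python
# Illustrative sketch (all names hypothetical) of the two processes:
# (1) cognitive-dissonance detection, (2) a discovery dialog that runs
# until the parties converge on a word sense or agree to disagree.

def detect_dissonance(sent_sense, received_sense):
    """Process 1: notice that the sender's and receiver's senses differ."""
    return sent_sense != received_sense

def discovery_dialog(sender_senses, receiver_senses):
    """Process 2: exchange candidate senses; converge on a shared one,
    or report 'agree to disagree' when no overlap exists."""
    shared = set(sender_senses) & set(receiver_senses)
    return shared.pop() if shared else "agree to disagree"

sent, received = "financial institution", "river bank"
outcome = None
if detect_dissonance(sent, received):
    outcome = discovery_dialog(
        {"financial institution", "river bank"},  # sender's candidates
        {"river bank"},                           # receiver's candidates
    )
print(outcome)  # river bank
```

The real processes are of course open-ended dialogs, not set intersections; the sketch only shows the control flow: detection triggers discovery, and discovery ends in either convergence or an explicit disagreement.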




From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Black
Sent: Wednesday, December 11, 2013 7:29 PM
To: [ontolog-forum] 
Subject: Re: [ontolog-forum] Linked Data meme revisited



Hans, John, Rich,


Recently I asserted that the sense of a word is an affordance resulting from the execution of a process. Furthermore, I asserted that it was common knowledge of that process that allowed communication with a term to exist. But now I see that common knowledge is not what is required. In fact, since it implies some sort of representation, which I want to replace entirely by process, it falls into the very error I am trying to avoid. Instead, I should have asserted that it is the shared ownership of a common process, combined with shared inputs to that process, among a set of agents, that affords the utility of a term, word or URI.



Your points are further underscored by the issue of context


If the sense of a term, word or URI is some utility afforded by a process, then there is no need to appeal to "context". Instead, the process of making sense of a term needs only to be able to accept inputs in addition to the term itself. Different values for those inputs afford different results; the same inputs get the same results, that's all. No context is required. When a group of agents shares ownership of both the process and the inputs to that process for making sense of a term, then all members of the group are afforded a greater utility. It is an example of the network effect. I like to think of a trending hashtag on Twitter. These have virtually no syntax, just the semantics that results from the shared process of making sense of them given the inputs available to all.
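The "process plus inputs, no context" claim above is essentially a claim about a deterministic function. A minimal sketch, with an entirely hypothetical `make_sense` function and a toy disambiguation table, might look like this:

```python
# Hypothetical sketch: the "sense" of a term as the output of a shared,
# deterministic process applied to the term plus additional inputs.
# The table below is a toy; nothing here comes from a real library.

def make_sense(term, inputs):
    """Deterministic sense-making: same term + same inputs -> same result."""
    senses = {
        ("bank", frozenset({"river", "water"})): "sloping land beside a river",
        ("bank", frozenset({"money", "deposit"})): "financial institution",
    }
    return senses.get((term, frozenset(inputs)), "unresolved")

# Two agents sharing both the process and the inputs arrive at the same
# sense, with no separate notion of "context" anywhere in the signature.
agent_a = make_sense("bank", {"money", "deposit"})
agent_b = make_sense("bank", {"money", "deposit"})
print(agent_a == agent_b)  # True
```

On this picture, what people call "context" is just the second argument: additional inputs to a shared, deterministic process.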



Of course, this is all very frustrating to people who want universal interoperability and understandability - that "universal business language translator" mentioned somewhat tongue-in-cheek(ly) in a classic commercial

What is remarkable, in my opinion, is how effective languages are, be they natural, formal or some hybrid. The difficulty of reaching perfection is dwarfed by the ubiquity of the utility afforded by languages. And I personally am very optimistic about creating machines that can share in the utility afforded to those communities surrounding common terms - once we learn how to simulate the processes, and the inputs to those processes, that humans use to make sense of terms.



The idea of using precise symbols and terminology in science and in programming languages is useful -- but only for a very narrow application.

The reason why natural languages are so flexible is that a finite vocabulary can be adapted to an infinite range of applications. That implies that it's impossible (and undesirable) to force words to be used with fixed and frozen

I don't think it will be feasible in the next decade to find a universal dictionary.

I would revise that point in the following way:

   It will *never* be possible or desirable to have a fixed dictionary
   of precisely defined word senses for any natural language.


I would dispute that there is much difference between the difficulties and utility of natural vs. scientific or programming languages. And I certainly hope you are not implying that it is possible to have fixed representations, definitions or precisely defined word senses of symbols in scientific and programming languages. I don't think it is, any more than with natural languages. Instead, here again, it is the shared ownership of a process, and of the inputs to that process, that affords us the utilities of formal languages as well. In other words, it is not how rigidly the sense is defined or represented, but how consistently the process and the inputs to it are shared amongst agents, that affords formal terms a more consistent utility.


Or so it seems to me.


John Black



Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J


