
Re: [ontolog-forum] LInked Data meme revisited

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: John Black <JohnBlack@xxxxxxxxxxx>
Date: Thu, 12 Dec 2013 07:21:05 -0500
Message-id: <71D9F928-9B0D-45FD-BD8D-5BE92B3DB577@xxxxxxxxxxx>

On Dec 11, 2013, at 8:50 PM, "Hans Polzer" <hpolzer@xxxxxxxxxxx> wrote:

> I agree that it is the execution of a process that allows convergence on the sense of a particular word in a particular exchange.

How about this: the "sense" or "meaning" of a term is the execution of a process, in more than one agent, that continuously increases coordination between those agents around that term. What I am trying to get at is that the process increases coordination so long as the process continues, but it never finally converges on anything. Or perhaps the thing converged on is the coordination between the agents, not a representation of anything. In contrast to a zero-sum game-theoretical semantics, we would have a cooperative game that continues as long as the players want to play.

> However, I would suggest that the additional “inputs” you mention as being part of that process are in fact context attributes/parameters and values that are most germane to selecting the appropriate sense of the word for the exchange in question.

Unless we can say that there is something more than the inputs to a process, why bring in the idea of context? A set of inputs implies a finite, or at least countable, number of discrete data. As Pat Hayes and others have argued at times, context ends up covering everything. I agree, however, that the set of inputs is very much like what we have all called context.

> I further believe there are two distinct processes at play here.

The two processes you name are the kind of thing I am thinking of. But I think there are many more such sub-processes. I also think that there are very different processes for different kinds of terms. That is why, I believe, we need to think of the coordination that is achieved, which is truly remarkable, and not of how closely we arrive at an identical representation, which is impossible.


> One is what I call the “cognitive dissonance detection” process, which allows humans to detect that the word sense the “sender” or “publisher” is attempting to convey isn’t what a receiver thinks is being sent. This process is two-way in dialog but only one-way in reading published material (but a good author attempts to anticipate this or plays on it deliberately, depending on the nature of the publication). The second process is the “discovery dialog process”, in which the exchange participants, once any of them detect cognitive dissonance, launch into additional exchanges that attempt to clarify each other’s word senses and either converge on an appropriate word sense or “agree to disagree”, depending on the nature of the triggering exchange.
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Black
Sent: Wednesday, December 11, 2013 7:29 PM
To: [ontolog-forum] 
Subject: Re: [ontolog-forum] LInked Data meme revisited
Hans, John, Rich,
Recently I asserted that the sense of a word is an affordance resulting from the execution of a process. Furthermore, I asserted that it was common knowledge of that process that allowed communication with a term to exist. But now I see that common knowledge is not what is required. In fact, since it implies some sort of representation, which I want to replace entirely by process, it falls into the very error I am trying to avoid. Instead, I should have asserted that it is the shared ownership of a common process, combined with shared inputs to that process, among a set of agents, that affords the utility of a term, word or URI.

> Your points are further underscored by the issue of context
If the sense of a term, word or URI is some utility afforded by a process, then there is no need for appeal to "context". Instead, the process of making sense of a term needs only to be able to accept inputs in addition to the term itself. Different values for those inputs afford different results; the same inputs yield the same results, that's all. No context is required. When a group of agents shares ownership of both the process and the inputs to that process for making sense of a term, then all members of the group are afforded a greater utility. It is an example of the network effect. I like to think of a trending hashtag on Twitter. These have virtually no syntax, just the semantics that results from the shared process of making sense of it given the inputs available to all.
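The idea above can be sketched as code. This is a minimal, purely illustrative model (the names `make_sense`, `shared_inputs`, and the input keys are all hypothetical, not from any real system): the "sense" of a term is the output of a deterministic process applied to the term plus additional inputs, so agents that share both the process and the inputs coordinate without any static shared representation or separate "context" object.

```python
# Hypothetical sketch: the "sense" of a term as the result of a shared,
# deterministic process applied to the term plus additional inputs.

def make_sense(term, inputs):
    """Deterministic sense-making process: the same term with the same
    inputs always yields the same result; different inputs yield
    different results."""
    return (term, tuple(sorted(inputs.items())))

# Two agents that share both the process and the inputs arrive at the
# same sense for a term.
shared_inputs = {"forum": "ontolog", "date": "2013-12-12"}
sense_a = make_sense("#LinkedData", shared_inputs)
sense_b = make_sense("#LinkedData", shared_inputs)
assert sense_a == sense_b

# An agent supplying different inputs gets a different result for the
# very same term.
sense_c = make_sense("#LinkedData", {"forum": "other-list", "date": "2013-12-12"})
assert sense_c != sense_a
```

Nothing about the term itself is "looked up" here; the coordination comes entirely from sharing the process and its inputs, which is the point being argued.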
> Of course, this is all very frustrating to people who want universal interoperability and understandability - that "universal business language translator" mentioned somewhat tongue-in-cheek(ly) in a classic commercial
What is remarkable, in my opinion, is how effective languages are, be they natural, formal or some hybrid. The difficulty of reaching perfection is dwarfed by the ubiquity of the utility afforded by languages. And I personally am very optimistic about creating machines that can share in the utility afforded to those communities surrounding common terms - once we learn how to simulate the processes, and the inputs to those processes, that humans use to make sense of terms.
> The idea of using precise symbols and terminology in science and in programming languages is useful -- but only for a very narrow application. The reason why natural languages are so flexible is that a finite vocabulary can be adapted to an infinite range of applications. That implies that it's impossible (and undesirable) to force words to be used with fixed and frozen


> I don't think it will be feasible in the next decade to find a universal dictionary.

> I would revise that point in the following way:

>    It will *never* be possible or desirable to have a fixed dictionary of precisely defined word senses for any natural language.
I would dispute that there is much difference between the difficulties and utility of natural versus scientific or programming languages. And I certainly hope you are not implying that it is possible to have fixed representations, definitions or precisely defined word senses of symbols in scientific and programming languages. I don't think it is, any more than with natural languages. Instead, here again, it is the shared ownership of a process, and of inputs to that process, that affords us the utilities of formal languages as well. In other words, it is not how rigidly the sense is defined or represented, but how consistently the process and the inputs to it are shared amongst agents, that affords formal terms a more consistent utility.
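A tiny concrete illustration of that last claim, using nothing beyond Python's built-in `int()` as the shared process: the value afforded by a formal token depends on the process and the inputs supplied to it, not on a frozen definition attached to the token itself.

```python
# Even in a formal language, the result afforded by a term depends on the
# shared process (here, Python's int() parser) and the inputs given to it.
# The same token "10" yields different values under different inputs
# (the base argument); agents agree only because they share both.
assert int("10", 10) == 10   # interpreted as decimal
assert int("10", 2)  == 2    # interpreted as binary
assert int("10", 16) == 16   # interpreted as hexadecimal
```

The token "10" has no value of its own here; its consistent utility comes from every agent running the same parsing process with the same inputs.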
Or so it seems to me.
John Black

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J

