Sean,
I do not have a direct answer to your question. As you know, I have been
monitoring and contributing to this forum for years. Lately I have developed a
kind of "combat fatigue", and it looks like I am not alone. But looking at the
answers you have gotten so far, I feel it is safe for me to express my opinion
without being attacked for misspelling or misunderstanding something. I agree
with your observations about the different camps and schools of thought
contributing (at least in the past) to this forum. Despite all the criticism it
has received, it remains (IMHO) a unique place where questions like yours can
be raised and given adequate attention. I also appreciate your responses, which
are always constructive.
Coming back to your question: I have argued on this forum more than once that
some form of logical grounding is necessary to allow agents to cooperate.
However, I have always struggled to find a formal definition of grounding that
would be universal. There are of course other existing notions of grounding,
such as the classic AI problem of symbol grounding, which connects an internal
symbolic representation of the world with the actual world. But in my view, the
grounding needed for agents to cooperate is fundamentally dynamic and local, as
is the context of cooperation. In my mind, context and grounding have the same
relationship as intension and extension in logic, with grounding taking the
place of extension. This resonates with what you call a decision route.
Unfortunately, I don't know of any formalization of that kind of grounding.
Intensional logic based on possible worlds does not help, since (as was pointed
out on this forum) it is based on enumeration or indexing of possible worlds,
which is extensional. I have suggested that Category Theory could be used to
formalise this contextual grounding, possibly based on the institution theory
of Goguen, but I lack the expertise to show it.
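
To give a flavour of what I have in mind, here is a minimal sketch in Haskell
of an institution in the sense of Goguen and Burstall, with the category of
signatures collapsed to bare signature morphisms. All the names here are my
own illustrative choices, not an existing library:

    -- An institution packages sentences, models and satisfaction, indexed
    -- by signatures. Here 'sig -> sig' stands in for a signature morphism
    -- f : s -> s'.
    data Institution sig sen mdl = Institution
      { translateSen :: (sig -> sig) -> sen -> sen  -- sentence functor
      , reductMdl    :: (sig -> sig) -> mdl -> mdl  -- model functor (contravariant)
      , satisfies    :: sig -> mdl -> sen -> Bool   -- satisfaction relation
      }

    -- The satisfaction condition: for f : s -> s', a model m' over s' and a
    -- sentence e over s, translating the sentence forward and reducing the
    -- model backward must agree.
    satisfactionHolds :: Institution sig sen mdl
                      -> (sig -> sig) -> sig -> sig -> mdl -> sen -> Bool
    satisfactionHolds inst f s s' m' e =
      satisfies inst s' m' (translateSen inst f e)
        == satisfies inst s (reductMdl inst f m') e

The satisfaction condition is the property I would want contextual grounding
to have: agreement between agents that survives a change of vocabulary.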
My own research led me to the idea of a 'stream', which is a sequence of
event-condition-action transitions in a network of typed tuples. The only
publication describing it is my patent:
http://www.patents.com/Method-apparatus-mediated-cooperation-7634501.html
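
In case it helps to see that definition as code, here is one possible reading,
again in Haskell. All names are invented for illustration; the patent does not
prescribe this particular representation:

    import qualified Data.Map as Map

    type TupleType = String
    type Tuple     = (TupleType, [String])         -- a typed tuple: tag plus fields
    type Store     = Map.Map TupleType [[String]]  -- the network of typed tuples

    -- One event-condition-action transition over the store.
    data Transition = Transition
      { event     :: Tuple           -- incoming event, recorded as a tuple
      , condition :: Store -> Bool   -- guard evaluated after the event lands
      , action    :: Store -> Store  -- update applied when the guard holds
      }

    insertTuple :: Tuple -> Store -> Store
    insertTuple (ty, fields) = Map.insertWith (++) ty [fields]

    -- A stream is a sequence of such transitions; running it folds the
    -- guarded updates over the network of typed tuples.
    runStream :: [Transition] -> Store -> Store
    runStream ts store0 = foldl step store0 ts
      where
        step store t =
          let store' = insertTuple (event t) store
          in if condition t store' then action t store' else store'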
My company is getting funding from DARPA to develop a prototype based on that
idea, using grounding as a first-class notion for constructing situational
ontologies and mapping them to databases and to each other. You can get a
quick demo here:
http://code.google.com/p/ontobase/
I am very open to any suggestions and collaboration opportunities.
Len Yabloko, Owner/CEO
Next Generation Software
www.ontospace.net
>-----Original Message-----
>From: sean barker [mailto:sean.barker@xxxxxxxxxxxxx]
>Sent: Monday, August 9, 2010 03:17 PM
>To: ontolog-forum@xxxxxxxxxxxxxxxx
>Subject: [ontolog-forum] Looking for a razor
>
>Is there a named area of study which considers specifically the process of
>interpreting a sign, together with the shared knowledge needed by two agents
>who communicate (using signs)?
>
>At one extreme, Agent 1 goes into a grocer's shop, and presses buttons on
>Agent 2 for "3", "red", and "apples", and a simple mechanical system delivers
>the fruit. Here the knowledge is all on Agent 1's side, and includes both the
>semantics of "3", "red", and "apples", and knowledge about vending machines.
>
>At the other extreme, the two agents are people, say an American tourist
>having got off the Paris RER in one of the suburbs, and an Algerian
>shopkeeper. Here the American uses knowledge about common social systems, and
>therefore identifies the context "shop" and so knows the appropriateness of
>attempting to buy apples. On the other side, the shopkeeper identifies the
>probable language from knowledge of a range of languages, and translates the
>phrasing to a probable match "Trois", "Pommes" and "Rouge" (allowing for
>different syntactic structures in each language), and so on. Here both agents
>use a considerable amount of knowledge to be able to communicate at all. (The
>complete sequence of "Hungarian Tourist Guide" sketches by Monty Python can
>be used to extend the argument.)
>
>The reason for the question is that the semantic web relies on symbols which
>are effectively decoded in advance (like the fixed buttons in the first
>example, or URIs in RDF). A major goal of the semantic web is to broker
>communication between agents which use either common symbols or equivalent
>symbols (sameAs). However, the business processes which stand behind such
>operations ground the symbols in the artefacts and actions of the systems
>operating those processes. Communication is reliable only if the symbols used
>by both agents are grounded in the same way. I note that a number of the
>arguments on this forum seem to be between two camps, one assuming that the
>grounding problem is trivial, the other assuming that it is extremely
>difficult. Therefore I am looking for a razor that can cut between the
>"ontologies as a formal system" and "ontology term grounding" parts of the
>discussion, and so ensure that both parts are solved.
>
>I should also throw in my view that the ontology classes used by a business
>process are exactly those classes which label the alternative routes onward
>from a decision process, and which therefore define the grounding of terms.
>
>Sean Barker
>
>Bristol
>