
Re: [ontolog-forum] Car attitudes

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Mills Davis <lmd@xxxxxxxxxxxxxx>
Date: Sat, 4 Aug 2007 07:27:56 -0400
Message-id: <36E0981F-C211-4E23-BD2E-3E458D1C5F1C@xxxxxxxxxxxxxx>
Danny,

Part of the beauty of the software agent concept is thinking about it in different ways. For example, agents can be thought of as a society (a la Marvin Minsky) of cooperating entities. In this reading, agents become the fundamental (most granular) building blocks of potentially very large assemblages, each part content, part behavior, and part knowledge representation. For this to work, each agent needs some basic self-awareness and autonomic properties. That is, our notions of object-orientation (i.e., black boxes whose insides are invisible to other software) and stack architecture (i.e., layered abstractions in which communications are limited to the layers immediately above and below) need to be deconstructed. Self-awareness at the most granular level (whether we are speaking of software, models, or documents) becomes necessary in order to automate change management and versioning.

So, what if the concepts, relationships, etc. of ontologies are not "data structures" but rather assemblages of semantic agents? What if documents are not "content objects," but rather assemblages of semantic agents at the most granular level? What if the services and behaviors of software are expressed not as procedural "objects" but as assemblages of declarative semantic agents? What if all forms of intellectual property in a pervasive computing ecosystem are autonomic and expressed as declarative semantic agents?
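To make the "assemblage" reading a little more concrete, here is a minimal sketch in Python of what a document built from granular semantic agents might look like. This is purely illustrative, not an implementation of any existing framework: every class, field, and method name here is invented. The point is only that each agent bundles content, behavior, and a scrap of knowledge representation, and carries its own version counter as the minimal self-awareness needed for change management.

```python
# Hypothetical sketch: a "semantic agent" as the most granular building
# block of a document. All names are invented for illustration.

class SemanticAgent:
    def __init__(self, content, knowledge):
        self.content = content        # the "part content"
        self.knowledge = knowledge    # the "part knowledge representation"
        self.version = 1              # minimal self-awareness for versioning

    def revise(self, new_content):
        # The "part behavior": the agent manages its own change history
        # rather than relying on an external change-management layer.
        self.content = new_content
        self.version += 1

# A "document" as an assemblage of granular agents rather than a
# monolithic content object.
document = [
    SemanticAgent("Title", {"type": "heading"}),
    SemanticAgent("First paragraph...", {"type": "paragraph"}),
]

document[1].revise("First paragraph, edited.")
print([a.version for a in document])  # → [1, 2]
```

Under this reading, versioning falls out of the agents themselves: only the revised agent's version advances, so change tracking is local rather than imposed from a layer above.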

Mills

 
On Aug 4, 2007, at 5:10 AM, Danny Ayers wrote:

What a wonderful thread!

It's well-timed for me, having recently rediscovered an enthusiasm for
software agents (specifically on the web, write-up in progress...).

It looks like agent and agency are more of those terms that are really
hard to pin down. The definitions[1] that Google's aware of (meta-pun
intended) seem to be split between acting on a person's behalf and
being in some way animated, neither of which really avoids John's
"miracle".

But I do think, for _practical_ purposes, the notion of an agent is
really helpful. For example, it might be reasonable to model a large
pebble as an inanimate object. But that large pebble might be acting
as a door stop. It seems a lot easier to say that the pebble is an
agent which doesn't really do a great deal than to say it's an
inanimate thing that under certain circumstances can take on
quasi-active roles.

I'm not sure, but I suspect the neatest way of reflecting this kind of
thing in software is for the agent to have "self-awareness" in the
form of descriptive data, accessible through a standard protocol. The
default situation, as in a pebble's case, would just be a
question/answer that would go something like: "you there?"..."yep, I'm
here". If it was holding the door open, the relationship between the
pebble and the door would also be encoded in the self-description.
(OK, I admit I'm thinking that pebble, door and relationship should
all have URIs dereferenceable with HTTP).
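Danny's idea above can be sketched in a few lines of Python. This is a toy model under stated assumptions, not any standard agent API: the URIs, the predicate name, and the ping/describe methods are all invented for illustration. Each agent carries descriptive data as simple subject/predicate/object triples, answers the default "you there?" liveness query, and encodes the pebble-holds-the-door relationship in its own self-description.

```python
# Toy sketch of an agent with "self-awareness" as descriptive data.
# URIs and predicate names are hypothetical, chosen for illustration.

class Agent:
    def __init__(self, uri):
        self.uri = uri
        self.description = []  # (subject, predicate, object) triples

    def ping(self):
        # The default behavior, even for a pebble: confirm presence.
        return "yep, I'm here"

    def describe(self):
        # Self-description: what a client dereferencing this agent's
        # URI might receive.
        return list(self.description)

pebble = Agent("http://example.org/pebble/1")
door = Agent("http://example.org/door/1")

# Encode the door-stop relationship in the pebble's self-description.
pebble.description.append(
    (pebble.uri, "http://example.org/rel/holdsOpen", door.uri)
)

print(pebble.ping())        # → yep, I'm here
print(pebble.describe())
```

A pebble that isn't doing anything would simply have an empty description and still answer the ping, which is exactly the "agent which doesn't really do a great deal" case.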

Hmm, that was a bit tangential...what I'm trying to say is that there
are likely to be fewer surprises if everything is modelled as being
potentially active, rather than drawing the distinction, i.e.
everything is miraculous.

Cheers,
Danny.



_________________________________________________________________



Mills Davis
Managing Director
Project10X
202-667-6400
202-255-6655 cel
1-800-713-8049 fax




_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
