What a wonderful thread!
It's well-timed for me, having recently rediscovered an enthusiasm for
software agents (specifically on the web, write-up in progress...).
It looks like "agent" and "agency" are more of those terms that are really
hard to pin down. The definitions Google's aware of (meta-pun
intended) seem to be split between acting on a person's behalf and
being in some way animated. Neither of which really avoids John's point.
But I do think, for _practical_ purposes, the notion of an agent is
really helpful. For example, it might be reasonable to model a large
pebble as an inanimate object. But that large pebble might be acting
as a door stop. It seems a lot easier to say that the pebble is an
agent which doesn't do a great deal than to say it's an
inanimate thing that under certain circumstances can take on agency.
I'm not sure, but I suspect the neatest way of reflecting this kind of
thing in software is for the agent to have "self-awareness" in the
form of descriptive data, accessible through a standard protocol. The
default situation, as in a pebble's case, would just be a
question/answer that would go something like: "you there?"..."yep, I'm
here". If it was holding the door open, the relationship between the
pebble and the door would also be encoded in the self-description.
(OK, I admit I'm thinking that the pebble, the door and the relationship
should all have URIs dereferenceable over HTTP.)
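To make the protocol idea concrete, here's a minimal sketch in Python of
what the self-description and the default question/answer exchange might
look like. Everything here is hypothetical: the URIs are placeholders, the
"@id" key just borrows a JSON-LD-ish convention, and `ask` stands in for
whatever the standard protocol would actually be (e.g. an HTTP GET against
the agent's URI):

```python
# Hypothetical URIs for the pebble, the door, and the relationship.
# In the real thing each of these would be dereferenceable over HTTP.
PEBBLE_URI = "http://example.org/agents/pebble-1"
DOOR_URI = "http://example.org/things/door-7"
HOLDS_OPEN_URI = "http://example.org/relations/holdsOpen"

# The pebble's "self-awareness" as plain descriptive data. While it's
# acting as a door stop, the relationship to the door is encoded here too.
self_description = {
    "@id": PEBBLE_URI,
    "type": "Agent",
    HOLDS_OPEN_URI: DOOR_URI,  # currently holding this door open
}

def ask(agent, question):
    """A stand-in for the standard protocol: liveness, then description."""
    if question == "you there?":
        return "yep, I'm here"  # the default exchange, even for a pebble
    if question == "describe yourself":
        return agent  # hand back the descriptive data
    return None
```

The point of the sketch is that the pebble-as-agent costs almost nothing:
a pebble that isn't holding anything open is the same code with one fewer
entry in its description.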
Hmm, that was a bit tangential. What I'm trying to say is that there
are likely to be fewer surprises if everything is modelled as being
potentially active, rather than drawing the animate/inanimate
distinction up front, i.e. everything is miraculous.