Pat wrote in response to Sean:
Sean>> Autonomous military systems require significant "separation of
>> concerns", especially including separation of the concern for humanity
>> as a whole from concern for the success of a narrowly-defined military
>> mission.
Pat> It is very unlikely indeed that autonomous military systems will
> have any ability to think about humanity as a whole. I think it likely
> that this is often true for autonomous biological military systems,
> especially when under enemy fire.
Sean>> A robot that fetches claret is amusing, but an autonomous target
>> selector/destroyer is monstrous.
Pat> Better get used to the idea. Prototypes are being built as we speak.
> Already there are devices deployed in Iraq which return fire from a
> humvee completely automatically (and with deadly precision). They can
> extrapolate back to the firing point by listening to the attacking
> bullets. Personally, I have no problem with this, myself.
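As an aside, the back-extrapolation Pat describes is, at its core,
time-difference-of-arrival (TDOA) localization. Here is a toy far-field
sketch (all numbers invented; real counter-sniper systems fuse many
sensor pairs in 3D and also exploit the bullet's supersonic shock wave):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def bearing_from_delay(mic_separation_m: float, delay_s: float) -> float:
    """Far-field bearing (degrees) of a muzzle blast relative to the
    axis of a two-microphone pair, from the arrival-time difference.
    Toy model only; a real system needs several such pairs."""
    # Path difference = speed * delay; clamp to keep acos in domain.
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / mic_separation_m))
    return math.degrees(math.acos(ratio))

# A blast arriving 1 ms earlier at one mic of a 1 m pair lies
# roughly 70 degrees off the pair's axis.
print(round(bearing_from_delay(1.0, 0.001), 1))
```

Two mic pairs at right angles resolve the remaining ambiguity and give a
full bearing to the firing point.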
Well then, I hope we can come up with a REALLY good definition of
"free fire zones" for these agents.
:-)
We can put off the fuzzier topic of "concern for humanity". But really,
as a semantic agent community, shouldn't we give some more thought to
"target selection" constraints? Humans have a distaste for certain
things and actions as a result of a long evolution, and cultures shape
some of our possible selections. Building in such constraints seems like
a good strategy...but of course combat lowers the barriers to such
things....and that's perhaps a problem for the development of our early
autonomous agents.
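To make "building in such constraints" concrete, here is a minimal
sketch of a conjunctive rules-of-engagement filter. Everything in it
(the class, the field names, the 0.95 threshold) is invented for
illustration; the only real point is that the engagement predicate is a
conjunction, so any single failed constraint vetoes action:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical track handed up by a sensor-fusion layer."""
    is_firing: bool          # actively shooting at friendly forces?
    in_free_fire_zone: bool  # inside a declared free-fire zone?
    confidence: float        # classifier confidence in [0, 1]

def may_engage(c: Candidate, human_approved: bool) -> bool:
    """Every constraint must hold before the weapon layer is even
    consulted; the human veto is just one more conjunct."""
    return (c.is_firing
            and c.in_free_fire_zone
            and c.confidence >= 0.95
            and human_approved)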
Gary Berg-Cross, Ph.D.
Spatial Ontology Community of Practice (SOCoP)
http://www.visualknowledge.com/wiki/socop
Executive Secretariat
Semantic Technology
EM&I
Suite 350, 455 Spring Park Place
Herndon VA 20170
703-742-0585
-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
Sent: Friday, August 31, 2007 1:29 PM
To: Steve Newcomb
Cc: [ontolog-forum]
Subject: Re: [ontolog-forum] electric sheep
>Sean Barker wrote:
>
>> At what level of complexity do I need to start concerning
>> myself with Semantics rather than just Pragmatics? At what point
>> would one say the robot "understands concepts", rather than behaves
>> according to particular pragmatics?
>
>> I should add that as we develop increasingly complex autonomous
>> systems, we need to create architectures that provide proper
>> separation of concerns, so this is primarily a question about
>> engineering, rather than philosophy.
>
>Autonomous military systems require significant "separation of
>concerns", especially including separation of the concern for humanity
>as a whole from concern for the success of a narrowly-defined military
>mission.
It is very unlikely indeed that autonomous military systems will have
any ability to think about humanity as a whole. I think it likely
that this is often true for autonomous biological military systems,
especially when under enemy fire.
>A robot that fetches claret is amusing, but an autonomous target
>selector/destroyer is monstrous.
Better get used to the idea. Prototypes are being built as we speak.
Already there are devices deployed in Iraq which return fire from a
humvee completely automatically (and with deadly precision). They can
extrapolate back to the firing point by listening to the attacking
bullets. Personally, I have no problem with this, myself.
> If we must have such things, then it
>might be a good idea to insist that their behaviors reflect deep
>"concerns" about many things other than their narrowly-defined
>missions.
Not a chance. The best we can do is to make sure that they are not
*completely* autonomous, but that human advisors are still in their
decision loops. This at least passes the buck to something that can
be prosecuted in a military court, in order to protect its
commander-in-chief.
Pat
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx