Re: [ontolog-forum] electric sheep

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Joshua Lieberman <jlieberman@xxxxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 31 Aug 2007 14:18:24 -0400
Message-id: <2F9657B8-BD70-4720-9CDE-97C372FDC495@xxxxxxxxxxxxxxxxxxxxxxxx>
There is an amusing (and alarming) story from the filming of Lord of the  
Rings concerning Massive, the autonomous-agent animation program used  
for the larger battle scenes. It seems the first attempts at defining  
the parameters of warrior behavior resulted in most of the agents  
fleeing the battle scene. Only after concern for personal safety was  
reduced to an appalling level could the battle animations actually  
proceed in a "realistic" fashion.    (01)
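
A toy sketch of the kind of parameter at issue -- purely illustrative,
not Massive's actual behavior model -- in which a single
self-preservation weight tips a simulated crowd from fighting to
fleeing:

import random

def agent_action(aggression, self_preservation, perceived_danger):
    """Return 'fight' or 'flee' for one agent on one tick."""
    flee_drive = self_preservation * perceived_danger
    return "fight" if aggression >= flee_drive else "flee"

def simulate(n_agents, self_preservation, seed=0):
    """Fraction of the crowd that stands and fights, given a fear weight."""
    rng = random.Random(seed)
    fighters = 0
    for _ in range(n_agents):
        aggression = rng.uniform(0.0, 1.0)        # per-agent temperament
        perceived_danger = rng.uniform(0.5, 1.0)  # battle is dangerous
        if agent_action(aggression, self_preservation,
                        perceived_danger) == "fight":
            fighters += 1
    return fighters / n_agents

# With self-preservation near 1.0 most of the army runs; dial it down
# to an "appalling level" and the battle can proceed.
for sp in (1.0, 0.5, 0.1):
    print(f"self_preservation={sp}: {simulate(10_000, sp):.0%} fight")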

To stretch the point a bit: the line between what we might call  
"understanding" and mere pragmatics may also be a matter of scale.    (02)

Joshua Lieberman    (03)

On Aug 31, 2007, at 1:28 PM, Pat Hayes wrote:    (04)

>> Sean Barker wrote:
>>
>>>     At what level of complexity do I need to start concerning
>>>  myself with Semantics rather than just Pragmatics? At what point
>>>  would one say the robot "understands concepts", rather than behaves
>>>  according to particular pragmatics?
>>
>>>     I should add that as we develop increasingly complex autonomous
>>>  systems, we need to create architectures that provide proper
>>>  separation of concerns, so this is primarily a question about
>>>  engineering, rather than philosophy.
>>
>> Autonomous military systems require significant "separation of
>> concerns", especially including separation of the concern for  
>> humanity
>> as a whole from concern for the success of a narrowly-defined  
>> military
>> mission.
>
> It is very unlikely indeed that autonomous military systems will have
> any ability to think about humanity as a whole. I suspect the same is
> often true of autonomous biological military systems, especially when
> under enemy fire.
>
>> A robot that fetches claret is amusing, but an autonomous target
>> selector/destroyer is monstrous.
>
> Better get used to the idea. Prototypes are being built as we speak.
> Already there are devices deployed in Iraq which return fire from a
> humvee completely automatically (and with deadly precision). They can
> extrapolate back to the firing point by listening to the attacking
> bullets. Personally, I have no problem with this.
>
>>  If we must have such things, then it
>> might be a good idea to insist that their behaviors reflect deep
>> "concerns" about many things other than their narrowly-defined
>> missions.
>
> Not a chance. The best we can do is to make sure that they are not
> *completely* autonomous, but that human advisors are still in their
> decision loops. This at least passes the buck to something that can
> be prosecuted in a military court, in order to protect its
> commander-in-chief.
>
> Pat
> -- 
> ---------------------------------------------------------------------
> IHMC          (850)434 8903 or (650)494 3973   home
> 40 South Alcaniz St.  (850)202 4416   office
> Pensacola                     (850)202 4440   fax
> FL 32502                      (850)291 0667    cell
> phayesAT-SIGNihmc.us       http://www.ihmc.us/users/phayes
>
>
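An aside on the return-fire systems Pat mentions: acoustic
shooter-location generally works by comparing the arrival times of a
shot's sound across a small microphone array and solving back for the
firing point. A minimal sketch of the idea, with a hypothetical array
geometry and a brute-force solver (fielded systems also exploit the
bullet's supersonic shockwave, and do far better than this):

import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def arrival_times(source, mics):
    """Time for the muzzle blast to reach each microphone."""
    return [math.dist(source, m) / SPEED_OF_SOUND for m in mics]

def locate(mics, times, span=500.0, step=2.0):
    """Grid-search the 2-D point whose predicted arrival-time
    *differences* (relative to mic 0) best match the observed ones."""
    observed = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            pred = arrival_times((x, y), mics)
            diffs = [t - pred[0] for t in pred]
            err = sum((d - o) ** 2 for d, o in zip(diffs, observed))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Hypothetical vehicle-mounted array (metres) and a shooter 300 m out.
mics = [(0.0, 0.0), (1.5, 0.0), (0.0, 1.5), (1.5, 1.5)]
shooter = (240.0, 180.0)
print(locate(mics, arrival_times(shooter, mics)))  # recovers ~(240, 180)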


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (06)
