
Re: [ontolog-forum] electric sheep

To: edbark@xxxxxxxx, "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Dennis L. Thomas" <DLThomas@xxxxxxxxxxxxxxxxxxxxxxxx>
Date: Sun, 2 Sep 2007 11:29:08 -0700
Message-id: <151A5331-B7F8-4B50-A3A2-283F760689B5@xxxxxxxxxxxxxxxxxxxxxxxx>

On Aug 31, 2007, at 2:40 PM, Ed Barkmeyer wrote:

But managing conflicting directives and values is a problem for 
many humans.  So we shouldn't be surprised that it is a problem 
for artificial intelligences.  And it is a favorite topic for 
study in perhaps the only part of philosophy this forum hasn't 
yet talked much about -- ethics.

Ethics and values are excellent topics, since almost every organization worth its salt adheres to a set of core values that serve to guide (constrain) its missions and objectives.  These values are essential constraints for decision-making at every level.  In a large organization such as the U.S. Government, the Constitution of the United States provides the national values, but each agency has its own values and missions, as does each of its departments.  This compounds the complexity problem and places enormous compliance costs on government.
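
One way to picture this, in knowledge-representation terms, is as
nested constraint sets: a decision must satisfy the values at its
own level and at every enclosing level, which is exactly where the
compounding comes from.  A minimal Python sketch (the levels, rules,
and field names here are all hypothetical, purely to illustrate):

class ValueSet:
    # One level of values; a decision must satisfy this level
    # and every level above it.
    def __init__(self, name, rules, parent=None):
        self.name = name
        self.rules = rules      # each rule: decision dict -> bool
        self.parent = parent

    def permits(self, decision):
        # Compliance compounds: every enclosing level must agree.
        ok = all(rule(decision) for rule in self.rules)
        return ok and (self.parent is None or self.parent.permits(decision))

# Illustrative hierarchy: national -> agency -> department.
national = ValueSet("Constitution", [lambda d: d.get("due_process", True)])
agency = ValueSet("Agency", [lambda d: d.get("mission_aligned", False)], national)
dept = ValueSet("Department", [lambda d: d.get("budget_approved", False)], agency)

print(dept.permits({"mission_aligned": True, "budget_approved": True}))   # True
print(dept.permits({"mission_aligned": True, "budget_approved": False}))  # False

Every added level multiplies the checks a single decision must pass,
which is one way to read the compliance-cost point above.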

Dennis L. Thomas 
Knowledge Foundations, Inc.
Ofc (714) 890-5984 
Cell (760) 500-9167 
------------------------------------------------
Managing the Complexity of Enterprise Knowledge


On Aug 31, 2007, at 2:40 PM, Ed Barkmeyer wrote:

Joshua Lieberman wrote:

There is an amusing/alarming story from the filming of Lord of the  
Rings concerning the use of a massive autonomous agent animation  
program for the larger battles. It seems the first attempts at  
defining the parameters of warrior behavior resulted in most of the  
agents fleeing the battle scene. Only after concern for personal  
safety was reduced to an appalling level could the battle animations  
actually proceed in a "realistic" fashion.

This comes as no surprise.  For thousands of years, soldiering 
has been motivated by one or more of:
  (1) protection of home and family;
  (2) belief in a higher good that transcends self-preservation;
  (3) lust for battle and loot, something to make life worth 
living in a world where life was drab and short;
  (4) belief that your army is invincible and that you are 
therefore unlikely to take damage, or at least that soldiering 
is no more dangerous than many other jobs;
  (5) force and fear: soldiers were serfs and criminals serving 
time, and while the enemy might kill you if you charge, your own 
MPs will surely kill you if you run;
  (6) social force: soldiering is a requirement of manhood and a 
sacred duty, and cowardice makes you a worthless outcast.

And in practice, it is the inverse of (4) -- the perception that 
your army is going to lose -- that makes self-preservation a 
higher priority than the other motivations involved.  If you 
lose, the other objectives/consequences will not be fulfilled. 
And (6) regularly trumps even that -- Leonidas' 300 Spartans, the 
captain who goes down with his ship, the 12-year-old drummer 
beating rally-round-the-flag for a routed regiment, and so on.

You don't have to program in sheer recklessness, although it is 
definitely a factor;  you just have to model the operating value 
systems correctly.
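
To make that concrete, here is a toy Python sketch (all weights and
the scoring rule are invented for illustration), keyed to the six
motivations above, in which the perceived chance of defeat discounts
everything except social force and inflates self-preservation:

MOTIVATIONS = {
    "protect_home": 0.6,         # (1)
    "higher_good": 0.5,          # (2)
    "battle_and_loot": 0.3,      # (3)
    "believed_invincible": 0.4,  # (4)
    "force_and_fear": 0.7,       # (5)
    "social_duty": 0.9,          # (6)
}

def will_fight(weights, p_defeat):
    # Inverse of (4): if the battle looks lost, every motivation
    # except social force is discounted, since a lost battle
    # fulfils none of the other objectives; self-preservation grows.
    discount = 1.0 - p_defeat
    drive = sum(w * (1.0 if name == "social_duty" else discount)
                for name, w in weights.items())
    self_preservation = 1.0 + 2.0 * p_defeat
    return drive >= self_preservation

print(will_fight(MOTIVATIONS, p_defeat=0.2))  # True: stands and fights
print(will_fight(MOTIVATIONS, p_defeat=0.9))  # False: flees the field
spartans = dict(MOTIVATIONS, social_duty=3.5)
print(will_fight(spartans, p_defeat=0.9))     # True: (6) trumps defeat

On this reading, the LOTR animators' fix -- turning the
self-preservation term down to an appalling level -- is just one
crude point in a parameter space that tuning (6) could have reached
more plausibly.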

But managing conflicting directives and values is a problem for 
many humans.  So we shouldn't be surprised that it is a problem 
for artificial intelligences.  And it is a favorite topic for 
study in perhaps the only part of philosophy this forum hasn't 
yet talked much about -- ethics.

-Ed

P.S. My son observed: Players of the Lord of the Rings card game 
all know that the Retreat and Cancel a Skirmish cards are among 
the most effective for advancing your Fellowship.  Running away 
is a great tactic if you can make it work to your advantage.

-- 
Edward J. Barkmeyer                        Email: edbark@xxxxxxxx
National Institute of Standards & Technology
Manufacturing Systems Integration Division
100 Bureau Drive, Stop 8263                Tel: +1 301-975-3528
Gaithersburg, MD 20899-8263                FAX: +1 301-975-4694

"The opinions expressed above do not reflect consensus of NIST,
  and have not been reviewed by any Government authority."






