
Re: [ontolog-forum] electric sheep

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Steve Newcomb <srn@xxxxxxxxxxxxx>
Date: 31 Aug 2007 09:31:21 -0400
Message-id: <87642vq192.fsf@xxxxxxxxxxxxxxxxxxx>
Sean Barker wrote:

>       At what level of complexity do I need to start concerning
> myself with Semantics rather than just Pragmatics? At what point
> would one say the robot "understands concepts", rather than behaves
> according to particular pragmatics?

>       I should add that as we develop increasingly complex autonomous
> systems, we need to create architectures that provide proper
> separation of concerns, so this is primarily a question about
> engineering, rather than philosophy.

Autonomous military systems require significant "separation of
concerns", especially including separation of the concern for humanity
as a whole from concern for the success of a narrowly defined military
mission.

A robot that fetches claret is amusing, but an autonomous target
selector/destroyer is monstrous.  If we must have such things, then it
might be a good idea to insist that their behaviors reflect deep
"concerns" about many things other than their narrowly defined
missions.

In a 19th-century novel that still reverberates strongly in popular
culture, Mary Shelley wrote about what happens when a marvelous
engineering task is accomplished in the absence of awareness of
broader issues.

In a series of novels about robots, Isaac Asimov examined the
implications of having "Laws of Robotics" that reflect the broadest
concerns for the welfare of humanity.  One of the later novels is kind
of a murder mystery; it's all about a robot who is already dead when
the novel begins.  By the end of the novel, we understand that the
robot had got himself into a jam in which he had no options at all,
under the "Laws" he was bound to obey.  As a result, he suffered from
a kind of halting problem.  It turned out to have been neither murder,
nor suicide, nor a system failure.  In a sense, the Laws of Robotics
were Broken As Designed (BAD), in that they did not provide a way
for a robot to survive their demands.

It's so much easier to build a monster.  Let's just forget about those
pesky philosophical questions.  Let's get on with the engineering!
(;^)

-- Steve

Steven R. Newcomb, Consultant
Coolheads Consulting

Co-editor, Topic Maps International Standard (ISO/IEC 13250)
Co-editor, draft Topic Maps -- Reference Model (ISO/IEC 13250-5)

srn@xxxxxxxxxxxxx
http://www.coolheads.com

direct: +1 910 363 4032
main:   +1 910 363 4033
fax:    +1 910 454 8461

268 Bonnet Way
Southport, North Carolina 28461 USA

(This communication is not private.  Since the destruction of the 1978
Foreign Intelligence Surveillance Act by the U.S. Congress on August
5, 2007, no electronic communications of innocent citizens can be
hidden from the U.S. government.  Shamefully, our own generation,
acting on fears promoted by fraudulently elected rogues, has allowed
absolute power (codenamed "unitary Executive") to be usurped by those
very same rogues.  Hail Caesar!)


