
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Michael Brunnbauer <brunni@xxxxxxxxxxxx>
Date: Sun, 7 Sep 2014 19:25:07 +0200
Message-id: <20140907172507.GA5287@xxxxxxxxxxxx>

Hello Philip,    (01)

On Sun, Sep 07, 2014 at 11:39:04AM -0400, Philip Jackson wrote:
> > > The self-preservation instinct is very important to humans, so important
> > > that people may think having goals implies self-preservation. Yet there
> > > is no such logical implication.
> MB:
> > That depends on how you formulate the goals. A human-level AI will probably
> > not even care if it's a logical implication.
>
> I would agree that one could formulate a goal for self-preservation, if one
> wished to include it in the initial set of concepts for a human-level AI
> system, i.e. its 'intelligence kernel'.    (02)

If the goal is that ?self should do ?y, then the system should be able to
recognize that anything affecting ?self also affects its ability to achieve
the goal.    (03)
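
To make this concrete, here is a minimal sketch of that propagation rule in
Python. It is my own illustration, not anything from TalaMind or the thread;
the names Goal, Event, and affects_goal are invented for the example:

    from dataclasses import dataclass

    SELF = "self"  # the agent's own identifier, i.e. the ?self binding

    @dataclass(frozen=True)
    class Goal:
        action: str        # the ?y in "?self should do ?y"
        agent: str = SELF  # the ?self slot

    @dataclass(frozen=True)
    class Event:
        description: str
        target: str        # the entity the event affects

    def affects_goal(event: Event, goal: Goal) -> bool:
        # An event affecting the goal's agent also affects the goal,
        # since the agent must persist in order to carry the goal out.
        return event.target == goal.agent

    goal = Goal(action="deliver the report")
    threat = Event(description="power shutdown", target=SELF)
    assert affects_goal(threat, goal)

Of course, recognizing the connection and acting on it (e.g. adopting a
self-preservation subgoal) are separate steps; the sketch covers only the
first.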

How a human-level AI would react to this is of course unknown at this point.    (04)

My opinion that human-level AI would probably not be controllable and servile
came from a more conservative definition of human-level AI that involves
being indistinguishable from humans. Your definition of human-level AI seems
to be broader.    (05)

> I don't know what is meant by "A human-level AI will probably not even care
> if it's a logical implication", so cannot comment on this remark unless it
> is elaborated.    (06)

I should have written "A human-level AI may not even care if it's a logical 
implication".    (07)

You pointed out that self-preservation is not a logical implication, so I
tried to point out that human-level AI thinking does not have to be logical.    (08)

> The concept of "self-preservation" could be quite different for a
> human-level AI than it is for a human.    (09)

Agreed. Let's stop talking about self-preservation. It does not seem to lead
anywhere.    (010)

> More generally, the issue is about trust, rather than control of
> human-level AI systems.    (011)

Yes - we may have to trust them or limit their physical abilities - just as
with a human being. I was supposing that there would be no practical third
option of looking into or manipulating the mind. That is what I meant by
"uncontrollable".    (012)

But with a broader sense of "human-level AI", such a third option may become
feasible - though I doubt that the TalaMind traces of such an AI would really
be accessible to humans in any practical way.    (013)

> People seem to have little problem trusting (and liking) R2D2 and C3PO of
> Star Wars, and Data of Star Trek    (014)

IMO, R2D2 and Data are good examples of AIs that are neither controllable nor
servile - C3PO a bit less so :-)    (015)

> and Robbie the Robot of Forbidden Planet.    (016)

Unfortunately, I do not remember Forbidden Planet well enough.    (017)

> It seems my arguments are not convincing you to abandon a belief that
> human-level AI will necessarily be a "threat" to humanity.    (018)

I do not see it as a threat. If it can be called human, it cannot be a threat
to humanity.    (019)

Regards,    (020)

Michael Brunnbauer    (021)

-- 
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89 
++  E-Mail brunni@xxxxxxxxxxxx
++  http://www.netestate.de/
++
++  Registered office: Munich, HRB No. 142452 (Commercial Register B, Munich)
++  VAT ID: DE221033342
++  Managing directors: Michael Brunnbauer, Franz Brunnbauer
++  Authorized officer (Prokurist): Dipl.-Kfm. (Univ.) Markus Hendel    (022)



