
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Michael Brunnbauer <brunni@xxxxxxxxxxxx>
Date: Sat, 6 Sep 2014 22:27:57 +0200
Message-id: <20140906202756.GA2877@xxxxxxxxxxxx>

Hello Philip,    (01)

On Sat, Sep 06, 2014 at 03:50:25PM -0400, Philip Jackson wrote:
> > It would conclude that switching it off threatens its goals.
>  
> Not necessarily. If it were designed to focus on only a single problem at a
> time, then it might conclude that it could not make any more progress on a
> particular problem until certain physical experiments were performed, and
> recommend that it be switched off until experimental results were available.
> Further, it might not have any physical abilities that would prevent its
> being switched off, if people decided to.    (02)

I am just suggesting that having goals implies self-preservation: an AI that is
switched off can no longer pursue its goals. You quoted someone saying that AIs
need not have a self-preservation instinct.    (03)

> > It would also have to be augmented with another human-level AI 
> > deciding which sub-goals or actions lie within "understanding 
> > theoretical physics", which would have to be augmented with 
> > another human-level AI deciding which sub-goals or actions
> > lie within deciding which sub-goals or actions lie within... etc.
>  
> I think this is a specious argument against the possibility of human-level
> AI in general. There does not seem to be any basis for a claim that every
> problem an AI system must solve requires another, completely different AI
> system. Indeed, there is an existence proof to the contrary in the fact that
> human-level intelligence is achieved within an individual human brain, which
> is a finite organism. If a computer can perform the processing that
> corresponds to intelligence in the brain, then human-level AI does not involve
> an infinite regress. If something beyond a Turing machine is required to
> perform such processing, then presumably whatever physical process is required
> could be replicated in a physical device of some sort.    (04)

I certainly did not want to argue against the possibility of human-level AI,
but against the possibility of controlling it.    (05)

Constraining an AI to a high-level goal such as understanding theoretical
physics seems a really hard problem to me. How do you decide which activities
may lead to that goal, and how do you stop the AI from daydreaming or reading
novels (which might actually be helpful)?    (06)

If you manually develop heuristics for this, you will probably not get a good
physicist. An AI that is not free to set its own goals seems crippled.    (07)

Do you give it an electrical jolt if it does not churn out a paper every 
week? :-)    (08)

Well, nudges may actually work, and you may get the "domesticated animal" from
one of your quotes. Domesticated animals still go feral sometimes, of course.    (09)

Regards,    (010)

Michael Brunnbauer    (011)

-- 
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89 
++  E-Mail brunni@xxxxxxxxxxxx
++  http://www.netestate.de/
++
++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
++  USt-IdNr. DE221033342
++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel    (012)



