
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 6 Sep 2014 18:02:47 -0400
Message-id: <SNT147-W45A7395F3911B06A0C3515C1C30@xxxxxxx>
Hello Michael,
 
On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
> On Sat, Sep 06, 2014 at 03:50:25PM -0400, Philip Jackson wrote:
> > MB:
> > > It would conclude that switching it off threatens its goals.
> PJ:
> > Not necessarily. If it were designed to focus on only a single
> > problem at a time, then it might conclude that it could not make any
> > more progress on a particular problem until certain physical experiments
> > were performed, and recommend that it be switched off until experimental
> > results were available.
> >
> > Further, it might not have any physical abilities that would prevent its
> > being switched off, if people decided to.
>
MB:
> I am just suggesting that having goals implies self-preservation. You quoted
> someone saying that AIs need not have self-preservation instinct.
 
I am suggesting that having goals does not necessarily imply self-preservation. I quoted a passage from Leslie Valiant's 2013 book on machine learning which says that systems could have intelligence matching or superior to that of humans in some respects, without having an instinct for self-preservation.
 
The self-preservation instinct is very important to humans, so important that people may think having goals implies self-preservation. Yet there is no such logical implication. AI systems with goals and goal-processing were developed back in the 1950s and 1960s, and such systems did not have goals for self-preservation.
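 
To make the point concrete, here is a toy illustration in Python (the names and the toy domain are mine, purely for this email, not taken from any historical system or from the thesis). The only goal the program knows about is the one handed to it as data; nothing about its own survival appears unless somebody writes it in:

from collections import deque

def solve(goal, operators, start, max_depth=6):
    """Breadth-first search for a sequence of operators that satisfies
    the given goal. The search has no built-in objective concerning the
    program's own continued existence."""
    frontier = deque([(start, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan                      # found a plan achieving the goal
        if len(plan) < max_depth:
            for name, op in operators:
                frontier.append((op(state), plan + [name]))
    return None                              # give up; nothing "self-preserving" happens

# Toy domain: reach a value of at least 10, starting from 3.
operators = [("add5", lambda s: s + 5), ("double", lambda s: s * 2)]
print(solve(lambda s: s >= 10, operators, start=3))   # -> ['add5', 'add5']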
 
Also, an AI system would not necessarily equate being switched off with self-destruction. It might regard being switched off the way humans think of sleep, expecting to be reawakened whenever appropriate. On a long space voyage, an AI system might switch itself off, expecting to be reawakened automatically at its destination.
 
MB:
> Constraining an AI to such a high level goal as understanding theoretical
> physics seems a really hard problem to me.
 
I don't want to claim it's an easy problem, but it doesn't seem in principle any harder than other problems related to human-level AI.
 
> How do you decide which activity may lead to that goal and stop the AI from
> daydreaming or reading novels (which might actually be helpful)?
 
Maybe you don't try. Maybe you allow the system to daydream or read novels. One aspect of discovery, discussed in the thesis, is reasoning metaphorically across domains.
 
Maybe all you do is give the system a target date, and ask it to try to write a paper on a certain problem by that date. The system's ability to act physically, or its lack of any such ability, could be what prevents it from doing anything harmful to people.
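 
As a rough sketch of what I have in mind (the class, field names, and action names below are placeholders I made up for this email, not a design from the thesis), the task given to such a system might amount to no more than a problem statement, a target date, and a short list of permitted actions:

from dataclasses import dataclass
from datetime import date

@dataclass
class ResearchTask:
    """Illustrative only: a task whose permitted actions are limited to
    reading, simulating, and writing, leaving no physical channel
    through which the system could harm anyone."""
    problem: str
    target_date: date
    allowed_actions: tuple = ("read_literature", "run_simulations", "write_paper")

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions

task = ResearchTask(
    problem="Reconcile theory X with experimental dataset Y",
    target_date=date(2015, 3, 1),
)
print(task.permits("write_paper"))      # True
print(task.permits("order_equipment"))  # False: outside its sphere of action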
 
> If you manually develop heuristics for this you will probably not get a good
> physicist. An AI that is not free to set its goals seems crippled.
 
I would agree that a human-level AI system needs to be able to develop its own heuristics, and to have some freedom to set its own goals. Control may be accomplished by limiting its sphere of action. If its goals are expressible in English and open to examination by people, or by other AI systems, that could also help.
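 
Again purely as an illustration (not a worked-out design), one could imagine the system's goals being kept as plain English text in a ledger that people, or other AI systems, must review and approve before anything is acted on:

class GoalLedger:
    """Sketch of goal transparency: goals are stored as English sentences
    and only become active once a reviewer approves them."""

    def __init__(self):
        self.entries = []   # each entry: {"goal": text, "approved": bool}

    def propose(self, goal_text: str) -> int:
        self.entries.append({"goal": goal_text, "approved": False})
        return len(self.entries) - 1        # index used for later approval

    def approve(self, index: int) -> None:
        self.entries[index]["approved"] = True

    def approved_goals(self):
        return [e["goal"] for e in self.entries if e["approved"]]

ledger = GoalLedger()
i = ledger.propose("Derive testable predictions from hypothesis H by March.")
ledger.approve(i)                           # a human reviewer signs off
print(ledger.approved_goals())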
 
> Do you give it an electrical jolt if it does not churn out a paper every
> week? :-)
 
It might enjoy an electric jolt now and then :)
 
> Well, nudges may actually work and you may get the "domesticated animal" from
> one of your quotes. Those still get feral sometimes, of course.
 
The reference to domesticated animals was also in the quote from Valiant. He didn't note that such animals sometimes become feral. My understanding is that ferality results from the self-preservation instinct in domestic animals that have been abandoned.
 
Regards,
 
Phil

 

