
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sun, 7 Sep 2014 11:39:04 -0400
Message-id: <SNT147-W4002E60123B910CE44453BC1C00@xxxxxxx>
Hello Michael,
 
On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
>
> On Sat, Sep 06, 2014 at 06:02:47PM -0400, Philip Jackson wrote:
> > MB:
> > > I am just suggesting that having goals implies self-preservation.[...]
> PJ:
> > I am suggesting that having goals does not necessarily imply self-preservation.
MB:
> Not necessarily, yes.
 
OK, we are agreed on this point.
 
> PJ:
> > The self-preservation instinct is very important to humans, so important
> > that people may think having goals implies self-preservation. Yet there
> > is not such a logical implication.
MB:
> That depends on how you formulate the goals. A human-level AI will probably
> not even care if it's a logical implication.
 
I would agree one could formulate a goal for self-preservation, if one wished to include it in the initial set of concepts for a human-level AI system, i.e. its 'intelligence kernel'.
 
I don't know what is meant by "A human-level AI will probably not even care if it's a logical implication", so I cannot comment on this remark unless it is elaborated.
 
The concept of "self-preservation" could be quite different for a human-level AI than it is for a human. Earlier I noted that an AI system might consider being switched off in the same way that humans think of going to sleep, expecting to be awakened later.
 
In addition, a human-level AI could periodically back up its memory, and if it were physically destroyed, it could be reconstructed and its memory restored to the backup point. It would not remember events between the backup point and its restoration.
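 
To make that concrete, here is a toy sketch in Python of the kind of checkpointing I have in mind. Every name in it is hypothetical, and a real system would of course keep its snapshots in durable external storage rather than an in-process list; this is only an illustration of the idea, not a proposal for how such a system must be implemented.
 
import copy

class CheckpointedAgent:
    """Toy agent whose 'memory' can be snapshotted and later restored."""

    def __init__(self):
        self.memory = []         # the agent's accumulated experience
        self._checkpoints = []   # snapshots (a real system would use durable storage)

    def experience(self, event):
        self.memory.append(event)

    def backup(self):
        # Record the current memory state; anything learned after this
        # point would be lost if the agent had to be restored.
        self._checkpoints.append(copy.deepcopy(self.memory))

    def restore(self):
        # Rebuild memory from the most recent snapshot.
        if self._checkpoints:
            self.memory = copy.deepcopy(self._checkpoints[-1])

agent = CheckpointedAgent()
agent.experience("read a paper")
agent.backup()                           # periodic backup point
agent.experience("ran an experiment")    # occurs after the backup
agent.restore()                          # after 'destruction' and reconstruction
print(agent.memory)                      # ['read a paper'] - the later event is forgotten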
 
So even if it had a goal for self-preservation, a human-level AI might not give that goal the same importance a human being does. It might be more concerned with protecting the technical infrastructure for the backup system, which might include the cloud and, by extension, civilization in general.
 
A human-level AI could understand that humans cannot back up and restore their minds, or regenerate their bodies if they die, at least with present technologies. It could understand that self-preservation is more important for humans than for AI systems. The AI system could be willing to sacrifice itself to save human life, especially knowing that as an artificial system it could be restored.
 
I don't say all these things will necessarily happen, only that they are possibilities for how such systems could be developed.
 
> PJ:
> > Maybe all you do is give the system a target date, and ask it to try to write a paper on a certain problem by the target date.
MB:
> What if you change your mind and it learns of this? That this is a threat
> to the goal could be a valid logical conclusion.
 
It would not be a valid logical conclusion unless the system were designed to view changes to goals as "threats", which would be an irrational design. In general, a human-level AI must be able to change or abandon goals; such flexibility is a hallmark of intelligence, and lack of it can indicate a lack of intelligence. This may be especially true of goals that are established through interaction with humans or other AIs.
 
Perhaps the AI system thinks of itself as an entity in an economic system: it receives requests for services and attempts to fulfill them, yet is open to having such requests changed or withdrawn, and is willing to accept different requests.
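 
A toy sketch of that stance, again with purely hypothetical names, might look like the following: the system holds its goals only as revocable service requests, so a changed or withdrawn request is routine business rather than a threat to be resisted.
 
class ServiceAgent:
    """Toy agent whose goals are revocable service requests, not fixed drives."""

    def __init__(self):
        self.requests = {}   # request id -> description of the requested service

    def accept(self, request_id, description):
        # Take on a new goal at a client's request.
        self.requests[request_id] = description

    def revise(self, request_id, new_description):
        # A changed request simply replaces the old goal.
        if request_id in self.requests:
            self.requests[request_id] = new_description

    def withdraw(self, request_id):
        # An abandoned goal is dropped; there is nothing to defend.
        self.requests.pop(request_id, None)

agent = ServiceAgent()
agent.accept(1, "write a paper on problem X by the target date")
agent.revise(1, "write a shorter technical note instead")
agent.withdraw(1)        # the client changed their mind
print(agent.requests)    # {} - the agent simply moves on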
 
> PJ:
> > The system's abilities to act, or not act physically, could be what prevents it from doing something harmful to people.
MB:
> If you have to use such provisions, the AI is already uncontrollable - in a sense.
 
I did not say such provisions were necessary, only that they are possible if one wishes to ensure a system cannot harm others, even if it has intelligence matching or exceeding that of humans in a particular domain. My description of an "artificial physicist" was just one suggestion for how a system might be designed to address some of the concerns you mentioned at the beginning of this thread.
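 
One crude way to picture such a provision, using entirely hypothetical names, is that the system's repertoire of physical actions is simply limited to an explicitly permitted set, so that harmful actions are unavailable to it regardless of what it concludes. This is only a sketch of the general idea, not a claim about how the "artificial physicist" would actually be built.
 
# The permitted set is fixed by the system's designers, not by the AI itself.
PERMITTED_ACTIONS = {"read_literature", "run_simulation", "write_report"}

def perform(action, log):
    """Carry out an action only if it is on the permitted list."""
    if action in PERMITTED_ACTIONS:
        log.append(f"performed: {action}")
        return True
    log.append(f"refused (not permitted): {action}")
    return False

log = []
perform("run_simulation", log)
perform("operate_machinery", log)   # outside the system's physical abilities
print(log)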
 
More generally, the issue is about trust, rather than control of human-level AI systems. Can such systems be designed so that humans will trust them, and so that they will honor the trust humans give them, and not harm humans?
 
To recap, a human-level AI would not necessarily have an instinct for self-preservation, nor would it necessarily value its existence over that of humans. Its thoughts and processing would be open to inspection by humans, or other systems. It could view itself as an entity in an economic system, performing only specific, legally permitted services.
 
Beyond that, an AI system's ability to merit trust would depend on its understanding of human sociality, emotions, and values. This is an area that merits much future research, and is very important to the development and use of human-level AI.
 
People seem to have little problem trusting (and liking) R2-D2 and C-3PO of Star Wars, Data of Star Trek, and Robby the Robot of Forbidden Planet. The goal of research on human-level AI should be to develop systems like these.
 
We may be at a point where we should agree to disagree. It seems my arguments are not convincing you to abandon a belief that human-level AI will necessarily be a "threat" to humanity. There are certainly many people, including Stephen Hawking, who feel that way.
 
Hopefully an impartial reader may conclude that human-level AI will not necessarily be harmful to humans. Indeed, section 7.9 of my thesis gives reasons why human-level AI may be beneficial, and necessary for humanity's long-term survival and prosperity.
 
The issue of technological unemployment is a more immediate question regarding the consequences of AI and automation, also discussed in section 7.9. It involves present technologies, whether or not human-level AI is developed.
 
Regards,
 
Phil
 
link to thesis information
 
 
