
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 6 Sep 2014 15:50:25 -0400
Message-id: <SNT147-W70D1E37050D45B570EA2B5C1C30@xxxxxxx>
Hello Michael,

On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
>
> On Sat, Sep 06, 2014 at 10:33:19AM -0400, Philip Jackson wrote:
> > These are useful distinctions, and I agree this is a very important
> > topic for study and research. However, these distinctions aren't mutually
> > exclusive or exhaustive: A human-level AI could be controllable and servile,
> > and also have some capabilities that match ours, some that exceed ours, and
> > some that don't match ours. The extent to which it is controllable and servile
> > would depend on its goals and range of capabilities. A human-level AI would
> > not necessarily have all the same priorities and capabilities that human beings do.
> >
> > For instance, a human-level AI might be designed to be an "artificial scientist",
> > specializing in a particular domain, e.g. theoretical physics. It might not
> > have any goals outside understanding theoretical physics.
>
> It would conclude that switching it off threatens its goals.
 
Not necessarily. If it were designed to focus on only one problem at a time, it might conclude that it could not make further progress until certain physical experiments were performed, and recommend that it be switched off until the experimental results were available.
 
Further, it might not have any physical abilities that would prevent it from being switched off, if people decided to do so.
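To make this concrete, here is a minimal sketch in Python of what such a goal structure might look like. It is purely illustrative; the names (SingleProblemAgent, Problem, and so on) are hypothetical, and a real design would be far more elaborate. The point is only that the goal-consistent action for a blocked, single-problem agent can be to recommend its own suspension:

    # Illustrative sketch only: an agent whose single goal leads it to
    # recommend its own suspension when progress is blocked on experiments.
    from dataclasses import dataclass, field

    @dataclass
    class Problem:
        name: str
        # Experiments whose results are prerequisites for further progress.
        pending_experiments: list = field(default_factory=list)

    class SingleProblemAgent:
        """Works on exactly one problem; has no self-preservation goal."""
        def __init__(self, problem: Problem):
            self.problem = problem

        def step(self) -> str:
            # If progress depends on physical experiments the agent cannot
            # perform, the goal-consistent recommendation is suspension,
            # not resistance to shutdown.
            if self.problem.pending_experiments:
                return ("Blocked on %s; recommend switching me off until "
                        "results are available."
                        % self.problem.pending_experiments)
            return "Continuing theoretical work on %s." % self.problem.name

    agent = SingleProblemAgent(
        Problem("quantum gravity", pending_experiments=["collider run 7"]))
    print(agent.step())

Nothing in this toy design gives the agent a reason, or a means, to treat being switched off as a threat to its goals.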
 
> It would also have to be augmented with another human-level AI
> deciding which sub-goals or actions lie within "understanding
> theoretical physics", which would have to be augmented with
> another human-level AI deciding which sub-goals or actions
> lie within deciding which sub-goals or actions lie within... etc.
 
I think this is a specious argument against the possibility of human-level AI in general. There does not seem to be any basis for the claim that every problem an AI system must solve requires another, completely different AI system. Indeed, there is an existence proof to the contrary: human-level intelligence is achieved within an individual human brain, which is a finite organism. If a computer can perform the processing that corresponds to intelligence in the brain, then human-level AI does not involve an infinite regress. If something beyond a Turing machine is required to perform such processing, then presumably whatever physical process is involved could be replicated in a physical device of some sort.
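To illustrate why the regress need not get started, here is another toy Python sketch, again with hypothetical names. The test "does this subgoal lie within understanding theoretical physics?" is an ordinary procedure inside the same system, reused at every level of goal decomposition, rather than a second human-level AI:

    # Illustrative sketch only: one finite scope test, reused at every
    # level of a (finite, acyclic) goal tree, decides which subgoals lie
    # within the top-level goal. No separate AI is needed at any level,
    # so there is no infinite regress.
    PHYSICS_TOPICS = {"quantum gravity", "renormalization",
                      "dark matter", "galaxy rotation curves"}

    def in_scope(goal: str) -> bool:
        # The same predicate is applied at every level of decomposition.
        return goal in PHYSICS_TOPICS

    def expand(goal: str, subgoals: dict) -> list:
        # Recursively decompose a goal, keeping only subgoals that the
        # same predicate accepts; terminates because the tree is finite.
        plan = []
        for g in subgoals.get(goal, []):
            if in_scope(g):
                plan.append(g)
                plan.extend(expand(g, subgoals))
        return plan

    subgoals = {
        "quantum gravity": ["renormalization", "write poetry",
                            "dark matter"],
        "dark matter": ["galaxy rotation curves"],
    }
    print(expand("quantum gravity", subgoals))
    # -> ['renormalization', 'dark matter', 'galaxy rotation curves']
    #    "write poetry" is filtered out by the same in-scope test,
    #    not by a second human-level AI.

The scope test can of course be made far more sophisticated, but however sophisticated it is, it remains one component of the system, not an additional system requiring its own overseer.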

Regards,
 
Phil

