
Re: [ontolog-forum] Future Consequences of AI and Automation

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sun, 7 Sep 2014 14:23:23 -0400
Message-id: <SNT147-W6961049803336803791FDEC1C00@xxxxxxx>
Hello Michael,
 
On Sun, Sep 07, 2014, Michael Brunnbauer wrote:
>
> My opinion that human-level AI would probably not be controllable and servile
> came from a more conservative definition of human-level AI that involves
> being indistinguishable from humans. Your definition of human-level AI seems
> to be broader.
 
Thesis section 2.1.1 discusses issues related to the Turing Test, and gives my thoughts on how to define and recognize human-level AI. My perspective is that in seeking to achieve human-level AI, we need not seek to replicate erroneous human reasoning, nor in general to fool people into thinking the AI system is a human being. That is, human-level AI is not the same as human-identical AI.
 
To define and recognize human-level AI, I propose a 'design inspection' approach, rather than a behaviorist test. This would be an analysis of a system's design and operation that supports the claim that the system has abilities which demonstrate human-level intelligence.
 
Section 2.1.2 discusses characteristics of human-level intelligence which to date have not been fully achieved in AI systems. It proposes an initial list of abilities for human-level AI that includes natural language understanding, higher-level forms of learning and reasoning, imagination, and consciousness.
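To make the design-inspection idea a bit more concrete, here is a minimal sketch in Python (my own illustration, not from the thesis) of how such a checklist might be represented. The ability names follow section 2.1.2; the evidence fields and the pass criterion are hypothetical assumptions:

    # Hypothetical design-inspection checklist for human-level AI.
    # Ability names follow thesis section 2.1.2; the evidence entries
    # (design documents, observed operation) are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class AbilityEvidence:
        ability: str
        design_evidence: list = field(default_factory=list)
        operational_evidence: list = field(default_factory=list)

        def demonstrated(self) -> bool:
            # In this sketch, an ability counts as demonstrated only if
            # both the design and the observed operation support it.
            return bool(self.design_evidence) and bool(self.operational_evidence)

    CHECKLIST = [
        AbilityEvidence("natural language understanding"),
        AbilityEvidence("higher-level learning"),
        AbilityEvidence("higher-level reasoning"),
        AbilityEvidence("imagination"),
        AbilityEvidence("consciousness"),
    ]

    def inspection_passes(checklist) -> bool:
        # The inspection passes only if every listed ability is demonstrated.
        return all(item.demonstrated() for item in checklist)

The point of the sketch is simply that the judgment rests on inspecting design and operation against a stated list of abilities, rather than on whether the system can fool an observer.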
 
PJ:
> > More generally, the issue is about trust, rather than control of
> > human-level AI systems.
MB:
> Yes - we may have to trust them or limit their physical abilities - just
> like with a human being. I was supposing that there will be no practical
> third option of looking into or manipulating the mind. That is what I mean
> by being uncontrollable.
>
> But with a broader sense of "human-level AI", such a third option may become
> feasible - though I doubt that TalaMind traces of such an AI would be really
> accessible for humans in any way.
 
Since Tala is a conceptual language with a syntax based on English, a TalaMind system could provide traces of its reasoning about a problem, represented as English sentences. So, this information could be accessible to humans. It could also be accessible to other AI systems, for review and checking.
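As a rough illustration (the trace format here is my own assumption, not a specification from the thesis), such a trace might pair each English sentence with the steps it depends on, so that a human, or another AI system, could follow and check the chain:

    # Hypothetical TalaMind-style reasoning trace: each step records an
    # English sentence plus the identifiers of the steps it depends on.
    from dataclasses import dataclass

    @dataclass
    class TraceStep:
        step_id: int
        sentence: str          # English rendering of the reasoning step
        depends_on: tuple = ()

    trace = [
        TraceStep(1, "The bridge is rated for ten tons."),
        TraceStep(2, "The truck weighs twelve tons."),
        TraceStep(3, "Therefore the truck should not cross the bridge.",
                  depends_on=(1, 2)),
    ]

    def render(trace):
        # Print the trace in a form a human reviewer can read directly.
        for step in trace:
            deps = ""
            if step.depends_on:
                deps = f" [from steps {', '.join(map(str, step.depends_on))}]"
            print(f"{step.step_id}. {step.sentence}{deps}")

    render(trace)

Running render(trace) prints the numbered sentences with their dependencies, which is the sort of record a reviewer -- human or machine -- could audit.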
 
> > People seem to have little problem trusting (and liking) R2D2 and C3PO of Star Wars, and Data of Star Trek
>
> IMO, R2D2 and Data are good examples of AIs that are neither controllable nor
> servile - C3PO a bit less so :-)
>
> > and Robbie the Robot of Forbidden Planet.
>
> Unfortunately, I do not remember Forbidden Planet well enough.
 
A great movie, well worth seeing again!
 
> > It seems my arguments are not convincing you to abandon a belief
> > that human-level AI will necessarily be a "threat" to humanity.
>
> I do not see it as a threat. If it can be called human, it cannot be a threat
> to humanity.
 
OK, thanks for this clarification. Arguably, human-identical AI would be more of a threat to humanity than human-level AI -- since human-identical AI would have the same emotions, instincts for self-preservation, etc., that humans have.
 
Regards,
 
Phil
 
link to thesis information

 

