
Re: [ontolog-forum] P. C. Jackson's Thesis - Toward Human-Level AI

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 6 Sep 2014 16:39:59 -0400
Message-id: <SNT147-W779BD98211244DA9D01282C1C30@xxxxxxx>
Hello Michael,
On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
> On Sat, Sep 06, 2014 at 03:01:45PM -0400, Philip Jackson wrote:
> >
> > I say "probably", because the jury is out regarding whether Turing machines
> > are sufficient to achieve human-level AI.
> Such speculations along with philosophical discussions like the Chinese Room
> Argument and the hard problem of consciousness do not really contribute at this
> point. They may be interesting later when we have something that looks like
> human level AI or when we have more insight in the relation between
> consciousness and physics. I think most AI researchers have the stance that
> computability is sufficient.
I think you are probably right that most AI researchers have this stance. And if I were to make a small wager at this point, it would be in favor of this stance.
Even so, my thesis discusses the question because I don't want to prejudge it. From a theoretical standpoint, it is a question that remains open.
For similar theoretical reasons, the thesis discusses the Chinese Room Argument (section 4.2.4) and the Hard Problem of consciousness (section 4.2.7). The thesis contends the Chinese Room Argument is invalid. It considers the hard problem of consciousness to be open, yet contends the TalaMind approach does not depend on how it is answered. The thesis presents what may be a new answer to the hard problem, at the top of page 168.
> > One key issue is that however knowledge is represented, human-level AI involves
> > a "complex mess of unknown rules that have to be developed or evolved and the
> > interplay and evolution of the rules has to be guided." The challenge is not
> > made any simpler by trying to solve all the problems with formal logic.
> Yes - this is my argument turned around. So you basically make a pledge of
> using a framework that keeps all options open?
True, essentially. The TalaMind approach and framework keep almost all options open. Perhaps the only option it would not employ is trying to reverse engineer the human brain, though that approach could yield important insights (viz. section 7.8). TalaMind differs from other approaches chiefly in direction, recommending that research focus on developing systems according to the three TalaMind hypotheses. It also differs in proposing a design inspection alternative to the Turing Test, focused on the ‘higher-level mentalities’ of human intelligence, which include natural language understanding, higher-level forms of learning and reasoning, imagination, and consciousness.

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
