
Re: [ontolog-forum] A No-Go Result For Human-Level Machine Intelligence

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: John F Sowa <sowa@xxxxxxxxxxx>
Date: Tue, 06 Jan 2015 11:35:10 -0500
Message-id: <54AC0EBE.9000102@xxxxxxxxxxx>
Steven, Rich, and Melvin,    (01)

All these issues are intimately interconnected.    (02)

SEZ
> The physical ability to stand at a sink with dexterous arms and hands
> counts for something. Implanting the cognitive structure for dish
> washing, for example, into a bird or a beaver will not get your dishes
> washed.    (03)

By "built-in tools", I meant the systems for perception and action:
the bird's wings, beak and claws; the beaver's teeth & tail; the
human's limbs, vocal tract, etc.  They determine what skills are
physically possible.  But the cognitive structures for the skills
are not "implanted" -- a better term would be 'discovered'.    (04)

The enormous flexibility of animal brains + physical "toolset" +
environment + parental training + playful exploration by the young
+ what Minsky calls "the emotion engine" motivates the animals to
discover, develop, and perfect a wide range of skills necessary
"to make a living" in an often hostile world.    (05)

RC
> The learning part can be on a completely different computer, or
> a cloud of said servers and clients.    (06)

SEZ
> the separation of cognition and learning, and the robot is a dualism.    (07)

I'll avoid metaphysical issues about dualism.  The critical requirement
for designing intelligent systems is the integration of learning with
perception, action, and all forms of what is called cognition.    (08)

The algorithms called "machine learning" are applied to "dead data"
stored in a cloud or otherwise isolated from perception and action.
But the word 'learning' for those algorithms is a misnomer.  In the
1970s, similar methods were more properly called pattern recognition.    (09)
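To make the contrast concrete, here is a minimal sketch (not from the message; all names are illustrative) of that 1970s-style pattern recognition: a nearest-centroid classifier fitted once to a fixed batch of stored data, with no perception or action in the loop.

```python
# Batch "pattern recognition" on a fixed ("dead") dataset:
# compute one centroid per label, then classify by nearest centroid.
# Illustrative sketch only -- not code from the original message.

def train(samples):
    """Compute one centroid per label from a fixed dataset."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc]
            for lab, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))

data = [([0.0, 0.0], "a"), ([0.1, 0.2], "a"),
        ([1.0, 1.0], "b"), ([0.9, 1.1], "b")]
model = train(data)
print(classify(model, [0.05, 0.1]))  # prints "a"
```

Nothing here senses or acts; the "learning" is a one-shot statistical fit over stored records, which is the point of the misnomer argument above.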

Behaviorists used the word 'learning' for related experimental methods,
mostly conditioning experiments on rats -- which is why behaviorism was
derided as "rat psychology".  Behaviorism died in the 1960s, when
cognitive psychologists showed that even rats were more intelligent
than the behaviorists claimed.    (010)

See slide 49 of http://www.jfsowa.com/talks/micai.pdf for the three-way
distinction of so-called "deep learning" by artificial neural networks,
the more complex "active learning", and the more realistic "cognitive
learning".  Slides 41 to 51 describe the cognitive cycle.    (011)

MC
> I enjoyed this presentation by jeff hawkins :
> https://www.youtube.com/watch?v=cz-3WDdqbj0
> "Brains, Data, and Machine Intelligence"    (012)

Jeff H. talks fast and makes some very strong claims for the methods
he advocates.  But those methods are specialized for processing
dead data, and his claims are based on oversimplified assumptions
about brains, neurons, and what so-called "neural nets" can do:    (013)

  * He limits his theories to the neocortex (the largest part of the
    cerebral cortex).  He correctly states that the neural columns of
    the cortex are relatively homogeneous from one area to another.
    But he ignores many very important and very complex issues.    (014)

  * First issue:  Different regions of the cortex are specialized for
    purposes determined by their connections to sensory inputs, motor
    outputs, and other parts of the brain, including other parts of
    the cortex itself.  Those connections (AKA connectome) perform
    highly specialized and still poorly understood processes.    (015)

  * Second:  Many neuroscientists believe that the dynamic *circuits*
    that connect various regions are more important for the processes
    than the particular regions of the cortex.  A patient whose brain
    lesion affects *only* the cortex can recover much or all of the lost
    function fairly soon.  But lesions that go deeper (e.g. into the
    basal ganglia) cause permanent damage.    (016)

  * Third:  Jeff H. assumes that the simple switches of the artificial
    neural networks (passing or inhibiting ones and zeros) are adequate
    models of what neurons do.  But neuroscientists are discovering much
    greater complexity in each neuron.  By some estimates, each cell of
    an animal body (including each neuron) can store the equivalent of
    a billion bits (10^9) and do some rather complex processing.    (017)
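The "simple switch" model criticized in the third point fits in a few lines, which is itself part of the critique. Here is a hedged sketch (illustrative weights and thresholds, not anything from the message) of such a McCulloch-Pitts style threshold unit:

```python
# The "simple switch" neuron model: a threshold unit that passes or
# inhibits ones and zeros.  Weights and thresholds are illustrative.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of 0/1 inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Configured as an AND gate: fires only when both inputs are 1.
print(threshold_unit([1, 1], [1.0, 1.0], 2.0))  # prints 1
print(threshold_unit([1, 0], [1.0, 1.0], 2.0))  # prints 0
```

The contrast with a biological cell that may store on the order of 10^9 bits and do complex internal processing is the gap the bullet points at.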

A book by an eminent neuroscientist who debunks oversimplified models:
Lieberman, Philip (2013) The Unpredictable Species: What Makes Humans
Unique, Princeton: Princeton University Press.    (018)

For an overview of issues, see http://www.jfsowa.com/talks/goal2.pdf    (019)

MC
> I would be willing to wager we will have ant like intelligence on
> the device by 2020.    (020)

The structures of insect brains are as complex and poorly understood
as mammalian brains.  Bees and ants are closely related insects (both
are Hymenoptera), but
bees can detect, remember, and communicate information about the
distance, direction, and relative importance of a food source.  Then
other bees can interpret that information to find the same source.
And they do that *without* a neocortex.    (021)
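The information content of that bee-to-bee communication can be sketched as a tiny message protocol. This is a toy illustration of what is conveyed, not a model of how bees encode it; every name and number below is an assumption for the example:

```python
# Toy encoding of the information a bee's dance conveys about a food
# source: distance, direction, and relative importance.  Purely
# illustrative -- not a model from the original message.

from dataclasses import dataclass

@dataclass
class DanceMessage:
    distance_m: float    # distance to the food source
    bearing_deg: float   # direction, normalized to [0, 360)
    quality: float       # relative importance, clamped to [0, 1]

def encode(distance_m, bearing_deg, quality):
    """Sender side: normalize the raw observation into a message."""
    return DanceMessage(distance_m,
                        bearing_deg % 360.0,
                        max(0.0, min(1.0, quality)))

def interpret(msg):
    """Receiver side: read the message back as a flight plan."""
    return (msg.distance_m, msg.bearing_deg)

msg = encode(250.0, 410.0, 1.3)
print(interpret(msg))  # prints (250.0, 50.0)
```

The point of the passage stands independently of the sketch: this distance-direction-importance channel works without any neocortex.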

Specialists on insect brains claim that they are highly efficient,
highly optimized "precision instruments" compared to mammalian brains.
One explanation may be that their short life spans enabled an order
of magnitude more generations for "fine tuning" by evolution.    (022)

John    (023)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (024)
