
Re: [ontolog-forum] Deep Learning Godfather says machines learn like toddlers

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ravi Sharma <drravisharma@xxxxxxxxx>
Date: Sat, 6 Jun 2015 15:22:07 -0700
Message-id: <CAAN3-5cFx8m_74Y30vGJ+LCQaWtbYZvCFbgpTgCNN_PsxaRuQA@xxxxxxxxxxxxxx>
John

I like what you discuss immensely, and I agree that computers, ANNs, and AI are very far from human intelligence, although they can process stored data fast.

One admirable example of deep-dive search, which I saw while working as an IV&V consultant at DOE, is WorldWideScience.org (http://worldwidescience.org/), built through the efforts of Dr. Walter Warnick, Director of OSTI, and his colleagues at the Office of Scientific and Technical Information, US Department of Energy. It provides access to the online databases and libraries of research papers in more than 80 countries. It uses rich scientific terms and vocabularies, and for some nuclear physics terms it returned very accurate and narrowly focused responses, which is also what the Wolfram engine aimed for.

Regards

On Sat, Jun 6, 2015 at 11:51 AM, John F Sowa <sowa@xxxxxxxxxxx> wrote:
I received an offline note from a colleague who is more sympathetic
to the claims about the so-called "deep learning" nets than I am.

Artificial neural networks (ANNs) have proved to be very useful for
pattern recognition.  That's an important component of learning.
But it's very far from supporting the complex learning and reasoning
for language, planning, problem solving, science, business, etc.
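
As a minimal illustration of what "pattern recognition" means here,
the toy Python sketch below (invented for this note, not taken from
any of Hinton's systems) trains a single artificial neuron to separate
two classes of points:

    import numpy as np

    # Toy data: two clusters of 2-D points, labeled 0 and 1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.3, (20, 2)),   # class 0
                   rng.normal(+1.0, 0.3, (20, 2))])  # class 1
    y = np.array([0] * 20 + [1] * 20)

    # One artificial neuron: weighted sum, threshold, error update.
    w, b = np.zeros(2), 0.0
    for _ in range(20):                        # classic perceptron rule
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += (yi - pred) * xi              # nudge weights toward label
            b += (yi - pred)

    acc = np.mean([int(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
    print(f"training accuracy: {acc:.2f}")     # ~1.00 on this toy set

Everything the network "knows" after training is the three numbers in
w and b.  Nothing in them represents a goal, a plan, or a proposition;
that is the gap between pattern recognition and reasoning.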

This thread was triggered by a CBC interview with Geoffrey Hinton:
http://www.cbc.ca/radio/thecurrent/the-current-for-may-5-2015-1.3061292/deep-learning-godfather-says-machines-learn-like-toddlers-1.3061318

Claim by GH:
> Anything you can do, neural networks will also eventually be able to do.
> All industries will be affected by it.

I agree with the second sentence. But there's no evidence for the first.
ANNs are good at pattern recognition, but the adjective 'deep' in front
of NN is pure hype.  As hype, it was successful in getting big bucks
from Google.  That's the best (and worst) you can say for it.

John

-------- Forwarded Message --------
Subject: Re: Deep Learning Godfather says machines learn like toddlers
Date: Sat, 06 Jun 2015 13:52:02 -0400
From: John F Sowa

Those are good questions.  I'll present an invited talk in September
on "The Cognitive Cycle".  Your questions are helpful for suggesting
some of the issues I should address.  I'm preparing new slides,
among which I'll include some updates to slides 41 to 57 of
http://www.jfsowa.com/talks/micai.pdf

> as interviews go I thought GH was not bad

I agree. But the following point is where he goes off the deep end:

> GH seems to believe that deep learning is the magic bullet that
> will jumpstart AI

For many years, psychologists have used a slogan that serves as a good
bullcrap detector:  "Beware the man with a one-factor theory."

When talking about computer hardware/software architecture, Fred Brooks
stated a similar principle:  "There is no silver bullet."  That holds
for computers made of silicon or meat.

> answer this
> how can something as fuzzy and mushy and analog as the brain invent
> and do something as crisp as math?

Short answer:  The brain supports powerful, but still poorly understood
tradeoffs and combinations of discrete symbols and continuous patterns.

Longer answer:

  1. Neurons are discrete, and each one has a large but discrete
     set of synapses.  And each neuron is capable of storing a large
     amount of information -- perhaps in the microtubules, which
     contain about a billion discrete tubulin pairs (each capable
     of switching between two states).  Those tubules are present
     in all cells, not just neurons.  For an overview, see
     http://en.wikipedia.org/wiki/Microtubule.  (A back-of-envelope
     estimate of that storage capacity follows this list.)

  2. The existence of microtubules has been known for years.
     Sherrington claimed that they are involved in learning even for
     a single-celled paramecium (which exhibits some rather complex
     behavior *without* neurons).  A paramecium can detect a barrier
     and go around it, find a mate and mate, and perform similar
     actions more quickly on repeated occurrences of similar stimuli.
     Since a single cell can retain information for some period of
     time and do some learning, each neuron must have that ability.

  3. In the cerebral cortex, neurons are organized in columns (mini-
     or microcolumns), which seem to be grouped in larger macrocolumns.
     For a good review of a half century of research, starting with
     Mountcastle (1955), see the article (Horton & Adams 2005) with
     the title "The cortical column: a structure without a function",
     http://rstb.royalsocietypublishing.org/content/360/1456/837

  4. Since those columns are present in all mammalian brains, it's
     highly unlikely that they would be preserved for over 100 million
     years without serving some important function(s).  Both the neural
     network and the symbolic proponents claim that they support their
     pet theories.  It's quite possible that they are critical for
     the tradeoffs between the discrete and continuous.  But anybody
     who claims to know exactly what they do is lying.
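
To make the figure in item 1 concrete, here is a back-of-envelope
sketch in Python.  The one-bit-per-tubulin-pair assumption is a crude
simplification for illustration only, not established neuroscience:

    # Assumption (illustrative only): each two-state tubulin pair
    # stores exactly one bit.
    pairs_per_neuron = 1e9                  # "about a billion" (item 1)
    bits_per_neuron = pairs_per_neuron * 1  # 1 bit per pair (assumed)
    megabytes = bits_per_neuron / 8 / 1e6
    print(f"~{megabytes:.0f} MB per neuron")   # prints ~125 MB

Even under that crude assumption, a single neuron could in principle
hold far more state than the single scalar weight per connection that
ANN models assume.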

> what is the equivalent of firing for an ensemble, and how is it
> processed by what follows? (eg I decide to reach for an object)

That is the killer question that destroys Hinton's claim that his
networks can do anything more than pattern recognition.  A single
neuron can trigger one or more motor neurons to make a muscle move.
That's the level of reflex action when you touch a hot stove.

But to do any kind of complex action, you need a structured pattern
of firings that coordinate multiple sensory inputs and motor responses
over a period of milliseconds to minutes.  Note the cycles by Boyd,
Ohlsson, and Albus, which I copied in my micai.pdf slides.

That requires *structure* -- some kind of pattern of interconnections
that relate multiple neurons (and/or columns) across wide areas of the
brain for some organized temporal sequence.  You may call those patterns
schemata, scripts, semantic networks, or a language of thought.  But
they go far beyond anything that Hinton & Co. talk about or implement.
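
To make "structured pattern" concrete, here is a toy Python sketch
(the step names and structure are invented for illustration; this is
not anyone's published model) of a script for the reaching example
above: an ordered structure that ties sensory checks to motor steps:

    # A toy "script": an ordered structure relating perception to action.
    reach_for_object = [
        ("object located in visual field", "orient head and eyes"),
        ("object within arm's reach",      "extend arm toward object"),
        ("hand near object",               "shape grip to object"),
        ("contact detected",               "close fingers and lift"),
    ]

    def run_script(script, senses):
        """Walk the steps in order; stop if a precondition fails."""
        for check, act in script:
            if not senses.get(check, False):
                print(f"abort: {check!r} not satisfied")
                return False
            print(f"do: {act}")
        return True

    # Simulated sensory state in which every precondition holds.
    run_script(reach_for_object, {c: True for c, _ in reach_for_object})

The code is trivial by design; the point is the shape of the data:
an interpretable, ordered structure relating perception to action
over time, which is what schemata and scripts provide and what a
trained weight matrix does not.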

Re toddlers:  Please look at the TED talks I cited in my last note
(copy below).  They show that babies, chimps, and elephants can
learn and perform complex coordinated actions.  The cognitive cycle,
when combined with *structured* patterns, can explain such things.
The so-called "deep learning" by ANNs cannot.

John
___________________________________________________________________

The first is a TED page about the way language affects thinking and
a talk about how babies learn and generalize:

http://ideas.ted.com/5-examples-of-how-the-languages-we-speak-can-affect-the-way-we-think/?utm_source=pocket&utm_campaign=fftutorial

The note by Jessica Gross discusses evidence from English vs Chinese,
Australian Aboriginal languages, Spanish, Japanese, Zuñi, Russian,
Hebrew, and Finnish.  Her note includes pointers to a TED talk and
some articles that go into more detail.

The TED talk by Laura Schulz includes short video clips about the
way babies generalize from examples of toys and how they quickly
reach for the toys to test their hypotheses:

http://www.ted.com/talks/laura_schulz_the_surprisingly_logical_minds_of_babies

She does not believe that computer systems with the learning ability
of young children will be developed within her lifetime or that of
anyone in the audience.  Given other research in AI and cognitive
science, I agree.  But I also believe that more can be done in AI.
That's the theme of http://www.jfsowa.com/talks/micai.pdf

I also heard an NPR interview with Frans de Waal, who has written
several very good books about chimpanzees, bonobos, and humans.
For the interview and a TED talk "Do animals have morals", see
http://www.npr.org/2014/08/15/338936897/do-animals-have-morals

--
Thanks.
Ravi
(Dr. Ravi Sharma)
313 204 1740 Mobile

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
