
[ontolog-forum] Warning about "cartoon models" of the human brain

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: John F Sowa <sowa@xxxxxxxxxxx>
Date: Fri, 24 Oct 2014 11:07:15 -0400
Message-id: <544A6B23.5040702@xxxxxxxxxxx>
The thread on "Data/digital object identities" has wandered around a
large number of loosely related topics.  Some of them involve issues
discussed in a recent interview published in IEEE Spectrum (ref below).    (01)

From the intro to that interview:
> Big-data boondoggles and brain-inspired chips are just two of the things
> we’re really getting wrong... The overeager adoption of big data is likely
> to result in catastrophes of analysis comparable to a national epidemic
> of collapsing bridges.  Hardware designers creating chips based on the
> human brain are engaged in a faith-based undertaking likely to prove a
> fool’s errand. Despite recent claims to the contrary, we are no further
> along with computer vision than we were with physics when Isaac Newton
> sat under his apple tree...
> [Those] opinions belong to IEEE Fellow Michael I. Jordan, one of the
> world’s most respected authorities on machine learning and an astute
> observer of the field.    (02)

By the way, Jordan's criticisms of Big Data "boondoggles" do not
imply that he's opposed to analyzing and using Big Data.  See
http://bayesian.org/sites/default/files/fm/bulletins/1106.pdf    (03)

URL of the interview and some excerpts below.  Note the concluding
passage, where he endorses the idea of getting inspiration from
research on the brain.  But inspiration does not imply equation.    (04)

_______________________________________________________________________    (05)

http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts/    (06)

Topics covered:    (07)

> Why We Should Stop Using Brain Metaphors When We Talk About Computing
> Our Foggy Vision About Machine Vision
> Why Big Data Could Be a Big Fail
> What He’d Do With US $1 Billion
> How Not to Talk About the Singularity
> What He Cares About More Than Whether P = NP
> What the Turing Test Really Means    (08)

Some excerpts:    (09)

On the topic of deep learning, it’s largely a rebranding of neural
networks, which go back to the 1980s. They actually go back to the
1960s; it seems like every 20 years there is a new wave that involves
them. In the current wave, the main success story is the convolutional
neural network, but that idea was already present in the previous wave.
And one of the problems with the previous wave, one that has unfortunately
persisted in the current wave, is that people continue to
infer that something involving neuroscience is behind it, and that deep
learning is taking advantage of an understanding of how the brain
processes information, learns, makes decisions, or copes with large
amounts of data. And that is just patently false...    (010)

I think it’s important to distinguish two areas where the word neural
is currently being used.    (011)

One of them is in deep learning. And there, each “neuron” is really
a cartoon. It’s a linear-weighted sum that’s passed through a
nonlinearity. Anyone in electrical engineering would recognize those
kinds of nonlinear systems. Calling that a neuron is clearly, at best,
a shorthand. It’s really a cartoon. There is a procedure called
logistic regression in statistics that dates from the 1950s, which
had nothing to do with neurons but which is exactly the same little
piece of architecture.    (012)
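
The equivalence Jordan describes can be made concrete.  A minimal sketch
(the function name and example numbers below are illustrative, not from
the interview): a deep-learning "neuron" with a sigmoid nonlinearity
computes exactly the prediction step of logistic regression.    (012a)

```python
import math

def neuron(inputs, weights, bias):
    # Linear-weighted sum of the inputs, plus a bias term.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Pass through a nonlinearity -- here the logistic (sigmoid)
    # function, which maps the sum into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# This is exactly the logistic-regression model from statistics:
#   P(y = 1 | x) = sigmoid(w . x + b)
# With zero weights and zero bias, the output is sigmoid(0) = 0.5.
print(neuron([0.5, -1.2], [0.8, 0.3], 0.1))
```

Nothing in the computation refers to neuroscience: it is a standard
nonlinear system that any electrical engineer would recognize, which is
Jordan's point about calling it a "neuron."    (012b)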

A second area involves what you were describing and is aiming to get
closer to a simulation of an actual brain, or at least to a simplified
model of actual neural circuitry, if I understand correctly. But the
problem I see is that the research is not coupled with any
understanding of what algorithmically this system might do. It’s not
coupled with a learning system that takes in data and solves problems,
like in vision. It’s really just a piece of architecture with the hope
that someday people will discover algorithms that are useful for it.
And there’s no clear reason that hope should be borne out. It is based,
I believe, on faith: that if you build something like the brain, it will
become clear what it can do...    (013)

Spectrum: If you could, would you declare a ban on using the biology
of the brain as a model in computation?    (014)

Michael Jordan: No. You should get inspiration from wherever you can
get it. As I alluded to before, back in the 1980s, it was actually
helpful to say, “Let’s move out of the sequential, von Neumann paradigm
and think more about highly parallel systems.” But in this current era,
where it’s clear that the detailed processing the brain is doing is not
informing algorithmic process, I think it’s inappropriate to use the
brain to make claims about what we’ve achieved. We don’t know how the
brain processes visual information.    (015)

Other publications: http://www.cs.berkeley.edu/~jordan/publications.html    (016)

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (017)
