
Re: [ontolog-forum] Neuro-ontology, Onto-neurology, and the Semantic Web

To: "John Black" <JohnBlack@xxxxxxxxxxx>
Cc: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Pat Hayes <phayes@xxxxxxx>
Date: Wed, 8 Aug 2007 13:55:41 -0500
Message-id: <p06230903c2dfa24e123b@[10.100.0.67]>
>Pat Hayes wrote:
>>  >Pat Hayes wrote:
>>>>  >Does it really involve breakdown and analysis?
>>>>>
>>>>>I can program a mainframe to view a rubber ball
>>>>>and calculate a space time coordinate where a
>>>>>robotic arm can intercept the ball and catch it
>>>>>after one bounce. I would have to feed in
>>>>>numbers for constant effect of gravity, mass of
>>>>>ball, air resistance, durometer reading of ball,
>>>>>initial trajectory and velocity etc.
>>>>>
>>>>>How is it a dog can catch the ball but does not
>>>>>process any of this? I think there is something
>>>>>more than simple calculations.
>>>>
>>>>  There is a very simple algorithm for catching a
>>>>  ball, which was worked out by the psychologist
>>>>  Gibson. Assume the ball is in the air coming
>>>>  roughly towards you (if not, give up). Look at
>>>>  the ball and keep it in the center of your
>>>>  vision. If the ball is moving left, run left; if
>>>>  right, run right; if upwards, run back; if
>>>>  downwards, run forward. Try to keep the ball
>>>>  stationary in your field of view. When it gets
>>>>  close enough, catch it. You don't need to do all
>>>>  the simulation with numbers. Neither does the
>>>>  robot.
>>>>
>>>>  Now, this is a very basic algorithm, and dogs,
>>>>  people and robots can all do better than this;
>>>>  but this is the basic technique.
>>>
>>>
>>>Jeff Hawkins, in his book "On Intelligence", the book Frank mentioned
>>
>>I wonder why this potboiler
>
>Oh come on, do you really think Jeff Hawkins needs book royalty 
>money? Surely not.    (01)

I chose the wrong word there. "Tract" might have been better.    (02)

>>gets so much publicity. This is really an extremely poor book. I 
>>was sent it to review, and turned the offer down on the grounds 
>>that it was impossible to review without slandering the author.
>
>"slandering", is maliciously making *FALSE* statements about 
>someone. Was that your intent?    (03)

No. Perhaps slanderous-sounding. In brief, the book left me with a
bad taste in my mouth. It is anti-intellectual, self-serving, shallow 
and pompous. It is more about the author than his ideas, and it loses 
no opportunity to compare himself with a list of Great Thinkers. I 
don't know Jeff Hawkins, and those who do tell me the impression is 
mistaken, but the book gives off a truly offensive air of having been 
written by a man with a huge opinion of himself based largely, like 
all his other opinions, on ignorance. (BTW, this may well be caused 
by its having been written not by him, but by a ghostwriter who has 
been dazzled by him in some way. In which case the only blame that it 
casts on Hawkins is his poor judgement in allowing it to be published
without rigorous editing.)    (04)

>>It displays ignorance and arrogance on almost every page in roughly 
>>equal proportions. There are no new ideas in it. The central 
>>"insight", that the cortex is basically performing the same 
>>computation everywhere, is a suggestion which certainly goes back 
>>at least to Valentino Braitenberg in the 1960s, and probably before 
>>that.
>
>So what? FOL is even older. Is there something wrong with working 
>with existing ideas like that? I don't think so.    (05)

Not as long as you don't claim them as your own "epiphany", after 
first using several pages explaining to the reader what "epiphany" 
means, using as examples Einstein and Aristotle. Hawkins, by obvious 
implication, belongs in this exalted company for having HIS 
brilliant idea (when those AI bumpkins at MIT wouldn't even let him 
into their graduate program, as he reminds us at least four times). 
But in any case, this idea isn't in the same league as FOL. It's the 
germ of an idea, a kind of idea-spark. It immediately suggests a host
of next questions: like, HOW can this single cortical process give 
rise to such a wide variety of functions? Hawkins doesn't tackle that 
(or any of the other obvious next questions.) Braitenberg ran an
entire laboratory for years investigating the consequences of this 
idea.    (06)

>>The only part
>>that one should read carefully is the repeated observation, by the 
>>author, that he knows nothing at all about AI or neuroscience.
>>
>>>, uses
>>>the dog catching a ball example.
>>
>>Its been familiar to AI and psychology (particularly Gibsonian 
>>psychologists) for 40 years. This idea that computers must be doing 
>>numerical-style simulations in order to act in the world is a trope 
>>that only someone completely ignorant of actual AI work could 
>>possibly take seriously. Check out the ideas of an 'affordance' and 
>>a 'heuristic'.
>
>But I think he is also objecting to ridiculously over-simplified, but 
>pompously stated and stupidly named 'basic algorithms' such as the
>one you laid out here for catching a ball. I feel certain that no 
>dog, human, or robot has ever or will ever catch anything with such 
>trivial nonsense.    (07)

I'm afraid then that you are about as ignorant of the underlying 
science (and the current state of technology) as Hawkins seems to be. 
In fact, this is the basic way to catch a ball. Children, and robots 
that play ping-pong, both use it. Really skilled catchers, as in
baseball, have a host of more complex heuristics and methods, but 
they still use this basic technique when running for a fly ball. What 
it boils down to is that if you can keep the image of the ball 
stationary in your field of view, then you(r head) and the ball will 
eventually collide, which you can check by simple geometry. This is 
by the way a textbook illustration of a basic point, which is that 
complex behaviors - often very effective complex behaviors - can 
emerge from simple algorithms interacting with a dynamic world. Now, 
that *really is* an important insight (not mine, I hasten to add).    (08)
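
To make that concrete, here is a toy one-dimensional rendering of 
the heuristic in Python (my own sketch, not anything from Gibson's 
papers; the numbers are invented). The fielder never simulates a 
trajectory; it only cancels the ball's apparent drift:

    def gaze_heuristic(ball_positions, fielder_x=20.0, gain=0.5):
        """Chase a sequence of observed ball positions by cancelling drift."""
        for ball_x in ball_positions:
            drift = ball_x - fielder_x   # where the image has drifted to
            fielder_x += gain * drift    # run so as to cancel that drift
        return abs(ball_positions[-1] - fielder_x)

    # Ball drifting steadily to the right; fielder starts 20 units away.
    flight = [0.3 * t for t in range(100)]
    print(gaze_heuristic(flight))  # -> a small residual: fielder meets ball

Given a richer drift signal (left/right, up/back, down/forward), the 
same loop is the algorithm above; all the interesting work is done by 
the interaction with a dynamic world, not by simulation.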

(Why is it, I wonder, that people feel that they are competent to 
dish out opinions about how to do AI and psychology without knowing 
the first thing about it? If someone were to take this attitude to 
chemistry or physics we would treat them as insane. I wouldn't dare
to try to tell you how to do XML message passing.)    (09)

>And it was just this kind of exaggerated, hyperbolic claim about how 
>easy it would be for AI to solve such problems    (010)

I did not claim it was *easy* to catch a ball. It is however a lot 
easier if you understand the basic ideas of how to write algorithms 
which interact opportunistically with a dynamic world, than if you 
try to simulate that world numerically. (Here's one skill not 
explained by the above. Baseball fielders will run in the right 
general direction even when all they can see is the swing of the bat.)    (011)

>  that led first to extreme enthusiasm and later bitter 
>disappointment with the whole AI enterprise.    (012)

The "AI Winter" of the 1980s was indeed a reaction to hyperbole, 
mostly by people in funding agencies, but those days are long past. 
And it was a funding crisis, not a real crisis. Expert systems and 
robotics and learning algorithms are now mainstream stuff. AI has 
been a source of industrial technologies for about 15 years now, and 
continues to be.    (013)

>>>  He offers another example as well. I will
>>>adapt it to this conversation:
>>>
>>>You might claim that a mainframe computer could calculate the new position
>>>of each of four individuals sitting on a waterbed after a fifth one climbs
>>>on board. You would have to feed it numbers about the weight of each of the
>>>five participants, the force of gravity, the stretchiness of the plastic,
>>>etc.
>>
>>That is one approach. Or, you might try using AI techniques. 
>>Qualitative physical reasoning for example could tell you a lot 
>>without using a single number.
>>
>>>Or you could come along and say there is a very basic algorithm for
>>>adjusting the position of people on a waterbed when an additional person
>>>climbs on. As the new weight depresses the plastic in the location of the
>>>fifth person, simultaneously move water out from under the new person and
>>>put it under the four that are already there, etc.
>>>
>>>Hawkins' point, as I understand it, is that neither calculation nor
>>>algorithms are needed to explain how a waterbed adjusts to changing
>>>conditions.
>>
>>This really is kind of stupidly obvious.
>
>And yet people still stupidly claim that when biological organisms 
>with brains solve difficult problems they must be doing some sort of 
>computation or following some 'basic algorithm'. So maybe it's not so 
>obvious and deserves repeating.    (014)

You are repeating a mistake which even Hawkins did not make. The 
water bed adjusts because it is what it is, because of its physical 
form and make-up. No computation required. An *explanation* of how it 
does what it does can be approached in various ways. Qualitative 
reasoning can account for observations such as that the people 
already on it move higher when someone else sits down (and AI 
programs can, and do, use QR to make predictions like this reliably, 
even for much more complex systems.) If you want to know exactly how 
much higher then you will likely have to use some numbers.    (015)
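
For anyone who hasn't met QR: here is a toy sign-algebra fragment, 
with rules I have invented for the waterbed case (only loosely in 
the spirit of the QR literature), which gets the prediction without 
a single number:

    # Qualitative values: -1 falling, 0 steady, +1 rising.
    def waterbed_update(added_load):
        """Given the sign of a load change, predict the signs of the effects."""
        water_under_newcomer = -added_load   # water is displaced away
        height_of_others = added_load        # displaced water lifts the rest
        return {"water under newcomer": water_under_newcomer,
                "height of people already aboard": height_of_others}

    print(waterbed_update(+1))
    # -> {'water under newcomer': -1, 'height of people already aboard': 1}

The conclusion that the others move *up* falls out of the signs 
alone; numbers enter only when you ask how much.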

So, when biological systems with brains solve problems, what exactly 
is the hypothesis? That they work like water beds? Surely not. That 
they think about water beds by having an internal 'mental water bed' 
and trying it out? Well, maybe. But how does one get a water bed into 
a cortex? Presumably what is in the cortex isn't a real bed or real 
water (injecting water into a cortex tends to damage it) but some 
kind of... mental model of a water bed. Hm... modelled by the state 
of the cortical neurons, presumably. Well, what KIND of mental model? 
AI and psychology have really tried out a lot of these, and they all 
have their adherents. One idea is that whatever a mental model is, 
it's functionally similar to something that can figure out 
consequences of hypotheses which the cortex makes to itself (let's 
see, IF we had five instead of four people on the bed, THEN what 
would happen?...). If you try making an imitation of this on a 
computer, you quickly find yourself writing sets of axioms. Or, you 
might agree with Johnson-Laird that mental models are more like, 
well, models than sets of sentences. Or with Kosslyn and many others 
that they are more like pictures or images. Or with Lakoff and others 
that, while symbolic, they are essentially metaphorical in nature 
rather than deductive. Or with Leake that they are essentially 
digests of memories of previous experiences, stored for later re-use 
when matching circumstances arise. But all of these require some kind 
of information to be stored and manipulated in the cortex. And 
this is computation, in an admittedly broad sense (not using a Von 
Neumann architecture.) A lot has been written on neurologically 
plausible architectures, and some of it has been verified by findings 
in actual biological systems (e.g. phase encodings in bat brains allow 
them to hear frequencies higher than neural response times seem to 
allow.) This hypothesis is not 'stupid'. It is in fact the only 
viable hypothesis that we have for how the brain operates.    (016)
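
To put a little flesh on the "sets of axioms" remark above: a toy 
forward-chainer over two waterbed rules I have made up for the 
occasion. Asserting a hypothesis and chaining through the rules is 
one (very old) computational reading of 'trying out' a mental model:

    rules = [({"extra person on bed"}, "water displaced sideways"),
             ({"water displaced sideways"}, "others on bed rise")]

    def consequences(facts):
        """Forward-chain until no rule adds anything new."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(consequences({"extra person on bed"}))
    # -> the hypothesis yields 'others on bed rise' among its consequences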

>>>But further, that they are not necessary to explain how dogs or
>>>people learn how to catch balls.
>>
>>Where does he claim that?
>
>How about on page 68, when he says, "The answer is the brain doesn't 
>"compute" the answers to problems; it retrieves the answers from 
>memory. ....The entire cortex is a memory system. It isn't a 
>computer at all.    (017)

He is using "compute" too narrowly here. Of course the brain isn't a 
Von Neumann computer. Nobody in AI thinks it is. And when he talks of 
retrieval of answers from memory, he is in fact not talking about a 
simple retrieval, but something much more like an opportunistic 
process of matching. Which is a computation, whether Hawkins realizes 
that or not. (And one which has been extensively studied under the 
heading "case-based learning", an entire field about which Hawkins 
and you seem to be completely ignorant, but which is being used for 
example in Web 2.0 applications as we speak.) And he doesn't say how 
this retrieval or matching can be used to solve new problems: his 
running example is a simple sub-cognitive task (opening the door of 
your home) which you perform every day in the same way. Fine, that 
feels memory-like. Now figure out how to play chess, or learn to 
balance three plates on your arm (which I did a year or so ago) or 
how to hold a half-carved sculpture still while you carve its back 
(use straps, the eventual solution). None of these felt very much 
like memory retrieval.    (018)
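
Since "retrieval" is doing so much work in that quotation, here is 
roughly what the case-based literature means by it, as a toy Python 
sketch (the features and cases are invented for illustration). The 
point is that retrieval-by-matching is itself a computation:

    import math

    def retrieve(case_library, situation):
        """Return the remembered solution whose situation best matches."""
        best = min(case_library,
                   key=lambda case: math.dist(case[0], situation))
        return best[1]

    # (situation features, remembered solution) pairs
    cases = [((1.0, 0.0), "turn knob clockwise"),
             ((0.0, 1.0), "push door outward"),
             ((1.0, 1.0), "lift latch, then push")]
    print(retrieve(cases, (0.9, 0.8)))   # -> 'lift latch, then push'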

>....Let me show, through an example, the difference between 
>computing a solution to a problem and using memory to solve the same 
>problem.  Consider the task of catching a ball. Someone throws a 
>ball to you, you see it traveling toward you, and  in less than a 
>second you snatch it out of the air. This doesn't seem too difficult 
>- until you try to program a robot arm to do the same. As many a 
>graduate student has found out the hard way, it seems nearly 
>impossible    (019)

It is harder than it naively seems, because we have virtually 
no ability to introspect our own motor system. Again, a handy 
observation for a neuroscience-101 course intro., but hardly a 
world-shaking insight. The 100-step rule, by the way, was first 
enunciated, AFAIK, by Jerome Feldman in his early (1970s) papers on 
"local connectionist" models. Indeed, it is a hard road to hoe. A lot 
of people have taken it very seriously and have discovered a lot 
about what can be done within it. You could probably read it all in 
about a year of evenings.    (020)

>......And although a computer might be programmed to successfully 
>solve this problem, the one hundred step rule tells us that a brain 
>solves it in a different way. It uses memory......The memory of how 
>to catch a ball was not programmed into your brain; it was learned 
>over years of repetitive practice, and it is solved, not calculated, 
>in your neurons."
>
>>>And he goes even further, he claims that
>>>this ability is due to the way brains, the cortex in particular, are
>>>constructed and operate, as it is with the waterbed.
>>
>>Taken literally, that is clearly false. The brain doesn't have an 
>>internal *physical* model of all the things that brains can think 
>>about. No account like this can possibly account for the generality 
>>or the plasticity of neural functioning. But in any case, I don't 
>>think that he does claim this, in fact.
>
>He makes no such claim, so stop indulging yourself, you're 
>demolishing only your own strawman. The idea is that the structure 
>and function of the cortex, combined with worldly experiences, 
>transforms the cortex in ways that increase skills.    (021)

That is simply not an idea. At one level it is just an observational 
fact. That is indeed what the function of the cortex seems to be, and 
presumably it does it in part by virtue of its structure. That much 
was known in about 1900. Now, let us try to get started on some 
actual science. HOW does it do this? This is the question that was 
asked by every neuroscientist and psychologist from Ramon y Cajal to 
now (See http://faculty.washington.edu/chudler/hist.html for a 
history overview). Nobody really knows. (Hawkins says nothing at all 
about this, or about actual neuroscience, anywhere in his entire 
book.) There are so many partial answers and hypotheses and pieces of 
information which might be relevant that one gets lost trying to keep 
up with it all. One very general, broad idea is that it performs some 
kind of information processing, where information is somehow encoded 
in the connectivity state of the neurons (and maybe the glial cells 
as well). There is no clear evidence against this idea, and lots of 
evidence to support it. In fact it is hard to see how else the cortex 
could possibly work. And, to repeat, information processing is 
computation.    (022)
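
For a taste of what "encoded in the connectivity state" can mean 
computationally, here is the oldest such proposal, Hebb's 1949 rule, 
as a toy update (all the numbers are invented):

    def hebbian_step(weights, pre, post, rate=0.1):
        """Strengthen each connection in proportion to correlated activity."""
        return [w + rate * x * post for w, x in zip(weights, pre)]

    w = [0.0, 0.0, 0.0]
    for _ in range(10):           # repeated co-activation of inputs 1 and 2
        w = hebbian_step(w, pre=[1.0, 0.5, 0.0], post=1.0)
    print(w)   # -> approx [1.0, 0.5, 0.0]: co-active inputs are wired in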

>Skill and knowledge are the inevitable consequences of the cortex 
>encountering experiences. They are learned, not programmed.    (023)

Learning takes place as a result of appropriate symbol processing, 
AKA computation. Quite a lot is known about how this is possible and 
what kinds of processing seem to be needed. Skill and knowledge are 
by no means 'inevitable': for example, people with Down's syndrome 
seem to have functioning cortexes. Rats have cortexes but they are 
lousy at carpentry. Many people with normal cortexes are not 
especially good at acquiring skills. Mere possession of a cortex 
doesn't seem to be enough: there is overwhelming evidence for example 
that non-cortical structures are fundamentally involved in perception 
and learning.    (024)

>Skills are effective memories, not algorithms.    (025)

This is a false contrast. Taken literally, the skill = memory claim 
is obviously false. I have many skills which are not what a layman 
would call memories (walking, for example). So what you and Hawkins 
must mean is that skills are made possible by some kind of 
information in my cortex which was laid down during earlier 
interactions with the world. Right. Almost everyone in AI will agree. 
Now, many questions appear. What selects which information to lay 
down in this way? How is it stored and represented? How is the 
relevant part of it accessed when needed? And so on. Hawkins does not 
even seem to be aware that questions like this need to be answered. 
AI, neuroscience and cognitive psychology have been tackling these 
questions since their inception. It is a little late to be announcing 
an entirely new project to *begin* to investigate all this stuff when 
there is a century or so of established science to work from. 
Particularly if one seems to be completely ignorant of it all.    (026)

>  Knowledge is accurate prediction based on memories    (027)

How is that done? On the face of it, having even a flawless memory of 
the past would not seem, by itself, to provide any ability to predict 
the future, except perhaps in a rigidly controlled environment. (Or, 
put another way, what distinguishes an 'effective' memory from a mere 
memory?) And indeed, it is quite easy to see that in order to predict 
from memory, one has to perform some kind of abstraction on the 
memories, to isolate the predictive part from the accidental dross: 
to perform, in fact, some kind of induction. Now, how does the cortex 
do this? Any ideas? (No sarcasm intended: we do need some good new 
ideas here. We have lots of ideas, but none of them seem to be fully 
adequate. Many of them, by the way, use neurologically plausible 
architectures.)    (028)
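
A trivial illustration of the abstraction point, with invented data: 
induce a regularity from remembered episodes (here a least-squares 
line, computed by hand), and it is the induced rule, not the raw 
memories, that predicts a never-seen case:

    memories = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (cue, outcome)

    n = len(memories)
    mean_x = sum(x for x, _ in memories) / n
    mean_y = sum(y for _, y in memories) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in memories)
             / sum((x - mean_x) ** 2 for x, _ in memories))
    intercept = mean_y - slope * mean_x

    # A cue never experienced: a bare memory store draws a blank, but
    # the induced rule still makes a prediction.
    print(intercept + slope * 6.0)   # -> about 11.8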

>, not a set of axioms. Memories happen to brains exposed to 
>experiences the way deformations happen to a waterbed when sat on    (029)

That is most definitely NOT how brains work. Brains are not plastic 
bags full of water.    (030)

>. And Hawkins accounts for plasticity as well.
>
>>>  But he doesn't stop
>>>there, he also claims that progress in getting machines to do similar
>>>tasks has been severely hampered by the erroneous belief that algorithms and
>>>calculation could somehow reproduce the functionality of the cortex.
>>
>>About which he knows virtually nothing. And, by the way, he does 
>>not claim that the cortex isn't performing computations at all: in 
>>fact, he gives a sketch of the IT process he thinks it is 
>>performing. His idea here is about 50 years old also, and has been 
>>investigated thoroughly, and is known to be incomplete or wrong.
>
>I'm not convinced you really know what his ideas are.    (031)

I'm going only by his book. People tell me his talks are more 
informative. If you can expound his ideas better than he can, please 
go ahead. I'd love to hear a new idea relevant to AI, other than 
something which merely tells me that AI is full of crap because MIT 
wasn't interested in neural modelling (why didn't he apply to grad 
school somewhere that was? It's not hard to find out.)    (032)

>I'm not sure you were/are capable of giving them a fair 
>consideration. I get a sense that you may have an ax to grind. But 
>if not, then how is his "old" idea incomplete    (033)

Well, it is certainly incomplete, almost laughably so. The most 
glaring incompleteness is that he nowhere even seems to consider how 
these magical "memories" get into the cortex in the first place.    (034)

>or wrong. And what is wrong with old ideas?    (035)

That depends on whether they turned out to be useful or not. But what 
gets under my skin (and I will admit makes me somewhat irrationally 
irritable, which is why I declined to review the book) is listening 
to what amounts to a litany of unscientific abuse of my field made by 
someone who never studied it and doesn't know anything about it, and 
who claims to have had earth-shattering new ideas which are in fact 
old, and to be able to see new solutions which when examined turn out 
to simply be re-statements of old problems.    (036)

>>>I think this is relevant to ontology and logic both when it comes to the
>>>ability to choose and interpret symbols to use to identify the things about
>>>which the ontology and logic are about.
>>
>>I don't think this book is relevant to anything except the size of 
>>its author's ego.
>
>This is a classic angry Hayesism, and thus easy to dismiss, but I 
>still disagree.    (037)

Why should I be even slightly concerned with your opinions of a 
technical field about which you apparently know nothing? (The remark 
applies to both AI and Hayesisms, by the way.)    (038)

Pat    (039)

PS. OK, y'all know where I stand on this. No more from me about 
Hawkins or the theoretical foundations of AI. It's not relevant to 
this forum anyway.
-- 
---------------------------------------------------------------------
IHMC            (850)434 8903 or (650)494 3973   home
40 South Alcaniz St.    (850)202 4416   office
Pensacola                       (850)202 4440   fax
FL 32502                        (850)291 0667    cell
phayesAT-SIGNihmc.us       http://www.ihmc.us/users/phayes    (040)


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (041)
