On Tuesday 07 August 2007 06:34, John Black wrote:
> ...
>
> Hawkins' point, as I understand it, is that neither calculation nor
> algorithms are needed to explain how a waterbed adjusts to changing
> conditions. But further, that they are not necessary to explain how
> dogs or people learn how to catch balls. And he goes even further, he
> claims that this ability is due to the way brains, the cortex in
> particular, are constructed and operate, as it is with the waterbed.
> But he doesn't stop there, he also claims that progress in getting
> machines to do similar tasks has been severely hampered by the
> erroneous belief that algorithms and calculation could somehow
> reproduce the functionality of the cortex. (01)
These are examples of analog computation. I consider what analog
computers do to be calculation. The algorithms are encoded in the
nature of the neurons and synapses and in their interconnection
topology. (02)
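To make that concrete (a toy sketch of my own, not from the thread): a damped spring-mass system "computes" its equilibrium under a load purely through its dynamics, the way a waterbed settles under a body. Nothing in the code symbolically solves for the answer; the answer falls out of the physics. The constants and the `settle` helper are illustrative choices, not anything from Hawkins.

```python
# Toy sketch: a damped spring-mass system settling under a constant load.
# The "algorithm" is encoded in the system's physical parameters (k, c),
# much as Schulz suggests it is encoded in neurons and their topology.
def settle(load, k=2.0, c=0.8, dt=0.01, steps=5000):
    """Integrate x'' = -k*x - c*x' + load (unit mass) with semi-implicit Euler."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = -k * x - c * v + load   # spring force, damping, external load
        v += a * dt
        x += v * dt
    return x

# The settled displacement approximates the analytic answer load / k,
# yet no line of the loop ever "calculates" load / k directly.
print(settle(4.0))  # approaches 2.0
```

In that sense one can call what the system does calculation, as Schulz does, even though no explicit division is ever performed.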
> I think this is relevant to both ontology and logic when it comes to
> the ability to choose and interpret the symbols used to identify the
> things that the ontology and logic are about. (03)
Well, I suppose the symbol grounding problem is still with us, isn't it? (04)
I'm inclined to believe that recreating human-like cognitive abilities
will require embodiment. After all, meaning and consequence as humans
experience them are about what happens to their own bodies and in the
physical (and information) world around them. Logic is part of what's
required to predict consequences (a necessity for survival), but most
often that logic is carried out unconsciously. Most human knowledge,
even when abstract, is not about mathematics or other formal systems,
it's about the affairs of humans and other living beings. I'm not
saying our knowledge is impervious to formal treatment, just that it is
ultimately mostly about being human and is gained through sense
experience: directly, by witnessing others' experiences, or by hearing
them recount those experiences. One way or another, it's about
experience. And much of our prediction is based on simulation:
internal, mental simulation. Still, that's amenable to computational
replication (again, I'd say it _is_ computation, of the sort very
large-scale neural systems perform). (05)
But that's about a kind of AI that is not necessarily what we'd like to
accomplish with, e.g., the Semantic Web. At least not as I see it. We
don't really want to create a conscious, self-aware Web, do we? (06)
> John Black
> www.kashori.com (07)
Randall Schulz (08)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (09)