Tom, (01)
I am well aware of those debates and of the intensity on all sides: (02)
> This is the “vs.” I am referring to, and in spite of your “should”, the
> facts on the current ground are that there is this debate. Indeed, the
> article by Fodor and Lepore and the reply by David Chalmers, both of
> which you recently provided links to, make it quite clear how intense
> the “vs.” remains. (03)
The most cutthroat debates are among philosophers and theologians --
primarily because they're searching for certainty, and they have no way
of knowing when they're wrong. (04)
That was the point of my talk at the Mexican AI conference in November: (05)
http://www.jfsowa.com/talks/micai.pdf
Why has AI failed? And how can it succeed? (06)
That kind of wrangling led to single-paradigm AI systems, which are very
strong on one type of problem and useless for anything else.    (07)
> Here's Gardenfors, on my "vs.":
> “Within cognitive science, there are currently two dominating
> approaches to the problem of modeling representations.” From the
> point of view of the symbolic approach (which I and others call
> the “mental representation” approach), “cognition is seen as
> essentially being computation, involving symbol manipulation.” (08)
I presented a guest lecture at Lund at PG's invitation, so I won't
be too harsh on him. Peter did good work on belief revision, which
I strongly recommend. He's the G of the AGM axioms. But that
quotation is an extremely oversimplified and misleading summary
of AI and cognitive science. (09)
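Since belief revision may be unfamiliar to some readers, here is a rough
sketch of a few of the AGM revision postulates, restated informally from
memory rather than quoted from the original paper.  K is a logically
closed set of beliefs, K * p is the result of revising K with a
proposition p, and K + p is plain expansion, Cn(K ∪ {p}):

   Success:      p ∈ K * p
   Inclusion:    K * p ⊆ K + p
   Vacuity:      if ¬p ∉ K, then K + p ⊆ K * p
   Consistency:  K * p is consistent whenever p is consistent

The interesting cases are the ones where ¬p is already in K, so some old
beliefs must be given up before p can be accepted consistently.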
Marvin Minsky's _Society of Mind_ is a good antidote to that kind
of partisanship. See the reference in Slide 13 of micai.pdf:
http://web.media.mit.edu/~push/CognitiveDiversity.pdf (010)
That was a strong influence on my "Flexible Modular Framework" (FMF):
http://www.jfsowa.com/pubs/arch.pdf (011)
A quotation from arch.pdf:
> The lack of progress in building general-purpose intelligent systems
> could be explained by several different hypotheses:
>
> * Simulating human intelligence on a digital computer is impossible.
>
> * The ideal architecture for true AI has not yet been found.
>
> * Human intelligence is so flexible that no fixed architecture can do
> more than simulate a single aspect of what is humanly possible.
>
> Many people have presented strong, but not completely convincing
> arguments for the first hypothesis. In the search for an ideal
> architecture, others have implemented a variety of at best partially
> successful designs. The purpose of this paper is to explore the third
> hypothesis: propose a flexible modular framework that can be tailored
> to an open-ended variety of architectures for different kinds of
> applications. (012)
For examples that show how the FMF works, see "Two paradigms are
better than one, and multiple paradigms are even better":
http://www.jfsowa.com/pubs/paradigm.pdf (013)
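To make the multi-paradigm point concrete, here is a toy sketch in
Python.  It is my own illustration of the general idea, not code from
arch.pdf or paradigm.pdf, and the module names and stub solvers are
invented for the example: independent modules, each embodying a
different paradigm, register with a simple dispatcher, which tries
whichever ones claim competence on a problem and keeps the first answer
that works.

# Toy multi-paradigm dispatcher.  Purely illustrative; the FMF described
# in arch.pdf is far more flexible than this fixed loop.
from typing import Callable, Optional

Solver = Callable[[str], Optional[str]]

class Dispatcher:
    def __init__(self) -> None:
        # Each entry: (module name, competence test, solver).
        self.modules: list[tuple[str, Callable[[str], bool], Solver]] = []

    def register(self, name: str,
                 can_handle: Callable[[str], bool],
                 solve: Solver) -> None:
        self.modules.append((name, can_handle, solve))

    def run(self, problem: str) -> Optional[str]:
        # Try every module that claims competence; keep the first answer.
        for name, can_handle, solve in self.modules:
            if can_handle(problem):
                answer = solve(problem)
                if answer is not None:
                    print(f"{name}: {answer}")
                    return answer
        print("no module produced an answer")
        return None

# Two deliberately different "paradigms": a rule-based module and a
# catch-all guesser standing in for a statistical learner.
def is_question(p: str) -> bool:
    return p.endswith("?")

def rule_solver(p: str) -> Optional[str]:
    return "yes, by rule" if "logic" in p else None

d = Dispatcher()
d.register("rule-based", is_question, rule_solver)
d.register("statistical", lambda p: True, lambda p: "best statistical guess")
d.run("Does logic still matter?")

A real system would replace the stubs with a theorem prover, an analogy
engine, a statistical learner, and so on, and would let the modules
exchange partial results instead of answering in isolation.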
Fundamental principle: Neuroscientists are the first to emphasize that
*nobody* really knows how the brain works. For philosophers to engage
in endless wrangling about the virtues of one half-baked theory or
another is fundamentally misguided. (014)
Recommendation in micai.pdf: Implement various theories. Test them
alone and in different combinations. See what works. Collaborate! (015)
John (016)