
Re: [ontolog-forum] Fruit fly emotions mimic human emotions - ontology discovery possible?

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rich Cooper" <metasemantics@xxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 27 May 2015 12:51:43 -0700
Message-id: <0be001d098b6$8fb3a7d0$af1af770$@com>

Here is another spiffy quote from PG:

 

PG: Conceptual spaces can also provide a better way of representing learning in general and concept formation in particular than what can be achieved on the symbolic level.

 

For Gärdenfors, the phrase "conceptual spaces" refers to "intrinsic" senses scattered across an array of cortical columns, if I may so freely interpret his intent against his terminology.  He cites examples like the cochlea's scattering of sound by harmonic frequency, the limits that the RGB cones and the rods place on color vision, muscular extension, and so forth, each representing a dimension of that scattering.  He uses this finiteness of scattering along the neocortex (again, my gloss on his wording) to conclude that induction over infinite domains is biologically illegitimate:

 

Many of the problems of induction that are created by the symbolic approach dissolve into thin air when analysed on the conceptual level. Similarly, the problem of how transducers work becomes a non-problem since no transducers are needed for the information represented in conceptual spaces.

 

RC: He seems to say that specialized processing in noncortical brain areas is very limited, so the sensory dimensions are scattered across the dimensions of each cortex, and the brain does the rest.  I suppose that is partly the connectionist picture.

 

But it also shows that the designation of any symbol used by the patient is localized by name or description in the appropriate cortical column(s) so it can be referenced linguistically. 
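Before moving on, here is a minimal Python sketch of what a "conceptual space" might look like computationally, on one common reading of Gärdenfors' theory: quality dimensions form a metric space, each concept is anchored to a prototype point, and categorization picks the nearest prototype (a Voronoi-style carving of the space).  The particular dimensions, prototypes, and numbers below are my own illustrative assumptions, not anything taken from his text.

    import math

    # A toy conceptual space for color with two quality dimensions:
    # hue in degrees (0-360, circular) and brightness (0.0-1.0).
    # Each concept is represented by a prototype point in that space.
    PROTOTYPES = {
        "red":    (0.0,   0.5),
        "yellow": (60.0,  0.7),
        "blue":   (240.0, 0.4),
    }

    def distance(p, q):
        """Euclidean distance, treating hue as circular (0 == 360)."""
        dh = min(abs(p[0] - q[0]), 360.0 - abs(p[0] - q[0])) / 360.0
        db = abs(p[1] - q[1])
        return math.hypot(dh, db)

    def categorize(point):
        """Assign a point to the concept whose prototype is nearest,
        i.e. a Voronoi tessellation of the space into concept regions."""
        return min(PROTOTYPES, key=lambda c: distance(point, PROTOTYPES[c]))

    print(categorize((20.0, 0.6)))   # -> 'red'

The only point of the sketch is that categorization here is geometric, by proximity within the space, rather than by manipulating symbolic rules.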

 

PG: The theory of conceptual spaces may also indicate a direction where a solution to the frame problem can be ferreted out. The starting point is to separate the information to be represented into domains. The combinatorial explosion of symbolic representations of a changing world is a result of not keeping symbolic information about different domains separated.

 

RC: There is some truth to that, but it also leaves a lot of work to be done unstated.  Basically, database models use the primitive domains (integer, real, Boolean, char, string, ...) and don't build the more object-oriented domains that might be more intuitively understandable to the people who actually use the data (a sketch follows below).
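To make the contrast concrete, here is a small Python sketch; the names and types (AccountRow, Money, Status) are purely hypothetical, invented for illustration.  The first model stores everything in primitive domains, so the meaning lives only in the reader's head; the second wraps the same facts in richer domain types that document and constrain their own meaning.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    # Primitive-domain model: every field is an int, float, or str.
    @dataclass
    class AccountRow:
        balance: float        # which currency?  can it go negative?
        opened: str           # free-form date string
        status: int           # magic numbers 0, 1, 2, ...

    # Richer, more object-oriented domains: the same facts, but each
    # field is a type that carries its own constraints and meaning.
    class Status(Enum):
        OPEN = "open"
        FROZEN = "frozen"
        CLOSED = "closed"

    @dataclass(frozen=True)
    class Money:
        amount_cents: int
        currency: str = "USD"

    @dataclass
    class Account:
        balance: Money
        opened: date
        status: Status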

 

But doing so leaves you with only a snapshot of the data model at that point in time, while the actual data model keeps changing as requirements are added or updated.

 

Since the need for those more detailed views of the database is merely conceptual, it doesn't help the people bankrolling the system justify funding such work.  So that level of detail is usually left incomplete.

 

So, if we (Tonto) can map the fruit fly's sensor qualities and their neurons onto cortical locations (assuming flies have anything like cortices big enough to do so), perhaps we can even relate that to some primitive functionality in the brain that corresponds to the human case.  If so, then we (Tonto) have that fly model of the related phenomena to work on.

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F Sowa
Sent: Monday, May 25, 2015 1:30 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] Fruit fly emotions mimic human emotions - ontology discovery possible?

 

Tom,

 

I am well aware of those debates and of the intensity on all sides:

 

> This is the “vs.” I am referring to, and in spite of your “should”,
> the facts on the current ground is that there is this debate. Indeed,
> the article by Fodor and Lepore and the reply by David Chalmers, both
> of which you recently provided links to, make it quite clear how
> intense the “vs.” remains.

 

The most cutthroat debates are among philosophers and theologians -- primarily because they're searching for certainty, and they have no way of knowing when they're wrong.

 

That was the point of my talk at the Mexican AI conference in November:

 

    http://www.jfsowa.com/talks/micai.pdf

    Why has AI failed?  And how can it succeed?

 

That wrangling led to single-paradigm systems, which are very strong on one type of problem and useless for anything else.

 

> Here's Gardenfors, on my "vs.":
> “Within cognitive science, there are currently two dominating
> approaches to the problem of modeling representations.” From the point
> of view of the symbolic approach (which I and others call the “mental
> representation” approach), “cognition is seen as essentially being
> computation, involving symbol manipulation.”

 

I presented a guest lecture at Lund at PG's invitation, so I won't be too harsh on him.  Peter did good work on belief revision, which I strongly recommend.  He's the G of the AGM axioms.  But that quotation is an extremely oversimplified and misleading summary of AI and cognitive science.

 

Marvin Minsky's _Society of Mind_ is a good antidote to that kind of partisanship.  See the reference in Slide 13 of micai.pdf:

http://web.media.mit.edu/~push/CognitiveDiversity.pdf

 

That was a strong influence on my "Flexible Modular Framework" (FMF):

http://www.jfsowa.com/pubs/arch.pdf

 

A quotation from arch.pdf:

> The lack of progress in building general-purpose intelligent systems
> could be explained by several different hypotheses:
>
>  * Simulating human intelligence on a digital computer is impossible.
>
>  * The ideal architecture for true AI has not yet been found.
>
>  * Human intelligence is so flexible that no fixed architecture can do
>    more than simulate a single aspect of what is humanly possible.
>
> Many people have presented strong, but not completely convincing
> arguments for the first hypothesis.  In the search for an ideal
> architecture, others have implemented a variety of at best partially
> successful designs. The purpose of this paper is to explore the third
> hypothesis:  propose a flexible modular framework that can be tailored
> to an open-ended variety of architectures for different kinds of
> applications.

 

For examples that show how the FMF works, see "Two paradigms are better than one, and multiple paradigms are even better":

http://www.jfsowa.com/pubs/paradigm.pdf

 

Fundamental principle:  Neuroscientists are the first to emphasize that *nobody* really knows how the brain works.  For philosophers to engage in endless wrangling about the virtues of one half-baked theory or another is fundamentally misguided.

 

Recommendation in micai.pdf:  Implement various theories.  Test them alone and in different combinations.  See what works.  Collaborate!

 

John

 



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
