
Re: [ontolog-forum] Semantics, Representations and Grammars for Deep Learning - David Balduzzi

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: John Bottoms <john@xxxxxxxxxxxxxxxxxxxx>
Date: Sun, 25 Oct 2015 19:17:46 -0400
Message-id: <562D631A.20109@xxxxxxxxxxxxxxxxxxxx>
Rich,

It looks like you dropped a thought (flagged below).

And thank you for identifying that you are talking about Strong AI. It seems that much of the W3C work is focused on database work, which has different requirements, and they are not entirely clear about their road map. I think that having a defined destination is sufficiently important that it must not be ignored, since it informs the steps taken and the navigation techniques used to get there. IBM, in their work with Watson, have identified 111 different major algorithmic areas for their development work, and they appear to be following a somewhat evolutionary approach to Strong AI. They have a relatively new group in Australia with 80 developers working to extend the capabilities of Watson.

While it is true that we have zero exemplars of intelligent systems, we do have the prime example of humans. And we see no cases of newborns being able to do elementary algebra; it must be that all intelligent systems learn their knowledge in a sequential fashion. Much of the NN effort seems to be trying to leapfrog this important aspect of reality. Further, it appears to me that we did not learn from the failed Japanese Fifth Generation project: a single approach to framework and logic design will not lead to Strong AI. And we do not have the myriad of corpora at hand to feed into supercomputer simulations of NNs. I believe we would benefit from the development of a progressive curriculum for Strong AI that presents realistic evolutionary goals.
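
To make the idea of a progressive curriculum concrete, here is a minimal sketch, in Python, of curriculum learning: order the training examples from easy to hard and introduce them in cumulative stages. The difficulty heuristic and the staging rule are my own illustrative assumptions, not a prescription for how such a curriculum must work.

    # Curriculum-learning sketch: sort examples easy-to-hard and train in
    # cumulative stages, the way a pupil keeps earlier lessons while new
    # ones are added. difficulty() is a toy placeholder heuristic.
    def difficulty(example: str) -> int:
        return len(example.split())  # longer statement = harder (toy rule)

    def curriculum_stages(examples, n_stages=3):
        ordered = sorted(examples, key=difficulty)
        size = -(-len(ordered) // n_stages)   # ceiling division per band
        for k in range(1, n_stages + 1):
            yield ordered[:k * size]          # everything seen so far

    examples = ["1 + 1", "3 * 4 + 1", "solve: 2x + 3 = 11",
                "factor: x^2 - 5x + 6", "2 + 2"]
    for stage, batch in enumerate(curriculum_stages(examples), 1):
        print(f"stage {stage}: train on {len(batch)} examples")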

As far as speech recognition goes, we know it is a deep problem, but great strides have been made, and the Dragon Systems algorithms are in use in fighter jet cockpits. That technology is sufficiently developed for the current level of work in the semantic areas. It is my view that we greatly underestimate the complexity of the brain in terms of "components" and topology. Most recognize that there are two hemispheres, but little is said beyond that; the hemisphere count (2) doubles the basic constructs once they have been identified.

Gross anatomy identifies three brain layers and typically six layers of neural nets, along with about 50,000 columns across the cortex. The visual centers, V1-V6, are dedicated to heavy-duty NN processing of images and are tied closely to reasoning, with complex sets of reciprocal firings that supervise the visual processes of learning, recognition, and communication. Keep in mind that human neurons are one-way only, so each communication with another portion of the brain is accompanied by a small set of neurons that act as the traffic cop for the full-duplex circuit. Counting just the three brain layers and the six visual centers across two hemispheres yields about 18 ill-defined functional areas ((3 + 6) x 2), along with the extensive highways of neurons that link the various centers together. In other words, the study of brain physiology is appropriate for medicine and basic research, but it is unlikely to provide deep knowledge about Strong AI.

The early researcher Dr. Brodmann identified some 50 or so different physiological areas, and we have not begun to account for the differences in their operation. That work is very old and of little consequence in light of the newer fMRI techniques for sorting out signaling during task execution. Interestingly, the fMRI work does mesh well with the Vector Space Model used in search ranking (alongside link-based measures such as PageRank).
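
Since the Vector Space Model comes up here, a minimal sketch of its core operation may help: each document becomes a vector of term counts, and similarity is the cosine of the angle between vectors. The toy documents below are made up purely for illustration.

    import math
    from collections import Counter

    # Vector Space Model sketch: a document is a vector of term counts;
    # two documents are similar when the cosine of the angle between
    # their vectors is close to 1.
    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values())) *
                math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    doc1 = Counter("the visual cortex processes images".split())
    doc2 = Counter("the cortex processes visual signals".split())
    doc3 = Counter("newborns cannot do elementary algebra".split())

    print(cosine(doc1, doc2))  # high: shared vocabulary
    print(cosine(doc1, doc3))  # zero: no overlap at all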

Finally, I don't understand your reference to 7,209,923. If that is a paper, I overlooked it; please excuse me and point me in the right direction.

-John Bottoms
 Concord, MA USA


On 10/25/2015 4:38 PM, Rich Cooper wrote:

Dear John B,

 

You wrote:

 

I'm Skeptical,

 

A total of 103 references for a 16-page paper seems a bit out of whack. And his approach relies on the claim that "A biologically-plausible deep learning algorithm should take advantage of the particularities of the reinforcement learning setting." That amounts to saying that an appropriately sized corpus is required.

 

Yes, it sounds like news blather.  And his writing is more opaque than appropriate. 

 

Further, it is always a red flag for me when someone coins a new term or acronym such as "KICKS" when there are better ways to explain the algorithm. I believe he presents a reasonable overview of some useful techniques that would be better conceived using semantics.

 

But there are no semantics in the voice input stream.  It's just numbers from the analog-to-digital converter, sampled every so often.  To get to what has become commonly called semantics means interpreting those numbers, and semantics can indeed help with the error feedback and such.  But it can't do what the NN NLP authors purport to do, which is to take the analog signals and convert them into lexical or dictionary units, often words and phrases. 
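
To see just how bare those numbers are, here is a minimal sketch that synthesizes 100 ms of such a stream; the 8 kHz rate and the 200 Hz tone are arbitrary stand-ins for real speech input.

    import math

    # What the front end actually receives from the A/D converter: a bare
    # series of sampled amplitudes. We synthesize one here (a 200 Hz tone,
    # 16-bit range, 8 kHz sampling) because the only point is that there
    # are no phonemes, words, or meanings in the numbers themselves.
    SAMPLE_RATE = 8000  # samples per second
    FREQ = 200          # Hz, a stand-in for a voiced sound

    samples = [int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
               for n in range(SAMPLE_RATE // 10)]  # 100 ms of "speech"

    print(samples[:8])  # just integers; semantics lives many layers up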

 

The problem is the interpretive interface between the number series and the

 

The front-end problem is to do pattern recognition so that you can identify many of the sounds as plosives, fricatives, vowels, whatthehellatives, ... before turning them into words, where analysis of the accumulating situation knowledge would finally provide the basis on which semantics can work.  Only after that point is semantics appropriate to the feed-forward direction of processing. 
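
A minimal sketch of that front-end step, assuming two classic frame features: short-time energy and zero-crossing rate (ZCR). High energy with few zero crossings suggests a voiced, vowel-like frame; low energy with many crossings suggests an unvoiced, fricative-like one. The thresholds below are untuned guesses.

    import numpy as np

    FRAME = 200  # 25 ms of samples at 8 kHz

    # Crude acoustic front end: label each frame before any words exist.
    # Vowels: high energy, low ZCR. Fricatives: low energy, high ZCR.
    def classify_frames(samples, energy_floor=0.01, zcr_split=0.15):
        x = np.asarray(samples, dtype=float)
        peak = np.abs(x).max()
        x = x / peak if peak else x              # normalize to [-1, 1]
        labels = []
        for i in range(0, len(x) - FRAME + 1, FRAME):
            frame = x[i:i + FRAME]
            energy = float(np.mean(frame ** 2))
            zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
            if energy < energy_floor:
                labels.append("silence")
            elif zcr < zcr_split:
                labels.append("voiced (vowel-like)")
            else:
                labels.append("unvoiced (fricative-like)")
        return labels

Fed the synthetic tone above, every frame comes back voiced; fed white noise, every frame comes back fricative-like. Real phonetic classification is far subtler, but this is the feed-forward layer at which it happens.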

 

Semantics can help. The reason linguistics provides valuable tools is that natural languages have undergone thousands of years of adaptation and pruning that embody the deep learning of the members of the community of interest. We should take advantage of that learning in a semantic fashion rather than trying to recreate those lessons. Humans can provide the right answers if we carefully ask the correct questions.

 

-John Bottoms

 Concord, MA USA

 

That is absolutely true; +1.  But what we have now includes dictionaries, thesauri, WordNet, VerbNet, ..., and lots of embodiments of resources at the lexical level and above.  There is nothing at the analog level, to my knowledge (please correct me if you know otherwise), that can be used to train these NN NLP machines further. 
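
For instance, here is a minimal sketch of tapping one of those lexical-level resources, WordNet, through the NLTK interface (this assumes the nltk package is installed and its wordnet corpus has been downloaded).

    # Requires: pip install nltk, then nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    # Sense inventory for a word: the kind of resource that exists at the
    # lexical level and above, but has no analog-signal counterpart.
    for synset in wn.synsets("phrase")[:3]:
        print(synset.name(), "-", synset.definition())

    # The hypernym chain exposes taxonomic structure above the lexicon.
    print([s.name() for s in wn.synsets("phrase")[0].hypernym_paths()[0]])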

 

But the machines that hold a credible dialog with us, a la the Turing Test, can presently be counted on no fingers.  It's zero. 

 

Since we haven't yet gotten to the depths of how language works for people, let's keep ALL resources working that show improvement against their objectives.  Just keep the budget within reasonable limits and keep pruning out the nonproductive approaches until only productive ones remain. 

 

Because, futile as this search has been for so long, a linguistically competent Q&A agent is essential to getting really deep into artificial intelligence.  Once it becomes fairly available to obsessive users, it will also make one heck of a discovery system, but I already showed how to do that in the 7,209,923. 

 

Sincerely,

Rich Cooper,


 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Bottoms
Sent: Sunday, October 25, 2015 1:08 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] Semantics, Representations and Grammars for Deep Learning - David Balduzzi

 

I'm Skeptical,

A total of 103 references for a 16-page paper seems a bit out of whack. And his approach relies on the claim that "A biologically-plausible deep learning algorithm should take advantage of the particularities of the reinforcement learning setting." That amounts to saying that an appropriately sized corpus is required.

Further, it is always a red flag for me when someone coins a new term or acronym such as "KICKS" when there are better ways to explain the algorithm. I believe he presents a reasonable overview of some useful techniques that would be better conceived using semantics.

Semantics can help. The reason linguistics provides valuable tools is that natural languages have undergone thousands of years of adaptation and pruning that embody the deep learning of the members of the community of interest. We should take advantage of that learning in a semantic fashion rather than trying to recreate those lessons. Humans can provide the right answers if we carefully ask the correct questions.

-John Bottoms
 Concord, MA USA

On 10/25/2015 3:32 PM, Rich Cooper wrote:

Dear OntoLogicists,

 

The subject says it all.  This is a 17-page paper, with 3 more pages of references, about how the neural net crowd seems to have straightened a curve in getting to natural language in a deeper sense than just voice recognition.  The paper is at:

 

http://arxiv.org/pdf/1509.08627.pdf

 

The author seems to have the EE math-culture viewpoint, leading to some very interesting ways to learn such odd things as those discussed in Women, Fire, and Dangerous Things by exposure to a large enough number of samples. 

 

Does anyone have a good tutorial in mind on recent NN to NLP practice?

 

Sincerely,

Rich Cooper,


 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

 



 
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
 


