Dear John B.
You wrote:
Rich,
It looks like you dropped a thought
(flagged below).
And thank you for clarifying
that you are talking about Strong AI. It seems that much of the W3C work is
focused on database work, which has differing requirements; they are not exactly
clear on their road map.
I think the main driver for W3C is the interoperability
concept. Will it ever be practical? To some extent, it probably
will be when there are enough extra support services.
I think that having a defined
destination for the work is important enough that it must not be ignored, as it
informs both the steps taken and the navigation techniques used to get there. IBM,
in their work with Watson, has identified 111 different major algorithmic areas for their
development work. And they appear to be following a somewhat evolutionary
approach to Strong AI. They have a relatively new group in Australia with
80 developers working to extend the capabilities of Watson.
First, IBM bought SPSS, which had a great product for data mining
that did not deal with unstructured text columns. That was around 2011
or thereabouts. I know nothing about their product other than
what they publish on their web site. If you are interested, you might
want to search Google for:
site:http://www-01.ibm.com/support/knowledgecenter/ Watson
That will bring up many variations on their approach.
While it is true that we have
zero exemplars of intelligent systems, we do have the prime example of humans.
And we see no cases of newborns being able to do elementary algebra. It must be
that all intelligent systems learn their knowledge in a sequential
fashion.
Yes, what's needed is a system that learns, modifies its
behavior to "improve" the result, and then tries again. Repeat
until wise and intelligent.
NN NLP work uses structures like those in the referenced
paper. That is still learning, but we don't get to understand the learned
lessons when they exist only in the mathematics used to find uniform measurements
that indicate class membership. But note that the attempt to intellectualize
everything that might be considered an intelligent agent is not paying off
yet.
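To make that concrete, here is a minimal sketch (my own toy illustration, not
anything from the paper) of a classifier trained by gradient descent; the
entire "learned lesson" ends up as opaque weight values that indicate class
membership without explaining it:

# A minimal sketch showing that a trained classifier's "knowledge"
# is just opaque numbers: the learned weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters standing in for two classes.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(200):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted class probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# The entire "learned lesson" is these numbers; nothing here explains
# *why* a sample belongs to a class in human terms.
print("weights:", w, "bias:", b)
print("accuracy:", np.mean((p > 0.5) == y))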
Many of the NN efforts seem
to be trying to leapfrog this important aspect of reality. Further, it
appears to me that we did not learn the lesson of the failed Japanese Fifth Generation
project: that a single approach to framework and logic designs will not lead to
Strong AI.
Is that really the best lesson to learn? It seems the Fifth Generation
papers came and went within a year or two. It seems the Japanese simply
did not want to spend the money on a wild goose chase. It is pointless to
offer huge batches of bucks to get people to work on things we don't truly
understand yet.
And we do not have the
myriad corpora at hand to feed into supercomputer simulations of NNs. I
believe we would benefit from the development of a progressive curriculum for
Strong AI that presents realistic evolutionary goals.
Agreed, but "realistic revolutionary goals" have already
been tried and met with failure. They also have to be achievable
"realistic revolutionary goals" and at this time, we still
don't really know how to formulate the problem in a clear and unambiguous
way.
As far as Text-To-Speech
goes, we know that is a deep problem, but great strides have been made and the
Dragon Systems algorithms are in use in fighter jet cockpits.
Yes, text-to-speech is going reasonably well except for the speech
quality, which is often not so great. There is not sufficient prosody
capability with current text-to-speech, IMHO. I want that emotional
component to be recognizable in an agent's behavior, and that is very, very
poorly understood at this time.
That technology is
sufficiently developed for the current level of work in semantic areas.
It is my view that we greatly underestimate the complexity of the brain in
terms of "components" and topology. Most recognize that there are two
hemispheres, but little is said beyond that. The hemisphere count (2) doubles
the basic constructs once they have been identified.
Gross anatomy identifies
three brain layers and typically six neural layers, along with about
50,000 columns across the cortex.
What? Only 50,000 columns on the cortex? That seems
very small considering the 100 billion total neurons. A column is
expected to be six neural layers deep, from Jeff Hawkins' descriptions. So
assuming a hundred neurons per column (your estimate may vary), that leaves
room for on the order of a billion columns, over the entire brain, not just the
cortex. That number is more in line with my expectations about the
capabilities of humans performing normal expected tasks. 50K seems
extraordinarily small, IMHO.
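As a quick back-of-envelope check of that estimate (in Python, using only the
rough figures above; both numbers are assumptions):

# Rough column-count estimate from the figures discussed above.
total_neurons = 100e9        # ~100 billion neurons, whole brain (rough)
neurons_per_column = 100     # assumed density; your estimate may vary

columns = total_neurons / neurons_per_column
print(f"{columns:.0e} columns")  # -> 1e+09, on the order of a billion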
The visual centers, V1-V6,
are dedicated to heavy-duty NN processing of images and are tied closely to
reasoning, with complex sets of reciprocal firings that supervise the visual
processes of learning, recognition, and communication. Keep in mind that human neurons
are one-way only, so each communication with another portion of the brain is
accompanied by a small set of neurons that act as the traffic cop for the
full-duplex circuit. Counting just the three brain layers and the visual
centers across two hemispheres yields about 18 ill-defined functional areas,
along with the extensive highways of neurons that link the various centers
together. In other words, the study of brain physiology is appropriate for
medicine and basic research, but it is unlikely to provide deep knowledge about
Strong AI.
Agreed. But the lessons we learn from examining the brain,
and working out the wiring bundles between functional parts, should help us
come up with a few theories about brain function that could also suggest some
algorithmic advances. So, with no actual examples of equipment that works
like people.
The early researcher Dr.
Brodmann identified some 50 or so different physiological areas, and we have
not begun to account for the differences in their operation. That work is very
old and of little consequence in light of the newer fMRI techniques for
sorting out signaling during task execution. Interestingly, the fMRI work does
mesh well with the Vector Space Model used alongside the PageRank algorithm in
search engines.
And in the same vein, clustering of words based on context seems
to be a fruitful way to treat knowledge for indexing and retrieval in computer
processes.
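Here is a minimal sketch of that clustering idea (a toy of my own, not any
particular system): build co-occurrence vectors for words from their
surrounding contexts and compare them with cosine similarity, the Vector Space
Model's core measure.

# Toy sketch: compare words by context using co-occurrence vectors
# and cosine similarity (the Vector Space Model's core comparison).
from collections import defaultdict
import math

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the stock price rose . the share price fell .").split()

window = 2
vecs = defaultdict(lambda: defaultdict(int))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vecs[w][corpus[j]] += 1   # count context words around w

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words seen in similar contexts score higher than unrelated ones.
print(cosine(vecs["cat"], vecs["dog"]))     # relatively high
print(cosine(vecs["cat"], vecs["price"]))   # relatively low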
Finally, I don't understand
your reference to 7,209,923. If that is a paper, I overlooked it; please
excuse me and point me in the right direction.
Oh, sorry. Attached to this email is patent 7,209,923,
which describes how discovery systems can be engineered to perform more
intelligent tasks. The number is the patent number, which identifies the
patent in the USPTO database.
-John Bottoms
Concord, MA USA
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John
Bottoms
Sent: Sunday, October 25, 2015 4:18 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] Semantics, Representations and Grammars for
Deep Learning - David Balduzzi
On 10/25/2015 4:38 PM, Rich Cooper wrote:
Dear John B,
You wrote:
I'm Skeptical,
A total of 103 references for
a 16-page paper seems a bit out of whack. And his approach relies on inputs to
the NNs, asserting that "A biologically-plausible deep learning algorithm should
take advantage of the particularities of the reinforcement learning
setting." That amounts to saying that an appropriately sized corpus is
required.
Yes, it sounds like news blather. And his writing is more
opaque than appropriate.
Further, it is always a red
flag for me when someone coins a new term or acronym such as
"KICKS" when there are better ways to explain the algorithm. I
believe he presents a reasonable overview of some useful techniques that would
be better conceived using semantics.
But there are no semantics in the voice input stream. It's
just numbers from the analog converter, sampled every so often. Getting
to what has commonly become called semantics means interpreting those numbers,
and semantics can indeed help with the error feedback and such. But it
can't do what the NN NLP authors purport to do, which is to take the
analog signals and convert them into lexical or dictionary units, often words
and phrases.
The problem is the interpretive interface between the number series and the
The front-end problem is to do pattern recognition so that
you can identify many of the sounds as plosives, fricatives, vowels,
whatthehellatives, ... before turning them into words, where semantic analysis
of the accumulating situation knowledge would finally provide the basis on
which semantics can work. Only after that point is semantics appropriate
to the feed-forward direction of processing.
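A minimal sketch of such a front end (synthetic samples standing in for the
analog converter's output; the thresholds are arbitrary assumptions of mine):
frame the number series and compute short-time energy and zero-crossing rate,
two classic features for crudely separating vowel-like from fricative-like
sounds before any words appear.

# Crude front-end sketch: classify frames of a (synthetic) sample stream
# as vowel-like or fricative-like using energy and zero-crossing rate.
import numpy as np

rate = 16000
t = np.arange(rate) / rate
vowel = 0.8 * np.sin(2 * np.pi * 200 * t[: rate // 2])  # low-freq, voiced-like
fricative = 0.2 * np.random.default_rng(0).standard_normal(rate // 2)  # noisy
samples = np.concatenate([vowel, fricative])  # stand-in for ADC output

frame = 400  # 25 ms frames
for start in range(0, len(samples) - frame, frame * 10):
    x = samples[start:start + frame]
    energy = float(np.mean(x ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2)  # crossings per sample
    label = "vowel-like" if energy > 0.05 and zcr < 0.1 else "fricative-like"
    print(f"t={start / rate:.2f}s energy={energy:.3f} zcr={zcr:.3f} -> {label}")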
Semantics can help. The
reason linguistics provides valuable tools is that natural languages have
undergone thousands of years of adaptation and pruning that embody the deep
learning of the members of the community of interest. We should take advantage
of that learning in a semantic fashion, rather than trying to recreate those
lessons. Humans can provide the right answers if we carefully ask the correct
questions.
-John Bottoms
Concord, MA USA
That is absolutely true; +1. But what we have now
includes dictionaries, thesauri, WordNet, VerbNet, ..., and lots of
embodiments of resources at the lexical level and above. Nothing at the
analog level, to my knowledge (please correct me if you know otherwise), that can
be used to train these NN NLP machines further.
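For example, here is a minimal sketch of tapping one of those lexical-level
resources through NLTK's WordNet interface (assuming nltk is installed and the
wordnet corpus has been downloaded once via nltk.download('wordnet')):

# Minimal sketch: querying WordNet, one of the lexical-level resources
# mentioned above, via NLTK.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank")[:3]:   # first few senses of "bank"
    print(synset.name(), "-", synset.definition())
    hypernyms = synset.hypernyms()      # more general concepts
    if hypernyms:
        print("  is-a:", hypernyms[0].name())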
But the machines that can hold a credible dialog with us, a la the
Turing Test, can presently be counted on no fingers. It's zero.
Since we haven't yet gotten to the depths of how language works for
people, let's keep ALL resources working that provide improvements against
their objectives. Just keep the budget within reasonable limits and keep
pruning out the non-productive approaches until only productive ones
remain.
Because, futile as this search has been for so long, a
linguistically competent Q&A agent is essential for getting really deep into
artificial intelligence. Once it becomes fairly available to obsessive
users, it will also take one heck of a discovery system, but I already showed
how to do that in patent 7,209,923.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
On 10/25/2015 3:32 PM, Rich Cooper wrote:
Dear OntoLogicists,
The subject says it all. This is a 17-page paper, with 3
more pages of references, about how the neural net crowd seems to have
straightened a curve in getting to natural language in a deeper sense than just
voice recognition. The paper is at:
http://arxiv.org/pdf/1509.08627.pdf
The author seems to have the EE math culture viewpoint, leading
to some very interesting ways to learn such odd things as those discussed in Women,
Fire, and Dangerous Things, by exposure to a large enough number of
samples.
Does anyone have a good tutorial in mind on recent NN-to-NLP
practice?
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J