Here is a quote from the thesis regarding the definition of "perplexity":

    The notion "probability of a sentence" is an entirely useless one, under
    any known interpretation of this term. (Chomsky, 1969)

    Still, we can consider entropy and perplexity as very useful measures. The
    simple reason is that in the real-world applications (such as speech
    recognizers), there is a strong positive correlation between perplexity of
    involved language model and the system's performance [24].
This comes from page 16 of the thesis. This is the English version of someone
who is thinking in Czech. Does anyone have a more usual interpretation of the
word "perplexity" as it is being used here? I don't recall Solomonoff using
the term perplexity.
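For reference, the usual reading in the language-modeling literature is that perplexity is the exponentiated per-word cross-entropy of a model on held-out text: PPL = 2^H, where H = -(1/N) * sum_i log2 P(w_i | w_1 ... w_{i-1}). It can be read as the average effective branching factor the model faces per word, which is why it tends to track recognizer performance. A minimal Python sketch under that definition follows; the prob callable standing in for the evaluated language model is a placeholder, not anything taken from the thesis:

    import math

    def perplexity(sentences, prob):
        # `sentences` is an iterable of token lists; `prob(word, history)`
        # is assumed to return P(word | history) for the model under test.
        total_log2, n_words = 0.0, 0
        for sentence in sentences:
            history = []
            for word in sentence:
                total_log2 += math.log2(prob(word, tuple(history)))
                history.append(word)
                n_words += 1
        cross_entropy = -total_log2 / n_words   # bits per word
        return 2.0 ** cross_entropy             # effective branching factor

As a sanity check, a uniform model over a vocabulary of size V (prob returning 1/V everywhere) gives a perplexity of exactly V.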
Sincerely,
Rich Cooper,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT
EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: Rich Cooper
[mailto:metasemantics@xxxxxxxxxxxxxxxxxxxxxx]
Sent: Thursday, November 05, 2015 11:14 AM
To: '[ontolog-forum]'
Subject: Recurrent Neural Nets and Natural Language Models
Dear Ontologizers,
I have been looking for good explanations of how RNNs handle language, since
there have been some recent advances in that area. I found this dissertation
described on the Corpora list, and it seems to be the clearest I have seen yet.
It's only 133 PDF pages; it's in paired Czech and English sections, for those
looking for parallel corpora; and it's intelligently written, at least in the
first few pages I have read.
This is the dissertation:
http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf
Sincerely,
Rich Cooper,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com