On Sep 11, 2009, at 12:13 PM, Rich Cooper wrote:
And since Lisp, we’ve known how to represent the programs to the programs themselves.
Since long before LISP. If I understand you, you are simply talking (in the most general case) about the idea of a universal Turing machine: a Turing machine capable of emulating any other Turing machine — or, equivalently, since we can correlate Turing machines with (classes of) programs, a program that, when implemented, takes (codes of) other programs as input and can thereby simulate their behavior. This idea dates back to Turing's groundbreaking 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem", in which he introduced the idea of a Turing machine and proved the unsolvability of the halting problem.
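To make the "programs as data" point concrete, here is a minimal sketch of the universal-machine idea: an interpreter that takes the description of another Turing machine as ordinary data and runs it. The rule format and the example machine are my own illustrative choices, not anything from Turing's paper.

```python
# A universal-machine sketch: run_tm takes another machine's rule table
# as data and simulates it. The rule format here is hypothetical.

def run_tm(rules, tape, state="q0", accept="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Cells default to '_' (blank).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")
        state, tape[head], move = rules[(state, symbol)]
        head += move
    # Return the written portion of the tape as a string.
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells)

# Example machine (data, not code): flip every bit, halt at the blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
print(run_tm(flip, "1011"))  # -> 0100_
```

The point is that `flip` is just a dictionary — a description the simulator consumes — which is exactly the sense in which one program can take (codes of) other programs as input.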
But Lisp wasn’t a good enough vehicle for natural language at the time. Too simple in the face of huge conceptual complexity.
I suspect most computational linguists would take issue with your characterization of the problems with NLP.
But the old days are transitioning into tomorrow and we can already toss adequate computing power at any problems that we can approximate iteratively.
Adequate for what? Intractable problems do not become tractable with faster computers; the size of the problem space simply blows up more quickly. (This is not to deny that fast computers are a huge boon to knowledge engineering, NLP, etc.)
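A toy back-of-the-envelope makes the point about blow-up: a brute-force search over n items examines 2**n cases, so even a thousandfold hardware speedup buys only about ten more items. The time budget and machine speeds below are hypothetical numbers chosen for illustration.

```python
# Why faster hardware doesn't tame exponential search: doubling machine
# speed buys exactly one more item in a 2**n brute-force search.
import math

def max_items(cases_per_second, seconds):
    """Largest n whose full 2**n search finishes within the time budget."""
    return int(math.log2(cases_per_second * seconds))

budget = 3600                 # one hour (hypothetical)
slow, fast = 10**9, 10**12    # a 1000x hardware speedup (hypothetical)
print(max_items(slow, budget))  # -> 41
print(max_items(fast, budget))  # -> 51
```

Three orders of magnitude more computing moves the feasible problem size from 41 items to 51 — the problem space blows up far faster than the hardware catches up.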