On Jun 13, 2007, at 6:23 AM, Kathryn Blackmond Laskey wrote:
I'm being careful not to sell you snake oil. AI has suffered far too much from fads and extravagant and unfulfillable promises. However, building the disciplines of AI and ontology on classical logic and classical computing is building on a foundation of sand.
For particular purposes, that might be true. As a general view, it is simply out of step with the facts.
Real devices are not Turing machines. Real physical devices are quantum systems (more precisely, quantum theory provides an excellent approximation to their behavior). The only real intelligent systems we know of are very bad at executing algorithms. They are very good at wandering through life trying interesting things, and over time getting a better idea both of what their goals are and how to direct their actions to bring about results in line with their goals.
I'm not sure what you mean by "real devices" -- and perhaps, when we restrict our attention to them, what you say is correct. However, many ontologies are designed to provide distillations of knowledge in domains where quantum-level descriptions and probabilities are utterly irrelevant and classical reasoning is exactly what is required. I don't deny that there are domains where, for some purposes, classical logic is not appropriate, but the suggestion that all useful ontological reasoning is probabilistic reflects a narrow and misleading perspective.