>[DC] >I find it curious that the more focused this group is on having a
>>logically consistent solution that yields mathematically precise
>>answers,
>
>[PH] For the record, I don't think any of us would claim that our
>ontologies give answers this good. (01)
[KL] For the record, "logically consistent solution that yields
mathematically precise answers" does not equate with "good." A
logically correct, mathematically precise answer can be AWFUL if it
answers the wrong question. An imprecise answer from a system
riddled with inconsistency can be WONDERFUL if it is a close enough
approximation to the answer to the right question. In engineering,
logical correctness should be pursued for the sake of getting good
answers in the second sense. Pursuing logical correctness for its
own sake may be good logic, but it is bad engineering. (02)
>[DC] > the more I am thinking that a really 'smart' system would tell
>>me that with imperfect (and often downright misleading) data that it
>>reasons why a particular event occurs when and how it does, and learns
>>with each iteration.
>
>[PH] That sounds like the AI dream of heaven. (03)
[KL] Dreams of heaven give us a guiding vision. (04)
>[PH] Ontologies are really neat,
>but they aren't miraculous. (05)
[KL] What led me to study ontologies, and what keeps me working on
them, is that I think ontologies done right can lead us in the
direction of that guiding vision. (06)
I'm being careful not to sell you snake oil. AI has suffered far too
much from fads and extravagant and unfulfillable promises. However,
building the disciplines of AI and ontology on classical logic and
classical computing is building on a foundation of sand. Real
devices are not Turing machines. Real physical devices are quantum
systems (more precisely, quantum theory provides an excellent
approximation to their behavior). The only real intelligent systems
we know of are very bad at executing algorithms. They are very good
at wandering through life trying interesting things, and over time
getting a better idea both of what their goals are and how to direct
their actions to bring about results in line with their goals. (07)
Quantum computing and approximate Bayesian decision theory are a much
stronger foundation for intelligent systems of the next century than
what we've been doing. Incidentally, if your logic is Bayesian,
learning comes for free. Every time a Bayesian agent observes
something, it updates everything in its knowledge base to take
account of the new information. More precisely, learning comes "for
free" in the logic -- but implementing an approximate Bayesian agent
in a physical system is a *very* challenging engineering problem,
which we're only beginning to understand how to do well. (08)
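To make "learning comes for free" concrete, here is a minimal sketch
(my own toy example -- the coin hypotheses, priors, and likelihoods
are made-up numbers, not anything from a real system) of a discrete
Bayesian agent whose entire learning mechanism is just Bayes' rule
applied once per observation:

    # Illustrative sketch only: a discrete Bayesian agent.
    # Hypotheses, priors, and likelihoods are invented for the example.

    def bayes_update(prior, likelihood, observation):
        """Return the posterior over hypotheses after one observation.

        prior:      dict mapping hypothesis -> P(h)
        likelihood: dict mapping hypothesis -> (obs -> P(obs | h))
        """
        unnormalized = {h: prior[h] * likelihood[h](observation)
                        for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Two rival hypotheses about a coin: fair vs. biased toward heads.
    belief = {"fair": 0.5, "biased": 0.5}
    likelihood = {
        "fair":   lambda obs: 0.5,
        "biased": lambda obs: 0.8 if obs == "heads" else 0.2,
    }

    # "Learning comes for free": each observation is just another update;
    # no separate learning procedure is ever invoked.
    for obs in ["heads", "heads", "tails", "heads"]:
        belief = bayes_update(belief, likelihood, obs)

    print(belief)  # belief has shifted toward "biased" after mostly heads

The point of the sketch is that there is no training step distinct
from inference: updating the whole belief state on new evidence *is*
the learning. The engineering difficulty I mention above is that for
realistic knowledge bases this exact update is intractable, and must
be approximated.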
Regarding snake oil, I don't think moving in the direction I'm
sketching here is a fast track to Pat's AI Heaven. We've got some
extremely tough theoretical and engineering challenges ahead of us,
and lots of tedious and difficult labor, as we engage in the
all-important enterprise of building the world's information and
knowledge infrastructure. I don't think there are any silver bullets.
But in my own wandering through life trying various things, I've come
to the conclusion that if we don't move toward a theory of
approximate Bayesian agents engaged in goal-directed action and
learning, then we're dooming our planet to a shadow of what it could
otherwise become. The quality of life, and perhaps the very survival,
of future generations depends on what we are doing. (09)
Kathy (010)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (011)