From: Amanda Vizedom <amanda.vizedom@xxxxxxxxx>
Date: Mon, 19 Nov 2012 16:31:31 -0500
Thanks, Doug. A few comments below:
Sorry, I wasn't clear enough there; by "a guaranteed finite-time result," I meant "a result guaranteed to come within an unknown, but finite, amount of time." That's the guarantee that comes with "decidable," and it is of little-to-no value in deployed applications (as you know).
FYI, that link did not work for me; it went to an IEEE Xplore Error Page.
In any case, though, it's a nice example, illustrating a couple of key points:
1) Why *would* you care whether the methods were decidable? It wouldn't contribute one whit to determining where the methods fell in relation to a goal that involves finding acceptable answers within an acceptable amount of time. It might be of purely theoretical interest to do such an analysis, but it would be of no practical value with respect to the design, development, evaluation, tuning, or performance of the system.
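To make the contrast concrete, here is a minimal sketch of the practical criterion: a wall-clock budget, under which you take the best answer any method produces in time. All names (`solve_within_budget`, the method callables, the `timeout` parameter) are hypothetical illustrations, not any particular reasoner's API; the point is that a method's decidability never appears anywhere in the logic.

```python
import time

def solve_within_budget(problem, methods, budget_s=2.0):
    """Try candidate solvers until the wall-clock budget runs out.

    `problem` and the callables in `methods` are placeholders; each
    method takes (problem, timeout=...) and returns an answer or None.
    Whether any single method is decidable is irrelevant here: the
    only question is what we can get within `budget_s` seconds.
    """
    deadline = time.monotonic() + budget_s
    best = None
    for method in methods:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # out of time: return the best answer found so far
        answer = method(problem, timeout=remaining)
        if answer is not None:
            best = answer
    return best
```

A "guaranteed but unbounded" terminating method that needs an hour contributes nothing here, while an incomplete method that answers in milliseconds does.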
2) A multiple-method system is often the best performer. The overwhelming majority of systems I see, read about, or hear about attempt to use a single reasoning method to solve all problems. Whatever that method is, it may perform well on some of the problems within the domain of use; it often performs poorly on many others, or even rules out addressing them at all. It seems to be taken for granted by many that one simply has to accept trade-offs at this level.
Some system designers think of, and perhaps even consider, building a smarter system incorporating multiple reasoning methods. Such a system can incorporate knowledge about the kinds of problems different methods do better or worse on, plus an analysis step in which the relevant features of a reasoning problem are evaluated and the best-suited method is chosen and applied. However, there seems to be a general belief that such a smarter system will inherently be a worse performer, or too complicated to build and maintain. Neither is true, IME, but the belief circulates out there without support.
I wonder to what extent simple lack of familiarity with smarter systems -- that is, systems incorporating a layer in which reasoning task parameters and candidate methods are themselves reasoned about -- is behind much of the DL-centric and single-method-limited design that we see. I'm not talking here about the few who choose a DL-based and/or single-method approach for some application, understanding other options but seeing this as the best fit for the case. I'm talking about the apparent majority who never work outside of those constraints.
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J