Pat, (01)
I agree that it requires a parser: (02)
PH> The restricted method CAN be used, of course, but in practice,
> it is only feasible if one knows that the logical problem one
> is tackling does in fact satisfy the constraints of the restricted
> language; and this may not be easy to compute. (03)
In many important cases, the constraints are very easy to detect.
For example, detecting whether a statement can be converted to
the Horn-clause subset or to the Aristotelian subset is trivial. (04)
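As a minimal sketch of how trivial the Horn-clause test is (my own illustration, not taken from any particular system): in clausal form, a clause is Horn iff it contains at most one positive literal, so the check is a single linear scan. Here literals are encoded as signed integers, a common convention in SAT tools.

```python
# Illustrative sketch: a clause is Horn if it has at most one
# positive (unnegated) literal. Literals are signed integers:
# 3 means atom p3, -3 means its negation.

def is_horn_clause(clause):
    """True if the clause has at most one positive literal."""
    return sum(1 for lit in clause if lit > 0) <= 1

def is_horn_formula(clauses):
    """True if every clause in the CNF formula is Horn."""
    return all(is_horn_clause(c) for c in clauses)

# (p1 & p2) -> p3  is  ~p1 | ~p2 | p3 : one positive literal, Horn
print(is_horn_formula([[-1, -2, 3]]))   # True
# p1 | p2 : two positive literals, not Horn
print(is_horn_formula([[1, 2]]))        # False
```

The whole test is one pass over the input, which is the point: the restriction can be detected far more cheaply than any inference the restricted engine will later perform.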
PH> If there are many possible such restricted sublanguages (as there
> are of FOL: maybe several dozen by now, counting all the variants
> of the DLs) then it becomes infeasible to parse every piece of FOL
> to see whether it might fit into one or more of these syntactic
> restrictions. (05)
I agree that some of those languages mix many issues in ways
that are not easy to disentangle. But that is another argument
against the use of special-purpose languages. Easily detectable
syntactic criteria for efficiently computable subsets of FOL
have been known and used for decades. One example is the
knowledge compiler by Andersen & Co. (06)
As another example, the Cyc system has several dozen different
inference engines under the covers. For any particular problem,
Cyc uses syntactic criteria to determine which one(s) are best
suited to the current task. (07)
Furthermore, the practice of checking the syntax inside a theorem
prover has been developed in great detail in the past few decades.
The best theorem provers for the challenge of Thousands of Problems
for Theorem Provers (TPTP) use syntactic tests to determine which
algorithm(s) to apply to any given problem. (08)
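The kind of dispatch described above can be sketched in a few lines (the engine names and cases here are illustrative, not how Cyc or any TPTP prover actually labels its engines): run cheap syntactic tests on the clausal form, then hand the problem to the engine suited to the fragment it falls into.

```python
# Hypothetical sketch of syntactic dispatch over CNF input.
# Literals are signed integers; the fragment labels are illustrative.

def classify(clauses):
    """Pick an inference-engine label from easy syntactic tests."""
    if all(sum(1 for lit in c if lit > 0) <= 1 for c in clauses):
        return "horn"     # unit propagation decides satisfiability in linear time
    if all(len(c) <= 2 for c in clauses):
        return "2sat"     # implication-graph algorithm, also polynomial
    return "general"      # fall back to a full general-purpose prover

print(classify([[-1, -2, 3]]))       # horn
print(classify([[1, 2], [-1, 2]]))   # 2sat
print(classify([[1, 2, 3]]))         # general
```

Note that the tests are ordered from most to least restrictive, so each problem is routed to the cheapest method that is guaranteed to handle it.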
PH> What is needed is that each piece of logic comes with a declaration
> of some kind which identifies which subset or fragment it falls into;
> and if one includes this declaration as part of the syntax, then the
> result is something more than FOL, it is a kind of FOL-plus-metadata
> hybrid. (09)
I'm not opposed to that idea, and such declarations could be used
with any version of FOL as comments that suggest a recommended
inference engine. Since those comments don't change the semantics,
they don't really make the language a hybrid. But a good knowledge
compiler would rarely need such comments. (010)
The basic point is that domain experts aren't experts in logic and
theorem proving. Even knowledge engineers who have taken a course
in logic aren't experts in theorem proving methods. Getting them
to state the domain knowledge precisely is difficult. But the
knowledge about efficient computational methods is orthogonal to
the domain knowledge. The domain experts shouldn't be forced to
worry about both topics simultaneously. (011)
That is why I recommend we use knowledge compilers that translate
the knowledge from whatever form the domain experts prefer
to forms that are computationally efficient. The technology for
doing such translations has been around for more than 20 years. (012)
It's a major step *backwards* to force domain experts and knowledge
engineers to do "hand compilation" when a good knowledge compiler
can do as well or better. In fact, I believe that such compilers
often produce *significantly better* optimization because they
tailor the representation for each problem. Even a professional
logician wouldn't have the time or the patience to do that. (013)
John (014)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (015)