Mitch, Pat, and Ed, (01)
Your comments raise three kinds of issues: theoretical points
about computational complexity, compiler-based methods for
dealing with them, and methodologies for the humans who use
the KR languages. All three of them are very important, but
the idea that all three should be addressed by the brute-force
method of restricting the KR language is hopelessly misguided. (02)
MH> For a particular problem, if you know ahead of time which
> complexity class it lives in, you can gear the algorithm
> to that. (03)
I agree. But that time should be just before the algorithm
is invoked, not the time when the knowledge engineer or the
domain expert expresses the knowledge in some formalism. (04)
MH> But if you don't know that about the input, a simple
> problem may be expressed (equivalently of course) using
> the larger set of operators and you may still be bound
> by the algorithm needed to deal with the larger set to
> do all sorts of expensive, time consuming things. (05)
PH> Exactly. This is the key point that John seems to miss
> in his posts on this topic. (06)
I have always acknowledged that point. But Mitch and others
have ignored the R & D on knowledge compilers. A good compiler
can easily translate the operators to a form suitable for a
particular algorithm. (07)
Just look at the technology used in the theorem provers for
the Thousands of Problems for Theorem Provers (TPTP). All the
best theorem provers use a collection of special-case
algorithms for specialized subproblems. When they get a new
problem, syntactic checks and transformations determine which
algorithm(s) to invoke. (08)
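The dispatch strategy described above can be sketched in a few lines. This is a toy illustration, not taken from any actual prover: the syntactic categories and handler names are invented for the example.

```python
# Toy sketch: inspect the syntactic features of a formula and route it
# to the cheapest algorithm that can handle it. A real prover would
# parse the formula properly; here a string scan stands in for that.

def classify(formula: str) -> str:
    """Classify a formula by the operators it actually uses."""
    if "forall" not in formula and "exists" not in formula:
        return "propositional"          # SAT-style methods suffice
    if "|" not in formula:              # no disjunction: Horn-like
        return "horn"                   # cheap forward chaining applies
    return "full-fol"                   # fall back to a general prover

def solve(formula: str) -> str:
    """Pick the specialized engine for the formula's syntactic class."""
    dispatch = {
        "propositional": "run_sat_solver",
        "horn": "run_forward_chaining",
        "full-fol": "run_resolution",
    }
    return dispatch[classify(formula)]

print(solve("p & (q | ~r)"))            # → run_sat_solver
print(solve("forall x (p(x) & q(x))"))  # → run_forward_chaining
```

The point is that the choice of algorithm is made per problem, at invocation time, not frozen into the language the knowledge was written in.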
PH> ... many purely empirical studies of large sets of problems
> have shown again and again that there are real 'tradeoff cliffs'
> in practical problem sets which clearly partition the problem
> space into easy vs. near-worst-case-hard (the latter usually
> in a 'boundary zone' between two 'easy' areas.) (09)
I very strongly agree. That is why it is absurd to force
a domain expert with no knowledge of the complexity issues
to pick a KR language that is bound to a specific algorithm. (010)
PH> Citing stone-age results like Wang's theorem prover isn't
> relevant. That engine could not have even input the problems
> that get tackled these days: there wasn't enough RAM on the
> planet at that time. (011)
Of course the modern technology is vastly superior to Wang's.
I cited it to show how trivial it is to prove all the theorems
in FOL that Whitehead and Russell stated -- despite the fact
that they had no knowledge of computational complexity. And
the overwhelming majority of statements written by people
who are not mathematicians or logicians are structurally and
computationally far simpler than the ones that W & R wrote. (012)
MH> Some algorithms are 'adaptive' in that they are faster for
> simpler inputs (less expressive; using fewer of the fancy
> operators), but some algorithms are not, and no matter what
> the input (for a given size) it runs in the same time (e.g.
> tableau algorithms vs automata based ones for description
> logics). There is no guarantee that a set of problems
> has an adaptive algorithm. (013)
I agree. But to quote Don Knuth (and others), "Premature
optimization is the root of all evil." An ontology must be
usable for an openended number of different kinds of problems.
The idea of designing a KR language around an algorithm forces
a knowledge engineer (or worse, a domain expert) to choose a
solution before anyone knows what problem(s) it will be used for. (014)
MH> Any problem can be solved as fast as it ..well... can be.
> But you don't know ahead of time which problem it is. (015)
Again, I agree. That is why the idea of choosing a KR language
just because it is tied to a specialized algorithm is not only
counterproductive, but absurd. (016)
MH> Increasing expressiveness increases the complexity of solving
> the whole class. (017)
There is zero evidence to support that claim. Just look at the
practical results from theorem provers, from systems such as Cyc,
and from experience with knowledge compilers. (018)
EB> ... the OWL/DL language forces a discipline on the knowledge
> engineer that the more powerful and less complex KIF/CLIF
> language doesn't. The knowledge engineer is forced to cast
> the problem at hand into the carefully honed elements of the
> OWL/DL language. (019)
Such discipline can be extremely valuable. But it should be
supported by the methodology for knowledge acquisition and
knowledge engineering. For example, look at the UML diagrams
and tools, which use a variety of different notations that
eventually map to programming languages that are Turing
complete -- i.e., it is undecidable whether or not they will
get hung up in a loop. (020)
Nobody has ever suggested that we must restrict the expressive
power of those languages. Instead, we use tools, methodologies,
patterns, and compilers for software design and development. (021)
EB> In Mitch's terms, this means that the knowledge engineer is
> forced to determine the category of problem and express an OWL
> ontology that can deal with that category of problem. (022)
That is absolutely the worst conceivable approach. Any piece
of knowledge can be used in an openended number of ways, and
the knowledge engineer has no idea how it might be used in any
particular application. A statement that might take exponential
time in a theorem prover might take a fraction of a millisecond
to evaluate as an SQL query or constraint in a relational DB. (023)
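To make that contrast concrete, here is a minimal sketch of the same idea: a universally quantified constraint ("every employee has a manager"), which a general theorem prover would treat as an arbitrary FOL statement, evaluated directly as an SQL query over a finite relation. The table and column names are invented for the example.

```python
# The axiom  forall x (employee(x) -> exists y manages(y, x))
# becomes a query for counterexamples over a finite table;
# the constraint holds iff no rows are returned.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, manager TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("alice", "carol"), ("bob", "carol"), ("carol", None)])

violations = conn.execute(
    "SELECT name FROM employee WHERE manager IS NULL").fetchall()
print(violations)   # → [('carol',)] : carol has no manager
```

Over a bounded domain, the "exists" is a table scan, and the query runs in time linear in the size of the relation.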
EB> In my experience, people who are constrained to use a language
> like OWL/DL to express their ontology can, in many cases, find
> a way to express all or nearly all of their intent. (024)
Yes, I have seen the kinds of things that OWL programmers write.
They go through endless contortions to express statements that
are trivial to express in CL. And for specific problems those
CL statements can be compiled to vastly more efficient code than
the excruciatingly contorted, hopelessly unreadable OWL versions. (025)
EB> Humans have an easier time following a straightforward
> presentation of the problem, even if the axioms become more
> complex. The simpler but less constrained language is easier
> for humans to follow, but harder for the reasoner to work with. (026)
False. As an example, look at the medical "English", which I
translated to controlled English and then to CLIF and CGIF.
The original statement is in slide 15 of the following talk: (027)
http://www.jfsowa.com/talks/cl_sowa.pdf (028)
My translation to controlled English is in slide 21, the CLIF
is in slide 22, and the CGIF is in slide 23. The person who
translated it to OWL couldn't get it on one slide. Instead,
he scrolled through page after page after page of hopelessly
unreadable -- I don't have a good word for it, and my usual
comment is "garbage". (029)
EB> But if you take the CLIF ontology that the KIF knowledge
> engineer prefers to build, it will take substantially longer.
> Because "thinking in KIF" will often result in axioms that
> force the reasoner "to do all sorts of expensive, time
> consuming things." (030)
There is zero evidence for that claim. The Cyc language is
comparable in expressive power to CL (or the slightly more
expressive language IKRIS), and a good knowledge compiler
can translate a typical knowledge representation in CycL to
highly efficient code. For an example, see the paper by Bill
Andersen & Co,
code. For an example, see the paper by Bill Andersen & Co,
which I discuss (and cite) in Section 6 of the following: (031)
http://www.jfsowa.com/pubs/fflogic.pdf (032)
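As a toy sketch of the kind of translation a knowledge compiler performs, consider a declarative transitivity axiom, stated once, compiled into an iterative fixpoint routine specialized for querying a finite relation. The rule format and function names below are purely illustrative and not drawn from CycL or the cited paper.

```python
# Toy "compilation" of the axiom  r(x,y) & r(y,z) -> r(x,z)
# into a naive fixpoint computation over a finite set of pairs.

def compile_transitive(pairs):
    """Turn 'r is transitive' into a closed lookup table for r."""
    closure = set(pairs)
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

ancestor = compile_transitive({("a", "b"), ("b", "c")})
print(("a", "c") in ancestor)    # → True
```

The expressive axiom costs nothing at query time: after compilation, each lookup is a constant-time set membership test.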
John (033)