On Oct 21, 2008, at 10:35 AM, Mitch Harris wrote:
On Tue, Oct 21, 2008 at 10:11 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
But the most misleading point of all is the final sentence:
Lastly, expressiveness increases the computational costs of reasoning.
In the Fads and Fallacies paper, I make the point that computational
complexity is *totally dependent* on the nature of the problem.
Any problem stated in a restricted language can be solved just as
fast when stated in a more expressive language.
I think there is a mismatch between how people are using implicit quantifiers here.
Indeed, and your analysis below is right on the money, with just one small caveat: you seem to identify being more expressive with having more operators. But there are other ways it can happen. Sometimes, a language may be made more expressive simply by removing syntactic restrictions. For example, the only syntactic difference between FOL and (some versions of) HOL is that the latter allows quantifiers to range over relation symbols, and provides some extra inference rules to establish the existence of relations. No extra operators at all. (Some versions of HOL do provide extra syntax, such as lambda-expressions.)
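A one-line illustration (standard textbook material, not something from John's paper): the two sentences below use exactly the same logical operators; the only difference is what the universal quantifier is allowed to bind, and that alone is what puts the second sentence outside FOL.

    \forall x\, \bigl( P(x) \rightarrow Q(x) \bigr)              % FOL: the quantifier binds an individual variable
    \forall R\, \bigl( R(a) \rightarrow \exists y\, R(y) \bigr)  % HOL: same operators, but the quantifier now binds the relation symbol R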
The moral is that establishing more/less expressiveness isn't always as simple as counting operators; it can be a subtle and delicate matter. And, same point made differently, subtle changes in a KR language can have large effects on expressivity (and hence on inferential complexity).
True: any particular problem either is in a particular complexity class or it isn't, and its (computational) complexity does not change.
But given a set of language operators with specific meanings, the set of -all- problems expressible with a larger selection of operators is at least as difficult (takes as long or longer to solve) as the set expressible with a smaller one.
For a particular problem, if you know ahead of time which complexity class it lives in, you can gear the algorithm to that. But if you don't know that about the input, a simple problem may be expressed (equivalently, of course) using the larger set of operators, and you may still be bound by the algorithm -needed- to deal with the larger set, which does all sorts of expensive, time-consuming things.
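To make that concrete with a toy example of my own (propositional, and not anything either of us actually posted): a dispatcher can check, purely syntactically, whether a CNF input happens to fall in an easy fragment (here, Horn clauses, decidable by forward chaining) and only fall back to a general, worst-case-exponential procedure when it doesn't. Skip that check, and every input pays the price of the general procedure.

    # Sketch: syntactic triage for propositional CNF.  Clauses are lists of
    # nonzero ints; a positive int is an atom, a negative int its negation.
    from itertools import product

    def is_horn(clauses):
        """Horn iff every clause has at most one positive literal."""
        return all(sum(lit > 0 for lit in clause) <= 1 for clause in clauses)

    def horn_sat(clauses):
        """Forward chaining: satisfiable iff no all-negative clause has its body derived."""
        true_atoms, changed = set(), True
        while changed:
            changed = False
            for clause in clauses:
                heads = [l for l in clause if l > 0]
                body = [-l for l in clause if l < 0]
                if all(a in true_atoms for a in body):   # body atoms all derived
                    if not heads:
                        return False                     # goal clause violated
                    if heads[0] not in true_atoms:
                        true_atoms.add(heads[0])
                        changed = True
        return True

    def brute_force_sat(clauses):
        """Placeholder general procedure: try every assignment (exponential)."""
        atoms = sorted({abs(l) for c in clauses for l in c})
        for bits in product([False, True], repeat=len(atoms)):
            val = dict(zip(atoms, bits))
            if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    def sat(clauses):
        """Triage by syntax first, then dispatch to the cheapest adequate algorithm."""
        return horn_sat(clauses) if is_horn(clauses) else brute_force_sat(clauses)

    # {p, p->q, q->r, ~r} is Horn and unsatisfiable; the exponential path is never touched.
    print(sat([[1], [-1, 2], [-2, 3], [-3]]))   # False
    print(sat([[1, 2], [-1, -2]]))              # non-Horn, falls back: True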
Exactly. This is the key point that John seems to miss in his posts on this topic.
It should perhaps be stated, for the benefit of forum readers, that this topic of inference complexity and the complexity/expressivity tradeoff is not at all new or speculative: it is a mature science now, with a fully worked-out underlying theory, and is regularly applied in industrial-scale applications, undergraduate textbooks, etc. True, it focuses on worst-case analysis, and 'average-case' or 'typical-case' results would be a lot more use in practice but are virtually unobtainable: still, many purely empirical studies of large sets of problems have shown again and again that there are real 'trade-off cliffs' in practical problem sets which clearly partition the problem space into easy vs. near-worst-case-hard (the latter usually in a 'boundary zone' between two 'easy' areas).

Citing stone-age results like Wang's theorem-prover isn't relevant: that engine could not even have input the problems that get tackled these days; there wasn't enough RAM on the planet at that time. Some algorithms are 'adaptive' in that they are faster for simpler inputs (less expressive; using fewer of the fancy operators), but some are not, and take the same time no matter what the input (of a given size) is (e.g. tableau algorithms vs. automata-based ones for description logics). There is no guarantee that a set of problems has an adaptive algorithm.
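Those 'cliffs' are easy to see for yourself. Below is a minimal, self-contained sketch of my own (nothing to do with any real DL reasoner; the instance sizes, ratios, and branching rule are all arbitrary choices for illustration): it runs a naive DPLL procedure on random 3-SAT instances at several clause/variable ratios and counts branching calls. The count should spike near the familiar boundary zone and fall off on either side.

    # A toy experiment, not a benchmark: count DPLL branching calls on
    # random 3-SAT at several clause/variable ratios.
    import random

    def unit_propagate(clauses, assignment):
        """Repeatedly simplify; assign forced (unit) literals; None on conflict."""
        clauses = [list(c) for c in clauses]
        changed = True
        while changed:
            changed = False
            simplified = []
            for clause in clauses:
                clause = [l for l in clause if -l not in assignment]  # drop falsified literals
                if not clause:
                    return None                          # empty clause: conflict
                if any(l in assignment for l in clause):
                    continue                             # clause already satisfied
                if len(clause) == 1:
                    assignment.add(clause[0])            # forced literal
                    changed = True
                simplified.append(clause)
            clauses = simplified
        return clauses

    def dpll(clauses, assignment, stats):
        stats["calls"] += 1
        clauses = unit_propagate(clauses, assignment)
        if clauses is None:
            return False                                 # conflict
        if not clauses:
            return True                                  # all clauses satisfied
        var = abs(clauses[0][0])                         # naive branching choice
        return any(dpll(clauses, assignment | {lit}, stats) for lit in (var, -var))

    def random_3sat(n_vars, n_clauses, rng):
        return [[rng.choice([v, -v]) for v in rng.sample(range(1, n_vars + 1), 3)]
                for _ in range(n_clauses)]

    if __name__ == "__main__":
        rng, n, runs = random.Random(0), 20, 15
        for ratio in (2.0, 3.0, 4.3, 6.0, 8.0):          # clauses per variable
            total = 0
            for _ in range(runs):
                stats = {"calls": 0}
                dpll(random_3sat(n, int(ratio * n), rng), set(), stats)
                total += stats["calls"]
            print(f"ratio {ratio:>3}: avg DPLL calls {total / runs:.0f}")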
Any problem -can- be solved as fast as it ..well... can be. But you don't know ahead of time which problem it is.
Of course, just syntax checking is an extremely effective triage method. But there's no guarantee (a simple statement can be expressed badly, with lots of annoying high-complexity operators). Increasing expressiveness increases the complexity of solving the whole class.
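A two-line illustration of that caveat (my own toy example): the two formulas below are logically equivalent, but only the first is syntactically a Horn clause; the second, written with nested negation and conjunction, would fail a purely syntactic check and get bounced to the general, expensive procedure even though the underlying problem is just as easy.

    \neg p \lor q            % clausal Horn form: at most one positive literal
    \neg (p \land \neg q)    % logically equivalent, but not in clausal form, so a
                             % syntactic triage check will not recognise it as easy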
So, yes, for a specific problem, stated in a particular manner, adding operators to the language it is expressed in of course doesn't change the complexity of the original problem. But, equally of course, the extra operators make more problems expressible, and those may take longer; if you don't realize it, you may spend more time than necessary on the simpler problem (one that doesn't actually use the new operators).
Does that clarify how I think people are talking about 'expressiveness'?
Yup.
Pat
------------------------------------------------------------
IHMC                                     (850)434 8903 or (650)494 3973
40 South Alcaniz St.                     (850)202 4416   office
Pensacola                                (850)202 4440   fax
FL 32502                                 (850)291 0667   mobile