It's unfortunate that a hole was found in the Deolalikar claim ...
... I consulted my friend, Mike, for his advice on the matter, and got
the following (which just came in this morning).
Regards. =ppy
---------- Forwarded message ----------
From: Michael Sipser <sipser@xxxxxxxxxxxx>
Date: Thu, 12 Aug 2010 09:29:43 -0400
Subject: Fwd: Re: Proof announcement: P is not equal to NP
To: peter.yim@xxxxxxxx
Immerman is one of the main experts on the connection
between logic and complexity, a key ingredient of the claim,
which unfortunately appears to be used incorrectly.
MS
---------- Forwarded message ----------
From: Neil Immerman <neil.immerman@xxxxxxxxx>
To: "Deolalikar, Vinay" <vinay.deolalikar@xxxxxx>
Date: Thu, 12 Aug 2010 07:57:49 -0400
Subject: Re: Proof announcement: P is not equal to NP
Dear Vinay Deolalikar,
Thank you very much for sharing your paper with me. I find your
approach and your ideas fascinating, but I am afraid that there is
currently a serious hole in your paper, as I now describe. For page
numbers, I refer to the 102-page, 12 pt. version of your paper.
Your main idea for the lower bound is to show that FO(LFP) is not
powerful enough to express SAT, by using Hanf-Gaifman locality to
limit the connectivity of the graphs you consider at successive stages
of the fixed point computation. As you point out, if a total ordering
is given as part of the input structure, then the Gaifman graph has
diameter one, so locality is meaningless. Thus you restrict to a
successor relation, and as you point out, it is still true that FO(LFP)
is equal to P in the presence of a successor relation. However, you
make two assertions that are not true.
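To make the locality point concrete, the diameters involved can be computed directly. The script below is purely illustrative (it is not from the letter or the papers): it builds the Gaifman graph of a structure over elements 0..n-1 and compares a successor relation, where the diameter grows linearly and locality has bite, against a total order, where every pair of elements is adjacent and locality says nothing.

```python
from collections import deque

def gaifman_diameter(n, edges):
    """Diameter of the Gaifman graph on elements 0..n-1, where `edges`
    lists the pairs of elements that co-occur in some relation tuple."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    diam = 0
    for src in range(n):
        # Breadth-first search from src to find eccentricities.
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        diam = max(diam, max(dist.values()))
    return diam

n = 16
successor = [(i, i + 1) for i in range(n - 1)]          # R_E-style chain
total_order = [(i, j) for i in range(n) for j in range(i + 1, n)]

print(gaifman_diameter(n, successor))    # 15: neighborhoods grow slowly
print(gaifman_diameter(n, total_order))  # 1: every element is adjacent
```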
You use the relation "R_E" as your successor relation. On page 67
you write, "The reason for the relation R_E that creates the chain is
that on such structures, polynomial time queries are captured by
FO(LFP) [EF06, §11.2]. This is a technicality. Recall that an order
on the structure enables the LFP computation (or the Turing machine
that runs this computation) to represent tuples in a lexicographical
ordering. In our problem, k-SAT, it plays no further role.
Specifically, the assignments to the variables that are computed by
the LFP have nothing to do with their order. They depend only on the
relation R_C, which encodes the clauses, and the relation R_P, which
holds the initial partial assignment that we are going to ask the LFP
to extend. In other words, each stage of the LFP is order invariant.
It is known that the class of order invariant queries is also Gaifman
local [GS00]."
Unfortunately, it is not true that each stage of the fixed point
must be order invariant. In particular, consider the definition of
ordering from successor, easily defined by a least fixed point and
thus the reason that successor suffices to capture P. The ordering is
defined by taking the transitive closure of the successor relation.
At each stage, the successor distance is doubled, so in log n stages
we have the whole ordering. Note that all these stages contain the
order dependent information that is part of the original successor
relation. It is true that the final aim of your computation is the
SAT property which is order independent. But that definitely does not
imply that each stage of the induction is order independent.
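The doubling induction Immerman describes can be simulated directly. The sketch below (illustrative code, not from the letter) iterates the LFP rule R(x, y) <- S(x, y) or exists z (R(x, z) and R(z, y)): each stage doubles the successor distance covered, so the full ordering on n elements appears after about log2(n) stages, and every intermediate stage visibly encodes order-dependent information.

```python
def order_from_successor(n):
    """Iterate R(x, y) <- S(x, y) or exists z (R(x, z) and R(z, y))
    to a fixed point, starting from the successor relation S on 0..n-1.
    Returns the resulting ordering and the number of changing stages."""
    S = {(i, i + 1) for i in range(n - 1)}
    R = set(S)
    stages = 0
    while True:
        # One LFP stage: keep R and add every composition R o R,
        # doubling the maximum successor distance that R spans.
        new = R | {(x, y) for (x, z) in R for (w, y) in R if z == w}
        if new == R:
            return R, stages
        R = new
        stages += 1

order, stages = order_from_successor(16)
print(len(order))  # 120 = 16*15/2 pairs: the complete ordering x < y
print(stages)      # 4 = log2(16): the distance doubles at each stage
```

Note that each intermediate R literally records which elements precede which, so no stage of this induction is order invariant, which is the counterexample to the quoted claim.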
The other problem is that you restrict your attention to monadic
fixed points. You write,
"Remark 7.4. The relation being constructed is monadic, and so it
does not introduce new edges into the Gaifman graph at each stage of
the LFP computation. When we compute a k-ary LFP, we can encode it
into a monadic LFP over a polynomially (n^k) larger product space, as
is done in the canonical structure, for instance, but with the linear
order replaced by a weaker successor-type relation. Therefore, we can
always choose to deal with monadic LFP. This is really a restatement
of the transitivity principle for inductive definitions, which says
that if one can write an inductive definition in terms of other
inductively defined relations over a structure, then one can write it
directly in terms of the original relations that existed in the
structure [Mos74, p. 16]."
It is not the case that you can freely assume that your fixed
points are monadic. If you actually work on the canonical structure,
then you require the multiple-arity relations that can take us from a
tuple to its individual elements and back again. These would again
make the diameter of the whole graph bounded. However, in your proof
you do not include these relations. Thus, your restriction to
successor alone, and to monadic fixed points, is fundamental.
In this domain, with only monadic fixed points and successor, FO(LFP)
does not express all of P!
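The point about the missing projection relations can also be checked mechanically. The sketch below (an illustration under the letter's assumptions, not code from either paper) builds a structure with one vertex per pair (i, j) plus projection relations tying each pair-vertex to its two coordinates; the Gaifman diameter then stays a constant (4) no matter how large n is, so locality arguments lose their force exactly as described.

```python
from collections import deque

def pair_structure_diameter(n):
    """Gaifman diameter of a structure with base elements 0..n-1,
    a vertex for each pair (i, j), and projection relations linking
    every pair-vertex to its two coordinates."""
    adj = {('e', i): set() for i in range(n)}
    for i in range(n):
        for j in range(n):
            p = ('p', i, j)                 # vertex for the tuple (i, j)
            adj[p] = {('e', i), ('e', j)}   # projections to coordinates
            adj[('e', i)].add(p)
            adj[('e', j)].add(p)
    diam = 0
    for src in adj:
        # BFS from every vertex to find the largest eccentricity.
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        diam = max(diam, max(dist.values()))
    return diam

print(pair_structure_diameter(4))   # 4
print(pair_structure_diameter(12))  # still 4: bounded, independent of n
```

Any two base elements are within distance 2 via their pair-vertex, so once the projections are present every neighborhood of constant radius is the whole structure.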
Currently, as I see it, the strongest tool we have in descriptive
complexity, rather than the locality theorems, is the Håstad
Switching Lemma. Beame and Håstad used this to show that Sipser's
hierarchy theorem extends all the way to FO[log n/log log n]. As you
know, FO(LFP) is the same thing as FO[n^{O(1)}], the properties
expressible by polynomial iteration of a fixed block of restricted
quantifiers.
We know that NC^1 is contained in FO[log n/log log n] and this is
tight. Furthermore, L and NL are contained in AC^1 = FO[log n], and
it remains open whether NC^1 is equal to NP. A state-of-the-art
paper that I very much recommend is Ben Rossman's result that
expressing the existence of a k-clique requires k/4 variables in FO,
and even in FO[c log n/log log n] for appropriate c. (In Rossman's
result, as in all results that use the switching lemma, the lower
bound is true in the presence not just of order, but of any numeric
relations, including addition and multiplication.)
My hope is that work such as yours can get us beyond the
log n/log log n boundary.
I look forward to meeting you and talking with you in person
about your work sometime in the future.
Good luck and best wishes,
Neil
On Fri, Aug 6, 2010 at 5:28 PM, Deolalikar, Vinay
<vinay.deolalikar@xxxxxx> wrote:
Dear Fellow Researchers,
I am pleased to announce a proof that P is not equal to NP, which is
attached in 10pt and 12pt fonts.
The proof required the piecing together of principles from multiple areas
within mathematics. The major effort in constructing this proof was uncovering
a chain of conceptual links between various fields and viewing them through
a common lens. Second to this were the technical hurdles faced
at each stage in the proof.
This work builds upon fundamental contributions many esteemed researchers
have made to their fields. In the presentation of this paper, it was my
intention to provide the reader with an understanding of the global framework
for this proof. Technical and computational details within chapters were
minimized as much as possible.
This work was pursued independently of my duties as an HP Labs researcher, and
without the knowledge of others. I made several unsuccessful attempts these
past two years trying other combinations of ideas before I began this work.
Comments and suggestions for improvements to the paper are highly welcomed.
Sincerely,
Vinay Deolalikar
On 8/10/10, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
> Last week, Vinay Deolalikar presented a claimed proof that P is not
> equal to NP. I'd like to discuss it because (a) it's an important
> problem, (b) the paper has some interesting ideas that are relevant
> to the way we design and use ontologies and the languages for
> expressing them, and (c) it suggests ways of organizing ontologies
> and using languages to express them.
>
> Deolalikar's paper:
>
> http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp12pt.pdf
>
> Two blogs that discuss this paper and whether the proof is correct:
>
> http://rjlipton.wordpress.com/
>
> http://scottaaronson.com/blog/?p=456
>
> I share some of the concerns mentioned in these blogs. But whether
> or not there are flaws in Deolalikar's paper, he makes points that
> are valuable in themselves, independently of their use in the proof.
>
> In the discussion at the end of this note, I quote some excerpts
> from the paper and add some comments about their implications.
>
> As that discussion shows, the complexity class for many kinds
> of problems can be determined by checking the problem statement.
> If a problem is in an easy class, solve it quickly. But if it's
> in the NP-hard class, warn the user and don't attempt to solve it,
> unless the user insists.
>
> For the design of languages and systems, it shows that there
> is a choice between different design options:
>
> 1. Restrict a language to make it impossible to state problems
> that are inefficient (NP-hard or worse).
>
> 2. Allow the people who state a problem to use any language
> they find convenient. Before trying to solve a problem,
> the system can check its complexity class and warn the
> user(s) before trying to solve problems that might take
> excessive amounts of resources.
>
> Option #1 can make it impossible to state solvable cases of
> hard problems. Option #2 is more flexible, because it makes
> the determination at run time.
>
> John Sowa
> __________________________________________________________________
>
> Following are quotations from Deolalikar's paper with my comments
> about their implications. The page numbers are the ones in the paper
> itself, which are 4 less than the numbers by the Adobe counter.
>
> p. iii (or 3 by the Adobe reader count) is the abstract:
>
> "Our work shows that every polynomial time algorithm must fail to
> produce solutions to large enough problem instances of k-SAT in the
> d1RSB phase."
>
> k-SAT is the problem of satisfying a collection of Boolean clauses,
> each of which contains k variables. Any problem in NP can be reduced
> in polynomial time to some k-SAT problem, so showing that k-SAT has
> no polynomial time algorithm would separate P from NP.
>
> The d1RSB phase is the "dynamical one-step replica symmetry breaking"
> phase. Outside this phase, the problems are easy (P time). But in
> this phase, the problems are hard (exponential time).
>
> "This shows that polynomial time algorithms are not capable of
> solving NP-complete problems in their hard phases, and demonstrates
> the separation of P from NP."
>
> The heart of the proof shows that some k-SAT problems lie in the
> d1RSB phase. But as a byproduct, it also shows that large numbers
> of problems are easy (P time) because they are outside that phase.
>
> p. 51 (or 55 by the Adobe reader count)
>
> "While a given CSP [constraint satisfaction problem] ... might
> be NP-complete, many instances of the CSP might be quite easy to
> solve, even using fairly simple algorithms. Furthermore, such
> 'easy' instances lay in certain well defined regimes of the CSP,
> while 'harder' instances lay in clearly separated regimes."
>
> This is good news for programmers and knowledge engineers,
> because the method of proof provides important criteria:
>
> 1. For distinguishing easy problems from hard problems, and
>
> 2. For stating problems in such a way that they are guaranteed
> to stay outside the hard phase.
>
> p. 53:
>
> "physicists have conjectured that there is a second threshold that
> divides the SAT phase into two: an 'easy' SAT phase, and a 'hard'
> SAT phase. In both phases, there is a solution with high probability,
> but while in the easy phase one giant connected cluster of solutions
> contains almost all the solutions, in the hard phase this giant
> cluster shatters into exponentially many communities that are far
> apart from each other in terms of least Hamming distance between
> solutions that lie in distinct communities."
>
> What Deolalikar did was to prove this conjecture. Even more
> importantly, he showed how to recognize most of the easy cases.
> There may be some easy cases near the boundary, but most of
> them are clearly distinguishable from the hard cases.
>
> p. 68:
>
> "The central concern of this work has been to understand what are
> the irreducible interactions between the variables in a system --
> namely, those that cannot be expressed in terms of interactions
> between smaller sets of variables with conditional independencies
> between them."
>
> p. 81:
>
> "This explains why polynomial time algorithms fail when interactions
> between variables are O(n) at a time, without the possibility of
> factoring into smaller pieces through conditional independencies."
>
> In other words, hard problems have long-distance interactions among
> most of the variables. Here O(n) means that a typical variable
> interacts with a constant fraction of the other n variables,
> including many that are in remote regions of the problem statement.
>
> "This also puts on rigorous ground the empirical observation that
> even NP-complete problems are easy in large regimes, and become
> hard only when the densities of constraints increase above a
> certain threshold."
>
> By "empirical observation", he means what most programmers and
> knowledge engineers know: large numbers of such problems are
> easy to solve, and they only become hard when the density of
> long-distance interconnections becomes high.
>
> "This threshold is precisely the value where O(n) interactions
> first appear in almost all randomly constructed instances."
>
> That is the criterion for distinguishing easy problems from
> hard problems.
>
> p. 82:
>
> "The framework we have constructed allows us to analyze the set
> of polynomial time algorithms simultaneously, since they can all
> be captured by some LFP [least-fixed-point method], instead of
> dealing with each individual algorithm separately. It makes
> precise the notion that polynomial time algorithms can take
> into account only interactions between variables that grow
> as poly(log n), not as O(n)."
>
> In other words, the easy problems are ones for which the number
> of interactions among variables grows very slowly, as poly(log n),
> as the number n of variables increases.
>
> For hard problems, each variable is connected to a significant
> fraction O(n) of all other variables. Such problems are also
> characterized by a high density of variables in each Boolean
> assertion: i.e., each assertion contains O(n) variables.
>
> In terms of graph isomorphism, there is a clear criterion for
> easy problems: Easy graphs are those for which an upper bound on
> the number of links to any node is determined by some polynomial
> in (log n). For typical conceptual graphs and RDF graphs on the
> WWW, this kind of upper bound is almost always true. That means
> graph matching can be done in polynomial time.
>
> p. 86:
>
> "polynomial time algorithms succeed by successively 'breaking up'
> the problem into smaller subproblems that are joined to each
> other through conditional independence."
>
> This is what every programmer and knowledge engineer knows:
> modularize your programs or ontologies.
>
> "Polynomial time algorithms resolve the variables in CSPs in a
> certain order, and with a certain structure. This structure is
> important in their study."
>
> It also suggests algorithms for determining whether a problem
> is easy or hard. In other words, you can perform a simple test
> on the problem description before wasting time on NP-hard cases.
>
> _________________________________________________________________
> Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> Shared Files: http://ontolog.cim3.net/file/
> Community Wiki: http://ontolog.cim3.net/wiki/
> To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
> To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
>
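The easy/hard density picture that Sowa's excerpts describe can be observed even at toy scale. The brute-force experiment below is purely illustrative (it is not from Deolalikar's paper and proves nothing): it generates random 3-SAT instances at several clause-to-variable ratios and reports how many are satisfiable, showing the drop as the density passes the conjectured threshold near 4.27.

```python
import itertools
import random

def random_3sat(num_vars, num_clauses, rng):
    """Random 3-SAT: each clause has 3 distinct variables, each
    negated with probability 1/2. A literal is (var, wanted_value)."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(num_vars), 3)]
            for _ in range(num_clauses)]

def satisfiable(num_vars, clauses):
    # Brute force over all assignments -- viable only for small num_vars.
    return any(
        all(any(bits[v] == val for v, val in clause) for clause in clauses)
        for bits in itertools.product([False, True], repeat=num_vars))

rng = random.Random(0)
num_vars, trials = 12, 40
counts = {}
for alpha in (2.0, 4.3, 6.0):       # clauses per variable
    m = int(alpha * num_vars)
    counts[alpha] = sum(
        satisfiable(num_vars, random_3sat(num_vars, m, rng))
        for _ in range(trials))
    print(f"alpha = {alpha}: {counts[alpha]}/{trials} satisfiable")
```

At low density nearly every instance is satisfiable; well above the threshold nearly none is. (At 12 variables the transition is smeared out; the sharp behavior emerges only as n grows.)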