Posted
by
kdawson
on Sunday August 08, 2010 @06:30PM
from the sufficiently-complex dept.

morsch writes "Researcher Vinay Deolalikar from HP Labs claims proof that P != NP. The 100-page paper has apparently not been peer-reviewed yet, so feel free to dig in and find some flaws. However, the attempt seems to be quite genuine, and Deolalikar has published papers in the same field in the past. So this may be the real thing. Given that $1M from the Millennium Prize is involved, it will certainly get enough scrutiny. Greg Baker broke the story on his blog, including the email Deolalikar sent around."

Okay... sometimes when it's really really really obvious that something's a joke, it's okay to play along and make mock-serious follow-ups and even *gasp* non-serious moderations. This works on the basis that it's obvious that both the original poster and the modder are joking.

I had an unshakable image of some web design "IT guy" who had one forgotten lecture on this subject back in trade school nodding sagely as he thinks he understands what everyone's talking about and reaching for the Insightful moderation.

I didn't have the benefit of the reassuring Score:5, Funny that you had.

or it may have been modded by a mathematician who figured out that modding comments "funny" will likely result in negative karma.

Of course, since the OP was an AC, karma isn't a factor. But myself, I've just gotten into the habit of never using +1 Funny, since it could result in negative karma for the poster. Also, something that's funny is made even funnier when it gets modded as Insightful or Informative.

I don't. I often wonder why, because I actually am a troll. I turn to stone if I spend longer in the sunlight than it takes to snatch a passing goat. I always stay late at the data center in order to avoid venturing out into the Big Blue Room. I show up late for the same reason.

My bosses have gotten used to this; they tolerate it because they're afraid of confronting me. My technical skills are too valuable to lose, and in any case I'm the only one who has all the passwords. Even if that weren't true, they subconsciously know that any confrontation with me would end with me sucking the marrow from their dead white bones (yum!), or at least filing a grievance with personnel over a violation of the company diversity policy.

Oh, by the way, it's "modded", not "moded", moron. Sorry about that outburst, I can't help it. It's a cultural thing (look it up in the employee handbook, dimwit).

If "non-deterministic polynomial time" is an actual "algorithmic complexity class" then so is my "anomalous quantum field" which generally intersects with a "temporal phase disturbance."

So apparently if something sounds technical and you don't know what it is, you assume it is nonsense? Non-deterministic polynomial time http://en.wikipedia.org/wiki/NP_(complexity) [wikipedia.org] is a concept in theoretical computer science. The idea is that a set of problems is in NP if, whenever there is a "yes" answer to an instance, there is a short proof of the answer (where short means that the length of the proof, and the time to check its validity, are bounded by a polynomial function of the length of the input). For example, the problem "is a given integer n composite?" is in NP because if the answer is yes, one can prove this quickly by simply giving a non-trivial divisor. The other relevant class is P, the set of problems which can be answered within time bounded by a polynomial function of the length of the input. One of the great unsolved mathematical problems of our time is whether P equals NP. Roughly speaking, the question asks whether there exist problems which are hard to solve but whose solutions can be checked quickly.
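To make the compositeness example concrete, here's a toy Python sketch (the function names and structure are mine, purely for illustration): checking a certificate is a single modulo operation, while finding one requires a search.

```python
def verify_composite(n, divisor):
    """Polynomial-time check: does this certificate prove n is composite?"""
    return 1 < divisor < n and n % divisor == 0

def find_divisor(n):
    """Naive certificate search: exponential in the *bit length* of n."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

# Verifying is one modulo operation; finding tried ~sqrt(n) candidates.
assert verify_composite(91, 7)       # 91 = 7 * 13
assert find_divisor(91) == 7
assert find_divisor(97) is None      # 97 is prime: no certificate exists
```

(Compositeness itself is actually known to be decidable in P these days, via AKS primality testing, but the verify-vs-search asymmetry is exactly the P vs NP question.)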

However, it seems you could read that page and still not have a single usable clue.

I have no idea why people say maths is a strong point of Wikipedia. There is no topic whatever on which I have received enlightenment by reading a Wikipedia page. Indeed, even when it's a result I just want to look up, I can be sure that Wikipedia's going to go off on some tangent about a minor topic some guy's just learnt about in class and felt the need to mention, while the description of the result itself misses out some fundamental assumptions or uses ambiguous language. And, when I'm confronted w

Some university really should hire them, because if they can prove P!=NP just to cover up a sex scandal, imagine what they could do if they didn't waste time writing press releases.

Oh man, if you think the politics and backbiting are bad in typical academia, wait until you see an entire computer science department arguing over who gets to have illicit sex in order to set off the proof-generating team.

What would the impacts of this be for cryptography, from a theoretical point of view at least?

I was under the impression that a lot of cryptography was based upon the hope that P != NP, and while in practice this wouldn't change much about how anyone acts, it might have an impact on how people think about the old cryptography vs cryptanalysis race.

Off the top of my head, if P = NP, then a lot of cryptography like RSA and elliptic curve cryptography become, in principle, mathematically solvable. Much of their security is premised on the idea that their equations are prohibitively difficult to brute force because they're NP.

If this proof holds up, then RSA and ECC become provably secure in a way they weren't before.

No. Even if there are found to be integer factorization algorithms theoretically in P, there's still no practical way to crack it. This is all asymptotic complexity (what happens as your key size goes to infinity), which might be important when we start using tebibit symmetric keys, but in the real world the constant coefficients (which are thrown away in asymptotic analysis) would be really high.

Off the top of my head, if P = NP, then a lot of cryptography like RSA and elliptic curve cryptography become, in principle, mathematically solvable. Much of their security is premised on the idea that their equations are prohibitively difficult to brute force because they're NP.

If this proof holds up, then RSA and ECC become provably secure in a way they weren't before.

The security of RSA is based on the idea that it is very difficult to factor large integers. However, this has not been shown to be an NP-hard problem and so really doesn't have anything to do with this.

The point is that if P did equal NP, then there wouldn't be any reason to think further about whether breaking RSA is an NP-hard problem: the constants might be huge, but there would clearly exist a poly-time algorithm for solving it. If P != NP, as this result claims, then there may not be one, which is what cryptographers hope.

Currently the best factoring algorithm is GNFS, which can factor in exp( ( n * log^2(n) )^(1/3) ). However, that's still exponential because it's greater than exp(n^(1/3)).

The paper claims a deterministic time lower bound for the hardest problems in NP ∩ coNP at exp( log^k(n) ). I'm sure this paper will spur research into algorithms with expected runtime of exp( log^k(n) ), and that should quickly give us a faster factoring algorithm.

On a personal note: I developed a SAT algorithm in 2005, and my rudimentary

Really, P = NP would have far-reaching implications for security, essentially proving that that method of security will never be secure. If P != NP, then that means you can have problems which take longer than polynomial time to calculate but only polynomial time to verify. So if the paper is true, it doesn't really change a whole lot, except that now we know there isn't going to be a trivial solution some day. I guess it's good for cryptography.

Really, P = NP would have far-reaching implications for security, essentially proving that that method of security will never be secure. If P != NP, then that means you can have problems which take longer than polynomial time to calculate but only polynomial time to verify.

I think it's important to realize that even if these are "unresolved problems" of mathematics, it's not as if both answers are equally likely, like the flip of a coin. For example, the Riemann hypothesis states that all non-trivial zeros of the Riemann zeta function have real part 1/2. It's true for every non-trivial zero we've ever found, but it's not proven true for all the infinitely many zeros. However, outside of mathematics, a proof that it's true will be met with "yeah, that's what we thought" and a proof that it's false with "OMG, what's going on here?". Another example is the twin prime conjecture: are there infinitely many pairs like (3,5), (5,7), (11,13), (17,19) and so on? There's very good reason to believe it's true, but nobody has been able to formally prove it. There are a lot of problems today that appear to support the idea that P != NP, and that's what most people believe the answer is. However, stringing together a formal proof that it's so is much harder. If this paper turns out to be true, surely it's a great leap for mathematics, but it's the answer that doesn't change the world.

Don't think this is what it means. Look at the FFT (an n log n optimization of a quadratic problem). P = NP, as I understand it, means that ALL NP problems have a corresponding P solution; you just have to think hard enough to find it. Proving that there are problems in NP that have no P solution just suggests certain cryptographic algorithms MIGHT rest on such problems, but it doesn't prove it (unless they are among the problems specifically proven hard in this or some other paper). And even if this paper covers RSA, ECC, etc., that doesn't mean someone even more clever 30 years from now won't find a flaw or special case where this isn't true and thus find a P cracking tool.

If you show P=NP by constructing an algorithm which solves an NP-complete problem in polynomial time, you immediately have a polynomial-time algorithm for *any* problem in NP. That's the definition of NP-complete: a language is NP-complete if it is in NP and every other language in NP can be reduced to it in polynomial time.

Even if you provide a non-constructive "existence" proof, it turns out you can construct an (incredibly awful!) polynomial-time algorithm by, essentially, a brute-force simulation of Turing machines -- so

You don't need to simulate the Turing machine. You just need to encode it as a boolean formula. That's part of what Cook's theorem shows: it shows how to encode a non-deterministic Turing machine computation as a boolean formula with at most a polynomial increase in size. Now that the problem is in NP-complete form, just follow the reductions until you get to the NP-complete problem that has a P algorithm. In this way you can solve any NP problem in P time as long as you can solve one NP-complete problem in P time.
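To make the reduction idea concrete, here's a toy Python sketch (my own, not Cook's Turing-machine construction): encode graph 3-coloring as a CNF formula, so that any SAT solver, polynomial-time or otherwise, answers the original question too.

```python
from itertools import product

def color_to_cnf(n_vertices, edges):
    """Variable (v, c) is true iff vertex v gets color c. Returns CNF clauses."""
    var = lambda v, c: v * 3 + c + 1          # one positive int per (vertex, color)
    clauses = []
    for v in range(n_vertices):               # every vertex gets some color
        clauses.append([var(v, c) for c in range(3)])
    for (u, v) in edges:                      # adjacent vertices never share a color
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

def brute_force_sat(clauses, n_vars):
    """Exponential-time stand-in for the hypothetical polynomial SAT solver."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# A triangle is 3-colorable; K4 (complete graph on 4 vertices) is not.
tri = color_to_cnf(3, [(0, 1), (1, 2), (0, 2)])
k4 = color_to_cnf(4, [(u, v) for u in range(4) for v in range(u + 1, 4)])
assert brute_force_sat(tri, 9) is not None
assert brute_force_sat(k4, 12) is None
```

If someone handed us a polynomial-time replacement for brute_force_sat, 3-coloring (and, by chaining such reductions, every problem in NP) would instantly be in P.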

I'm being thick here I guess, but why do we know the required simulation of Turing machines to be in P (given the assumption P=NP), and not EXPTIME or at least PSPACE?

The algorithm is: on iteration N, simulate one (more) step on Turing machines 1 through N. Stop when a machine outputs an answer with a formal proof that answer is correct.

If P = NP, then the M'th Turing machine does this in P(n) steps. It takes P(n)+M iterations before that happens. Each iteration takes at most P(n)+M steps, so the run time is O((P(n)+M)^2) = O(P(n)^2), which is polynomial! (Note that M is an "exponentially large" constant, so this approach wouldn't result in a truly usable algorithm.)
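For the curious, here's a runnable cartoon of that dovetailing argument in Python (entirely my own toy: real Levin-style universal search enumerates all Turing machines, while this hard-codes three "machines" as generators that yield nothing until they maybe emit an answer).

```python
def dovetail(machines, verify):
    """Iteration N: run one more step of machines 1..N; stop at a verified answer."""
    started, pending, n_steps = [], list(machines), 0
    while True:
        if pending:
            started.append(pending.pop(0)())   # start machine N on iteration N
        for m in started:
            n_steps += 1
            out = next(m)
            if out is not None and verify(out):
                return out, n_steps

# Toy task: find a divisor of 8051 (= 83 * 97).
target = 8051
def slow():                                    # never answers
    while True:
        yield None
def fast():                                    # trial division, eventually answers
    for d in range(2, target):
        yield None
        if target % d == 0:
            yield d
def noisy():                                   # answers, but wrongly every time
    while True:
        yield 7

answer, steps = dovetail([slow, fast, noisy], lambda d: target % d == 0)
assert answer == 83                            # verified answers alone terminate the search
```

The "noisy" machine is why the verifier matters: dovetailing only works because correct answers can be checked quickly, which is exactly the NP property.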

Practically, not much. It means we can breathe easy that a lot of crypto out there is now provably secure. It's been long considered likely that P != NP, because a lot of NP-complete problems are very old and nobody has gotten very far in solving them, and the extra focus in the last 40 years in breaking public key crypto hasn't produced much more progress on the problem. It was just the nagging issue of nailing down a proof.

A proof that P = NP would have resulted in a lot of cryptographers committing Seppuku. The contrary proof doesn't have many huge implications, though.

A proof that P = NP would have resulted in a lot of cryptographers committing Seppuku.

If I were a cryptographer, I'd be positively itching for someone to break RSA. A 30-year-old universally used secure cryptosystem means no job. :)

In a world where no crypto is really secure, everyone hires their own cryptographer to build a custom cryptosystem. Let's see Bletchley Park mathematicians try to cleverly crack gigabytes of junk encrypted data when the keys wouldn't fit in all the notebooks required to fill their

Large integer factorization has not been shown to be NP-complete (it is doubtful it is); it is known to be in both NP and co-NP, and it could be in P (but that is doubtful); we just don't know. RSA public key crypto depends on the difficulty of factoring very large numbers. Currently there is no known efficient mechanism for determining the factors of a very large number. If P != NP we don't get a whole lot more than we have at the moment because we don't know exactly what complexity class integer fac

Even if P=NP, polynomial solutions requiring time n^99 consume enough time to be practically infeasible. Thus, even P=NP would not harm cryptography much if it did not provide very efficient solutions for every NP-hard problem. On the other hand, favorite cryptographic hard problems, such as factoring, are not known to be NP-hard and may well turn out to be solvable in polynomial time even if P!=NP. Therefore, a proof that P!=NP won't have any interesting implications for cryptography unless it contains new ideas that can help in other ways. Neither will a proof that P=NP, unless it includes ideas for fast solutions of interesting problems, such as fast factoring or fast discrete logarithm.
A proof of P!=NP may help to solve another interesting problem in cryptography: one-way functions. Right now many results are built on the assumption that such functions exist, but nobody has found a single provable one-way function (easy to compute, infeasible to reverse). A bunch of functions are believed to have this property, but not a single one has been proved difficult to reverse. I would be interested to see if this proof produces such an animal: a provably one-way function.
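The n^99 point is easy to check numerically; here's a quick back-of-the-envelope in Python (the cutoff it finds is just what the arithmetic gives, nothing deeper):

```python
def crossover():
    """Smallest n at which 2^n finally overtakes the 'polynomial' n^99."""
    n = 2
    while n ** 99 > 2 ** n:
        n += 1
    return n

n = crossover()
# The exponential only wins around n ~ 1000, and by then both step counts
# (~10^296) are hopelessly beyond any physical computer.
assert 900 < n < 1100
```

So "polynomial time" is a statement about asymptotics, not a promise of practicality; it just happens that natural algorithms almost never have exponents anywhere near 99.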

"Deolalikar's result is that "P ≠ NP ∩ co-NP for Infinite Time Turing Machines". This is a special context - infinite time Turing machines are not the same thing as standard Turing machines, but are a kind of hypercomputer. Dcoetzee 09:07, 8 August 2010 (UTC)"

I guess I'm just way out of my depth here, but it seems to me that a proof of P != NP for infinite time Turing machines would still mean about the same thing for complexity theory, and therefore would apply to true Turing machines anyway.

"Deolalikar's result is that "P ≠ NP ∩ co-NP for Infinite Time Turing Machines". This is a special context - infinite time Turing machines are not the same thing as standard Turing machines, but are a kind of hypercomputer. Dcoetzee 09:07, 8 August 2010 (UTC)"

Sorry, I wasn't clear. I meant what's the next big problem in computer science.

Assuming this proof holds up, the next set of questions are how much the complexity hierarchy breaks down. There are a host of complexity classes between P and NP. Other important classes include PP and BPP http://en.wikipedia.org/wiki/BPP [wikipedia.org], http://en.wikipedia.org/wiki/PP_(complexity) [wikipedia.org]. BPP is a subset of NP and is tentatively believed to be equal to P. Another important class is BQP http://en.wikipedia.org/wiki/BQP [wikipedia.org] which is the class of problems which can be solved quickly by a quantum computer. If this proof goes through it may generalize to showing that some of these other classes are distinct (proving that BQP is not equal to P would be almost as big a deal as proving that P !=NP).

I think this is the first time a serious researcher has published a paper through email. Makes me wonder if he is actually publishing it or just asking for peer review from his colleagues.

Or maybe he is trying to best Perelman in insanity. After all, even Perelman put the paper in arXiv.

Anyway, about the paper itself; I am a physicist, and he does say correct things about the Ising model and phase transitions. Unfortunately, it is only a small part of his proof that I can grasp. So I think he is dead serious.

Well, he also treats replica symmetry breaking in relation to the k-SAT problem (which in the Hamiltonian formulation is really just Ising on a random graph), so I would say he knows his shit... If there is a mistake, surely it is not a trivial one.

Circulating it among colleagues for review is exactly what he was doing. See the author's personal web page: http://www.hpl.hp.com/personal/Vinay_Deolalikar/

He says there that "The preliminary version made it to the web without my knowledge. Please note that the final version of the paper is under preparation, and is to be posted here shortly (in about a week). Stay tuned."

At the risk of pointing out the obvious, that's what you get when you use LaTeX [latex-project.org]. You focus on the content, and LaTeX takes care of the typesetting, incorporating years (perhaps hundreds of them) of research on how to make text aesthetically pleasing, easy to read, and suitable for binding, so that you don't have to do that research yourself. Plus, LaTeX is the format that many journals prefer submissions to be in.

At the time of writing, there are two comments on Greg Baker's blog, congratulating Vinay on making it onto Slashdot. Jeez... he's potentially solved one of the most important open problems in computing (if not the most important), which could land him a million dollars in prize money... but yeah... well done on making it into that most esteemed of online publications, Slashdot.

Well, Scott Aaronson has written:
"If Vinay Deolalikar is awarded the $1,000,000 Clay Millennium Prize for his proof of P≠NP, then I, Scott Aaronson, will personally supplement his prize by the amount of $200,000.

"I’m dead serious—and I can afford it about as well as you’d think I can."
See his blog [scottaaronson.com].

As a researcher in Finite Model Theory (FMT), I find this paper very interesting because it uses ideas from that area, i.e. the LFP(FO) bits. Reading through the proof synopsis and scanning the FMT sections, there are several potential pitfalls:

1. The logic LFP(FO) only captures P on ordered structures; that is, structures that have a built-in total ordering relation.

2. Any sentence that describes a problem in LFP(FO) must be order-invariant; that is, it must work for any possible ordering of the vertices in the underlying graph/structure.

It is already known that LFP(FO) on unordered structures is a proper subset of LFP(FO) on ordered structures, so if the ordering and order-invariance requirements for LFP(FO) = P are not dealt with in the proof, then all the author has done is prove that LFP(FO) on unordered structures is a proper subset of NP, which is already known.

Another potential problem is in arguing that all first-order properties are local (Hanf's theorem) in the presence of ordering, as every vertex is effectively connected to every other vertex (Immerman's proof of LFP(FO) = P requires a total ordering of the underlying graph/structure), and hence every vertex is in the radius-1 neighbourhood of every other vertex.

The crucial step in the proof, appears to be the argument that no LFP(FO) formula can extend a partial solution to k-SAT to a full solution to k-SAT. This is where I'd check the logical steps of the proof, and also make sure that the ordered nature of LFP(FO) structures is correctly considered.

I look forward to seeing this published in a peer-reviewed mathematics journal: I'd recommend to the author the "Journal of the ACM" for this (as it's one of the best journals in the field).

At 100 pages it's going to be a while before I can say I have RTFA, but I'll get back with anything relevant in a few days after I have digested it. I suggest any posts claiming otherwise are a bit hasty.

You can read section one (the introduction) and get a high-level walkthrough of what he's doing. Just be prepared to have the prerequisites in the following areas to make it through it:

In order to apply this analysis to the space of solutions of random constraint satisfaction problems, we utilize and expand upon ideas from several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics.

On the one hand it seems this will take a general expert in the math-related sciences to verify, but on the other hand you would think that -- like with E8 and the Lie groups -- this sort of proof would require a rather large unified theory to be able to reduce the P=NP? problem down to a provable situation. I'm no expert, and it's been three or four years since I've even been in academia, but even the subsections of this paper are noteworthy if they are true. It could be we're looking at something that jumps as far ahead as the famous papers of Turing and Shannon.

When I took a graduate-level computer science course on randomized algorithms, our professor put up an 8-10 page proof for a randomized algorithm to solve graph coloring problems. Near the end of the proof I raised my hand and noted that my professor had made a mistake transcribing a factor, as he had left out one of the paths in a Markov chain. After checking the proof, my professor realized that the thesis he was using as an example was in fact incorrect. Since that moment, I haven't trusted complex mathematical proofs over 15 pages that haven't been around for a large number of years. I suppose I should come up with some formula for a trustworthiness factor based on length and the duration of time it has held up to scrutiny. But my point being: very, very few people are qualified to write or debunk this paper, yet everybody should be trying to.


Aren't the best theories supposed to be elegantly simple? This looks a mess. Wait... that's just how my head feels after reading the abstract.

IMO, the P vs NP question is fundamentally trickier than other famous theorems/conjectures (like FLT), because on some level it is a statement about mathematics itself. The assumption that P != NP on some level implies that finding mathematical proofs is difficult. This means that if P != NP, it may be even more difficult to prove that P != NP. It has been shown, assuming one-way functions exist (which would imply P != NP easily enough), that a certain type of proof called a "natural proof" can never be used to separate P from NP.

On the flip side, showing P = NP could be easier, but most people believe this is false, since it would mean that there is essentially one "master algorithm" that can solve any problem in NP efficiently.

The current state of computational complexity theory is that we are nowhere close to resolving P != NP, that is, unless this proof actually checks out. Honestly, we can't even settle "easier" questions like P vs PSPACE. The implications of a correct proof would be absolutely mind-blowing.

Ah, but there are more of us now. If it took 1 billion people 3650 years to solve, it should take 6 billion people a mere 60 years. The fact it has taken 70 already clearly shows the added cycles this calculation requires.

Humanity follows basically the same logic as massively parallel supercomputers.

No, there really are exact solutions (not approximations) for many NP-complete problems that are fast on most inputs. For example, current SAT algorithms are fast on most instances. There are, however, pathological cases on which the algorithms are slow. The fact that a problem is NP-complete just means that, if P != NP, there is no algorithm that is guaranteed to be polynomial-time for all inputs. It is still quite possible to devise algorithms that are fast for almost all inputs but slow on a few pathological ones.
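As a sketch of what such algorithms look like, here is a minimal DPLL-style SAT solver in Python (a toy of mine; modern solvers like MiniSat add clause learning, watched literals, and branching heuristics on top of this skeleton). Unit propagation alone dispatches many easy instances; only pathological ones force deep backtracking.

```python
def dpll(clauses, assignment=frozenset()):
    """Clauses are lists of nonzero ints; a negative int is a negated variable.
    Returns a satisfying set of literals, or None if unsatisfiable."""
    new = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                              # clause already satisfied
        lits = [lit for lit in clause if -lit not in assignment]
        if not lits:
            return None                           # clause falsified: backtrack
        new.append(lits)
    if not new:
        return set(assignment)                    # every clause satisfied
    unit = next((c[0] for c in new if len(c) == 1), None)
    if unit is not None:                          # unit propagation: forced choice
        return dpll(new, assignment | {unit})
    lit = new[0][0]                               # branch: try lit, then its negation
    return dpll(new, assignment | {lit}) or dpll(new, assignment | {-lit})

assert dpll([[1, 2], [-1, 2], [-2]]) is None      # unsatisfiable
assert 2 in dpll([[1, 2], [-1, 2]])               # satisfiable: 2 must be true
```

On structured real-world instances the unit-propagation chains do almost all the work; the exponential blowup only bites on carefully crafted (or unlucky random) formulas.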

Of course there will be an error in this proof. Many errors, actually. Most of them irrelevant. Maybe one of them is not. You know what? It will be caught in peer review, exactly as has been happening for the last few centuries.

One of the reasons the problem attracts so much attention is the consequences of the answer. A proof that P = NP could have stunning practical consequences, if the proof leads to efficient methods for solving some of the important problems in NP. It is also possible that a proof would not lead directly to efficient methods, perhaps if the proof is non-constructive, or the size of the bounding polynomial is too big to be efficient in practice. The consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.

Cryptography, for example, relies on certain problems being difficult. A constructive and efficient solution to the NP-complete problem 3-SAT would break many existing cryptosystems such as Public-key cryptography, used for economic transactions over the internet, and Triple DES, used for transactions between banks. These would need to be modified or replaced.

On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming, and the travelling salesman problem, to name two of the most famous examples. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction are also NP-complete;[15] if these problems were efficiently solvable it could spur considerable advances in biology.

But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. According to Stephen Cook,[4] "...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems."

Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated - for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle.

A proof that showed that P ≠ NP, while lacking the practical computational benefits of a proof that P = NP, would also represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.[16]

NP-class problems can be translated into Minesweeper puzzles. Your example is not solvable. The question is whether a solution can be found for arbitrarily large puzzles by a polynomial-time algorithm, or whether the amount of time needed grows essentially exponentially. This guy is saying that it grows essentially exponentially.

No, I don't think so. Proving P=NP just means that some polynomial-time algorithm exists to solve any problem that can be verified by a polynomial-time algorithm. It doesn't necessarily tell you how to find such an algorithm.

I think you are misunderstanding what the P=NP question means... and I don't blame you. The question itself is very "meta", but it is not self-referencing as you believe.

P is a set of questions (YES/NO questions) that can be answered easily. NP is another set of questions, that may or may not be easy to answer, but when you see the answer (YES/NO) and a proof for the answer, you can easily check if the proof is correct. But the question "P=NP?" doesn't belong to either of the sets. If P=NP, then all problems that are easily "verifiable" are also easily "solvable", and if P!=NP, then there are problems easily verifiable but hard to solve. But, as the question "P=NP?" doesn't belong to P or NP, there is no paradox.

That's an oversimplification, of course. For instance, "easy" in the previous paragraph actually means "solvable in polynomial time by a deterministic Turing machine", and NP means "verifiable in polynomial time", or equivalently "solvable in polynomial time by a non-deterministic Turing machine"; hence the widespread confusion about "NP" meaning "Non-P" instead of "P on a Non-deterministic machine". The Wikipedia article is really good, but unfortunately much too formal to understand without previous knowledge. I hope I helped a bit.
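One concrete way to see the "easily verifiable vs. easily solvable" split is SUBSET-SUM, a classic NP-complete YES/NO question. This little Python sketch (mine, purely illustrative; it ignores multiset subtleties) shows the asymmetry:

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Easy direction: checking a claimed subset takes linear time."""
    return all(x in nums for x in certificate) and sum(certificate) == target

def solve(nums, target):
    """Hard direction: no known way around trying up to ~2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
assert verify(nums, 9, [4, 5])        # a given answer is checked instantly
assert sum(solve(nums, 9)) == 9       # finding one took exhaustive search
```

If P=NP, some clever polynomial-time replacement for solve would exist; the paper under discussion claims, in effect, that none does.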

The Millennium Prizes mostly have different rules for proofs and counterexamples.
That's not the case for P=NP, though; either proving P=NP or P!=NP is sufficient to get the prize (given that the proof is confirmed valid, etc.).