New proof unlocks answer to the P versus NP problem—maybe

A new proof, published to the Web less than one week ago, purports to finally …

A paper that leaked onto the Web late last week claims to have solved one of the great modern problems in mathematics and computer science. Vinay Deolalikar, a principal research scientist at HP Labs, has published a first draft of what he claims is a proof that P != NP. Unlike someone playing Minesweeper in their parents' basement, Dr. Deolalikar has some serious credentials, having made inroads into related computational complexity problems. That history has made people take notice of this grand announcement.

In computational complexity theory, P and NP are two classes of problems. P is the class of problems that a deterministic Turing machine can solve in polynomial time. In more useful terms, this means that any P problem can be solved in fewer than c*n^k steps, where c and k are constants independent of the input size n. In other words, the amount of time or space needed to solve a P problem is some polynomial function of the size of the input.

NP, on the other hand, is the class of problems that can be solved in polynomial time on a nondeterministic Turing machine (which we don't know how to build). On a deterministic Turing machine, only the solution to an NP problem can be verified in polynomial time; actually solving one generally involves super-polynomial runtimes, e.g. c*k^n steps. Again, c and k are constants independent of the input size n, meaning that the number of steps required grows exponentially with the input size.
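To make those growth rates concrete, here is a minimal sketch in Python (my illustration, not the paper's; the constants c = 1 and k = 3 are arbitrary choices) comparing the two step counts:

```python
# Steps for a polynomial-time algorithm (c*n^k) versus a brute-force
# exponential one (c*k^n), with c = 1 and k = 3 picked arbitrarily.
for n in [10, 20, 30, 40]:
    poly = n ** 3   # polynomial growth: stays manageable
    expo = 3 ** n   # exponential growth: quickly astronomical
    print(f"n={n:3d}  n^3={poly:>10,}  3^n={expo:>26,}")
```

Even at n = 40 the polynomial count is still in the tens of thousands, while the exponential one has passed 10^19.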

The question addressed in the proof is the exact relationship between the classes P and NP—are they one and the same (implying P=NP), or do they represent logically different classes of problems (P != NP)?

The proof's conclusion, that P != NP, isn't a terribly shocking result; many people have assumed it for years. Yet, to date, no one has produced a proof of it that withstands peer review. The proof won't shatter the field of computational complexity, or our understanding of what is computable by a machine. If it's correct, it merely demonstrates what many have thought to be the case for some time.

(This sort of assumption is not unique to this problem. In my days as a student, I saw certain mathematical formulations that assumed the Riemann hypothesis to be true; like P != NP, that's another unsolved problem deemed worthy of being one of the Clay Mathematics Institute's Millennium Prize problems.)

Deolalikar's proof draws upon numerous fields of mathematics and science ranging from formal logic to statistical mechanics. I will attempt to provide a brief outline of the methodology employed here, but understand that I am by no means qualified to offer up any useful insight into the correctness of what he did (I only hope I am capable of reporting it properly).

The proof uses, as its core example and idea, the boolean satisfiability problem (k-SAT). This type of problem involves a boolean expression consisting only of ANDs, ORs, NOTs, variables, and parentheses, and asks whether the variables can take on some set of TRUE/FALSE values that makes the entire expression evaluate to TRUE.
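As a concrete illustration, here is a toy encoding of such an instance in Python (the clause-list representation and the helper name evaluates_true are mine, not the paper's):

```python
# Toy k-SAT encoding: a formula is a list of clauses, each clause a list of
# literals, where the literal i means "variable i" and -i means "NOT variable i".
# This formula is (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3).
formula = [[1, 2], [-1, 3], [-2, -3]]

def evaluates_true(formula, assignment):
    """Return True if the TRUE/FALSE assignment (dict: variable -> bool)
    makes at least one literal true in every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

print(evaluates_true(formula, {1: True, 2: False, 3: True}))  # True
```

The satisfiability question is whether any such assignment exists for a given formula.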

The proof attempts to encode the k-SAT problem as queries on graphical structures. It uses a geometric interpretation of FO(LFP) logic that "captures all polynomial time computable queries" on the k-SAT instances. If the value of k is high enough, the phase space of the solution rapidly becomes huge, and an exponential number of solutions will be generated by the LFP construction. This apparently shows that the "LFP cannot express the satisfiability query [...] and separates P from NP."

Even though the proof was released to Deolalikar's fellow mathematicians and computer scientists (and inadvertently to the public at large) just last Friday, people well versed in the field have already found some problematic areas. While I doubt there has been sufficient time for anyone to give the full proof the rigorous testing it would need to withstand peer review, a few potential weaknesses have been exposed.

Prof. Dick Lipton, a specialist in the theory of computation, has kept his blog updated with a good analysis of the current issues that have been raised. They seem to revolve around an assumption made by Dr. Deolalikar that, if P=NP, it would necessarily imply a polynomial time algorithm. This, Lipton says, is contradicted by certain known properties of random k-SATs.

Preprints of the paper are available through Dr. Deolalikar's HP webpage.

They seem to revolve around an assumption made by Dr. Deolalikar that, if P=NP, it would necessarily imply a polynomial time algorithm.

If there is not a poly-time algorithm to solve an NP problem, then P!=NP. So I don't understand why this assumption would not be valid.

That's a bit of an editorial change that I overlooked. It is not that the assumption implies a P algorithm for an NP problem; it implies the existence of a specific type of P algorithm that would lead to a contradiction with known properties of the k-SAT problem for large k. As Prof. Lipton puts it:

Quote:

The objections are related in that they all question the author's deduction that P = NP implies the existence of a polynomial-time algorithm of a certain specific kind, which he then argues leads to a contradiction involving rigorously-known statistical properties of random k-SAT instances, for large enough k.

In this post he details the mathematics behind the major concerns. It's worth a look if you are so inclined.

It sounds sort of along the lines of someone doing a proof of the sky being blue. Everyone assumes it's a correct statement, the proof may or may not be right, but nobody is saying what difference it will make if it's true or false.

It sounds sort of along the lines of someone doing a proof of the sky being blue. Everyone assumes it's a correct statement, the proof may or may not be right, but nobody is saying what difference it will make if it's true or false.

It is widely assumed that P!=NP, but it is not known definitively. If someone did come up with a proof that P=NP, it would be one of the most meaningful, groundbreaking mathematical discoveries since the Greeks. The practical applications would revolutionize nearly every aspect of computation.

The converse proof, P!=NP, while not as earth shattering, is important because at least then all the mathematicians can stop having wet dreams about P=NP.

It sounds sort of along the lines of someone doing a proof of the sky being blue. Everyone assumes it's a correct statement, the proof may or may not be right, but nobody is saying what difference it will make if it's true or false.

It is widely assumed that P!=NP, but it is not known definitively. If someone did come up with a proof that P=NP, it would be one of the most meaningful, groundbreaking mathematical discoveries since the Greeks. The practical applications would revolutionize nearly every aspect of computation.

The converse proof, P!=NP, while not as earth shattering, is important because at least then all the mathematicians can stop having wet dreams about P=NP.

I was going to rag on the author for downplaying the significance of this proof if it is valid; however, I think you've gone over the top. Yes, it would be super-duper cool to mathematicians and computer scientists, but there isn't much real-world implication of it. Computer science has worked on this problem for 50 years, and while they can't solve it, P!=NP is overwhelmingly the assumption among researchers. I suppose students won't have to spend all of algorithms courses dicking around with proving NP-completeness, but otherwise there's not much to it, especially in the short-term.

Computer science is a wonderful field, but the results of research like this are years away from any possible practical application. Some people like to link to this XKCD (http://xkcd.com/435/) comic to make themselves feel superior (and because it's funny), but the innovations in computer science/mathematics (computer science is just a subfield in mathematics) have to move back through all those fields to have impact on non-CS/Math people.

I was going to rag on the author for downplaying the significance of this proof if it is valid; however, I think you've gone over the top. Yes, it would be super-duper cool to mathematicians and computer scientists, but there isn't much real-world implication of it. Computer science has worked on this problem for 50 years, and while they can't solve it, P!=NP is overwhelmingly the assumption among researchers. I suppose students won't have to spend all of algorithms courses dicking around with proving NP-completeness, but otherwise there's not much to it, especially in the short-term.

Computer science is a wonderful field, but the results of research like this are years away from any possible practical application. Some people like to link to this XKCD (http://xkcd.com/435/) comic to make themselves feel superior (and because it's funny), but the innovations in computer science/mathematics (computer science is just a subfield in mathematics) have to move back through all those fields to have impact on non-CS/Math people.

I disagree. As I said, proving that P!=NP isn't itself particularly exciting or groundbreaking. What would be exciting is the opposite: a proof of P=NP. If P=NP were to be proven, that would have a nearly incalculable impact on the state of modern computing and, as a result, our lives. Instant Nobel Prize, no question.

EDIT - For a lot of theoretical math, I would agree with the part about it having to percolate up. However, with this particular problem, there is an overwhelming number of direct applications to core computer science problems. Solving those would impact things very quickly.

What is wrong with people? This is why we can't have nice things, because there's always that idiot out there willing and able to deface, destroy, or otherwise cheapen everything that is created.

I weep for the species.

It seems that preventing defacement of a mathematical wiki would be simple - allow only registered users to edit, and require users to perform advanced calculus to register.

It is locked to registered users, but is still defaced. Not sure what sort of registration process is needed to prevent this, if it wasn't just flat out compromised. It seems to have been set up rather quickly, if the placeholder graphics are any indication, so it might not have been properly secured.

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean. The only distinction I observe is between a deterministic Turing machine and a non-deterministic Turing machine, and neither of those terms has any meaning to me. Wolfram wasn't much help either.

In other places I have seen the difference between P and NP described as the difference between SOLVING a problem and merely CHECKING any particular answer. So now I'm totally baffled.

It sounds sort of along the lines of someone doing a proof of the sky being blue. Everyone assumes it's a correct statement, the proof may or may not be right, but nobody is saying what difference it will make if it's true or false.

Well, there are some areas that depend on the fact that a problem cannot be solved easily (read: poly-time). Security (encryption) is a big one.

That said, the implications would potentially have been much more useful had he proved the opposite.

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean. The only distinction I observe is between a deterministic Turing machine and a non-deterministic Turing machine, and neither of those terms has any meaning to me. Wolfram wasn't much help either.

In other places I have seen the difference between P and NP described as the difference between SOLVING a problem and merely CHECKING any particular answer. So now I'm totally baffled.

http://en.wikipedia.org/wiki/Turing_machine

Briefly (and probably imprecisely) - it's more or less how a typical computer as we know it today operates (programs stored in memory, executed by a processor, generating output).

NP problems are a set of problems where:

1) A solution can be verified in P time.
2) A solution can only be reached in P time with accurate "guesses" (i.e. ones that cannot be determined by the current state) which prune the potential solution space the program would need to try from an exponential set.

(My definition is not precise, but hopefully it's good enough to give you an idea.)

Ex. One NP problem is: given a logical expression with k boolean variables, is there an assignment of these variables (ex. a = true, b = true, c = false, ...) that returns true for the expression?

Given a potential solution, it is computationally easy to verify that it solves the problem (condition 1). However, the potential number of assignments you must try to find a solution is exponential with regard to k, and the search is not considered computationally easy if you would need to iterate over most of the potential solutions to reach a conclusion (condition 2).
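A minimal sketch of those two conditions in Python (the clause-list encoding and the helper names satisfies and brute_force are mine; a literal i means "variable i", -i means "NOT variable i"):

```python
from itertools import product

# Condition 1: verifying a candidate assignment is cheap -- time roughly
# proportional to the size of the formula.
def satisfies(formula, assignment):
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

# Condition 2: without accurate "guesses", a deterministic search may have
# to walk through all 2^k candidate assignments.
def brute_force(formula, k):
    for values in product([True, False], repeat=k):
        assignment = dict(enumerate(values, start=1))
        if satisfies(formula, assignment):
            return assignment
    return None

print(brute_force([[1, 2], [-1, 3], [-2, -3]], k=3))
# {1: True, 2: False, 3: True}
```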

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean.

This is DEEP into nerd territory. Seriously. Maybe beyond rocket science. With ~2 years of calculus and computer science and a couple more of physical science, I only have a loose grasp of what the problem even is. Heck, I just barely understand d_jedi's explanation above, so let's see if I can go a slightly different route.

Hopefully you've figured out that a deterministic Turing machine is a sort of idealized computer with infinite storage (and/or time) -- a thought model -- that can do every type of computation you can imagine. However, it works one agonizing step at a time, so it's not necessarily efficient at all. Specialized devices are often far better in real life despite being finite. The value in a TM comes from the fact that, hypothetically, it can solve anything (as long as it's, um, mathematically solvable).

So, knowing that, try reading this and hopefully the tip of the P vs NP iceberg will come into view.

I was going to rag on the author for downplaying the significance of this proof if it is valid; however, I think you've gone over the top. Yes, it would be super-duper cool to mathematicians and computer scientists, but there isn't much real-world implication of it. Computer science has worked on this problem for 50 years, and while they can't solve it, P!=NP is overwhelmingly the assumption among researchers. I suppose students won't have to spend all of algorithms courses dicking around with proving NP-completeness, but otherwise there's not much to it, especially in the short-term.

Computer science is a wonderful field, but the results of research like this are years away from any possible practical application. Some people like to link to this XKCD (http://xkcd.com/435/) comic to make themselves feel superior (and because it's funny), but the innovations in computer science/mathematics (computer science is just a subfield in mathematics) have to move back through all those fields to have impact on non-CS/Math people.

I disagree. As I said, proving that P!=NP isn't itself particularly exciting or groundbreaking. What would be exciting is the opposite: a proof of P=NP. If P=NP were to be proven, that would have a nearly incalculable impact on the state of modern computing and, as a result, our lives. Instant Nobel Prize, no question.

EDIT - For a lot of theoretical math, I would agree with the part about it having to percolate up. However, with this particular problem, there is an overwhelming number of direct applications to core computer science problems. Solving those would impact things very quickly.

It is true that P=NP would have a massive impact on computational theory, but what it would mean for actual algorithms and cryptography is regularly overstated: if you could find a reduction of SAT or whatever to P, it might be in n^2^2^2^2^2^2^2^...^2 or something, rendering the practical implications rather insignificant. If the reduction takes too long, you know that there is a way, but it takes millions of years to go down it. The next question would obviously be: what is a universal lower bound for the reduction? Of course, if the reduction goes through in n^17, we are talking business (but that seems highly improbable; someone would have found an algorithm already).
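To put a number on the first scenario (my arithmetic, not the poster's): with a tower of just four 2s in the exponent, n^(2^2^2^2) = n^65536, so even a trivial input of size n = 2 would need 2^65536, roughly 10^19728, steps. For comparison, the observable universe holds only about 10^80 atoms, so such an algorithm would be "polynomial" in name only.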

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean. The only distinction I observe is between a deterministic Turing machine and a non-deterministic Turing machine, and neither of those terms has any meaning to me. Wolfram wasn't much help either.

The following is entirely meant for the layperson. It's about as accurate as stating that quantum mechanics involves spinning subparticles; it's an analogy at best.

P means "solvable in polynomial time." That basically means that as the data set you are processing grows, the amount of time it takes grows at a fairly slow and manageable rate.

NP means "not solvable in polynomial time." For these problems, the time to process the data set grows very rapidly as the size of the data set grows. For these problems, a single extra piece of data may very well take things from "computes in a few minutes" to "computes in a few years." Add a few more pieces of data and you can pass through "takes longer than you have to live" all the way to "won't be done processing until well after the Sun explodes and ends all human life on Earth."

Deterministic Turing machines are idealized regular computers. Given an algorithm, they run each step in order. If a problem (P or NP) takes M steps to execute, it takes a Turing machine M cycles to execute it. The faster the machine, the faster the algorithm runs. Multi-core machines are basically just multiple Turing machines, so at best you can split the workload evenly between the number of machines you have.

Non-deterministic Turing machines can process multiple calculations at the same time, to an infinite degree. This is saying you have a CPU with an infinite number of cores. It takes this machine essentially no time to execute any algorithm, because even for an algorithm that takes a hobajillion calculations (let's just say that's going to take a few millennia on a contemporary PC), the non-deterministic Turing machine can just create a hobajillion cores, split the workload between them, and process the data quite quickly. These machines do not exist, and quite possibly never will.

So, given we only have Turing machines, then generally speaking we can solve P problems but we cannot solve NP problems. In more specific circumstances, some P solutions still take too long on contemporary hardware and need faster machines to process quickly enough (e.g., how 3D games can get more detailed as hardware gets faster) and some NP solutions can be implemented on contemporary hardware for very small datasets (but unfortunately small data sets are rarely all that interesting, even in mundane situations).

P=NP means that all those NP problems can be solved in P complexity, so all we have to do is hit the math books a little harder and all those NP problems become solvable on real hardware. P!=NP means that we can stop wasting time and money trying to solve NP problems; it would be a let down, but at least it's one we're already expecting.

Non-deterministic Turing machines can process multiple calculations at the same time, to an infinite degree. This is saying you have a CPU with an infinite number of cores. It takes this machine essentially no time to execute any algorithm, because even for an algorithm that takes a hobajillion calculations (let's just say that's going to take a few millennia on a contemporary PC), the non-deterministic Turing machine can just create a hobajillion cores, split the workload between them, and process the data quite quickly. These machines do not exist, and quite possibly never will.

Elanthis, the analogy breaks down a little: NP machines would be independent cores with no shared memory subsystem. The different calculations cannot communicate with each other. That is very important. It is rather like this: before you start the calculation, you guess all possible answers to the given problem and give every one of your infinite number of (P-time) cores one candidate to check (these calculations are independent). Usually most of them will be wrong, but as soon as one possible answer is right, your calculation as a whole is finished and the machine answers: yes, there is an answer.

If you need an analogy, distributed computing would be better: a server assigns every participating computer a workload and waits for an answer. In the case of SETI, as soon as one answer is truly positive, we have found aliens. We do not know if we have found all aliens or much else, but the question "Are there aliens out there?" is answered.

My reply, of course, is very much an oversimplification once again ;-)

EDIT: Just to be clear: the number of cores for any given query is finite; the infinity is potential. Since the query can get arbitrarily large, the number of potentially needed cores grows beyond any set amount.

Here I thought that Ars didn't post this story because getting excited about it was premature – I thought highly of Ars for not posting it – and now, a week later, when the proof appears to be falling apart, Ars suddenly writes about it? I don't get it.

"…The theorem is a hack on discrete number theory that simultaneously disproves the Church-Turing hypothesis (wave if you understood that) and worse, permits NP-complete problems to be converted into P-complete ones. This has several consequences, starting with screwing over most cryptography algorithms--translation: all your bank account are belong to us--and ending with the ability to computationally generate a Dho-Nha geometry curve in real time.

This latter item is just slightly less dangerous than allowing nerds with laptops to wave a magic wand and turn them into hydrogen bombs at will…"

"…The theorem is a hack on discrete number theory that simultaneously disproves the Church-Turing hypothesis (wave if you understood that) and worse, permits NP-complete problems to be converted into P-complete ones. This has several consequences, starting with screwing over most cryptography algorithms--translation: all your bank account are belong to us--and ending with the ability to computationally generate a Dho-Nha geometry curve in real time.

This latter item is just slightly less dangerous than allowing nerds with laptops to wave a magic wand and turn them into hydrogen bombs at will…"

I don't know the author, but that is nonsense, almost every part of every sentence. ;-)

That's sort of the point: his book (and sequels) is a take on Lovecraftiana in which mathematical ideas can open fissures in the multiverse and let evil tentacular things through to suck our souls. The main character, Bob, is an IT guy who was drafted into the UK's occult secret service to prevent exactly that sort of thing. The book balances winky Cthulhuisms, office tedium, genuine dread, adventure, and computer geekery really well.

Quote:

You haven't heard of the Turing theorem--at least, not by name--unless you're one of us. Turing never published it; in fact he died very suddenly, not long after revealing its existence to an old wartime friend who he should have known better than to have trusted. This was simultaneously the Laundry's first ever success and greatest ever disaster: to be honest, they overreacted disgracefully and managed to deprive themselves of one of the finest minds at the same time…

…Just solving certain theorems makes waves in the Platonic over-space. Pump lots of power through a grid tuned carefully in accordance with the right parameters--which fall naturally out of the geometry curve I mentioned, which in turn falls easily out of the Turing theorem--and you can actually amplify these waves, until they rip honking great holes in spacetime and let congruent segments of otherwise-separate universes merge. You really don't want to be standing at ground zero when that happens. Which is why we have the Laundry…

Anyway, you want to try Nilp's "Antibodies" suggestion: it shows Stross at his forte, and addresses the "issue" more intensely than "The Atrocity Archives" does (there it is background setting, mostly). (I hope I am not overdoing the quotations.)

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean.

This is DEEP into nerd territory. Seriously. Maybe beyond rocket science. With ~2 years of calculus and computer science and a couple more of physical science, I only have a loose grasp of what the problem even is. Heck, I just barely understand d_jedi's explanation above, so let's see if I can go a slightly different route.

Hopefully you've figured out that a deterministic Turing machine is a sort of idealized computer with infinite storage (and/or time) -- a thought model -- that can do every type of computation you can imagine. However, it works one agonizing step at a time, so it's not necessarily efficient at all. Specialized devices are often far better in real life despite being finite. The value in a TM comes from the fact that, hypothetically, it can solve anything (as long as it's, um, mathematically solvable).

So, knowing that, try reading this and hopefully the tip of the P vs NP iceberg will come into view.

Ars has tons of computer science nerds so hopefully one of them can correct what's surely an oversimplification.

The problem isn't that it's an oversimplification so much as that there is a massive amount of background on what computing is (not computers, computing) that is needed to explain it properly. The easiest way I can explain it (which isn't 100% accurate but hopefully gives you an idea) is this:

First, the Turing machine concept is easy to imagine as a computer as we understand it today. It has state, and it reads from and writes to some sort of memory, thereby changing its own state. It continues to change its own state by reading and writing until the end or goal state is achieved, at which point it halts and the computation result is somewhere in memory. Through this model, computation is performed.

The difference between deterministic and non-deterministic machines comes when a decision is made. In a normal machine, when you branch, you can only choose one path. In the non-deterministic model, every time there is a split or a decision, you take all possible paths at once, in parallel. It is worth mentioning that this is not something that actually exists, only a theoretical concept; you cannot actually take every path every time in parallel on a computer as we know and understand them.

The result of this is that anything that can be solved in polynomial time (for more on what we mean by polynomial time, you'll want to look at how computational algorithms are classified; Big O notation might be a place to start) on the Turing machine is in P, while anything that can be solved in polynomial time on a non-deterministic Turing machine is in NP. At this point, hopefully it's becoming clear why these things are different, and probably not the same. With that said, we have no proof that they are not the same, hence the interest in this new potential proof.
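A short sketch of that branching idea in Python (the subset-sum example is mine, not the poster's): each item poses an include/exclude decision, and where a non-deterministic machine would take both branches at once, a deterministic one must explore them one after the other, up to 2^n paths in the worst case.

```python
# Subset sum: is there a subset of `items` adding up to `target`?
def subset_sum(items, target):
    if target == 0:
        return True       # this path reached the goal state
    if not items:
        return False      # this path dead-ends
    first, rest = items[0], items[1:]
    # A non-deterministic machine would explore both branches in parallel;
    # a deterministic one has to try them sequentially.
    return subset_sum(rest, target - first) or subset_sum(rest, target)

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # True (e.g., 3 + 8 + 4)
```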

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean. The only distinction I observe is between a deterministic Turing machine and a non-deterministic Turing machine, and neither of those terms has any meaning to me. Wolfram wasn't much help either.

The following is entirely meant for the layperson. It's about as accurate as stating that quantum mechanics involves spinning subparticles; it's an analogy at best.

P means "solvable in polynomial time." That basically means that as the data set you are processing grows, the amount of time it takes grows at a fairly slow and manageable rate.

NP means "not solvable in polynomial time." For these problems, the time to process the data set grows very rapidly as the size of the data set grows. For these problems, a single extra piece of data may very well take things from "computes in a few minutes" to "computes in a few years." Add a few more pieces of data and you can pass through "takes longer than you have to live" all the way to "won't be done processing until well after the Sun explodes and ends all human life on Earth."

Deterministic Turing machines are idealized regular computers. Given an algorithm, they run each step in order. If a problem (P or NP) takes M steps to execute, it takes a Turing machine M cycles to execute it. The faster the machine, the faster the algorithm runs. Multi-core machines are basically just multiple Turing machines, so at best you can split the workload evenly between the number of machines you have.

Non-deterministic Turing machines can process multiple calculations at the same time, to an infinite degree. This is saying you have a CPU with an infinite number of cores. It takes this machine essentially no time to execute any algorithm, because even for an algorithm that takes a hobajillion calculations (let's just say that's going to take a few millennia on a contemporary PC), the non-deterministic Turing machine can just create a hobajillion cores, split the workload between them, and process the data quite quickly. These machines do not exist, and quite possibly never will.

So, given we only have Turing machines, then generally speaking we can solve P problems but we cannot solve NP problems. In more specific circumstances, some P solutions still take too long on contemporary hardware and need faster machines to process quickly enough (e.g., how 3D games can get more detailed as hardware gets faster) and some NP solutions can be implemented on contemporary hardware for very small datasets (but unfortunately small data sets are rarely all that interesting, even in mundane situations).

P=NP means that all those NP problems can be solved in P complexity, so all we have to do is hit the math books a little harder and all those NP problems become solvable on real hardware. P!=NP means that we can stop wasting time and money trying to solve NP problems; it would be a let down, but at least it's one we're already expecting.

I take issue with your description of what P and NP are. NP does not mean "not polynomial"; it means "non-deterministic polynomial," which is a much smaller, more precisely defined class. There are many things that are not in P and not in NP, say, things in EXP (exponential time). Also, and most importantly, there is a distinct relationship between P and NP where the only difference is parallelism. The NP machine has infinite parallelism accessible at any time with no threading issues to contend with; it just magically does it, leading to a class of problems that are very branchy but could still in theory be solved in polynomial time if we could branch and parallelize with no cost.

Usually Ars is somewhat noob-friendly, but this article doesn't explain in the slightest what P and NP actually mean. The only distinction I observe is between a deterministic Turing machine and a non-deterministic Turing machine, and neither of those terms has any meaning to me. Wolfram wasn't much help either.

The following is entirely meant for the layperson. It's about as accurate as stating that quantum mechanics involves spinning subparticles; it's an analogy at best.

P means "solvable in polynomial time." That basically means that as the data set you are processing grows, the amount of time it takes grows at a fairly slow and manageable rate.

NP means "not solvable in polynomial time." For these problems, the time to process the data set grows very rapidly as the size of the data set grows. For these problems, a single extra piece of data may very well take things from "computes in a few minutes" to "computes in a few years." Add a few more pieces of data and you can pass through "takes longer than you have to live" all the way to "won't be done processing until well after the Sun explodes and ends all human life on Earth."

Deterministic Turing machines are idealized regular computers. Given an algorithm, they run each step in order. If a problem (P or NP) takes M steps to execute, it takes a Turing machine M cycles to execute it. The faster the machine, the faster the algorithm runs. Multi-core machines are basically just multiple Turing machines, so at best you can split the workload evenly between the number of machines you have.

Non-deterministic Turing machines can process multiple calculations at the same time, to an infinite degree. This is saying you have a CPU with an infinite number of cores. It takes this machine essentially no time to execute any algorithm, because even for an algorithm that takes a hobajillion calculations (let's just say that's going to take a few millennia on a contemporary PC), the non-deterministic Turing machine can just create a hobajillion cores, split the workload between them, and process the data quite quickly. These machines do not exist, and quite possibly never will.

So, given we only have Turing machines, then generally speaking we can solve P problems but we cannot solve NP problems. In more specific circumstances, some P solutions still take too long on contemporary hardware and need faster machines to process quickly enough (e.g., how 3D games can get more detailed as hardware gets faster) and some NP solutions can be implemented on contemporary hardware for very small datasets (but unfortunately small data sets are rarely all that interesting, even in mundane situations).

P=NP means that all those NP problems can be solved in P complexity, so all we have to do is hit the math books a little harder and all those NP problems become solvable on real hardware. P!=NP means that we can stop wasting time and money trying to solve NP problems; it would be a let down, but at least it's one we're already expecting.

NP does not mean "not solvable in polynomial time". (This is a very common misconception about this problem.) The answers to problems in NP can be verified in polynomial time. They may or may not be solvable in polynomial time.

I think a lot of people trying to explain this are missing the fact that P and NP are not disjoint sets. P is a subset of NP — all P problems are also NP problems. That is, all problems that can be solved in polynomial time can also have their solutions verified in polynomial time.

The assertion this paper is trying to prove is that not all NP problems are P problems. That is, there exist problems which cannot be solved in polynomial time, but whose solutions can be verified in polynomial time.
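A toy illustration of that containment in Python (the problem choice and helper names are mine): "does this list contain a duplicate?" can be solved outright in polynomial time, which puts it in P, and a claimed solution can also be verified in polynomial time, which is exactly the NP condition.

```python
# Solvable in polynomial time => the problem is in P.
def solve(xs):
    seen = {}
    for i, x in enumerate(xs):
        if x in seen:
            return (seen[x], i)   # certificate: positions of a duplicate
        seen[x] = i
    return None

# A claimed solution is checkable in polynomial time => the problem is
# also in NP, as every P problem is.
def verify(xs, certificate):
    i, j = certificate
    return i != j and xs[i] == xs[j]

xs = [4, 8, 15, 16, 23, 42, 15]
print(verify(xs, solve(xs)))      # True
```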

Starting with the proviso that computer science is not my field, I do have an indirect professional interest in pure maths and theoretical computer science. My impression of the majority opinion, from discussions on mailing lists pertaining to these topics, is this: the paper that generated this buzz is more a suggestive promissory note than a proof. In fact, it isn't even a proper outline of a proof, if by outline one requires at least enumerating the main steps (lemmas, theorems) one would invoke in order to prove the final result. Again, this is merely a report on what seems to be the prevailing opinion; more people is not the same as more correct. I understand the article is being looked at and evaluated by experts even now - the standard procedure for any significant result before it becomes an accepted theorem.

Here's what I'm kinda wondering about (and while I remember some of this stuff from uni., this is definitely over my head, but): how does solving the theoretical problem (assuming a proof of P=NP) help us practically? It's kinda like saying (and this is a very crude comparison) that if we theoretically prove that matter can move faster than the speed of light in vacuum, we'll be able to travel to the stars - the theory may be there, but practically we may never be able to harness enough energy to do it.

I disagree. As I said, proving that P!=NP isn't itself particularly exciting or groundbreaking. What would be exciting is the opposite: a proof of P=NP. If P=NP were to be proven, that would have a nearly incalculable impact on the state of modern computing and, as a result, our lives. Instant Nobel Prize, no question.

EDIT - For a lot of theoretical math, I would agree with the part about it having to percolate up. However, with this particular problem, there is an overwhelming number of direct applications to core computer science problems. Solving those would impact things very quickly.

To be a little pedantic, I believe they would get either the Fields Medal or the Turing Award :-) since there's no Nobel Prize for mathematics, and I believe the same goes for computer science, if I am not mistaken.

Thanks to Kani and elanthis (with Kani's correction on NP) for explaining Turing machines better than I could.

juanxer wrote:

How does quantum computing fit into this, Turing machine models-wise?

My knowledge is pretty crude, but I don't think quantum computers (at least the sort built so far) have much in common with Turing machines. Yeah, the point is that they can solve some classes of problems that normal CPUs struggle with (thanks to parallelism), but it's pretty constrained by the finite number of qubits. Also, the output from a quantum machine is probabilistic; you have to run it many times to have high confidence in the result, and even then it will never be 100%, whereas a true Turing machine should get it exactly right the first time. Again, I'm pretty n00bish here, so corrections are very welcome.

2late2die wrote:

how does solving the theoretical problem (assuming a proof of P=NP) help us practically? It's kinda like saying (and this is a very crude comparison) that if we theoretically prove that matter can move faster than the speed of light in vacuum, we'll be able to travel to the stars

Well at least knowing whether something is possible (within reasonable limits of space and time) is a good first step! For your analogy, just being able to accelerate small particles (not human-scale objects) faster than light could solve the problem of communication delay across space. For computer science, I'd think knowing P=NP exactly might allow them to make some small computational jumps in big calculations or big datasets, which could save energy, time, money, etc... even if they don't break encryption schemes or solve travelling salesman problems right away.

This may be true and, at a theoretical level, it could be expected that attempting to solve such a problem would be a futile effort because it could take impossibly long to generate and validate the possible solutions. That does not make it impossible to arrive at a solution, only highly unlikely.

Fortunately, the supercomputer known as "Earth," limited though it may be, has in fact happily arrived at an answer. As it turns out, the answer is yes.

Matt Ford / Matt is a contributing writer at Ars Technica, focusing on physics, astronomy, chemistry, mathematics, and engineering. When he's not writing, he works on realtime models of large-scale engineering systems.