Posted by Soulskill on Wednesday August 11, 2010 @02:02AM
from the intuitively-obvious-to-the-most-casual-observer dept.

An anonymous reader writes "We previously discussed news that Vinay Deolalikar, a Principal Research Scientist at HP Labs, wrote a paper that claimed to prove P is not equal to NP. Dick Lipton, a Professor of Computer Science at Georgia Tech, analyzed the idea of the proof on his blog. In a recent post, he explains that there have been many serious objections raised about the proof. The post summarizes the issues that need to be answered in any subsequent development, and additional concerns are raised in the comment section."

The paper was preliminary to begin with. It is currently withdrawn in order to fix minor typos and because currently "enough unresolved issues with the paper exist to foster a healthy sense of skepticism". This is a good thing for now.

The original discussion was in a Google Doc but has since moved to a wiki [michaelnielsen.org].

For anyone interested in the details, you can find a lot more on this wiki [michaelnielsen.org], where a lot of mathematicians are weighing in on the proof and its potential flaws. Mathematicians are gathering from all over to examine this paper because it's so interesting. Even if one of the serious objections that have been raised so far kills it, it contains some novel ideas that will get people thinking.

They've also been gathering the news coverage and such, so it's probably the best place to find up-to-date information about this proof. It seems to have sparked quite a lot of interest for a paper that hasn't even been properly published.

No, it's not. The relationship between computer science and mathematics is similar to the relationship between physics and mathematics: there is some overlap, lots of fruitful cooperation, and some fundamental differences in methodology and subject matter.

Even if computer science were a subset, it's only the computer scientists that are gathering to vet this paper. Saying "mathematicians are gathering to vet this paper" would be as misleading

>No, it's not. The relationship between computer science and mathematics is similar to the relationship between physics and mathematics:

No, it's not: the one IS the other, and it's the perceived DIFFERENCE that doesn't exist. It's purely a perception - and a DELIBERATE illusion at that, designed to simplify the process. Ultimately it's easily provable that every computer program is a simple mathematical function - so simple, in fact, that it is a single number.
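The "single number" claim above can be made concrete with a toy Gödel numbering: treat the program's source text as one big integer. (A sketch of the idea; classical Gödel numberings use prime factorizations, but any injective encoding makes the point.)

```python
# Toy Gödel numbering: any program text maps to a unique integer,
# and that integer maps back to exactly the same text.
def godel_number(source: str) -> int:
    # A leading 0x01 byte keeps the encoding injective even if the
    # text's bytes would otherwise start with zeros.
    return int.from_bytes(b"\x01" + source.encode("utf-8"), "big")

def decode(n: int) -> str:
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return data[1:].decode("utf-8")

program = "print(2 + 2)"
n = godel_number(program)
assert decode(n) == program  # round-trips: the program *is* this number
```

Any injective encoding works equally well; the only point is that "program" and "natural number" are interchangeable.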

There is nothing weird about this - if you know lambda calculus, Gödel numbering and Turing machines, it's simple logic. We have never done anything to "split" the fields. All computer science did was to create a (very shallow) layer of pretense through which to access the maths.

To suggest it is anything OTHER than mathematics is to prove you have absolutely no idea how computers actually work. In the real world, every computer is a universal Turing machine. If you have any real doubt, just consider this: any program COULD be written in Lisp. Lisp is DIRECTLY based on lambda calculus - in fact the ONLY (minor) difference is some small syntactical changes designed to make it easier to TYPE lambda expressions on a computer (it was, after all, designed for writing in). Lambda calculus is a simple form of mathematical language - like algebra or Gödel numbers or any of a dozen other ways to write down 2+2=4, all just different notations designed to be useful for different purposes.
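That lambda calculus really can "write down 2+2=4" is easy to demonstrate with Church numerals, here transliterated into Python lambdas (a standard encoding, not anything specific to this thread):

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

two  = succ(succ(zero))
four = add(two)(two)

# Convert back to an ordinary int by counting applications of f.
to_int = lambda n: n(lambda k: k + 1)(0)
assert to_int(four) == 4  # 2 + 2 = 4, purely as function application
```

Nothing here is arithmetic in the machine sense; addition falls out of function composition alone.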

It's true that currently the most popular languages do not follow the Lisp "look like the function you are" structure - but this is because on single-CPU machines, top-down programs were slightly more similar to how the hardware actually PROCESSED the functions, and that made them easier to program in. Expect this to change in the next few years - multi-CPU machines are actually EASIER to program for in a functional language like Lisp, which makes all those nasty multithreading issues just go away by putting you closer to the actual mathematics that happens.

Computer science is not a subset of mathematics - rather, mathematics is a subset of computer science. Any question in mathematics can be restated as a question about Turing machines. The question "Can the statement x be proven in theory T?" can be restated as: does Turing machine PM(x,T) halt, where PM is a rather simple Turing machine that tries out all potential proofs of x using the axioms of theory T (usually there are infinitely many proofs to try) and halts if it finds a valid proof.
You could say that comp
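The PM(x,T) machine described above can be sketched as an unbounded search over candidate proof strings. Here `check_proof` is a hypothetical validity checker for the theory (a stand-in name, not a real prover):

```python
from itertools import count, product

ALPHABET = "01"  # proofs encoded over some fixed finite alphabet

def proof_search(check_proof, statement, axioms):
    """Halt iff some finite string is a valid proof of `statement`
    from `axioms` according to `check_proof`; otherwise loop forever."""
    for length in count(1):  # enumerate all strings, shortest first
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if check_proof(candidate, statement, axioms):
                return candidate  # halts: statement is provable

# Toy demo with a trivial "checker" that accepts one magic string:
found = proof_search(lambda p, s, a: p == "101", "statement", [])
assert found == "101"
```

If no proof exists, the loop never halts - which is exactly why provability reduces to the halting problem.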

>To suggest it is anything OTHER than mathematics is to prove you have absolutely no idea how computers actually work. In the real world, every computer is a universal Turing machine. If you have any real doubt, just consider this: any program COULD be written in Lisp. Lisp is DIRECTLY based on lambda calculus - in fact the ONLY (minor) difference is some small syntactical changes designed to make it easier to TYPE lambda expressions on a computer (it was, after all, designed for writing in).

The mathematical operations that comprise a function ARE the function, and nothing looks more like that than the underlying assembly.

Assembly language doesn't have a notion of types. For example, the Python expression [x + 5 for x in some_list] starts with a list (or other iterable item) and ends with a list. Assembly language has to explicitly loop over the elements, apply the operation, write back the result, and hope the rest of the program treats it as a list.
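The contrast described above can be made concrete: the comprehension and the explicit loop below compute the same list, but the loop is the shape the machine actually executes (an illustrative comparison in Python, not actual assembly):

```python
some_list = [10, 20, 30]

# High-level: the 'list' type is implicit in the expression itself.
result_hl = [x + 5 for x in some_list]

# Low-level shape: index, load, add, store -- closer to what the
# machine does, with no notion of 'list' as a type, just memory.
result_ll = [0] * len(some_list)
i = 0
while i < len(some_list):
    result_ll[i] = some_list[i] + 5
    i += 1

assert result_hl == result_ll == [15, 25, 35]
```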

>Assembly language doesn't have a notion of types. For example, the Python expression [x + 5 for x in some_list] starts with a list (or other iterable item) and ends with a list. Assembly language has to explicitly loop over the elements, apply the operation, write back the result, and hope the rest of the program treats it as a list.

Or, in assembly, you could implement functions which verify the type of parameters and branch to exception code if they don't match. Which is exactly what Python is doing unde

everything a computer does is just math and can be translated into any equivalent math and you'll get the same result

Being able to simulate and model something at a low level doesn't give you a high level understanding. A mathematician understands boolean logic, but that doesn't mean he has the knowledge and skills to design a CPU or program it. That knowledge and those skills aren't taught in mathematics, they are taught in computer science.

>Being able to simulate and model something at a low level doesn't give you a high level understanding. A mathematician understands boolean logic, but that doesn't mean he has the knowledge and skills to design a CPU or program it. That knowledge and those skills aren't taught in mathematics, they are taught in computer science.

Yes, you are completely correct to say that the specific disciplines necessary to write computer programs, or to design a CPU (which is a combination of computer science and electrica

"Is math" and "is covered by mathematics degree programs" are completely orthogonal properties.

Nevertheless, it is computer scientists, not mathematicians, who are gathering to vet this paper: if you work on the theory of computation, you are a computer scientist, no matter what your training, background, or methods.

>A mathematician understands boolean logic, but that doesn't mean he has the knowledge and skills to design a CPU or program it. That knowledge and those skills aren't taught in mathematics, they are taught in computer science.

This is a half-truth at best. Firstly, the knowledge to design a CPU isn't taught in computer science either - it's taught in electronic engineering. The real problem is that so few people today actually studied computer science - learning programming != computer science.

>Donald Knuth was trained as a mathematician before computer science existed, but today he is a computer scientists at the computer science department. Go check his home page.

You know there are OTHER specialized subfields of mathematics - and Knuth is among the primary people standing behind the belief that computer science is maths. He has testified to the Supreme Court his belief that it is such pure maths that it should be unpatentable.

>To suggest it is anything OTHER than mathematics is to prove you have absolutely no idea how computers actually work. In the real world, every computer is a universal Turing machine. If you have any real doubt, just consider this: any program COULD be written in Lisp. Lisp is DIRECTLY based on lambda calculus - in fact the ONLY (minor) difference is some small syntactical changes designed to make it easier to TYPE lambda expressions on a computer (it was, after all, designed for writing in). Lambda calculus is a simple form of mathematical language - like algebra or Gödel numbers or any of a dozen other ways to write down 2+2=4, all just different notations designed to be useful for different purposes.

OK fine, but if untyped lambda calculus is a form of notation that's useful for describing computation, isn't this a circular argument? It's a computer. Of course a form of notation used to express computation would be able to describe what it does.

If, on the other hand, I want to describe a system where I have a bunch of rocks in one pile, and I move rocks to another pile based on certain logical criteria, forming a kind of "loop"... couldn't that also be expre

You can call it math if you want, but if I can write a program to do what I want to do, it makes no difference to me if it's math or not.

The difference is that a computer can be perfectly modeled by math because all operations on a computer are mathematical. Your example of moving a rock includes incompletely modeled physical fields such as robotics (how to move the limb that moves the rocks) and computer vision (how to determine where to place the rocks so that they don't fall).

So in the end, your comments sound like the same kind of navel-gazing that says "math is everywhere".... "Look, Bobby, see the Golden Ratio in the structure of this leaf? Math is everywhere!" "No dad, that is not math. That is a leaf. Math is how you think about the leaf."

Right. The leaf can be described by math. Math is the description, an abstract representation of the concept of how the leaf is structured.

A computer program is nothing more than an abstract representation of mathematical operations.

>OK fine, but if untyped lambda calculus is a form of notation that's useful for describing computation, isn't this a circular argument?

In a sense, being circular makes it true - because you can translate between a form of pure maths and a computer language, that's a "circle" - and the fact that the meaning remains utterly unchanged throughout the circle means they are identical.

>It's a computer. Of course a form of notation used to express computation would be able to describe what it does.

>There is nothing weird about this - if you know lambda calculus, Gödel numbering and Turing machines, it's simple logic. We have never done anything to "split" the fields. All computer science did was to create a (very shallow) layer of pretense through which to access the maths.

Cool... and since most porn is digital now, and displayed on computers, can we then say that porn is just a calculation in lambda calculus?

>>Ultimately it's easilly provable that every computer program is a simple mathematical function - so simple in fact that it is in fact a single number.

>Every biological system is a physical system, but if you study physics, you'll know next to nothing about biology.

If you do the quantum wave functions for every particle in the cat, in the milk, in the floor... you can predict that the cat will drink the milk. It's in there - though granted with current technology it would take us longer than the e

Really, can't a very similar argument be applied to nearly every field of applied knowledge? Mathematics is a model for how the universe works; all practical science is just applied mathematics in some form or another. Music, anthropology, biology, conspiracy theories--they can all be described as applications of mathematics.

All those things existed before mathematics, were not created by mathematicians, and exist independently of whether mathematics does or not. We use mathematics to describe them, but they all predate it.

Computer science doesn't share ANY of those traits. It did not, indeed COULD NOT, exist before mathematics; it was created BY mathematicians and it cannot exist independently of mathematics. That's why it is in fact wrong to claim that it's APPLIED mathematics (though it's useful to think that way) - it isn't, it IS mathematics.

Strictly speaking, every computer in the real world is a finite state machine that's complex enough to simulate a universal Turing machine. It's a subtle difference that really only matters if you're considering the math behind it;)

>Strictly speaking, every computer in the real world is a finite state machine that's complex enough to simulate a universal Turing machine.

Not exactly. A finite state machine cannot represent the unbounded tape of a universal Turing machine. I prefer to model computers as deterministic linear bounded automata [wikipedia.org], which are identical to Turing machines except that they cannot advance the index past the end of the input. Each LBA has an equivalent FSM, but unlike an FSM, an LBA has an index, which allows reasoning about arrays and pointers.
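The difference described above can be illustrated with a language that no FSM can recognize but a linear bounded automaton can: a^n b^n. The sketch below (my own toy model, not the full LBA formalism) decides it while keeping the head strictly inside the input, which is the LBA restriction:

```python
def accepts_anbn(word: str) -> bool:
    """Decide a^n b^n by repeatedly crossing off one 'a' at the left
    and one 'b' at the right, never leaving the input -- the key
    LBA restriction. An FSM cannot do this: it has no index to move."""
    tape = list(word)
    left, right = 0, len(tape) - 1
    while left < right:
        if tape[left] != "a" or tape[right] != "b":
            return False
        tape[left], tape[right] = "X", "Y"  # mark the matched pair
        left += 1
        right -= 1
    return left > right  # true only if every symbol was paired off

assert accepts_anbn("aabb")
assert not accepts_anbn("aab")    # unequal counts
assert not accepts_anbn("abab")   # wrong shape
```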

writing a Lisp interpreter (or compiler) that can use multiple CPUs effectively is no small task.

Just making a thread-map counterpart to the map functions [delorie.com] will help use more CPUs: break the list into portions, have each thread process one, and slam them back together. This might be easier on splittable list structures, such as skip lists [wikipedia.org] or array lists [wikipedia.org], than on the common linked list.
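The split-process-merge idea above is roughly what Python's `concurrent.futures` gives you out of the box; a minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def thread_map(fn, items, workers=4):
    """A thread-map: farm the elements out across threads, then
    slam the results back together in the original order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))  # map preserves input order

squares = thread_map(lambda x: x * x, range(8))
assert squares == [0, 1, 4, 9, 16, 25, 36, 49]
```

For CPU-bound work in CPython the GIL limits the actual speedup; `ProcessPoolExecutor` is the usual workaround for that case.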

buffer overflow exploits, quantum physics, the pre-big bang universe and phone company math make my head hurt. Understanding this sort of thing must be like having a set of truck nuts hanging from your geek card.

>buffer overflow exploits, quantum physics, the pre-big bang universe and phone company math make my head hurt. Understanding this sort of thing must be like having a set of truck nuts hanging from your geek card.

Those are all very different things that require very different skills and interests.

Most mathematicians would not understand a lot of the things in that proof, barring research, unless they were into abstract algebra and group theory. I tried to read through it a bit and holy shit it was difficult. It's mostly that I'm not used to seeing things presented that way, with a fair share of WTF thrown in over topics I've never been exposed to.

"Formal Language Theory" - an undergrad course at my university that dealt with Finite State Automata, Touring Machines, Computability Theory, Complexity Theory, and the formal proofs thereof, was the most interesting class that I've ever taken. That being said, I always felt when doing homework for that class that I was taking a dive off the deep end (i.e. pushing the limits of human sanity). And that's only from studying the "low hanging fruit" that people were publishing papers on several decades ago when theoretical computer science was still relatively young. I can't imagine things have gotten any less mind-warpingly complex since then.

I have tremendous respect for the folks who continue to "dabble" in this stuff. I'm sure that for their efforts they have been rewarded with glimpses of indescribably beautiful works of both man and of nature.

I also took formal language theory and found it to be one of the most thought-provoking classes I ever took, as well as one of the most difficult. The biweekly assignments would take me about 20 hours, but man I learned a lot, including how to program a Turing machine, an exercise in abstraction which still blows my mind. I'd like to give a big shoutout to my professor, David Barrington, an amazing teacher who also seems to be doing very interesting work in the analysis of this paper. See links to his posts
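Programming a Turing machine, as mentioned above, fits in a few lines once you have a simulator. Here is a toy one running a binary-increment machine (my own rule encoding, nothing standard):

```python
def run_tm(tape, rules, state="right", head=0, blank="_"):
    """Tiny Turing-machine simulator. `rules` maps (state, symbol)
    to (new_state, write_symbol, move), with move in {-1, 0, +1}."""
    tape = list(tape)
    while state != "halt":
        # Grow the tape with blanks if the head walks off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape).strip(blank)

# A machine for binary increment: walk right to the end of the
# number, then propagate the carry back to the left.
INC = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, carry on
    ("carry", "0"): ("halt",  "1",  0),  # absorb the carry
    ("carry", "_"): ("halt",  "1",  0),  # carry past the leftmost digit
}

assert run_tm("1011", INC) == "1100"  # 11 + 1 = 12
assert run_tm("111", INC) == "1000"   #  7 + 1 = 8
```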

Vinay Deolalikar was a little unfortunate in that his unreviewed theory got more attention than he believed it would. It seems his paper offers a new approach, but as a first draft it had a number of holes and was by no means ready for "prime time".

On the other hand, you could say that broadcasting that you have a solution to one of the most famous remaining unsolved problems was a little ill-advised.

Lipton was one of the co-authors of a great analysis called "Social Processes and Proofs of Theorems and Programs" (http://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf). It points out how a very complex proof is only as valid as the community of scientists who believe it. There are great risks for subtle lapses of logic in a 90 page proof and at best, a distinguished team of reviewers can only agree that they have not found a flaw. That said, the P != NP proof is great in that it has started a new socia

In one case encryption can be proven secure; in the other we lose encryption but gain efficiency. What would be better for humanity going forward: being able to solve box packing problems instantly, or having nearly perfectly secure communication?

Intuition suggests that improvable worlds would be better than universes set in proverbial stone, so P = NP. Even if we can't prove a lemma, that kind of world doesn't rule out accidental proofs by dumb luck or alien intelligence.
Personally, assuming I kinda grok the basics, there are more cases of improvable than provable, such as Sylvain Gelly's MoGo program, which can beat average Go players about five decades sooner than expected. P = NP means life is easy, imho.

I'm one of those ex-mathematicians who still sulks at the existence of discussions beyond my ability to comprehend, where there is absolutely nothing constructive I can add. As a student back in the day, I was always nervous of proofs that were longer than a page - it always seemed to me that once a single proof got beyond a certain length, there was always some lingering doubt that some flaw or special condition had been overlooked, doubt that would pass on to every result that then used it. I guess that's the difference between learning math (where the problems are deliberately selected by textbook authors to have nicely bounded complexity) and researching math (where nobody knows how many twists and turns there are in the road between you and your goal).

I am a layman (not a mathematician), however there are several large points of suspicion that I can identify with this proof. First of all, it's 102 pages long. Second of all, it's a proof by contradiction, namely that certain known statistical behaviors of a formula are contradicted for the author's constructions if P=NP. So in reality, a proof like this requires not only examination of the particular proof in question, but of all other theorems and inferences that are relied upon to construct the contradi

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems for mathematics. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all of mathematics is impossible, thus giving a negative answer to Hilbert's second problem.

They've tried that, but all that's been proven so far is that several types of proof won't work, rather than proving that it's impossible to prove. The first few sections of this paper itself go into detail about why this proof isn't one of the kinds of proof that won't work, incidentally.

There has been much debate surrounding answers to the question of P=NP. The problem is that we cannot answer the question until we have successfully asked the question. The question is impossible to ask; that is why it will never be answered.

1) A provable answer to the question P=NP requires a complete and consistent formal statement of the question. Rationale: Hopefully, this is self-evident. It is certainly axiomatic that a formally provable statement be expressed in formal terms. Completeness and consistency follow from the requirement to provide a proof that is not subject to challenge.

2) A complete and consistent formal statement of the question must incorporate a complete and consistent formal definition of the sets P and NP. Rationale: Hopefully, this is also self-evident. (I have left out the requirement to define the equality operator, since it is defined for us by set theory.)

3) A definition based on a potentially undetectable characteristic is incomplete. Rationale: We cannot accept the definition of the set NP purely in terms of its members having a property (a solution test in polynomial time) that we have no reliable mechanism to detect. Therefore, a complete definition of the set NP must be arrived at via some other means.

4) The only possibility for a complete definition of the set NP is a language. Rationale: Once we rule out observation of characteristics, our only means towards a definition of the set NP is to formulate a language, a procedure for testing the formal expression of the candidate problem that will accept the problem or reject it.

5) No formal language capable of expressing non-trivial mathematical problems can be consistent and complete. Rationale: As proven by Gödel.

6) Therefore, no consistent and complete definition of the set NP is possible. Rationale: If we accept that the set NP can only be rigorously defined via a language, this conclusion follows from the premises above.

7) Therefore, no consistent and complete statement of the problem of P=NP is possible. Comment: A conclusion which is not only proven in this paper, but supported by the years of argument between mathematicians regarding the relevance of proposed answers to the problem.

8) Therefore, P=NP is undecidable. Comment: Given our inability to ask, we are unable to answer.

Step 3 states that "We cannot accept the definition of the set NP purely in terms of its members having a property (a solution in polynomial time) that we have no reliable mechanism to detect." "Detect" is a bit vague here, but all that's needed for an existence proof (or disproof) is a formal definition, not any means of actually detecting cases. Look at pretty much any proof involving transfinites.

Furthermore, this guy is asking for a "complete and consistent" definition.

In the Incompleteness Theorem, a system of axioms is complete if, for all statements in the system, either the statement or its negation is provable from the axioms. A system is consistent if there exist no statements for which both the statement and its negation are provable.

Basically, his "proof" is "Hey, we don't want a contradictory or unfinished definition, right? And those words mean the same thing as consistant and compl

In addition to the fail at step 3 mentioned in sibling thread, step 6 does not follow. Math might be incomplete in other areas, and be complete in all that is needed to state P=NP.

That there is a problem should be evident from the fact that this proof doesn't use any specifics of P=NP, so it can be used to prove anything undecidable, which is clearly wrong, since there exist proven theorems.

How could P=NP be proven undecidable? If P=NP is proven undecidable, it means there can be no deterministic polynomial-time algorithm to solve an NP-complete problem in polynomial time (because such an algorithm would prove that P=NP). If no such algorithm exists, then P!=NP.

(this doesn't mean P=NP can't be undecidable... it just means that if it is undecidable, the question of its undecidability is also so, and so on up the line)

If, in step 2, you mean a formal decision procedure to tell if a problem is in P or in NP, no, we don't need one. We know that any problem in P is also in NP, by definition. Therefore, if we can prove that any problem in NP is also in P, using only properties we've proved belong to NP, we've got a proof. If we can prove that any individual problem in NP is not in P, we have a proof of the opposite. Neither of these requires a general NP- or P-detector.

I almost modded you interesting, but I can't. I don't have enough of a background to realize if, for the particular issue of P (!)= NP, it makes sense to think of decidability. I do suspect it doesn't, because the problem is pretty old, and I've never heard of anyone talking about its decidability. When there's a million dollars involved, I've learned to expect that enough people tried enough different ways to exhaust all the ideas I can have in 5 minutes from hearing about the problem (and ever since I've he

Couple of things:
Irrespective of whether P = NP or not, there exist polynomial-time reductions [wikipedia.org] from any problem in P to any NP-complete problem. I guess you meant it the other way round.

I am not sure in what sense you use the term "undecidable", but if (as it seems likely) you used it to mean independent [wikipedia.org], then your argument fails at steps 2 and 3.

A proof that P =? NP is undecidable has to only show that one can neither prove nor disprove this statement: it need not show that no polynomial-time

This question has been considered by quite a few different people; search google for P vs NP independent [google.com] ("independent" meaning independent from the usual accepted axioms for mathematics, i.e., can't be proven using the currently accepted axioms).

Not really. It's a proof of a negative, which is vastly more difficult than a positive. For example, proving that no dog can have black spots is much harder than proving that dogs can have black spots. You'd have to prove that it's impossible for any dog in existence to have black spots. The opposite only requires the existence of one dog with black spots.

It's the same with Fermat's Last Theorem: Prove no solutions exist for x^n+y^n=z^n for all integers x,y,z where n is an integer greater than 2. That proof takes hundreds of pages.

P is the class of problems for which you can get the answer (output) quickly (i.e. in polytime).

NP is the class of problems for which you can verify an answer quickly.
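The verify/solve asymmetry can be shown with subset sum, a standard NP-complete problem: checking a proposed certificate is quick, while the obvious solver tries exponentially many subsets (a sketch that ignores element multiplicity, which is fine for illustration):

```python
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check: is this certificate a valid answer?
    (Multiplicity is ignored in this sketch.)"""
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Brute-force solver: tries all 2^len(nums) subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to target

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)           # exponential search...
assert cert is not None
assert verify(nums, 9, cert)    # ...but the check is cheap
```

P = NP would mean that whenever `verify` runs in polynomial time, some polynomial-time `solve` exists too - not just the brute-force one.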

P = NP is the question of whether all problems where you can verify the answer quickly have corresponding solvers that also find the answer quickly. If yes, P = NP, if not, P != NP. It's really a question about how powerful algorithms can be - and thus how powerful intelligence can be, because if P = NP, you could