1) Graham's problem is to decide whether a given edge-coloring (with two colors) of the complete graph on the vertex set $\lbrace-1,+1\rbrace^n$ contains a $K_4$ which is colored with just one color and whose four vertices lie in a common plane. Graham's result is that such a planar $K_4$ exists provided $n$ is large enough, i.e., larger than some integer $N$.

Recently, I learned that the integer $N$ determined by Graham's problem is known to lie between $13$ and $F^{7}(12)$, the latter being Graham's number, a number which beats any imagination and which, according to Wikipedia, is practically incomprehensible. Some consider it to be the largest number ever used in a serious mathematical argument.
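To get a feeling for why $F^{7}(12)$ is beyond comprehension, here is a toy evaluator for Knuth's up-arrow notation (writing $F(n) = 2\uparrow^n 3$ is my assumption about the convention behind $F$; the function name `arrow` is mine). It can handle only the very smallest cases before the numbers stop being physically representable.

```python
def arrow(a, n, b):
    """Knuth up-arrow a ^(n) b: n = 1 is ordinary exponentiation,
    and a ^(n) b = a ^(n-1) (a ^(n) (b-1)).  Feasible only for
    tiny arguments."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

# F(1) = 2↑3 = 8, F(2) = 2↑↑3 = 16, F(3) = 2↑↑↑3 = 65536 -- but
# F(4) = 2↑↑↑↑3 is already a tower of 2's of height 65536, and
# Graham's bound iterates F seven times starting from 12.
```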

One way of looking at this is the following: Graham's problem of deciding, for a given $n$ and a given coloring, whether such a planar $K_4$ exists can be solved in constant time in principle. Indeed, if $n > N$ the answer is always yes, and there are only finitely many cases with $n \leq N$ to look at. However, for all practical purposes (assuming that $N$ is close to its upper bound) the decision takes polynomial time in the size of the input: one has to check all four-tuples of vertices.
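The practical check can be sketched as brute force over all four-element subsets of the hypercube. This is only an illustration, with "planar" taken to mean that the four vertices lie in a common affine plane; the helper names are mine.

```python
from fractions import Fraction
from itertools import combinations, product

def _rank(rows):
    """Rank of a small integer matrix via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(rank, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(rank + 1, len(m)):
            f = m[i][col] / m[rank][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

def has_mono_planar_k4(n, color):
    """Check all 4-subsets of {-1,+1}^n for a K4 whose vertices lie
    in a common affine plane and whose six edges all get the same
    color; `color` maps a frozenset of two vertices to a color."""
    vertices = list(product((-1, 1), repeat=n))
    for quad in combinations(vertices, 4):
        v0 = quad[0]
        diffs = [[v[i] - v0[i] for i in range(n)] for v in quad[1:]]
        if _rank(diffs) > 2:
            continue  # the four points are not coplanar
        colors = {color(frozenset(e)) for e in combinations(quad, 2)}
        if len(colors) == 1:
            return True
    return False
```

The number of 4-subsets is polynomial in the number of vertices $2^n$, which is the sense in which the practical check is "polynomial time".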

I am sure someone can now cook up an algorithm which needs exponential time for all practical purposes but constant or polynomial time in general.

2) I also learned that the best known proof of Szemerédi's regularity lemma yields a bound on a certain integer $n(\varepsilon)$ which is a tower of exponentials of height $\log(1/\varepsilon^5)$, i.e., the $\log(1/\varepsilon^5)$-fold iteration of the exponential function applied to $1$. This bound seems ridiculous in the sense that it does not even allow for interesting applications of this result (say with $\varepsilon=10^{-6}$) to networks like the internet, neural networks, or anything else practically thinkable. At this point this is only an upper bound, but Timothy Gowers showed that towers of height proportional to $\log(1/\varepsilon)$ are indeed necessary.
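To get a feeling for how fast tower-type bounds grow, here is a toy iterated exponential (base $2$ is chosen for illustration). For $\varepsilon = 10^{-6}$ the height $\log(1/\varepsilon^5)$ is on the order of a hundred (depending on the base of the logarithm), while already height $6$ is beyond anything physically representable.

```python
def tower(height, base=2):
    """Iterated exponential: tower(0) = 1, tower(h) = base ** tower(h-1)."""
    value = 1
    for _ in range(height):
        value = base ** value
    return value

# tower(4) = 65536; tower(5) = 2**65536 has almost 20,000 decimal
# digits; tower(6) can no longer be written down in this universe.
```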

Again, it seems that one could cook up reasonable algorithmic problems which have solutions which are polynomial time but practically useless. Maybe one can do better in concrete cases, but this then needs additional input.

Coming closer to the question: what if $P=NP$ finally holds, but the proof involves something like the existence of a solution to Graham's problem or an application of Szemerédi's regularity lemma, so that the bounds of the resulting polynomial-time algorithm are for some reason so poor that nobody even wants to construct it explicitly? Maybe the bounds are exponential for all practical purposes, but still polynomial.

I have often heard the argument that once a polynomial-time solution for a reasonable problem is found, further research has always also produced practical polynomial-time algorithms. At least for Graham's problem this seems to fail miserably so far.

Question: Is there any theoretical evidence for this?

Now, maybe a bit provocative:

Question: Why do we think that $P\neq NP$ is necessarily important?

I know that $P$ vs. $NP$ is important for theoretical and conceptual reasons, but what if $P=NP$ finally holds and no effective proof can be found? I guess this wouldn't change much.

EDIT: Just to be clear: I do not dismiss complexity theory at all and I can appreciate theoretical results, even if they are of no practical use.

"To me, dismissing complexity theory because of its love affair with worst-case, asymptotic analysis is like dismissing physics because of its love affair with frictionless surfaces, point particles, elastic collisions, and ideal springs and resistors. In both cases, people make the simplifying assumptions not because they’re under any illusions that the world really is that way, but rather because their goal is understanding." -- Scott Aaronson
– aorq, Feb 3 '11 at 21:43

Thanks a lot for this comment. I hope that the examples show that my concerns do not root in ignorance; but rather question simplifying assumptions because of concrete instances where they seem to fail.
– Andreas Thom, Feb 3 '11 at 22:00


The goal of complexity theory is not to describe the time scales that are relevant to humans. We humans are just passing by. If you insist that results be relevant to daily life, you are no longer doing mathematics.
– David Harris, Feb 3 '11 at 22:05

There's also the converse: $P\neq NP$, but there's an algorithm that solves any real-life NP-hard instance in a blink. For instance, SAT is NP-hard, but all SAT instances that "come up in practice" are easy -- rjlipton.wordpress.com/2009/07/13/…
– Yaroslav Bulatov, Feb 4 '11 at 3:16
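The phenomenon in this last comment can be seen even in a toy solver. The following minimal DPLL sketch (an illustration, not a production SAT solver) has exponential worst-case behavior, yet unit propagation alone dispatches many structured instances immediately.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver.  Clauses are lists of nonzero ints in
    DIMACS style (literal k means variable k is true, -k means false).
    Returns a satisfying assignment dict {var: bool} or None."""
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, delete falsified literals.
    simplified = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            val = assignment.get(abs(lit))
            if val is None:
                kept.append(lit)
            elif (lit > 0) == val:
                satisfied = True
                break
        if satisfied:
            continue
        if not kept:
            return None  # empty clause: conflict under this assignment
        simplified.append(kept)
    if not simplified:
        return assignment
    # Unit propagation: a unit clause forces its literal.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # Branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for val in (True, False):
        result = dpll(simplified, {**assignment, var: val})
        if result is not None:
            return result
    return None
```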

4 Answers

The $P \ne NP$ problem is the best way we know to formulate the belief (which was expressed even before the problem was formally stated) that certain algorithmic problems require an exponential number of steps as a function of the size of their description. The formulation is based on the important notion of a nondeterministic algorithm. The conjecture that $P$ is not equal to $NP$ is the basis of a mathematical theory of complexity which is remarkably beautiful. It is likely that a proof of the conjecture will lead to a better understanding of the limitations of computation which will go beyond the conjecture itself. (Unfortunately, we are very far from such a proof.) A counterexample (which is not expected) may lead to a major change of our reality and not just of our understanding of it. (I suppose this is one of the main reasons for believing the conjecture.) Your concern is that maybe, because of the asymptotic nature of the conjecture, we can question its real relevance: problems we regard as intractable may remain so even if $P=NP$ because of huge constants in the polynomial involved, and conversely, even if $NP$ is not equal to $P$ there can be efficient algorithms for relevant cases of intractable problems. These are serious concerns that are often raised, and sometimes there are efforts to study them scientifically.

Overall, it looks like these asymptotic concerns are secondary compared to the problem itself. Moreover, there are many examples where the asymptotic behavior and the practical behavior are similar beyond what the theory dictates (but also a few examples in the other direction). There are other related concerns regarding the relevance of the $P \ne NP$ problem. Is worst-case analysis relevant to practical problems? Does the hardness still apply when we are interested in an approximate solution rather than an exact one? Both of these questions (like the asymptotic issue) can be regarded as secondary in importance compared to the $NP \ne P$ problem itself, but they are otherwise very important. Indeed, both hardness of approximation and average-case analysis are very central research topics.

(A side remark: there is something unnatural about the way Graham's number is used in the question. You are dealing with a decision problem where the answer is always yes when $n$ is sufficiently large, and there is a simple polynomial-time algorithm, as a function of $n$, to decide the answer for every given $n$. So the relevance of this particular example (which is interesting) to the question asked about NP-complete problems (which is also interesting) is not so strong.)

Finally, regarding relevance: Andreas raised the interesting possibility that $NP=P$ but the constants involved are so huge that the polynomial algorithm for solving NP-complete problems is utterly impractical. A related interesting possibility is that there even exists a practical polynomial-time algorithm for NP-complete problems, but finding this algorithm is itself computationally intractable. As I said, both these possibilities are considered implausible. Even if true, they would not harm the relevance of the $NP=P$ problem in making the central distinction between intractable and tractable problems. In order to make these concerns interesting, one should come up with a theoretical framework to study them, and then study them fruitfully. (As I mentioned above, this was done for related concerns regarding approximation and average-case behavior.)

Right. These examples show surprisingly good performance where a priori exponential time is needed. I was asking more for surprisingly bad performance despite a priori polynomial time (but with large constants).
– Andreas Thom, Feb 4 '11 at 7:08

Whether $P = NP$ is in the end only a single bit of information; moreover, people expect that they are not equal. The same can be said of any of the Clay Prize problems. So what's the point? Since it is a very common assumption that $P \ne NP$ in many very important papers in computer science and mathematics, people expect that the proof will also be very interesting and that there will be a lot to learn from it. After all, every time any version of the Poincaré conjecture has been proven (by Smale, Freedman, Perelman, or even in 2D by Riemann et al) or disproven (by Milnor in the smooth case in high dimensions), the proof has led to all kinds of interesting other mathematics. Every once in a blue moon, mathematics is a little bit cruel and a long-awaited proof is not as interesting as people expected. Possibly the first proof of the transcendence of $\pi$ was an example of that. But that's not what usually happens.

In the specific case of the $P$ vs $NP$ problem, it is easy to see some of the related questions that would hopefully be affected. You can go down the list of NP-hard problems and ask, more precisely, how hard each one is. The reductions among these problems tell you that they are roughly as difficult as each other, but that's about it. Moreover, there are many other conjectured separations between complexity classes that look similar to $P$ vs $NP$. $P$ vs $PSPACE$ looks similar, but these two should be much farther apart. $P$ vs $BQP$ is a particularly juicy question, since it isn't even obvious to everyone (although I think it's a safe bet) that it's worthwhile to build a quantum computer.

Even better, there are table-turning results. Given an assumption that's not so much stronger than $P \ne NP$, it follows that $P = BPP$, i.e., every randomized polynomial time algorithm can be derandomized in polynomial time.

Yes, conceivably mathematics is cruel and it will turn out that 3-SAT can be solved in time $O(n^G)$, where $G$ is Graham's number. But why should it be? Let's say that our aesthetic and our intuition for expecting a very influential proof is good 90% of the time. Recently our batting average has been something like that. That is reason enough to explore the problem.

Thanks for your answer. However, my point was not that "Every once in a blue moon, mathematics is a little bit cruel.", but that there are concrete and natural instances where constants are just too big. Gurevich ends his note (see the link in Timothy Chow's answer) by saying "A good theory of non-asymptotic complexity is a biggest challenge of all in the area."
– Andreas Thom, Feb 3 '11 at 22:47

Maybe you're discussing it as an applied problem, whereas I view it as a pure problem? I think that it would be a cruel outcome because, as Gil says, people believe that NP-hard problems (those that are NP-hard in the Karp-reducible sense) take exponential time.
– Greg Kuperberg, Feb 4 '11 at 5:44

It is clearly relevant whether there exist feasible algorithms for NP-complete problems, where I am using the term "feasible algorithm" in the informal sense of an algorithm that we could actually run quickly in practice on problems of practical interest. Call this the Main Question.

The relevance of the $P = NP$ question lies in its perceived relevance to the Main Question. Clearly, to show that $P = NP$ is a relevant question, it would suffice to prove the lemma that $P$ is precisely the class of problems with feasible algorithms.

Of course, the lemma is obviously false, as you point out and as others have pointed out (e.g., Yuri Gurevich). Still, there is enough overlap between $P$ and the class of problems with feasible solutions that settling the $P=NP$ question would be a major partial step towards answering the Main Question. It is in this sense that $P=NP$ is a relevant question.

To respond to the question in your title, I think it's based on a false premise. I don't think anyone serious claims that "P vs NP" is "necessarily relevant", in the sense you seem to mean "relevant"---that is, I don't think anyone claims that a solution "P vs NP" will automatically have significant impact on the way computing is used. There are, however, good reasons to think that the question is of substantial theoretical interest.

As for your question at the very end, if P=NP then, for any problem, the algorithm which diagonalizes through all possible P-time algorithms is guaranteed to be a P-time algorithm for the problem. (Of course, this algorithm would be very slow.) So in some sense there can't be a non-effective proof that P=NP.
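That diagonalization can be illustrated with a toy version of Levin's universal search: dovetail over an enumeration of candidate algorithms, giving the $i$-th one about $2^{t-i}$ steps in phase $t$, and use the polynomial-time verifier to check any proposed answer. Everything below (the generator interface, the toy "programs", the subset-sum target) is an illustrative assumption, not the actual construction over all P-time machines.

```python
from itertools import combinations

def universal_search(programs, verify, max_phase=20):
    """Toy Levin search.  `programs` is a list of generator factories;
    each generator yields one candidate answer per 'step'.  In phase t,
    program i gets a cumulative budget of 2**(t-i) steps.  Return the
    first candidate that the (cheap) verifier accepts."""
    gens = [p() for p in programs]
    steps_used = [0] * len(gens)
    for phase in range(max_phase):
        for i, g in enumerate(gens):
            if g is None or i > phase:
                continue
            budget = 2 ** (phase - i)
            while steps_used[i] < budget:
                steps_used[i] += 1
                try:
                    candidate = next(g)
                except StopIteration:
                    gens[i] = None  # this program gave up
                    break
                if verify(candidate):
                    return candidate
    return None

# Toy instance: find a subset of [3, 5, 8, 13] summing to 16.
def useless():
    while True:
        yield None  # a 'program' that never finds anything

def enumerate_subsets():
    nums = [3, 5, 8, 13]
    for r in range(len(nums) + 1):
        for c in combinations(nums, r):
            yield c

# universal_search([useless, enumerate_subsets],
#                  lambda c: c is not None and sum(c) == 16)
# finds a subset summing to 16, e.g. (3, 13).
```

Note that the slow `useless` program only slows the search down by a constant factor of its budget, which is the heart of why the diagonalizing algorithm stays polynomial-time if any polynomial-time algorithm exists.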