Monday, October 05, 2009

In a prior
post
(a while back) I pondered whether Mahaney's theorem
(SAT \le_m S, S sparse, implies P=NP) should be taught in
a basic grad course in complexity.
I thought YES.

As a sequel to that I note the following:
The theorem is not in EITHER
Goldreich's Complexity book OR Arora-Barak's Complexity book.
One of the following is true:

(1) Gasarch is right and Goldreich and Arora-Barak are
wrong. It should have been in their books.

(2) Gasarch is wrong and Goldreich and Arora-Barak are
right. It's okay that it's not in their books.

(3) It is in the book but Gasarch couldn't find it.

Clearly option (2) is correct. These people have thought long
and hard about what should be in such a course and are broader and deeper
in the field than I am.
The question arises: why did I think this was important anyway?
I could argue (as some of the commenters did) about the Berman-Hartmanis
conjecture being important (all NPC sets are really THE SAME SET!)
but the real reason is that I was impressed with it in my early years
and it is hard to shake that off.

When I taught Graduate Algorithms (I really did!)
I taught Yao's MST algorithm, which runs in time O(|E| log log |V|).
The students asked me WHY this was important (there were better algorithms
available). Then I realized that I taught it only because Samir Khuller
taught it when he taught the class, so I asked him why it was
important. He said it very much impressed him when he was a graduate
student.

The contents of the Goldreich book and Arora-Barak book are
influenced by a deep understanding of the field, and not by
things that impressed them in their youth.

Is Mahaney's theorem important? The real question might be:
compared to what? As the number of things that you
absolutely have to have in a course grows, at some point
something has to be tossed out.

11 comments:

An interesting point. One thing it makes me think about is: as a field matures, some theorems/facts probably do need to lose their citations/provenance (even though this is unfair to the creators) for the good of the field. Like how the Myhill-Nerode Theorem disappears from the index of the 3rd edition of "Introduction to Automata Theory, Languages, and Computation." At some point a field matures enough that the idea in question becomes part of how you think about automata, and calling it out by name/history seems to distract from the exposition.

You are simply wrong Bill. First, because Mahaney's theorem is very important and one ought to prove it (with the easier Ogihara-Watanabe proof) in a graduate complexity course. But also because you defer your own judgement to three people who aren't primarily complexity researchers simply because they wrote a book and you didn't.

1) AGREE that IF you do the theorem you should use the Ogihara-Watanabe proof.

2) QUESTION: Why is Mahaney's theorem important?

3) Caveat to last question: Important compared to what? There are so many things to teach, so one has to decide what is worth presenting.

4) Deferring to people who have thought longer and harder about what should be in the course than I have seems reasonable to me.

5) Having said all that, you have given me (and hopefully our readers) food for thought. In the past I did Mahaney AND many generalizations of it (btt, word-reducible sets). I will certainly cut that out, though I may still do Mahaney's theorem itself. (Deferring to someone else who has thought long and hard on these issues and runs a complexity blog!)

I bought Arora-Barak's book and really liked it. Since they have page limitations, they cannot do justice to every great theorem. There are lots of great theorems out there. For a fast-growing field like TCS, "static" books cannot do full justice.

My strong suggestion is to have a Wiki dedicated to TCS. Very much like the open problem garden. This will enable different people (especially the experts) to write outlines of great results. Somebody with money and infrastructure (Eg, Intractability Center) should host such a Wiki.

Exercise 2.30. (Berman's Theorem 1978) A language is called unary if every string in it is of the form 1^i (the string of i ones) for some i > 0. Show that if there exists an NP-complete unary language then P = NP. (See Exercise 6.9 for a strengthening of this result.)

Exercise 6.9 (Mahaney's Theorem [Mah80]) Show that if a sparse language is NP-complete, then P = NP. (This is a strengthening of Exercise 2.30 of Chapter 2.)
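For readers who want the idea behind Exercise 2.30, here is one standard sketch (a common argument, not necessarily the book's intended hint):

```latex
% Setup: U is unary and NP-complete, so there is a poly-time reduction
% f from SAT to U with |f(\varphi)| \le p(|\varphi|) for a polynomial p.
%
% Self-reduction: a formula \varphi on variables x_1,\dots,x_n is
% satisfiable iff \varphi[x_1=0] or \varphi[x_1=1] is, giving a binary
% tree of restricted formulas of depth n.
%
% Pruning: if two nodes \psi, \psi' satisfy f(\psi) = f(\psi'), then
% \psi \in \mathrm{SAT} \iff \psi' \in \mathrm{SAT}, so we keep only
% one of them. Since each f(\psi) is a string 1^i with i \le p(n),
% there are at most p(n) + 1 distinct values of f, hence at most
% p(n) + 1 surviving nodes per level of the tree.
%
% Cost: O(n \cdot p(n)) nodes explored, each handled in polynomial
% time, so \mathrm{SAT} \in \mathrm{P} and therefore \mathrm{P} = \mathrm{NP}.
```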

As Boris notes, Mahaney's theorem is in our book. (BTW does the hint in our book follow Ogihara-Watanabe? If so we should add credit in the second edition.)

We're to blame for Bill not finding it, since our index is pretty bad - your best bet in finding stuff is to avoid the index and try the "search inside" function on Amazon.

I agree this is an important result: this theorem, together with Karp-Lipton is part of the basis for our intuition that despite the "unary halting" example, non-uniformity doesn't really add that much power to computation.

But I still think it's OK not to teach it in a course. That's not saying much: any one term grad complexity class will necessarily omit some very basic and fundamental results. One of the ways I try to make up for this is by including more results in guided homework exercises, and the book reflects this tendency.

I enjoyed learning about Mahaney's theorem while taking your class. I remember nothing of the proof, but I clearly remember the theorem. I think it's important for two reasons. (i) It gives a deeper understanding of what NP-complete problems are: being NP-complete is not only about being hard to solve, but also about being able to encode enough information to allow all other NP problems to be reduced to it. (ii) It is a somewhat simple example of a theorem of the form "X implies P=NP" where X is not "solve an NP-hard problem in polynomial time."

On Yao's MST algorithm, I think it's very important to show some algorithm with log log n somewhere in the complexity, so students don't freak out when they hear of any algorithm that deviates from the "standard" complexities. Personally, I prefer to teach how to switch from Boruvka to Prim to attain the log log n factor.
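As a hedged sketch of the Borůvka side of that hybrid: the code below is plain Borůvka with union-find (names are illustrative). The commenter's point is that running O(log log n) Borůvka phases shrinks the graph to roughly n/log n components, after which heap-based Prim on the contracted graph finishes within the O(m log log n) bound; only the Borůvka phases are shown here.

```python
# Minimal Boruvka MST sketch (assumes distinct edge weights for a
# unique MST; with ties, break them consistently, e.g. by edge index).

class DSU:
    """Union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def boruvka_mst(n, edges):
    """edges: list of (weight, u, v). Returns total MST weight.
    Each phase at least halves the number of components, so there
    are O(log n) phases; the log log n trick stops after O(log log n)
    phases and switches to Prim on the contracted graph."""
    dsu = DSU(n)
    total, components = 0, n
    while components > 1:
        cheapest = [None] * n  # cheapest outgoing edge per component root
        for e in edges:
            w, u, v = e
            ru, rv = dsu.find(u), dsu.find(v)
            if ru == rv:
                continue  # edge is internal to a component; skip it
            for r in (ru, rv):
                if cheapest[r] is None or cheapest[r][0] > w:
                    cheapest[r] = e
        progress = False
        for e in cheapest:
            if e is not None:
                w, u, v = e
                if dsu.union(u, v):  # union() blocks cycle edges
                    total += w
                    components -= 1
                    progress = True
        if not progress:
            break  # graph is disconnected; return spanning-forest weight
    return total
```

For example, on a 4-cycle with weights 1, 2, 3, 4 the heaviest edge is dropped and the MST weight is 1+2+3 = 6.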

I believe both these examples impress students because they somehow contradict the intuition. Impressing students and contradicting the intuition are good enough reasons to teach a topic.

It's worth mentioning that Oded's book covers Fortune's theorem, or some generalization. Exercise 3.12: Prove that if SAT is Karp-reducible to a subset S of a polynomial-time computable set G, and G\S is sparse, then SAT has a polynomial-time algorithm.

He also proves the Berman-Hartmanis Isomorphism results in Exercise 2.31.

To Boaz: Based on my limited understanding of the literature, your book does use the Ogihara-Watanabe proof of Mahaney's theorem.

To Bill: Personally, I think that while Mahaney's theorem is interesting, some generalizations (which you are probably familiar with) are much more interesting. We can define an equivalence relation on sets, where we say two sets are equivalent if their symmetric difference is sparse. The Ogihara-Watanabe techniques show that this equivalence preserves structural complexity in the following sense: if a set S is NP-hard and sparsely close to a set in P, then P=NP. (Notice that Oded's book generalizes Fortune's theorem in the direction of the above result.) One can go further in this direction; see the paper "Closeness of NP-Hard Sets to Other Complexity Classes" by Bin Fu and Hong-zhou Li. I like this generalization of Mahaney's theorem more than the original theorem itself because it says the sparseness issue is more pervasive than just with sparse sets. One might come away from Mahaney's theorem thinking that the result was "only" about sparse sets. However, even this more general result might not make the cut for a basic course because it's hard (for me) to relate it to other results.

Somebody pointed me to this discussion (as I am one of the authors of the complexity books cited). I must point out that in my years of asking people what should or should not be covered in our book, I noticed that there was definitely a generational effect in people's answers. To give a random example, people of my generation (those introduced to complexity around 1990) find Toda's theorem a central result of complexity. People even 4-5 years younger than me often don't. (I don't intend to start another pointless discussion about whether or not they are wrong.)

Both Lance and Bill were introduced to complexity in the early 1980s, which will undoubtedly show up in their tastes.

Unfortunately, in writing our book we were forced to disappoint large subcommunities in the field, by devoting very limited space to subareas such as structural complexity, proof complexity, algebraic complexity, real complexity, communication and decision tree complexity etc. All of these subareas have been covered in excellent books that are almost as thick as our book.

BTW here is my answer to the question in this post: I do not teach Mahaney's theorem in class but may give it as an exercise with a suitable hint. I believe the same is true of complexity courses at MIT, Berkeley, etc. I think it is quite a reasonable decision to teach it, but inevitably this would mean subtracting something whose omission would feel odd too.

Let me join in as the last person being referred to in the original post. I don't think there are "absolute" answers re this specific question or any other specific questions of this type.

Each person, teaching or writing a book/survey etc., makes choices, which are governed by some principles. I think both my book and the Arora-Barak book clearly state some of the principles that govern their choices, which I think is a good thing to do. So I'd refer you there.

Lastly, let me protest Lance's comment that the three authors are not "primarily complexity researchers"; I find this comment very weird, to say the least...