We all have favorite papers in our own respective areas of theory. Every once in a while, one finds a paper so astounding (e.g., important, compelling, deceptively simple, etc.) that one wants to share it with everyone. So list these papers here! They don't have to be from theoretical computer science -- anything that you think might appeal to the community is a fine answer.

You can give as many answers as you want; please put one paper per answer! Also, notice this is community wiki, so vote on everything you like!

$\begingroup$In the answers, I'd like to see more emphasis on whether it really is a good idea to read the original paper nowadays (or if it makes much more sense to read a modern textbook exposition of it). I have too often seen TCS papers that are truly seminal, but I'd rather save my colleagues from the pain of trying to decipher the original write-up – which is far too often a hastily-written 10-page conference abstract, with references to a "full version" that never appeared...$\endgroup$
– Jukka Suomela, Sep 12 '10 at 9:46

$\begingroup$Yes, I hope it is clear that papers of this type are not good for the list (if you want to share it with everyone, then it shouldn't be a pain to read)$\endgroup$
– Ryan Williams, Sep 12 '10 at 16:22

$\begingroup$Too many people are just posting one-liners. Anyone can post hundreds of unique papers without putting any thought into it. Please post why you think everyone should read those papers. This means justifying why they should read that paper instead of someone else's write-up of the result, and what is so awesome about the paper that everyone should read it.$\endgroup$
– Robin Kothari, Sep 16 '10 at 19:18

$\begingroup$Good question. My opinion is that if you want to understand the minds of the inventors, and possibly understand how to invent things, you have to read their own words. The more you labor, the closer you get to their actual thought process.$\endgroup$
– ixtmixilix, Sep 26 '10 at 22:07

Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society s2-42, 230–265, 1937. doi: 10.1112/plms/s2-42.1.230

In just 36 pages, Turing formulates (but does not name) the Turing Machine, recasts Gödel's famous First Incompleteness Theorem in terms of computation, describes the concept of universality, and in the appendix shows that computability by Turing machines is equivalent to computability by $\lambda$-definable functions (as studied by Church and Kleene).

$\begingroup$Also, very approachable. I read it quite some time ago, when I had basically no CS background, no programming experience and didn't even know what a compiler was.$\endgroup$
– Jörg W Mittag, Sep 13 '10 at 19:05

$\begingroup$"Last week, Googler Ken Thompson was awarded the Japan Prize in Information and Communications for his early work on the UNIX operating system." (src: Buzz post from Life at Google)$\endgroup$
– Sebastián Grignoli, May 26 '11 at 5:23

$\begingroup$I would think this paper would be pretty difficult to digest without at least knowing what a compiler is.$\endgroup$
– Fixee, Sep 2 '11 at 4:26

This paper explains and reinforces the notion that floating point isn't magic. It explains overflow, underflow, denormalized numbers, NaNs, infinities, and everything these imply. After reading it, you'll know why a == a + 1.0 can be true, why a == a can be false, why running your code on two different machines can give you two different answers, why summing numbers in a different order can change the result by an order of magnitude, and all the other wacky things that happen when you map an uncountably infinite set of numbers onto a finite one.
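All of the quirks listed above are easy to reproduce. A minimal Python sketch (CPython floats are IEEE 754 double precision, the format the paper analyzes):

```python
import math

# Absorption: near 1e16 the gap between consecutive doubles is 2.0,
# so adding 1.0 rounds straight back to the same value.
a = 1e16
print(a + 1.0 == a)        # True: a == a + 1.0 holds

# NaN compares unequal to everything, including itself.
nan = float("nan")
print(nan == nan)          # False: a == a can fail
print(math.isnan(nan))     # True

# Summation order matters: the 1.0 terms are absorbed one at a time
# on the left, but combine and survive on the right.
left = (1e16 + 1.0) + 1.0  # 1e16
right = 1e16 + (1.0 + 1.0) # 1.0000000000000002e16
print(left == right)       # False

# Overflow saturates to inf; the smallest denormal underflows to zero.
print(1e308 * 10)          # inf
print(5e-324 / 2)          # 0.0
```

Each comment states behavior mandated by IEEE 754 round-to-nearest-even, exactly the rounding model the paper works through.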

$\begingroup$I always think that CS research papers are written in some foreign language.$\endgroup$
– Berlin Brown, Apr 26 '11 at 21:00

$\begingroup$Very good! It would be worth putting in the site's tagline banner to make sure no student misses it.$\endgroup$
– Vag, May 25 '11 at 13:28

$\begingroup$The second link is currently broken$\endgroup$
– Christopher Manning, Jun 21 '12 at 4:02

$\begingroup$This is my favourite from the list. Also note that this is a living document, unlike most papers, which do not receive updates after being published.$\endgroup$
– Dennis, Sep 10 '13 at 7:07

Paths, Trees and Flowers by J. Edmonds. This paper on a classic combinatorial optimization problem is not only well written; it also argues that the notion of a "polynomial-time algorithm" is essentially a synonym for efficiency.

Reducibility Among Combinatorial Problems by Richard Karp. The paper contains what's often referred to as Karp's "original 21 NP-complete problems." In many ways, this paper truly motivated the study of NP-completeness by demonstrating its applicability to a wider domain. Very readable.

This was the first paper that took the study of time complexity seriously, and surely was the primary impetus for Hartmanis and Stearns' joint Turing award. While their initial definitions are not quite what we use today, the paper remains extremely readable. You really get the feeling of how things were in the old "Wild West" frontier of the 60's.

He introduces the idea of quantum computation, describes quantum circuits, explains how classical circuits can be simulated by quantum circuits, and shows how quantum circuits can compute functions without lots of garbage qubits (using uncomputation).

He then shows how any classical circuit can be encoded into a time-independent Hamiltonian! His proof goes through for quantum circuits too, thereby showing that simulating the time evolution of Hamiltonians is BQP-hard! His Hamiltonian construction is also used in the proof of the quantum version of the Cook–Levin theorem, due to Kitaev, which shows that k-local Hamiltonian is QMA-complete.
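For a flavor of the construction, here is the propagation term of the "clock" Hamiltonian as it appears in later treatments (the notation below follows Kitaev's formulation rather than the paper's original symbols): for a circuit with gates $U_1, \dots, U_T$ and a clock register $|t\rangle$,

$$H_{\mathrm{prop}} = \sum_{t=1}^{T} \frac{1}{2}\Big( |t\rangle\langle t| \otimes I + |t-1\rangle\langle t-1| \otimes I - |t\rangle\langle t-1| \otimes U_t - |t-1\rangle\langle t| \otimes U_t^{\dagger} \Big).$$

Its ground space is spanned by "history states" $\frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |t\rangle \otimes U_t \cdots U_1 |\psi\rangle$ (the empty product at $t=0$ being the identity), which superpose every intermediate step of the computation.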

$\begingroup$That's the one. I added a new link and a link to its page on the publisher's website.$\endgroup$
– Robin Kothari, Oct 1 '10 at 23:27

$\begingroup$Did the notions of BQP and QMA exist when Feynman wrote this paper? Or is there a recent paper which draws this connection? Any reference/exposition of this fact that k-local Hamiltonian is QMA complete?$\endgroup$
– Anirbit, Jan 3 '16 at 10:54

I'm surprised that no one has mentioned Håstad's "Some Optimal Inapproximability Results" (JACM 2001; originally STOC 1997). This landmark paper is so well written that you can come to it with little more than mathematical maturity, and it will make you want to learn several things well: its Fourier techniques, parallel repetition, gadgets, and more.

The complexity of theorem-proving procedures by Stephen A. Cook. This paper proves that all the languages decided by polytime nondeterministic Turing machines can be (Cook-)reduced to the set of propositional tautologies.

The importance of this result is (at least) twofold: first, it shows that there exist problems in NP which are at least as hard as the whole class, the NP-complete problems; furthermore, it provides a concrete example of such a problem, which can then be reduced to others in order to prove them complete.

Nowadays Karp reductions are more commonly used than Cook reductions, but the main proof of this paper can be easily adapted to show that SAT is NP-complete with respect to Karp reductions.
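As a toy illustration of what a Karp reduction looks like in practice, here is a sketch in Python of the standard reduction from graph 3-coloring to SAT (not Cook's original tableau construction): a single polynomial-time mapping of instances, sending yes-instances to satisfiable formulas and no-instances to unsatisfiable ones.

```python
from itertools import combinations, product

def coloring_to_sat(vertices, edges, k=3):
    """Karp reduction from graph k-coloring to CNF satisfiability.
    Variable x[v, c] (numbered from 1) means 'vertex v gets color c'.
    Clauses are lists of signed integers, DIMACS-style."""
    x = {(v, c): i + 1 for i, (v, c) in
         enumerate((v, c) for v in vertices for c in range(k))}
    clauses = []
    for v in vertices:
        clauses.append([x[v, c] for c in range(k)])    # at least one color
        for c1, c2 in combinations(range(k), 2):
            clauses.append([-x[v, c1], -x[v, c2]])     # at most one color
    for u, w in edges:
        for c in range(k):
            clauses.append([-x[u, c], -x[w, c]])       # endpoints differ
    return clauses, len(x)

def brute_force_sat(clauses, n):
    """Exponential-time satisfiability check; for toy instances only."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
        for bits in product([False, True], repeat=n))
```

A triangle is 3-colorable, so its formula is satisfiable; $K_4$ is not, so its formula is unsatisfiable. The point is that the reduction itself runs in polynomial time; the brute-force checker is only there to verify the toy instances.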

$\begingroup$This is one of those conference papers for which no journal version ever appeared, but this one is definitely worth going back to: well written and full of great side comments.$\endgroup$
– András Salamon, Sep 12 '10 at 16:42

From the abstract: In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics.

This rather magical paper was the first one to formalize streaming algorithms and prove rigorous upper and lower bounds for foundational tasks in the streaming model. Its techniques are simple, its proofs are beautiful, and its impact has been profound. The work won Alon, Matias and Szegedy the Gödel Prize in 2005.
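To give a flavor of the paper's style, here is a Python sketch of its best-known result, the estimator for the second frequency moment $F_2$: keep a single counter that adds a random $\pm 1$ sign per item; the square of the counter is an unbiased estimate of $F_2$, and averaging independent copies tames the variance. (For clarity this sketch draws fully random signs per trial; the paper uses 4-wise independent hash families to get logarithmic space.)

```python
import random

def ams_f2(stream, trials=1000, seed=0):
    """One-pass AMS-style sketch for F2 = sum_i f_i^2 over item
    frequencies f_i.  Each trial maintains Z = sum over the stream of a
    random +/-1 sign fixed per distinct item; E[Z^2] = F2.  Averaging
    independent trials reduces the variance of the estimate."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        sign = {}  # this trial's random +/-1 'hash' per distinct item
        z = 0
        for item in stream:
            if item not in sign:
                sign[item] = rng.choice((-1, 1))
            z += sign[item]
        total += z * z
    return total / trials
```

For a stream with item frequencies 3, 4, and 5, the true value is $F_2 = 9 + 16 + 25 = 50$, and the averaged estimate lands close to it.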

Russell Impagliazzo's A Personal View of Average-Case Complexity. This is a great paper because it is cleverly written, and it summarizes the state of affairs in five "worlds" where our conjectures about complexity are resolved in various ways, giving real-world consequences in each case.

Extractors and Pseudorandom Generators by Luca Trevisan. In this paper a good randomness extractor is built by means of error-correcting codes and combinatorial designs. The construction is quite easy to understand, but it is completely stunning, because it is not at all obvious what the connection between extractors, codes, and designs should be.

It is also a good example of a result in TCS that requires some fancy combinatorics.

$\begingroup$I read this and I read A Mathematician's Lament by Lockhart (maa.org/devlin/LockhartsLament.pdf). IMHO, the strategy that Lamport suggests goes against what Lockhart argues about the beauty of mathematics.$\endgroup$
– Marcos Villagra, Apr 25 '11 at 5:02

$\begingroup$Very interesting read. I understand your opinion, but if I'm not mistaken, Lamport aims his message towards people who are more "mathematically educated" than those targeted by Lockhart, who aims at helping students develop a taste for mathematics. I'll also admit that following a strict format makes proofs quite dull to read, but I agree with Lamport on the idea of proofs by levels: you do not always want/need/have time to read everything in detail, and even when you do, having a summary of what's to come can be quite helpful. Quite a lot more than those "easy to see/clearly/wlog/..." ;-)$\endgroup$
– Anthony Labarre, May 17 '11 at 11:27

More seriously, I think most papers should not be read in the original. As time passes, people figure out better ways of understanding and presenting the original problem and its solution. Except for Turing's original paper, which is of historical importance, I would not recommend reading most original papers if there is follow-up work that cleaned them up. In particular, a lot of material is presented much better in books than in the original.

$\begingroup$This comment is true in general, but Ryan explicitly asks for examples for which this is not true. There are many classic papers that contain conjectures not yet proved, techniques that have been overlooked, or results that tend to be forgotten but could be dusted off and put to new uses.$\endgroup$
– András Salamon, Sep 17 '10 at 11:35

$\begingroup$I disagree. It is true that original papers are sometimes unreadable and secondary works give better expositions of the results, but sometimes the original papers contain ideas which are omitted in later works. Also, reading original papers can teach us how the author came up with the idea. Take a look at this post of Timothy Chow on MO: mathoverflow.net/questions/28268/do-you-read-the-masters$\endgroup$
– Kaveh, Sep 29 '10 at 18:35

$\begingroup$It's great when this happens. I just claim that it is somewhat rare.$\endgroup$
– Sariel Har-Peled, Sep 30 '10 at 4:14

$\begingroup$You say "All of them", but don't you then argue for "None of them"?$\endgroup$
– Peter Taylor, Jan 19 '11 at 11:28

$\begingroup$By the way, I am not advocating this paper -- just edited to fix typos and add a link. I prefer Gold's paper if one wants a classic paper about language.$\endgroup$
– András Salamon, Sep 17 '10 at 13:10
