Computer Science 522
Computational Complexity

Spring 2009

News:
· The FINAL is now available.
· All students taking this course should join the mailing list. It is also
recommended for anyone planning to attend some of the lectures. (I'll announce
the topics of the more advanced lectures in advance on the mailing list.)
· I will assume all students have basic familiarity with NP-completeness,
space complexity, diagonalization, and basic notions of discrete probability
and linear algebra. See chapters 1-6 and the appendix of the textbook. Email
me for any clarifications.

Requirements and grading: submitting and grading weekly homework assignments
(50% of the final grade), and a take-home final exam (50% of the final grade).

Prerequisites: There are no formal prerequisites, but I
will assume some degree of mathematical maturity and familiarity
with basic notions such as functions, sets, graphs, O notation, and
probability over finite sample spaces. See the appendix of the book to brush
up on this material.

Course Information

This is a graduate course in computational complexity, including both
"classical" results from the last few decades, and very recent results from
the last few years.

Complexity theory deals with the power of efficient computation.
While in the last century logicians and computer scientists developed
a fairly good understanding of the power of finite-time algorithms
(where "finite" can mean an algorithm that, on a 1000-bit input, would take
longer than the lifetime of the sun to run),
our understanding of efficient algorithms is quite poor.
Thus, complexity theory contains more questions, and relationships between
questions, than actual answers. Nevertheless, we will learn about some fascinating
insights, connections, and even a few answers, that have emerged from complexity
theory research.

Among the questions we will tackle (for various types
of computational problems) are:

Are algorithms that are given more time always able to solve more problems?

Is verifying solutions to problems easier than coming up with such solutions?

Is finding approximate answers easier than finding exact answers?

Can tossing coins help us compute faster? (See the sketch following this list.)

Is solving problems on an average input easier than solving them on every input?

Can you verify that an algorithm solves a problem without solving it yourself?

Can we prove that some interesting problems cannot be solved efficiently?

Can the counterintuitive notions of quantum mechanics help us solve some problems faster?
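
To make the coin-tossing question concrete, here is a minimal sketch of
Freivalds' classical randomized check that a matrix product was computed
correctly. The code is my own illustration (the function names and the
trials parameter are not from the course materials):

    import random

    def freivalds(A, B, C, trials=20):
        # Randomized check that A*B == C for n x n integer matrices
        # (lists of lists). If A*B != C, each trial detects this with
        # probability >= 1/2, so the test errs with probability at
        # most 2**(-trials).
        n = len(A)

        def matvec(M, v):
            # Multiply matrix M by column vector v in O(n^2) operations.
            return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            # Compare A(Br) with Cr; both sides cost O(n^2) operations,
            # not full matrix-multiplication time.
            if matvec(A, matvec(B, r)) != matvec(C, r):
                return False  # certainly A*B != C
        return True  # probably A*B == C

No comparably fast deterministic verification is known, so this simple test
illustrates both the coin-tossing question and the verification question above.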

I also plan to cover some more advanced topics, including results from the
last couple of years.
These include derandomization, expanders and extractors,
Reingold's deterministic O(log n)-space algorithm for undirected s-t connectivity, and the
PCP theorem. A recurring theme
in this course will be the notion of obtaining combinatorial objects with random and
pseudorandom properties.
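
As a taste of the connectivity result: Reingold's theorem derandomizes the
following classical randomized log-space algorithm of Aleliunas, Karp, Lipton,
Lovász, and Rackoff. This sketch is mine, and the walk length is a loose
illustrative bound rather than the tightest one:

    import random

    def st_connected(adj, s, t):
        # One-sided-error randomized test for s-t connectivity in an
        # undirected graph given by adjacency lists. A random walk from s
        # covers its connected component in O(n*m) expected steps, so a
        # walk of roughly n^3 steps reaches t with probability >= 1/2
        # whenever s and t are connected, and never errs when they are not.
        n = len(adj)
        v = s
        for _ in range(4 * n ** 3):  # comfortably above the expected cover time
            if v == t:
                return True
            if adj[v]:  # move to a uniformly random neighbor
                v = random.choice(adj[v])
        return v == t  # False means "probably not connected"

Between steps the algorithm stores only the current vertex and a counter,
i.e., O(log n) bits; Reingold's contribution was to achieve the same space
bound deterministically.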

Perhaps the question that will occur to you after attending this
course is "How is it that all these seemingly intelligent people
have been working on this for several decades and have not managed
to prove even some ridiculously obvious conjectures?". The answer is
that we need your help to solve some of these problems, and get rid
of this embarrassing situation.

Readings:

Our main textbook will be the upcoming book
Computational Complexity: A Modern Approach
by Sanjeev Arora and me.
Drafts of the book will be available from Pequod Copy. Whenever
presenting material that is not in this book, I will provide references to the relevant
research papers or other lecture notes.

Homework

Homework will be assigned in each class and is
due at the next class. You may submit the
homework earlier, but please do not submit it later. However, I
will accept homework up to the following Monday at a penalty of 30
points. Each homework assignment will be checked by 1-2 students
within a week. The students checking a homework assignment are not
exempt from submitting that assignment.
The homework grade will be the average of all assignments. I will
not drop the lowest-graded assignment. However, some assignments
will contain bonus points (i.e., will be worth more than 100 points), and checking
the homework for a given week will give you a 40-point bonus on that
assignment (20 points if 2 students are checking).
I encourage you to use LaTeX to write your solutions. You might find
the following short guide to
LaTeX written by Dave Xiao useful (you might also want to
look at the source files for the guide: latex-guide.tex and macros.tex). To make typing in LaTeX
easier, I will make the LaTeX source for all the homework
exercises available.
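
If you have never used LaTeX before, a solutions file can be as short as the
following skeleton (a generic sketch of my own; the guide and macros.tex may
use different conventions):

    \documentclass{article}
    \usepackage{amsmath,amssymb,amsthm}

    \newtheorem{claim}{Claim}

    \begin{document}

    \section*{Homework 1}

    \paragraph{Problem 1.}
    \begin{claim}
    $\mathsf{3SAT}$ is $\mathsf{NP}$-complete.
    \end{claim}
    \begin{proof}
    Your proof goes here.
    \end{proof}

    \end{document}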

The "old" PCP proof: The "pre-Dinur" proof of the PCP theorem is also worth looking at, as some of
the tools that are not necessary for the current proof (e.g., the low degree test) are important and interesting
in their own right. See these lecture notes by Feige
for this proof.
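
For orientation, both Dinur's proof and the pre-Dinur proof establish the same
statement (a standard formulation, with the usual choice of constants):

\[
\mathsf{NP} \;=\; \mathsf{PCP}\bigl(O(\log n),\, O(1)\bigr),
\]

i.e., every NP language has proofs that a probabilistic polynomial-time
verifier can check by tossing $O(\log n)$ coins and reading only $O(1)$ bits
of the proof, accepting a correct proof of a true statement with probability 1
and any claimed proof of a false statement with probability at most $1/2$.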

Additional reading: The hardcore lemma is from the paper "Hard-core distributions for
somewhat hard problems" by Russell Impagliazzo
(see also his Wikipedia entry).
The XOR lemma has several different proofs with varying parameters; see this survey
by Goldreich, Nisan and Wigderson.
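
One illustrative way to state the XOR lemma (the exact relationship between
the parameters is precisely what varies among the proofs in the survey, so
treat the quantities below as schematic): if every circuit of size $s$ fails
to compute $f\colon\{0,1\}^n\to\{0,1\}$ on at least a $\delta$ fraction of
inputs, then for

\[
f^{\oplus k}(x_1,\dots,x_k) \;=\; f(x_1)\oplus\cdots\oplus f(x_k),
\]

every circuit of size $s'$ (with $s'$ polynomially related to $s$) agrees with
$f^{\oplus k}$ on at most a $\tfrac12+\varepsilon$ fraction of inputs, where
$\varepsilon$ is roughly $(1-\delta)^k$ plus a small error term.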

A derandomized version of the XOR lemma, which given a function on n bits only needs to move to
a function on O(n+k) bits to obtain hardness similar to what the original version achieves with k repetitions (and hence nk bits),
was given in this paper by Impagliazzo and Wigderson. In particular,
combined with what we've seen, this paper shows how to get BPP = P from functions with 1-1/n hardness for exponential-size circuits.
(We'll show how to get such functions from functions that are merely worst-case hard next time.)
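
Schematically (with parameters suppressed, and only a rough description of the
construction): the plain XOR lemma evaluates $f^{\oplus k}$ on $k$ independent
$n$-bit blocks, for $nk$ input bits in total, while the derandomized version
evaluates

\[
x \;\longmapsto\; f^{\oplus k}\bigl(G(x)\bigr), \qquad
G\colon\{0,1\}^{O(n+k)}\to\bigl(\{0,1\}^n\bigr)^k,
\]

where the generator $G$ stretches its short seed into $k$ correlated blocks
(using expander walks and related pseudorandom objects) in a way that still
amplifies hardness.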

I highly recommend this survey
by Valentine Kabanets on derandomization. It contains brief
descriptions of, and pointers to, many of the latest results and exciting research directions in this field.

Hardness vs. randomness tradeoff: The results shown in class generalize to a tradeoff between
the assumed circuit size required to compute functions in E and the resulting deterministic time to simulate BPP.
However, the currently known approach to obtaining an optimal tradeoff (by this we mean optimal with respect to "black-box" proofs)
is somewhat different (in particular, it uses error-correcting codes but not the NW generator). This is obtained in the
following two papers by Shaltiel and Umans
and by Umans. You can also see a PowerPoint
presentation by Ronen Shaltiel on this topic.
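
Stated loosely, the tradeoff has the following well-known endpoints, with the
intermediate regime interpolating between them:

\[
\mathsf{E}\not\subseteq\mathsf{SIZE}\bigl(2^{\varepsilon n}\bigr)\ \text{for some }\varepsilon>0
\;\Longrightarrow\; \mathsf{BPP}=\mathsf{P},
\]
\[
\mathsf{E}\not\subseteq\mathsf{SIZE}\bigl(n^{c}\bigr)\ \text{for every }c
\;\Longrightarrow\; \mathsf{BPP}\subseteq\bigcap_{\varepsilon>0}\mathsf{DTIME}\bigl(2^{n^{\varepsilon}}\bigr).
\]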

Randomness extractors: Another topic we did not touch on is randomness extractors, which are used not to derandomize
BPP but to execute probabilistic algorithms without access to truly independent and uniform coin tosses. The following
survey by Ronen Shaltiel is a good starting point for information on this topic. See also the following
presentation by Salil Vadhan.
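
For reference, the standard definition (as in Shaltiel's survey, up to
notation): a function

\[
\mathrm{Ext}\colon\{0,1\}^n\times\{0,1\}^d\to\{0,1\}^m
\]

is a $(k,\varepsilon)$-extractor if for every distribution $X$ over
$\{0,1\}^n$ with min-entropy at least $k$ (that is, $\Pr[X=x]\le 2^{-k}$ for
all $x$), the output $\mathrm{Ext}(X,U_d)$ is within statistical distance
$\varepsilon$ of uniform, where $U_d$ is a short truly random seed.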

More resources on derandomization and pseudorandomness: As you can see, one could make a whole course out of the
topics on pseudorandomness we did not cover, and indeed several such courses with excellent lecture notes have been
given. Some recommended links are the courses of Shaltiel, Trevisan, Zuckerman, Goldreich, and Vadhan.