
Sunday, June 08, 2008

A seminar on randomization

I'll be running a graduate seminar on randomization next semester. The goal is to cover, in fourteen 80-minute student presentations, a range of topics all centered around our ability to toss coins. Some of the constraints:

Student background is good, but not extensive (i.e., they are familiar with Chernoff bounds, but not much else; little complexity background, and Kleinberg-Tardos-level familiarity with algorithms)

Breadth is important: it matters to me that students become familiar with a wide range of topics rather than drilling deeply into one (you could run an entire semester-long course on concentration of measure alone). This goes back to the previous point: at this stage, I'd like students to get as wide an exposure as feasible, in the hope that it will make deeper material more accessible to them later on.

Source material should be accessible and presentable (by students) in 80 minute chunks.

Here's a list of topics: I'd love to hear what people think, and get suggestions on changes/additions/deletions. Pointers to source material will also be helpful.

A randomized algorithm has either an uncertain running time or some likelihood of error. How do we analyze such algorithms? This will lead directly to an investigation of tail bounds on distributions, martingales, and the method of bounded differences. We will also look at the famous minimax principle for proving lower bounds on the complexity of randomized algorithms.
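To give a flavor of the kind of algorithm we'll be analyzing, here's a sketch of Freivalds' algorithm for verifying a matrix product, a textbook one-sided-error randomized algorithm (the matrices and trial count below are just illustrative choices of mine):

```python
import random

def freivalds(A, B, C, trials=20):
    """Check whether A @ B == C for n x n matrices (lists of lists),
    with one-sided error.

    Each trial picks a random 0/1 vector r and tests A(Br) == Cr,
    which costs O(n^2) per trial instead of the O(n^3) of computing
    the full product.  If A @ B == C the test always passes; if not,
    each trial catches the discrepancy with probability >= 1/2, so a
    false 'True' happens with probability at most 2^(-trials).
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # a definite witness: A @ B != C
    return True  # probably equal

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product A @ B
D = [[19, 22], [43, 51]]  # off by one in a single entry
```

A "False" answer is always correct; only "True" carries (exponentially small) risk, which is exactly the kind of error-versus-running-time tradeoff the tail-bound machinery is for.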

We'll take a tour of the many places where randomization leads to more efficient (and more elegant) algorithms: hashing, approximate counting, randomized incremental constructions, and randomized rounding.
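As one small example of randomness buying efficiency, here's a sketch of Morris's approximate counter, which counts n events using roughly log log n bits (the demo parameters are mine, chosen for illustration):

```python
import random

def morris_increment(x):
    """One step of Morris's approximate counter.  The register x
    holds (roughly) the base-2 log of the count, so counting n
    events needs only O(log log n) bits: on each event, increment
    x with probability 2^(-x)."""
    if random.random() < 2.0 ** (-x):
        return x + 1
    return x

def morris_estimate(x):
    """2^x - 1 is an unbiased estimate of the number of increments."""
    return 2.0 ** x - 1

# A single counter is unbiased but noisy (its standard deviation is
# comparable to the count itself); averaging independent counters
# tightens the estimate.
random.seed(7)
x = 0
for _ in range(10_000):
    x = morris_increment(x)
print(morris_estimate(x))
```

The analysis of how many independent counters you need to average is, fittingly, a Chernoff-bound exercise.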

We'll get acquainted with BPP (the randomized analogue of P) and other randomized complexity classes, and see how randomness and hardness are connected. We'll also look at interactive proofs.
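One fact worth internalizing early: the 2/3 in BPP's definition is arbitrary, because majority voting over repeated runs drives the error down exponentially (a Chernoff bound again). This toy simulation makes the point; the "decider" below is a made-up stand-in for a BPP machine, not any real algorithm:

```python
import random

def noisy_decider(x, p_correct=2.0 / 3.0):
    """Stand-in for a BPP machine: answers the (contrived) question
    'is x even?' correctly with probability 2/3."""
    truth = (x % 2 == 0)
    return truth if random.random() < p_correct else not truth

def amplified(x, runs=151):
    """Run the noisy decider an odd number of times and take a
    majority vote.  By a Chernoff bound the error probability drops
    exponentially in `runs`, which is why any constant above 1/2 in
    BPP's definition yields the same class."""
    votes = sum(1 for _ in range(runs) if noisy_decider(x))
    return votes * 2 > runs
```

The single-run decider is wrong a third of the time; the amplified version is almost never wrong.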

Where does randomness come from? If we have a deterministic procedure for generating random bits, in what sense is the output still random? We'll talk about pseudorandomness (the art of making more from less).
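A linear congruential generator makes the "more from less" point in miniature: one short seed is deterministically stretched into an arbitrarily long stream that fools simple statistical tests. This toy sketch uses the textbook a=1103515245, c=12345, m=2^31 parameters; it is emphatically not a cryptographic generator, and that gap is part of what the pseudorandomness lectures are about:

```python
def lcg(seed, n, a=1103515245, c=12345, m=2 ** 31):
    """A toy linear congruential generator: from one short seed, a
    deterministic recurrence x <- (a*x + c) mod m produces n
    outputs that look random to crude tests, even though the whole
    stream is fixed once the seed is known."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

stream = lcg(seed=42, n=5)
```

The interesting question the seminar takes up is what "looks random" should mean formally: to which class of observers, and with how much stretch.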

Since it's tricky to make random bits, we'd like to minimize the effort involved. How can we 'derandomize' algorithms?
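The classic first example here is the method of conditional expectations. A random cut of a graph cuts half the edges in expectation; fixing vertices one at a time, always keeping the conditional expectation from dropping, turns that into a deterministic guarantee. A sketch for MAX-CUT (my own toy implementation, not from any particular source):

```python
def derandomized_max_cut(n, edges):
    """Method of conditional expectations for MAX-CUT on a simple
    graph with vertices 0..n-1.

    Placing each vertex on a uniformly random side cuts each edge
    with probability 1/2, so the expected cut size is |E|/2.  We fix
    vertices in order, choosing for each the side that keeps the
    conditional expectation at least as large; the result is a
    deterministic cut of size >= |E|/2.
    """
    side = {}
    for v in range(n):
        # gain = (edges cut if v -> side 0) - (edges cut if v -> side 1),
        # over edges to already-placed neighbors; edges to unplaced
        # neighbors contribute 1/2 either way and cancel out.
        gain = 0
        for (a, b) in edges:
            other = b if a == v else a if b == v else None
            if other is None or other not in side:
                continue
            gain += 1 if side[other] == 1 else -1
        side[v] = 0 if gain >= 0 else 1
    cut = sum(1 for (a, b) in edges if side[a] != side[b])
    return side, cut
```

No coin is ever tossed, yet the analysis of the randomized algorithm is what certifies the greedy choice.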

(If time permits) Quantum computing would appear to be "probability with complex numbers". What makes it similar to (and different from) probabilistic computation?