Wednesday, May 16, 2007

Gödel Prize, Naturally

A computer scientist who did some work in complexity theory once quoted someone: "if you wish to separate NP from P, it is not NP that you need to understand, it is P that you need to understand".

The natural proofs argument says: if you're going to do that, watch out -- you might need deeper thoughts than you're capable of.

Let's say we don't like a circuit complexity class C - we think it's too weak in its computational power (think AC^0, nonuniform versions of Logspace or P). We go into our laboratory, cook up deep thoughts about C, and come up with "nutshell" observations like: if a Boolean function can be computed by C, then it can't be too special -- in fact, it must have property P. [Running example: if a function can be computed by an AC^0 circuit family, it can't be "too sensitive" (the function's value on a random point of the Boolean hypercube must agree with all but polylog many neighbors of the point, whp)]. Then we think a little bit more, and say, "Hey, but wait, here's this function g that doesn't have property P" [Running example, contd: The parity function is like a sweet-tooth on a dentist's chair -- too damn sensitive.] And voila, we've shown a lower bound against class C.
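A toy numerical check of the running example (my own sketch, not from the original argument): parity disagrees with *every* one of its n hypercube neighbors, while a random function disagrees with only about half of them.

```python
# Measure how often a Boolean function disagrees with its neighbors on
# the hypercube.  Points are n-bit integers; neighbor i is x with bit i
# flipped.  Parity is maximally sensitive; a random function is ~n/2.
import random

def sensitivity_at(f, x, n):
    """Number of coordinates i where flipping bit i of x changes f."""
    return sum(f(x ^ (1 << i)) != f(x) for i in range(n))

n = 12
parity = lambda x: bin(x).count("1") % 2

random.seed(0)
table = [random.randrange(2) for _ in range(1 << n)]
rand_f = lambda x: table[x]

points = [random.randrange(1 << n) for _ in range(200)]
avg = lambda f: sum(sensitivity_at(f, x, n) for x in points) / len(points)

print(avg(parity))   # exactly n = 12: parity flips on every neighbor
print(avg(rand_f))   # about n/2, i.e. around 6, for a random function
```

The "can't be too sensitive" property of AC^0 functions is precisely that this average stays polylog(n) rather than growing linearly, which is why parity escapes the class.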

All's well, but now let's go and analyze our thought process. Specifically, let's ask: what's the computational complexity of the deep thought that led to the lower bound? In other words, what's the computational (say, circuit) complexity of the property P itself? Suppose that P(f), viewed as a function of the N = 2^n-bit truth table of f, can be computed by some circuit of type/complexity D.

On the other hand, by virtue of the fact that we were able to describe property P succinctly, in one or two English sentences, it ought to be the case that a random Boolean function almost surely won't have it. Since we're supposed to be formal mathematicians, let's not handwave; instead let's turn this into an assumption: assume that P(f) is False for a random Boolean function f w.p. 1 - o(1).
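For the running example, this assumption is easy to check empirically (a quick sketch; the property here is the illustrative "average sensitivity below n/4", a threshold I'm choosing arbitrarily to stand in for "not too sensitive"):

```python
# Sample random Boolean functions on n bits and count how many satisfy
# the toy low-sensitivity property.  A random function has average
# sensitivity concentrated near n/2, so essentially none of them do.
import random

random.seed(2)
n, N, trials = 8, 1 << 8, 100

def avg_sensitivity(table):
    s = sum(table[x] != table[x ^ (1 << i)]
            for x in range(N) for i in range(n))
    return s / N

hits = sum(avg_sensitivity([random.randrange(2) for _ in range(N)]) < n / 4
           for _ in range(trials))
print(hits, "of", trials)   # 0 of 100: random functions lack the property
```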

The punchline of this kind of don't-attempt-early-in-the-morning-before-a-pot-of-Earl-Grey thinking: if class C is capable of producing pseudorandom functions that fool class D, then this scenario is impossible. After all, given a function from the pseudorandom ensemble produced by class C, what is the class D circuit to do? Since it can tell that the function has property P, it can identify that it was manufactured in the C-factory -- something it wasn't supposed to be able to do (by definition of a pseudorandom function generator)...
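The distinguisher implicit in this argument can be sketched in a few lines (again using the toy low-sensitivity property, with an arbitrary n/4 threshold; a real natural property would of course be something a small circuit computes over the 2^n-bit truth table):

```python
# If a natural property P holds for every C-computable function but
# almost never for a random one, then "evaluate P on the truth table"
# is itself a distinguisher that breaks any PRF family inside C.
import random

n = 10
N = 1 << n

def low_sensitivity(table):
    """Toy property P: average sensitivity well below n/2."""
    s = sum(table[x] != table[x ^ (1 << i)]
            for x in range(N) for i in range(n))
    return s / N < n / 4

def distinguisher(table):
    # Claims "from the C-factory" exactly when property P holds.
    return "pseudorandom" if low_sensitivity(table) else "random"

random.seed(1)
rand_table = [random.randrange(2) for _ in range(N)]
const_table = [0] * N            # a very C-easy function, sensitivity 0

print(distinguisher(rand_table))   # "random"
print(distinguisher(const_table))  # "pseudorandom"
```

Note the distinguisher reads the whole truth table, which is exactly why its size is measured in poly(2^n) rather than poly(n).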

Moral of the story, illustrated: if there's a pseudorandom function generator computable by polynomial-sized circuits that fools circuits of size poly(2^n) (or even 2^{n^eps}), then the property P that you need to come up with to prove a lower bound against polynomial-sized circuits must be more complex than anything recognizable by a circuit of size 2^{n^eps}... whew, there, I said it in one sentence. In English: to separate C1 from C, it might be futile to pick on C, find some human-interest property shared by all functions computable in C, and hope that something in C1 doesn't have that property... you might actually have to be smart and find weirdo functions (like the diagonal language, complete languages, hard-on-average problems) in C1 that are not in C for less blatant reasons.

[Technical clarification of the previous sentence: Imagine a circuit T with two inputs, an n-bit string x, and an n^k-bit string s. Via T, one obtains a collection of functions T_s(x) by sticking in all possible "seeds" s. Suppose there is some fixed c (much smaller than k, naturally) such that any circuit of size 2^{n^c} that takes 2^n inputs cannot tell the difference between a random function from the collection {T_s(.)} and a totally random function. Then what we have on our hands is a polynomial-sized-circuit-computable-pseudorandom-function-generator, the kind of thing Moni Naor might be willing to talk about before breakfast, but one that you shouldn't try making at home.]
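To make the shape of that object concrete, here is a hypothetical stand-in for T (decidedly *not* pseudorandom against 2^{n^c}-size circuits -- it only illustrates the interface: a seed s indexing a function T_s on n-bit inputs):

```python
# A seed-indexed function family: fix a seed s and you get one Boolean
# function T_s on n-bit inputs; range over all seeds and you get the
# ensemble {T_s(.)}.  SHA-256 here is just an arbitrary deterministic
# mixer, standing in for the circuit T of the clarification.
import hashlib

def T(s: bytes, x: int, n: int) -> int:
    """One output bit, computed from seed s and n-bit input x."""
    h = hashlib.sha256(s + x.to_bytes((n + 7) // 8, "big")).digest()
    return h[0] & 1

n = 8
seed = b"a stand-in for the n^k-bit seed s"
table = [T(seed, x, n) for x in range(1 << n)]   # the truth table of T_s
print(sum(table), "ones out of", 1 << n)
```

The pseudorandomness requirement is then a statement about this table: no circuit of size 2^{n^c} taking the 2^n table entries as input should behave noticeably differently on a table drawn from {T_s(.)} than on a uniformly random one.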