Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

Tuesday, March 11, 2008

Why is PARITY not in AC_0 important?

When discussing what we should teach in a basic complexity course
(taken mostly by non-theorists) we often say that
such-and-such a result is important.
The question then arises: why is it important? Can the reasons
be conveyed to a non-theory audience?
Let's look at
PARITY CANNOT BE COMPUTED BY POLYSIZE, AND-OR-NOT, UNBOUNDED-FANIN, CONSTANT-DEPTH CIRCUITS
(henceforth PARITY ∉ AC0).
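To fix what the statement is about, here is a minimal sketch (function names mine): PARITY asks whether a bit string has an odd number of 1s, and a balanced tree of fanin-2 XOR gates computes it in depth ⌈log₂ n⌉. The question only bites because the depth must be constant while the fanin is unbounded.

```python
import math

def parity(bits):
    """Parity of a bit string: 1 if there is an odd number of 1s, else 0."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def xor_tree_depth(n):
    """Depth of a balanced tree of fanin-2 XOR gates on n inputs."""
    return math.ceil(math.log2(n)) if n > 1 else 0

print(parity([1, 0, 1, 1]))   # 1 (three ones)
print(xor_tree_depth(1024))   # 10: logarithmic depth, not constant
```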

Why is PARITY ∉ AC0 interesting? Important?
(For a good source on this result and other lower bounds on simple
models, see the survey by Boppana and Sipser,
from 1989, which is, sadly, not that dated.)
I do find the result interesting, but none of the reasons
below seem that satisfying to a non-theorist.

PARITY ∉ AC0
is the way to obtain an oracle that separates PSPACE from PH.
(An oracle that makes them collapse is easy.)
Hence no proof that relativizes can be used to separate
PSPACE from PH (this is not a rigorous concept, but people in
the area have a sense of what it means).
To motivate this you need to do some proofs that
relativize. How many? Perhaps I am biased here: I had a course in
computability theory covering 2/3 of Soare's book
(I understood half of it at the time) before studying complexity
theory, so I really knew what a
relativizing technique meant when I looked at oracles.
That level of understanding is not needed, but some is.
Even so, it seems hard to get across to non-theorists in a course.

PARITY ∉ AC0
is a natural problem on a natural model
with a
natural proof,
and hence is interesting.
This raises the question: do some people in the real world really want to construct
polysize, constant-depth, unbounded-fanin AND-OR-NOT circuits for PARITY,
and does this result tell them why they cannot?
Are there other lower bounds, corollaries of
PARITY ∉ AC0, that apply to problems people really want
to solve?
I ask this non-rhetorically.

One approach to P vs NP is to start with simple
models of computation that one can prove lower bounds in,
and then scale up.
There was more optimism for this approach back in 1989
than there is now.

The techniques used to prove the result are
interesting (YES- there are several proofs,
all interesting) and useful for
other theorems of interest (circular reasoning?).

A more general issue: when are results interesting in
their own right, and when are they meant to be
part of a larger research program?
We may not know until many years later.

And of course, for course content, the question is:
important compared to what?

8 comments:

First of all, I don't think you really need to give, in a complexity course, an oracle that separates PH and PSPACE. (I think that separating P, NP, and PH is enough for a course for non-theorists. It is easy for them to accept that most of the classes we speak about can be separated after seeing two or three separations.)

And it's hard to convince a non-theorist that very small classes are interesting. I know that my students have difficulty with LOGSPACE, so they will have the same problem with AC0.

And I think the real question you have to ask is not "is PARITY ∉ AC0 important?" but rather "is AC0 important?"

Of course, if you speak in your course of AC0, proving that a natural problem is not in AC0 is interesting.

None of those reasons are compelling, but I see the AC0 lower bound as a very good illustration of the limitations of natural proofs. All AC0 lower bound proofs are "natural," and the one via the switching lemma makes it possible to learn AC0 circuits with queries, so if one has talked about Goldreich-Levin when talking about pseudorandomness (which is a premise for natural proofs) and about Fourier analysis when talking about PCP, everything comes together very beautifully.

There is also a meta-point: complexity theory is still very weak in unconditional results, but very strong in the unexpected connections it has unearthed, both within complexity-theoretic questions and between complexity theory and other fields. So a complexity theory course should also have an emphasis on unexpected connections.

Even real-world people (hardware engineers?) and non-theorists need some natural metric to compare circuits, and depth is second only to size in this regard. Couple with this the fact that AC_0 has constant depth (and not log^i(n), which sounds fantastic to the real world), and AC_0 is indeed an important class to lower-bound.

I do however agree with Bill that an audience (theory or otherwise!) must be told briefly why the result is important, if at all.

If a result is not important, but merely interesting, it might be a good idea to mention this explicitly.

The 3rd and 4th reasons are convincing to me, and I think the 3rd could be convincing to a non-theorist. (Bounding the power of weaker models of computation is certainly a natural approach to try.)

Note also that there are lots of things people learn just because they are interesting, and not because they have any intrinsic "importance". Sticking to CS courses by way of example, students learn about operating systems and compilers even though very few people are ever going to build one themselves. We teach programming language principles even though most students' jobs are going to have them program in Java or Ruby on Rails. Etc., etc.

I think that the result is a wonderful illustration of why Complexity is both fascinating and difficult.

I would cover the result in the following context: depth-2 AC_0, a.k.a. CNF and DNF, is clearly interesting and useful (e.g., easy, clear proofs that they are universal; used in textbooks, even by engineers as PLAs).

PARITY clearly needs exponential-size depth-2 circuits (again, an intuitive and clear proof for an OR of ANDs: each AND must be of maximal length, and all are disjoint). All this formalism collapses at depth 3 or more. Yet the intuition that you cannot compute PARITY remains. The intuition could be based on two "feelings":

a. You need to take into account each input: if indegree were 2, we'd need depth log. Unbounded fanin cannot really help that much, because you "mix up" many inputs, and for PARITY you can't really do such mixing.

b. PARITY wiggles a lot: an AND is like a polynomial, which cannot wiggle exponentially much.
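The depth-2 claim above can be checked by brute force: in any DNF for PARITY, each AND term must mention all n variables (dropping one would let a bit flip go unnoticed), so each term accepts exactly one input, and you need one term per odd-weight input, i.e. 2^(n-1) of them. A small sketch (function name mine):

```python
from itertools import product

def parity_dnf_terms(n):
    """Minterms of PARITY on n bits: one full-length AND per odd-weight
    input, since each term accepts exactly one input."""
    return [x for x in product([0, 1], repeat=n) if sum(x) % 2 == 1]

for n in range(2, 7):
    terms = parity_dnf_terms(n)
    # exponential size already at depth 2
    assert len(terms) == 2 ** (n - 1)
print("checked n = 2..6")
```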

Of course neither a. nor b. makes mathematical sense. Yet you can make nontrivial proofs out of them:

a. can be formalized as the Sipser-Hastad proof (which is elementary, and can be given in a single lecture).

For b., as Luca points out, for more sophisticated students, you can give the Fourier proof, that brings in notions of learning, correlation, and pseudo-randomness, all with relatively easy proofs.
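Point b. can be made concrete with Fourier analysis over {-1,1}^n, where PARITY is simply the product of the inputs: all of its Fourier mass sits on the top coefficient, while an AND concentrates its mass on low-degree sets. A brute-force sketch (function name mine, assuming Python 3.8+ for math.prod):

```python
from itertools import product, combinations
from math import prod

def fourier_coefficient(f, n, S):
    """hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ] over x in {-1,1}^n."""
    pts = list(product([-1, 1], repeat=n))
    return sum(f(x) * prod(x[i] for i in S) for x in pts) / len(pts)

n = 3
parity = lambda x: prod(x)  # in +/-1 notation, parity is the product of the inputs

# all of parity's Fourier mass sits on the top set {0,...,n-1} ...
assert fourier_coefficient(parity, n, tuple(range(n))) == 1.0
# ... and every proper subset gets coefficient 0, by orthogonality
for k in range(n):
    for S in combinations(range(n), k):
        assert fourier_coefficient(parity, n, S) == 0.0
print("parity's Fourier weight is entirely on the top coefficient")
```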

One can then point out that deep down, a. and b. are the same intuition, expressed in different language.

Finally, (for mature audiences only) one can explain natural proofs, and show that the beautiful AC_0 noncontainment proofs have limitations, and going further seems quite difficult.

Not bad for a toy complexity class.

For non-theorists, selected portions of the above (even just the Sipser-Hastad proof) would give an idea of why questions like P vs NP are difficult.

do some people in the real world really want to construct polysize constant depth unbounded fanin AND-OR-NOT circuits for PARITY, and does this result tell them why they cannot?

At the time of FSS there was a general question of whether one could do the same kinds of tricks to get extremely fast multiplier circuits to match fast adders. Parity notin AC^0 showed that this was not possible.

There is a weaker version of justification 3: Parity notin AC^0 is a stepping stone to understanding fast parallel computation. I never really bought the "stepping stone to P vs NP" but this seemed a reasonable goal.

Why teach it? Because it is important to understand combinatorial/nonuniform models of computation, and this is one of the nicest results dealing with such models. (I guess this is a variant of Bill's reason 2. BTW: small-space TMs, to which AC^0 is compared by Anonymous #1, are much less natural than clean combinatorial models of computation like circuits.) The oracle consequence is completely beside the point. For a wider audience, I think it is good to get across the understanding that complexity isn't just about TMs. (Though in our quarter-long complexity course we do other things.)

On first reading (and first commenting) I missed the part where Bill says the course is "taken mostly by non-theorists," which makes my previous comment off-topic.

I have never designed a complexity course for non-theorists, but one way to go about it would be to have a section on foundations of cryptography, which would be by far the most appealing part of complexity theory for non-complexity-theorists. If one works up to the definition of pseudorandom functions, then it's a small step to do Razborov-Rudich, and at that point proving that parity is not in AC0 would be a good illustration of what kind of lower bounds we know how to prove, and why they end up being "natural."

Plus one can immediately see that the lower bound proof is an attack that breaks all block ciphers implementable as bounded-depth circuits, which gives both the Razborov-Rudich argument and the lower bound proof a very concrete flavor.

For me this result is important simply because there aren't that many unconditional separation/lower bound results in complexity, and it's reassuring to visit at least one impossibility proof about a natural problem. There are some classical trivial results like the time and space hierarchy theorems, but it's not really very surprising that EXPTIME-complete problems can't be solved in polynomial time. On the other hand, when you pose the problem of constructing polynomial-size constant-depth circuits for parity, you can imagine someone struggling for a while with various possible solutions. It has for me the same flavor as pumping-lemma proofs that specific languages are nonregular.