> I have heard of a proof having to do with compilers trying to
> automatically parallelize code with general dependencies. The gist was
> that there is an NP-complete problem somewhere in there, making
> prospects for automatic parallelization look bad.

I'm sorry I have no more precise information to offer, but I suspect you
are referring to the NP-completeness of *exactly* solving the systems of
integer [in]equalities that arise in automatic parallelization (more
precisely, in dependence analysis of "array/loop" programs).

This NP-completeness is here to stay, of course, but it does not mean
that automatic parallelization is infeasible. There are two ways to
alleviate the problem:

- develop fast but conservative approximate tests (e.g., the GCD test,
Banerjee's inequalities);

- use general-purpose, "expensive" algorithms (e.g., Fourier-Motzkin
elimination, the simplex method) but enhance them with heuristics that
are fast in practice on the specific kind of systems that occur in
automatic parallelization (e.g., SUIF, Omega, the work by Jean-Claude
Sogno in our group at INRIA).
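To make the first kind of test concrete, here is a minimal sketch of the
GCD test in Python. It relies only on the standard fact that the linear
Diophantine equation a1*x1 + ... + an*xn = c has an integer solution iff
gcd(a1, ..., an) divides c; the function name and the example accesses
are my own, purely illustrative choices.

```python
from functools import reduce
from math import gcd

def gcd_test(coeffs, const):
    """GCD test for a dependence equation sum(coeffs[k] * x_k) = const.

    Returns False when a dependence is provably impossible (no integer
    solution exists), True when one MAY exist. Like all fast dependence
    tests, it is conservative: True does not prove a dependence.
    """
    g = reduce(gcd, (abs(c) for c in coeffs), 0)
    if g == 0:                # all coefficients zero
        return const == 0
    return const % g == 0

# A[2*i] written, A[2*j + 1] read: dependence needs 2*i - 2*j = 1,
# but gcd(2, 2) = 2 does not divide 1 -> provably independent.
print(gcd_test([2, -2], 1))   # False

# A[2*i] vs A[4*j + 2]: 2*i - 4*j = 2, and gcd(2, 4) = 2 divides 2,
# so the test cannot rule a dependence out.
print(gcd_test([2, -4], 2))   # True
```

Note that the test ignores loop bounds entirely, which is precisely why
it is fast and why it may report spurious dependences.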
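For the second kind, a bare-bones sketch of Fourier-Motzkin elimination
may help. This version (my own toy formulation, not anyone's production
code) decides feasibility of a system of linear inequalities over the
*rationals*; the gap between this and *integer* feasibility is exactly
where the NP-completeness lives, and real tools layer integer-aware
heuristics on top.

```python
def fm_eliminate(rows, k):
    """Eliminate variable k from a system of inequalities.

    Each row is (coeffs, b), meaning sum(coeffs[j] * x_j) <= b.
    Every pair of a lower bound and an upper bound on x_k is combined
    (with positive multipliers) so that x_k cancels.
    """
    pos, neg, out = [], [], []
    for a, b in rows:
        (pos if a[k] > 0 else neg if a[k] < 0 else out).append((a, b))
    for ap, bp in pos:
        for an, bn in neg:
            cp, cn = ap[k], -an[k]          # both multipliers positive
            a = [cn * x + cp * y for x, y in zip(ap, an)]
            out.append((a, cn * bp + cp * bn))
    return out

def feasible(rows, nvars):
    """Rational feasibility of sum(coeffs[j] * x_j) <= b constraints."""
    for k in range(nvars):
        rows = fm_eliminate(rows, k)
    # All variables eliminated: each row now reads 0 <= b.
    return all(b >= 0 for _, b in rows)

# 1 <= x <= 3 is feasible; 1 <= x <= 0 is not.
print(feasible([([-1], -1), ([1], 3)], 1))   # True
print(feasible([([-1], -1), ([1], 0)], 1))   # False
```

The worst-case blow-up is easy to see: each elimination step can square
the number of inequalities, which is why the text calls such algorithms
"expensive" and why heuristics tuned to dependence systems matter.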

The crux of the whole issue is how you define "good" or "reasonable"
automatic parallelization, given that the programmer often has much more
information about the program in mind than he cares to give the
compiler (e.g., bounds on some parameters, alias information, etc.).