15 August 2010

A simple reduction

Recent online discussion about computational complexity included the notion that the structure of solutions of SAT might be useful in proving separation results. I want to point out a further obstacle to this idea.

Previously I discussed k-ISOLATED SAT, the decision problem which attempts to ramp up the hardness of SAT by requiring not only a solution, but one that is k-isolated (so that any other solution differs in at least k+1 variables). k-ISOLATED SAT is tangentially related to the question that Valiant and Vazirani discussed in their 1986 paper “NP is as easy as detecting unique solutions”: if we require our Turing machine to provide a correct answer only when the input instance has precisely one solution, how does the difficulty of the problem change? This modifies SAT into a promise problem.
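To make the definition concrete, here is a brute-force sketch (function names are my own, and this is only feasible for tiny instances) that computes the isolation of a CNF instance's solution set, using signed integers for literals:

```python
from itertools import product

def solutions(clauses, n):
    """All satisfying assignments of a CNF over variables 1..n.
    Clauses are lists of nonzero ints; a negative literal is a negated variable."""
    sols = []
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl) for cl in clauses):
            sols.append(bits)
    return sols

def isolation(sols):
    """Largest k such that every solution is k-isolated, i.e. any two
    distinct solutions differ in at least k+1 variables.
    Returns None when there are fewer than two solutions
    (isolation is then vacuously maximal)."""
    dists = [sum(a != b for a, b in zip(s, t))
             for i, s in enumerate(sols) for t in sols[i + 1:]]
    return min(dists) - 1 if dists else None
```

For example, the one-clause instance (x1 ∨ x2) has three solutions at Hamming distance 1, so its solutions are only 0-isolated, while (x1 ∨ x2) ∧ (¬x1 ∨ ¬x2) has two solutions at distance 2, so its solutions are 1-isolated.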

Here I want to point out an even weaker reduction. It seems obvious but might still be interesting. This reduction shows that any separation of the solution space of k-SAT must be quite strong, stronger than Deolalikar currently seems to be considering. The separation argument in his draft (in the original, through to the most recent third version) seems to hinge on a gap between the lower bound of Ω(n)-isolation versus the upper bound of O(polylog(n))-isolation of solutions. This is not completely clear, because there is an undefined notion of probabilistic “parameters” that he uses, and he also uses O(n) in several places to indicate the lower bound, so maybe he actually meant something else.

I now show that one can reduce SAT to n^ε-ISOLATED SAT, for any fixed ε &lt; 1. This rather narrows the wiggle room in any proof strategy that relies on showing a gap exists.

Simply apply the following deterministic reduction, which doesn’t rely on any derandomization assumptions. Let m = ⌈n^(ε/(1-ε))⌉, and create m new variables for each of the original n variables in the instance. Then add extra clauses to enforce that the same value is assigned to all m variables associated with each original variable. This creates a new instance with N = mn variables and polynomially many clauses. Any two distinct solutions of the new instance must differ on the copies of at least one original variable, hence in at least m variables, so solutions are (m-1)-isolated. If we adjust for the increased number of variables, then since m-1 is N^ε up to lower-order terms, the reduction is from SAT to N^ε-ISOLATED SAT.
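A minimal sketch of this reduction in Python (signed-integer literals as above; `isolate` is my own naming). Each equality constraint between consecutive copies is encoded as a pair of 2-clauses:

```python
def isolate(clauses, n, m):
    """Deterministic isolation reduction (a sketch): replace each of the
    n original variables by m copies and force all copies equal.
    Variable i (1..n) becomes copies (i-1)*m+1 .. i*m.  Any two distinct
    solutions of the new instance then differ in at least m variables,
    i.e. solutions are (m-1)-isolated, over N = m*n variables."""
    def copy(lit, j):
        # j-th copy (0-based) of literal lit
        v = (abs(lit) - 1) * m + 1 + j
        return v if lit > 0 else -v

    # Rewrite each original clause onto the 0th copies.
    new_clauses = [[copy(lit, 0) for lit in cl] for cl in clauses]
    # Chain of equivalences x_{i,j} <-> x_{i,j+1} for each variable i.
    for i in range(1, n + 1):
        for j in range(m - 1):
            a = (i - 1) * m + 1 + j
            b = a + 1
            new_clauses += [[-a, b], [a, -b]]
    return new_clauses, n * m
```

The new instance has the same satisfying assignments as the original, just written out m times per variable, which is why the reduction preserves satisfiability while spreading solutions apart.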

In other words, any argument that considers the structure of the solution space in terms of how far apart solutions are must be able to work within the gap that is left between n^ε-ISOLATED SAT (for every fixed ε &lt; 1) and n-ISOLATED SAT.

But wait, there’s more! The reduction has further consequences which are worth examining on their own merits.

Also related to k-ISOLATED SAT is the complexity class UP. UP consists of those decision problems in NP where every instance has precisely 0 or 1 solution. UP sits between P and NP in the complexity hierarchy, but it is not known whether either containment is strict. Note that n-ISOLATED SAT is precisely the UP version of SAT: if there are n variables then n-isolation ensures there can be at most one solution, and if there is at most one solution, then the solutions are trivially n-isolated.
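As a sanity check on this equivalence, here is a brute-force sketch (my own naming, tiny instances only): over n variables, n-isolation leaves no room for a second solution, so an instance has n-isolated solutions exactly when it has at most one solution.

```python
from itertools import product

def count_solutions(clauses, n, limit=2):
    """Count satisfying assignments of a small CNF (signed-integer
    literals), stopping early once `limit` have been found."""
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl) for cl in clauses):
            count += 1
            if count == limit:
                break
    return count

def in_unique_fragment(clauses, n):
    """n-isolation over n variables rules out a second solution, so the
    instance is n-isolated iff it has at most one solution (the UP case)."""
    return count_solutions(clauses, n) <= 1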

The gap is intriguing even ignoring any of the current debate. We can reduce SAT to SAT with up to -isolated solutions. It is not obvious how to make a deterministic reduction going the other way, so it seems a reasonable guess that the difficulty increases monotonically up to -isolation. On the other hand, n-isolation of solutions is in UP, so seems to be easier than the original complexity of the problem. So it looks like increasing the isolation starts making the problem easier, from some point onward. The question is then:

At what value of the parameter k does increasing the degree of isolation make k-ISOLATED SAT become easier? Further, does the difficulty decrease monotonically from then on, or does the complexity have a more interesting behaviour?