(Looks like I had a problem I forgot to post. I’ll let this count for this week while we’re on coronavirus lockdown. We’ll see where we are in 2 weeks)

We continue bouncing between really easy and really hard reductions with another easy one.

The problem: Predicate Logic Without Negation. This is problem LO15 in the appendix.

The description: Given a predicate logic system, that is:

A set of variables U = {u1..un}

A set of “function symbols” F = {f1..fk}

A set of “relation symbols” R = {R1..Rj}

..and a well-formed predicate logic sentence A without negations using U, F, and R, is A true under all interpretations of F and R? An interpretation (I think) is us replacing each function (or relation) symbol fi or Ri with an actual function (or relation) of the correct dimension.

Example:

Suppose we have the predicate logic sentence:

∀x,y R1(x,y) = R2(x,y)

..then this statement would not be true under all interpretations, because we can make R1 and R2 be different relations (maybe R1 is always true but R2 is always false).

But if we do:

∀x,y R1(x,y) = R1(x,y)

..then no matter what relation we choose for R1, the statement is true.
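To make "true under all interpretations" concrete, here is a small sketch of my own (not from the post or from Kozen's paper) that brute-forces every interpretation of R1 and R2 over the tiny finite domain {0, 1} and checks both sentences above:

```python
from itertools import product

# Enumerate every binary relation over the domain {0, 1} as a dict
# mapping each pair (x, y) to True/False.
domain = [0, 1]
pairs = list(product(domain, repeat=2))

def all_relations():
    for bits in product([False, True], repeat=len(pairs)):
        yield {p: b for p, b in zip(pairs, bits)}

def valid(sentence):
    """True if the sentence holds under ALL interpretations of R1 and R2."""
    return all(sentence(r1, r2)
               for r1 in all_relations()
               for r2 in all_relations())

# "forall x,y: R1(x,y) = R1(x,y)" -- note both sides really are r1
same = lambda r1, r2: all(r1[p] == r1[p] for p in pairs)
# "forall x,y: R1(x,y) = R2(x,y)"
diff = lambda r1, r2: all(r1[p] == r2[p] for p in pairs)

print(valid(same))  # True: holds no matter what relation R1 is
print(valid(diff))  # False: e.g. R1 always true, R2 always false
```

Of course this only checks one finite domain, but it shows the quantification over interpretations (not just over x and y) that the problem is about.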

Reduction: G&J imply that the reduction (from Satisfiability) is “trivial”, but I still went to Kozen’s paper to figure out how to get rid of the negations. The idea is that we replace each positive instance of a literal xi with the equation “xi = a1“, and we replace each negative instance of a literal ~xj with the equation “xj = a0“. The new values a0 and a1 are constants (or, more precisely, functions that take 0 parameters).

If the original SAT formula is satisfiable, then our new formula is also satisfiable by setting a1 to true and a0 to false. (Or, more precisely, by setting each variable xi to a1 if it is true in our satisfying assignment, and to a0 if it’s false.)

If the new formula is satisfiable, then we have a way to assign each variable to a0 or a1 that makes the whole formula true. Reading xi = a1 as “true” and xi = a0 as “false” gives us a satisfying assignment for the original SAT instance.
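The literal-replacement step can be sketched in a few lines. This is my own illustration (the function name and the tuple encoding of equations are made up, and clauses are in DIMACS style, where -3 means ~x3):

```python
def remove_negations(clauses):
    """Replace each positive literal x_i with the equation x_i = a1,
    and each negated literal ~x_j with the equation x_j = a0.
    The result contains no negations at all."""
    new_clauses = []
    for clause in clauses:
        new_clause = []
        for lit in clause:
            var = abs(lit)
            const = "a1" if lit > 0 else "a0"  # positive -> a1, negative -> a0
            # The equation itself is never negated, even for ~x_j.
            new_clause.append((f"x{var}", "=", const))
        new_clauses.append(new_clause)
    return new_clauses

print(remove_negations([[1, -2], [2, 3]]))
# [[('x1', '=', 'a1'), ('x2', '=', 'a0')], [('x2', '=', 'a1'), ('x3', '=', 'a1')]]
```

The point is purely syntactic: negation disappears because "being false" is now expressed as equality with the constant a0.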

Difficulty: 4. There isn’t much to the transformation, but I think it’s hard to see what’s happening: the fact that the a values are 0-parameter functions, and that the x values are given values in the domain {a0, a1}, is a subtle point. It’s hard at first to see the new formula with no negations and not say, “But that’s always satisfiable if we make both a variables true!” The trick is that we’re not setting the a values to true and false; we’re setting the x variables to constant values in {a0, a1}, which is different.

The problem: Conjunctive Satisfiability with Functions and Inequalities. This is problem LO16 in the appendix.

The description: Given a set U of variables, a set F of functions of one variable, and a collection C of clauses, each comparing two terms s and t with one of:

≤

>

=

≠

..where each of s and t is either:

A constant 0 or 1

f(0) or f(1) for some f in F

f(u) for some u in U and some f in F

..can we find integer values for all of the variables in U that make all of the clauses in C true?

Example:

Let F = {f, g}, with f(x) = x², g(x) = x³. Let U = {a, b}.

Let our clauses be:

f(a) > g(b)

a ≤ b

We can satisfy the clauses by choosing a = -2 and b = 1. If we had another clause that said a > 2 then the clauses would not be satisfiable.
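Here is a quick sketch of my own that checks the example (the extra a > 2 clause is the hypothetical one from the post, even though strictly the problem only allows the constants 0 and 1):

```python
# The two functions from the example: f(x) = x^2, g(x) = x^3.
f = lambda x: x ** 2
g = lambda x: x ** 3

def satisfies(a, b):
    """Do the clauses f(a) > g(b) and a <= b hold?"""
    return f(a) > g(b) and a <= b

print(satisfies(-2, 1))   # True: 4 > 1 and -2 <= 1

# With the extra clause a > 2, no assignment works: a <= b forces b >= a,
# so g(b) = b^3 >= a^3 > a^2 = f(a) whenever a > 2. A finite search agrees:
print(any(satisfies(a, b) and a > 2
          for a in range(3, 50) for b in range(-50, 50)))  # False
```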

Reduction: G&J point you to an “unpublished manuscript” by Pratt, which I found online. Some people who reference this paper just call it a “manuscript,” and some call it a Tech Report from MIT. I looked at the MIT tech report list and couldn’t find it, but that doesn’t mean much, since not all of the tech reports from the 1970s have been indexed online. Anyway, you can find the paper online.

I do wish the paper had gone through an editing pass because it has what to me look like bugs or ambiguities. Maybe I’m not understanding things, though.

Here’s the basic idea: Given a Satisfiability formula P, build a formula H(P) in this new construction. The clauses are:

Each variable has to be 0 or 1 (so, 0 ≤ u and u ≤ 1 for all u in U)

Each literal i has a distinct function fi. If the literal is positive, we will use fi(0) in what’s to come; if the literal is negative, we will use fi(1).

If 2 literals P and Q are connected by ∧, create clauses for the inequalities:

fp(0 or 1) ≤ fq(0 or 1) (with 0 or 1 chosen depending on whether each literal is positive or negative)

fp(P) ≤ fq(Q)

If 2 literals P and Q are connected by ∨ (the paper uses the | symbol for “or”, which I’d never seen before), create clauses for the inequalities:

fp(P) ≤ fq (0 or 1)

f0 (0 or 1) > fP(P) for some(?) (maybe all?) variable P.

In the description, Pratt says that they’re building a graph to represent the formula connected by inequalities. If we hit a ∧ symbol, it will connect the graphs of the two parts of the formula together with a parallel connection, so that there is always a way to get from one subgraph to another. If we hit a ∨ symbol, the subgraphs are connected by a series connection. He says “We hope that this informal discussion will make a more formal argument unnecessary”. I, for one, wish there were more detail here.

The examples he uses to explain what’s happening are just on the trivial formulas “a” and “a ∧ ~a”. I can sort of see how the clauses he creates come out of those formulas. But since none of his examples use the ∨ operation, or use more complex clauses, it’s hard for me to see what happens there. I’m probably missing something.

It also seems like he wants to let the definition of the functions be part of the satisfiability argument (i.e., not only do we have to find truth values for all of the variables, we also have to create functions that make all of the clauses true), which does not seem to be the definition provided in G&J.

Difficulty: 8. There is probably a much simpler reduction out there. I wish I knew what it was.