Unfortunately,
length may not be the best indicator of the value of a particular
learned clause. If we restrict our attention to any particular
subproblem in the search, some short clauses may not be applicable at
all, while other longer clauses may lead to significant pruning. As
an example, consider a node defined by the partial assignment $P = \{a = \text{true},\, b = \text{true},\, c = \text{false}\}$ together with the two learned clauses:

$a \vee \neg c$ (5)

$\neg a \vee \neg b \vee c \vee d \vee e$ (6)

As long as the assignments in $P$ are retained, the clause (5)
cannot be used for pruning because it is satisfied by $P$ itself. In
fact, this clause will not be useful until we backtrack and change the
values of both $a$ and $c$. The clause (6) is more likely
to be useful in the current subproblem, since it will lead to a unit
propagation if either $d$ or $e$ is set to false. If the subproblem
below the node given by $P$ is large, (6) may be used many
times. Within the context of this subproblem, the longer clause is
actually the more useful.
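The pruning behavior just described can be checked mechanically. The sketch below is illustrative only: the `status` helper, the signed-integer literal encoding (a=1 through e=5, negative for negation), and the two clauses are all assumptions, chosen to match the example's description (a short clause satisfied under the current assignment, a longer one that unit propagates once one of its unvalued literals goes false).

```python
# Illustrative sketch (not from the paper): literals are signed integers
# (a=1, b=2, c=3, d=4, e=5; negative means negated), and an assignment
# maps a variable to True or False, with unassigned variables absent.
def status(clause, assignment):
    """Classify a clause under a partial assignment."""
    if any(assignment.get(abs(l)) == (l > 0)
           for l in clause if abs(l) in assignment):
        return 'satisfied'
    unvalued = [l for l in clause if abs(l) not in assignment]
    if len(unvalued) == 1:
        return f'unit: propagates {unvalued[0]}'
    return 'open' if unvalued else 'violated'

# Hypothetical clauses matching the example's description:
clause5 = [1, -3]              # a short clause, satisfied under P
clause6 = [-1, -2, 3, 4, 5]    # a longer clause with two unvalued literals
P = {1: True, 2: True, 3: False}

print(status(clause5, P))                # 'satisfied' -- no pruning possible
print(status(clause6, P))                # 'open' -- d and e still unvalued
print(status(clause6, {**P, 4: False}))  # 'unit: propagates 5' (i.e., e)
```

The satisfied short clause does no work in this subproblem, while the longer clause becomes unit as soon as either of its two unvalued literals is falsified.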

At some level, the appearance of $\neg a$, $\neg b$, and $c$
in (6) shouldn't contribute to the effective length of the
learned clause, because all of these literals are currently false.
Length-bounded learning cannot make this distinction.

We see from this example that it is useful to retain clauses of
arbitrary length provided that they are relevant to the current search
context. If the context subsequently changes, we can remove some
clauses to make room for new ones that are more suitable to the new
context. This is what relevance-bounded learning does.

In relevance-bounded learning, the effective length of a clause is
defined in terms of the current partial assignment: the irrelevance of
a clause is one less than the number of unvalued or satisfied literals
it contains. For example, the clause (6) under the partial assignment
$P$ has an irrelevance of 1, since only $d$ and $e$ are unvalued or
satisfied.
The idea of relevance-bounded learning originated in dynamic
backtracking [Ginsberg, 1993], in which clauses were deleted once
their irrelevance exceeded 1. This idea was generalized by Bayardo
and Miranker [1996], who defined a general relevance bound $k$ and
deleted all clauses whose irrelevance exceeded that bound. This
generalization is implemented in the RELSAT satisfiability engine
[Bayardo and Schrag, 1997].
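The definition of irrelevance translates directly into code. The following is a minimal sketch, not RELSAT's actual implementation; the function name, the signed-integer literal encoding (a=1 through e=5), and the clause values are assumptions.

```python
def irrelevance(clause, assignment):
    """One less than the number of literals that are unvalued or
    satisfied under the partial assignment."""
    live = sum(1 for lit in clause
               if abs(lit) not in assignment            # unvalued
               or assignment[abs(lit)] == (lit > 0))    # satisfied
    return live - 1

# Hypothetical clause in the shape of (6): three literals false under P,
# two unvalued (a=1, ..., e=5; negative literals are negations).
clause6 = [-1, -2, 3, 4, 5]
P = {1: True, 2: True, 3: False}
print(irrelevance(clause6, P))  # 1: only d and e are unvalued or satisfied
```

Note that under the empty assignment every literal is unvalued, so a clause's irrelevance starts at one less than its length and can only be lowered as variables are falsified.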

Like length-bounded learning, relevance-bounded learning retains all
clauses of length less than the relevance bound $k$, since the
irrelevance of a clause can never exceed its length. But the
technique of relevance-bounded learning also allows the retention of
longer clauses if they are applicable to the current portion of the
search space. Such clauses are only removed when they are no longer
relevant.

Returning to our example, if we backtrack from the original partial
assignment $P$ and find ourselves exploring
$\{a = \text{false}\}$, the short nogood (5) will be 0-irrelevant
(since we can unit propagate to conclude $\neg c$) and the long
one (6) will be 4-irrelevant. With a relevance bound of 3 or
less, the long nogood would therefore be discarded.
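This deletion step can be sketched under the same assumed encoding as before (signed-integer literals, a=1 through e=5); the clause values are hypothetical reconstructions chosen to be consistent with the irrelevance arithmetic in the example.

```python
def irrelevance(clause, assignment):
    # One less than the count of unvalued-or-satisfied literals.
    live = sum(1 for lit in clause
               if abs(lit) not in assignment
               or assignment[abs(lit)] == (lit > 0))
    return live - 1

# a=1, ..., e=5; negative literals are negations (assumed encoding).
clause5 = [1, -3]
clause6 = [-1, -2, 3, 4, 5]
after_backtrack = {1: False}  # now exploring a = false

print(irrelevance(clause5, after_backtrack))  # 0: clause is now unit
print(irrelevance(clause6, after_backtrack))  # 4

# With a relevance bound k = 3, only the short nogood survives:
k = 3
learned = {'(5)': clause5, '(6)': clause6}
kept = {name: c for name, c in learned.items()
        if irrelevance(c, after_backtrack) <= k}
print(sorted(kept))  # ['(5)']
```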

The condition that nogoods are retained only until their irrelevance
exceeds a bound $k$ is sufficient to ensure that only a polynomial
number of nogoods are retained at any point. Experimental results
[Bayardo and Miranker, 1996] show that relevance-bounded learning is
more effective than its length-bounded counterpart and that, even with
a relevance bound of 1, the results are comparable to those of
learning without restriction.