be useful in later steps of inference. For instance, we
might choose to merge nodes or reorient edges. Once
the inference graph has been built, we run a probabilistic inference engine over the graph to generate
new confidences. Each node represents an assertion,
so it can be in one of two states: true or false (“on” or
“off”). Thus a graph with k nodes can be in 2^k
possible states. The inference graph specifies the likelihood of each of these states. The belief engine uses
these likelihoods to calculate, for each node, the marginal probability that it is in the true state. This
marginal probability is treated as a confidence. Finally, we read confidences and other data from the inference graph back into the assertion graph.
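The marginal computation described above can be sketched by brute force: enumerate all 2^k joint states, weight each by its likelihood, and sum the weight of the states in which a given node is true. The sketch below is illustrative only; the function names, the scoring interface, and the toy two-node factors are our own assumptions, not part of the system described (a real engine would use an exact or approximate inference algorithm rather than full enumeration).

```python
from itertools import product

def marginals(k, score):
    """Brute-force marginal P(node i = True) over all 2^k joint states.

    `score(state)` returns the unnormalized likelihood of a full
    assignment; `state` is a tuple of k booleans (True = "on").
    """
    z = 0.0                      # partition function (normalizer)
    on_mass = [0.0] * k          # unnormalized mass with node i true
    for state in product([False, True], repeat=k):
        w = score(state)
        z += w
        for i, s in enumerate(state):
            if s:
                on_mass[i] += w
    return [m / z for m in on_mass]

# Hypothetical toy graph: two assertions that tend to agree,
# with a prior leaning node 0 toward true.
def score(state):
    a, b = state
    w = 2.0 if a else 1.0        # prior factor on node 0
    w *= 3.0 if a == b else 1.0  # agreement factor between the nodes
    return w
```

With these toy factors, the four states (F,F), (F,T), (T,F), (T,T) get weights 3, 1, 2, 6, so node 0's marginal is (2+6)/12 = 2/3 and node 1's is (1+6)/12 = 7/12; these marginals are what would be read back into the assertion graph as confidences.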

There are some challenges in applying probabilistic inference to an assertion graph. Most tools in the
inference literature were designed to solve a different
problem, which we will call the classical inference
problem. In this problem, we are given a training set
and a test set that can be seen as samples from a common joint distribution. The task is to construct a
model that captures the training set (for instance, by
maximizing the likelihood of the training set), and
then apply the model to predict unknown values in
the test set. Arguably the greatest problem in the classical inference task is that the structure of the graphical model is underdetermined; a large space of possible structures needs to be explored. Once a structure