Month: November 2011

When talking about graphical models with people (particularly computer vision folks) I find myself advancing a few opinions over and over again. So, in an effort to stop bothering people at conferences, I thought I’d write a few entries here.

The first thing I’d like to discuss is “surrogate likelihood” training. (So far as I know, Martin Wainwright was the first person to give a name to this method.)

Background

Suppose we want to fit a Markov random field (MRF). I’m writing this as a generative model with an MRF for simplicity; pretty much the same story holds with a conditional random field in the discriminative setting.

$$ p(\mathbf{x}) = \frac{1}{Z} \prod_{c} \psi(\mathbf{x}_c) \prod_{i} \psi(x_i) $$

Here, the first product is over all cliques/factors in the graph, and the second is over all single variables. Now, it is convenient to note that MRFs can be seen as members of the exponential family

$$ p(\mathbf{x};\boldsymbol{\theta}) = \exp\bigl(\boldsymbol{\theta}\cdot\mathbf{f}(\mathbf{x}) - A(\boldsymbol{\theta})\bigr), $$

where $\mathbf{f}(\mathbf{x})$ is a function consisting of indicator functions for each possible configuration of each clique and variable, and the log-partition function

$$ A(\boldsymbol{\theta}) = \log \sum_{\mathbf{x}} \exp\bigl(\boldsymbol{\theta}\cdot\mathbf{f}(\mathbf{x})\bigr) $$

ensures normalization.

Now, the log-partition function has the very important (and easy to show) property that its gradient is the expected value of $\mathbf{f}(\mathbf{x})$:

$$ \frac{dA}{d\boldsymbol{\theta}} = \sum_{\mathbf{x}} p(\mathbf{x};\boldsymbol{\theta})\,\mathbf{f}(\mathbf{x}). $$

With a graphical model, what does this mean? Well, notice that the expected value of, say, the indicator function $I[X_i = x_i]$ will be exactly the marginal probability $p(x_i)$. Thus, the expected value of $\mathbf{f}$ will be a vector containing all univariate and clique-wise marginals. If we write this as $\boldsymbol{\mu}(\boldsymbol{\theta})$, then we have

$$ \frac{dA}{d\boldsymbol{\theta}} = \boldsymbol{\mu}(\boldsymbol{\theta}). $$
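This identity is easy to check by brute force on a small model. The following is a toy example of my own (not from the post): a three-node binary chain with indicator features, where we enumerate all joint states, compute $A(\boldsymbol{\theta})$ exactly, and confirm numerically that its gradient equals the vector of marginals.

```python
# Toy check (my own example): the gradient of the log-partition function
# A(theta) equals the vector of marginals mu(theta), verified by brute-force
# enumeration on a 3-node binary chain MRF.
import itertools
import numpy as np

n = 3                      # variables x0 - x1 - x2, each in {0, 1}
edges = [(0, 1), (1, 2)]

def features(x):
    """Indicator features: one per (variable, value) and per (edge, value pair)."""
    f = []
    for i in range(n):
        for v in (0, 1):
            f.append(1.0 if x[i] == v else 0.0)
    for (i, j) in edges:
        for v in (0, 1):
            for w in (0, 1):
                f.append(1.0 if (x[i], x[j]) == (v, w) else 0.0)
    return np.array(f)

states = list(itertools.product((0, 1), repeat=n))
F = np.array([features(x) for x in states])   # one row per joint state

def log_partition(theta):
    scores = F @ theta
    m = scores.max()
    return m + np.log(np.sum(np.exp(scores - m)))  # stable logsumexp

rng = np.random.default_rng(0)
theta = rng.normal(size=F.shape[1])

# Exact expected features E[f(X)] = all univariate and pairwise marginals.
p = np.exp(F @ theta - log_partition(theta))
mu = p @ F

# Central-difference gradient of A(theta); should match mu to ~1e-8.
eps = 1e-6
grad = np.array([(log_partition(theta + eps * e) - log_partition(theta - eps * e)) / (2 * eps)
                 for e in np.eye(len(theta))])
print(np.max(np.abs(grad - mu)))
```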

The usual story

Suppose we want to do maximum likelihood learning. This means we want to set $\boldsymbol{\theta}$ to maximize the mean log-likelihood of the data,

$$ L(\boldsymbol{\theta}) = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \log p(\hat{\mathbf{x}};\boldsymbol{\theta}) = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \boldsymbol{\theta}\cdot\mathbf{f}(\hat{\mathbf{x}}) - A(\boldsymbol{\theta}). $$

If we want to use gradient ascent, we would just take a small step along the gradient

$$ \frac{dL}{d\boldsymbol{\theta}} = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \mathbf{f}(\hat{\mathbf{x}}) - \boldsymbol{\mu}(\boldsymbol{\theta}), $$

i.e.,

$$ \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \lambda \Bigl( \frac{1}{N}\sum_{\hat{\mathbf{x}}} \mathbf{f}(\hat{\mathbf{x}}) - \boldsymbol{\mu}(\boldsymbol{\theta}) \Bigr). $$

This has a very intuitive form: it is the difference between the expected value of $\mathbf{f}$ under the data and the expected value of $\mathbf{f}$ under the current model.

Note the lovely property of moment matching here. If we have found a solution, then $\boldsymbol{\mu}(\boldsymbol{\theta}) = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \mathbf{f}(\hat{\mathbf{x}})$, and so the expected value of $\mathbf{f}$ under the current model will be exactly equal to that under the data.

Unfortunately, in a high-treewidth setting, we can’t compute the marginals. That’s too bad. However, we have all these lovely approximate inference algorithms (loopy belief propagation, tree-reweighted belief propagation, mean field, etc.). Suppose we write the resulting approximate marginals as $\tilde{\boldsymbol{\mu}}(\boldsymbol{\theta})$. Then, instead of taking the above gradient step, why not instead just use

$$ \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \lambda \Bigl( \frac{1}{N}\sum_{\hat{\mathbf{x}}} \mathbf{f}(\hat{\mathbf{x}}) - \tilde{\boldsymbol{\mu}}(\boldsymbol{\theta}) \Bigr)? $$
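The substitution is mechanical: the step is unchanged except that the exact-marginal term is replaced by whatever an approximate inference routine returns. Here is a minimal sketch (all names are my own illustrative placeholders); to keep it runnable I use a single binary variable with one-hot features, where "inference" is available in closed form, standing in for loopy BP, mean field, or TRW on a real model.

```python
# Sketch of the surrogate gradient step (names mine, not from the post).
# approx_marginals() stands in for any approximate inference routine; here it
# is exact inference for one binary variable with features f(x) = one-hot(x).
import numpy as np

def approx_marginals(theta):
    # Stand-in "inference" routine: exact marginals for a single binary variable.
    p = np.exp(theta - theta.max())
    return p / p.sum()

def surrogate_gradient_step(theta, empirical, approx_marginals, step=0.5):
    """theta <- theta + step * ((1/N) sum f(x_hat) - mu_tilde(theta))."""
    return theta + step * (empirical - approx_marginals(theta))

# Data: 75% of observations have x = 1, so (1/N) sum f(x_hat) = [0.25, 0.75].
empirical = np.array([0.25, 0.75])
theta = np.zeros(2)
for _ in range(200):
    theta = surrogate_gradient_step(theta, empirical, approx_marginals)

print(approx_marginals(theta))  # moment matching: approaches [0.25, 0.75]
```

At the fixed point the model marginals match the empirical feature averages, which is exactly the moment-matching property noted above.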

That’s all fine! However, I often see people say/imply/write some or all of the following:

This is not guaranteed to converge.

There is no longer any well-defined objective function being maximized.

We can’t use line searches.

We have to use (possibly stochastic) gradient ascent.

This whole procedure is frightening and shouldn’t be mentioned in polite company.

I agree that we should view this procedure with some suspicion, but it gets far more than it deserves! The first four points, in my view, are simply wrong.

What’s missing

The critical thing that is missing from the above story is this: Approximate marginals come together with an approximate partition function!

That is, if you are computing approximate marginals using loopy belief propagation, mean field, or tree-reweighted belief propagation, there is a well-defined approximate log-partition function $\tilde{A}(\boldsymbol{\theta})$ such that

$$ \frac{d\tilde{A}}{d\boldsymbol{\theta}} = \tilde{\boldsymbol{\mu}}(\boldsymbol{\theta}). $$

What this means is that you should think, not of approximating the likelihood gradient, but of approximating the likelihood itself. Specifically, what the above is really doing is optimizing the “surrogate likelihood”

$$ \tilde{L}(\boldsymbol{\theta}) = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \boldsymbol{\theta}\cdot\mathbf{f}(\hat{\mathbf{x}}) - \tilde{A}(\boldsymbol{\theta}). $$

What’s the gradient of this? It is

$$ \frac{d\tilde{L}}{d\boldsymbol{\theta}} = \frac{1}{N}\sum_{\hat{\mathbf{x}}} \mathbf{f}(\hat{\mathbf{x}}) - \tilde{\boldsymbol{\mu}}(\boldsymbol{\theta}), $$

or exactly the gradient that was being used above. The advantage of doing things this way is that it is a normal optimization problem. There is a well-defined objective. It can be plugged into a standard optimization routine, such as BFGS, which will probably be faster than gradient ascent. Line searches guarantee convergence. $\tilde{A}(\boldsymbol{\theta})$ is perfectly tractable to compute. In fact, if you have already computed approximate marginals, $\tilde{A}(\boldsymbol{\theta})$ has almost no cost. Life is good.
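To make the "just hand it to a standard optimizer" point concrete, here is a small sketch of my own: the surrogate objective and its gradient plugged into L-BFGS via SciPy. For a runnable illustration, exact enumeration on a single binary variable plays the role of the inference routine that would return $\tilde{A}(\boldsymbol{\theta})$ and $\tilde{\boldsymbol{\mu}}(\boldsymbol{\theta})$; in practice those would come from loopy BP, mean field, or TRW.

```python
# Sketch (toy example of mine): once you have a surrogate likelihood and its
# gradient, training is ordinary optimization. log_partition() and marginals()
# stand in for the A~(theta) and mu~(theta) an approximate inference routine
# would return on a real high-treewidth model.
import numpy as np
from scipy.optimize import minimize

empirical = np.array([0.25, 0.75])   # (1/N) sum f(x_hat) for f = one-hot(x)

def log_partition(theta):            # stand-in for A~(theta)
    m = theta.max()
    return m + np.log(np.exp(theta - m).sum())

def marginals(theta):                # stand-in for mu~(theta)
    return np.exp(theta - log_partition(theta))

def neg_surrogate(theta):
    # Surrogate likelihood and its gradient; negated because we minimize.
    value = empirical @ theta - log_partition(theta)
    grad = empirical - marginals(theta)
    return -value, -grad

res = minimize(neg_surrogate, np.zeros(2), jac=True, method="L-BFGS-B")
print(marginals(res.x))              # moment matching: ~[0.25, 0.75]
```

Because the objective and gradient are both cheap once inference has run, the optimizer gets line searches and curvature information essentially for free.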

The only counterargument I can think of is that mean-field and loopy BP can have different local optima, which might mean that a no-line-search-refuse-to-look-at-the-objective-function-just-follow-the-gradient-and-pray style optimization could be more robust, though I’d like to see that argument made…

I’m not sure of the history, but I think part of the reason this procedure has such a bad reputation (even from people that use it!) might be that it predates the “modern” understanding of inference procedures as producing approximate partition functions as well as approximate marginals.