READING

Krähenbühl and Koltun introduce a popular algorithm for inference in dense random fields under the assumption of Gaussian edge potentials. In particular, the assumed model has the form

$E(x) = \sum_i \psi_u(x_i) + \sum_{i < j} \psi_p(x_i, x_j)$

with unary potentials $\psi_u$ and pairwise potentials $\psi_p$. Here, $x_i$ denotes the label assigned to pixel $i$. The unary potential is usually modeled by a pixel-wise classifier, while the pairwise potential is modeled using a mixture of Gaussian kernels multiplied by a label compatibility function $\mu$:

$\psi_p(x_i, x_j) = \mu(x_i, x_j) \sum_m w^{(m)} k^{(m)}(f_i, f_j)$

where $k^{(m)}$ is a Gaussian kernel comparing the feature vectors $f_i$ and $f_j$ of the corresponding pixels. In practice, the pairwise potential is computed as a weighted sum of Gaussian kernels applied to color and spatial coordinates, where the variances are parameters that need to be learned. As compatibility function, a Potts model can be used; Krähenbühl and Koltun, however, learn a symmetric function instead, which allows interactions between labels to be taken into account.
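As an illustration, the pairwise term can be evaluated directly for a pair of pixels. The following is a minimal sketch; the feature vectors, weights, and kernel widths below are made-up example values, not the learned parameters from the paper, and isotropic kernels are used for simplicity:

```python
import numpy as np

def gaussian_kernel(f_i, f_j, sigma):
    """Isotropic Gaussian kernel k(f_i, f_j) = exp(-||f_i - f_j||^2 / (2 sigma^2))."""
    d = f_i - f_j
    return np.exp(-float(d @ d) / (2.0 * sigma ** 2))

def pairwise_potential(l_i, l_j, f_i, f_j, weights, sigmas, mu):
    """psi_p(x_i, x_j) = mu(x_i, x_j) * sum_m w^(m) k^(m)(f_i, f_j).

    f_i, f_j: feature vectors, e.g. stacked spatial coordinates and color values.
    mu:       label compatibility matrix.
    """
    k = sum(w * gaussian_kernel(f_i, f_j, s) for w, s in zip(weights, sigmas))
    return mu[l_i, l_j] * k

# Example: (x, y, r, g, b) features and a Potts-style compatibility,
# which penalizes differing labels only.
f_i = np.array([10.0, 12.0, 0.3, 0.5, 0.7])
f_j = np.array([11.0, 12.0, 0.3, 0.5, 0.6])
mu = 1.0 - np.eye(2)
psi = pairwise_potential(0, 1, f_i, f_j, [1.0, 0.5], [3.0, 10.0], mu)
```

With the Potts-style compatibility, equal labels incur no penalty, while differing labels are penalized in proportion to how similar the two pixels' features are.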

Based on this random field, their main contribution is an efficient message passing algorithm based on the mean field approximation. Using the mean field approximation derived by Koller and Friedman [1], the update equation takes the following form:

$Q_i(x_i = l) = \frac{1}{Z_i} \exp\left(-\psi_u(x_i) - \sum_{l' \in \mathcal{L}} \mu(l, l') \sum_m w^{(m)} \sum_{j \neq i} k^{(m)}(f_i, f_j) Q_j(l')\right) \quad (1)$

The derivation is based on approximating $P$ by a fully factorized distribution $Q(x) = \prod_i Q_i(x_i)$ that minimizes the KL-divergence

$D(Q \,\|\, P) = \sum_{x} Q(x) \log \frac{Q(x)}{P(x)}$

(following Chapter 11.5 of Koller and Friedman [1] and then substituting the definition of the pairwise potentials). In Equation (1), $Z_i$ denotes the partition function corresponding to $Q_i$ and $\mathcal{L}$ is the set of labels. See also the supplementary material of the paper for details. This scheme can easily be implemented, as shown in Algorithm 1.
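The scheme can be sketched in NumPy as follows. This is a naive version for illustration: the quadratic message passing step is written out as dense matrix products over precomputed kernel matrices (feasible only for small images), and the interface is an assumption, not the paper's implementation:

```python
import numpy as np

def mean_field_inference(unary, kernels, weights, mu, iterations=10):
    """Naive mean field inference for a dense CRF with Gaussian edge potentials.

    unary:   (N, L) array of unary potentials psi_u(x_i = l).
    kernels: list of (N, N) Gaussian kernel matrices k^(m)(f_i, f_j).
    weights: list of kernel weights w^(m).
    mu:      (L, L) label compatibility matrix.
    """
    N, L = unary.shape
    # Initialize Q from the unary potentials (normalized exp(-psi_u)).
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)
    for _ in range(iterations):
        # Message passing: sum_{j != i} k^(m)(f_i, f_j) Q_j(l') for each kernel.
        msg = np.zeros((N, L))
        for w, K in zip(weights, kernels):
            Kp = K - np.diag(np.diag(K))  # exclude the i == j terms
            msg += w * (Kp @ Q)
        # Compatibility transform, local update, and normalization (the 1/Z_i).
        Q = np.exp(-unary - msg @ mu.T)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q
```

Each iteration performs the three steps of the update equation: message passing, the compatibility transform with $\mu$, and the local normalized exponential update.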

However, due to the message passing step, the complexity of a naive implementation is quadratic in the number of pixels. The main idea employed by Krähenbühl and Koltun is that the message passing step can be expressed as convolving $Q$ with a Gaussian $G_{\Lambda^{(m)}}$ in feature space (as $k^{(m)}$ corresponds to a Gaussian kernel):

$\tilde{Q}_i^{(m)}(l) = \sum_{j \neq i} k^{(m)}(f_i, f_j)\, Q_j(l) = \left[G_{\Lambda^{(m)}} \otimes Q(l)\right](f_i) - Q_i(l) \quad (2)$

By replacing the message passing step with the convolution in Equation (2), the computational complexity can be reduced to linear, based on the following observations:

Due to the sampling theorem, the convolution can be performed on a downsampled version of $Q$: Gaussian filtering is a low-pass filter, so $Q$ can be reconstructed from samples spaced proportionally to the standard deviation used.

When using a truncated Gaussian filter, only a constant number of neighboring samples is needed per computation (as the spacing of the samples is proportional to the standard deviation).
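The two observations can be combined into a downsample–convolve–upsample sketch. This is a strong simplification of the paper's method: only a spatial (grid) Gaussian is handled rather than the full bilateral case, boundaries wrap around, and upsampling is nearest-neighbor; the function name and interface are assumptions:

```python
import numpy as np

def fast_gaussian_message_passing(Q, sigma):
    """Approximate sum_{j != i} k(p_i, p_j) Q_j(l) for a spatial-only Gaussian.

    Q:     (H, W, L) label marginals on the pixel grid.
    sigma: standard deviation of the spatial Gaussian kernel.
    """
    step = max(1, int(sigma))       # sample spacing proportional to sigma
    Qs = Q[::step, ::step]          # downsample (sampling theorem observation)
    # Truncated Gaussian: a constant number of neighbors per sample.
    radius = 2
    offsets = np.arange(-radius, radius + 1)
    w = np.exp(-0.5 * (offsets * step / sigma) ** 2)

    def conv_axis(A, axis):
        # Separable 1D convolution (wrap-around boundaries for brevity).
        out = np.zeros_like(A)
        for o, wo in zip(offsets, w):
            out += wo * np.roll(A, o, axis=axis)
        return out

    Qc = conv_axis(conv_axis(Qs, 0), 1)
    # Upsample back to full resolution by nearest-neighbor repetition.
    Qu = np.repeat(np.repeat(Qc, step, axis=0), step, axis=1)
    return Qu[:Q.shape[0], :Q.shape[1]] - Q  # subtract the i == j term
```

Per pixel, the work is a constant number of multiply-adds on the coarse grid, which is what makes the overall scheme linear in the number of pixels.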

When using the indicator function as label compatibility function, Krähenbühl and Koltun learn only the unary potentials, using JointBoost [2]. The remaining parameters, including the variances, are set using discrete grid search on a validation set. Unfortunately, they do not mention how the weights of the Gaussian mixture used for the pairwise potentials are learned.
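Such a discrete grid search can be sketched generically; the parameter names and the validation scorer below are hypothetical placeholders, not the paper's setup:

```python
import itertools

def grid_search(param_grid, score):
    """Exhaustive discrete grid search: return the parameter setting that
    maximizes a validation score.

    param_grid: dict mapping parameter name -> list of candidate values.
    score:      callable taking a dict of parameters, returning a float
                (e.g. accuracy on a held-out validation set).
    """
    names = list(param_grid)
    best, best_score = None, float("-inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best, best_score = params, s
    return best, best_score

# Hypothetical usage: searching over kernel standard deviations, with a
# dummy scorer standing in for validation-set evaluation.
grid = {"sigma_spatial": [1.0, 2.0, 3.0, 4.0], "sigma_color": [5.0, 10.0]}
best, best_score = grid_search(grid, lambda p: -(p["sigma_spatial"] - 3.0) ** 2)
```

The cost is the product of the grid sizes, which is why such searches are kept to a few coarse values per parameter.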