Kamiltonian Monte Carlo [no typo]

Heiko Strathmann, Dino Sejdinovic, Samuel Livingstone, Zoltán Szabó, and Arthur Gretton arXived a paper last week about Kamiltonian MCMC, the K being related to RKHS. (RKHS as in another KAMH paper on adaptive Metropolis-Hastings by essentially the same authors, plus Maria Lomeli and Christophe Andrieu. And another paper by some of the authors on density estimation via infinite exponential family models.) The goal here is to bypass the computation of the derivatives in the moves of the Hamiltonian MCMC algorithm by using a kernel surrogate. While the genuine RKHS approach operates within an infinite exponential family model, two versions are proposed: KMC lite, with an increasing sequence of RKHS subspaces, and KMC finite, with a finite dimensional space. In practice, this means using a leapfrog integrator with a different potential function, hence with a different dynamics.
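
If I read the scheme correctly, the leapfrog integrator itself is unchanged and only the gradient it calls is swapped for a surrogate built from the chain history. A minimal sketch (the function names and the Gaussian-kernel surrogate below are my own illustration, not the authors' estimator):

```python
import numpy as np

def leapfrog(q, p, grad_log_target, eps=0.1, n_steps=20):
    """Standard leapfrog trajectory; grad_log_target may be exact or a surrogate."""
    q, p = q.copy(), p.copy()
    p = p + 0.5 * eps * grad_log_target(q)    # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                       # full step for position
        p = p + eps * grad_log_target(q)      # full step for momentum
    q = q + eps * p
    p = p + 0.5 * eps * grad_log_target(q)    # final half step
    return q, -p                              # momentum flip for reversibility

def kde_log_grad(q, past_samples, bw=1.0):
    """Illustrative surrogate: gradient of a Gaussian KDE log-density
    built from a subsample of the chain history."""
    diffs = past_samples - q                                # (n, d)
    w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / bw ** 2)  # kernel weights
    return (w @ diffs) / (bw ** 2 * max(w.sum(), 1e-300))
```

Swapping `grad_log_target` changes the proposal dynamics, but the accept/reject step can still use the exact (unnormalised) target, so the surrogate affects proposal quality rather than the invariant distribution.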

The estimation of the infinite exponential family model is somewhat of an issue, as it is estimated from the past history of the Markov chain, simplified into a random subsample from this history [presumably without replacement, meaning the Markovian structure is lost on the subsample]. This is puzzling because there is dependence on the whole past, which voids the standard ergodicity guarantees… For instance, we gave an illustration in Introducing Monte Carlo Methods with R [Chapter 8] of the poor impact of approximating the target by non-parametric kernel estimates. I would thus lean towards requiring a secondary Markov chain to build this kernel estimate. The authors are obviously aware of this difficulty and advocate an attenuation scheme. There is also the issue of the cost of a kernel estimate, in O(n³) for a subsample of size n. If, instead, a fixed dimension m is selected for the RKHS, the cost is in O(tm²+m³), with the advantage of a feasible on-line update, making it an O(m³) cost in fine. But again the worry of using the whole past of the Markov chain to set its future path…
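
To see where the O(tm²+m³) figure comes from, here is a sketch of the finite-dimensional construction as I understand it: random Fourier features combined with a score-matching fit, so that each sample only touches m×m sufficient statistics (the O(m²) on-line update per sample) while the coefficient solve costs O(m³) once, independent of the chain length t. Names, defaults, and the feature choice below are my own illustration, not the authors' code:

```python
import numpy as np

class FiniteFeatureSurrogate:
    """Sketch of a finite-dimensional exponential family surrogate:
    log-density modelled as theta . phi(x) with m random Fourier features,
    fitted by score matching.  Each update costs O(d m^2); the solve O(m^3)."""

    def __init__(self, dim, m=100, bandwidth=1.0, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.omega = rng.normal(scale=1.0 / bandwidth, size=(dim, m))
        self.u = rng.uniform(0.0, 2.0 * np.pi, size=m)
        self.C = np.zeros((m, m))  # running sum of outer products of feature grads
        self.b = np.zeros(m)       # running sum of feature Laplacians
        self.n = 0
        self.reg = reg

    def _derivs(self, x):
        a = x @ self.omega + self.u                      # (m,)
        dphi = -np.sin(a)[None, :] * self.omega          # (d, m) first derivatives
        d2phi = -np.cos(a)[None, :] * self.omega ** 2    # (d, m) second derivatives
        return dphi, d2phi

    def update(self, x):
        """On-line O(d m^2) update of the sufficient statistics."""
        dphi, d2phi = self._derivs(x)
        self.C += dphi.T @ dphi
        self.b += d2phi.sum(axis=0)
        self.n += 1

    def fit(self):
        """Score-matching solution: (C/n + reg I) theta = -b/n, an O(m^3) solve."""
        m = self.C.shape[0]
        self.theta = np.linalg.solve(
            self.C / self.n + self.reg * np.eye(m), -self.b / self.n)

    def grad_log_density(self, x):
        """Surrogate gradient that would be fed to the leapfrog integrator."""
        dphi, _ = self._derivs(x)
        return dphi @ self.theta
```

Because C and b are plain running sums, samples from the chain history can be streamed in one at a time, and the O(n³) Gram-matrix cost of the full kernel estimate never appears.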

Among the experiments, a KMC for ABC that follows the recent proposal of Hamiltonian ABC by Meeds et al. The arguments are interesting albeit sketchy: KMC-ABC does not require simulations at each leapfrog step; is it because the kernel approximation does not get updated at each step? Puzzling.

I also discussed the paper with Michael Betancourt (Warwick) and here are his comments:

“I’m hesitant for the same reason I’ve been hesitant about algorithms like Bayesian quadrature and GP emulators in general. Outside of a few dimensions I’m not convinced that GP priors have enough regularization to really specify the interpolation between the available samples, so any algorithm that uses a single interpolation will be fundamentally limited (as I believe is borne out in non-trivial scaling examples) and trying to marginalize over interpolations will be too awkward.

They’re really using kernel methods to model the target density which then gives the gradient analytically. RKHS / kernel methods / Gaussian processes are all the same math — they’re putting prior measures over functions. My hesitancy is that these measures are at once more diffuse than people think (there are lots of functions satisfying a given smoothness criterion) and more rigid than people think (perturb any of the smoothness hyper-parameters and you get an entirely new space of functions).

When using these methods as an emulator you have to set the values of the hyper-parameters which locks in a very singular definition of smoothness and neglects all others. But even within this singular definition there are a huge number of possible functions. So when you only have a few points to constrain the emulation surface, how accurate can you expect the emulator to be between the points?

In most cases where the gradient is unavailable it’s either because (a) people are using decades-old Fortran black boxes that no one understands, in which case there are bigger problems than trying to improve statistical methods or (b) there’s a marginalization, in which case the gradients are given by integrals which can be approximated with more MCMC. Lots of options.”

Using KMC on a joint space in a hierarchical model will break it — too many dimensions! But why would one do that? If HMC is available, use it! We include HMC results in some of the experiments and it naturally outperforms KMC and other energy surrogates (modulo some potential speed-up on “tall data” or easy models, as in Babak’s paper). But what to do when gradients are not available? Filippone & Girolami (PAMI, 2014), for example, use a random walk on the GPC marginal posterior, and KMC can surely do better than that.

Thank you very much for the feedback, Christian and Michael. We’d like to discuss a few of the points you raised.

Adaptation and ergodicity.
We certainly agree that the naive approach of using a non-parametric kernel density estimator on the chain history (as in [Christian’s book, Example 8.8](http://goo.gl/vMHEpy)) as a *proposal* fails spectacularly on simple examples: the probability of proposing in unexplored regions is extremely small, independent of the current position of the MCMC trajectory. This is not what we do though. Instead, we use the gradient of a density estimator, and not the density itself, for our HMC proposal. Just like [KAMH](http://arxiv.org/abs/1307.5302), KMC lite in fact falls back to Random Walk Metropolis in previously unexplored regions and therefore inherits geometric ergodicity properties. This in particular includes the ability to explore previously “unseen” regions, even if adaptation has stopped. I implemented a simple illustration and comparison [here](http://nbviewer.ipython.org/gist/karlnapf/da0089726c43ed52a899).

ABC example.
The main point of the ABC example is that our method does not suffer from the additional bias of Gaussian synthetic likelihoods when confronted with skewed models. But there is also a computational efficiency aspect. The scheme by [Meeds et al.](https://xianblog.wordpress.com/2015/03/13/hamiltonian-abc/) relies on finite differences and requires $2D$ simulations from the likelihood *every time* the gradient is evaluated (i.e. every leapfrog iteration), and H-ABC discards this valuable information subsequently. In contrast, KMC accumulates gradient information from simulations: it only requires simulating from the likelihood *once*, in the accept/reject step after the leapfrog integration (where gradients are available in closed form). The density is only updated then, and not during the leapfrog integration. Similar work on speeding up HMC via energy surrogates can be applied in the [tall data scenario](http://arxiv.org/abs/1504.01418).
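
To spell out the arithmetic (T, L and D below are generic symbols we introduce for the number of MCMC iterations, leapfrog steps per iteration, and parameter dimension; they are not figures from either paper):

```python
def habc_simulations(T, L, D):
    """Hamiltonian ABC with central finite differences: 2*D likelihood
    simulations per gradient evaluation, one evaluation per leapfrog step."""
    return 2 * D * L * T

def kmc_simulations(T):
    """KMC-ABC: one likelihood simulation per accept/reject step; leapfrog
    gradients come from the surrogate in closed form."""
    return T

print(habc_simulations(T=1000, L=20, D=10))  # 400000
print(kmc_simulations(T=1000))               # 1000
```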

Monte Carlo gradients.
Approximating HMC when gradients aren’t available is in general a difficult problem. One approach (like surrogate models) may work well in some scenarios while a different approach (i.e. Monte Carlo) may work better in others, and the ABC example showcases such a case. We very much doubt that one size will fit all — but rather claim that it is of interest to find and document these scenarios.
Michael raised the concern that intractable gradients in the Pseudo-Marginal case can be avoided by running an MCMC chain on the joint space (e.g. $(f,\theta)$ for the GP classifier). To us, however, the situation is not that clear. In many cases, the correlations between variables can cause convergence problems (see e.g. [here](http://arxiv.org/pdf/1301.2878.pdf)) for the MCMC and have to be addressed by de-correlation schemes (as [here](http://papers.nips.cc/paper/4114-slice-sampling-covariance-hyperparameters-of-latent-gaussian-models)), or e.g. by incorporating [geometric information](http://www.dcs.gla.ac.uk/inference/rmhmc/), which also needs fixes such as [Michael’s very own one](http://arxiv.org/abs/1212.4693). Which is the method of choice for a particular statistical problem at hand? Which method gives the smallest estimation error (if that is the goal) for a given problem? Estimation error per unit of time? A thorough comparison of these different classes of algorithms in terms of performance related to problem class would help here. Most papers (including ours) only show experiments favouring their own method.

GP estimator quality.
Finally, to address Michael’s point on the consistency of the GP estimator of the density gradient: this is discussed in the [original paper on the infinite dimensional exponential family](http://arxiv.org/pdf/1312.3516v3.pdf). As Michael points out, higher dimensional problems are unavoidably harder; however, the specific details are rather involved. First, in terms of theory: in both the well-specified case (when the natural parameter is in the RKHS, Section 4) and the ill-specified case (the natural parameter is in a “reasonable”, larger class of functions, Section 5), the estimate is consistent. Consistency is obtained in various metrics, including the $L^2$ error on gradients. The rates depend on how smooth the natural parameter is (and indeed a poor choice of hyper-parameter will mean slower convergence). The key point, in regard to Michael’s question, is that the smoothness requirement becomes more restrictive as the dimension increases: see Section 4.2, “range space assumption”.
Second, in terms of practice: we have found in experiments that the infinite dimensional exponential family does perform considerably better than a kernel density estimator when the dimension increases (Section 6). In other words, our density estimator can take advantage of smoothness properties of the “true” target density to get good convergence rates. As a practical strategy for hyper-parameter choice, we cross-validate, which works well empirically despite being distasteful to Bayesians. Experiments in the KMC paper also indicate that we can scale these estimators up to dimensions in the 100s on laptop computers (unlike most other gradient estimation techniques in HMC, e.g. the ones in your [HMC & sub-sampling note](https://goo.gl/1a1Q8v), or the finite differences in [Meeds et al](https://xianblog.wordpress.com/2015/03/13/hamiltonian-abc/)).

Thanks for the discussion. One of the examples is pseudo-marginal inference on the hyperparameters of a GP classifier (i.e. a hierarchical structure). It seems to do reasonably well, so if you combined these steps for the hyperparameters with something like RMHMC, SMMALA or elliptical slice sampling to sample the latent data conditional on the hyperparameters (as in Filippone & Girolami, PAMI, 2014) then I’d have thought you would have something which works ok, at least for *some* hierarchical models.

Thinking a little bit more about it, if you’ve used a pseudomarginal method to marginalise out part of the hierarchy, I’m pretty sure you can’t argue that it works on a hierarchical model. To make an argument that it works for this as a hierarchical model, you’d have to use KMC to *jointly* estimate the function and the parameters.

We did the sampler for the theta | y step. Clearly this is a reasonable way to go as it breaks the posterior dependence between theta and x that can cause a lot of trouble.

The state of the art suggested in previous papers was RWM for this bit, so we show that KMC did a little better.

I’m not making the argument that you can do joint sampling – we’ll have to agree to disagree on that, meaning I can’t make the previous claim.

Personally I have never seen a Markov chain Monte Carlo scheme (including Riemann Manifold Hamiltonian Monte Carlo, and there are actually several contributions on this issue in the discussions to the read paper) which can handle joint sampling of hyperparameters and latent data in an arbitrarily complex hierarchical model. There are so many complex dependencies that the posterior will look so peculiar – constructing a general purpose sampler that can handle all of those and navigate to regions of interest seems to me to be an extremely difficult task. So I would raise my eyebrows at any MCMC scheme claiming to do that. If you know one though please let me know!

Arbitrary, no. A three-level hierarchy with a big Gaussian bit at the second level is classical (I’m aware of versions since the late 90s). See, for example, all disease mapping, spatial statistics and some point process literature.

I’m late to this, but I don’t totally agree with Michael. (This is different to saying I think KMC is a good idea. At the very least I think it has a stupid name.)

I think he’s confusing penalised methods (which work) with the corresponding Bayesian method (which doesn’t). A simple example would be LASSO, which works, vs the Bayesian LASSO, which fantastically doesn’t. Kernel estimators (going back probably to Wahba) work fantastically on the location (there are some solid theory results backing this up), but a Bayesian interpretation (specifically of credible intervals) needs to be made with care (i.e. they are a disaster!). In my reading of this (brief) paper, they didn’t use the credible intervals, which possibly makes the approximations ok.

Basically, I think MB has made the “inverse Bayesian quadrature error” in assuming a meaningful Bayesian model for something that only makes sense at the mode. (c.f. Bayesian Quadrature, which is basically crap)

I still don’t think this would work for interesting (i.e. hierarchical) models. I am not convinced by the examples, but it looks like a NIPS paper, so I wouldn’t expect to be.