Harmonic mean estimators

In connection with my recent talk at MaxEnt 2009, and with older talks and posts, Darren Wraith and I have just written a very short note on some computational methods used for approximating Bayes factors. The note has now been arXived; it is strongly based on the slides for the talk. Nonetheless, I think there is an interesting point made in the section on harmonic mean estimators, namely that (a) from an MCMC output, it is easy to determine highest posterior density (HPD) regions by looking directly at the numerical values of the posterior density (up to a constant), and (b) those HPD regions can be used to construct pseudo-proposals whose support is one of those regions. This removes the infinite variance difficulty rightly stressed by Radford Neal. In short, the fact that
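Point (a) can be sketched in a few lines. This is my own illustration, not code from the note: the function name and the choice of a 20% cut are mine; the only ingredients are the MCMC draws and their unnormalised log-posterior values, which any sampler already computes.

```python
import numpy as np

def empirical_hpd(samples, log_post, alpha=0.2):
    """Keep the 100*(1-alpha)% of MCMC draws with the highest
    (unnormalised) log-posterior values: an empirical HPD region,
    read directly off the numerical posterior values."""
    cutoff = np.quantile(log_post, alpha)
    return samples[log_post >= cutoff], cutoff

# toy use: draws from a N(0,1) "posterior", density known up to a constant
rng = np.random.default_rng(42)
draws = rng.normal(size=10_000)
logp = -0.5 * draws**2              # unnormalised log-posterior
hpd_draws, cut = empirical_hpd(draws, logp, alpha=0.2)
```

Since the cutoff only needs the log-posterior up to an additive constant, the unknown normalising constant m(x) never enters this step.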

$$\mathbb{E}^{\pi}\left[\left.\frac{\varphi(\theta)}{\pi(\theta)L(\theta)}\,\right|\,x\right]=\int\frac{\varphi(\theta)}{\pi(\theta)L(\theta)}\,\frac{\pi(\theta)L(\theta)}{m(x)}\,\text{d}\theta=\frac{1}{m(x)}$$

holds for any density φ is of interest for approximating m(x), and therefore Bayes factors, only when φ is such that the corresponding importance sampling estimator has a finite variance. The infamous but common choice φ=π is rarely appropriate in this regard. However, if one uses the output of an MCMC sampler to determine an empirical highest posterior density region, restricting φ to this region, or to an approximation of it such as an ellipsoid, escapes the infinite variance difficulty.
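To make the construction concrete, here is a one-dimensional sketch on a toy conjugate model where m(x) is known in closed form, so the estimate can be checked. All names are mine, the one-dimensional "ellipsoid" is simply an interval, and exact posterior draws stand in for an MCMC output; this is an illustration of the idea, not the note's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: theta ~ N(0,1), x | theta ~ N(theta,1),
# so m(x) = N(x; 0, 2) and the posterior is N(x/2, 1/2).
x = 1.0
true_m = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

# Stand-in for an MCMC output: exact posterior draws.
T = 50_000
theta = rng.normal(x / 2, np.sqrt(0.5), size=T)

# Unnormalised posterior pi(theta) * L(theta), on the log scale.
def log_pi_L(t):
    return -0.5 * t**2 - 0.5 * (x - t) ** 2 - np.log(2 * np.pi)

lp = log_pi_L(theta)

# (a) empirical 80% HPD region from the numerical posterior values
cut = np.quantile(lp, 0.2)
inside = theta[lp >= cut]
a, b = inside.min(), inside.max()     # 1-D "ellipsoid" = an interval

# (b) phi uniform on [a, b]: its support lies inside the HPD region,
# so phi/(pi L) is bounded and the estimator has finite variance
phi = np.where((theta >= a) & (theta <= b), 1.0 / (b - a), 0.0)
inv_m_hat = np.mean(phi * np.exp(-lp))  # estimates 1/m(x)
m_hat = 1.0 / inv_m_hat
```

Because φ vanishes outside the HPD region while the posterior is bounded away from zero on it, the ratio φ/(πL) stays bounded, which is exactly what the infamous choice φ=π fails to guarantee.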
