A clarification...by "truth" in the second paragraph below I don't mean to imply that
critiques of reconstruction methods based simply on examinations of ensemble distributions
should stand, per se. I only meant to say that recognition of the universe of possible
reconstructions is a worthwhile addition to our knowledge. Conceivably, doing this might
help us refine our validation schemes.

Caspar and I took a first look into this set of issues in the companion piece to the
Wahl-Ammann paper in Climatic Change last fall (the Ammann-Wahl article there), to deal
with critiques of validation methodology raised by MM. We revisited their ensemble
approach (reconstructions driven only by the full-AR persistence structure in the proxies),
but restricted its output with the kinds of calibration and verification criteria we use in
actual practice (which MM did not do). The idea was to do exactly the kind of geophysical
contextualization that Caspar mentions -- thereby incorporating the ensemble method, but
also embedding the ensemble output into the real world decision-making structure we use
with all our reconstructions. Interestingly, the results are quite similar to verification
significance results based on small-lag AR structures in the target series itself, the
general way this issue is approached in climatology.
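The screening step described above can be sketched in a few lines. This is a toy
illustration only, not the Wahl-Ammann code: it generates an MM-style ensemble of
AR(1) red-noise "reconstructions" and then keeps only the members that pass a
verification-period RE (reduction of error) check against a target series. The
persistence parameter, ensemble size, window lengths, and the RE > 0 threshold are
all illustrative assumptions.

```python
# Toy sketch: red-noise ensemble restricted by a verification RE criterion.
# All parameter choices (phi, sizes, threshold) are illustrative assumptions.
import random

random.seed(0)

def ar1_series(n, phi=0.7, sigma=1.0):
    """Red-noise series with lag-1 persistence phi (an assumed value)."""
    x = [random.gauss(0.0, sigma)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0.0, sigma))
    return x

def re_statistic(obs, rec, cal_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-mean benchmark)."""
    sse = sum((o - r) ** 2 for o, r in zip(obs, rec))
    sse0 = sum((o - cal_mean) ** 2 for o in obs)
    return 1.0 - sse / sse0

n_cal, n_ver = 80, 40
target = ar1_series(n_cal + n_ver)        # stand-in for the instrumental target
cal_mean = sum(target[:n_cal]) / n_cal    # calibration-period mean

ensemble = [ar1_series(n_cal + n_ver) for _ in range(500)]
# Keep only members with verification skill above the (illustrative) threshold.
passed = [m for m in ensemble
          if re_statistic(target[n_cal:], m[n_cal:], cal_mean) > 0.0]
print(f"{len(passed)} of {len(ensemble)} members pass RE > 0")
```

Since the ensemble members carry only persistence and no climate signal, most fail
the verification check, which is the point of restricting the ensemble output
rather than describing its full distribution.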

The statisticians have a point in that we are really sampling from noisy proxies that
themselves are sampling from one of many possible realizations of climate for a given set
of forcings (thinking of model ensembles, e.g., all with slightly perturbed initial
conditions). However, we in the paleoclimate part of geophysics (and other disciplines
that use similar or identical methods, such as econometrics) have clearly recognized the
need to separate "wheat from chaff" in forecasting/hindcasting models -- hence the
calibration and verification exercises we perform.

So, it seems to me (at least on a first pass) that there is "truth" in both perspectives on
the problem. It would be interesting to explore what our validity screening procedures are
in fact doing from a purely mathematical theoretical standpoint...what is the effect of the
truncation of possibilities that our validation procedures entail on the underlying
geometries we are examining? That could be one way to bridge the difference between the
statistical and geophysical perspectives Caspar identifies. [Let me know if you think I've
got something incorrect in this Q.]

I wouldn't read too much into this. I believe that all we are looking at is the difference
between a statistician's approach and ours in geophysics. The statisticians like to simulate
many ensembles. I had the same discussions with our guys at NCAR. Their tendency is
to include all possible reconstructions and then describe the distributions. Our approach
has been to throw away reconstructions that don't make sense or that don't pass
verification. So it's more philosophical than anything else.

Though Mike might be right in the sense that these choices can lead some of these
approaches astray. We saw this with regard to the selection of uncertainties -- what
actually counts as independent uncertainty. There, a good, strong reality check is necessary.

So we shall see in Vienna ...

Caspar

On Mar 30, 2008, at 8:32 PM, Michael Mann wrote:

Malcolm, in short, this looks like nonsense. There is nothing magic about 'Bayesian'
methods. Many of the methods we use can easily be recast as Bayesian approaches; the
critical question comes down to what the "prior" is. For example, in RegEM, the prior is
the first 'guess' in the iterative expectation-maximization algorithm. Of course, if the
final result is sensitive to that choice, one becomes a bit worried -- the pitfall indeed
of many a Bayesian approach.
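The point about the first guess acting as a prior can be seen in a toy EM-style
imputation (this is not RegEM itself; the data, the regression model, and the
iteration count are all made-up illustrations): missing values are initialized with
a guess, then the iteration alternates between fitting a regression on the filled-in
data and re-imputing the missing values from the fit.

```python
# Toy EM-style imputation: the "prior" is the first guess for the missing
# values. Illustrative sketch only, not the actual RegEM algorithm.

def em_impute(x, y, missing, y0, iters=200):
    """Iteratively impute y[i] for i in `missing`, starting from guess y0."""
    y = list(y)
    for i in missing:
        y[i] = y0                      # the first guess -- i.e. the prior
    for _ in range(iters):
        # M-step: ordinary least squares fit of y on x, using all values
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        # E-step: re-impute the missing values from the current fit
        for i in missing:
            y[i] = a + b * x[i]
    return y

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, 1.9, None, None]
r1 = em_impute(x, y, missing=[3, 4], y0=0.0)
r2 = em_impute(x, y, missing=[3, 4], y0=100.0)
# In this well-behaved toy case the iteration forgets the initial guess:
print(abs(r1[3] - r2[3]))
```

In this simple linear case the fixed point is the fit to the observed data alone,
so the two starting guesses converge to the same answer; Mike's worry applies when
the final result does *not* forget the first guess.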
mike
Caspar Ammann wrote: