Ammann’s April Fool’s Joke

Yesterday, the Economist had an amusing April Fool’s joke, announcing an “Econoland” theme park that “combines the magic of a theme park with the excitement of macroeconomics”.

AS PART of a strategy designed to broaden the revenue base, leverage content over new platforms and promote The Economist brand to a young and dynamic audience, The Economist Group is delighted to announce the development of a public-entertainment facility that combines the magic of a theme park with the excitement of macroeconomics.

Sort of like the contradiction of trying to write a popular blog on paleoclimate and statistics. The combined excitement of dendrochronology and multivariate calibration. And the hair-raising adventure of going where no explorer from the civilized world has ever gone before – into the dank world of RegEM TTLS.

It reminded me of last year’s clever joke from Caspar Ammann – the Paleoclimate Challenge, aptly titled the PR Challenge, announced in a webpage here and discussed at CA here (inter alia). They diagnosed one of the “PR” problems of paleoclimate as being a lack of replicable data and methods, as for example here:

Most concerns regarding available climate reconstructions arise from:
– The small number of proxies of acceptable quality
– Changes in proxy sensitivity to climate over time
– Small sample sizes
– Uncertainties in the ability of statistical algorithms to recognize and reproduce climate variations against the noise at various timescales.
– Differences in implementation of the “same” reconstruction algorithms
– ‘Tuning’ of algorithms and/or choice of proxy networks in order to achieve a desired result.
Such criticisms cloud efforts to provide an extended record that forms a crucial basis for climate change predictions. The paleoclimate community needs to find ways of reassessing its methods to build confidence in the reconstruction efforts.

Broader community goals and benefits:
– Transparent discussion on the state of knowledge about climate of the last 1 – 2 millennia.
– Open access to reconstruction codes, documentation, data and validation methods, and stimulation of use of the NOAA World Data Center for Paleoclimatology as the repository for proxy data.
– Enhance interaction between proxy-paleo, modeling and statistics communities.
– Enable the development of novel methods through well-documented presentation of current status (including successes and deficiencies).
– Emphasize the need for new approaches in hand

The PR Challenge promised that they would “build an open reconstruction access point web site” by April 2009:

Interestingly, in May 2005, Ammann made similar promises when UCAR announced the submission of Wahl and Ammann (to Climatic Change) and Ammann and Wahl (to GRL). As previously discussed here, Ammann and Wahl was rejected by GRL (twice), posing a bit of a conundrum for IPCC 2007, which relied on the then unpublished Wahl and Ammann, which in turn relied on the rejected paper for key results. (They solved this by a bait-and-switch: a new “Ammann and Wahl” was submitted well after the IPCC acceptance deadline, with all the references in the “accepted” Wahl and Ammann switched to the new paper – a history well told by Bishop Hill in “Caspar and the Jesus Paper.”) In several UCAR webpages here, here and here, Ammann promised four years ago (May 10, 2005) to deliver “Community Codes in open source, incl test data”, listing here the suite of paleo reconstructions regularly discussed here.

At the time of the original announcement on May 10, 2005, Ammann published their code for MBH emulation, which I reconciled to ours within a few days. Nearly four years later, the only new source code archived by Ammann is their adaptation of our RE benchmarking code here, which follows our archived code right down to the names of minor variables (my nomenclature is a bit better now than it was then, but variables like NM, Data1, … which occur in the Ammann code also occur in our archived code). Not a lot of production.

Needless to say, the PR Challenge doesn’t mention either Climate Audit or M&M though we’ve obviously been the major forces in raising these issues. Nor have they “solicited input” from the active community here. It’s also pretty obvious that Climate Audit represents by far the largest effort to collect source data and replicate results and had already established substantial interaction with a highly interested segment of the statistical community.

We’ll see how Ammann fares in getting his website up after working on this for years. Most of the work is already done at Climate Audit. I wonder if NOAA would fund me to do what they’ve failed to do.

The “PR” project itself may be a hoax, but evidently its funding is for real. The CLIVAR webpage at http://www.clivar.org/organization/pages/pages.php features a group photo of a meeting in Italy, with Michael Mann in the center holding a champagne glass. CLIVAR itself is sponsored by Unesco and the WMO.

I was wondering about the carbon footprint of the workshop in Italy, with champagne for all. The workshop consisted of a main session with 30-minute presentations and 10-minute Q&As, followed by some breakout sessions over the next few days.

One can wonder what was accomplished there that could not have been accomplished with some free web tools, or even the new telepresence tools from Cisco and others: no cost to the sponsors and no carbon emitted.

The goal is to offer a standardized access point for all currently existing, as well as future, reconstruction methods. The aim is to collect the:
– Actual computer code used
– Detailed description and documentation
– Original input proxy data
– Original instrumental data
– Original output data

The project description page cited says that the actual Access Point web site is “to be provided soon”. The project’s Timeline page says that the Access Point will be built by April 2009. Yet a Google Search for it just turns up the project description pages.

Perhaps the next 29 days of April will be sufficient for them to at least set up a URL where the promised content would have been if they had in fact assembled any of it.

Of the five sources of uncertainty discussed in the previous section, three of them
(timescale uncertainty, spatial uncertainty, sampling uncertainty) can be readily
represented by including Monte-Carlo simulations of the influence on the reconstruction
that results. Incorporating “timescale wiggle” into such calculations is straightforward
and should probably be adopted routinely.

Quantitatively representing the effects of possible changes in the relationships between
the proxy variables measured, and the climate variable they are supposed to represent,
is far more problematic.

RE #2, TAG #4,
On skimming over the CLIVAR workshop report TAG links, I admittedly see no mention of the PR project itself, so perhaps the workshop itself may not have directly received funds from this particular budget line item.

But if any funds have actually been disbursed under the PR project, they should at the minimum create an Access Point URL by the end of this month, even if it is only an empty page where wishful “future content” will go.

Do NOAA records show if anything has been spent? If 10% of proxies are already archived, it shouldn’t be all that hard, for the transparently “PR” purposes of this project, just to set up a page of links to existing archives.

RE TAG #5, the summary tables in AR4 indicate that human negative effects on temperature (land use, aerosols, contrails, etc) are almost half as potent as human positive effects (GHG etc). So if we just redouble our negative effects, eg with more contrail-creating international conference travel, we can perhaps neutralize the GHG emissions.


Jim Hansen has stated that we can control the climate to the extent that we can avoid future ice ages with, as I recall, one factory producing halocarbons for release to the atmosphere – the non-ozone depleting chemicals, I assume.

With Hansen’s tipping points coming due sooner rather than later with regards to GHG and AGW, I would not be surprised to see him making proposals for man to artificially cool the earth.

The Economist obliquely referenced the transparency question in an article in their March 12 edition celebrating 20 years of the Web: “What’s the Score?”

Apparently Nature has tried online peer review but the scientists don’t like it. The problem is that “Scientists publish, in part, because their careers depend on it.”

The Economist hopes for a collaborative future. “[N]o one yet knows how to measure the impact of a blog post or the sharing of a good idea with another researcher in some collaborative web-based workspace.”

But suppose the solution is not a “collaborative” workspace but an “adversarial” workspace?

Then, of course, the solution is already implemented and working like a charm, right here in River City. Only, of course, the scientists, particularly the more political scientists, don’t like it.

By the way, the Economist suggests that “there may even be a knighthood” for the chap who can invent the new web-based scientific paradigm.

Congratulations, “Sir Steve.” But I think you should hold out for a KCB.

A couple of replies that would not be directly stated but that I could imagine coming from some in this particular science community are: April Fools, we never intended to follow through so you whiny folks should get a life — or — What is it that you do not understand about “collection of reconstruction codes and documentation, collect existing model run data and prepare for pseudo-proxy calculation, identify networks and develop forward models or off-line regression models for pseudo-proxies in consultation with key specialists, solicit input on reconstruction targets from reconstruction community and build Open Reconstruction Access Point Web Site”?

Platitudes will not turn into deeds until the community sees a need for it. They seem to me to be rather content at the moment.

Well, I followed the link to the original page, and found it exactly as described, standing as a beacon of optimism. It’s kind of like all those parents who announce to their Nursery School manager that some day little Josh will be going to Harvard.

Another PAGES/CLIVAR project, the Paleoclimate Reconstruction Challenge, received generous support from NOAA and has now officially started. The “challenge” for the community is to reconstruct synthetic (modeled) climate scenarios from synthetic (pseudo) proxies. Dig deeper at http://www.pages-igbp.org/science/prchallenge/

So the project
* Has officially started
* Has “generous support”
* The community is challenged to reconstruct climate from proxies

Why is the community challenged? Haven’t the researchers already done the work? The researchers’ work should only have to be made available.

This is a partial list of activities, mostly with external partners, designed to produce improved and expanded data sets needed to meet specific science objectives
…
Title: Paleoclimate Reconstruction Challenge
Relates to these initiatives: NOAA/OAR/CPO, PAGES-CLIVAR
External partners: Caspar Ammann (NCAR), Nick Graham (Scripps, HRC), Roseanne D’Arrigo (LDEO), Thorsten Kiefer (PAGES)
Paleo participants: E. Wahl, D. Anderson, B. Bauer, E.Gille
Begin date: 2008
End date: 2011
Description: The activity addresses concerns in climate reconstruction over the past 2000 years regarding selection of specific proxy networks, the potential inability of included proxies to resolve information at all time scales, and the capability of the reconstruction algorithms themselves. Individual reconstruction groups will be provided a small set of realistic pseudo-proxy series and calibration “instrumental data” drawn from coupled AOGCM output, and will be asked to reconstruct the simulated climate evolution. By comparing these reconstructions with the full, “true” model climates, each group can assess their performance in great detail. A key objective of this project will be to document how much of the true climate can be described with the combined set of reconstruction results, to determine which aspects of the overall or regional climate are captured well, and whether important elements are being missed. The “Challenge” will improve the exchange among the paleoclimate reconstruction groups and provide a platform for enhanced interaction with the associated disciplines in Climate Modeling and Statistics, with particular focus on developing more formal assessment and quantification of uncertainty and regional climate understanding. The results of the Challenge will support and steer the community to develop strategies for improving the reconstruction methods so that past climate variations can be better understood. NOAA-Paleoclimate is playing a key role in terms of data access and conceptual input, and will also participate in the climate reconstructions themselves.

They’re going to compare pseudo-proxies to climate model output. Is this supposed to be related to reality?

As Craig already mentioned, not a bad idea! Just make sure your proposal includes the words “climate change” so they can justify the funding. I’m quite confident that you could write up a viable proposal based on the need to archive data required for analysis of the impact of climate change. You can easily justify the need for the project by citing the same people.

Hey Steve, what about a cordial note to Kim Cobb to see if she knows anything about the status of the project? IIRC, she made the trip to Italy. If not Dr. Cobb, perhaps her boss knows something about it.

The idea to use growth rings to work out past climate change is not new, but Trouet’s team is the first to look back beyond 1400 in the European record. They found that the strongly positive NAO lasted for about 350 years from 1050 to 1400.

Re: Raven (#31), Raven – thanks for the link. I just read Ch4 of this and it makes some good points (including issues with satellite records – p95). It seems to be arguing for better treatment of data and the need for a systematic/multidisciplinary approach. Motivation seems to be to give “decision makers” reliable info. for policy formulation. The report has been around since Dec08 so apologies if this is old/known material – I did a CA search and found ref. to a predecessor report v1-1 on another thread.

This thread seems ok though – if I’ve understood correctly I think there might be a chance of NOAA hitting the tip jar as they seem to be recommending doing some of what CA does … :)

Report here:

“Reanalysis and Attribution: Understanding How and Why Recent Climate Has Varied and Changed”

I like this quote from the Climate Ark article above: Michael Mann at Pennsylvania State University says that based on the analyses and modelling that he has done, increased solar output and a reduction in volcanoes spouting cooling ash into the atmosphere could have not only kicked off the medieval warming, but might also have maintained it directly.

Mann is also concerned that the dominance of medieval La Niña conditions now indicated by Trouet’s work might make it more likely that the current man-made warming could also put the El Niño system back into a La Niña mode, although most climate models so far had predicted the opposite.

“If this happens, then the implications are profound, because regions that are already suffering from increased droughts as a result of climate warming, like western North America, will become even drier if La Niña prevails in the future”, he says.
.
It looks to me like he’s haunted by that MWP he caused to go missing. He now seems to agree there WAS a MWP. He threatens to reverse the current models’ predictions to uphold that. He manages to include the necessary alarmist ingredient (only I think the drought picture is misleading as well). And he is omitting the known cyclicity of the El Nino to try to uphold the picture. Quite a lot. Or did I misunderstand him?

Why did you have to post a New Scientist link!!?:) I couldn’t help but look and nearly spoil my day. Apparently someone has found a reason for the Medieval warming that the famous ‘spaghetti graph’ (shown) proves didn’t happen.
How could anyone take this seriously? Then I looked at the links to other articles from that page…’Masturbation brings hay fever relief’, ‘Scientist spends four years studying navel fluff’, ‘Spanking brings couples together’… and I realized that the whole site is in fact an April fool joke.

This paper IS for real! I located the SI for the paper here and it uses a method with which I was previously unacquainted: PSR or Proxy Surrogate Reconstruction. The description of the method given in the SI was

PSR reconstruction. PSR (S9) is a method for making proxy-guided multi-variate reconstructions from climate model data. Let P be an array of NP time series (length NTP) of climate proxy data that correspond to output or derived variables from a climate model. Let M be an array of NP time series (length NTM) of the corresponding climate data drawn from a climate model simulation. For each of the i = 1, NTP proxy “observations” (Pi) find the model output temporal index (j, in the range 1, NTM) for which the similarity between Pi and Mj is maximized. In a reordering vector (O), set Oi = j, i.e., the model data from time index j is most similar to the proxy data from time index i. When all the NTP realizations have been so treated, O provides a reordering of the model data for which the model-proxy agreement at the NP proxy locations is maximized.

What the heck does that mean??? Re-order the model output so it matches the proxy??? !!!
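Stripped of notation, the quoted procedure appears to reduce to a nearest-match lookup. A minimal sketch, assuming that “similarity” means smallest absolute difference (the SI leaves the measure unspecified, and every name and number below is mine):

```python
# Sketch of the SI's reordering vector O: for each proxy observation P_i,
# record the model time index j whose value is most similar to P_i.
# "Most similar" is assumed here to mean smallest absolute difference.
def psr_reorder(proxy, model):
    O = []
    for p in proxy:
        j = min(range(len(model)), key=lambda k: abs(model[k] - p))
        O.append(j)
    return O

proxy = [0.2, 1.5, -0.3, 0.9]       # toy proxy "observations" (length NTP)
model = [1.4, -0.2, 0.8, 0.1, 2.0]  # toy model series (length NTM)

O = psr_reorder(proxy, model)           # reordering vector
reordered = [model[j] for j in O]       # model data reordered to match the proxy
```

Note that nothing stops the same model index from being picked twice, and the model’s own time ordering is discarded entirely.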

I could only find one reference to PSR on Climate Audit and that was comment #56 in the IPCC AR4 thread on a paper whose authors include Ammann.

The aim of the exercise is based on AOGCM output as a surrogate for real world-like climate information. From the climate model output, we will generate synthetic, but realistic, climate proxy (“pseudo-proxy”) information, which will then be used to generate climate reconstructions. As the “true” model climate is known, the methodologies (various climate reconstruction algorithms) utilized by the different groups can be tested in their skills to reconstruct the underlying climate from the pseudo-proxy information – this is the Paleoclimate Reconstruction (PR) Challenge.

Surrogate for real-world!? Pseudo-proxies!?
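For what it’s worth, pseudo-proxies in exercises of this kind are conventionally manufactured by degrading a model series with noise at a chosen signal-to-noise ratio. A generic sketch, not the Challenge’s actual recipe (the series, SNR and seed are all invented):

```python
import random
import statistics

random.seed(2)

# Stand-in for an AOGCM temperature series: a weak trend plus internal variability.
model_temp = [0.01 * t + random.gauss(0, 0.5) for t in range(500)]

SNR = 0.5  # assumed signal-to-noise ratio (by standard deviation)
noise_sd = statistics.stdev(model_temp) / SNR

# Pseudo-proxy: the model "truth" degraded with white noise.
pseudo_proxy = [x + random.gauss(0, noise_sd) for x in model_temp]
```

The reconstruction groups then work from series like pseudo_proxy, and their output can be scored against model_temp, since the model “truth” is known.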

AndyL – The year extends from June to April for academics – no comment.

Re: RomanM (#35),
If you think the methods are insufficiently specified to replicate the work, write to the author and write to Science. Be as nice as pie and maybe you’ll get a favorable response. Try to avoid words like “insane”.

I could only find one reference to PSR on Climate Audit and that was comment #56 in the IPCC AR4 thread on a paper whose authors include Ammann.

Did you read reference S9 cited by Trouet et al? It is by Graham et al. You will clearly not understand PSR until you read that paper. Indeed, the methods as specified in the Trouet SI may appear “insanely” brief. I’ve mentioned dozens of times the premium placed on space in Science and Nature. Such brevity should come as no surprise. You need to audit Graham et al 2007, listed as S9 in the SI.

In an unprecedented move Wednesday, the Norwegian Nobel Committee rescinded the Peace Prize it awarded in 2007 to former US vice president Al Gore and the United Nations Intergovernmental Panel on Climate Change, amid overwhelming evidence that global warming is an elaborate hoax cooked up by Mr. Gore.

Shortly after I wrote to Steig, the AVHRR data magically appeared, although he couldn’t bring himself to reply personally to me (unless you count the automatically generated “I’m out of the office gallivanting in Antarctica” message). However, Dr. Comiso should be complimented for his gentlemanly communication when I had contacted him. So sometimes “nice” works.

Abstract
Terrestrial and marine late Holocene proxy records from the western and central US suggest that climate between approximately 500 and 1350 a.d. was marked by generally arid conditions with episodes of severe centennial-scale drought, elevated incidence of wild fire, cool sea surface temperatures (SSTs) along the California coast, and dune mobilization in the western plains. This Medieval Climate Anomaly (MCA) was followed by wetter conditions and warming coastal SSTs during the transition into the “Little Ice Age” (LIA). Proxy records from the tropical Pacific Ocean show contemporaneous changes indicating cool central and eastern tropical Pacific SSTs during the MCA, with warmer than modern temperatures in the western equatorial Pacific. This pattern of mid-latitude and tropical climate conditions is consistent with the hypothesis that the dry MCA in the western US resulted (at least in part) from tropically forced changes in winter NH circulation patterns like those associated with modern La Niña episodes. We examine this hypothesis, and present other analyses showing that the imprint of MCA climate change appears in proxy records from widely distributed regions around the planet, and in many cases is consistent with a cool medieval tropical Pacific. One example, explored with numerical model results, is the suggestion of increased westerlies and warmer winter temperatures over northern Europe during medieval times. An analog technique for the combined use of proxy records and model results, Proxy Surrogate Reconstruction (PSR), is introduced.

Ok, I looked at the paper you referenced. Here is what their description looks like:

We introduce a new technique (Proxy-Surrogate Ranking, PSR) to assist in inferring past changes in large-scale climate and circulation. PSR is an analog method [using elements from the “trend-surface” approach described by Graumlich (1993)] in which numerical model output is reordered to obtain temporal agreement between a proxy data series (Y) and a corresponding subset of the model output (Y*; both Y and Y* may be multivariate). The goal is to reorder the model output (the “surrogate” data) so that there is good serial agreement between Y and Y*. The reordered model data can then be used to examine possible multivariate scenarios of past climate that are consistent with the original proxy data. For example, one might have a proxy-derived regional precipitation index (Y) and a corresponding index from a model simulation (Y*; Y and Y* needn’t have the same length). The model index is reordered so that it agrees well (in time) with the proxy series, then this reordering applied to the full model output. The reordered model data can then be used to examine how, for example, 500 hPa heights may have evolved as consistent with behavior of the original proxy index.

I presume that their rationalization goes something like this (hey, I haven’t read the whole thing – it’s 45 pages long!):

They have model output which contains multiple variables. These variables are obviously inter-related as in the real world. Now, pick a proxy P. P is related in the real world linearly to one (or more?) variables represented in the model output, say X. So create a “reference table” between P and X. Now, reorder X temporally so that X and P are highly correlated dragging along all of the other model variables. Voila, you know what these other variables did in the real world 1000 years ago. It’s that simple!

All you need is proxies closely related to some real world variables, models which accurately reflect that reality, models whose output inter-relationships are also accurate, …. It’s as easy as reconstructing the temperature.

Lemme see if I got this straight. You have a proxy series that is a realization of the actual physics of the earth. And then you run a GCM a bunch of times and you get N series of model generated proxies. And then for each data point in the “observed” proxy series, you hunt through the generated proxy series and find the winning ticket. And then you line up all your winning tickets and you have won the Paleo lotto. Interesting. So, the more times I run the GCM and the wider the variance of the model, the higher my probability is of finding a winning ticket. So that a real “observed proxy”, including all its error, can at the limit be matched exactly by GCM output. There is actually a much better way of doing this. (Too bad April 1 has slipped me by.)

You have an Observed proxy series N
Fire up thousands of instances of your GCM. Examine the output after the first step.
Check The GCM output against the “observed” proxy series.
Does it match? If yes continue those threads, if no terminate those threads.
(“matching” criteria are TBD; heck, we could relax the criteria as a function of time and simulate uncertainty bars that increase as we go back in time)

heck, we could make this a community effort and exploit unused computer cycles around the world. The truth is out there.
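The scheme sketched above is essentially a sequential rejection filter. A tongue-in-cheek toy version, with a random walk standing in for the GCM (the series, tolerance, step size and everything else here are invented for illustration):

```python
import random

random.seed(0)

observed = [0.0, 0.3, 0.1, 0.5, 0.4]   # the "observed" proxy series
TOL = 0.25                             # "matching" criterion -- TBD, as noted above

def gcm_step(x):
    # stand-in for one GCM time step: a random walk increment
    return x + random.gauss(0, 0.3)

runs = [[observed[0]] for _ in range(10000)]   # fire up thousands of instances
for t in range(1, len(observed)):
    survivors = []
    for r in runs:
        r.append(gcm_step(r[-1]))
        if abs(r[-1] - observed[t]) < TOL:     # does it match? continue the thread
            survivors.append(r)                # non-matching threads are terminated
    runs = survivors

# every surviving run now tracks the proxy to within TOL at every step
```

Run enough instances and some thread will always “win”, which is the point of the joke: with a wide enough ensemble, matching the observed series demonstrates nothing.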

Stephen, my interpretation of what RomanM offered differs from yours. RomanM’s is constrained by finding a variable (or variables) match between model and proxy and then bringing the other variables along without changing their relative relationship to the matched variable(s). That does not sound so bad on the face of it – if the selection criterion were made a priori.

What bothers me is that evidently the match can be made by unhooking the relative time references from either(?) the proxy or modeled series. That to me would indicate that model and/or proxy series start dates can be a major source of error in reconstructions (and models?) but that the sequences are better established. Are the authors referring to tree rings or ice cores here or making a general statement?

I’ve tried reading their explanation in the paper and I’m not sure that I completely understand it, because they present it with a lot of obtuse armwaving.

I’ll try to interpret it in a way that might make a modicum of sense:

I have a proxy P which takes these values temporally in order: (1,7,8,9,15) for five consecutive time periods. This proxy can either be physical measurements or itself a proxy reconstruction of a physical phenomenon. For the sake of this discussion, suppose it is temperatures.

Now suppose I run a model from which I generate 5 SSTs (13,20,15,17,10) and corresponding rainfall (1,0,3,2,6). Where it gets confusing is that, apparently, the variables generated in the model need not be “copies” of the proxy, the length of the model sequences need not be the same as the length of the proxy sequence, nor is it required that the model runs represent the same time period as the proxy. As well, both the proxy sequence and the model sequences can be functionally transformed values. They only ask that there be “some meaningful correspondence” between the proxy and the matching model variable.

Now comes the operational part. We will reorder the computer output to make the output of the model “similar” (their word, not mine) to the proxy. Since this is not mathematically spelled out, I would assume that it means something like:

We can maximize the correlation by having the model SSTs in the order: (10, 13, 15, 17, 20). This would put the rainfall in the order: (6,1,3,2,0). I would now assume that I have just reconstructed the rainfall for the same time periods as the proxy.
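RomanM’s toy numbers can be checked in a few lines, assuming that maximizing correlation here amounts to rank-matching (my reading of the procedure, not anything the paper spells out):

```python
proxy = [1, 7, 8, 9, 15]      # proxy values for five consecutive time periods
sst   = [13, 20, 15, 17, 10]  # model SSTs, matched to the proxy
rain  = [1, 0, 3, 2, 6]       # companion model rainfall, dragged along

def ranks(xs):
    # ranks[i] = rank of xs[i] among xs (0 = smallest)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

proxy_rank = ranks(proxy)                                  # [0, 1, 2, 3, 4]
sst_order = sorted(range(len(sst)), key=lambda i: sst[i])  # model indices by value

# reordering vector: pair the i-th smallest proxy value with the i-th smallest SST
O = [sst_order[r] for r in proxy_rank]

reordered_sst  = [sst[j] for j in O]    # perfectly rank-correlated with the proxy
reordered_rain = [rain[j] for j in O]   # the "reconstructed" rainfall
```

The reordering that makes the SSTs track the proxy drags the rainfall along with it, which is exactly the step described in the text.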

To quote the paper:

This technique has the advantages of allowing broad flexibility in the definition of “similarity,” and if the surrogate set is multi-variate model output (as in this paper), the cross-spatial and cross-variable relationships inherent in model physics are conserved. This last point allows suggestions to be entertained concerning the evolution of fields other than those directly involved in the construction of Y*. The method has the primary disadvantage of being relevant only where the range of Y is covered by Y*, and of course is subject to assumptions concerning whether the relationship between the proxy and model indices (Y and Y*) is meaningful, and their relationships to reality.

It seems to me that this is the iffiest concept since using teleconnection as a mathematical reality. But hey, it’s climate science…

I am not remotely qualified to comment on the statistical methods used in these papers but would I be wrong in stating that there seems to be the same kind of over reliance on ‘novel and clever’ algorithms and computer models in the climate change community as has recently brought the financial community to the brink of collapse, with the resultant fallout for us all?

I sense that these “novel and clever” algorithms are the result of a “it seemed like a good idea at the time” type of thinking. Unfortunately, they also tend to indicate a lack of gut level understanding of what data analysis is all about.

There is always an initial grain of truth in the basic methodology, but it is then stretched one step farther without realizing that they may have bent or broken assumptions underlying the analyses in the process. Proceeding as if nothing in the underlying statistical structure has changed – e.g. applying PCA selection procedures which are no longer applicable in decentered PCA, or smoothing data before analysis and then calculating error bounds as if it were raw data – leads to serious underestimation of such bounds as well as serious possible statistical bias in their results.
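The smoothing point is easy to demonstrate numerically. A toy sketch (white noise and an 11-point moving average are arbitrary choices of mine): smoothing shrinks the apparent variance, so error bounds computed from the smoothed series as if it were raw data come out far too narrow.

```python
import random
import statistics

random.seed(1)

n, w = 1000, 11
raw = [random.gauss(0, 1) for _ in range(n)]

# w-point moving average: each smoothed value averages w raw values
smooth = [sum(raw[i:i + w]) / w for i in range(n - w + 1)]

sd_raw = statistics.stdev(raw)        # close to 1
sd_smooth = statistics.stdev(smooth)  # roughly 1/sqrt(w) of sd_raw, i.e. ~0.3
```

Error bounds scaled from sd_smooth would be about a third of the width warranted by the underlying data, quite apart from the autocorrelation that the averaging induces.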

Even when aware of such problems, the justification for the use of the new methods may be a single example on data which may not be representative rather than a proper theoretical and practical evaluation.

What bothers me about this latest PSR novelty is that there seems to be no recognition that the time series structure of the model sequences is altered or completely destroyed by the rearrangement (if I understand what they are doing). I have always taught statistics as applied common sense, and this methodology really makes no sense to me at all.

Not at all. You understand correctly. Model results trump all. So, based on those, we have to generate a storyline that fits the model results, irrespective of what the data (if there are any) that show otherwise.

Right. And models can’t get both temperature and rainfall correct; tune to one and the other deviates, all the while Carl Wunsch observes that the ocean models in AOGCMs don’t converge. But why worry about GCM non-convergence when the results ‘look reasonable’?

Meanwhile the climate sages gravitate on about impending catastrophe and science reporters wring their hands in public.

I see someone posted with my moniker on Wattsupwiththat. That wasn’t me. I’m actually tempted just to post as Micky Corbett from now on, as that’s my name.
With regards to the April Fools, and maybe this paper about PSR, did anyone else just rub their face in tired resignation on reading this? Why add more complication to complication?
Here’s a good April Fool’s joke: ‘we propose a new method. It’s called DIABH, as in Doing It All By Hand’. Apparently it’s very robust.

I’ve posted two longish comments and have had them both crash. Sigh.
.
It is fair to criticize the PR challenge insofar as teams are challenged to match their so-called proxies to something that could be mostly noise. Yes, that objective seems a bit weird on the surface. But the part that is encouraging is that they are using open methodologies that will be directly comparable because the target is known. What will come of this is a more robust set of criteria on which to judge reconstruction skill. To suppose that the goal here is to “reconstruct” GCM runs would be to miss the point. No team is interested in emulating noise. But the community needs transparent methods and comparative analyses. Hence the challenge to the community.
.
Moreover, note the double-blind structure of the experiment: the guys doing the GCM runs are completely independent of the teams doing the “recons”. There is no room for cherry-picking or handwaving when it comes to matching the GCM runs. Unlike past climate the GCM runs are known.
.
Sure, a bunch of warmie dumb-dumbs are going to take these “recons” and suggest they are credible when of course they are not. But that hasn’t happened yet.
.
Looking forward to Steve M’s new thread on the matter.

Y’know how we’ve seen split-second timing from time to time. I’m wondering if there’s another curious example here. Here’s what I’m wondering.

I corresponded with Bruce Bauer of NOAA WDCP (who does a fine job and whose efforts I support at every opportunity) about the Esper data set. On April 7 at 7 pm, he notified me that he had received the Esper data set, but that it wasn’t publicly posted yet. I think that I checked on April 8 and it wasn’t there, but I’m only about 75% sure of this. On April 10, while I was away for a long weekend, he emailed me to say that the data (such as it was, i.e., in my terms, the least possible archive) was online. When I checked this morning, the directory shows an online date of April 7. So it looks like there is a bit of a lag time between the directory date and the date of actual availability to the public.

When I wrote on Ammann’s April Fool’s joke on April 2, I wasn’t aware of the April 1-dated first instalment of the PR Challenge at NOAA (still unreferenced BTW at the PR Challenge website). As it happens, the actual contribution is itself an April Fool’s Joke, but, had it been more substantial, I suspect that I might have heard some blowback on being unaware of the April 1-dated NOAA data set.

Based on the evidence from the Trouet data set, it looks like an April 1-dated directory may not have been publicly available on April 2, not that much turns on this other than my own sense of personal diligence.

One Trackback

[…] 2, 2009 with an SI that did not include any data. Discussion began here on April 3 in the comments here and later that day I posted on it here observing: Unfortunately, the authors failed to provide any […]