The Kaufman Backstory

The backstory to the development of the Kaufman et al 2009 reconstruction is pretty interesting. A few years ago (after the MM criticisms of paleoclimate reconstructions), the US National Science Foundation sponsored the sampling of 30 Arctic lakes in a standardized way. It’s remarkable to compare the original population to the data sets used in Kaufman et al.

The objectives of the original NSF sampling program are described here as follows:

Fourteen collaborative PIs will generate standardized, high-resolution (annual and decadal) proxy climate records from 30 lakes across the North American Arctic. The four-year project (2005-2008) is funded by NSF’s Arctic System Science Program.

The methods of the sampling program are described here in further detail. This sets out a standardized program that meets the sort of standards advocated at Climate Audit – the same three measurements are going to be made on 30 lakes. On paper, this is excellent:

The last 2000 years of sediment for each lake will be analyzed to extract quantitative estimates of past summer temperature and other climate parameters. We will rely on three established proxies widely used in similar reconstructions of Holocene climatic change for quantitative estimates of climate: chironomid assemblages, oxygen isotopes, and lamination characteristics. In addition, as part of our multi-proxy approach, we will analyze other biological, geochemical, or sedimentological paleoenvironmental indicators from each lake. These “secondary proxies”, including diatom and pollen assemblages, or biogenic silica and organic carbon contents, often carry important signals of ecological and geomorphological processes that provide context, crosschecks, and comparisons for the primary proxies. In addition to varve counts where possible, age models for each sediment core will be based on 137Cs, 210Pb and 14C measurements.

Minutes of three project meetings are online and make an interesting read. Most or all of the project PIs and associates attended each meeting. The first meeting, in Tucson in May 2006, reviews the status of all the sites. A couple of replacements are noted: Hudson Lake for Twin Lake and Hallett Lake for Greyling Lake. The latter substitution is explained as follows:

Laminations are visible on core face, but thin sections show diffuse layering unsuited for further analyses. Efforts will shift to Hallet Lake this summer (2006)

The next meeting of PIs took place in Iceland in May 2007. These minutes record some caveats:

We need to be very careful that all data included in the synthesis are publicly available, and preferably peer-reviewed. We will be SCRUTINIZED. Ideally as many *published* records as possible… We may be a lightning rod – and therefore need to be extremely careful to document our decisions and be ready to publicly defend them.

The idea of a multiproxy reconstruction using data generated outside the NSF program is considered. One PI asked a sensible question (an idea discarded in Kaufman et al, as noted below):

But shouldn’t we aim to do a synthesis that is only lake seds (at least as first step)?

Bradley of MBH attended the meeting and, in case other PIs had not noticed, reported (presaging later decisions):

The third meeting was at AGU in December 2007. By this time, the roster of proxies under discussion had departed fairly considerably from the original NSF30.

At the San Francisco meeting, a special edition of J of Paleolimnology was contemplated for the sites in the NSF program. This edition reported on 14 sites, listed here. Of the 14 sites, only 6 (!) came from the original NSF network of 30 sites (one of which was Hallett Lake, substituted for Greyling Lake).

The standardized program described in the program prospectus (chironomids, O18 and laminations) was completely abandoned. Not a single site has a complete archive according to the program description. Despite all the opening talk of a complete archive, only 10 of 30 sites have any sort of NCDC archive at present – the six J of Paleolimnology sites plus 4 others. In my quick survey, I noticed only one of the 30 NSF sites where O18 isotopes are archived (Squanga Lake – a site neither used in Kaufman 2009 nor reported in the J of Paleolimnology edition). Only a couple of the NSF30 sites have chironomid temperatures, and neither of these appears to have been used in Kaufman. Only 5 of the NSF30 sites have archived varve thickness – three of these are used in Kaufman, including the series said by Bradley to have a HS shape. One of these series is Kaufman #1, one of only 4 series that contribute to the final HS shape. Three of the NSF30 sites have archived BSi (biogenic silica), including Kaufman #2 (Blue Lake), one of the 4 series contributing to a HS shape.

Kaufman uses only 6 of the NSF30 sites (5 of the 6 are archived; Bradley’s C2 remains unarchived).

While the original NSF program had objectives that met CA scruples (uniform sampling methodology and archiving of all data prior to publication), Kaufman et al 2009 abandoned these objectives for reasons that are not discussed in the article.

Instead of standardized sampling procedures for all 30 sites, each archived site has an ad hoc pattern of test results – some report varve thicknesses, some BSi, a couple chironomids, one delO18, but none carry out the program set out in the original NSF description. Instead of a complete archive, 20 of 30 sites remain without any archived data whatever.

And Kaufman et al did not calculate some sort of index based on the NSF30 sites. Instead of compiling and reporting the NSF30, Kaufman selected only 6 of the NSF30 sites (including the series said by Bradley to have a HS shape) and added 17 sites from outside the NSF program – including, of course, Briffa’s Yamal tree ring site, one with a known HS shape and the strongest contributor to the Kaufman HS.

Kaufman stated in Iceland that they “need to be extremely careful to document our decisions and be ready to publicly defend them”. It would be nice if they did so.

UPDATE:

Darrell Kaufman responded to my email inviting his participation here and offering him a password to create his own account:

I did log onto the Climateaudit website about a week ago. I have no desire to engage in vicious commentary. If you would like to discuss the study professionally and courteously, then I would be happy to talk with you. I am at: 928-523-xxxx

196 Comments

A few years ago (after the MM criticisms of paleoclimate reconstructions), the US National Science Foundation sponsored the sampling of 30 Arctic lakes in a standardized way. It’s remarkable to compare the original population to the data sets used in Kaufman et al.

What part of the Kaufman et al project was paid by NSF?
If only the 30 sites were used, what shape would the graph have?

Maybe it’s just me, but the minutes seem to show a group who want to put the cart before the horse – getting into the analysis and losing focus on data collection. Very little in the minutes shows a process orientation toward data collection and documentation. Here is a note in the ’07 minutes of Caspar Ammann pushing to start modelling the data.

Caspar suggested that it is not too soon to get started with some model-data experiments, even with preliminary time series. To begin, we can try model-data comparisons of our paleodata reconstructions vs existing model reconstructions since 1750 (see “Action Item” above). We can also aim to compare spatial patterns and amplitudes of reconstructed vs modeled climate change in recent times as a first test.

How much more expensive would it be to perform the three sets of analysis on all of the cores? I get the feeling they simply gave up on their original methodology rather than hiding non-responsive data.

Did even the sampling work get completed? Reading this post leaves me unsure of even that.

Wiser scientists than I have commented that there is value in reporting that data are without character or devoid of useful information. This assists in preventing later people from doing the negative work over again, for no great purpose. So, I look forward to a note from these authors that says why the original plan was not completed and what criteria were used to drop data. (Not much sense in re-inventing the level temperature).

Try as I may, Geoff, I cannot see how a determination of “level temperature” qualifies as “data without character or devoid of useful information”. It is temperature, through proxies, that they are attempting to measure.

Due diligence should require that changes to science programs financed through public funds be justified in some manner. Is there any record of program changes being properly authorised? Or did the proponents just take the money and run?

The IPCC process is broken. It would be interesting to have a management consulting company analyze the IPCC process together with its supporting agencies such as the NSF. I suspect that it would find what it finds in many dysfunctional companies. That is, an entrenched middle management (the professors) who are unwilling to change despite the manifest failure of their company (the IPCC). Change is impossible since so many of the stakeholders depend on the status quo. To me, this is the essence of the Wegman report.

The fostering of innovation in established organizations is a common business research topic. It is very difficult. What the research states is that the innovation must receive strong protection from upper management. The entrenched interests will always find reasons that the innovative method, product, etc. is impractical, unneeded, and so on. Common recommendations are that the innovation should be isolated within a specific group that can be protected by a powerful member of upper management. This could be in a separate company situated a long distance from the existing organization. When the time is right, the existing middle management can be replaced by a new one more attuned to the innovative technology.

The nub of this is that an existing organization cannot be changed even if its long-term survival depends on making a change. It can only be replaced.

My organization had supplied part of the money for some landscape ecology studies. We were not in charge but did organize the investigators for the 3 sites. They agreed to use standard methods in the field. The bird people followed standard methods and we were able to use their data for a spatial analysis. The herpetologists each followed unique sample designs (in one case this consisted of driving the roads looking for snakes) and we could not use their data for cross-site comparisons. Herp data from only one of the 3 sites could be used at all. We didn’t even have big bucks but did get consistency. No data were discarded.

On another point, if you accept money from a granter with a promise of what you are going to do, you should do it. It is that simple. Rarely there will be extenuating circumstances, but not 30 such circumstances.

Rarely there will be extenuating circumstances, but not 30 such circumstances.

I don’t think anyone expects that the original plan for sample collection would come off without a hitch, but each PI should report on the process and the circumstances encountered. Such reports should be subject to peer review, with the PIs scrutinized for their reasons for not following through. If the original plan is not workable, no such project should be allowed to continue to suck up funds without a transparent process to determine an acceptable alternative.

Is there no SOW in these grants? As bender notes, one would also have to do progress reports, but progress reports really only make sense in the context of a Statement of Work. I’m sure everyone here who has served as a PI knows the drill. There is a project proposal. You agree on a statement of work. You work to that plan. You report to that plan. Deviations from that plan happen all the time, but they are documented and justified. I’ve gotten funding to do X and then had a brilliant idea (“do Y instead, Steve”) stuffed down my throat; I’ve gotten funding to do X and then found out halfway through that doing X wasn’t such a good idea. In all these cases, documenting the changes is required. Without access to the SOWs (if they exist), the progress reports, the schedules, etc., it’s hard to make a blanket condemnation. On its face, however, this doesn’t look too good.

I wonder if an enterprising journalist (I hope that’s not an oxymoron) will endeavor to ask each of the participants, “Is this the way you do science?”. If yes, the answer is instructive. If no, the logical follow-up is to ask for particulars as to how this falls short of quality work. Interesting possibilities.

Either way, I think this study certainly shouldn’t inspire much confidence in any of the work produced by the people involved.

I find the notes fairly forthright in telling the status of the sample processing (lagging) and the eagerness to get to modeling (advertising for a post-doc modeler and worrying about computer time costs, as examples). What I don’t see is strong leadership of the project that calls out people who are not adhering to the original goal of getting the data archived. Maybe it was there and just not noted, but a good leader would have listed deadlines for each lake to be completed and, if not met, gotten the reason why down in the notes. With so many PIs, it’s essential that there be either a nag-in-chief or group-shaming to keep the whole thing rolling to conclusion. They expected this to be high profile, so it’s not like they can excuse it as a nice try that just didn’t work out. These collaborative things aren’t easy, for sure; however, everybody knows how committees need to be guided or they will wander off target.

Though most of your messages may get snipped, I think the general point is clear. This project needs to have light shined on it. It looks like it could be very important in terms of getting the attention of the general public.

It wasn’t obvious from a very fast examination, but:
1. Just where did the funding come from?
2. Was a budget drawn up? Was it sufficient to complete the project?
3. Were the funds for the project completely disbursed?

Yet again, reading this post (and its comments) renews my appreciation of Mr. McIntyre and this site – in my opinion, extremely valuable. Doing the work to shed some light. It’s why I continue to put my pittance into the tip jar.

Given the stated commitment of Kaufman et al in the minutes of one of the meetings to “publicly defending” their work product, it is disappointing that none of the Kaufman PIs has shown up to explain things in what objectively is one of the primary venues for public discussion of these sorts of studies. I’ll send an invitation to Kaufman.

What has happened here is no different from what happens in a commercial manufacturing enterprise when the total work scope of a product development R&D project exceeds what the project schedule and budget can properly support.

Just as happens in the commercial world when funding and resources for product R&D are tight, short cuts are taken to deliver the final product to the customer on time and on budget.

But if the customer still buys the product in spite of the known short cuts which were taken in its development, and if the customer still finds it acceptable for its intended use — then all is well as far as the product manufacturing enterprise is concerned, and there is no compelling need to change the organization’s product R&D methods or to improve the overall manufacturing process as a whole.

This particular analysis product fits the needs of its immediate customers, as those customers define their needs — i.e., a written report which has the external look and feel, if not the substance, of an objective scientific study. That’s all that matters to the customers; and as far as they are concerned, they got their money’s worth out of it.

Re: Scott Brim (#28), No one at NSF checks that your work meets what you promised. You get the money up front. Down the road if you are blatantly not delivering anything you endanger future funding because they do ask on proposals about your past grants and what you delivered, but the definition of “delivered” is pretty generous. Long-term projects can get audited (I have done one for NSF and one for DOE) but again if it looks like work is being done no one lines up promised deliverables vs actual delivereds (is that a word?).

Re: Craig Loehle (#31),
Among the stakeholders in any projectized development effort, a chain of customers exists — the internal customer who understands the overall market for the organization’s products, the retailer/distributor/franchiser who either directly specifies the product’s most desirable characteristics to the manufacturer or else who indicates what those characteristics ought to be through his/her choice of product offerings, and the consumer who obtains the product after its initial delivery to the retailer/distributor/franchiser.

For these kinds of studies, the NSF is essentially acting in the role of the product retailer/distributor/franchiser. As long as the customer base for the products they sell remains happy — in this case other government agencies and the commercial interests which benefit from the myriad of decisions those agencies make — then it does not matter all that much to the product retailer if the end-to-end product development process isn’t all that disciplined or all that transparent. If the product sells, nothing else really matters.

Steve: There is no need for this sort of editorializing. The facts already speak loudly and the tendency in such moralizing is to go a bridge too far and to create a reverse reaction among lurkers.

At first glance I interpreted your comment as a little cynical, but reading the project statement again I would say you are astute, as usual, Hu. Still, I can’t bring myself to believe that sample collection and archiving would have been abandoned due to the data not being “unprecedented”.

Re: Layman Lurker (#33),
There need not be any mal-intent. Confirmation bias occurs when more positive results are found than negative and the experiment is ended prematurely. Premature conclusion of a study (defined in this way) is not that unusual. People “promise the moon” in their proposals, but often cannot deliver everything they thought they could. They under-budget, or under-estimate the actual amount of time & effort required to do something, so only half the work gets done. Archiving is simply not as rewarding as publishing. No malice. Just a biased result stemming from a need to prioritize given changing circumstances (e.g. cost overruns, deadlines fast approaching).

My prior objection, here as elsewhere, is with the flaccid maladministration by NSF, which has the obligation and authority to get these guys to archive and report their results properly. Unfortunately NSF has seemingly abandoned its compliance obligations. I’ve usually been more critical of NSF’s negligent failure to require archiving than of the scientists for not archiving.

NSF are not merely passive players. When I was invited to Georgia Tech in 2007, NSF apparently gave the senior Georgia Tech staff (Judy Curry, Kim Cobb) a really hard time for inviting me. Curry is senior enough that she can cope, but it struck me at the time as a highly inappropriate activity on NSF’s part.

Interestingly, Mann’s NSF minder was supposedly seen escorting him away from the NAS panel hearings in March 2006. So they were definitely keeping an eye on things there too.

Dear Dr Kaufman,
A follow-up regarding the information requested below, as I did not receive an acknowledgement of my 2nd request.

As you may know, I’ve been discussing Kaufman et al 2009 at http://www.climateaudit.org and would welcome your participation. In addition, if you wish to post a thread setting out the paper in whatever terms you wish, you are more than welcome. Let me know and I’ll provide you a password.

One of the questions that I’ve been discussing at Climate Audit is the apparent disconnect between the sampling and archiving program originally outlined for the 30 NSF sites (the original program description stated that data would be collected from the sites in a consistent format), the limited archive to date, and what was used in Kaufman et al 2009 (the topic of a current thread). Any light that you can shed on this would be greatly welcomed.

In one of your meeting notes, you mention that you intended to “publicly defend” the findings of these studies. Climate Audit represents a relevant forum for such public discussion and I hope that you can participate.

I did log onto the Climateaudit website about a week ago. I have no desire to engage in vicious commentary. If you would like to discuss the study professionally and courteously, then I would be happy to talk with you. I am at: 928-523-xxxx

As CA readers know, I expect good manners and politeness from commenters and myself and generally delete or snip “vicious commentary” when it comes to my attention. This practice is not universal at climate blogs – Steig, for example, made some rather “vicious” comments about me at realclimate and Tamino’s language is well-known. However, I try to ensure politeness here.

On occasion, visiting climate scientists (e.g. Martin Juckes) have posted “vicious commentary” and I’ve permitted them to post such commentary without allowing readers to respond in kind.

In my opinion, Kaufman’s concerns about “vicious commentary” are entirely baseless and he would be better off “publicly defending” his results as he undertook to do in the minutes of his meetings.

Re: Steve McIntyre (#42), you might be interested in this week’s Nature which has five different articles of various kinds (editorials, news stories, etc.) on the importance of data archiving and data sharing.

Re: Jonathan (#47),
How about the importance of due diligence on the part of granters? And Editors? You can hand-wave all you want about the importance of individuals archiving their data. What specific mechanisms are being put in place at the institutional level to ensure compliance? After all, we already have all kinds of regulations. The problems Steve M encounters are usually due to a lack of enforcement, not regulation.

Re: bender (#51), I merely think that it is interesting that Nature has paid so much attention to these questions over the last few months. In keeping with house rules, I do not speculate on their motives for doing so!

Re: Jonathan (#47),
I will speculate that the motivation has little to do with climate science and a lot to do with genomics and biotechnology – a field where the data actually have some commercial value.

Re: Steve McIntyre (#42), “Mann’s NSF minder was supposedly seen escorting him away from the NAS panel hearings in March 2006.”

Mann’s NSF minder?? I’ve never heard of such a thing! NSF is supposed to be an impartial and merit-based source of funding for the scientific community. Even if they haven’t always met that standard, at least they had that as a standard.

Setting a minder on Mann is a clear signal of institutional bias, in that they are apparently succoring a carrier of their institutional message. NSF is not supposed to have an institutional view point or promote or promulgate any body of results in opposition to another. – snip

I certainly don’t imply mal-intent; however, I have a hard time accepting results like this without “robust” justification. I agree with your comment that the solution is likely institutional mechanisms to ensure major projects like this are not allowed to drift or fall through the cracks.

Re: Hu McCulloch (#30),
I do not think this is a cynical statement. The authors are guilty of confirmation bias. They published a half-cooked result because it supported the foregone conclusion: “it’s worse than we thought”. If they are right, then let them complete the study as proposed and publish the whole data set.

This group is addicted to the fame (Science & Nature publications) and fortune (huge NSF grants) wrought by merely sounding the alarm: “worse than we thought”. They’re plugging into an editorial bias that believes, and fears, this hypothesis. As long as they are being this heavily rewarded for their alarmism, they are not going to do their science any other way. They (or a sub-group among them) simply can’t help themselves. That’s the nature of addiction. Bristlecone crack. Yamal heroin. That’s the stuff.

We don’t have any idea of their budget or their expenditures. I suspect earlier posters are correct that the project lacked strong leadership and got off track as competing interests asserted themselves… another classic cause of project failure.

Acknowledged. However, on another topic, I would like to inquire as to your informed opinion concerning the following question: Does Mannian Analysis impose a hockey stick shape on a data series through direct manipulation of the indicated trend, or does it simply extract a pre-existing HS shape from some subset of the input data and then make it the dominating influence on the overall final trend? This question is offered in the context of yet another question: what is the basic definition of the term “climate signal”?

Steve: Please look at our original articles. You are using words that we didn’t use and apparently trying to impose your own perspective on what we said. In a very high proportion of cases, Mannian PC methodology will generate HS shaped composites from red noise (red-ness is relevant here). If the population contains pre-existing HS shapes, Mannian PC methods will extract the pre-existing HS shapes and somewhat accentuate them. If the population contains pre-existing HS shapes, one can usually find ways to “get” HS shaped composites without using Mannian PCs, e.g. cherrypicking HS shaped series and blending them with other series that lack any pattern. On many occasions, I’ve objected to the metaphor of “signal” and “noise”. Too often, smoothed time series are reified as “signal”. You’ll have to ask some other place for a definition.

Steve: Please look at our original articles. You are using words that we didn’t use and apparently trying to impose your own perspective on what we said.

What I’m doing here is raising questions that have two mutually reinforcing objectives: (1) Ask the kinds of questions that would naturally arise in any auditor’s mind after reading the mass of information and criticism that has been generated about Mannian Analysis; and (2) to frame those questions in non-jargon terminology which the layman can readily understand.

Steve: In a very high proportion of cases, Mannian PC methodology will generate HS shaped composites from red noise (red-ness is relevant here). If the population contains pre-existing HS shapes, Mannian PC methods will extract the pre-existing HS shapes and somewhat accentuate them. If the population contains pre-existing HS shapes, one can usually find ways to “get” HS shaped composites without using Mannian PCs e.g. cherrypicking HS shaped series and blending them with other series that lack any pattern.

Since I have your attention on this topic, may I ask you to expound further upon the subject of red noise and “redness” as it applies to Mannian Analysis.

Steve: On many occasions, I’ve objected to the metaphor of “signal” and “noise”. Too often, smoothed time series are reified as “signal”. You’ll have to ask some other place for a definition.

If one follows the hockey stick debate in particular, and the AGW debates in general, the notion of “climate signal” is central to every facet of these debates. And yet as far as I can determine, there is no standardized, documented, and formally recognized definition as to what physically a “climate signal” represents, or as to how it should be characterized both qualitatively and quantitatively. Without such a commonly recognized definition, I do not see how it is possible to properly perform climate science studies to any rationally objective QA/QC standard, or else to fully audit those climate science studies to any rationally objective auditing standard.

Steve: Sorry, I don’t have time to discuss this right now. It was discussed at length on many threads. You can consult general literature re “red noise”. The points in our articles are not sensitive to this issue. As observed above, I haven’t encouraged use of the signal-noise metaphor; I don’t know who you’re debating with in respect to this issue. I’d suggest that you seek a definition in another forum.

Any ideas how much time/money it would cost to process the data according to the original standards? I assume it would take too long to get NSF money, though it would be an interesting rejection letter if they insisted that Kaufman meet the earlier standards, given what appears to be an opportunistic choice of proxy data.

Depending on how you read this statement, it is quite an assertion. One would assume their role is purely administrative, that they take no position whatsoever on scientific matters. Evidence to the contrary would be enlightening.

Steve: Please don’t ratchet my statement up past what I said. In the incident in question, they objected to Georgia Tech inviting me to make a presentation. Can one extrapolate past that to the conclusion that they have a “position on scientific matters”? I don’t see any point debating this. Their actions at Georgia Tech were not “merely passive”. Let’s leave it at that.

Isn’t the internet amazing? Such a complex web of inter-relationships, formerly hidden from the public, now increasingly exposed to the light of day. It’s like paleontology. Circumstances expose a little bit of an anomaly, then you start digging …

Lots of smart people just aren’t getting it. They’re operating like it’s still the 20th c.

“In one of your meeting notes, you mention that you intended to “publicly defend” the findings of these studies. Climate Audit represents a relevant forum for such public discussion and I hope that you can participate.”

Yes, but I seriously doubt that he ever meant he would come here. More likely RC or the public media.

The abstract of Kaufman’s grant for this study is here (most recent amendment Aug 2005) – related grants to the other PIs have identical language. The terms clearly state that “the project facilitates integration of results by standardizing the methodologies” – an objective that I, for one, endorse and one that should have been insisted on:

Recent results have demonstrated that proxy climate records from the Arctic preserve a signature of summer temperature that is related to both global mean warming and the Arctic Oscillation. This conclusion was based on a synthesis limited to just the last 600 yr. Available records that extend beyond the Little Ice Age to previous warm intervals are currently too few to capture modes of variability with adequate certainty. The longer-period evolution of these modes is identifiable in decadally resolved proxy records, and should be preserved in longer records of annual to multi-decadal resolution.

This synthesis of annual to inter-decadal climatic variability will extend through the key warm interval of Medieval time. The certainty of the climate reconstruction will be significantly improved by nearly tripling the number of high-resolution, 2000-yr-long, proxy climate records currently available in the Arctic. The project facilitates integration of results by standardizing the methodologies and by holding workshops for vested collaborators and their students. This tightly focused synthetic study of the Arctic system will be integrated directly into a climate-modeling effort to explore the role of volcanism and solar irradiance fluctuations versus internal adjustments of the climate system and its inherent modes of variability to explain observed patterns in the proxy climate data.

Across the Arctic, lacustrine archives contain the most accessible and widely distributed proxy records for the past 2000 yr. This work focuses on 30 of the PIs’ highest-priority, most-promising lakes, nearly all of which have been cored previously. The network of sites includes two regional foci (Alaska and the NW North Atlantic) that generally encompass the nodes of the surface temperature expression for the Arctic Oscillation, thereby facilitating the reconstruction of this mode of variability. Half of the lakes contain laminated sediment with potential for annually resolved records; others have high sedimentation rates for sample resolution of 5 to 30 yr. The proxy data from most lakes can be compared with nearby tree-ring records or instrumental data, or can be applied to transfer functions to yield quantitative estimates of temperature or other climatic variables. Analyses at low-resolution have already begun on most of the cores as part of on-going research. Preliminary data from these lakes indicate their high potential for climate reconstructions.

I assume you people have considered that the Team approach to dealing with CA is to read but not post to the threads. Witness Steig’s behavior of claiming that he independently recognized his error on PC analysis and independently submitted the corrigendum to Nature. He could of course be lurking and reading, while CA identifies the errors for him and as long as he is quiet there is no proof that he derived his ideas from this site. Likewise does anyone really think Kaufman is not reading CA? Call this the “Possum Defense” – roll up into a ball until the attacks dissipate.

Of course the Team read here, it’s the only place they get any real analysis of their work. They are scientists after all, and despite their ‘public face’ they know that there are ‘issues’ with what they do and as scientists they appreciate criticism. Don’t they?

Re: Dave Andrews (#61), I disagree with this smug statement, which I don’t think Steve would endorse. There is much more to research and publication than just peer review, and there is much more feedback to published articles than what appears on this blog. I’m not understating the impact that Steve has achieved here; just don’t overstate it either, please.

It would be good to find out why they dropped the ball on the initial fine format for this research and why this was not followed up. The less speculation about motives, the better.

I can see why Kaufman would be concerned about some “vicious” questions if he elected to openly answer the questions. CA has some very knowledgeable folks who would take delight in posing the tough questions. However, I do believe that most would act in a professional and courteous manner. You get to hold the scissors, so you would be able to control the situation. You do some pretty heavy cutting from time to time.

Steve please CALL HIM. Even a brief explanation of why they chose the proxies they used and why the study appears to diverge from original mission would be very interesting and helpful. I wish he’d check in but one cannot blame Kaufman for choosing not to participate in CA’s often hostile blog environment. -snip – That’s hardly the same as the types of discussion one has in the usual academic environment.

I remain puzzled why Kaufman would regard my commentary as “vicious”. While I occasionally get annoyed, generally I try to be even-tempered and to avoid editorializing. I’ve re-read the posts in question and see no reason for Kaufman’s characterization of the commentary as “vicious”.

Steve I’d guess Kaufman’s concerns are more with the *average tone* of most of the comments in this thread rather than your own observations, though I’m sure he’s not pleased with those either. However it’s hardly reasonable to reject the call offer and then keep bashing his work here.

Bender you *cherry picked* a few threads from the past to make the case that people can be respectful here and I believe you that’s sometimes true, but the normal comment tone at these past few threads (i.e. the ones Kaufman is reading) would be considered very disrespectful by most academics as well as most normal people. I don’t think it’s as bad here as RealClimate or the ridiculous ClimateProgress, but rightly or wrongly few climate academics will swim in these waters.

Joe, when people show up to defend their work, even the worst of us will give a visitor a civil reception.
Grinsted, Webster, Curry, Scafetta: it’s not cherry-picking to point out the threads where people actually showed up.
There is no point in comparing Steve’s blog to Tamino’s, where Lucia got banned for asking questions politely. No point in comparing Steve to Hansen and RC, where his name cannot even be mentioned. So I won’t point that out. Even if Kaufman got some rough treatment, so what? He can pull on his big-boy pants and just ignore the rotten comments and answer the polite ones.
Look at what Leif Svalgaard puts up with over at WUWT. Sticks and stones, as the old saying goes.

Re: Joe Hunkins (#76),
A brief explanation as to why the threads are uneven in tone, why the auditors are pleasant in the company of the authors, and sometimes unpleasant otherwise. Have you ever tried to audit one of these papers? The data are frequently unarchived. The code is typically unavailable, sometimes even considered “proprietary” (!). Nonsense. Nothing is more frustrating than not having the materials to do your job. Tracking down data and reverse engineering code is hard, unpleasant work. None of this would be necessary if people complied with current obligations as specified by granting agencies and journals. So next time an auditor gets nasty, try to look at things from his perspective. Science, above all, must be replicable. The very idea of “proprietary” code is absurd. If the analytical code is directly affecting public policy, then it must be revealed. This is a fairly fundamental democratic principle. People have a fundamental right to know why a policy that will affect them is leaning a certain way.

Very reasonable concerns. In fact I think it would be interesting to require recipients of NSF grants to address questions about their work online in an open format. I’m also disappointed that Kaufman has not at least penned and posted an answer to the key concern expressed here: why did you choose these proxies?

Steve I have a lot of respect for your work when it takes an academic tone but I do not understand why you maintain that the overall tone here at CA is generally very respectful of the scientists in question. It’s not. You can make a fine case that sloppy science deserves to get bashed, but you can’t make a case that people getting bashed are likely to keep participating amicably. Most will simply not participate.

And what does that tell you? What kind of scientist is afraid of scrutiny? I thought scientists were supposed to be attracted to dissections of their work?
.
I do understand the fear of being hacked by inexpert analysis. I really do. But assurances were given to Dr. Kaufman that that would not happen. He would be protected by an uninterrupted omnibus thread with Steve M. And Steve M has proven again and again: he is no hack. He is every bit as qualified as any team member to comment on multiproxy climate reconstructions. In fact I would trust his skills as a reviewer far more than whoever it was that allowed Kaufman et al 2009 (and other fatally flawed team work) to slip through the cracks. In fact it is a total mystery to me why he is not invited to pre-review manuscripts before they hit the formal review process. [And maybe he is?] I guess there are a number of paleoclimatologist types that are just afraid of scrutiny … for whatever reason.
.
My guess is that, like Juckes, they have a hero complex. They’re smart. They’re good. They want to save the planet and prove the hypothesis: “OMG, it’s worse than we thought”. They’re charismatic. They like groupwork. They wade into a field they actually don’t know a lot about, surround themselves with “experts”, and upon publishing suddenly realize they’d gotten in over their heads. They’re sitting on a house of cards because not once did they think to question the data being fed to them. Because they live in a friendly collegial “echo-chamber” where groupthink is encouraged for the sake of making forward progress. Deadlines. Milestones. Too busy to “smell the flowers”. These people are “productive”. They just don’t know what they’re producing.
.
WMD anyone?

Joe, seriously, these are grown adults you are talking about. This is the real world, it’s confronting, challenging and sometimes daunting, and that’s excluding the website known as Climate Audit. Yet you make concessions for those who are justifiably held to account. Even if C.A was ‘vicious’ (and it isn’t), surely it is just another challenging day in the world of science.

The absence of comment and unwillingness to co-operate is rather incriminating don’t you think?

P.S. Sorry if this comment pushes the boundaries but we are all grown ups here.

Bashing? That is not what this is about. It is about piecing together what was done. In part so that others can be guided by the knowledge obtained from sensitivity testing. (That’s precisely the process that led Loehle & McCulloch to publish their own climate recon, in case you don’t recall.)

Steve I have a lot of respect for your work when it takes an academic tone but I do not understand why you maintain that the overall tone here at CA is generally very respectful of the scientists in question. It’s not. You can make a fine case that sloppy science deserves to get bashed, but you can’t make a case that people getting bashed are likely to keep participating amicably. Most will simply not participate.

I think the most disturbing phenomenon is the expectation, nay demand, that a peer-reviewed publication be assumed to be true unless the authors themselves admit that they have erred. I don’t know that Kaufman2009 is sloppy, because I don’t know what actual duty of care they expended to produce it.

I do know that the statistical model is known to produce “highly significant” results from red noise and the proxy selection is far from randomized and far from calibrated to the actual thing supposedly reconstructed (annual mean temperature). Under such circumstances, very experienced statisticians have repeatedly warned against placing much weight on the announced results – and there are plenty of spurious results in the literature.
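
To see why experienced statisticians worry, here is a toy sketch (my own construction in Python, not the actual Kaufman et al procedure, and the screening threshold is purely illustrative): generate AR(1) “red noise” series containing no climate signal at all, then screen them by correlation against a linear trend.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.9):
    """AR(1) "red noise": each value is phi times the previous plus white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_years = 100
target = np.linspace(0.0, 1.0, n_years)  # stand-in "instrumental trend"

# Screen 1000 pure-noise series: keep any that correlate with the target.
kept = sum(
    abs(np.corrcoef(ar1_series(n_years), target)[0, 1]) > 0.5
    for _ in range(1000)
)
print(f"{kept} of 1000 signal-free series pass an |r| > 0.5 screen")
```

With persistent noise, the effective sample size is far smaller than the nominal 100 points, so large correlations turn up by chance much more often than a naive significance test would suggest.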

I do know that rather than defend some rubbery assumptions, they make excuses about the quality of the commenting on the weblog, as if they’d never come across such effrontery in their sheltered monastic lives. Science in lots of cases is “bashed” around by scientists all of the time, with enough barely concealed contempt for each other’s ability as would make most commenters here shake their heads in dismay.

I do not understand why you maintain that the overall tone here at CA is generally very respectful of the scientists in question. It’s not. You can make a fine case that sloppy science deserves to get bashed, but you can’t make a case that people getting bashed are likely to keep participating amicably. Most will simply not participate.

Joe, I try to choose words carefully. I said that, in my opinion, my commentary was “polite” – which I think it is. I wouldn’t be embarrassed to have my mother or sisters or children read my commentary. One can make pointed and critical comments while still being polite. I play squash very competitively but am unfailingly polite during and after games – which includes calling points against yourself.

I didn’t say that I was “very respectful” of the scientists in question – I don’t wash their feet nor do I tug my forelock in their presence nor do I throw rose petals in their path. Nor do I assume that their statistical methods are correct merely because THEY used them. In many cases, I’ve been highly critical of their methodologies but generally have done so politely.

If the scientists in question do not wish to confront such criticism, then that’s up to them and readers will draw their own conclusions. But I do not accept the excuse that the commentary here is “vicious” without Kaufman being able or willing to provide a single example to back up his statement. I re-read my threads on Kaufman and the comments appeared to me to be factual and, to this point, the correctness of the points has not been disputed either by Kaufman or any readers.

I’d guess Kaufman’s concerns are more with the *average tone* of most of the comments in this thread rather than your own observations, though I’m sure he’s not pleased with those either.

While I am occasionally provoked, I try to be polite and I ask readers to be polite and, for the most part, they comply.

From time to time, critics complain about the tone of posts here. When such requests are made, I ask the critic in question to identify any offending comments. In most cases, they are unable to; on a few occasions, they can point out a few “piling on” comments of the type that I discourage, but seldom more than a few. (Memo to those who regularly write “piling on” comments: one of the reasons that I dislike such comments and regularly delete them is that such comments deter readership by third parties without adding editorial substance to the thread.)

I’ve written to Kaufman and asked him to identify any comments that offend against blog policies of politeness – I’ve undertaken to edit or delete any such comments. Until I can pin down exactly what his objections and concerns are, it’s pretty hard to have a constructive conversation.

A quick observation from a long time lurker: It seems that the conversation can get somewhat caustic about climate scientists who are not directly participating in the blog (although they may of course be lurking). However when such people engage directly the tone of the exchange is much more polite. I suspect that Dr Kaufman would be treated with respect, although his arguments might be subject to considerable criticism.

The non-archiving of data should really not be an issue. Just don’t send the check for the final half of the grant money until it is done. It needs to be a specific deliverable, and if it isn’t done in a reasonable time, the grant money should be returned by force of law for non-fulfillment of contract.

Re: larryt (#81),
But how come you won’t find such simple mechanisms offered in the recent issue of Nature? Instead you get the hand-wringing over the need to foster a new, co-operative culture, the need for bigger & better repositories. No. It is as simple as larryt suggests.

Re: bender (#82), Once again, the difference in culture between scientists and engineers shows up. Withholding the final payment until the punchlist is completed is pretty standard practice in E&C. It would be unusual, to the point of negligence, to make the final payment on a construction project without a final walkthrough, meeting, and signoff.

Re: larryt (#81), For most genomics-related journals the raw data and descriptions of the data processing must be uploaded with the submission (e.g. http://www.ncbi.nlm.nih.gov/geo/). This is so that the reviewers may independently analyze the data. This is not a requirement for publication, but a requirement for submission.

The same requirement should apply to other sciences, including climate sciences. Although, if that happened we might never see another publication from the Team in our lifetimes.

As an aside, one should not even bother writing a grant to NIH without a data sharing plan.

This behavior by the “Hockey Team” has continued for close to a decade, try as Steve might to get them to follow established scientific practices with transparency and free flow of data and methods in order to enable a reasonable peer review of published papers. This special interest blog is great but until the public at large becomes aware of what’s been going on, we are going to just be spinning our wheels.

– snip : editorializing

Yeah I know snip snip snip, but it had to be said or we’re going to be here a decade from now with absolutely no change in how these “studies” are conducted.

The “average tone” of any forum of skeptics is likely to sound unpleasant to any entrenched believer habituated to life in a happy huggy groupthink echo chamber.
.
Think Mark T is being harsh with the phrase “echo chamber”? Let’s see. How many of the numerous authors of Kaufman et al. have ever scrutinized GCM code line by line, wondering to what extent the expected rise in GMT comes from tuned parameters? How many even realize that the GCMs fail to correctly simulate Earth’s mean temperature? Let’s start there.

Steve, you should take Kaufman’s offer to talk over the phone. Ask him the questions we all want answers to, then report back here. He’s offered to talk- it doesn’t matter if he does it here or via phone. It’s the answers that matter.

Steve: I have some experience in negotiations. Generally it’s a good idea to get irrelevant issues out of the way. Kaufman has complained about “vicious commentary” and I think that it would be helpful to understand exactly what he objects to before calling. Also I’ve encountered situations (e.g. with the GRL editor) where they insisted on being off the record. In this case, I don’t have a whole lot of interest in off the record chit-chat, so that would need to be clarified as well.

Re: David Cauthen (#88),
Nonsense. Steve can set up an omnibus thread where interruptions will simply not be permitted.
.
As this “backstory” thread proves, it’s not just “the answers” that matter. Because for some reason every answer in climate science seems to lead to another question. And the more minds you have viewing a conversation the more questions come to mind. [That’s the beauty of the blogosphere, in case you hadn’t noticed.]

As this “backstory” thread proves, it’s not just “the answers” that matter. Because for some reason every answer in climate science seems to lead to another question. And the more minds you have viewing a conversation the more questions come to mind. [That’s the beauty of the blogosphere, in case you hadn’t noticed.]

Kaufman has made it clear he has no intention of engaging us here in our little corner of the blogosphere ( the beauty of which I certainly appreciate- I don’t stop by everyday for the food). But he did offer conversation by phone which I encourage Steve to accept because it so rarely happens AFAIK. Who knows, maybe Steve will convince him to stop by CA for some civil blogging. It ain’t gonna happen without the call though. It’s certainly not gonna happen because some cat named ‘bender’ says he should.

Kaufman has made it clear he has no intention of engaging us here in our little corner of the blogosphere ( the beauty of which I certainly appreciate- I don’t stop by everyday for the food). But he did offer conversation by phone which I encourage Steve to accept because it so rarely happens AFAIK. Who knows, maybe Steve will convince him to stop by CA for some civil blogging. It ain’t gonna happen without the call though. It’s certainly not gonna happen because some cat named ‘bender’ says he should

Not to bicker. But first, Kaufman did not say he would not engage anyone here. Second, it’s not the cat named bender that should compel him to comment here, but an appetite within the community at large for answers that should have been available at the time the paper was published. Third, phone calls “off the record” are not only a bad idea, they defeat the principle of openness advocated by CA, NAS, Science, Nature, NSF …

1. What conventions did you use for interpreting varve data? How are physical characteristics such as mass/density reconciled with thickness? Were regional or geological (etc) factors accounted for at each site?

2. Please explain and justify the drift away from the original sampling plan as set out in the terms of the original NSF grant. Are there individual reports or documentation from each of the PIs that justify the deviation from the original plan?

RE: larryt #81. NSF grants are gifts, not contracts. The recipient has no contractual obligation to provide anything. One would expect/hope that a lack of product from a grant recipient without a clear valid explanation would result in elimination of consideration for future funding, but this is not guaranteed either, especially in a field awash in $$$.

RE Sonja #92. It looks to me that there has been a confusion of cause/effect.

Steve, can you define “piling on”? Is that a redundant criticism? Does something qualify as “piling on” because someone already said it, or does it have more to do with venting? I don’t understand exactly what you’re driving at.

I’ve got a deep curiosity about how the scientists feel an analysis like this is valid. I really don’t get how it could possibly be temperature, so I’d welcome a reasoned explanation of what gives this method any validity at all. Why are these the good proxies? Perhaps if Steve were to agree to ‘over-moderate’ the thread in question, so that nothing but Q&A were allowed, Dr. Kaufman might reconsider.

My four questions for Kaufman are:
1. Why was a known problematic HS-ness source, Yamal, included in the study instead of the agreed Arctic lake bottom sediment sources?
2. Why was Tiljander data included upside down?
3. Why was Tiljander data used post-1700 when Tiljander stated in her thesis, “Since the early 18th century, the sedimentation has clearly been affected by increased human impact and therefore [is] not useful for paleoclimate research”?
4. Kaufman et al’s conclusions are clearly in conflict with DMI Arctic measured temperature data for the last 50 years. What is the explanation for the conflict?

I think I can understand Dr Kaufman’s position to some extent. If he comes over here he is likely to be bombarded with questions and will be hard pushed to respond to them all simply because of the volume.

I think a closed thread, where only Steve and Dr Kaufman can add comments, would be a good compromise between a phone call and a free-for-all.

I’m quite conscious of the volume problem for a visiting scientist and have previously set up closed threads e.g. for Juckes – a thread in which I asked questions or transferred reader questions that I felt of particular interest. I would be quite prepared to do so in this case as well.

Unfortunately, Juckes ignored the special thread and never dealt with any of my questions. Instead of dealing with me, he picked fights on other threads, typically with the least sophisticated commenters. His comments were bickering and very unhelpful – mostly the sort of drive-by slagging that we’ve seen from time to time from climate scientists, behavior that I don’t tolerate from regular readers.

It sure would be interesting to get Kaufman’s perspective on how the stats show this curve is verified. Perhaps there are a few other verifications or rationales for rejection used in proxy selection we could learn about that would give some comfort about the process. Currently the reasons for choosing one proxy over another are obscure. Perhaps there are a couple of solid favorites, or maybe after a year of this the idealist in me still hopes for too much.

Over the years here I have been entertained, disappointed, outraged, cynical, sarcastic, infuriated, enlightened, usually in some combination. Can’t really say why, but this episode just makes me feel sad. Obviously we are no longer operating in a knowledge-based world, and that reality shift is far more frightening than a degree or two of change in global temps. If this continues, the question will not be “am I paranoid?” but rather “am I paranoid enough?”
………cue Talking Heads, “Life During Wartime”.

I received an extremely unresponsive email from Kaufman. Let me review the correspondence. On Sep 15, 2009, I wrote to him extending a cordial invitation:

Dear Dr Kaufman,
A follow-up for the information requested below as I did not receive an acknowledgement to my 2nd request.

As you may know, I’ve been discussing Kaufman et al 2009 at http://www.climateaudit.org and would welcome your participation. In addition, if you wish to post a thread setting out the paper in whatever terms you wish, you are more than welcome. Let me know and I’ll provide you a password.

One of the questions that I’ve been discussing at Climate Audit is the apparent disconnect between the sampling and archiving program originally outlined for the 30 NSF sites (the original program description stated that data would be collected from the sites in a consistent format), the limited archive to date and what was used in Kaufman et al 2009, the topic of a current thread. Any light that you can shed on this would be greatly welcome.

In one of your meeting notes, you mention that you intended to “publicly defend” the findings of these studies. Climate Audit represents a relevant forum for such public discussion and I hope that you can participate.

Regards, Steve McIntyre

As reported previously, Kaufman replied:

I did log onto the Climateaudit website about a week ago. I have no desire to engage in vicious commentary. If you would like to discuss the study professionally and courteously, then I would be happy to talk with you. I am at: 928-523-xxxx

I responded to this offensive comment about Climate Audit as follows:

I’m reassured to hear that you have no desire to engage in vicious commentary. I expect politeness from myself and my readers and I believe that my comments on Kaufman et al 2009 have been temperately expressed.

If there are any comments at Climate Audit by myself (or others) that you regard as being “vicious” or otherwise offending against the politeness that I expect from readers and myself, I would appreciate it if you would draw them to my attention as “viciousness” is against blog policies. I do not moderate comments in advance but will delete or snip comments that breach blog policies requiring politeness after the fact when I notice them or when they are brought to my attention.

Regards, Steve McIntyre

PS – I re-iterate my request for the data versions referred to in prior emails.

Kaufman’s reply to this email remained unresponsive to my request for data and included another offensive dig at Climate Audit (without justifying his previous offensive comment). And instead of attaching the requested data, he attached pdfs of the underlying studies, most of which I’m already in possession of. Kaufman:

Thank you for your offer to clean up the blog.

All of the records that we used in the study have been published previously. I have compiled for you all of the articles that report the results from each of the records. The batch of articles is too large to attach in one e-mail, so I’ve bundled them into two zip files and will send the first one with this message and the second with the next. All of the analyses in the synthesis are based on 10-year mean values from these records — the data that I have already sent you and that were posted on the NOAA website at the time of publication. If you are interested in other types of data that were not part of the study (annual/seasonal/monthly values, or laboratory standards/blanks, or replicate counts, or….), then please contact the original authors to request their data.

If you would like to discuss any aspects of the study, please call me (928-523-xxx).

I replied as follows:

Dear Dr Kaufman,

As I told you previously, I expect politeness from myself and my readers and have no plans to alter these policies. If there are any comments that, in your opinion, offend against these policies, I would appreciate it if you would draw them to my attention. If you are unable or unwilling to identify such comments, I would appreciate it if you would withdraw your offensive comment that I had engaged in “vicious commentary” at Climate Audit.

In Kaufman et al 2009, you stated: “We compiled available proxy climate records that (i) were located north of 60°N latitude, (ii) extended back at least 1000 years, (iii) were resolved at an annual to decadal level, and (iv) were published with publicly available data”. Given your failure to provide URLs for proxies 6- Lake C2; 19- Lake Nautajarvi; 21- Lake Lehmilampi; 20- Lake Korttajarvi or the annual versions of proxies 7,12,13 and 16, I take it that the claim to have used “publicly available data” is, at least in part, incorrect.

Your email attaching pdfs of various articles is unresponsive to my request, as is your suggestion that I contact the various original authors for the supposedly “publicly available data”.

Steve, this sounds very much like the “Santer” approach: provide summary data that appears to satisfy the “openness” requirements, but withhold the detail that truly allows replication, instead providing references to previous papers and sources. At the same time, try to bait you into behaviour that can be construed by those sympathetic to his cause as vicious or unprofessional.

They’ve come up with a play that works, and until a way is figured around it, I suspect you’ll continue to see stuff from the passive/aggressive Santer playbook.

Obviously taking a decadal average affects the statistical properties of a data set. These varve series seem not only to be highly non-normal, but even the logged values are non-normal. For example, I’ve been looking at the distribution of the raw Loso data and it “looks” to me rather like a Levy distribution or an inverse gamma distribution, either of which raise interesting questions when it comes to correlations and confidence intervals. (While there are a number of reasons to disagree with Loso’s statistical methods, unlike many other authors, Loso has very creditably provided sufficient raw data to consider such statistical questions.)
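
The distributional point is easy to check on synthetic data. A hedged sketch (hypothetical lognormal draws standing in for varve thicknesses; Loso’s actual series is not reproduced here), comparing the skewness of raw, logged, and decadally averaged values:

```python
import numpy as np

rng = np.random.default_rng(42)

def skew(x):
    """Sample skewness: roughly zero for symmetric (e.g. normal) data."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

# Stand-in for varve thicknesses: heavy-tailed positive draws
# (hypothetical lognormal values, not real sediment data).
annual = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

# Decadal means: non-overlapping averages of 10 consecutive "years".
decadal = annual.reshape(-1, 10).mean(axis=1)

print(f"skew: raw={skew(annual):.2f}  "
      f"logged={skew(np.log(annual)):.2f}  "
      f"decadal={skew(decadal):.2f}")
```

The logged values come out roughly symmetric while the raw values are strongly right-skewed, and decadal averaging reduces the skewness without removing it, which is exactly why the choice of transform and averaging matters for correlations and confidence intervals.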

Now consider Kaufman’s comment in the email:

If you are interested in other types of data that were not part of the study (annual … )

In many cases, Kaufman’s decadal series are averages of annual data. Just because Kaufman coerced annual data into decadal versions (paying no heed obviously to Matt Briggs’ admonition not to smooth data) doesn’t mean that the annual data used for the decadal average wasn’t “part of the study”. It was.
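
Briggs’ admonition is easy to illustrate with a toy example (my own construction, not Kaufman’s actual processing): correlate pairs of independent white-noise series before and after a 10-point moving average, and compare the spread of the resulting sample correlations.

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth(x, w=10):
    """Moving average of width w (shortens the series by w - 1 points)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

r_raw, r_smooth = [], []
for _ in range(500):
    a, b = rng.standard_normal((2, 1000))  # independent: true r = 0
    r_raw.append(np.corrcoef(a, b)[0, 1])
    r_smooth.append(np.corrcoef(smooth(a), smooth(b))[0, 1])

print(f"spread of r: raw={np.std(r_raw):.3f}  smoothed={np.std(r_smooth):.3f}")
```

The smoothed pairs are still completely unrelated, but the spread of their sample correlations is several times larger, so an impressive-looking r is much easier to stumble into after smoothing.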

Unless anyone has another suggestion, I will reply and send him the 10-year data (which is already posted at NOAA-Paleoclimate) and explain that they were the basis for all of the calculations. He might want the annual data that the mean values were based on. I suppose we’ll cross that bridge when we get to it.

I’m disappointed. I did not get the sense that anyone wanted to play “gotcha” or be unreasonable with Dr. Kaufman. Steve, I think you run a good ship wrt your blog policies – worthy of much respect – but this does not mean you have to get all touchy-feely with the Team.

Kaufman has demonstrated that using prior commentary about his work as an excuse not to defend it is a standard Team tactic to avoid cross-examination of controversial results. You’d think he’d never stepped into an academic debate the way he behaves. Who else has done this? Mann, Hughes, Jones, Steig…and now Kaufman.

The strong inference I take from this is that his study is indefensible to a knowledgeable group of commentators and he knows it.

The strong inference I take from this is that his study is indefensible to a knowledgeable group of commentators and he knows it.

In my opinion, you’re jumping to an unjustified conclusion based on present evidence. And unfortunately this is the sort of excessive statement that reinforces guys like Kaufman.

I have no reason to believe that Kaufman “knows” that his study is indefensible. On the contrary, my presumption is that he doesn’t “know” this and sincerely believes the opposite. So when he reads this sort of statement, his reaction is “BS” and uses that as an excuse to reject any valid criticisms.

I remind readers time and time again not to go a bridge too far and to avoid over-editorializing and this is another occasion.

When he said that their findings would be “SCRUTINIZED” and that they had to be prepared to “publicly defend” their findings, one could reasonably assume that Climate Audit was on their radar. I do not believe that workshops among “collaborators” were what they had in mind. Thus it is disappointing that Kaufman turns out not to be prepared to “publicly defend” his findings in the most visible public forum for such scrutiny.

The strong inference I take from this is that his study is indefensible to a knowledgeable group of commentators and he knows it.

Disagree. My guess is he thinks, like many CA detractors, that Steve is, “as usual”, making a mountain out of a molehill, that any small errors “don’t matter” in the grand scheme of things, that the result is fundamentally robust, and that we are just a bunch of hacks trying to sink a ship by poking pinholes in the hull. We’re just “vicious” “deniers”. “Court jesters”.

If Arctic warming is unprecedented in 2000 years, I really, really want to know. That’s what makes this paper so sad. I criticize it and suddenly … I’m a denier? Wha?! I accept the proposition as plausible. I just want to see the evidence. Including reasonably robust estimates of uncertainty.

The strong inference I take from this is that his study is indefensible to a knowledgeable group of commentators and he knows it.

The standard counter is that Steve M et al. are a bunch of hacks, not eminently qualified reviewers, and engaging with such unscrupulous types can only lead to legal trouble. Best to cut your losses by moving on. Fine, let him defend his work at RC. We can document all the snipping that we know they will do and show them for what they all are: afraid to be SCRUTINIZED.

Flakes

I try to put myself into Kaufman’s position. As I’m sure that many here do – even while feeling that his new Hockey Stick is profoundly unsatisfactory and questionable, for reasons well-understood here. I try to think, assuming the best of Kaufman, what did he mean by “I really, really tried”? That he sent info that he thought Steve still lacked? That he offered a phone exchange, twice?

It is easy to simply miss seeing things, when they come from someone with whom one has deep unresolved disagreements. Perhaps he misread Steve’s request that he (Kaufman) note any specific “vicious” comments and let Steve know. Probably he does not realize how Steve has already offered private threads eg to Juckes when

Juckes ignored the special thread and never dealt with any of my questions. Instead of dealing with me, he picked fights on other threads, typically with the least sophisticated commenters. His comments were bickering and very unhelpful – mostly the sort of drive-by slagging that we’ve seen from time to time from climate scientists, behavior that I don’t tolerate from regular readers.

Perhaps he didn’t know what Steve means when Steve said

Your email attaching pdfs of various articles is unresponsive to my request, as is your suggestion that I contact the various original authors for the supposedly “publicly available data”.

and felt he had no time to spare to try to understand what Steve meant.

I don’t want to force words into anyone else’s mouth, but at the same time, I want to understand the sticking points that are making Kaufman misunderstand what seem like perfectly reasonable requests and are making him back off all reasonable communication.

Hero complex – maybe not. I don’t think that Kaufman has such a complex. I think it comes down to something more fundamental.

Steve knows his mathematics, especially the analysis of time series and the use (or abuse) of cherrypicked proxies and unsuitable statistical models. And there are lots of people here who have considerable experience of this sort of analysis and the fundamental issues involved.

And I don’t believe that anyone on Kaufman’s list gave much thought to replication and dissection by competent analysts. That’s just the Ivory Tower syndrome of academia.

It’s a fear response when people behave like Mann, Jones, Steig and Kaufman. A fear of humiliation in front of people they are not expecting to raise substantive issues.

At the bottom of this is the combination of dubious proxies in a temperature reconstruction where significant variance comes from data series which are uncalibrated with temperature. And that is what Kaufman is ducking.

I don’t believe that anyone on Kaufman’s list gave much thought to replication and dissection by competent analysts

And yet they KNEW they would be SCRUTINIZED, even if by non-expert (whatever that means) hacks. And so now that day has come … and so why on Earth are they unprepared? (Copenhagen rush job?)

Kaufman’s co-authors may be advising him to steer clear of CA. If the data are good and the results are robust, then this is bad advice. Kaufman is the leader. He should be willing to stand up for his co-authors. Failing to comment here is a failure of leadership. His co-authors should be pressuring him to show up and give it an honest try. Because the fact is he really, really did NOT try. Not at all.

There are two words in Steve’s final letter which stand out: “Offensive” and “Unresponsive”. I’m sure that these could have been replaced by “mistaken” and “incomplete”, and this would have provoked a less strong reaction.

Before anyone says that Kaufman should be less sensitive, perhaps the same could be said of Steve, who was so massively offended by what Kaufman wrote. Unfortunately, something in Steve’s letter caused Kaufman to take his ball and go home. Using words like these and a demand to “withdraw” a comment is more likely to stop participation than encourage it.

I try hard to maintain a civil atmosphere at this site. At the suggestion of a reader, I notified Kaufman of the discussion of Kaufman et al here (in case he wasn’t aware of it) and sent him a polite and cordial invitation to participate. Given the effort that I make in maintaining an atmosphere that would not offend my mother or sisters or children, yes, I found it “offensive” when Kaufman slagged this site as engaged in “vicious commentary”. I wasn’t “massively offended” – again I ask readers not to attribute views to me that I haven’t expressed. In my first response, I did not express the fact that I found this remark “offensive” (though I did); I advised Kaufman that I had blog policies requiring politeness and asked him to identify comments that breached those policies. This is my regular response to such broadbrush allegations (as readers know from inline discussions on other occasions.)

Kaufman’s response:

Thank you for your offer to clean up the blog.

was another derogatory comment. He could have answered in a variety of ways but intentionally made a snotty (and unresponsive) reply. I again asked him to justify his claim that I had engaged in “vicious commentary”, this time pointing out that I found the allegation “offensive”, which was true.

If someone asked me to support a similar comment, or, in the alternative, if I were unable to support it, to withdraw it, I would either support the comment or withdraw it. I wasn’t asking him to do anything that I wouldn’t have done myself. It was open to him to say that he had reviewed the threads in question and that he withdrew his observation that I had engaged in “vicious commentary”. Wouldn’t have been any skin off his nose.

As to the term “unresponsive”, again it is correct. I had asked for data on four occasions. Kaufman knew exactly what I was asking for and, in my opinion, intentionally sent me material that was “unresponsive” to my request. The issue wasn’t that his response was “incomplete”. Indeed, he foreclosed any prospect of supplying the requested data by telling me to try to get the data from the original authors.

I again asked him to justify his claim that I had engaged in “vicious commentary”

I’m pretty certain he was speaking of the “commentary” in the replies, not your head-post commentary. Therefore I think he was miffed that you thought he was talking about you. Not that I agree with his characterization of the replies here, which are much more civil than those in essentially all other open-comment websites I visit. But I think you two are talking past each other.

I’m also not quite sure why you didn’t take him up on a phone call. It might have been fruitless, and in any case wouldn’t have been quotable (which I expect was his intention). But you would at least have known where each other was coming from.

Steve: I hadn’t said that I wouldn’t call him. It wasn’t a priority but I hadn’t declined the offer. I was still thinking about what we were supposed to be talking about, but was pretty busy over the last few days.

The questions suggested by readers were the sorts of questions that one might ask in an examination for discovery – and indeed those are the sorts of questions that I might have ended up asking him – but realistically I don’t think that he was making himself available for a telephone examination for discovery.

Plus I was still in the process of analysing the paper based on available information -this sort of thing takes me a while. Especially when, as in this case, there are some new proxies. Until I’d finished that process, I didn’t think that I’d be in a position to decide what to talk about.

In contacting him, I did not do so for the purpose of asking him questions or for the purpose of readers asking him questions. I had two purposes. I requested data – which was refused. And secondly, I notified him that I had made some comments at CA about Kaufman et al 2009 – this notification was entirely a courtesy. It wasn’t so that people could ask questions of the Herr Dr Professor. If he wanted to comment, I was offering him a no-strings opportunity.

I hadn’t said that I wouldn’t call him. It wasn’t a priority but I hadn’t declined the offer.

This is, to borrow your term, unresponsive. If his purpose was to be grilled by you on details of the paper, e-mail exchanges would have been much better. Instead he was interested in talking to you in a way which wouldn’t show up the next day in a blue box on CA. I’m thinking trying that phone number immediately would have been better than trying to turn the call into a substitute for e-mails or blog posts. If you couldn’t get him to come here after a phone call or two, then you could be the one saying you’d really, really tried.

Steve: I don’t mind talking to opponents. At the NAS panel hearings, for example, I was quite sociable. In this case, I was very busy and preoccupied through the day of Sep 16 with family things and this proved to be the only window. Calling Kaufman wasn’t an urgent priority for me and I had no reason to believe that there was a one-day window. Blog time sometimes dilates. Kaufman’s email indicating a telecon arrived in the late afternoon of Tues, Sep 15. I was playing squash in the late afternoon (my first game in a long time as I’ve been injured) and sent a responding email just after midnight Eastern (I presumed that the telephone number was an office and not a home number). After a rather trying Sep 16, I received a 2nd email from Kaufman about 9:52 pm Eastern, not providing data and making another snotty comment about CA. It was after business hours both here and in Arizona. I replied to Kaufman a bit after midnight Eastern. His “really really tried” email arrived just after 1 am eastern. So the actual response window available was only the day of Sep 16 (when I was busy).

I find these threads spend way too much time on who is offended and by what and whom. If Dr Kaufman chooses to reply, that is a bonus – providing that his reply is on the topic at hand and not a lecture on the perceived disposition of some posting here.

I displayed all 23 proxies and the average of all 23, as calculated by Kaufman et al, with breakpoints on another thread and obtained zero responses. Maybe what I did is obvious to all posting here, but it was revealing to me to see the mish-mash of proxy results, with their many differences, all coming together to give a hockey-stickish appearance. To me that is what we should be posting about.

Exactly. The data speak for themselves. The reconstruction is not robust. We already showed that. What more is there to say? What’s left to defend? Inversion of Tiljander? Choice of Yamal over Polar Urals? Sure, just like Graybill over Ababneh. Or Jacoby over Dagneau. Indefensible choices.

In many cases, Kaufman’s decadal series are averages of annual data. Just because Kaufman coerced annual data into decadal versions (paying no heed obviously to Matt Briggs’ admonition not to smooth data) doesn’t mean that the annual data used for the decadal average wasn’t “part of the study”. It was.

I concur that Kaufman is obligated by Science policy to provide the annual data that went into his decadal averages.

However, decadal averaging isn’t “smoothing” in Briggs’ sense, unless you try to run a regression with annual observations on a 10-year moving average (as people often do!). Since Kaufman was simply using non-overlapping decadal observations, he was not inducing the severe serial correlation that would have been created with overlapping averages. In fact, he is probably reducing any serial correlation that was originally present.

There is always some loss of information from time-aggregation, but I gather that at least some of his proxies were only decadal to start with, so he may have had to choose between dropping these or going decadal with all the series.

There are much greater problems with how he calibrated his decadal index to temperatures.
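The distinction drawn above – non-overlapping decadal bins versus an overlapping 10-year moving average – can be illustrated with a minimal sketch on synthetic white noise (this is illustrative only, not Kaufman’s data; the `lag1_autocorr` helper is my own):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
annual = rng.normal(size=1000)  # synthetic white-noise "annual" series

# Non-overlapping decadal means (Kaufman-style binning): 100 points.
decadal = annual.reshape(-1, 10).mean(axis=1)

# Overlapping 10-year moving average (the smoothing Briggs warns about).
kernel = np.ones(10) / 10
running = np.convolve(annual, kernel, mode="valid")

# White noise has near-zero lag-1 autocorrelation; binning leaves it
# near zero, while the overlapping average induces strong serial
# correlation (about 0.9 in theory for a 10-point window).
print(lag1_autocorr(annual))
print(lag1_autocorr(decadal))
print(lag1_autocorr(running))
```

The point is that the overlapping average manufactures serial correlation that was never in the data, whereas non-overlapping bins merely reduce the sample size.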

I sent the following letter to Sciencemag requesting the refused data (copy to Kaufman):

Dear Dr Wills,

Kaufman et al 2009, a recent publication in Science, stated:

“We compiled available proxy climate records that (i) were located north of 60°N latitude, (ii) extended back at least 1000 years, (iii) were resolved at an annual to decadal level, and (iv) were published with publicly available data”.

Unfortunately the Supplementary Information did not include URLs providing the location of the supposedly “publicly available data”. I have been unable to locate publicly available versions of the following proxies as used in Kaufman et al 2009:

For the latter four series, Kaufman et al say that they obtained annual versions directly from the authors, somewhat contradicting their claim to have used “publicly available” data.

I’ve sent four requests to Dr Kaufman asking him to provide the original data (or public URLs) for the above data sets. He has refused to provide this data – which, according to the representations of the article, was supposed to be “publicly available”. In some cases, the original data was published by coauthors of Kaufman et al 2009 (e.g. coauthor Bradley is the originator of the Lake C2 data, which is not “publicly available”).

Your data policy clearly states:
“After publication, all data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science.”

The Supplementary Information to Kaufman et al 2009 does include decadal averages of the original data, scaled to a 980-1800 reference period. This is clearly insufficient to “understand, assess and extend the conclusions” of the manuscript (particularly when the data was supposed to be “publicly available” in the first place) and I accordingly request that you require the authors of Kaufman et al 2009 to provide the data in the form requested above.

In addition, their assertion that they compiled proxy records with “publicly available data” appears to be at least partly untrue, given their inability to provide URLs for the data or the data itself, and I request that you seek some explanation from the authors for this claim.

I’ve already received an acknowledgment from Stewart Wills, thanking me for making them aware of the situation and saying that the message had been passed on to the editor of the article.

I’ve obviously had some experience with Science and Nature on their data policies.

I can’t imagine how Kaufman thinks that he will be able to uphold his data refusal; Science and Nature have lost patience for this sort of nonsense and I expect them to make Kaufman deliver up the data quite quickly. Perhaps Kaufman is unaware that Science has tightened up its administration; otherwise, I have trouble understanding why Kaufman refused to provide the data in the first place.

Steve, slightly OT, did you ever get all of Santer’s data from Santer? Or did you have to get it someplace else? (I realize Santer et al 2008 was not in “Science” or “Nature”, but I was just curious how helpful that particular journal was, Journal of Climatology, or somesuch, IIRC)

Steve: No. IJC did not have a data archiving policy. This was supposed to be taken up at the BAMS meeting in the spring – maybe someone can follow up on this.

I followed up on the IJC policy on data archiving with the Royal Met Soc in June. I had a very polite email from Prof Hardaker as follows:

The Scientific Publishing Committee met a couple of weeks ago (on 11 May) and I have an action from that meeting to come back to you to let you know the outcome. Apologies that I hadn’t yet managed to get to that.

I did present to them some examples of what other publishers have in place regarding policies on making data available. The Committee felt that there would be value in the Society formalising a policy on this that would apply to all our journals. They have asked me to bring a draft proposal to their next meeting (which is in the autumn) for us to finalise the details.

It’s a bit frustrating that something as simple as this can take so long, but there does appear to be some movement at least. I have it diarised to follow up in October.

(I think I posted a comment here in the summer, but it may have been missed.)

He didn’t say you or the blog engaged in vicious commentary, exactly. He said he didn’t want to get involved in vicious commentary, so it’s possible he thinks his arrival would cause vicious commentary to appear. He could also just be generalizing to all ‘skeptic’ blogs, and not accusing CA in particular.

From his point of view, it looks like he tried, putting together several zip files.
If I had received the e-mails you sent, I might have reacted the same way.
‘Your email attaching pdfs of various articles is unresponsive to my request, ‘
This would irritate me.

Steve: If he didn’t think that I had engaged in “vicious commentary”, it was open to him to clarify this point. As to the data, I specifically asked him four times for data said to be “publicly available”. Instead of sending me what I asked for, he sent me something unresponsive. He knew that it was unresponsive and intended it to be unresponsive. Why would he or anyone else be irritated at being told that it was unresponsive when it was? On his side, maybe it was worth a try to see if he could fob something off on me and hope that I didn’t notice, but why be irritated when that didn’t work?

He didn’t say you or the blog engaged in vicious commentary, exactly. He said he didn’t want to get involved in vicious commentary, so it’s possible he thinks his arrival would cause vicious commentary to appear. He could also just be generalizing to all ‘skeptic’ blogs, and not accusing CA in particular.

He directly implied that the blog needed “cleaning up”. Presumably it’s the “vicious” parts that need cleaning up? Steve said he would be happy to do that if Kaufman could identify them. Crickets.

using words like ‘failure’ and ‘unresponsive’ would irritate me

Lots of things irritate me. You, for example. That doesn’t stop me from responding. Rather the opposite, in fact.

I don’t think he has to withdraw a comment made in an e-mail, and asking him for a recanting is too much. Also, using words like ‘failure’ and ‘unresponsive’ would irritate me. I would be thinking, who is this guy making demands of me? Especially if I had put together a zip file of numerous proxy data.

Steve: he did NOT make a zip file of numerous proxy data. Had he done so, my reaction would have been quite different. What on earth makes you think that he made a zip file of data?

Re: MikeN (#140), Figure it out, Mike. Kaufman used the data, and personally has the data. He said the data are publicly available. Steve asked for the data. In response, Kaufman sent him pdf copies of articles that discuss the data. Articles that discuss the data are not the data. The data are two-column ascii files. Kaufman has the data, owes the data, promised the data in his published assurance of public availability, and then didn’t supply the data when asked. That’s being unresponsive.

I think that argument depends on whether Steve McIntyre is unknown in the field of climate science and multiproxy reconstructions, and particularly to Kaufman. That’s pretty unlikely.

Is the request for data and methodology a bridge too far? The entire point of having a lead author is to act as a conduit for interested researchers to ask further questions and to make data requests.

But there is a more general point that again and again, climate researchers have failed to make data and methodology available even when a) the terms of the government grant say they must b) their terms of employment with academic institutions say they should and c) when the data availability rules of the publishing journal make it a condition of publication that they do so.

I have no reason to believe that Kaufman “knows” that his study is indefensible. On the contrary, my presumption is that he doesn’t “know” this and sincerely believes the opposite. So when he reads this sort of statement, his reaction is “BS” and uses that as an excuse to reject any valid criticisms.

Now we’re both guessing, but I ask the question: if you sincerely believed that your reconstruction was defensible and you saw people who were pulling it apart, you’d step up to defend your work, wouldn’t you? You might not necessarily want to reply to every single comment, but you want an opportunity to respond to the major criticisms on the statistical model used, the proxy selection, calibration, inference of temperature change and the error analysis. So you’d write an editorial on those points, and send it around to make sure everyone knew.

Besides which, my inference was made AFTER he’d decided to go into his shell. If I was in Kaufman’s position I’d make sure I answered on the substantive issues, unless I knew my argument was weak.

There’s nothing vicious about the commentary here that does not happen every day in every science department and between science departments every day and twice as hard. The only “vicious comments” were the ones that asked the hard questions mentioned above.

If every scientist were to run and hide every time they were “irritated” by a critic, nothing would EVER get published because there would never be any response to irritating peer review. Irritation is not the cause of run and hide.

Unfortunately these data refusals are part and parcel of climate science. In my opinion, a stand needs to be taken against each and every obstruction. Don’t worry – we’ll move on back to the proxies in due course, using the information that is available.

You’re OK Steve. What you are doing is great: posting correspondence, defending your actions and displaying transparency. I just think that once the window is open to speculation, motives, etc., some readers tend to get a little enticed by the drama and the comments start to resemble those found on typical skeptic blogs. I fear that ‘The Team’, as you put it here, probably revel in CA getting worked up into a tizzy. It’s just one more diversion for them. (I speculate)

So it wasn’t proxy data, but he did put together something.
It does hinge on whether he knew ClimateAudit to begin with.

Steve: Mike, of course it wasn’t proxy data. That’s what I said in my correspondence and what I objected to. Nothing “hinges” on whether he “knew” Climate Audit. I asked for data on four separate occasions. He knew precisely what I was asking for and intentionally sent me something else that I hadn’t asked for – as though that were some big deal. Then he had a temper tantrum when I told him that this was “unresponsive” – which it was. It’s Kaufman’s problem, not mine. I’m 100% sure that Science will make Kaufman disgorge the information. Kaufman is merely going to annoy the editors at Science by involving them in stupid Team games.

Well I don’t know it was intentional. It could be, and of course you have the experience working with the ‘team.’ However, if it isn’t intentional, then your words have escalated things.

Steve: Give me a break. I asked him four times for data and told him that his final refusal was “unresponsive”. It was his refusal that caused the issue, not my telling him that his reply was “unresponsive”. If there’s an issue, it has nothing to do with the words that I used to ask for the data. I get a little tired of people saying that if I’d asked just a little bit differently, said some magic word, that the Team would have sent the data. That’s simply untrue. I see nothing wrong with the language that I used. I do see something wrong with Kaufman’s data refusal and with his cheap shots at Climate Audit, that he couldn’t support. My original invitation to him was purely courtesy. If he didn’t want to participate, that’s fine – he could simply have written something like: “thank you for the invitation. However, I don’t have the time right now to get involved in blog discussions. In addition, if I were to get involved in blog discussions, I would probably do so at realclimate where my coauthor Ray Bradley is a co-host. If I change my mind, I’ll let you know. Thanks again for the courtesy of notifying me of the discussion.” That’s what I would have done. Instead, he made gratuitous insults about “vicious commentary” and “cleaning up” the blog and wasn’t prepared to back them up.

1. So irritation is not a valid excuse for non-compliance.
2. What monopoly wouldn’t be “irritated” by a change in regulation & enforcement that threatens the monopoly? Those feeding IPCC are too used to having their cake and eating it too. If you are going to do research that affects the global economy you can expect to be asked to release your data to the public. This is fair. “Proprietary” data can not drive public policy. Release the data, or get out of the arena (of policy-oriented research).

Steve M did “really, really try”. Several times. To get someone to comply with a disclosure rule to which they had already agreed. Why should he really, really try to get the guy to show up here? What’s Kaufman going to reveal here that we don’t know already? Be realistic.

If Steve had simply sent a quick e-mail back to Kaufman saying, “I’d love to talk to you. Busy this afternoon. How about tomorrow afternoon?” And if he’d held off posting the initial reply from Kaufman until they’d talked, things might have been different… or not. The question is how to appear gracious when someone else is trying to get your goat.

Dave, when I go into a meeting, I like to have some idea what I’m trying to accomplish and I was still mulling that over when Kaufman had his temper tantrum barely 24 hours into the discussion. I tried to chat pleasantly with Hughes after the NAS panel meetings – he’s from Liverpool and I asked him about a then recent comeback match between Liverpool and a Turkish team; I tried to be sociable. I had beer with Crowley after the NAS panel meetings and chatted about basketball. I had lunch with Ammann and made a very sensible proposal to him. There was a nice reception after the NAS panel meeting in 2006 and I chatted pleasantly with D’Arrigo, Hegerl, North and others. I’m sure that I could chat with Kaufman about the Arizona Cardinals or the Phoenix Suns or the Phoenix Coyote bankruptcy – if I were stuck in an elevator, that’s the sort of small talk that I’d come up with.

But it’s hard to chit-chat on the phone about distributions and statistics. I hardly ever do that even with Ross. I think that your notion that a brief telephone conversation would yield relevant insight is wishful thinking. I agree that chit-chat has a useful social function and it’s too bad that Kaufman had a temper tantrum, but realistically it wouldn’t have been more than chitchat.

I guess that I’m used to life situations where using words like “unresponsive” do not cause the opposite party to wither like a precious little flower.

There are situations where it’s a reasonable strategy to be unresponsive. If I were being unresponsive as a strategy (as Kaufman was), you have to expect to be called out on it and not to take offence when you’re called on it. That’s what happens.

I try to describe situations accurately. Sometimes people don’t like what I report, but it’s not often that they can point to errors and defects in my reports (not that I claim infallibility – audits exist because people aren’t infallible – but I’m careful). In many cases, it’s probably not “nice” to call a spade a spade. It wasn’t very “nice” to observe that Aslak Grinsted’s embedding dimension manifold was simply a triangular filter and to somewhat burst the bubble – but the observation was correct.

I think that there’s a useful role for critical analysis of studies in the public eye. I try to ensure that standards of politeness are maintained here – but there is no obligation to be gooey-nice.

Re: Dave Dardinger (#171), If Steve had … And if he’d … things might have been different … The question is how to appear gracious when someone else is trying to get your goat.

My two penny worth having read this posting:

The question is not “how to appear gracious”, it is not how he asked for the data, it is not what would have happened if he had worded things differently, if he had said this or that, if he had not failed to grovel on his knees etc.

The question is: should the data be made available on request? The answer, without a doubt, is YES.

Re: Kenneth Fritsch (#130), I find these threads spend way too much time on who is offended and by what and whom…

True. Though who is offended and by what and whom is being raised as a justification for not revealing the data.

I wonder how any real meaningful discussion could be done on the reconstruction without the data. Maybe this does play into the hands of the criticism I have read elsewhere that Climate Audit keeps cribbing about the non-availability of data and making a mountain out of a molehill. But I don’t see any way around it till the data is finally revealed.

Re: Jeff Id (#160),
That’s an appeal to authority. The correct questions to ask are “how is influence measured?” and “what are the relative measures of influence for each of the proxies?”. One way of measuring influence is the PC1 correlation. Ken listed them. Yamal #22 ranks middle of the pack. In my mind a more relevant measure is the difference between CWP and MWP reconstructed temp with and without the series. Do this for each of the series and you will have an objective measure of influence for each proxy, and you will find that #22 Yamal ranks near the top if not at the top. I like that measure better because I think people get too hung up on the breakpoint at the blade and don’t look enough at the slope of the handle. The substantive issue here is CWP vs MWP levels, not HS shape.
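The leave-one-out influence measure described in this comment – recompute the CWP-vs-MWP gap with each series dropped from the mean – can be sketched as follows. This is a sketch on synthetic standardized series, not the Kaufman data; the window lengths and helper names (`cwp_mwp_gap`, `loo_influence`) are illustrative assumptions:

```python
import numpy as np

def cwp_mwp_gap(recon, n_modern=10, n_medieval=10):
    """Difference between the mean of the most recent points ('CWP')
    and the mean of the earliest points (standing in for the 'MWP')."""
    return recon[-n_modern:].mean() - recon[:n_medieval].mean()

def loo_influence(proxies):
    """proxies: array of shape (n_series, n_times), already standardized.
    For each series, the change in the CWP-MWP gap when that series is
    dropped from the simple all-series mean reconstruction."""
    full = cwp_mwp_gap(proxies.mean(axis=0))
    return np.array([
        full - cwp_mwp_gap(np.delete(proxies, i, axis=0).mean(axis=0))
        for i in range(proxies.shape[0])
    ])

# Synthetic check: five noise series, one given a large modern uptick.
rng = np.random.default_rng(1)
proxies = rng.normal(size=(5, 100))
proxies[2, -10:] += 7.0  # one series closes roughly 7 sigma high
print(np.argmax(loo_influence(proxies)))  # the spiked series dominates
```

The design choice here mirrors the comment: influence is measured on endpoint levels (CWP minus MWP), not on overall hockey-stick shape, so a series with a smooth rise to a high closing value scores high even without a sharp blade.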

Actually it’s a little trickier than an appeal to authority. It seems like an unusually strong statement from Steve. If he wasn’t certain, usually he won’t say it so you outed my attempt to figure out if the study has been replicated. Heck, I’ve been way too busy to even read all the comments so if you told me Kenneth did it in #32 I’d have to look but in skimming, even the gridded hadcrut data has been a challenge.

That’s usually what I have in mind when I speak informally about “contribution” to HS-ness; or alternately, the difference between the late 20th century mean and the pre-20th century average. Yamal’s 7-sigma closing is VERY material to the overall 1995 average.

Re: Steve McIntyre (#168),
My comment was more directed to Jeff Id (and John A) or anyone else who might be too focused on overall shape (dominated by blade geometry) without looking at endpoint levels. I know you understand that the substantive issue is CWP vs MWP. A smooth rise to a 7-sigma value (!!) accomplishes both. (Say, what’s the sigma on the bottom of the blade at the breakpoint on Yamal? -2?)

Yes, we know that. In one of the posts, I observe that a 19-series subset does not have a HS shape. The HS-contributors are Yamal, the Loso varve series (hence the post on this series) and a couple of lesser contributors. In Kaufman's decadal version, Yamal ends at nearly 7 standard deviations, by far the highest of any series.

I’ll do up a version in which I (1) replace Yamal with Polar Urals and see what happens; (2) experiment a little with the Loso series (which has a known inhomogeneity in 1957.)

Do we know that for sure, or is it just from the impressively obvious similarities between the final result and the one proxy?

Wouldn’t “knowing for sure” require direct access to a fully detailed end-to-end process archive of everything used to produce the Kaufman HS — data, programs, process descriptions, procedures, and analysis methods?
If these materials are not available for direct examination, then the process is a black box process for all practical purposes — triggering a blue box process here on the Climate Audit blog.

#166. A lot of materials are available for Kaufman et al, so quite a bit can be said with the existing record.

Kaufman archived decadal averages of series as he used them and that’s very helpful. A portion of the problem arises out of the fact that prior authors haven’t archived their data and Kaufman’s claim to have only used “publicly available data” is untrue. For example, Bradley’s Lake C2 wasn’t archived. I suspect that hell would freeze over before Bradley sent me the data for this as Bradley has refused any prior data requests without even acknowledging a single email. Kaufman’s suggestion that I try to get data from Bradley (who is additionally a coauthor of Kaufman et al) is a sham. I’ll work through Sciencemag and as noted above, I’m confident that I’ll get the data.

There is a spatial sampling bias in this study (Fig 1) that is disconcerting. The type of proxy (lake, tree, ice) correlates heavily with longitude (45W=Greenland, 150W=Alaska, 60E=Russia). This could introduce strong biases into the reconstruction. The point merits discussion.

Steve: if strong local patterns could be demonstrated, then perhaps. But typically series are inconsistent within a region e.g. Yamal and Polar Urals. IMO it’s the cherry picking of series that is the more material issue, as opposed to any potential regional bias.

You’re probably right that no data was forthcoming. But I don’t think that makes it OK to use language that would be abrasive. I realize you have plenty of experience in being obstructed, and that the ‘team’ doesn’t consider you to be engaged in science.

It’s not just this time. I saw the same type of behavior with Aslak Grinsted in a prior thread. Someone else commented that you weren’t being nice.

If you think he was being unresponsive as a strategy, then your words make sense. But why start the process if you don’t think you’ll get anything?

Mike, it’s plain as day to me that he was being intentionally unresponsive about supplying the data.

Why ask him for the data “if you don’t think you’ll get anything?” Maybe I’d be surprised. I didn’t know for certain in advance that he would be unresponsive. He might have said that the failure to archive the annual data was an oversight and that he’d promptly remedy the situation. That has happened. Not every data request goes into journal-litigation. Even in cases where I don’t expect the author to supply me the data, I still ask for it so that the author can’t say – well, he never asked me. It’s like following a legal process. You have to dot the i’s and cross the t’s.

It is my belief that the data should be publicly available at the time of publication of the article – especially when the author says that the data is “publicly available”. Thus, any notice to Kaufman prior to notifying the journal was merely courtesy on my part.

In my experience, if an author intends to provide the data, he does so right away on the first pass without any song and dance in response to an email written in my usual business-like tone.

This has happened on a couple of occasions with non-Team oceanographers (William Curry springs to mind), who were surprised that they hadn’t archived the requested data and took immediate steps to rectify the situation. Konrad Hughen, one of whose series was used by Kaufman, was another who archived data in response to an inquiry from me.

I ask for things in a business-like way. I think that I’m polite, but if I were worried about being “nice”, I wouldn’t be doing what I’m doing. In business terms, I’m not hard-nosed but I’ve got enough experience to recognize when people are being evasive.

I didn’t post up my first two requests to Kaufman, which I provide below. In the old days, I got up to 30 emails or so to Crowley without getting data. I’m much quicker to pull the plug now. I give the author a couple of opportunities to rectify the situation and then publicize it here and then go to the journal.

I did not immediately publicize Kaufman’s obstruction. I asked him offline first. Here’s the first request on Sep 3 (which also notified him of upside-down Mann):

Dear Dr Kaufman,

Mann et al 2008 used the Tiljander series upside down from the orientation in the underlying articles (see McIntyre and McKitrick, PNAS 2009) – a point confirmed with Mia Tiljander (pers comm). I notice that you used this data in the upside-down Mann orientation, though you seem to be aware of the issues surrounding this series, as you truncated it at AD1800. You should report that you used this series upside-down to the orientation recommended by the authors.

You say that you selected series from those with "publicly available data (8) (table S1) (www.ncdc.noaa.gov/paleo/pubs/jopl2008arctic/jopl2008arctic.html)". The link only refers to 6 or so of the 23 data sets. I have been unable to locate "publicly available" versions of many of the data sets, including SI Table 1 series 3, 6, 7, 12, 13, 19, 21 (in a few cases, as you note, you used annual versions that are not publicly available). Could you please provide me with the above data that is not publicly available? You may wish to amend your text to be a bit more precise prior to the final print version.

Regards, Steve McIntyre

To which he promptly replied (attaching the decadal version now available at NOAA, but not when I sent my first letter):

Thank you for your comments.

I wasn’t aware that Mann et al. had flipped the Korttajarvi series, and I’m afraid that I didn’t see your correction in PNAS. I’ve plotted our original series along with the re-orientation of the Korttajarvi data to get a feel for the effect on the overall result (attached).

I’ve attached an excel file with the data as well. The column headings are keyed to the sites in Table S1.

Thank you for your interest in the study,

I responded that the version supplied was not what I asked for and re-requested the data as follows:

Thank you for your acknowledgement. I note that you have provided the decadally averaged and re-scaled versions as used – good – but this is not exactly what I'd asked for. You had said that you had selected "publicly available" data. I'm pretty familiar with "publicly available" data sources and I haven't been able to locate a public source for 6- Lake C2; 19- Lake Nautajarvi; 21- Lake Lehmilampi. Could you point me to the public location of these particular series that are supposedly "publicly available"? In the alternative, I'd appreciate copies of the data prior to the decadal averaging and re-scaling.

I am only aware of Mann’s version of 20- Lake Korttajarvi in public circulation though I obtained an original version of the series from Mia Tiljander, which corresponds. Is there another public version of this series?

As you note in your article, you used annual versions of 7,12,13 and 16 that are not publicly available. Again, I’d appreciate copies of these series as I am interested in examining this dataset on an annual as well as a decadal basis.

You observe that it doesn’t matter whether Korttajarvi is used upside down or not. You should probably check whether the other Finnish series are used upside down as well. My guess is that it probably won’t matter whether all of them are used upside down. It would be interesting to experiment with whether you can turn a majority of the series upside down without “mattering”. I suspect that you could as the Hockey Stick-ness of the result seems to depend heavily on series 1, 4, 9 and, of course, Briffa’s Yamal series, which has been a staple of these sorts of studies for many years. It would be highly desirable for someone to do a detailed reconciliation of why the updated version of Polar Urals yields such different results to Briffa’s Yamal series.

Regards, Steve McIntyre

This received no acknowledgement within a week or so. So I sent the follow-up in the correspondence already shown above.

I think that the correspondence is business-like. It's how I write. I do not believe that if the correspondence had been a bit "nicer", Kaufman would have sent the data.

The last note sent contained two messages. One was a continued request for data and the other was a description of the types of auditing that should be performed on the data. I’ve always found that it is better to keep to a single message in a note. The description of the types of auditing that need to be performed could have been misconstrued as a direct statement about the quality of the paper and the competence of the authors.

On the other hand, did they not expect critical and stringent examination of the results of an article, published in Science, that makes such strong claims? I have heard far more critical commentary and questions about papers expressed in the presence of the authors at panel sessions in engineering conferences. I heard one major researcher in the field of machine learning being told that her work was unsound and that she lacked credibility during the Q&A session of a keynote address at an AI conference. Her main fault was that her simple systems worked while the complicated systems of the prior art didn't. She had embarrassed the machine learning establishment and was to be made to pay a price. I suppose that there is a direct analogy to the current situation in this.

Steve: the PI meeting notes stated that they should be prepared to be "SCRUTINIZED" and to be a "lightning rod", and that their data should be available. For the nth time: I could have said pretty please with sugar on it; Kaufman had no intent of giving me the data.

Re: bender (#123), And yet they KNEW they would be SCRUTINIZED, even if by non-expert (whatever that means) hacks. And so now that day has come … and so why on Earth are they unprepared? (Copenhagen rush job?)

Hence there's no mention on http://www.realclimate.org/ of Kaufman, even though the paper's been out for a while. Just some more navel-gazing posts.

Re: BlogReader (#192),
I wondered about what they might (NOT) be saying. The silence is deafening. The only question is which is more disturbing to them: (1) trying to defend OMG-the-arctic-is-melting-and-IWTWT pseudoscience, or (2) coping with the possibility that burning fossil fuels in the 20th c. has prevented the next ice age. I'm always interested to hear what Gavin has to say. The rest I can do without.

One Trackback

[…] On the tobacco issue, the first major study on the link between lung cancer, heart attacks and smoking was ground-breaking research based on questionnaires returned from over 34000 British doctors. This study was continued for 50 years, reinforcing the original findings. Further, independent studies not only corroborated these initial findings, but enhanced the detail. Much of the initial temperature data for AGW studies were more ambiguous, reliant on a loose correspondence between the rise in greenhouse gases and average global temperatures. Moreover, data is often not properly archived, whether early studies (eg. Jones et al 1990), or later ones (e.g. Kaufman et al 2009) […]