Hegerl et al in this week's Nature

There’s a new study by Hegerl et al. in this week’s Nature, which, among other things, describes the performance of something called the CH-blend, a secret blend of 12 proxies – a secret somewhere up there with the Caramilk secret. As I mentioned previously, I requested the identity of the sites in the blend as part of the IPCC review process and was refused and even threatened with expulsion as a reviewer if I made any further attempts to obtain the data from Hegerl et al. At the time, I was told that such considerations were the province of the journal and that it was not the job of an IPCC reviewer to carry out any additional due diligence on data.

I brought this refusal up at the NAS Panel discussed here. Hegerl explained to the NAS Panel that the paper was under review by Nature, implying that showing the site locations to an IPCC reviewer would be a breach of a Nature embargo. While I objected, I got the impression that, with the mention of a Nature embargo, a hush fell over the NAS panel and everyone immediately "understood".

So naturally when Hegerl et al. was finally published in Nature this week, I immediately went to see if the Caramilk secret was finally revealed.

Three other paleo reconstructions are mentioned (Mann and Jones [2003], Briffa et al [2001] and Esper et al [2002] as re-calibrated in Cook et al [2004]), but most of the discussion is about the CH-blend, about which comments such as the following are made:

For CH-blend, our estimate of climate sensitivity fully accounts for the uncertainty in the amplitude of the record.

Results for the CH-blend (short) reconstruction, for which we have the most reliable uncertainty estimate, yield a 5-95% range for sensitivity of 1.4K to 6.1K and a median sensitivity of 2.6K over the preinstrumental period 1505-1850 (Figure 3a).

The resulting 5-95% ranges for CH-blend (short) shrink to 1.6K to 4.6K, those for all proxy data combined to 1.5K to 6.2K. This result reduces the probability from 36% to 15% or less that climate sensitivity exceeds the upper limit of the IPCC range of 4.5K.

So what is the CH-blend? Well, it is said to be:

our own new decadal reconstruction termed “CH-blend” of annual average 30-90°N temperature [15] (Figure 1). A version of CH-blend using 12 records extends from 1505 to 1960; and a reconstruction based on 9 sites (“CH-blend long”) is used from 1270. Both reconstructions use a relatively small number of well spaced sites (often based on multiple records, including some regional reconstructions).

The acknowledgements also said that "TC provided the reconstruction of past forcing and developed the CH-blend reconstruction" – that must be the Tom fellow that Hegerl talked about so much in her NAS panel presentation.

However, they didn’t actually mention what the sites were. OK, it’s not unreasonable to put lists in Supplementary Information, and Hegerl et al provided a Supplementary Information here. The SI webpage tantalizingly shows a table described as follows:

This file contains our new reconstruction and its 2.5% and 5% uncertainty ranges. It contains data necessary to reproduce our result.

Aha, the Caramilk secret finally. But, no. The Table only provided a digital version of the CH-blend from 1251 to 1960. To 1960 ?!? Why only 1960? Was there a Divergence Problem?

So what are the 12 sites? Well, they are nowhere listed in the Nature article. Instead, they are attributed to: Hegerl, G. C. et al. Detection of human influence on a new, validated 1500 yr temperature reconstruction. J. Climate, submitted. (2006).

A couple of points:

(1) Journal of Climate has no rules putting an embargo on identifying data. So this excuse of a Nature embargo has been total nonsense from beginning to end.
(2) IPCC refused data to me on the basis that that was the job of journal reviewers. Here we have an example of a paper which I have been following through the process and there is not a speck of evidence that the site details were either provided to or considered by Nature reviewers.

Just for fun, I’m going to try to guess what the Hegerl et al sites are, from the limited information available and from the other guiding Hockey Team principle – maximum non-independence of data. The article says that they used 12 series from 1505 on and 9 series from 1270 on (although the SI starts in 1251). Why didn’t they use the series from 1251 on? So here are my guesses together with reasons:

(1) the van Engeln composite starts in 1251 – the same year as the CH-blend starts in the SI. The van Engeln composite is used in Jones and Mann 2004 and Osborn and Briffa 2006. So it’s my top pick as being in the CH-blend.

(2) my next pick is the Yang China composite. If they used a van Engeln series in Europe, there’s a good chance that they used the Yang composite – it has the added attraction to the Hockey Team of including the Dunde and Guliya ice cores, which impart a HS to it. It was used in Mann and Jones 2003, Moberg, and Osborn and Briffa 2006. So it’s my #2 pick.

(3) if they used these two composites, they will also probably use a Greenland ice core. Probably the Fisher dO18 series from West Greenland – which is used over and over – MBH, Jones et al 1998, etc. I strongly doubt that they would use a Greenland borehole, as these have high MWP values. Crowley and Lowery 2000 used GISP and the Tom fellow might have returned to this series. But on balance, I’ll guess the Fisher composite.

With only 12 series, I think that there will be 9 tree ring series, so I’ll try to guess from these.

(4) The information shows that 3 series are added in between 1270 and 1505. Osborn and Briffa have 3 tree ring series joining in during that period: Quebec, Tirol and Mangazeja. I might vary these guesses as I think about it a little more.

(5) So we have 6 more series to guess. I’ll guess that they will be old faithfuls in Osborn and Briffa: the Luckman-Wilson Alberta series, Tornetrask, Yamal, Taimyr, Jacoby’s Mongolia and yes even the PC1 from Mann and Jones [2003].

But whatever the answer the Caramilk secret is still locked in the safe at the Nicholas School of Environmental Studies.

BTW, I’m going to immediately send in a Materials Complaint to Nature about the failure to provide the data for the CH-blend. They agreed that Moberg had to provide his data and even required Moberg to issue a Corrigendum a month ago related to data access. So there’s little doubt in my mind that they will require Hegerl et al to disclose the sites and the site data. But it’s ridiculous to have to do this all over again. It takes about 15 seconds to see the problem. You’d think that they would do this ahead of time.

But to date, as we’ve seen here, the last IPCC report has been quoted extensively as a kind of last word on certain key factors.

But the IPCC does not actually do any scientific work to confirm and validate such factors; instead it’s a kind of “What’s happened in the peer-reviewed literature in the past X years” survey – almost like a Reader’s Digest condensed version.

As such should it be cited with as much confidence as it usually is?

That’s not even getting into the veracity of the peer-reviewed journals.

This story sounds oddly like a movie I once saw: “Indiana McIntyre and the Raiders of the Lost Arc(hived datasets)” Favorite scene: Indie pulls out his materials complaint and dispatches the flashy, self-impressed sword-swinger with a look of utter disgust.

Does Nature have an explicit policy of an embargo on data under review?
Will you be formally copying this to the NAS Panel, and the editor of the Journal of Climate?
Is the IPCC’s refusal to do due diligence explicitly stated anywhere in their rules and regulations?

Peter, your opinion is similar to a train track. It doesn’t swerve much, in spite of the mountains in its way. If you actually provide an unbiased, scientific review of anything, I will make it a point to roll over in my grave when I die.

jae, the problem is that most of the vocal “majority” aren’t outside observers.

#12. Andrew Weaver is the editor of Journal of Climate. The chances of a Hockey Team article being rejected by Andrew Weaver are nil.

As to the impact of rejection, look at the example of Wahl and Ammann at Climatic Change. They relied even more heavily on benchmarks from their GRL submission, which was rejected – after acceptance but before going to print or even typesetting. As you know, I wrote Schneider about this; he wrote back telling me more or less to screw off. Climatic Change didn’t give a damn about the rejection. I wonder if they’ll even change the wording.

Or another example, Jones and Mann [2004] cited Mann et al [submitted to Climatic Change] as evidence that MM03 was wrong. The Mann submission to Climatic Change was rejected because Mann refused to provide data. But by that time, they got their sound bite into circulation through Jones and Mann [2004] and could circulate that sound bite without the need to provide any evidence.

Re #20 you betray your real self ‘sir’ and for the second time in this thread.

More interestingly, the BBC programme concluded ‘nothing in our programme should give comfort to these sceptics’; on the other hand, it did raise genuine concern about both the reporting of science and the alarmism sometimes associated with it. I agreed with a lot of it.

re #8: Does Nature have an explicit policy of an embargo on data under review ?

-this took about 45 seconds to find on Nature’s Publication Policies page. I copy only the first paragraph; the remainder is primarily clarifications and extensions of this, and you can read it there anyway:

“5. Prepublicity.
Once submitted, contributions must not be discussed with the media (including other scientific journals) until the publication date; advertising the contents of any contribution to the media may lead to rejection. The only exception is in the week before publication, during which contributions may be discussed with the media if authors and their representatives (institutions, funders) clearly indicate to journalists that their contents must not be publicized until Nature’s press embargo has elapsed (1800 h local UK time on the day before the publication date).”

And before you ask; find the link your own damn self. It isn’t difficult. You should have done it **before** you asked a question with scurrilous imputations.
——-

In the original article, Steve says:
“…there is not a speck of evidence that the site details were either provided to or considered by Nature reviewers.”

Also from the Publication Policies page:
“Authors submitting contributions to Nature who have related material under consideration or in press elsewhere should send a clearly marked copy, in confidence, to Nature at submission. Authors must disclose any such information while their contributions are under consideration at Nature.”

So, the supporting paper, by Nature’s policies, must have accompanied this paper. Now, review is confidential (which means, BTW, that trying to insinuate what the reviewers asked for and did or did not look at is simply guesswork, at best; Steve could just as easily claim that there is no evidence that reviewers looked at anything at all, or even existed at all), but by policy Nature required that the supporting paper be in their hands during review.
—-

And as long as we are indulging in guesswork about review: I have, a couple of times, reviewed papers (not in climatology; I’m a biologist) that critically depended on not yet published work either in press or submitted. In the in press case the relevant paper accompanied the review package that was sent to me. In the submitted case, I requested it and had it in my hands within a week. In both cases I recommended acceptance with minor revisions, and in the case of the submitted paper included a note to the editor that I had looked at the accompanying submitted paper, it looked sound, and my judgement as to that formed part of the basis of my recommendation, but I was not the formal reviewer of that paper, and left that decision in the editor’s hands.

The fact that this is obviously critical information, that it is referenced as not yet published and as submitted, and that it is clearly a sufficient body of work to deserve independent publication, argues strongly against the imputation that anything is shady here, as Steve strongly implies and as some of the responses are pretty overtly saying.

Maybe nothing “shady,” but it certainly creates some problems when a critical cited paper is rejected, which is what happened in the case Steve mentions. Shouldn’t a responsible Journal (and a responsible IPCC) make note of the fact that some of the supporting evidence for a paper has been rejected (for whatever reason)?

#23 & #24 – I think that post #16 expresses Steve’s frustrations about the nature of split papers. If one is dependent on the other, is it right to publish the first without the other passing peer review? Would the peer reviewers of one paper necessarily be the appropriate choice for the other paper, especially in light of possible controversies? With reference to the particular Wahl & Ammann paper accepted by Climatic Change, if their proof for the RE statistic is in the rejected GRL paper, what merits does their argument have? Schneider will appear to be playing politics to try to protect MBH98 if he does publish. This can however only be a figleaf and will actually damage his cause. I don’t believe he has thought this through.

#23 Lee, either Steve M. will be chastened, or you will be. For myself, if the past is any guide (and as a ranter against inductive conclusion-mongering I freely admit I’m working from a necessarily ill-constrained hypothesis here), the referenced, submitted Journal of Climate paper will include a map with proxy-site dots but will not list the specific ITRDB or other series used in the proxy re-construction; neither in the JoC paper nor in the supplementary material.

If that turns out to be the case, Steve M. will ask JoC for a list of the proxy data series, and the journal editor will stonewall.

#23. Lee, I said at the top of the post that Hegerl seemed nice to me.

There are many shades between poor practice and shadiness. If you’ve followed this blog at all, I’ve commented often on poor practices in the paleoclimate community of archiving data. This is merely one more example. I don’t think that the failure to archive the data is a sign of “shadiness” per se, as much as poor practice.

Having said that, now that I see the truncation of the CH blend at 1960, I’m definitely anxious to see what’s in the blend. (I should probably change some of my guesses as to what’s in the blend, as these guesses would give enough of a HS shape that they would probably not truncate in 1960.)

In the past, there has been some definite sharp practice in how the Hockey Team deals with post-1960 values and the “Divergence Problem”. I posted up last year that the post-1960 values of Briffa’s MXD construction in Briffa [2000] (which go down) were clipped off in IPCC TAR and then in later editions of Briffa’s MXD network. Crowley spliced instrumental values into the Crowley and Lowery [2000] proxy record.

Without the data, one has no idea why the post-1960 values were truncated.

Lee, do you have any suggestions as to why the CH blend is truncated in 1960?

“Lee, do you have any suggestions as to why the CH blend is truncated in 1960? ”

Without the paper? No, of course not. As you just said (“Without the data, one has no idea why the post-1960 values were truncated.”), neither do you.

I haven’t followed this blog. I’ve recently, over the past few days, been doing a tour of science-oriented blogs related to AGW.

As an introduction to this site, this thread is dreadful. The article is filled with veiled and indirect claims of scientific dishonesty.

This introduction to the part where you guess about the proxy series:

“Just for fun, I’m going to try to guess what the Hegerl et al sites are, from the limited information available and from the other guiding Hockey Team principle -maximum non-independence of data.”

is more than a veiled claim. You say directly that you expect that when you see the data, it will have been selected primarily to give the results they desired. That is an accusation of dishonesty, based by your own admission on guesswork at best.

I’m still going to look around here; the controversy that you ignited on this narrow proxy topic is worth exploring. But this is not a promising start.

If you would like an introduction to this blog, try the “Road Map” post. To expect every, and/or any, post on a blog to be an introduction to that blog is patently absurd.

“That is an accusation of dishonesty, based by your own admission on guesswork at best.”

Dishonesty is neither asserted nor implied. Your assertion of an “accusation of dishonesty” is baseless. If you have any hope, much less expectation, of being taken seriously here, try being at least plausible, or better yet, accurate.

Abysmal incompetence on the part of the authors, and/or the editor/reviewers, may be as plausible an explanation as dishonesty for the behaviours that Steve mentions. Perhaps I should thank you for serving as a volunteer foil for an occasion expressly to state such an obvious point.

As an apparently new visitor to this site, your evident ignorance of what has transpired before is quite understandable, if not laudable, but your apparent arrogance/presumptuousness is not.

You should understand that not revealing data and methodology is the chief bugbear of Steve McIntyre. After you’ve been accused of “intimidation” and worse for even asking to see them, you’ll know why Steve is sometimes a little sarcastic and more than a little suspicious of “secret blends” of proxies.

Lee, you might also consider whether the Nature restriction would (or should) include knowledge of the source information that the paper relies upon, as it is this that Steve is seeking. The authors have clearly been able to disclose that they use a blended series of proxies with quite specific (and important) properties, and they have been able to name that blend. It would not then seem to be a problem to identify the actual proxies, would it?

If, Lee, you had made the effort to gain some background in this field you might not have been so naive as to assume that such data was always readily available once publication was complete. Once you realise that such vital and relevant data has been frequently deliberately withheld, often for a number of years, you might understand the tone a bit better. When you further realise that there are significant questions on the suitability of the data and its selection, you would understand even better. Can you, in your field, select which pieces of the data you report upon, omitting those that would tend to invalidate your conclusions, and then expect to be able to prevent the discovery of such tactics by refusing to reveal which data you did use and how you selected it ?

I was not referring to a “here, let us tell you a bit about the place” introduction. I was referring to a first impression of style and substance. And the entire middle of Steve’s post contains guesswork based on hinted-at (at least) assumptions that the authors were going to be cherry-picking data based on desired conclusions. How on earth is that NOT an implication of dishonesty?

Ed, in my field (and it’s been a while; I’m not currently doing science – I backed off to a slower-track career in deference to the fact that I place a higher value on time with my family), it is common to exclude large parts of the raw data. As long as one does it based on logical criteria related to the methodology, and not to the desired goals, then there is no problem. And you make an implication of overt dishonesty. “omitting those that would tend to invalidate your conclusions” has little other possible interpretation.

In my own experience, after 25 years as an academic research scientist, the stonewalling by researchers and editors following Steve M.’s entirely reasonable requests for data and methods is nothing short of shameful; perhaps (one hopes, anyway) unprecedented.

The apparent confusions over which data sets were used in a proxy, the loss of proxy data sets, the misplacement of data sets, the ‘I’ll-get-that-for-you-when-I-have-time-but-never-appearing’ data sets, the data truncations left unspecified in published methods, the apparently widespread practice of citation-kiting critical data into separate submissions, the tendentious and circular cherry-picking of data, the use of data-mining computer algorithms, and the seemingly incestuous refereeing of climate proxy papers all imply a field riven with practices amounting to pathological science. Indeed, pseudo-science comes to mind.

Dear Lee
maybe I can understand that you found it a fairly forceful introduction to the site. However, it is worth considering the issues raised.

We have a paper in Nature which relies heavily on a paper which is not even in press, but merely submitted. I know that I have tracked back from prestigious papers, seen a reference to an in press/submitted article, and then been completely unable to locate the “in press” article. It isn’t hard to see that authors are going to be quite happy with the Nature paper, and maybe not too concerned with their minor paper, especially if they get embarrassing referees’ comments – which could even involve the embarrassing prospect of changing the method/results which they have already published…

I appreciate that you had the time and ability to referee not only a paper, but also a paper to which it referred ! The Nature paper was on climate models, and yet it relies on a paper which is about a climate reconstruction. I find it quite plausible that the referees would neither have the time nor the expertise to referee the “in press” article; and I can’t see how Nature is going to referee that article to the standards of JClimate (or whatever) to find out if it is acceptable.

Me personally, I will be delighted if the “submitted” paper is published, all the data is fully archived, and it is an excellent paper. Right now, I cannot know if that will transpire.
yours
per

#33. Lee, I usually try to be forceful without being strident and I apologize if I erred here. If I posted up my correspondence with the authors and with IPCC, you might understand a little better.

In this case, you were offended by my predicting the proxy series. You say that making such a prediction is an imputation of dishonesty. Look, I’m not the one that’s doing the imputing, I’m only doing the predicting. But I’ll bet that I’ve got at least 6 of 12 proxies right. There are a few that I’m not sure of, but van Engeln, Yang, Mongolia, Yamal, Alberta, something from the bristlecones – I’d be shocked if they are not in the CH-blend.

Lee, you acknowledge the very point being made: “As long as one does it based on logical criteria related to the methodology, and not to the desired goals, then there is no problem.” If requested, you surely would be able (and willing) to disclose your selection criteria and be able to justify them as logical criteria related to the methodology, and not the desired goals.

Have a look at the records of practitioners and see if this standard is being met. For what it is worth, I would opine that it is not. It is surely contrary to your standard to select the most “hockeystick” shaped proxy records and use them to demonstrate that temperature as reconstructed has a “hockeystick” shape, without any cogently argued reason for selecting just those proxies? If dendro climate proxies from around 1960 onward do not accurately track your instrumental temperature records (in general, although some records do), would you think it acceptable to ignore all but the compliant records after 1960, citing some unknown and unspecified “anthropogenic effect” as the cause?

There need not be overt dishonesty, but the practitioners could do much to improve their disclosure. Heck, if there is a convincing story on all these matters, why not set it out and explain it. I for one would be more than happy to read and evaluate such to the best of my ability. This science has significant public policy implications, openness and disclosure are vital to an informed discussion. What’s wrong with openness, disclosure, and the archiving of data ?

2. You’re full of **** for saying that Steve shouldn’t try to guess the series. Why the hell are they secret? And WHISKEY TANGO FOXTROT is wrong with trying to guess the answers to puzzles? Oh…and Steve rawks at figuring out what the hell these guys are doing.

” I was referring to a first impression of style and substance. And the entire middle of Steve’s post contains guesswork based on hinted-at (at least) assumptions that the authors were going to be cherry-picking data based on desired conclusions. ”

Just a selected quote.

So Lee
Can I assume that you’ve walked into the middle of a “conversation” that has been going on for more than a year (this blog), taken one statement out of context (for you have admitted to not familiarizing yourself with the rest of this blog), and started to make all sorts of accusations about Steve and this blog in general? How exactly is your accusation any different from Steve’s? I can tell you one way that it is different: Steve has been doing this for years, and as such he has loads of information about these people and their methods. You, on the other hand, as you have stated, have none.

Fact of the matter is that Steve made his thinly veiled sarcastic remarks in regard to cherry-picking, data truncation, etc., not based on this one incident, but on a pattern of behavior by the people involved that goes back over a decade.

To use an analogy (and it’s just an analogy, not an attempt to attribute these behaviors to these people):

Suppose you see one person smoking a joint once, and someone says, “that guy is a bad drug addict.” You then say, “How can you say that? You’ve only seen him smoke one joint.”

An interesting thing has happened. You’ve just accused that person of jumping to a conclusion when YOU have in fact jumped to a conclusion – because you don’t know that that person has watched the other waste his life away on drugs for a decade.

So before you start jumping to conclusions about Steve accusing others of cherry-picking data or onerous data manipulation, maybe you should try to get the whole story first, and see what these “scientists” have done in the past that might lead Steve to make a particular accusation, before you start implying, without having sufficient knowledge, that others are being unduly dishonest.

Actually, Steve’s done quite a bold thing here, by providing a falsifiable hypothesis. If Steve is right in his predictions about the CH-Blend ingredients, it shows he knows the data field perhaps even better than the authors, as well as proving the larger point about cherry-picking.

If he’s wrong, Lambert can publish “McIntyre Screws up” (“again” is not required, as Tim has never laid a glove on Steve).

#40. When you think about it, maybe it’s a good thing that Hegerl et al didn’t report their makeup of their blend since it gives an opportunity to speculate on its composition in advance. It adds a little fun. If I get 2 or fewer right, I would say that I’m under-estimating the Hockey Team. If I get 5 or more right, then I’d say that I’ve got their number – you can throw in some random series for variation without affecting very much so I don’t claim to be able to predict selections with 100% certainty.

My guesses above were done in about 10 minutes. I might want to revise the list slightly to consider quirks of the Tom fellow. For example, in Crowley and Lowery 2000, he used Zhu 1973, a series known to be obsolete. The other thing that worries me a little about my predictions is the post-1960 truncation. With the roster that I’ve posted above, they’d have a HS through 1980 and would probably not truncate in 1960. I’ll think about this some more and do one more iteration.

I think that a possible defense that they might have would be that these are the series that are the best temp proxies because they track instrumental temp best. You can engage on this then, but it will need to be at a higher level of sophistication.

1. Show other proxies that track temp, but were not included
2. Show the divergence problem.
3. Some sort of noise argument that shows that all proxies tend to be randomly noisy in general and average out to zero, and that by controlling for one area you can get that correlation with the instrumental record over a short period with enough samples, but the rest of the record will be a straight line, since that is the natural tendency. (There must be some mathematical test to show this.)

Somebody here already did something like that, because I remember it making some things clear to me and remarking on it. Basically the shaft of the hockey stick had to regress to the mean far away from the forced blade, which meant you were bound to have a gradually descending shaft to link up to the forced rising instrumental record. And the blade itself had to be rather flat because of the noise cancellation you’re talking about. IOW, the major features of the Hockeystick were built into it by construction.
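The screening argument above is easy to demonstrate numerically. Here is a minimal sketch (nothing to do with the actual CH-blend series – every "proxy" below is pure white noise, and the 0.25 correlation threshold is an arbitrary illustrative choice): screen random series by their calibration-period correlation with a rising "instrumental" target, average the survivors, and a hockey stick appears by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_years, cal = 10_000, 600, 100   # last 100 "years" = calibration period

# Pure white-noise pseudo-proxies: by construction they contain no climate signal.
proxies = rng.standard_normal((n_proxies, n_years))

# A rising "instrumental" target over the calibration window only.
target = np.linspace(0.0, 1.0, cal)

# Pearson correlation of each proxy with the target over the calibration window.
t = (target - target.mean()) / target.std()
p = proxies[:, -cal:]
p = (p - p.mean(axis=1, keepdims=True)) / p.std(axis=1, keepdims=True)
cors = (p * t).mean(axis=1)

# "Cherry-pick": keep only the series that happen to correlate with the target.
picked = proxies[cors > 0.25]
recon = picked.mean(axis=0)

# Blade: the screened average tracks the target during the calibration window...
blade_cor = np.corrcoef(recon[-cal:], target)[0, 1]
# ...Shaft: outside that window the independent noise averages toward zero.
shaft_sd = recon[:-cal].std()
```

Of the 10,000 noise series, a few dozen pass the screen by chance; their average correlates strongly with the target in the calibration period (the screening guarantees it), while the pre-calibration shaft has standard deviation of roughly 1/sqrt(n_picked) – a flat line. The blade and flat shaft are artifacts of the selection, not of any signal in the data.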

Lee: Please read some of the threads on this blog. You will simply be amazed at what Steve has uncovered, regarding the deficiencies in paleoclimatology, especially the dendroclimatology portions. You will have a hard time believing all the stonewalling, misrepresentations, and general unscientific behavior that Steve has battled. It’s the best “novel” I’ve ever read, and it’s in real time.

Lee: Also, note the title of this blog. All Steve is trying to do is reproduce some of the studies. He has been trying to get the necessary data and methodology from some of the authors for over a year and in doing so has uncovered many problems with the data, the methods, the peer-review system, the journals, etc. He is not on some crusade against global warming, which he has made clear many times.

I agree that we’ve seen this type of analysis at other times/places on the blog. Not claiming that it’s my idea or that Ross/Steve have not thought this way in past. Just making point that this may be the way to analyze this.

Was annoyed when Ross said (in effect) “I’ve looked at overall reconstructions in past” to defend his use of analysis of weighting within PC1 versus within the overall reconstruction. It’s not about whether we’ve done the right thing in the past, but whether we’re doing right thing at the moment.

Lee,
It’s all in the public record. Climate scientists admitting that “In order to make cherry pie one has to pick some cherries”, and that “dendroclimatology is the only science where cherry-picking is allowed”. John Hunter (and others) have come right out and said that they don’t feel compelled to release data from published studies unless the requester, in their opinion, is competent to understand it and wants to work with them. Mann has admitted that he is “not a statistician”, yet his widely hailed published works are (somewhat novel) statistical analyses of existing data. It’s all here on this site. My advice: do some reading and catch up with the rest of us first.

In this week’s issue of Nature, they report a 5% probability that climate sensitivity is less than 1.5°C and a 95% chance that it’s less than 6.2°C. That’s still pretty high, but a far cry from 9°C or 11°C.