Using our model, we calculate that there is a 36% posterior probability that 1998 was the warmest year over the past thousand. If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years.

Comments

The temperature of a cave in China, as you ought to know, no more proves the existence or otherwise of the MWP and LIA than an unseasonably warm spring day would prove AGW. Both are local phenomena, one in space, the other in time.

It’s not a brand name, it’s a contraction of a longer one – “I Can’t Believe It’s Not A Final Nail”.

It’s all quite Gilligan-esque – this is the one where they almost get off the island! Absolutely for sure – just like last time – the Final Nail is being driven into a fat lady, to make her sing. Oh, and the jig is up. Can’t have a denialist circle-jerk without the jig being up.

MFS; again you don’t read what I write, which is about Hockey-sticks; that’s relevant isn’t it?

jakerman; so, you went to the WFT anomaly chart; where are your calculations for the trend slopes you conclude with at #193? In respect of trend slopes I suspect you are including 2010 as a full year; that is inappropriate, since 2009 is the last complete year, so this trend depiction is truer:

Ahaha – brilliant parody there, highlighting how the pointless cherry pick of specific outlier years – 1998 in particular – was used in the past to dishonestly create warped trends. I especially liked the use of 40 year trends compared to a 12-year one, that’s a really nice touch. Thankfully almost nobody tries this old nonsense on anymore…

This would seem to be an apposite time to make mention of the paper “Phenological data series of cherry tree flowering in Kyoto, Japan, and its application to reconstruction of springtime temperatures since the 9th century” by Aono & Kazui (Int. J. Climatol. 28: 905–914 (2008). [DOI: 10.1002/joc.1594](http://onlinelibrary.wiley.com/doi/10.1002/joc.1594/abstract)). Their figure 6 is especially interesting, although as it is in three panels the implication of the trajectory is a little difficult to discern.

The x-axis is time in years AD, and the y-axis is the mean March temperature as inferred from the commencement of blossom burst, dutifully noted by generations of Japanese celebrating the Cherry Blossom Festival.

It is statistically appropriate to point to this year’s frigidity as evidence that the theory of man-made global warming is suspect.

Or so he wrote on January 22nd this year, in a piece called ‘Actually Weather is Climate.’

So how much more statistically appropriate can it be to take the unprecedented freak weather conditions prevailing across the globe this year, not to mention the record global temperature overall, to confirm the theory of AGW, given that they are completely consistent with it?

And Briggs seems to have been blissfully unaware in that post that the world does not begin and end in the north-eastern US; rather a remarkable oversight for some putative statistical Solomon…

No amount of nit-picking or rotating the Hockey Stick clockwise can actually make the observable phenomena around us go away, and we have to remember that by 2100 the blade’s dominance over the handle will be beyond the power of tweaking to ignore.

jakerman: “and why is my warming slope higher than yours if there was a slowing from the beginning of 1998 to present?”

Because your slope begins in 1975, not 1976, where there was a pronounced step or break in the trend; by starting a year earlier your OLS spreads the step from a cooler beginning, accentuating the slope. From the endpoint you also ignore the downtrend from 1998, the hottest year of the 30-year satellite period. Correct your warming slope to take account of these two dominant factors and your increased warming trend disappears.
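To make the start-point argument concrete either way, here's a rough sketch with made-up anomaly numbers (the underlying rise, the "1976" step and the "1998" spike are all purely illustrative, not actual GISS or UAH data):

```python
import numpy as np

# Made-up annual anomalies: a modest underlying rise, a step around "1976",
# and a hot outlier at "1998" (all purely illustrative, not real GISS/UAH data).
years = np.arange(1975, 2010)
rng = np.random.default_rng(0)
anom = 0.015 * (years - 1975) + 0.1 * (years >= 1976) + rng.normal(0.0, 0.05, years.size)
anom[years == 1998] += 0.3  # strong El Nino spike

def ols_slope(y0, y1):
    """OLS trend in degrees per decade over the inclusive range [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return 10 * np.polyfit(years[m], anom[m], 1)[0]

# Moving the start year across a step change shifts the fitted slope,
# and starting or ending a short trend on an outlier year distorts it further.
print(f"1975-2009: {ols_slope(1975, 2009):+.3f} per decade")
print(f"1976-2009: {ols_slope(1976, 2009):+.3f} per decade")
print(f"1998-2009: {ols_slope(1998, 2009):+.3f} per decade")
```

Shift the start year by one across a step, or run a short trend off an outlier year, and the fitted slope moves; which is exactly why trend windows have to be chosen before looking at the data, not after.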

The denier song never changes. It has four verses: the “Look, an anomaly!!!” verse, the global warming has stopped verse, the hockey stick is broken verse, and the Al Gore verse. These verses are separated by choruses of “World socialist government! Oh nooooo!”

Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.

On the one hand, we conclude unequivocally that the evidence for a “long-handled” hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample. As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

Our main contribution is our efforts to seriously grapple with the uncertainty involved in paleoclimatological reconstructions. Regression of high dimensional time series is always a complex problem with many traps. In our case, the particular challenges include (i) a short sequence of training data, (ii) more predictors than observations, (iii) a very weak signal, and (iv) response and predictor variables which are both strongly autocorrelated. The final point is particularly troublesome: since the data is not easily modeled by a simple autoregressive process, it follows that the number of truly independent observations (i.e., the effective sample size) may be just too small for accurate reconstruction.

Climate scientists have greatly underestimated the uncertainty of proxy-based reconstructions and hence have been overconfident in their models. We have shown that time dependence in the temperature series is sufficiently strong to permit complex sequences of random numbers to forecast out-of-sample reasonably well fairly frequently (see, for example, Figure 9). Furthermore, even proxy-based models with approximately the same amount of reconstructive skill (Figures 11, 12, and 13) produce strikingly dissimilar historical backcasts: some of these look like hockey sticks but most do not (Figure 14).

Natural climate variability is not well understood and is probably quite large. It is not clear that the proxies currently used to predict temperature are even predictive of it at the scale of several decades, let alone over many centuries. Nonetheless, paleoclimatological reconstructions constitute only one source of evidence in the AGW debate.

Our work stands entirely on the shoulders of those environmental scientists who labored untold years to assemble the vast network of natural proxies. Although we assume the reliability of their data for our purposes here, there still remains a considerable number of outstanding questions that can only be answered with a free and open inquiry and a great deal of replication.
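The "effective sample size" point in the quoted conclusion can be made concrete with the standard AR(1) shortcut, n_eff = n(1 - rho)/(1 + rho). A toy sketch (the lag-1 correlations are purely illustrative, not estimates from any real temperature series):

```python
# Approximate effective number of independent observations for an
# AR(1) series: n_eff = n * (1 - rho) / (1 + rho).
def effective_sample_size(n, rho):
    return n * (1 - rho) / (1 + rho)

# ~150 years of annual instrumental data, with varying year-to-year persistence.
n = 150
for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho:.1f}: n_eff ~ {effective_sample_size(n, rho):.0f}")
```

With strong persistence, 150 annual observations behave more like a handful of independent ones, which is why autocorrelation matters so much for calibration.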

For anyone who might be wondering, here’s a rough and ready overlay of McShane’s and Wyner’s graph at the head of this thread, on the [very rough and ready cut’n’paste of the Aono and Kazui cherry blossom figure](http://i36.tinypic.com/2hehf7p.jpg).

They agree quite well after around 1400 AD, but someone’s wagon seems to be wheel-less before that time…

Apologies for the poor quality of the graphic – not having my own laptop to hand, I had only Paint and Word to merge the figures together, and little patience to do it carefully!


>*Because your slope begins in 1975 not 1976 where there was a pronounced step or break in the trend*

So you cut out the warming from 1975 to 1976. And did you expect us not to laugh?

>*by starting a year earlier your OLS spreads the step from a cooler beginning accentuating the slope;*

Again, what you are doing is removing the warming from 1975 to 1976. Why not remove some more and argue that warming is even less?

>*from the endpoint you also ignore the downtrend from 1998, the hottest year of the 30 year satellite period; correcting your warming slope to take account of these 2 dominant factors and your increased warming trend disappears.*

OMG.
HOW ABOUT IF YOU JUST READ THE PAPER instead of ridiculing yourself. Please at least read the Conclusions, if nothing else.

They indeed cite M&M various times (you said they didn’t). They support Wegman’s findings. They conclude THS is heavily lacking in the data. They conclude even fake noise has more ‘predictability’. They criticize the blade from thermometer data (page 3). They criticize almost EVERYTHING that the hockey stick is and was. Even the plot you so much love to cherry-pick out of context DOES NOT SUGGEST MBH98 was correct. Yeah, it is a bit sticky if you tilt it counter-clockwise.

No. It does NOT replicate nor support THS in the manner you would like to believe it does. Please take your AGW-glasses off.

Does any true believer in CAGW dare to read this paper? Judging from this thread, they prefer to read totally misleading and out-of-context quotes rather than find out anything about the paper at first hand.

Wow says MW “never once mentions M&M”. True, apart from on pages 5, 16 and 19 and in the references. It should be obvious even to the most ardent true believers who have read the paper that the purpose of MW was to show that even if M&M were wrong in their criticism, and in particular even if all the proxies were perfect, the Mann method produces nothing of any value.

You have to read between the lines to work out whether they really think M&M were wrong, or that the proxies were any good, so that may be a little more difficult.

“I did.
Why don’t you try it as opposed to reading on WuwT what you’re supposed to think it says.”

No, you did not, or you just don’t understand what you have been reading. You’re making yourself a denier now.

Did I say a word about WUWT? I don’t give a rat’s ass what Watts says, since I can go ahead and read the actual paper from page 1 to page 47. The message is clear unless you have CAGW-glasses on. I don’t need any bloggers’ opinions to find out what’s in there and what isn’t. The paper clearly dismisses THS both in the data and in the analysis; they even “replicate it” (I guess your head must be tilted clockwise to see any “replication”).

@Lotharsson and truesceptic: I’m pretty sure the McShane & Wyner paper is using the word “predict” in the standard way that statisticians do. Any time a model constructs an estimate of one quantity (e.g., global temperature at time t) it is quite common to refer to that estimate as the model prediction for that quantity. That’s still what you call it even if you (the scientist) actually know the true value of that quantity and don’t need to “predict” it in any meaningful sense, or if the event has already happened, and so you’re talking about a “prediction” about the past.

In stats language, it’s perfectly sensible to refer to the “model predictions for global temperature, years 1000-1850”. In fact, except for people who still cling to the old language, the standard name in statistics for “variables you use to construct your model estimates” is a “predictor” [1]. As a stats person I spent most of the paper mentally reminding myself that the “proxies” referred to the predictor variables, and that “temperatures” are the outcome variables. Otherwise the paper makes no sense to me!

In any case, the way that M&W use the word “predict” is really very standard in statistics. It’s so common that (a) I’ve forgotten what the word “predict” means in everyday language and therefore (b) I’m actually worried that I’ve misunderstood your concerns. Was that what you were disagreeing with, or have I missed your point?

[1]: the older term was “independent variables”, but that’s just a terrible name because they’re generally not independent of anything in particular.
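For anyone still puzzled by the statistical sense of “predict”, here's a minimal sketch with made-up numbers: every value of the outcome below is already observed, yet the model's fitted values are still called its "predictions" for them.

```python
import numpy as np

# "Training" data: an outcome y observed alongside a predictor x (made up).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # roughly y = 2x

# Fit y = a + b*x by ordinary least squares.
X = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]

# In the statistical sense these fitted values are "predictions" for y,
# even though every y here is already known to us.
y_hat = a + b * x
print("in-sample predictions:", np.round(y_hat, 2))
```

Replace x with "proxies" and y with "temperature" and you have the paper's usage: the model predicts a variable whose true value the scientist may well already know.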

And more to the point, seeing as you brainboxes must have some other as yet undisclosed mechanism in mind – when do you see the global temperature trending down, because that’s what matters, not your trash-driven beliefs.

And lets look at the “mentions” shall we, since the newboi trolls don’t want to let anyone know what they’re talking about (since they don’t know either):

Page 5: “Chairman of the subcommittee on oversight and investigations formed an ad hoc committee of statisticians to review the findings of M&M.”

Yeah, this isn’t really mentioning the report. It’s merely reporting. I thought this was supposed to be a research paper, not a newspaper?

Page 16: “similar to that of McIntyre and McKitrick”.

Yeah, this isn’t a mention that brings anything more to the table. M&M’s paper is mentioned in the same way as the subcommittee was.

Still nothing about the paper except to say “they didn’t do it this way either”.

And page 19: “it was shown in McIntyre and McKitrick”.

Nothing there either. Nothing about the paper except a blatant assumption that the paper was right.

“It was shown that the Earth is the centre of the universe” was also true.

It was wrong, but there were papers showing that the earth was the centre of the universe. The statement didn’t say they were *right*.

And looking at page 21, Figure 10 has a lot of graphs showing current temps (their graphs end at ~1990, so the two warmest decades on record are not plotted…), all of which look very much like this warming is unprecedented, and very much like a hockey stick (though you could call it a scythe).

Did either of you two bozos read the paper yourself, or did you just get instructions?

“@Lotharsson and truesceptic: I’m pretty sure the McShane & Wyner paper is using the word “predict” in the standard way that statisticians do.”

And this is the problem with statisticians working in a clique like this: they don’t know anything but the stats.

They need to know the physics.

They really should work more with climate scientists because without that expertise, their statistical methods may be correct, but their physical basis is woefully lacking, to an extent that makes the work nearly worthless.

PS how come when Al Gore uses words in a common-sense way, the denialerati jump all over it as being non-science, but when a bunch of math bods may be saying AGW is all wrong, there’s a deafening silence when they do the same thing. If anything, a huge rush of apologetics making excuses.

Well, no. You never hear shills saying where they’re from because that would negate their efforts.

An astroturf attempt saying “We’re from HumungoCorp who sell this, and I as a normal punter LOVE this product!!! It’s *sooo* good!” would not have the same effect as pretending to be an ordinary punter, maybe a “happy customer”.

So, no, not saying a word about WUWT is no proof that you aren’t taking orders from there.

Hm. Now that I re-read the thread, I think the difference between the statistical meaning of the word “predict” and the everyday meaning (which I’ve forgotten) might explain the questions asked by SteveWW and Dean Morrison way back at the start of the thread.

To comment on SteveWW’s concern, in stats terms, it’s okay to talk about predicting the past: your model can make predictions about any variable, regardless of what it corresponds to. Past events still correspond to “things unknown to the model” therefore the model is allowed to try to predict them. And now that I’ve finally twigged to the fact that everyday language is different to stats, this also explains Dean Morrison’s comment too: in a stats sense, if we had instrumental measures going back to year 1000, these would be perfectly “predictive” of the instrumental measures going back to year 1000, so you wouldn’t need to use proxy variables at all. But the point is that “global average temperature at time t” is a variable, and its value is unknown to the model. So the model is allowed to predict it. The authors’ concern is that if the proxies lack “predictive power” with respect to those temperature variables, then these predictions will be rubbish. It would mean that the proxies really can’t be used to construct sensible estimates of (i.e. predictions about) past temperatures. Obviously, I don’t know what the predictive power of the proxies is as regards past temperatures, but that’s the issue that they’re referring to.

In any case, I’ll now shut up about this, but seriously, I’m very excited. I literally had no idea that not everyone thinks about the word “predict” the same way statisticians do. Somehow I’d actually forgotten what the word normally means! My guess is that M&W were similarly blind to the fact that the term isn’t obvious to non-statisticians.

>*And this is the problem with statisticians working in a clique like this: **they don’t know anything but the stats*** (emphasis mine)

I think that’s unfair. The paper is submitted to a stats journal. They’re allowed to use standard statistical terminology in a statistics journal. I really can’t convey to you just how standard the language is, and how unreasonable that sounds to a statistician’s ear. It would be like asking climate scientists to call (say) “climate” something different, in a climate science journal, because a non-climate scientist doesn’t understand the technical meaning of the word. That’s insane. Climate scientists get first naming rights on climate terminology, physicists get first naming rights on physics terms, and statisticians get first naming rights on statistics terms.

That said, it’s obviously the case that if they want the results to be taken up by climate scientists, they need to do a better job of explaining themselves to that audience. And it’s also the case that you generally need a richer understanding of the underlying physical basis of the data to produce a really good analysis, so again I agree that collaboration with people who understand the substantive issues better is a very wise strategy.

What I’m disagreeing with is the implied claim that the use of a standard stats term in a stats journal is evidence of bad behaviour. There are indeed problems with statisticians working in isolation. But this isn’t one of them.

3) All (but one) of the talks are expert, sober discussions of statistics/climate issues and how to make progress.
I especially recommend (statistician) Jim Berger’s talk, p.19, which says:

“How can statisticians become involved?
The Key: Becoming involved in a ‘team environment’ with scientists.

Facilitating infrastructure:
• NCAR, where teams operate
• SAMSI (and NPCDS), where teams can be formed
• National labs (both LANL and LLNL have climate/stat teams)
• Large interdisciplinary grants available today

Barriers:
• Statistics cannot generally fund involvement of statisticians in other disciplines which, in turn, rarely have much money for statistics.
• Shortage of statisticians
• The time needed for a statistician to get deeply involved with another science and to also learn the statistics needed for it.
• Scientists often have a hard time judging what they can do
themselves and when they should seek statistical help.”

The last two items are *important* and akin to what I wrote in CCC, Appendix A.10.4. Some top statisticians have learned enough science to be useful and are highly regarded by climate scientists.

M&W show zero evidence of having bothered, and evidence otherwise in Wyner’s series of posts mentioned in #11. McShane was doing his PhD in a completely unrelated area.

I’ve been lucky to have had exposure to statisticians in the (good to great) range and I do not believe this paper is an exemplar of the statistics profession.

BTW, at NCAR, one talk differed strongly from the rest, and it is strangely relevant to this whole discussion.
As an exercise for the reader wishing to hone their Web forensics skills:

a) Which one is different?

b) Is there any chance an audience of experts (especially, but not limited to Ben Santer) might have been irritated by slides 2-4 in that talk, especially slide 4?

You might want to take a closer look at page 5 before you get too much more condescending–if that is possible. You quote page 6 with “Chairman of the Subcommittee…” If you page back to 5, virtually the entire page is a reference to M&M beginning with…

Can anyone explain to me why, when a group of climate scientists do a statistical analysis, subsequently supported by many others, showing temperature rises, increasing trends, etc, etc, a whole bunch of people leap on it screaming “lies, damned lies and statistics”, or “oh you can’t always trust statistics”, or “oh you can make statistics say anything you want”, then when someone publishes a statistical analysis implying AGW is all wrong, those same people shout “Oh look! The statistics in this paper prove it’s all a scam!”

I understand that many sceptics have a long and distinguished record of wanting to re-invent laws of physics, conjuring up communist conspiracies, annointing classics scholars as world-renowned earth science experts, and many other interesting pursuits, but can anyone explain this bizarre “statistics are unreliable – until they agree with my side” phenomenon to a humble layperson like myself?

Barriers:
• Statistics cannot generally fund involvement of statisticians in other disciplines which, in turn, rarely have much money for statistics.
• Shortage of statisticians
• The time needed for a statistician to get deeply involved with another science and to also learn the statistics needed for it.
• Scientists often have a hard time judging what they can do themselves and when they should seek statistical help.”

a) Which one is different?
b) Is there any chance an audience of experts (especially, but not limited to Ben Santer) might have been irritated by slides 2-4 in that talk, especially slide 4?
c) And where might slides 2-4 have come from? Extra points.

Here’s the text:
In the ice core (Vostok) data that Al Gore illustrates in the(sic) Inconvenient Truth, the temperature time series leads, not lags, the CO2 time series by 800 to 1000 years. What is the causal mechanism? It would seem that temperature increases cause CO2 release, not vice versa. The common answer is that there is an (unspecified) feedback mechanism.

It appears to me that Wegman is a very silly man. I sure hope that he walked away from the lectern sporting a shiny new posterior orifice after the Q&A session!!

Mr. Watts, I first heard of ENSO at its AMS symposium debut in 1985. As the Southern Oscillation is global in scale and decadal in duration, and McS&W lay 4 to 1 odds of ’98 topping the hottest decade of the last millennium, to dismiss it as “a super El Nino anomaly event, not a super global warming event. Weather, not climate.” does such Moranic violence to the English language that the mind is repelled.

It would be wonderful to see the reception afforded your reply were you to repeat it at the next AMS meeting on the subject – it is far too good for yack radio, and only wants the attention of an SNL writer to assure its 15 seconds of fame.

jakerman@225; the current rate of temp increase is declining on a decadal basis [even allowing for the inclusion of the 2010 monthly values, which is a cheat because El Nino, which was strong at the beginning of 2010, has now faded and cooler temps will now push 2010 annual temp back down: it will not set a record for hottest year!];

> I literally had no idea that not everyone thinks about the word “predict” the same way statisticians do.

Thanks Dan

I see where I imagined a different concept.

For some reason I interpreted “predicting temperature from proxies”, especially in the context of climate science, as predicting a set of data for a time period that is **not** present in the underlying proxy data. This is as opposed to using proxy data for the time period in question (and perhaps beyond) and comparing the estimates derived from the proxy data with other temperature measures.

For example suppose you used proxy data for (say) 1850-1900 and 1950-2000 to predict temperatures for 1900-1950. It doesn’t seem obvious that the proxies outside of 1900-1950 contain enough information on their own to let you infer much about temperatures in 1900-1950.

But if you’re using the proxy data for 1850-2000 to “predict” temperatures during 1900-1950 for comparison with other temperature measures in the way that Dan suggests, that seems a whole lot more plausible and useful.
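That second reading can be sketched directly. This toy example (the "temperature" and "proxy" series are entirely synthetic, just for illustration) calibrates the proxy-temperature relation outside 1900-1949 and then "predicts" the held-out block from the proxy values that do exist there:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1850, 2001)

# Synthetic "temperature" and a noisy proxy that tracks it (both made up).
temp = 0.005 * (years - 1850) + 0.1 * np.sin((years - 1850) / 15.0)
proxy = temp + rng.normal(0.0, 0.05, years.size)

# Calibrate on 1850-1899 and 1950-2000; hold out 1900-1949.
train = (years < 1900) | (years >= 1950)
test = ~train

A = np.column_stack([np.ones(train.sum()), proxy[train]])
intercept, slope = np.linalg.lstsq(A, temp[train], rcond=None)[0]

# "Predict" the held-out temperatures from their (existing) proxy values.
pred = intercept + slope * proxy[test]
rmse = np.sqrt(np.mean((pred - temp[test]) ** 2))
print(f"held-out RMSE, 1900-1949: {rmse:.3f}")
```

The held-out comparison is exactly the kind of "predictive skill" check being argued about: if the proxies carry little signal, the held-out predictions come out poor no matter how well the calibration period fits.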

So, let me see if I have the analogy right.
It’s been argued that height is determined by genetics and by the amount of protein in the diet. We have some recent data (over the last 100 years or so) about the per-capita protein intake for various Western countries. We also have collections of bones from various cemeteries and burial sites, from North America, Europe, China, Japan, parts of Africa, etc. Over the past 100 years, skeletal heights (predicted by leg length and sex) have tracked average protein intake quite well within each site, although there are differences between nations. Using that information, one set of researchers used dated femur samples from various locations, came up with mean predicted protein intake for the past 1000 years in Japan, Somalia, Australia, England, Spain, Italy, etc., and combined these into a global average protein index. They reported the data within each region tracked well together, although not always across regions (a famine in Australia might coincide with increased fishing productivity in Japan, for example).
Another set of researchers takes this same regional data, and now tries to see how well each region matches global average protein intake. Over the past 100 years, it tracks relatively well, but there was little agreement between regional values and global values for the previous 900 years. Thus, they conclude that height does not actually track protein intake all that well, although mean height is greater now, and protein intake is also greater now, than is likely at any point in the past 1000 years.

Is this analogy close? If so, which group of researchers is more likely to be correct? If it’s not close, why not? (And I won’t accept that nutritionists and physical anthropologists are arguing for one world government.)

The main perceived problem with the reconstructions is the size of the dataset. As more data become incorporated, uncertainty is reduced. IMO, at some point statistical criticisms of the original paper will become pointless, as more and more data keep emerging that consistently support the original temperature reconstruction…
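The shrinking-uncertainty intuition is just the 1/sqrt(n) behaviour of an average; a toy sketch (the per-proxy noise level is made up):

```python
import math

# Standard error of a mean of n independent records, each with
# standard deviation sigma: se = sigma / sqrt(n).
sigma = 0.5  # illustrative per-proxy noise, in degrees
for n in (10, 40, 160):
    se = sigma / math.sqrt(n)
    print(f"n={n:4d}: se = {se:.3f}")
```

The caveat, and part of what's actually in dispute, is that this assumes the records are independent: shared noise and autocorrelation make the uncertainty shrink far more slowly than 1/sqrt(n).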

“Can anyone explain to me why, when a group of climate scientists do a statistical analysis, subsequently supported by many others, showing temperature rises, increasing trends, etc, etc, a whole bunch of people leap on it screaming “lies, damned lies and statistics”, or “oh you can’t always trust statistics”, or “oh you can make statistics say anything you want”, then when someone publishes a statistical analysis implying AGW is all wrong, those same people shout “Oh look! The statistics in this paper prove it’s all a scam!”

I understand that many sceptics have a long and distinguished record of wanting to re-invent laws of physics, conjuring up communist conspiracies, annointing classics scholars as world-renowned earth science experts, and many other interesting pursuits, but can anyone explain this bizarre “statistics are unreliable – until they agree with my side” phenomenon to a humble layperson like myself?”

Look up “confirmation bias” and I think you will find the answer to that. That is why the term “sceptic” is not always accurate.

>*jakerman@225; the current rate of temp increase is declining on a decadal basis*

You’ve cherry-picked such a short time frame that natural variability is influential compared to the warming trend. You also tried to divert from the question “is the current warming unprecedented?”

The evidence still supports the answer: yes, the current warming is faster and more sustained than anything on record.
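
The point about short time frames is easy to demonstrate with a toy simulation (the trend and noise figures below are assumptions chosen only to be roughly temperature-like, not actual observations): fit a trend to short windows of noisy data and the estimates scatter all over the place, while the long-window fit recovers the underlying trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only: a 0.017 C/yr underlying trend
# plus interannual noise of assumed size 0.1 C.
years = np.arange(1970, 2010)
trend = 0.017
noise_sd = 0.1
temps = trend * (years - years[0]) + rng.normal(0.0, noise_sd, years.size)

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    xc = x - x.mean()
    return (xc * (y - y.mean())).sum() / (xc * xc).sum()

# Slope over the full 40 years vs. slopes over every 10-year window.
full_slope = ols_slope(years, temps)
short_slopes = [ols_slope(years[i:i + 10], temps[i:i + 10])
                for i in range(years.size - 10)]

# The 10-year slopes scatter widely around the true trend; depending
# on the window, some can look much steeper or flatter than reality.
print(f"full 40-yr slope: {full_slope:+.3f} C/yr")
print(f"10-yr slopes range: {min(short_slopes):+.3f} to {max(short_slopes):+.3f}")
```

This is why a decade-scale “decline in the rate of increase” says very little: the window is short enough for noise to dominate the slope estimate.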

>*which is a cheat because El Nino, which was strong at the beginning of 2010*

What a hypocrite: 1998 was a super El Nino, and you want to use that as the new baseline for every anomaly. Unlike 1998, 2010 is a period of solar minimum and not a “super” El Nino.

>*And you are ignoring the steps in temperature which occurred from 1976-78 and 1997-98; these 3 years generated most of the temp increase up to 2000*

jakerman, you’re not reading what I’m writing; the prospect of Joolya for another 3 years must be distracting you. I don’t want to use 1998 as a new baseline; what I’m saying is that around 1976 and 1998 are steps in the temperature record and the step up in temperature around those 2 years is the total temp increase for the period from 1976-1998:

Wow, this is getting a bit embarrassing, I am starting to feel sorry for you. The clue to the fact that you have not read the paper yourself is the word “apparently”. Why “apparently”? They either did or did not refer to M&M, and it is a simple matter to find out for yourself rather than rely on others to read it for you.

Please read the paper yourself to see whether or not they refer to M&M 2005. (HINT: page 19)

I can’t help but notice that not one of the deniers here has actually made any attempt whatsoever to engage with the substantial criticisms of the paper. Ironically enough, none of them has even given any indication that they’ve read them.

“If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years.” — M&W (2010)

Did you read the WHOLE paper? This is simply out of context. They criticised the DATA itself and said even random noise has better predictive power. They also criticised the thermometer data as the blade (page 3).

So according to this data, the statement is true. But the data itself is *LACKING* and the thermometer data as a blade is *MISLEADING* (smoothing etc.). Read what the paper says ABOUT THE DATA ITSELF.

So NO, the paper does NOT verify that “it is now warmer than in the last 1000 years with 80% probability”, unless you take it completely out of context. Nor does it verify Mann’s results (or I guess your head is TILTED if you think it does.)

“And every year since has been warmer still.
Who exactly do you think you’re kidding with your loudly proclaimed, wishful thinking, 5-1 bet on this unreviewed paper killing the extant paleo record?”

According to what? Every year HAS NOT been warmer still; the warming stopped in 1998, which is still the warmest year. Even 2010 will be only 2nd warmest despite a strong El Nino (unless you cite the oh-so-mighty adjusted and “smoothed” GISTEMP artefact, which disagrees with DMI (Arctic) and with both satellite datasets). Even the trend has flatlined ever since. If you do a polynomial fit on HadCRUT, for example, you can also see the clear flatlining, if not the beginning of a decline: http://users.tkk.fi/kse/hadcrut3-quadfit.png

“And more to the point, seeing as you brainboxes must have some other as yet undisclosed mechanism in mind – when do you see the global temperature trending down, because that’s what matters, not your trash-driven beliefs.”

This isn’t about discussing any “incoming cooling”; it’s about the M&W paper, what it says, what it confirms and what it DOES NOT. But at least you had a nice try mixing up the discussion.

You don’t seem to have anything other than quotes mined out to support your belief system.

So tell me why MBH gets 95% confidence-limit agreement between proxy and measured temperature in their analysis, yet this paper for most of its extent gets less than 30% when using “fake” proxy data, and how this “proves” that MBH is wrong.

>*I don’t want to use 1998 as a new baseline; what I’m saying is that around 1976 and 1998 are steps in the temperature record and the step up in temperature around those 2 years is the total temp increase for the period from 1976-1998*

>*And you are ignoring the steps in temperature which occurred from 1976-78 and 1997-98; these 3 years generated most of the temp increase up to 2000*

I am confused. Why should there not be some periods within a timescale where temperature rises more than others? Where does it say a temperature rise should be linear and eliminate any natural variability?

Since you didn’t read the paper I’ll paste the Conclusion. Read it word for word and think again about whether this validates Mann’s or the IPCC’s results:

“Conclusion. Research on multi-proxy temperature reconstructions
of the earth’s temperature is now entering its second decade. While the
literature is large, there has been very little collaboration with university-level,
professional statisticians (Wegman et al., 2006; Wegman, 2006). Our
paper is an effort to apply some modern statistical methods to these problems.
While our results agree with the climate scientists findings in some
respects, our methods of estimating model uncertainty and accuracy are in
sharp disagreement.
On the one hand, we conclude unequivocally that the evidence for a
”long-handled” hockey stick (where the shaft of the hockey stick extends
to the year 1000 AD) is lacking in the data. The fundamental problem is
that there is a limited amount of proxy data which dates back to 1000 AD;
what is available is weakly predictive of global annual temperature. Our
backcasting methods, which track quite closely the methods applied most
recently in Mann (2008) to the same data, are unable to catch the sharp run
up in temperatures recorded in the 1990s, even in-sample. As can be seen
in Figure 15, our estimate of the run up in temperature in the 1990s has
a much smaller slope than the actual temperature series. Furthermore, the
lower frame of Figure 18 clearly reveals that the proxy model is not at all
able to track the high gradient segment. Consequently, the long flat handle
of the hockey stick is best understood to be a feature of regression and less
a reflection of our knowledge of the truth. Nevertheless, the temperatures
of the last few decades have been relatively warm compared to many of the
thousand year temperature curves sampled from the posterior distribution
of our model.
Our main contribution is our efforts to seriously grapple with the uncertainty
involved in paleoclimatological reconstructions. Regression of high
dimensional time series is always a complex problem with many traps. In
our case, the particular challenges include (i) a short sequence of training
data, (ii) more predictors than observations, (iii) a very weak signal, and
(iv) response and predictor variables which are both strongly autocorrelated.
The final point is particularly troublesome: since the data is not easily
modeled by a simple autoregressive process it follows that the number of
truly independent observations (i.e., the effective sample size) may be just
too small for accurate reconstruction.
Climate scientists have greatly underestimated the uncertainty of proxy-based
reconstructions and hence have been overconfident in their models.
We have shown that time dependence in the temperature series is sufficiently
strong to permit complex sequences of random numbers to forecast
out-of-sample reasonably well fairly frequently (see, for example, Figure
9). Furthermore, even proxy based models with approximately the same
amount of reconstructive skill (Figures 11,12, and 13), produce strikingly
dissimilar historical backcasts: some of these look like hockey sticks but
most do not (Figure 14).
Natural climate variability is not well understood and is probably quite
large. It is not clear that the proxies currently used to predict temperature
are even predictive of it at the scale of several decades let alone over many
centuries. Nonetheless, paleoclimatological reconstructions constitute only
one source of evidence in the AGW debate.
Our work stands entirely on the shoulders of those environmental scientists
who labored untold years to assemble the vast network of natural
proxies. Although we assume the reliability of their data for our purposes
here, there still remains a considerable number of outstanding questions
that can only be answered with a free and open inquiry and a great deal of
replication.”
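
The quoted claim — that strong autocorrelation lets even pure noise “forecast out-of-sample reasonably well fairly frequently” — can be illustrated with a small toy sketch. This is not the paper’s actual procedure; the AR(1) persistence, series lengths, and number of trials below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n = 150                      # roughly the length of an instrumental record, years
temp = ar1(n, 0.9, rng)      # stand-in 'temperature': strongly autocorrelated

best = -np.inf
# Fit many pure-noise 'pseudo-proxies' on the first 100 years and
# score each on the held-out last 50 years.
for _ in range(200):
    proxy = ar1(n, 0.9, rng)                 # unrelated noise, same persistence
    a, b = np.polyfit(proxy[:100], temp[:100], 1)
    pred = a * proxy[100:] + b
    ss_res = ((temp[100:] - pred) ** 2).sum()
    ss_tot = ((temp[100:] - temp[100:].mean()) ** 2).sum()
    best = max(best, 1 - ss_res / ss_tot)    # holdout R^2

# Because both series wander slowly, the effective sample size is small,
# and some completely unrelated noise series score well by chance.
print(f"best holdout R^2 from pure noise: {best:.2f}")
```

The moral, which cuts both ways in this thread, is that apparent out-of-sample skill against a strongly autocorrelated target is a weak benchmark on its own.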

MartinM wonders why we have not discussed the “substantial criticisms of the paper”. There have been no substantial criticisms on this thread, since most of the posts have been made by critics who have clearly not read the paper.

Tim Lambert’s joy in superimposing a hockey stick on one of the diagrams is a bit like a credulous person whose faith is strengthened by finding the face of Jesus on a cookie. Anyone who has read the paper knows that it is a total demolition of the Mann08 method, even allowing for several (very far-fetched) assumptions such as stationarity and linearity of the relationship between the proxies and temperature (why?) and good data collection and selection (they must be joking there).

And this is not just a matter of ENSO/PDO accumulating heat, but of such natural factors causing conditions which allow heating to step up rather than reduce, such as interrupted upwelling and cloud cover variation; i.e. Pinker.

Many Denialati have descended here with insistent howls that the “hockey stick is broken”, and that there is no modern warming (and I see that cohenite is still obsessing about his “breaks”, the poor dear – how does his gravity model of atmospheric heating mesh with this, by the way?). Part of the problem seems to be that they will find any excuse to dismiss whatever proxy is used for historic temperature reconstructions, with no serious explanation of why they do so.

To this end I’d hoped that at least one of them might have a pick at the [cherry blossom-burst record](http://scienceblogs.com/deltoid/2010/08/a_new_hockey_stick_mcshane_and.php#comment-2734484), and what it implies for regional spring temperature trends in Japan. Sure, it’s local, and it’s vulnerable to non-global stochasticities (somewhat reduced by Aono et al’s 31-year smoothing), but it’s a damned long record, and blossom burst is after all highly correlated with temperature.

Since I posted my previous graph relating mean March temperatures in Kyoto, as inferred from the commencement of the annual Cherry Blossom Festival, to the McShane and Wyner hockey stick, I have discovered that last June Aono and Saito updated their pre-1400 records. It’s interesting to compare this newer data with both [McShane and Wyner](http://i36.tinypic.com/fu9ahx.jpg) and [Mann et al 2008](http://i34.tinypic.com/z7dhx.jpg), and to this end I digitised the Aono and Saito graph and superimposed it over the respective hockey sticks, using the termini of the instrumental record as reference points to locate respective dates in the Cherry Blossom Festival record.

Of course, as I alluded to in my previous post, there’s no reason to assume that the relationship between the mean March temperature in Kyoto and the annual mean global anomaly, implied by overlaying either Mann et al 2008 or McShane and Wyner on the Aono and Saito 2010 data, actually is linear. There are many other caveats of course, but irrespective of these it seems that a very straightforward, robust, and meticulously documented proxy for temperature agrees quite impressively with Mann et al 2008, and disses McShame and Wyner’s attempt to hide the non-decline…

2) Fred Singer & Frederick Seitz were leaders of a continuing attack on Ben Santer for his IPCC SAR efforts, carried out through WSJ OpEds, etc. Hence, Ben cannot have been overly pleased to see 2 Singer books.

3) Wegman’s abstract of 11/05/07 talk at GMU included:
“Although both paleoclimate reconstruction and climate modeling have many fundamentally statistical/stochastic issues, the convergence of the perspectives of statisticians and climate scientists is not great. This talk is not an anti-anthropogenic global warming talk, but will probably irritate climate scientists anyway. (It did at NCAR, but discussion is good.)”

He attended a different workshop than the rest of those there, and seems proud of irritating people. In a forthcoming tome, I’ve assigned it a Meme# (an extension of John Cook’s list), called:

Meme#f “Faux fight between statisticians and climate scientists.”

The statisticians who attended that NCAR meeting know better, as do many others, including this group of fine people. If people haven’t read DC’s notes on Interface 2010, they should. Wegman & Said put together sessions including Fred Singer and Don Easterbrook (and Jeff Kueter from GMI); Said talked about climategate.

4) I have 2 separate accounts from attendees at NCAR.
One said the reception was frosty, but to be fair, apparently Wegman & Said got lost, as the 2nd day was in a different lab, so they showed up 30 minutes late for Wegman’s own talk. The other observed that he didn’t know climate science and didn’t want to.

It may well be that M&W are carrying on this fine tradition.

5) Finally, I’ve mentioned that slides 2-4 came from somewhere else. If no one finds them soon, I will reveal all, as it is pretty hysterical.

I think some avid klimazwiebel readers will be heavily disappointed that their new favorite “final nail in the coffin” is considered a piece of shoddy work…
Note the frequent references to McShane and Wyner apparently not having read the paper(s) they essentially criticise, and that Eduardo thinks they got their knowledge about the papers from blogs. One wonders which blogs those are…

Whether or not I read the paper is irrelevant. But to answer the question, yes, I did.

What I find humorous about your particular brand of rhetoric is your clear belief that you are right–always. Yet you fail to even quote correctly from the very paper you are trying to ridicule. If you had read it, why would you choose to quote part of it, then falsely reference the page it came from? You either did it intentionally, or you scanned through looking for a part to suit your needs, or you fail to understand the numerical sequence of the pages. Which is it?

And do spare me vague references to brothers I have never heard of or references to other pages in the paper taken out of context. I don’t know who the “Brothers Dim” are, nor do I care. And I am not trying to make this paper out to be any more than it is at face value–a statistical breakdown of Mann’s data using his very own data.

I do find it more than a little interesting the amount of time and energy this paper seems to be taking from the AGW devotees–especially from the likes of the omniscient Mr. Mashey who seems to have spent quite some time on this here and at myriad other blogs. There must be more there than nothing, otherwise, why all the concern? And I really wonder why I see posts now questioning whether this journal is still “scientific.” They choose to publish a paper some don’t agree with and that makes them no longer worthy of journalistic respect? Funny that.

Because Billybabe as the denialists discovered a long time ago, they can knock out in seconds any old sh*t, which subsequently takes actual time to refute effectively. It’s the stock-in trade of globally famous liars like Monckton for example, and one reason that the rest of us are thankful for the efforts of dedicated, interested and knowledgeable volunteers to shed light onto the intellectual corporate darkness you and your ilk would prefer we live in.

So don’t go flattering yourself and getting your gonads all a’flutter that your current heroes M&W are any better than the spew of the lowliest and most ill-educated Wattsian dufus.

Billybabe? Me and my ilk? Nice. Condescending with a side of sweeping generalization.

Kindly point out where I make any reference to this current paper being in any way, shape or form the way, the truth or the light. You can’t. Because I didn’t. Primarily because I don’t think that. What I did ask was why, if it is such a throwaway, there seems to be so much scrambling to prove it wrong rather than, even for the short term, accepting it as merely another point of view? One which was apparently valid enough to be reviewed and will soon be printed in a respected journal. One which may or may not stand the test of time.

Sorry to say friend, but if your opinion is that any and all people who disagree with you are “inconsequential hindrances” then you are the poster child for the close-minded and arrogant scientific method.

Nobody knows everything. You might try being humble because the current course of brow-beating those you feel are beneath you will ultimately fail to get the results you desire.

The irony is, I actually desire the same ultimate results despite your inaccurate, and ignorant, belief that I somehow wish harm and doom to the planet. The difference is I don’t claim to have all the answers, nor do I claim to be morally superior simply because I think I am right.

Wow. Change subject much? You called someone a “newboi troll” and then, later in the same post, a “bozo.” You do this while quoting page 6 of the paper but claiming it was page 5. This makes me wonder who the bozo is.

Now, what do I claim was “no problem?” And where are all the “denial” posts calling people names and berating their intelligence? Virtually every one of your posts has the tone that you are somehow better than the person you are addressing with the possible exception of those you agree with.

You want to call someone a bozo or a troll? Try getting the basic reference points–read page numbers–correct at the very least when trying to point out how stupid someone is.