Does the Endpoint of Santer H2 "Matter"?

Yes.

Perhaps the first thing that I noticed about this article was the endpoint for analysis of 1999 – this seemed very odd. I mentioned that a Santer coauthor wrote to me, saying that the endpoint didn’t matter relative to the Douglass endpoint of 2004. That turns out to be true, but why would anyone in 2008 use data ending in either 1999 or 2004? (This applies to both Douglass and Santer). There’s been lots of criticism over the use of obsolete data in controversial articles – so why was either side of this dispute using obsolete data?

The Santer SI contains a sensitivity study of the H1 hypothesis up to 2006. There’s been some discussion here about whether trends to 1999 could be extended to 2006 for comparison purposes – something that made sense to me, and Santer et al took the same position. They state:

In the second sensitivity test (“SENS2”), we calculated observed trends in T2LT and T2 over the 336-month period from January 1979 to December 2006, which is a third longer than the analysis period in the baseline case. As in SENS1, we set s{bm} = s{bo}. Since most of the model 20CEN experiments end in 1999, we make the necessary assumption that values of bm estimated over 1979 to 1999 are representative of the longer-term bm trends over 1979 to 2006. Examination of the observed data suggests that this assumption is not unreasonable.

They observe that the longer record leads to a sharpening of CIs for observed trends (as we’ve discussed here) but report that this does not affect their H1 results:

Even with longer records, however, no more than 23% of the tests performed lead to rejection of hypothesis H1 at the nominal 5% significance level.

Later in the SI, they discuss several sensitivity tests for the H2 hypothesis, but, for some reason, they do not report on the impact of the SENS2 test on the H2 hypothesis – a rather surprising omission.

It’s completely trivial to run these calculations on up-to-date data. CA readers can reproduce the results of Santer Table III using the most recent UAH data.
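For readers without the original R scripts, here is a minimal Python sketch of the style of calculation involved. It is not Santer et al’s code: the function names are mine, and the formulas are my paraphrase of the published test – an OLS trend with a lag-1 autocorrelation adjustment to the standard error, and a d1 statistic comparing the observed trend to the multi-model mean.

```python
import math

def ols_trend(y):
    """OLS slope, residuals, and sum of squared x-deviations of y
    regressed against the time index 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (y[i] - ybar) for i in range(n))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    resid = [y[i] - (intercept + slope * i) for i in range(n)]
    return slope, resid, sxx

def trend_se_ar1(y):
    """OLS slope and its standard error, the latter adjusted for
    lag-1 autocorrelation of the residuals via an effective sample
    size n_eff = n * (1 - r1) / (1 + r1), the adjustment described
    in the Santer et al line of papers."""
    slope, resid, sxx = ols_trend(y)
    n = len(resid)
    m = sum(resid) / n
    denom = sum((r - m) ** 2 for r in resid)
    r1 = sum((resid[i] - m) * (resid[i + 1] - m)
             for i in range(n - 1)) / denom
    n_eff = n * (1 - r1) / (1 + r1)
    s2 = sum(r * r for r in resid) / (n_eff - 2)
    return slope, math.sqrt(s2 / sxx)

def d1_stat(b_obs, se_obs, model_trends):
    """Santer-style d1: observed trend versus the multi-model mean
    trend, pooling the observed SE with the inter-model standard
    error of the mean."""
    m = len(model_trends)
    bbar = sum(model_trends) / m
    var_bbar = sum((b - bbar) ** 2 for b in model_trends) / (m * (m - 1))
    return (b_obs - bbar) / math.sqrt(se_obs ** 2 + var_bbar)
```

With an observed trend series and a set of model trends in hand, updating the endpoint is just a matter of passing a longer slice of data to `trend_se_ar1`.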

Using current data, the value of the Santer d1 statistic (a t-type test) increases to 2.232 (from the 1.11 reported in their Table III), reversing the article’s conclusion on this point.

These results are obtained not by doing the tests in a different way that I happen to prefer, but using the same methodology as Santer et al on up-to-date data.

You can check results to the end of 2007 (data readily available to Santer et al at the time the article was submitted) as follows; this yields a d1 value of 1.935, which is significant in the relevant t-tests.

f(msu[, "Trpcs"], "UAH_T2LT", end0 = 2007)$info

The value for 2006 was 1.77, which would be significant against a one-sided t-test (and against a two-sided t-test at the 90% level). It seems odd that they went to the trouble of doing the SENS2 sensitivity study on the H1 hypothesis but not on the H2 hypothesis. And if they did run the SENS2 test on the H2 hypothesis, these results would have been important and relevant information.
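As a quick check on where these d1 values fall relative to standard critical values, one can use a large-sample normal approximation (note this is a simplification: Santer et al use effective degrees of freedom, so their actual critical values differ somewhat):

```python
from statistics import NormalDist

z = NormalDist()
one_sided_5 = z.inv_cdf(0.95)    # about 1.645
two_sided_5 = z.inv_cdf(0.975)   # about 1.960

# d1 values quoted in the post for successive endpoints
for endpoint, d1 in [("2006", 1.77), ("2007", 1.935), ("latest", 2.232)]:
    print(endpoint, d1,
          "one-sided 5%:", d1 > one_sided_5,
          "two-sided 5%:", d1 > two_sided_5)
```

Under this approximation, 1.77 and 1.935 clear the one-sided 5% threshold but not the two-sided one, while 2.232 clears both – consistent with the pattern described in the text.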

And when they saw these results, you’d think that Gavin Schmidt, Santer and so on would be curious as to what would happen with 2007 results. RC has not been reluctant to criticize people who have used stale data and you’d think that Schmidt would have taken care not to do the same thing himself. Especially if the use of up-to-date data had a material impact on the results, as it does with the H2 hypothesis in respect to the UAH data.

#3. No, because there are other issues here. For example, and this is something that I’ve noted as we go along, I am not in a position to opine as to whether UAH or RSS is “right”. As long as competent specialists disagree on these matters, there is an imponderable that very much prevents someone saying that the models are “invalidated”. So don’t get all excited about this.

All we’re saying here is that one of the Santer claims doesn’t hold up. Different point.

However, there is perhaps an element of your point underlying this. The t-value has been climbing rapidly because of the discrepancies in recent results.

I have to disagree with you here, Steve – “competent specialists” is the reason – don’t fret, I won’t mention the “F” word.

Let me draw the obvious and belaboured analogy.

Here in the UK the coming financial crash (as seen from 2005) was a dead cert. Pinning the date on the donkey was the trick; US mortgage crash or no, it was coming. We had a consensus of sorts at my place of work that the banking practices in the UK vis-à-vis mortgages, credit, and checks and balances were a busted flush. Only one of my colleagues was an accountant by training; the rest of us were IT geeks.

The “competent specialists” in UK banking were trusted by all to know better – they were wrong. Confidence now is shot, which is why the UK government has had to nationalise banks that, in the banks’ own estimations, were healthy.

…so, minor OTT digression completed, I trust you can easily see the comparison. How is it that a layman can look at the graphs of model predictions vs. temperature observations and instinctively know that the models are wrong? Yet WE are beholden to some golden unwritten standard to trust those that should know.

Let me simplify my point. If the trusted (and I include you here) cannot say whether the squiggles say what we all think they say, then why would you (if in government office) use, or defend the use of, said squiggles to justify policy decisions?

Gavin commented on RealClimate that the model runs were from the IPCC AR4, which had a cut-off date of 2004 (for model runs only, I presume, since lots of other papers made it in even though they weren’t yet published).

Those 2004 model runs were based on observation data which was confirmed only up to 1999. So that is why they cut it off at 1999.

I note that the trend per decade from the UAH lower troposphere data for the tropics is the same number – 0.06C per decade – whether you stop at 1999 or continue the analysis through September 2008.

So the actual observations are still well below the average of the model runs, at close to 0.2C per decade, but the endpoint doesn’t matter much.

To my mind, this points to a problem with a simple least squares regression of the trend: the 2008 tropics temperatures are below the 1979 temperatures, yet a regression line still projects an upward trend.
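The point that a fitted line can slope upward even when the series ends below where it started is easy to illustrate with made-up numbers (purely illustrative, not the UAH data):

```python
def ols_slope(y):
    # OLS slope of y regressed against the index 0..n-1
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sxy / sxx

# Warmth concentrated in the middle of the record: the series ends
# below its starting value, yet the fitted trend is positive.
y = [1.0, 2.0, 3.0, 2.5, 0.9]
print(ols_slope(y))   # positive, even though y[-1] < y[0]
```

The regression weights every point, so warmth in the middle and late-middle of the record can dominate a decline at the very end.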

Part of the problem is that we’ve got translation going on at the policy level, where “not falsified” gets translated into “proven”. I think part of that is due to the use of technical language which contains the same words as plain language, but with different meanings.

As I understand it, “consistent with” means roughly “not falsified” (less than a 95% chance of being wrong) in statistics, while in plain English, “consistent with” is much closer to “proven” (more than a 95% chance of being right).

We continue the saga of the paper of Santer+16 co-authors [S17 in IJC 2008]. You recall from recent TWTW newsletters at http://www.sepp.org that it attacks the findings of Douglass, Christy, Pearson, and Singer [DCPS in IJC 2007] as well as of the NIPCC report “Nature – Not Human Activity – Rules the Climate” http://www.sepp.org/publications/NIPCC_final.pdf

S17 claim that the observed temperature trends in the tropical troposphere agree with those calculated from greenhouse (GH) models. The claim is based on two assertions: that the observations (or, more properly, the analyses of the data) have changed drastically in just the past two years, and that the uncertainties of both observed and modeled trends are found to be much larger.

The first thing that struck me about S17 was their figure 6A, which depicts 7 (yes, seven) curves derived from the same set of radiosonde data, each claiming to show the true dependence of the temperature trend on (pressure) altitude. The curves fall into three “families” that show striking differences – for reasons that I will discuss elsewhere. Here I will concentrate on one feature only: the time interval chosen by S17 is 1979 – 1999. Please remember that 1998 was a year of unusual warmth because of a strong El Niño.

Does the choice of endpoint matter and affect the trend values shown? You betcha. To check up on this matter, I briefly thought of writing to Santer to request the underlying temperature data. But why waste time? So I used a proxy, the MSU-UAH data set for lower troposphere temperatures from satellites, kindly sent to me by John Christy. Here then are the OLS trends calculated for a time interval starting at the beginning of the satellite data set, 1979, and ending in 1993, 1996, 1999, or 2002: -0.010, 0.035, 0.103, 0.121 degC/decade.
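The mechanism behind this endpoint sensitivity can be reproduced with a toy monthly series: a flat record with a single warm excursion around 1997–98 yields a noticeably larger OLS trend when the record ends in 1999 than when it ends earlier. The numbers below are synthetic, not the MSU-UAH data; the point is only that a warm event near the endpoint inflates the fitted trend.

```python
def ols_slope(y):
    # OLS slope of y regressed against the index 0..n-1
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sxy / sxx

# Flat monthly anomalies, Jan 1979 .. Dec 1999, except a +0.5 C
# excursion from Jul 1997 through Jun 1998 (an El Nino-like spike).
months = list(range(1979 * 12, 2000 * 12))
anom = [0.5 if 1997 * 12 + 6 <= m < 1998 * 12 + 6 else 0.0
        for m in months]

def trend_to(end_year):
    # trend in degC/decade for the subseries ending in December of end_year
    sub = anom[: (end_year + 1 - 1979) * 12]
    return ols_slope(sub) * 120

for end in (1993, 1996, 1999):
    print(end, round(trend_to(end), 3))
```

Records ending before the spike show a zero trend; extending the record just past the spike produces a clearly positive one, which is exactly the caveat the CCSP appendix quoted below warns about.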

No need to comment further, except I just cannot resist quoting from page 130 of the CCSP-SAP-1.1 report [Karl et al. 2006]. In an Appendix, Wigley, Santer, and Lanzante explain the mysteries of “statistical issues regarding trends” to the great unwashed in real simple words:
“Estimates of the linear trend are sensitive to points at the start or end of the data set…. For example, if we considered tropospheric data over 1979 through 1998, because of the unusual warmth in 1998 … the calculated trend may be an overestimate of the true underlying trend.”

3 Trackbacks

[…] Let me assure Gavin that Steve McIntyre also numbers among the “some” who downloaded climate data from archives in all sorts of places. With regard to requesting information from Santer: It appears Steve wants to figure out precisely what Santer did and whether certain pesky details affect the results. (Typical pesky detail associated with the choice of end point for the analysis discussed here.) […]

[…] you been following the Douglass vs. Santer bout? Do you remember the blog controversies that asked “Why did Santer stop analysis at December 1999 when Douglass ran analyses through 2004?” Have you been hoping someone would get TLT data we can compare to UAH and RSS? (Yes, I mean you […]