Question of the Week

Judith Curry has posted about the “pause”. The whole thing was spurred by my asking for her “scientific basis” for her claims about temperature trend in the Berkeley data. She didn’t answer the question. Instead she substituted a different question.

The “question of the week” was: what’s your scientific basis for your own claims?

You said “Our data show the pause.” That means the Berkeley data. You didn’t say “maybe.”

You used that claim to accuse Richard Muller of “hiding the decline” — to a reporter from the Daily Mail. You also said “There is no scientific basis for saying that warming hasn’t stopped.” The implication is clear, that if Richard Muller makes a claim about temperature trend you insist he have a scientific basis for it. So when you made a claim about temperature trend in the Berkeley data I asked you for your scientific basis.

17 responses to “Question of the Week”

Are the hazards of post hoc analysis ever discussed when it comes to the weekly attempts to look back at every possible time window in every data set relevant to climatology and identify a statistically “significant” pause/decline/whatever?

Also, look how she turns this one around: “lack of statistical significance does not negate the existence of a pause as defined here.” Surprisingly, she doesn’t link to anyone claiming that there is a statistically significant non-pause over her cherry-picked time periods.

‘If one is seeking to identify an anthropogenic signal, one should choose years at each end point that are neutral in terms of ENSO and also the 9.1 year AMO signal discussed by Muller et al.’

And then:

‘What is of interest on this timescale is whether natural variability (forced and unforced) can dominate the AGW signal on decadal timescales and produce a ‘pause’ or a ‘stop’. This is the issue addressed by Santer et al., searching for the AGW signal amidst the natural variability noise. Santer et al. argue that “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”‘

And then comes this conclusion:

‘So in this context, starting the analysis in 1998 is not unreasonable.’

Am I missing something here? Neutral in terms of ENSO and at least 17 years implies 1998?

Much as I’d prefer not to defend Curry, yes, you’re missing something here. Curry’s saying that 17 years is roughly what you need to identify the anthropogenic signal, and therefore if you want to find a period in which natural variation overwhelms that signal, you need to look at less than 17 years.

In other words, she’s conceding that her ‘pause’ would be completely meaningless, even if it did exist. Pity she didn’t make it more explicit.

Thanks MartinM. That seems like a reasonable interpretation of Curry’s text. Of course, given her definition of pause this implies that she is essentially looking for shorter time periods where short-term variations give a lower trend than the long-term one. That one can find such time periods is obvious and proves absolutely nothing.
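To make that concrete, here’s a quick synthetic sketch (my own illustration, not anyone’s actual analysis): generate a series with a known, constant warming trend plus noise, and compare the full-period trend to the trends from every 10-year window.

```python
# Synthetic illustration (not real temperature data): a fixed warming
# trend of 0.017 C/yr plus noise, roughly like post-1975 surface data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2012)
temps = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

def trend(y0, y1):
    """Least-squares slope (C/yr) over years y0..y1 inclusive."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], temps[m], 1)[0]

full = trend(1975, 2011)
short = [trend(y, y + 9) for y in range(1975, 2003)]  # every 10-yr window
print(f"full trend: {full:.3f} C/yr")
print(f"lowest 10-yr trend: {min(short):.3f} C/yr")
```

Short windows scatter widely around the true trend, so some of them can look flat or even negative although nothing about the underlying warming has changed.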

I don’t know if you saw a comment I left on SKS a few weeks ago, but I came up with a way of visualizing the locations of cherry picks in temperature trends: plotting contours of the least-squares trend against start year and trend length. The results for GISTEMP can be found here. You can immediately see that most periods ending at the present show a warming trend, but that one can cherry-pick a start year of 2000 or 2005 to get a cooling/“stopped” trend in GISTEMP. Many negative 10-year trends exist in the data, but all 20-year trends are strongly positive.

Years with a disproportionate influence on the temperature trend show up as a downward diagonal and/or a horizontal line. Pinatubo, for instance, shows up as a cool diagonal (trends ending on it are cooler) and a warm horizontal (trends starting from it are warmer). This method doesn’t have the statistical rigor of your takedown, but it helps to visualize exactly what the problem is for those playing along at home….
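For anyone playing along, a minimal sketch of that contour idea on synthetic data (GISTEMP itself would have to be downloaded separately; the series below is illustrative only):

```python
# Build a grid of least-squares trends indexed by (start year, window
# length) on a synthetic warming-plus-noise series. Contouring the grid
# (e.g. with matplotlib's contourf) visualizes cherry-pickable windows.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1960, 2012)
temps = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

min_len, max_len = 5, 30
grid = np.full((years.size, max_len - min_len + 1), np.nan)
for i in range(years.size):
    for j, length in enumerate(range(min_len, max_len + 1)):
        if i + length > years.size:
            break
        grid[i, j] = np.polyfit(years[i:i + length],
                                temps[i:i + length], 1)[0]

print("trend range:", np.nanmin(grid), "to", np.nanmax(grid))
```

Each row of the grid corresponds to a start year and each column to a window length; in real data, features like the Pinatubo diagonal described above show up as bands in the contour plot.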

And don’t forget, ‘there’s no basis for saying that warming hasn’t stopped’ opens up a large and deeply profound class of arguments, e.g.: there’s no basis for saying that aliens haven’t invaded and taken over our minds.

I’m with Groucho on this: ‘I’d never belong to any club that would have me as a member.’

I would have thought that temperature records of 22 years would normally be the minimum required to give a reasonable chance of identifying the anthropogenic warming trend, given the roughly 11-year solar cycle. 17 years back from today puts us in a different part of the solar cycle. If I look at the TSI records, the best period to use (i.e. the one with the lowest trend in TSI, and therefore the best chance of distinguishing the AGW trend from natural variability) is the last 28 years. This gives:

Also, if we plot the warming trend that we would expect *just* from CO2 (i.e. if nothing else had any effect on global temperature except the rising level of CO2), then that would be 0.19°C per decade, if I’ve calculated it correctly. I presume that this matches the actual warming trend because the warming effect of other greenhouse-gas increases is counteracted by the cooling effect of increasing aerosols.
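For reference, a back-of-envelope version of that kind of calculation might look like the following. The CO2 levels and the transient sensitivity here are assumed round numbers of my own, not necessarily what the commenter used:

```python
# Rough CO2-only trend estimate: radiative forcing from CO2 is
# approximately dF = 5.35 * ln(C2/C1) W/m^2 (the Myhre et al. 1998 fit),
# and the transient temperature response scales that by TCR / F_2x.
# All inputs below are assumptions for illustration.
import math

c1, c2 = 370.0, 390.0       # ppm, roughly one recent decade (assumed)
tcr = 2.0                   # transient climate response, K per doubling (assumed)
f2x = 5.35 * math.log(2)    # forcing for doubled CO2, ~3.7 W/m^2

forcing = 5.35 * math.log(c2 / c1)   # W/m^2 over the decade
trend = tcr * forcing / f2x          # K per decade
print(f"{trend:.2f} C/decade")
```

With these particular assumptions the CO2-only figure comes out somewhat below 0.19°C per decade; the result scales directly with the assumed transient response and CO2 growth rate.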

One thing the BEST data tell us is that the temperature trend is long-term. Over the past two centuries (I do not know where they got the early 19th-century data), temperatures have risen consistently at ~0.6C/century. There have been several bumps and potholes during that time, but the data are, overall, rather consistent.

[Response: The trend estimate since 1975 together with its uncertainty level contradicts the claim of a consistent rise at 0.6 C/century (0.006 deg.C/yr).]

It looks like the most “controversial” aspect of BEST’s approach is the “scalpel” — slicing series at discontinuities and then treating them as separate “stations.” Intuitively, I would think that this is one of the more important features, since it can address changes that are not documented in the metadata, and it works well with the “least squares” method.

But one of the criticisms is that if there is a trend in the data (and we all know what direction that is), it will find more downward shifts than upwards shifts, biasing the results in the direction of the trend. The paper says that their method is simplistic, and hints that it will be revisited in the future. Specifically it says:
******
The detection of empirical breakpoints is a well-developed field in statistics (Page 1955, Tsay 1991, Hinkley 1971, Davis 2006), though relatively little work has been done to develop the case where spatially correlated data are widely available. As a result, the existing groups have each developed their own approach to empirical change point detection (Menne and Williams 2009; Jones and Moberg 2003; Hansen et al. 1999). In the present paper, we use a simple empirical criterion that is not intended to be a complete study of the issue. Like prior work, the present criterion must be applied prior to any averaging.

In principle, change point detection could be incorporated into an iterative averaging process that uses the immediately preceding average to help determine a set of breakpoints for the next iteration; however, no such work has been done at present. For the present paper, we follow NOAA in considering the neighborhood of each station and identifying the most highly correlated adjacent stations. A local reference series is then constructed by a weighted average of the neighboring stations. This is compared to the station’s records, and a breakpoint is introduced at places where there is an abrupt shift in mean larger than 4 standard deviations.

This empirical technique results in approximately 1 cut for every 12.2 years of record, which is somewhat more than the changepoint occurrence rate of one every 15–20 years reported by Menne et al. 2009. Future work will explore alternative cut criteria, but the present effort is meant merely to incorporate the most obvious change points and show how our averaging technique can incorporate the discontinuity adjustment process in a natural way.
*******
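As a rough illustration of the quoted criterion (a toy version only — the windowing and the exact form of the “4 standard deviations” threshold here are my assumptions, and the BEST procedure is more involved), one might compare window means of the station-minus-reference difference series:

```python
# Toy scalpel: flag points where the mean of (station - reference)
# shifts between adjacent windows by more than nsd standard errors.
# Window size and threshold form are assumptions for illustration.
import numpy as np

def find_breaks(station, reference, window=10, nsd=4.0):
    diff = station - reference
    sd = np.std(diff)
    breaks = []
    for i in range(window, diff.size - window):
        before = diff[i - window:i].mean()
        after = diff[i:i + window].mean()
        if abs(after - before) > nsd * sd / np.sqrt(window):
            breaks.append(i)
    return breaks

# A station that jumps by 1.0 C at index 50 (e.g. a station move):
rng = np.random.default_rng(1)
ref = rng.normal(0.0, 0.05, 100)
sta = ref + np.where(np.arange(100) >= 50, 1.0, 0.0) \
          + rng.normal(0.0, 0.05, 100)
print(find_breaks(sta, ref))  # indices clustered around 50
```

On real data, any residual trend in the difference series interacts with a threshold like this, which is where the criticism about trend-induced bias comes in.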

My question is: wouldn’t it be possible to detrend both the “local reference series” and the individual series (the one being tested for breakpoints) over a reasonable time period, and then apply the breakpoint analysis? Wouldn’t this eliminate the bias in one direction or the other?
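One way that suggestion might look (purely a sketch of the idea, not anything from the BEST paper): remove the least-squares trend from both series before differencing, so that a shared long-term trend cannot masquerade as a shift in mean.

```python
# Sketch of detrending before breakpoint analysis: a trending station
# with no actual discontinuity produces a large apparent shift in the
# raw difference of window means, but essentially none after detrending.
import numpy as np

def detrend(y):
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

t = np.arange(100)
station = 0.02 * t          # pure trend, no station move
reference = np.zeros(100)   # e.g. neighbors that miss the trend

def half_shift(d):
    """Shift in mean between the first and second half of a series."""
    return abs(d[50:].mean() - d[:50].mean())

raw_shift = half_shift(station - reference)
det_shift = half_shift(detrend(station) - detrend(reference))
print(raw_shift, det_shift)  # large vs. essentially zero
```

The trade-off is that detrending would also flatten a genuine gradual divergence between a station and its neighbors (e.g. a growing urban heat island influence), so the choice of the “reasonable time period” matters.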