The Analysis of Global Change Using Hurst Rescaling

S.I.Outcalt : Emeritus Professor of Physical Geography, University of Michigan

Abstract: Three data sets used to document the case for anthropogenic global warming were analyzed using Hurst rescaling. The analysis indicates that a more likely interpretation of the data is that the observed linear trend in global temperatures is an artifact of regime shifts. The dramatic “hockey stick” trace, which began in 1976 accompanied by a major transition in the Pacific Decadal Oscillation, ends at the onset of the 21st Century and might be better termed the modern warming regime. This regime was replaced by a pronounced cooling regime. These observations attenuate the demonic interpretation of the linear trend in the historic global temperature data.

Introduction: Hurst rescaling, or integral inflection analysis, is a simple operation used to detect regime transitions in serial data. Although seldom employed, the technique has been demonstrated to be extremely effective in detecting regime shifts in serial data [Outcalt et al. (1997), Runnalls and Oke (2006)]. The method is named in honor of H. E. Hurst, who used the extremes of the integral of deviations from the record mean of serial data to analyze persistence in time series. The method is based on the assumption that most natural data are composed of regimes ranging in scale from geologic epochs to turbulence. In this world view nature has a strongly fractal structure, with serial regimes covering the entire range of space and time.

Implementation: DPlot software provides a variety of rapid operators for analyzing serial data. A small group of these operators is used in Hurst rescaling analysis: calculation of the integral trace (the cumulative deviations from the record mean), mean-value subtraction, linear trend removal, and normalization. The analysis begins with subtraction of the record mean followed by integration. Inflections in the integral trace signal regime transitions. If several variables are used in the analysis they may be normalized and plotted on the same graph. Another informative integral trace can be produced by removing the linear trend before integration. This operation phase-shifts the initial inflections but signals subsets of the record that might be parsed and analyzed using simple integration after mean subtraction. Even where the data are already expressed as deviations from the record mean, an initial mean subtraction ensures integral closure. Removing the trend from integral traces before normalization ensures that the normalized traces cover the entire range from zero to unity.
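A minimal sketch of these operators in Python/NumPy (not the author's DPlot workflow; the function name and its options are illustrative only):

```python
import numpy as np

def integral_trace(x, detrend=False, normalize=False):
    """Cumulative deviations from the record mean (the Hurst integral trace).

    Inflections in the returned trace signal regime transitions.
    """
    x = np.asarray(x, dtype=float)
    if detrend:
        # optionally remove the linear trend before integration
        t = np.arange(len(x))
        slope, intercept = np.polyfit(t, x, 1)
        x = x - (slope * t + intercept)
    x = x - x.mean()        # mean subtraction ensures integral closure
    trace = np.cumsum(x)    # the integral trace
    if normalize:
        # rescale so the trace spans exactly the range [0, 1]
        trace = (trace - trace.min()) / (trace.max() - trace.min())
    return trace

# A two-regime test signal: low regime followed by a high regime.
signal = [0.0] * 50 + [1.0] * 50
trace = integral_trace(signal)
```

For this step signal the trace descends to its minimum exactly at the regime shift (the 50th sample) and returns to zero at the end of the record, which is the closure property described above.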

The Test Signal: Three data sets (GHCN, HadCRUT3 and NASA) were used as test signals. These signals are remarkably similar and are displayed as Figure 1.

Figure 1. The three records used as test signals.

Integration: Integral traces were calculated from the test signals. Two integrations were performed: the first after a second mean subtraction to assure integral closure, and the second after trend removal and mean subtraction. These traces are displayed as Figure 2.

Figure 2. The initial integration (open symbols) displayed strong inflections near the major global climate transitions in 1936 and 1976, which were accompanied by major ocean circulation transitions. The integrals of departures from the linear trend (filled symbols) indicate a major transition in the last decade of the 20th Century.

Figure 2 suggests that the period from 1976 until the end of the record should be parsed for detailed analysis. The traces of the 1976-2008 segment of the record were integrated and normalized after mean subtraction. The traces resulting from these operations are displayed as Figure 3.

Figure 3. These traces indicate that the modern warming regime ended in 1997.

Figure 3 indicates that a major transition occurred at the onset of the 21st Century. The global thermal response to this transition is somewhat muted: an inspection of the data displayed in Figure 1 shows only slight downturns near the end of the record in 2008. However, ground temperature data collected by Janke (2011) and analyzed by the author indicate a major shift from a warming to a cooling regime in the early years of the 21st Century. These ground temperature data are based on mean annual temperatures calculated from probes at 1 m intervals in three 6 m boreholes along Trail Ridge Road in Rocky Mountain National Park, Colorado. The annual mean temperatures were calculated from hourly observations and are therefore extremely robust. The data were collected in mountain tundra terrain above treeline along an east-west ridge. The data from these boreholes are displayed as Figure 4.

Figure 4. Mean annual temperature profiles from Trail Ridge. The temperature inflection in the BH2 profile is an artifact of the 1976 onset of modern warming. The Terzaghi equation makes it possible to estimate the dates of the overlying inflections. The upper inflections in all three boreholes indicate a dramatic transition from a warming to a cooling regime in the early years of the 21st Century.

Figure 4 indicates a dramatic shift in the climate at Trail Ridge. Linear extrapolation of the BH2 profile below 4 m to the surface yields an extreme minimal estimate of a 2 °C surface temperature drop. As disturbance profiles are parabolic [Terzaghi (1970)], the actual drop in surface temperature over the first decade of the 21st Century is probably more than double this conservative estimate, in the realm of 4-6 °C.
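The extrapolation step can be illustrated with hypothetical numbers (these are NOT the actual Trail Ridge measurements, just a made-up profile with a similar shape): fit a line to the undisturbed section below 4 m, project it upward, and read the shortfall of the shallowest probe below that line as the minimal estimate of the recent surface cooling.

```python
import numpy as np

# Hypothetical mean annual temperature profile (deg C) at 1 m intervals;
# illustrative values only, not the actual BH2 data.
depth = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # depth in metres
temp  = np.array([-4.8, -3.4, -2.6, -2.0, -1.8, -1.6])

# Linear fit to the deep, undisturbed section (>= 4 m)
deep = depth >= 4.0
slope, intercept = np.polyfit(depth[deep], temp[deep], 1)

# Shortfall of the shallowest probe below the extrapolated line:
# an extreme minimal estimate of the recent surface temperature drop.
shortfall = (intercept + slope * depth[0]) - temp[0]
```

With these invented numbers the shortfall works out to about 2 °C; the parabolic shape of a real disturbance profile is why the article argues the true surface drop would be larger than this linear minimum.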

Conclusion: This short analysis indicates that an alternate model of climate change, based on serial regime transitions rather than anthropogenic global warming, is consistent with the results of the Hurst rescaling analysis.

I believe there was a Canadian borehole study of a similar style that showed a 6 Deg C rise in the Arctic over the past 150 years. If ‘things go back’ it might mean a return to the earlier temperatures. Is it possible that during a cooling event, the Arctic might continue to warm as a follow-on from the ending of the ice age? If so the overall change will be different from the recent past (10k years).

The mainstream media won’t touch this. It does not fit their agenda. The founders thought they established a system where a free press would prevent government from asserting its will over the populace. Unfortunately, the founders never envisioned a scenario where the press would be sympathetic to a totalitarian cause.

3 July: Guardian: Leo Hickman: Is it now possible to blame extreme weather on global warming? Wildfires, heatwaves and storms witnessed in the US are ‘what global warming looks like’, say climate scientists
VIDEO: ‘The odds are changing’: Kevin Trenberth, a climate scientist at the US National Center for Atmospheric Research, discusses the relationship between weather extremes and global warming on PBS Newshour.

Hickman: I put this question to a number of climate scientists…

Kerry Emanuel, professor of atmospheric science at the Massachusetts Institute of Technology…

Dr Peter Stott, head of climate monitoring and attribution, at the Met Office Hadley Centre…

Professor Michael Mann, director of the Earth System Science Center at the Penn State Department of Meteorology…

Dr Clare Goodess, senior researcher at the University of East Anglia’s Climatic Research Unit…

Dr Doug Smith, who leads decadal climate prediction research and development at the Met Office Hadley Centre…

Michael Oppenheimer, professor of geosciences and international affairs at Princeton University’s Woodrow Wilson School and Department of Geosciences…

Harold Brooks, head of the mesoscale applications group at Noaa’s National Severe Storms Laboratory…

no prizes for guessing what the abovementioned “CAGW scientists” had to say.

back in the real world:

3 July: UK Daily Mail: Graham Smith: Washout summer could lead to rickets epidemic in children not exposed to regular sunlight needed to produce vitamin D
Dr Nicola Balch, an associate specialist in child health at the British Medical Association: ‘People need just 20 to 30 minutes of sun three or four times a week to ensure they get enough vitamin D, but obviously with our weather it can be impossible to get this.’…
The miserable weather has sparked calls from doctors for vitamin D to be added to foods and supplements rolled out nationally…

3 July: Fox News: Douglas Main: What’s behind the record heat?
Climate change?
The early heat waves of summer — following higher temperatures in spring and winter — could also be part of a pattern of climate change.
“It’s consistent with what we’d expect in a warming climate, but it’s hard to quantify any effect climate change might have on an individual event like this heat wave,” Crouch said.
While only one heat wave cannot by itself be linked to climate change, a significant increase in these types of events over time could be a hallmark of a warming planet. “An increasing frequency of heat waves —that’s one aspect of climate change you can point to,” Carbin said.
Over the past few years, daily record high temperatures have been outpacing daily record lows by 2-to-1 on average, according to the website Climate Central. A 2009 study found that if the climate were not warming, that ratio would be expected to be even. So far this year, there have been 40,113 high temperature records set or tied, compared with just 5,835 cold records, a ratio of about 7-to-1.
“This could be a harbinger of things to come,” Weber said

Some questions though.
Besides the fact that we’ve trashed the continuity (and virginity) of the databases you ran the Hurst exponent series on, I am curious about the legitimacy of running this series on a generic (also frequently discontinuous and confusing) aggregate of methodologically different world temp measurements. Wouldn’t it be better to use this exponent series on some validated long-term temp records from specific locations? There’s no need to wade into the alligator-choked swamp to contest the number of unknown beasts in the water, especially when the keepers of those databases never seem to know exactly what goes into any particular aggregate.

Could you run this on the satellite measurements? I know you can’t get a very long time series from the satellite data, but it should be more trustworthy.

There’s a whole branch of the solar-terrestrial-climate literature that went off the rails – raising decades of controversy – simply because B was either ignored or hidden. Whether ignorance or deception, bright forces are up against the ugly, dark side of human nature.

Let’s see: first, we have data from a personal communication, with the methods of data collection poorly described. Second, we have a totally impenetrable Figure 4, with (a) no clear explanation of which variables are calculated and which are measured, and (b) no clear explanation of which plotted variables are responses and which are antecedents. Third, the diffusion equation contains no terms for seasonal variation in temperature (with freezing in fall and winter, thawing in spring and summer) or annual variation in rainfall, insolation, temperature or snow cover. Lastly, in the Hurst analyses and the core analyses, there are no comparisons of the calculated curves to the expectancies under any model of random variation plus linear change (or any other model of change); that is, there is no hint of what is usually called “statistical significance”.

The references are a little sketchy. And the post itself is pretty barebones. I am not the kind of person that has time to go look all of this stuff up. Given that our economy is based on specialization, I can only assume that Mosher will hold this against me while everyone else understandingly forgives. Still, a little more explanation is needed or this is just toilet paper floating in the septic tank of the internet. Sorry if that seems harsh, but it is totally forgettable.

Matthew R Marler (July 3, 2012 at 8:30 pm):
“[…] there is no hint of what is usually called “statistical significance”.

The model assumptions don’t hold, so why waste time generating meaningless p-values? Climate research is in the exploratory phase; it’s nowhere near the level of knowledge necessary to do meaningful inference. Too many turn a blind eye to this reality. I do hope you (& many others) will reconsider from the perspective of deeper fundamentals.

It’s a casual blog post written by a volunteer, not a formal document written by a well-paid employee with a guaranteed-secure pension. The distinction is day vs. night (real, animated grassroots vs. stuffy, yawn-inducing formality).

All a blog post has to do to succeed is stimulate the audience.

Whether applied well or not by the author of this particular article, the post draws attention to a very useful exploratory tool (that can also be used as a meaningful inferential tool in other contexts where inference model assumptions are tenable).

My concern is that we’re already severely short on climate blog articles about data exploration. Sometimes we go for weeks without anything interesting. If the bar is set artificially high for data exploration articles, we just get watered down in more (waste-of-time IMO) philosophy & politics.

Such a technique assumes the data plots being analyzed are legitimate. After all the howling about how the major composite temperature records (other than satellite) are fatally flawed at best and fraudulent at worst, isn’t it colossally mendacious for readers to claim the subject analysis of those records means anything at all?

3 July: Fox News: Douglas Main: What’s behind the record heat?
Climate change?
The early heat waves of summer — following higher temperatures in spring and winter — could also be part of a pattern of climate change.
“It’s consistent with what we’d expect in a warming climate,…………
===================================================
Sure, it’s a ridiculous load of tripe. Every year since time began, some points on the earth break records. Every year, without exception. This is because there was never a time that some place didn’t have extreme weather. So, what the lunatics have been doing, is chasing an area with extreme weather and checking the records. Sure enough, some of them get broken. Recall, they said the same exact thing about the winter we had in Kansas a couple of years ago…. except they were cold records…… which, they said they expected. Well, so do I. It doesn’t have anything to do with a warmer world. In fact, the global temps this year have been well below the temps of 2010.

“I am curious about the legitimacy of running this series on a generic (also frequently discontinuous and confusing) aggregate of methodically different world temp measurements. ‘

long ago on CA a couple of us played around with Hurst ( Im a fan) and these series. I think either stockwell or D hughes and I discussed it. mmm cant recall now. I was bothered by the fact that the index is a amalgamation of air temps and SST.. hmm two entirely different beasts.
The point being, it is one thing to compare the temperature index to itself to look at changes.
its quite another thing to analyze it as if it were physically meaningful.

OHC is physically meaningful.. the combination of air temps and SST.. hmm. not so sure.

That said, its always fun to play with methods.

In simple terms the temperature index is really beside the point when it comes to AGW.
we knew long ago, long before the index ever went up that GHGs will warm the planet.
And if the temperature index goes down, we will still know that GHGs will warm the planet.

2012 summer temperatures here in northwestern USA have been below normal while temperatures in states further south and eastward have been above normal, but only the hot temperatures and storms and forest fires and their possible connection to climate change are what is reported in the MSM. In the northwest, some people have replanted their gardens 3 times because the seeds rotted in the cold soggy soil.

Very cool paper and thanks for posting this Anthony. This is above Tamino’s head. Hurst makes perfect sense and these methods are well defended. This is better/more accurate than just fitting a linear trend.

Sins of omission as usual, by Mosher, while preaching his favourite Gospel. Let me correct it

” we knew long ago, long before the index ever went up that GHGs will warm the planet.
And if the temperature index goes down, we will still know that GHGs will warm the planet, PROVIDED ALL OTHER THINGS REMAIN UNCHANGED, WHICH IS HIGHLY IMPOSSIBLE IN A CHAOTIC SYSTEM ”

Impossible! We were told there was runaway global warming by the United Nations, for Chrissake! AND James Hansen! AND Michael Mann! We’re STILL being told that by the UN, AND Jim, AND Mike. And when have the UN and these other guys EVER told us a lie? Impossible!

And besides, I only predicted imminent global cooling in an article written in 2002! So how could it have started five years earlier? Show me the time machine? Impossible!

This may be an interesting way of bringing out what is obvious to an objective inspection of temperature and sea level data, and was stated by Phil Jones: “there has been no statistically significant warming since 1995.” However, I think you need to explain the processing a bit better and comment on why the trace shoots upwards in the period when the warming stops.

Also, is this a cumulative integral (CDF) or a sliding window? It’s a bit unclear what you are actually plotting here, though I think once it is clear it would make the point nicely.

We should welcome this kind of analysis, which is sorely lacking in mainstream climate science, because it allows us to identify short to medium term climate drivers and assess their size/impact. In this case I think snow cover is behind the borehole cooling, but would like to see data from boreholes outside snow cover areas.

Even NASA/GiSS admit that the CO2 forcing is not much more than 25% of the total change in forcings since 1850, and given the large uncertainties could well be less.

Why does this analysis end in 2008? Colour me skeptical, but with data available to 2011, my instinctive first question is – would the results of this analysis change if it were extended to use all currently available data? And why on earth was that data not used?

Also: no links provided to data used, no detailed description of the method used, no assessment of statistical significance, no description of the interpretation of these charts.

The Southern Hemisphere’s response is affected by the Circumpolar Current’s temperature wave, which interpolates within the GS cycle, in addition to the inertia of the larger oceanic mass damping the natural oscillations. Any serious analysis should consider allowing a degree of disengagement between the hemispheres.

Very interesting. That’s a splendid inflection in Fig 3 – very hard to deny that something happened. I’m also intrigued by the much deeper and more sharply-defined 1930 dip in the HadCRUT data compared with the other two in Fig 2. Must be the different techniques used in British and US data massage parlours – ours will give you inflections that’ll be the envy of all your friends!

vukcevic, that’s very interesting. Do you show anywhere what this “geo-solar” cycle is composed of ? There certainly seems to be a strong relation.

I would caution on the use of moving averages (which I assume is what your “3yma” means): running means pull peaks to one side or the other depending on the surrounding data (see your NH 1902). In worst-case situations they can invert a peak (see your AMO 1958). None of this is helpful when you are looking for correlation.

They also let a lot of spikiness through that one is usually hoping to remove.

In this case I would suggest at least using a 1-2-1 binomial weighting for your three year filter.
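For what it’s worth, the difference between a flat 3-point running mean and the suggested 1-2-1 binomial filter is a one-liner; a sketch below, with the endpoints simply left unsmoothed:

```python
import numpy as np

def smooth_3pt_flat(x):
    """Plain 3-point running mean (the '3yma'): equal weights 1/3, 1/3, 1/3."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    out[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return out

def smooth_3pt_binomial(x):
    """3-point binomial (1-2-1) filter: weights 1/4, 1/2, 1/4.

    Its frequency response is non-negative everywhere, so unlike the
    flat mean it cannot invert a peak at high frequencies.
    """
    x = np.asarray(x, dtype=float)
    out = x.copy()
    out[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return out

spike = [0.0, 0.0, 4.0, 0.0, 0.0]
```

On the spike, the binomial filter gives a centred, symmetric 0-1-2-1-0 response; on a year-to-year alternating signal, the flat mean flips the sign of each value (the peak-inversion problem mentioned above) while the binomial filter damps it to zero.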

It also looks like you are not removing enough “trend”. Since the fitted slope is arbitrarily dependent on where the circa-65-year cycle falls in the window of data available, you would probably be no less justified in removing whatever slope best fits the cyclic component, or in centring the detrending on 1940. Even if the non-cyclic component can crudely be taken as linear, it would be pure chance if detrending over the essentially arbitrary period of the data found the correct trend.

I think this kind of cycle nature is actually what is shown by the analysis in this article rather than the “regime shift” the author calls it.

I look forward to seeing what you’re writing up. This looks a lot more credible than a lot of stuff I’ve seen.

>>
That basic issue has been a focus of climate research for 20 years, ever since the National Academy of Sciences’ famous “Charney report” (see references below) in 1979 estimated that the world would warm between 1.5° and 4.5°C if the amount of carbon dioxide in the air were to double.
>>

So despite all the shouting and MASSIVE gravy train of expenditure we have not advanced one jot on the fundamental question since 1979 !!!

D. Koutsoyiannis has published much about Kolmogorov-Hurst phenomena, so the technique is not as rare as the OP implies.
The problems I have with this post are:
– it is not exactly described what is being done (a few equations would go a long way); it is really not clear to me what the figures show.

– all H or K-H analyses apply to random autocorrelated data. Indeed, the fundamental assumption is that the studied variable is random. It is neither clear nor accepted that the system’s variables (temperatures, pressures, velocities, densities, cloudiness, etc.) are random.

– Using a “global temperature”, which is produced by spatially averaging over some grid, by definition destroys all spatial correlations. As “climate shifts” are supposed to be the result of interaction between spatial structures (see Tsonis on oceanic oscillations, for example), it is highly doubtful that a statistical analysis of such a composite variable, in which all spatial correlations are destroyed, would show anything relevant to the dynamics of “climate shifts”.

– Then, at the opposite extreme of spatial structure, a single point is chosen. While the global parameters destroy spatial correlations, using local parameters supposes that there are none. So considering a time series at a single spatial point is hardly relevant to the dynamics either.

What remains is that while both spatial approaches are invalid considered independently, they show many similarities. This is puzzling.

Borehole temperatures depend not just on solar heating but mainly on geothermal heating, which averages 30 W/sq.m over continental crust but can be as high as 100 W/sq.m, and even higher near volcanoes, but let’s ignore that. Such variable input makes this data unreliable, I would have thought.

P. Solar says:
July 4, 2012 at 1:57 am
…..
Hi Solar
Thanks for the notes. I am well aware of moving-average shortcomings (yes, it is a 3-year MA); it is an aid for easier visual inspection, but annual values are also shown.
AMO is taken from http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.data where the trend is already removed.
The NH temps appear to have an upward trend; I plotted a trend line and came up with y = 7E-05x - 0.075.
The article is nearly finished. Based on half a dozen well-known data files, it shows the mechanism at work but does not go into the theory of the energy transfer; that will come later. A word of caution: the article deals only with short- to medium-term oscillations; the longer term and the upward trend are not considered.
When finished it will be available online.
I’ve put a similar post on RC (Unforced Variations thread), but since I am not on Gavin’s list of ‘favorites’ he may demote it to the ‘bore hole’, where a lot of good stuff is to be found.

” we knew long ago, long before the index ever went up that GHGs will warm the planet.
And if the temperature index goes down, we will still know that GHGs will warm the planet”
==========
Here is a simple proof that says otherwise:
Radiation in = radiation out at TOA. An atmosphere with GHG radiates from both the surface and the atmosphere, while an atmosphere without GHG radiates only from the surface. For any value of atmospheric radiation greater than zero, the surface radiation must be reduced by an equal amount to maintain the radiative balance. Thus, surface radiation must be lower with GHG than without. Thus, the surface temperature must be lower with GHG than without.

I agree with John Marshall above….geo-thermal energy is assumed constant AND insignificant in the one dimensional Carbon warming models. The ‘radiative budget’ is an intentional deception and ‘energy balance’ is the real determining factor….in a chaotic system that never achieves balance. What happens in the atmosphere is the final, visible end reaction to a long series of unseen primary forces.

Well, I’d always try to get my data as unprocessed as possible before spending too much time on it.

“The NH temps appear to have an upward trend, I’ve just plotted trend line and came with y = 7E-05x – 0.075.”

Yes, that’s small anyway, but if you are taking the “trend” over the full data you show in that plot, you are biased by starting and ending at different points in the cycle. This is not the underlying trend; it is the underlying trend plus part of the trend of an incomplete number of cycles.

I don’t think the concept of “trend” has much validity in this context so I would suggest you are free to remove whatever linear variation makes the data fit best, without any loss of generality.

There may be some linear or quadratic or century scale variations as well. You may wish to grossly approximate whichever it is by a linear relation. It’s a fair first approximation that may help isolate the short to medium cycles you are looking at but it is a bit arbitrary, which is why you don’t lose anything by doing what works best.

Even if the AMO data claims to be “detrended”, this is no more rigorous than an arbitrary linear adjustment, so there is no reason why you should not add your own linear adjustment to what has already been done. Since the linear model fitted to do the detrending has no physical meaning, you are not losing any generality by adding your own.

I can see by eye from your plot that both NH and AMO would fit the GS signal better with a bit more linear adjustment.
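The windowing point is easy to demonstrate: fit a straight line to a pure, trend-free oscillation over two equal-length windows that start at different phases, and the recovered “trend” flips sign. A rough sketch on a synthetic sine (nothing fitted to real data):

```python
import numpy as np

t = np.linspace(0.0, 3.0, 3001)     # time in cycle units
x = np.sin(2.0 * np.pi * t)         # pure oscillation, zero underlying trend

def window_slope(t0, t1):
    """OLS slope fitted over the sub-window [t0, t1]."""
    m = (t >= t0) & (t <= t1)
    return np.polyfit(t[m], x[m], 1)[0]

s1 = window_slope(0.0, 1.25)    # 1.25 cycles starting at phase zero
s2 = window_slope(0.25, 1.5)    # same length, shifted a quarter cycle
```

Despite the data containing no trend at all, the two windows yield slopes of roughly -0.46 and +0.46 per cycle unit. That is the sense in which a fitted slope over an arbitrary window of a circa-65-year oscillation is itself arbitrary.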

Tom in Indy says:
July 3, 2012 at 6:55 pm (Edit)
The mainstream media won’t touch this. It does not fit their agenda. The founders thought they established a system where a free press would prevent government from asserting its will over the populace. Unfortunately, the founders never envisioned a scenario where the press would be sympathetic to a totalitarian cause.

——————————————————————————————–

That is so true. The Frankfurt School has totally infected the media, organized labor and the social democrats everywhere. We are all in great peril.

long ago on CA a couple of us played around with Hurst ( Im a fan) and these series. I think either stockwell or D hughes and I discussed it. mmm cant recall now. I was bothered by the fact that the index is a amalgamation of air temps and SST.. hmm two entirely different beasts.
The point being, it is one thing to compare the temperature index to itself to look at changes.
its quite another thing to analyze it as if it were physically meaningful…”

Glad you weighed in Steve. I was wondering if you’d tried this math before. Do you know if Lucia has also explored using Hurst rescaling?

“…That said, its always fun to play with methods…”

Agreed. Exploring math challenges always swallows more time, as the intellectual puzzle unfolds, than one ever intended. I’ve never played with this series before; it’s like opening a hidden puzzle door. I’ll have to add it to my list of math to play with. Right now, I’m playing with lens formulas/physics: that is, refractive/reflective light using multiple elements and coatings comprising telescope optics.

Besides worrying about man’s GHG bank account, any insights on the use of Hurst rescaling on temperature data?

An interesting topic, which seems to be causing a lot of comment. I find it more than just interesting because I’ve been using somewhat related methods for about 18 years, I’d guess, specifically to examine climate time series for hints of discontinuities. My general conclusion is that climate data seem to be, to a large extent, periods of very little enduring change punctuated by short periods of very rapid change. These changes occur in both senses but over long periods have tended to be mainly increases.

The notion that climate data have any simple relationship to the almost ubiquitous linear model when looked at over periods of many years is quite preposterous. Simple plotting of the cusum (relative to the period mean) over a hundred or so years, either as annual means or as “deseasonalised” values, instantly shows the folly of placing any trust in the analytical value of a linear fit (or “de-trending”) for such a period of time. Cusum plots instantly reveal the general behaviour of a series and suggest periods for special study, either because they are times of sudden change or because they identify periods when change was effectively absent. Such periods are of course linear.

If the linear fits that appear regularly in the literature (“scientific” and the blogosphere) also provided confidence intervals at some useful probability level, those who believe in them would soon realise how little useful information they contain. When I compute a “trend”, linear or quadratic, I always display the confidence intervals for a mean value and for a single future observation at a given value of the x variable; these are pairs of hyperbolae for linear models, or more complex forms for higher-order models. (The software I use cannot compute approximate confidence intervals for non-linear models.)

Note that there is an inherent assumption of a very approximately normal distribution for the underlying data; clearly not correct, but I think adequate for the approximate nature of the science and observations.
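For readers who want to reproduce the hyperbolic confidence band described above, a bare-bones version for the mean response of a linear fit might look like the sketch below. To keep it self-contained, the Student-t critical value is left as a caller-supplied approximation (`tcrit` defaulting to 2.0, roughly the 95% value for moderately long records) rather than pulled from a distribution table.

```python
import numpy as np

def linear_fit_with_band(t, y, tcrit=2.0):
    """OLS line plus the half-width of the confidence band for the mean
    response at each t.

    The band is a pair of hyperbolae: narrowest at the centroid of the
    data and widening toward either end of the record.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)
    slope, intercept = np.polyfit(t, y, 1)
    yhat = intercept + slope * t
    resid = y - yhat
    s2 = resid @ resid / (n - 2)                  # residual variance
    sxx = ((t - t.mean()) ** 2).sum()
    # half-width of the CI for the mean response at each t
    half = tcrit * np.sqrt(s2 * (1.0 / n + (t - t.mean()) ** 2 / sxx))
    return yhat, half
```

Plotting `yhat - half` and `yhat + half` alongside the fit shows at a glance how little a century-long linear “trend” constrains the endpoints of the record.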

This makes a lot of sense. Since 9 of the 10 warmest years in the global temperature record have occurred since 1997, this means that the earth is now cooling. If the next El Nino year brings a new global high temperature record, that must mean that the earth is quickly cooling. In my region, we have had 1.2 inches of rain since May 1. Perhaps the corn is dying from too much water!

James Sexton says:
July 3, 2012 at 9:35 pm
In fact, the global temps this year have been well below the temps of 2010.

Very true. For more detail: 2012 in Perspective so far on Five Data Sets

2012 started off rather cold but has warmed up since then. So the present rank may not be the most meaningful number. Therefore I will also give the ranking by assuming the latest month’s anomaly will continue for the rest of the year.

With the UAH anomaly for May at 0.289, the average for the first five months of the year is (-0.089 -0.111 + 0.111 + 0.299 + 0.289)/5 = 0.0998. If the average stayed this way for the rest of the year, its ranking would be 12th. This compares with the anomaly in 2011 at 0.153 to rank it 9th for that year. 1998 was the warmest at 0.428. The highest ever monthly anomalies were in February and April of 1998 when it reached 0.66. If the May anomaly continued for the rest of the year, 2012 would end up 5th.

With the GISS anomaly for May at 0.65, the average for the first five months of the year is (0.34 + 0.41 + 0.47 + 0.55 + 0.65)/5 = 0.484. If the average stayed this way for the rest of the year, its ranking would be 10th. This compares with the anomaly in 2011 at 0.514 to rank it 9th for that year. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007 when it reached 0.88. If the May anomaly continued for the rest of the year, 2012 would end up 4th.

With the Hadcrut3 anomaly for May at 0.474, the average for the first five months of the year is (0.217 + 0.194 + 0.305 + 0.482 + 0.474)/5 = 0.3344. This is about the same as the anomaly in 2011 which was at 0.34 to rank it 12th for that year. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. If the May anomaly continued for the rest of the year, 2012 would end up 9th.

With the sea surface anomaly for April at 0.292, the average for the first four months of the year is (0.203 + 0.230 + 0.242 + 0.292)/4 = 0.242. If the average stayed this way for the rest of the year, its ranking would be 14th. This compares with the anomaly in 2011 at 0.273 to rank it 12th for that year. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. If the April anomaly continued for the rest of the year, 2012 would end up 12th.

With the RSS anomaly for May at 0.233, the average for the first five months of the year is (-0.058 -0.121 + 0.074 + 0.333 + 0.233)/5 = 0.0922. If the average stayed this way for the rest of the year, its ranking would be 16th. This compares with the anomaly in 2011 at 0.147 to rank it 12th for that year. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. If the May anomaly continued for the rest of the year, 2012 would end up 11th.
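As a quick check, the year-to-date averages quoted in the five paragraphs above can be recomputed from the monthly anomalies cited there (a sketch using only the numbers quoted in the comment; the rankings themselves are not reproduced here):

```python
# Monthly anomalies (deg C) exactly as quoted in the comment above.
datasets = {
    "UAH":      [-0.089, -0.111, 0.111, 0.299, 0.289],  # Jan-May
    "GISS":     [0.34, 0.41, 0.47, 0.55, 0.65],         # Jan-May
    "HadCRUT3": [0.217, 0.194, 0.305, 0.482, 0.474],    # Jan-May
    "HadSST2":  [0.203, 0.230, 0.242, 0.292],           # Jan-Apr only
    "RSS":      [-0.058, -0.121, 0.074, 0.333, 0.233],  # Jan-May
}

# Recompute each year-to-date average from the raw monthly values.
for name, anoms in datasets.items():
    avg = sum(anoms) / len(anoms)
    print(f"{name}: {avg:.4f}")
```

The printed averages match the figures in the comment (0.0998, 0.484, 0.3344, 0.242, 0.0922), so at least the arithmetic behind the rankings checks out.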

So on all five of the above data sets, for their latest anomaly average, the 2012 average so far is close to that of 2011. If present trends continue, 2012 will be warmer than 2011, but a record is out of reach on all sets.

No, they do not. The U-shaped traces are an artifact of the method. You see the same “regime shifts” in random data analyzed the same way, as Tamino noticed.

Warming ended in 1997? Before the hottest months, years and decade in any of the global temperature records? That claim should have been a warning that the analysis was wrong.

—–
The author’s analysis & interpretation are messed up. The utility of the article is to get people thinking more about methods. Tamino prefers a different method for locating changepoints. Assumptions of randomness are patently untenable. Regards.

—-
My pleasure vukcevic. I fully support P. Solar’s eminently sensible suggestions. Feel welcome to request my assistance with any of P. Solar’s suggestions either publicly or privately at any time. Implementation is not difficult. Best Regards.

If you run a cumulative sum (the correct name for this analysis technique) on a zero centered linear trend with noise, you will see a parabolic curve with a bend near the middle. Regardless of noise level, regardless of the steepness of the trend, even regardless of the length of the data set – you get a parabola bending somewhere near the middle of that trend (near where it crosses zero), with some variations as per the noise. Not regime changes at all – just the outcome of cumulative sums on a centered trend plus noise.
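The claim is easy to check numerically. Here is a minimal sketch (purely synthetic numbers: a linear trend plus Gaussian noise, not any published temperature record):

```python
import random

random.seed(0)
n = 120
# Synthetic series: a linear trend plus noise, with no regime changes at all.
series = [0.02 * t + random.gauss(0.0, 0.5) for t in range(n)]
mean = sum(series) / n

# Cumulative sum of deviations from the record mean.
cusum, total = [], 0.0
for x in series:
    total += x - mean
    cusum.append(total)

# The trace closes at zero and bottoms out near the middle of the record,
# where the centered trend crosses zero: a parabola-like bowl, not a
# detected "regime shift".
bottom = cusum.index(min(cusum))
print(bottom)
```

Varying the seed, the noise level, or the slope moves the bottom of the bowl only slightly; it always sits near the middle of the record, just as described above.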

Tamino has described this in his latest post (http://tamino.wordpress.com/2012/07/04/sum-fun/), and multiple folks have chimed in with their own random noise/linear trend graphs reproducing Outcalt’s graphs. Whether you care to look or not, this is a horrible piece of analysis on Outcalt’s part. Cumulative sums can be tricky, and the appropriate care was not taken here.

Furthermore, if the heat is going into the deep ocean somehow instead of warming the surface, then what are we worried about? As long as the sea surface is cooling, then hurricanes cannot get worse since there is less energy for them. As well, since the surface temperatures are not increasing, why should we care if the deep ocean gets 0.1 C warmer?

Paul Vaughan writes,
“Tamino prefers a different method for locating changepoints.”
That is true, but that’s not Tamino’s objection. The far larger problem is that Outcalt’s method is no method at all for this purpose. It will find a U-shape parabola inflected in the middle (i.e., the mid-90s) for any line with a positive slope. It will find an inverted-U inflected in the middle for any line with a negative slope. If you take a line with a positive slope and add noise, that makes a lumpy U exactly like Outcalt’s key figure above. So that lumpy U is not evidence of a regime shift in 1997, it is simply an artifact.
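The slope-sign claim above can be verified directly; a small sketch with noiseless synthetic lines (so the mirror symmetry is exact):

```python
def cusum_of_centered(series):
    """Cumulative sum of deviations from the series' own mean."""
    mean = sum(series) / len(series)
    out, total = [], 0.0
    for x in series:
        total += x - mean
        out.append(total)
    return out

n = 100
rising = [0.02 * t for t in range(n)]    # line with positive slope
falling = [-0.02 * t for t in range(n)]  # line with negative slope

u = cusum_of_centered(rising)
cap = cusum_of_centered(falling)

# Rising line: a U (trace stays at or below zero, minimum near the middle).
# Falling line: the exact mirror image, an inverted U.
assert min(u) < 0 and n // 4 < u.index(min(u)) < 3 * n // 4
assert all(abs(a + b) < 1e-9 for a, b in zip(u, cap))
print("U bottoms out at index", u.index(min(u)))
```

Adding noise to either line merely makes the U (or inverted U) lumpy, which is the point being made about the key figure.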

“Assumptions of randomness are patently untenable.”
No one assumed randomness. What statistics can do is test whether imagined patterns are different from random. Outcalt’s extraordinary claim that “the modern warming regime ended in 1997” fails this test in a very basic way.

Excellent point! Here are the specific ranks for 1997 according to five different data sets: GISS-13th, Hadcrut3-11th, RSS-14th, UAH- 20th, Hadsst2-10th.

However, it is possible to draw graphs from 1997 for three of the data sets showing essentially zero slope for over 15 years, as per the details below. Just focus on #4, #5 and #6. Can it be said the warming ended because we can get slopes of 0 from 1997 on?

On all data sets, the different times for a slope that is flat for all practical purposes range from 10 years and 8 months to 15 years and 7 months. Following is the longest period of time (above 10 years) where each of the data sets is more or less flat. (For any positive slope, the value is no larger than 10^-5 per year, except UAH, which was 0.00103655 per year or 0.10/century; while that is not significant, it could be questioned whether it can be considered flat.)

1. UAH: since October 2001 or 10 years, 8 months (goes to May)
2. GISS: since May 2001 or 11 years, 1 month (goes to May)
3. Combination of the above 4: since October 2000 or 11 years, 6 months (goes to March)
4. HadCrut3: since January 1997 or 15 years, 3 months (goes to March)
5. Sea surface temperatures: since January 1997 or 15 years, 4 months (goes to April)
6. RSS: since November 1996 or 15 years, 7 months (goes to May)
7. Hadcrut4: since December 2000 or 11 years, 6 months (goes to May using GISS. See below.)

For #7: Hadcrut4 only goes to December 2010 so what I did was get the slope of GISS from December 2000 to the end of December 2010. Then I got the slope of GISS from December 2000 to the present. The DIFFERENCE in slope was that the slope was 0.0046 lower for the total period. The positive slope for Hadcrut4 was 0.0041 from December 2000. So IF Hadcrut4 were totally up to date, and IF it then were to trend like GISS, I conclude it would show no slope for at least 11 years and 6 months going back to December 2000. (By the way, doing the same thing with Hadcrut3 gives the same end result, but GISS comes out much sooner each month.) See:
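The slope bookkeeping in that extrapolation is simple enough to lay out explicitly (figures in °C per year as quoted in the comment; this only checks the arithmetic, not the underlying regression fits):

```python
# Figures as quoted above (degrees C per year).
hadcrut4_slope_to_dec2010 = 0.0041  # HadCRUT4 slope from Dec 2000 to Dec 2010
giss_slope_drop = 0.0046            # how much the GISS slope fell when the fit
                                    # was extended from Dec 2010 to the present

# If HadCRUT4 tracked GISS after Dec 2010, its up-to-date slope would be:
projected = hadcrut4_slope_to_dec2010 - giss_slope_drop
print(f"{projected:+.4f}")  # negative, i.e. flat or slightly cooling
```

So the projected full-period HadCRUT4 slope comes out at -0.0005 per year, which is the basis for the "no slope for at least 11 years and 6 months" conclusion.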

You may want to look closer at this paper and at an explanation of Hurst coefficients, the limitations of noise analysis, and how steeply this curve bends, indicating a regime change and not just noise creating a parabolic curve. As far as Tamino is concerned, he knows how to apply statistics, but he has a habit of overstating the finality of his analyses, of a particular technique, and of his results. Clearly the “trend” is not some linear concave-up occurrence to be correlated with so-called GHG forcings.

Regarding p-values and statistical significance, there are numerous limitations to looking at the probability of committing a type I or type II error, and in such a complex and chaotic system it is better to look at occurrences and relationships between time periods using Hurst analysis.

jcbmack: Please explain why any linear rising trend with white noise gives the same basic result as Outcalt. Doesn’t this demonstrate that once again this site has published a post with conclusions that are total nonsense?

jcbmack – One of the basic, and critical, techniques I use in evaluating an analysis method is to test it with an assortment of synthetic data, to see if what goes in comes out in the analysis. Outcalt’s work fails this test, as any trending series with random noise produces curves just like the ones he shows, even though there are no regime changes in that synthetic data! Let me restate that for clarity – given synthetic data of known characteristics, Outcalt’s analysis method indicates aspects that are not present; those are false conclusions, bad analysis.

In addition, his work is using cumulative sums, not the Hurst exponent (which is used to analyze autocorrelation, not regime shifts), as discussed in http://tamino.wordpress.com/2012/07/04/sum-fun/, which I would suggest you read. Outcalt has (perhaps inadvertently?) mislabeled his technique.

Again – the opening post makes conclusions not justified by the analysis.

Moderator: Did you snip the Tom Curtis comment [SNIP: the Tom Curtis comment was snipped for exactly the reason moderator dbs said it was. Insult your host or impugn his integrity, your friendly moderators will snip the comment. Complain about moderation policy, and that, too, will be snipped. Abuse fellow commenters and you will be snipped. Demand that Anthony justify posting an article…. get the picture? Discuss the science or not as you please, but demeaning, snarky comments will be snipped. -REP]

@ferdberple (July 4, 2012 at 5:39 am)“…Here is a simple proof that says otherwise:
Radiation in = radiation out …”

Are you serious?

You do know the difference between the emission spectrum of a black body with an effective temperature of 5780 K and the emission spectrum of a black body with an average temperature of 288 K? And you do know the absorption spectrum of CO2? Yes? No? And you do know that your blatant over-simplification shows that you don’t know what you are talking about? Yes? No?

Except, of course, if you are a proponent of climate science and are abused by the fake skeptics that frequent this site, you will not find their abuse snipped. But long live the hypocritical censorship at WUWT.

Oh, quit sniveling about “censorship”. They don’t censor here. If you call the site owner ‘dishonest’, do you really expect kid glove treatment from the moderators? And you’re “abused”? Grow up, crybaby.

But in my contrary way let me try to pass along his main point anyway, in gentler terms. Basically, Outcalt’s post is transparently wrong, as anyone who thought about his arithmetic has figured out by now. Why the post was written and published without noticing that is a fair question. How Outcalt and Anthony respond now will shed light on the answers.

[REPLY: Anthony publishes lots of stuff, not all of which he agrees with. It is simply “interesting”. WUWT commenters are fully capable of critiquing the work and have done so. Anthony is not a co-author on this article and is not required to justify or explain anything, especially to anonymous individuals using anonymous proxy servers. Check site policy regarding that. -REP]

Did you not notice that I said the analysis was done wrong & misinterpreted?

One point where you are wrong is on randomness. You’re not thinking deeply enough about fundamentals underpinning inference. Specifically you are not being careful enough with stat inference model assumption integrity.

With respect to figure 3, perhaps I am misinterpreting the analysis here, but I believe that any monotonically increasing time series, analyzed in this fashion, will produce an integrated sum with a decreasing, linear regime followed by a faster-than-linear increasing regime. Let’s take a mean-subtracted time series y(t) – mean(y). The cumulative sum is the integral of these terms. It is dominated by the integral of the negative term at short times and by the integral of the positive term at long times. If y(t) is linear, the latter part of the integrated time series will be quadratic; if y(t) is exponential, the latter part of the integrated time series will be exponential, etc. However, growth rates are small relative to the absolute level of the signal and the timescale in this 1975-to-now plot. The linear->transition->faster-than-linear behavior is obvious for the longer time series in the open symbols in Fig. 2. You can also look at any other data set that’s more or less described by exponential growth (population, some country’s GDP, stock market indices) and see the same thing.

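The monotone-series argument above can be illustrated with a toy exponential series (synthetic numbers only, standing in for something like a population count or stock index):

```python
import math

n = 200
y = [math.exp(0.02 * t) for t in range(n)]  # smooth, monotonically increasing
mean = sum(y) / n

# Cumulative sum of deviations from the record mean.
cusum, total = [], 0.0
for v in y:
    total += v - mean
    cusum.append(total)

# The trace falls while y(t) is below its mean, then turns and climbs faster
# than linearly once y(t) crosses the mean. Because exponential growth
# concentrates the large values late in the record, the turn sits well past
# the midpoint.
turn = cusum.index(min(cusum))
print("turn at index", turn, "of", n)
```

The same shape appears for any monotone growth curve; only the position of the turn and the steepness of the late rise change, which is exactly why such a turn is not evidence of a regime shift.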

Moderator: According to the moderation policy, “Most people wouldn’t be rude, loud, or insulting in somebody’s home or office, I ask for the same level of civility and courtesy here.” With this policy in mind, please consider the following comment:

“Smokey says:
July 5, 2012 at 4:21 am
Tom Curtis,

Oh, quit sniveling about “censorship”. They don’t censor here. If you call the site owner ‘dishonest’, do you really expect kid glove treatment from the moderators? And you’re “abused”? Grow up, crybaby.

[REPLY: There is nothing incorrect or wrong about Smokey’s comment. It is not rude to comment on someone’s rude behavior. -REP]

Paul Vaughan writes,
“Did you not notice that I said the analysis was done wrong & misinterpreted?”
Yes, that’s pretty vague. The analysis turns out to be meaningless, but earlier you defended it. And did you not notice that I said Tamino’s objection was not due to his preferring a different method for locating change points, as you claimed in your next sentence? Tamino’s objection was due to Outcalt’s method not being a method for locating change points at all.

“You’re not thinking deeply enough about fundamentals underpinning inference. Specifically you are not being careful enough with stat inference model assumption integrity.”
Specifically where did I say something wrong about stat inference model assumption integrity? Feel free to propose your own test for how cumulative-sum-of-deviation plots can be used to tell a true regime shift from the U-shape with squiggles that will always result when you apply Outcalt’s method to a straight trend with noise. Then show that your method works, as Tamino has very clearly and simply shown that Outcalt’s approach does not.

@Gneiss – A good test of this method is to take a data series that (1) increases in the long run, (2) has little noise compared to the temperature record and (3) changes trend for periods of several years. One such time series is the annual average of the Dow Jones Industrial Average. If this is a reasonable method of visually detecting changes in trend, you should obviously be able to pick out the rapid growth rate during the 90s followed by the multi-year decline of the tech bubble.

Over at the “other site”, arch stanton clearly misinterpreted Dr. Outcalt’s key finding in this paper.
But never fear, WUWT regulars, I was there to defend this site’s integrity:

arch stanton | July 5, 2012 at 4:15 pm | Reply
Clearly, “demonic interpretation” is a legal term, even if “dist” isn’t. (but don’t use it in regards to Smokey)…

Zinfan94 | July 5, 2012 at 5:02 pm | Reply
arch, please be more careful with your terminology.
A “demonic interpretation” would mean that the demons have caused a misinterpretation of the data. OTOH, a “demonic interpenetration” indicates that demons have actually changed the system creating the data, or changed the data measurement system. And according to sources, the process they use involves the demonic interpenetration of human spirits, causing the humans to act under the influence of the demons.

In short, in the Outcalt theory publicized by Anthony Watts, demons invading human spirits cause the system changes that result in the linear trend of rising temperatures.

With regard to “demonic interpenetration”, the suggestion is that using a linear trend to interpret the historical temperature record is a demonic activity. IMO no other interpretation makes sense. Of course, that is not at all abusive of the people who do in fact apply a linear trend to the record.

The power of holding two contradictory beliefs in one’s mind simultaneously, and accepting both of them… To tell deliberate lies while genuinely believing in them, to forget any fact that has become inconvenient, and then, when it becomes necessary again, to draw it back from oblivion for just as long as it is needed, to deny the existence of objective reality and all the while to take account of the reality which one denies – all this is indispensably necessary. Even in using the word doublethink it is necessary to exercise doublethink. For by using the word one admits that one is tampering with reality; by a fresh act of doublethink one erases this knowledge; and so on indefinitely, with the lie always one leap ahead of the truth.

The useful content at WUWT often comes out in comments, not articles. Integrity of an article is not necessary to generate deeply insightful comments. The occasion to learn & the stimulation to further explore is what matters most.

My commentary about Tamino’s preference for other methods was based on articles he has written in the past. From your comments I infer that he has now posted commentary on this WUWT thread. I will take a look.

Gneiss, I have taken a look at Tamino’s “Sum Fun”. Nothing has arisen in that thread that wasn’t already obvious. Outcalt fooled few, if any, at either site, but we all had occasion to delve into data exploration & methods (which is a welcome, refreshing change from politics, which I almost always skip). Many commenters at Tamino’s appear to falsely assume uniform composition of the WUWT readership. We are actually a very diverse bunch, associated only in the loosest sense. Nature is beautiful & fascinating. Politics? Not so much… Cheers to All.

KR and others: The idea of random noise/white noise is overstated. So-called noise is mostly part of the signal. For so-called global climate mean temps, you cannot just dismiss data as noise. P-values for statistical significance are not nearly as useful as the public is led to believe.

Also of interest: this 1997 date closely coincides with data showing global warming went flat after 1998.

There is also significant data on climate regime shifts that coincide for the year 1997 among other years. Looking it up is easy.

The real issue here is the complete falsifying of the Hockey Stick, with terrible statistics, like where data is smoothed and then re-input for more analysis instead of independent non-smoothed data being used for comparison. For example, see here:

Now, while more statistical studies looking at these results may be warranted, the fact that you can create a parabola with so-called “noise” does not excuse smoothing data and using that smoothed data as an input where it should not be.

Oh, and the humor of such commentary from here: http://wottsupwiththat.com/ on my comment to Anthony, and the same quoting from Tamino, is not lost on me. For one, there is a parabola shape there to be sure, and of course Anthony may or may not agree with the paper by Outcalt; however, the paper uses a valid method and arrives at a scientifically defensible conclusion. Picking at the lack of noise adjustments or questions about p-values does not eliminate this paper from due consideration. And it is also true that not everyone here who blogs at wattsupwiththat buys into it either, but it is still superior to Tamino’s analysis and Mann’s reconstructions.

Another point of irony is that I ended up being what many would term a “denialist” by studying the science and defending it among many blogs, this one included. As I saw how the pro-AGW blogs were censoring and deleting many well-written and evidenced counter-arguments, and even arguing with me when I would not say the sky was falling, I looked deeper into the data and statistical analysis. Outcalt uses a parsimonious form of analysis with greater accuracy than what has been seen from the mainstream climate establishment. I stand by my remarks that Hurst is a good choice, and this will go over Tamino’s head. Interestingly enough, over at “Open Mind” many posters are claiming Anthony would not publish their posts, but none of my responses to Tamino’s quote were allowed until my relatively innocuous comment “still overhead” got through.

In effect, then, whether Anthony has a different view or not, Tamino attempted to shoot down the paper without proper consideration. Interestingly enough, this paper coincides well with long-term paleoclimate temperature records in terms of the curves, which also form, eeek gasp… parabolas!

jcbmack – You appear to be discussing something other than my comment. The test of an analysis method against a known signal, synthetic data (which can include noise), resulting in conclusions that do not match the known signal, indicates a bad analysis method. No more, no less.

P-values, time periods too short for any statistical significance, and the “Slaying the Dragon” nonsense (which Watts and Fred Singer have both noted is not even worth discussing) are red herrings in this discussion. Outcalt’s test produces false conclusions from synthetic data, and can be expected to do the same with observations. It’s a bad method.

KR, I addressed your comment. There is plenty of data showing regime shifts, weather patterns, and climate shifts indicating support for Outcalt’s paper. Furthermore, the hockey stick has zero validity. The method of Outcalt is a good one. The only thing that may be good is another analysis by Outcalt over a longer time period. Briggs is a good statistician to read, though he considers the hockey stick not fraud but full of unadmitted errors. If you were to look at research on climate regime shifts, and there is plenty to choose from, you will find either a cooling from 1997 or a natural warming signal from 1997–1998 and then a drop-off in global temps. We each have our own minds, so it should not surprise you if we do not all agree with each other within the skeptical community.