Double Standard

Since 1975, global average surface air temperature has increased at a rate of 0.17 deg.C/decade (estimated by linear regression using either the NASA GISS or HadCRUT4 data sets). But the rate of increase hasn’t been perfectly constant over that entire time span.

As a matter of fact, there’s a 15-year time span during which the rate is notably different. Fifteen whole years!!! By at least one calculation, the difference is “statistically significant.”

Does this mean that global warming is wrong? That the computer models are utter junk? That this whole climate science thing is just a hoax, a nefarious scheme to cheat us all out of tax dollars in order to support the lifestyle of gaudy luxury that we all know scientists wallow in? (Science: money for nothin’ and your chicks for free…)

That 15-year time span covers the years 1992 through 2006, during which the rate of warming was 0.28 deg.C/decade. That’s a lot faster than the warming rate from 1975 to now.

Just a few years ago, when Rahmstorf et al. (2007) compared climate observations to computer model projections, they noticed the faster-than-expected warming leading up to 2006. It was faster than expected and faster than projected by those dreaded “computer models” used by the IPCC. According to the data, global average surface temperature was on a “mad dash” to extreme heat.

How did these evil denizens of global warming react? Did they use that result to push world government based on socialism, so that they could destroy our economy by taxing the super-rich out of some of their hardly-earned riches? Did they run screaming through the streets yelling about how we’re all going to suffer spontaneous combustion by the year 2100?

No. Instead, they attempted to understand the result.

And what explanation, some bunnies may wonder, crossed their minds first? What was their first instinct regarding how this mad dash of global warming might have come about? This:

The first candidate reason is intrinsic variability within the climate system.

Wow. When the data indicated surface warming faster than expected, the first explanation offered by those greedy bastards was natural variation.

You missed your chance, guys. How ya gonna rob the super-rich of all their billions with that?

Since that time, when they failed miserably to capitalize on the opportunity for alarmism, there’s been another 15-year time span when the trend differed noticeably from the trend-since-1975. It covers the years from 1998 through 2012:

The evil cabal of climate scientists are somehow trying to explain this away as simply being “natural variation.”

But the poor, downtrodden “deniers” are on to them. They know the truth. You see, that extra-fast warming period really was just natural variation, but the extra-slow period is all because the computer models are junk, the whole climate science thing is just a hoax (gaudy luxury for scientists to wallow in), and we’re headed for decades of imminent global cooling.

After all, isn’t that what Aunt Judy would say? Didn’t she already say that “natural variability” was responsible for more than half of the global warming since the 1970s — but isn’t she now pushing as hard as she can that “the pause” is proof that we don’t really understand what man-made tampering is doing to our climate? Hey — it’s all just a “regime shift” anyway.

Isn’t that what Willard Tony would say? Maybe not — maybe he wouldn’t blame the extra-fast warming on natural variability at all, he’d just claim that the temperature record isn’t reliable. If it shows extra-fast warming, that is — when the temperature record shows extra-slow warming it’s scientific proof.

It’s kinda like the changes in Arctic sea ice. When it takes a nose-dive like in 2007 and again in 2012, that gets blamed on “weather.” But when it makes an up-tick like 2013 — recovery!!!

I think I finally understand the Aunt Judy/Willard Tony approach to science. When data says we have a problem, either it’s just “natural variability,” or the data are either faulty or fraudulent. But whenever data says we don’t have a problem — even if it’s just a single year’s data — voila! Scientific proof.

98 responses to “Double Standard”

Right on cue, Tamino! I just got CATO’s resident fake skeptic, Patrick Michaels, to bet on what he said would be a good bet: that we’ll go 25 years w/o statistically significant warming. So, I bet him that we would, in HadCRUT4, with 95% confidence level.

Now I’m getting some crap on Twitter, of course, about what method of calculation for the 95% level we’re using. This is something we didn’t address in the bet, but the wingnuts are suggesting we use the CATO method (which I assume is Michaels’). Is the method of calculating the confidence level going to make much difference?

I’m a gambler, not a climate scientist, so any help with this would be appreciated.

[Response: Yes, the method of calculating statistical significance can make a difference, and make it easier or harder to establish by a lot.]

I’d make a comment on this but as an active scientist I’m too busy spending all that milk from the teat on fast cars and even faster women. I have so much I’m going to give half of it away to a bunch of folks who say they know how to administer it. Times are good!

” Yes, the method of calculating statistical significance can make a difference, and make it easier or harder to establish by a lot.”

So, what’s a fair method to agree to in advance?

[Response: Linear regression to estimate the trend. As for the uncertainty in that estimate …

If you’re using monthly data, I would suggest the method of estimating uncertainty used in Foster & Rahmstorf. I believe it was implemented on the web somewhere? If you’re using annual averages, then the autocorrelations are too uncertain so I’d just use the “white-noise” model and let whatever computer program you’re using compute it “straight out.”

You might also use a t-test rather than just 2 standard deviations to compute the 95% confidence interval. For 23 degrees of freedom (a time series of 25 annual averages) this means 2.06866 standard deviations rather than the 1.96 (often rounded to 2) associated with the normal distribution.

Then there’s the very important issue of whether you’re basing the bet on a statistically significant *change* or on a statistically significant *warming* at 95% confidence. The first is a two-tailed test, the second is a one-tailed test which makes it much easier to establish significance IF it has been warming.

Here’s the way which is not unfair, but makes it easiest to establish warming: use annual averages (for simplicity), ignore autocorrelation (which is hard to estimate for annual averages), and use a one-tailed t-test (critical value 1.714 standard deviations).

Here’s the way which is not unfair, but makes it hardest to establish warming: use monthly averages, estimate autocorrelation by the Yule-Walker estimate (pretty vanilla way to do it), compute uncertainty by the method used in Foster & Rahmstorf, and don’t forget to compensate the number of degrees of freedom for autocorrelation also, and use a two-tailed t-test.

The most important issue in this is whether to use a 1-tailed or 2-tailed t-test. If your opponent is betting on no statistically significant *warming*, then you should get a 1-tailed test. If your opponent insists on a 2-tailed test, then you should insist that you win the bet if there’s statistically significant *cooling*!

I suggest you put this in the agreement: that the “null hypothesis” will be “trend = zero” and the “alternate hypothesis” will be “trend greater than zero.” That is, after all, exactly what your adversary wishes to test. It also forces the use of a 1-tailed test.]
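As a concrete sketch of the "easiest fair" version described in that response, here it is in Python with scipy. The temperature series below is synthetic (a 0.17 C/decade trend plus noise, seeded for reproducibility) standing in for 25 real annual HadCRUT4 anomalies:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for 25 years of annual anomalies (deg C):
# a 0.17 C/decade trend plus year-to-year noise, seeded for
# reproducibility. Real HadCRUT4 annual means would go here.
rng = np.random.default_rng(0)
years = np.arange(1997, 2022)
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.09, years.size)

# OLS trend, then a one-tailed t-test of H0: trend = 0 against
# H1: trend > 0, using the white-noise model for annual data.
res = stats.linregress(years, anoms)
df = years.size - 2                    # 25 points minus 2 fitted parameters
t_stat = res.slope / res.stderr
p_one_tailed = stats.t.sf(t_stat, df)  # upper-tail probability

t_crit = stats.t.ppf(0.95, df)         # one-tailed 95% critical value, ~1.714
print(f"trend = {res.slope * 10:.3f} C/decade, t = {t_stat:.2f}")
print("significant warming" if t_stat > t_crit else "not significant")
```

With annual data and the white-noise model this is exactly the one-tailed t-test with 23 degrees of freedom and critical value about 1.714; the harder version (monthly data, autocorrelation correction, two tails) changes only the standard error and the critical value, not the structure.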

> Dr. Michaels is betting on no statistically significant warming (at the 95% confidence level) in the HadCRUTx data for the 25 year period starting in 1997. Scott is betting on at least that much warming.

So, I will add to the comments what you suggested:

> the “null hypothesis” will be “trend = zero” and the “alternate hypothesis” will be “trend greater than zero.”

And I thank you and all the other nice folks who’ve helped me understand this stuff.

My post appears to have been scrambled- should read ‘If the alternative is ‘trend >0’, the null should be ‘trend <=0' , ie no warming.'

[Response: That’s not how statistics is done. The null hypothesis has to enable us to compute the probability of the observed result. But “trend less than or equal to zero” doesn’t, because we have to know the trend to compute the probability. THE standard statistical test for a positive trend is to use the null hypothesis “trend = 0” and the alternative “trend greater than zero.”]

Fair enough. You could of course integrate the likelihood of observed data over the range of trends considered possible under each hypothesis (warming v not), but the Bayesian approach is probably too logical for deniers.

I’d add an extra condition to the bet: if any period from 1997 to a point between now and 2021 shows statistically significant warming, you win the bet. This is very much within the spirit of Michaels’s claim of continued lack of warming, and an earlier resolution of the bet would further demonstrate how untenable Michaels’s position on the matter is.

Scott,
Is Spencer betting on no warming or is Spencer betting on one standard deviation less than IPCC average with a cherry picked starting point? If Spencer wins when there is warming but 1.1 standard deviations less than IPCC with a cherry picked starting point that is not a very good bet. It might be close to 50/50 but Spencer gets to claim he wins when the IPCC is basically correct.

I think you are likely to win a bet, if implemented fairly, but I disagree with the thrust here. First of all, to be truly fair, you need a volcano exception. More fundamentally, the implications are asymmetric. If you win, it comes close to completely disposing of brainless zombie arguments based on short-term wiggles. (Not that brainless zombies will necessarily be daunted by reason, evidence, and properly applied statistics.) But if you lose, that does NOT mean that global warming is dead as an issue and a danger, though your opponents will spin it that way. The loss might result from volcanoes, or from falling barely short of significance, and/or just from having a poorly understood lull at the “wrong” time. Yet warming might easily pick up the pace right after the bet expires, leaving the 21st century, once completed, in just about as bad shape as previously feared.

This point seems insufficiently appreciated in the trenches. It’s important to point out abuses, as Tamino so ably does. It’s also important to stay oriented to the overall context, which the proposed bet fails to capture.

I’m not sure, but I think it returns 2 sigma error bars, so you’d need to convert to a one-tailed test by multiplying the error by about 1.714/2.069. If the trend is bigger than the error in that calculator multiplied by that factor then it should be significant… although I’d check that with a proper statistician first!
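That conversion factor is just the ratio of the one-tailed to the two-tailed t critical value. A quick check (assuming 25 annual values, hence 23 degrees of freedom, and assuming the calculator's error bars are t-based rather than plain 1.96-sigma, which I can't confirm):

```python
from scipy import stats

df = 23  # 25 annual averages minus 2 regression parameters
two_tailed = stats.t.ppf(0.975, df)  # ~2.069: 95% two-tailed critical value
one_tailed = stats.t.ppf(0.95, df)   # ~1.714: 95% one-tailed critical value
scale = one_tailed / two_tailed      # ~0.83: shrink a two-tailed error bar
print(f"two-tailed {two_tailed:.3f}, one-tailed {one_tailed:.3f}, ratio {scale:.3f}")
```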

Once you’ve made sure you’re doing it right, then it gives you a useful idea of what to expect based on different data selections etc. For example, if you use Roy Spencer’s temperature data, that’s a satellite time series measuring the lower troposphere. The temperatures there wobble about more in response to things like El Nino, so they have more statistical noise and the error bars are bigger.

Since 1997 the NASA trend error is about 0.12 K/decade, while the UAH trend error is about 0.21 K/decade. This means that it takes longer for satellite data to reach statistically significant trends.

Also, data series change with time, so you’d need to get an agreement on which dataset to use and whether to update it. The UK Met Office’s HadCRUT3 dataset was updated to HadCRUT4, which uses more weather stations, particularly in the far north where warming has been greater. This was very unpopular with ‘skeptics’ because it meant that the recorded warming since around 1998 was faster than previously reported. If more data or infilling techniques become available, then the HadCRUT trend will probably increase further, because the Arctic in particular has been warming quickly recently.

Yes, I like the trend calc at SS, but I didn’t know the bit about the error bars, which is above my pay grade, but I understand it. Pat agreed to use HadCRUT4. We just have to agree on how to calculate the confidence. Dr. Spencer stated it as HadCRUTx, I assume to allow for further additions and corrections.

I still don’t know what to do about Dr. Spencer’s offer. HadCRUT is +/-0.126 C/decade (2 sigma) and Dr. Spencer wants me to take over 0.162 for even money. I’m thinking I should split the diff and offer 0.144.

Is the planetary energy imbalance fairly constant, with warming in the surface temperature series speeding up and slowing down as the balance of heat between oceans and atmosphere changes…

…or:

Is the planetary energy imbalance increasing and decreasing substantially as surface warming speeds up and slows down…

…or:

Do we not have sufficiently accurate measurements to tell, either way?

Another way of looking at it: Is the total heat content of the climate system rising faster and slower as this natural variation plays out, or is the total heat content rising smoothly while just the surface warming speeds up and slows down?

I don’t have a comprehensive grasp of this question, but I’ve got to point out that it’s not a clean either/or, since a shifting of the balance between ocean and atmosphere also shifts the planetary energy balance: energy at 200 m can’t very well radiate away, after all.

Scott, one factor is that both Michaels and Spencer are insisting on a cherry-picked start date. On HadCRUT4, the effect of the 97/98 El Nino was to raise the temperatures at peak to the trend value for 15 years later (using a 0.17 C/decade trend). Using that peak as a start date will lower any subsequent trend from that year relative to the expected trend as predicted by the IPCC (0.2 C/decade), or as projected from ENSO-adjusted data (0.17 C/decade). Using a very crude method I estimate that over 25 years the impact of the cherry-picked start date would be to lower the HadCRUT4 trend by 0.02 C per decade, but that should be treated as a very rough estimate. If Tamino is generous, he will be able to calculate a more accurate estimate.

At that level of impact, it should be irrelevant to your bet with Michaels. The proposed bet with Spencer, however, is different. If you are betting on the accuracy of the IPCC models, the mean expected trend given that cherry-picked start point on my crude estimate drops to 0.18 C/decade, and 0.162 C per decade will be within the confidence intervals. Betting on the continuation of the current (since 1975) warming trend, the impact drops the expected trend below the 0.162 C per decade proposed by Spencer.

Consequently, I would not take Spencer’s bet unless he acknowledges this issue and adjusts the expected trend accordingly, using an accurate estimate of the effect on the trend (by Tamino, if he will provide one). Expect a lot of push-back from Spencer and other deniers about this, however. The concept involved is one that they instinctively understand when pointing out (incorrectly) that the Arctic sea ice extent record starts at a high point in 1979, but seem unable to comprehend when discussing the temperature “hiatus”.
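The start-point effect described here is easy to demonstrate with purely synthetic numbers (an illustration, not an estimate from real HadCRUT4 data): put a single El-Nino-like spike one year into a 25-year window of an otherwise steady 0.17 C/decade trend, and see how much the fitted trend drops.

```python
import numpy as np
from scipy import stats

# Purely synthetic: 25 years of a steady 0.017 C/yr (0.17 C/decade)
# trend with no noise, then the same series with a +0.2 C
# El-Nino-like spike one year into the window (the "1998 peak").
years = np.arange(25)
clean = 0.017 * years
spiked = clean.copy()
spiked[1] += 0.2

trend_clean = stats.linregress(years, clean).slope * 10   # C/decade
trend_spiked = stats.linregress(years, spiked).slope * 10
print(f"clean: {trend_clean:.3f} C/decade, with start spike: {trend_spiked:.3f}")
```

With a 0.2 C spike the fitted trend drops by about 0.017 C/decade, the same ballpark as the crude 0.02 C/decade estimate above.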

Same with the recent “Arctic ice is recovering” hoopla. Never mind that the IPCC effectively predicted a recovery this year, by virtue of the predicted IPCC trend line being way above the actual Arctic ice extent. That’s right: no one has pointed out that this year’s Arctic ice extent is way below the conservative IPCC “prediction”. So what’s a good sceptic to do? Find the most extreme straw man (in this case Maslowski) and point out how wrong he was!

And Maslowski isn’t wrong, either–his prediction was 2016 +/- 3 years, so this was merely the first year in a seven-year ‘window.’ If I were offered odds to bet that he was in fact wrong, I’d consider them carefully before accepting. An ice-free summer by 2016 seems far from impossible to me, even if the ‘mainstream modelers’ are expecting it more on the order of 2030.

dorlomin,
You are assuming that your Steve Goddard, who began his flaming within the Guardian’s web pages in February last year, is the same one who writes such tosh elsewhere, the one about whom not much is known. If it is the same person, he is letting his secret identity slip a little (or planting red herrings), as in this comment the Guardian flamer purports to live in rural Colorado.

I’m pretty sure that political moderates also think that they are right and that those who disagree with them threaten society. (Though perhaps they perceive that ‘threat’ a tad less existentially.)

If Mr. Goddard–probably *not* the blogospheric one, judging by the sheer volume of posts ‘blogSG’ generates and the evident enjoyment he gets out of controlling the forum–could come up with an example of a political segment that doesn’t “think they are right,” his comment would be less bathetic.

Thank you, thank you ever so much, I was waiting for you to come along.

The real FAIL is starting with the 1997/1998 el Nino spike to get a shallower trend.

When I show how that sword cuts both ways, deniers get irritated. That’s because it exposes their double standard — it’s fine for you to blame *warming* on other factors, but God forbid you should admit other factors can go both ways.

Speaking of other factors, the “shallower trend” doesn’t just start with a monster el Nino, it ends with a double-la-Nina and lower-than-usual solar cycle. Will Anthony Watts or Judith Curry — or you — ever admit that the “shallower trend” is due to those factors?

Judith Curry has publicly accused an editor of Geophysical Research Letters of acting maliciously by deliberately rejecting, with extreme prejudice, a 2010 manuscript by Patrick J. Michaels, Paul C. Knappenberger, John R. Christy, Chad S. Herman, Lucia M. Liljegren, and James D. Annan, in which, she says, she hadn’t found any serious mistakes. The four reviewers, supposedly carefully selected by the editor because they would recommend rejection, are accused by Judith Curry of recommending rejection out of confirmation bias (although she doesn’t use this terminology). The manuscript claims an inconsistency between CMIP3 simulations and observations.

I haven’t yet received any reply from Judith Curry or any of the authors, though.

I also sent an email to the Editor in Chief of GRL, Eric Calais, to inform him about the publicly made accusations against the responsible editor (I don’t know who the responsible editor was) and the journal.

Based on what Tamino has said here time and again (including this very post), “Assessing the consistency between short-term global temperature trends in observations and climate model projections” would seem to be a fool’s errand.

Tamino:

> I think I finally understand the Aunt Judy/Willard Tony approach to science. When data says we have a problem, either it’s just “natural variability,” or the data are either faulty or fraudulent. But whenever data says we don’t have a problem — even if it’s just a single year’s data — voila! Scientific proof.

You give Dr Curry far too much credit.
She is not at all concerned with “data”.
(Her abusive treatment of it is evidence enough of that.)

No – She is a martyr on a mission to save the world from climate science.

R.
The first piece of obfuscation perpetrated in the post you link is in Fig 3 where the equivalent of the mauve line in Fig 2 is absent. Why the ‘author’ was minded to present such a line in Fig 2 and not in Fig 3 is difficult to understand. Perhaps his mauve crayon broke. Perhaps the ‘author’ intended it to be absent and this is what is meant by his statement “the effect is vanishingly small.” Or perhaps it was entirely unintended and the ‘author’ has fatally spoilt his own analysis due to this silly oversight.
Certainly anyone familiar with the HadCRUT4 temperature series will know there are dips in global temperature corresponding to volcanic eruptions, at least there are on this planet.

The ‘author’ says he is carrying out (presumably) linear regression of temperature (HadCRUT4) against some factor of volcanic forcing. A quick squint at his linked spreadsheet shows marked-up periods of increased volcanic forcing which is suggestive of a regression using individual events in some way, but I draw the line at delving into poorly annotated spreadsheets.

Indeed. What is the point?

That the abilities of the ‘author’ to carry out the analysis are wholly questionable is perhaps best illustrated by there being other work attempting the same feat (unmentioned by the ‘author’), work that has been accepted as neither containing errors nor resting on misguided methods. (An example of this work for instance here.) Such works are not in any way obscure. And the results of all this work are radically different from the ‘author’s’ results. This would suggest that the one way for this ‘author’ not to be entirely in error is that he is actually talking about a different planet. (This is not uncommon for posts published on Wattsupia.)

Is it actually possible that the ‘author’ is mixing up data from different planets? His assertion that “The reason that the temperature change after an eruption is so small is that the effect is quickly neutralized by the homeostatic nature of the climate.” certainly seems to suggest he is on a different planet. Here on Earth, the reason these dips in temperature are not more massive is because the effects of an eruption are short-lived. There are no known homeostatic processes evident, at least (bless my scaly back) not known to Earthly science.

“Now, before I discuss these claims about volcanoes, let me remind folks that regarding the climate, I’m neither a skeptic nor am I a warmist.

“I am a climate heretic. I say that the current climate paradigm, that forcing determines temperature, is incorrect. I hold that changes in forcing only marginally and briefly affect the temperature. Instead, I say that a host of emergent thermostatic phenomena act quickly to cool the planet when it is too warm, and to warm it when it is too cool.”

Ooookay, then. It’s ‘homeostatic’ because he says it’s homeostatic. It’s interesting that in the entire post he never names one of the alleged “host” of “emergent thermostatic phenomena.” Well, they’re ’emergent’ after all–but apparently only to a “vanishingly small” extent.

Silly Kevin: You clearly do not understand the nature of emergent homeostatic mechanisms which overcome marginal forcings like human-caused CO2 runups.

For example, when it warms up, the heat goes into the ground making lava hotter. This increases the number of eruptions thereby lowering the temperatures. The lowered temperatures cool the lava slowing down volcanic eruptions. Simple homeostasis.

Concerning this Willis Eschenbach post from the planet Wattsupia.
While I decline point blank to be drawn into examining Eschenbach’s spreadsheet to find what he actually did in this analysis of his, I did visit the comments thread on his post (such Wattsupian posts are usually good for a laugh) to see if any explanation would be given there as to what he did, or perhaps to see if there was any protest that he is talking bollocks.
But I learn little apart from Eschenbach insisting that his primary result derives from a “standard linear regression”. As for protest, at one stage he is told very politely in a confused sort of way that sensitivity cannot be measured as described, with the telling reply from the heretic “I don’t understand this.” followed by much obfuscation. Others say effectively that you cannot compare X against f(ΔX) in this manner, but again stated unconvincingly, and gaining no response.

Smoothing away the long-term trend from HadCRUT4 (T – 10-year ave) and multiplying the Sato optical depth by 23 to yield forcing should provide data as used by Willis (or very similar), and a least-squares fit gives:

0.035ºC per W/m^2 forcing (Willis – 0.04ºC*) @ R^2=0.014 (Willis – 0.02**)
* When he multiplies this value by 3.7 (for 2xCO2) he gets 0.13ºC suggesting that the pre-rounded 0.04ºC was close to 0.035ºC
** The graph of R^2 helpfully-presented in his post scales to R^2=0.015.

From his results the self-confessed climate heretic declares that ECS = 0.05ºC x 3.7 = 0.18ºC. Bully for him. What a twonk!!
Any fool would see that this application of “standard linear regression” captures at most the effect of a climate forcing after only 8 months. Does it then magically disappear? Is the heretic’s homeopathic response so sudden?
Of course not.
The warming would go on at a not-dissimilar rate for something like ten times that 8-month period, and thereafter at diminishing rates for many further decades, this multi-decadal period adding at least 50% or so to the initial decade’s warming. Thus we need to multiply the heretic’s ECS by something like 15 to achieve a realistic estimate.
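For what it's worth, the procedure described above (detrend with a 10-year mean, then a "standard linear regression" of residuals on forcing) can be sketched as follows. The data here are synthetic stand-ins, not HadCRUT4 or the actual Sato series, so the numbers only illustrate the mechanics; they don't reproduce either analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins, NOT real data: 100 years of monthly "temperature"
# with a slow trend, noise, and a weak imprint of one Pinatubo-like
# aerosol pulse; forcing = optical depth * -23 W/m^2 as in the post.
n = 1200
t = np.arange(n)
tau = np.zeros(n)
tau[600:624] = 0.1                     # a 2-year aerosol pulse
forcing = -23.0 * tau                  # W/m^2
temp = 0.00014 * t + 0.03 * forcing + rng.normal(0.0, 0.05, n)

# Detrend as described: subtract a 10-year (120-month) running mean.
smooth = np.convolve(temp, np.ones(120) / 120, mode="same")
resid = temp - smooth

# "Standard linear regression" of residuals on forcing, dropping the
# edges where the running mean is truncated.
valid = slice(60, n - 60)
slope, intercept = np.polyfit(forcing[valid], resid[valid], 1)
r2 = np.corrcoef(forcing[valid], resid[valid])[0, 1] ** 2
print(f"slope = {slope:.3f} C per W/m^2, R^2 = {r2:.3f}")
```

Note that the running mean itself absorbs part of the volcanic dip, which is one reason an instantaneous regression like this understates the response.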

Canty et al. estimated the cooling from Mt. Pinatubo after correcting for the AMO (which they call the AMV). They write:
“If the AMV index is detrended using anthropogenic radiative forcing of climate, we find that surface cooling attributed to Mt. Pinatubo, using the Hadley Centre/University of East Anglia surface temperature record, maximises at 0.14°C globally and 0.32°C over land. These values are about a factor of 2 less than found when the AMV index is neglected in the model and quite a bit lower than the canonical 0.5°C cooling usually attributed to Pinatubo.”
http://www.atmos-chem-phys.net/13/3997/2013/acp-13-3997-2013.pdf

I think one thing to keep in mind is that even if the null hypothesis of 0.17 degrees C/decade (the average warming rate of the past 40 years) is rejected by the data at some time in the future (it isn’t yet), we should remember that statistically significant variations have occurred before, and they occurred with little variation in the CO2 growth rate.

For example, there was a (statistically significant) cooling period from December 1939 to March 1951. This was a (very) statistically significant change from the warming trend that occurred from 1920 to December 1939.

So statistically significant variations are not new. The thing that is different from the past is that there haven’t been any statistically significant cooling periods since 1951 and there isn’t going to be another one anytime soon unless something incredible is about to happen.

> Isn’t that what Willard Tony would say? Maybe not — maybe he wouldn’t blame the extra-fast warming on natural variability at all, he’d just claim that the temperature record isn’t reliable. If it shows extra-fast warming, that is — when the temperature record shows extra-slow warming it’s scientific proof.

Sometimes the next beat in that argument goes:

“The temperature record is unreliable. When it’s cooling we use your own data to rub your noses in it!”

Considering that it really depends on the smoothing that you choose, you can indeed prove almost anything: the climate is warming, cooling, or staying the same. What needs to happen is for all groups to agree on a smoothing period that they all feel is actually representative of a reasonable temperature trend. I would suggest the 11-year sunspot cycle, just because it seems obvious to attach the temperature to something outside of human influence. Even if there is no correlation between the sunspot cycle and temperatures on earth, it isn’t an artificial time period. Humans will always set cycles to produce the results they want, so go outside of human time-period influences.

[Response: Just because a time span is not arbitrary, that doesn’t mean it’s useful. A 1-year period isn’t arbitrary either (it’s dictated by the orbit of the earth), but it’s obviously too brief to give meaningful trends. So is an 11-year period.

What needs to happen is that those who apply a double standard — like Judith Curry and Anthony Watts — get called out for their double standard, and that their ridiculous arguments get the ridicule they deserve so they don’t affect public policy decisions.]

> A 1-year period isn’t arbitrary either (it’s dictated by the orbit of the earth), but it’s obviously too brief to give meaningful trends. So is an 11-year period.

It appears that John Robertson was referring to an 11 year “smoothing period”, rather than an 11 year “trend period”.

The 11 year smoothing period is indeed sufficient to take out the high frequency (year to year or few-year) noise leaving the long term warming trend.

In fact, as Tamino has shown in one of his best posts (in my opinion), a 5-year average with non-overlapping periods is actually quite sufficient to show the warming trend (and other trends as well): not up to the absolute present, of course, but pretty close, and not much is lost.

It’s unfortunate, in my opinion, that even climate scientists have often used the linear regression line to try to convey to the public what is going on because the 10-year (or even 5 year) average is much clearer (simpler to interpret), particularly for people who don’t know anything about statistics.
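The 5-year non-overlapping average is about as simple as analysis gets. A sketch, using synthetic annual anomalies in place of a real series:

```python
import numpy as np

# Hypothetical annual anomalies for 1975-2014 (40 values): a
# 0.17 C/decade trend plus noise, standing in for a real series.
rng = np.random.default_rng(2)
years = np.arange(1975, 2015)
anoms = 0.017 * (years - 1975) + rng.normal(0.0, 0.09, years.size)

# Non-overlapping 5-year averages: reshape into blocks and average.
block = 5
avg = anoms.reshape(-1, block).mean(axis=1)
centers = years.reshape(-1, block).mean(axis=1)
for y, a in zip(centers, avg):
    print(f"{y:.0f}: {a:+.3f} C")
```

Eight dots instead of forty, and the warming is plain to anyone, no regression lines or significance tests required.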

While saying that the (linear least squares) trend for the last 15 years is “not statistically significant” might be convincing to those who understand statistics, it’s very unhelpful for those who don’t (the scientists might as well be speaking the Navajo tongue, which even the Japanese code-breakers couldn’t crack).

The statistically uninitiated understand nothing about “statistical significance” and “uncertainty” (when it is even provided, which it often is not) and simply look at the slope of the line and conclude “Well, it sure looks to me like warming has stopped/paused/plateaued {or whatever the favored term of the deniers is}”.

And, of course, those pushing the “Pause” meme actually capitalize on this ignorance of the general public.

And make no mistake, when those who should know better (particularly climate scientists) do so, it is quite dishonest.

I should have said “the difference between the (linear least squares) trend for the last 15 years and the trend since 1975 is not statistically significant”, which is as good a reason as any for talking about averages instead!

The way I look at this is by plotting HadCRUT & GISS surface temps vs CO2, not vs time. Using Mauna Loa CO2 and ice-core CO2 data (adjusting for seasonality if you like), you can go back to 1850. This 163-year trend line shows that the long-term warming rate due to CO2 (allowing for El Nino and other variability) has not changed in 163 years. Each constant-factor increase in CO2 gives the same amount of warming.

This also clearly shows that 1998 and 2001 through 2005 were warmer than usual (due to El Nino cycle) and the last 2.5 years have been cooler than usual (extended La Nina/ neutral), but all perfectly within the natural variability of the ENSO cycle. This means warming is still on track and any pronouncements by prominent climate scientists that warming has “slowed down” in the last 15 years are in error. The next El Nino will make things toasty once again!
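The claim that each constant factor of CO2 gives the same warming is the statement that temperature is linear in log(CO2), so regressing against log2(CO2) reads off the warming per doubling directly. A sketch with synthetic data (the 2.5 C per doubling below is an arbitrary illustration value, not an estimate):

```python
import numpy as np
from scipy import stats

# Synthetic illustration, NOT real Mauna Loa / HadCRUT data: CO2
# rising from 290 to 400 ppm, temperature responding at an assumed
# 2.5 C per doubling of CO2, plus ENSO-like noise.
rng = np.random.default_rng(3)
co2 = np.linspace(290.0, 400.0, 160)
temp = 2.5 * np.log2(co2 / co2[0]) + rng.normal(0.0, 0.1, co2.size)

# Regressing temperature on log2(CO2) makes the slope read directly
# as warming per doubling of CO2.
res = stats.linregress(np.log2(co2), temp)
print(f"fitted warming per doubling: {res.slope:.2f} C")
```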

Now look at them scientists that’s the way you do it
You play the public with the IPCC
That ain’t workin’ that’s the way you do it
Money for nothin’, and trips for free
Now that ain’t workin’ that’s the way you do it
Lemme tell ya them guys ain’t dumb
Maybe get a blister on your mousing finger
Maybe get a blister on your bum

I shoulda learned to play the public
I shoulda learned to play them ducks
Look at that drama, he sounds so slick up on the camera
Mann will he make some bucks
And he’s up there, what’s that? Hill lyin’ noises?
Bangin’ on the public for a carbon fee
That ain’t workin’ that’s the way you do it
Get your money for nothin’, get your perks for free

[PDF] Witt, G. (2013), “Using Data from Climate Science to Teach Introductory Statistics”, Journal of Statistics Education (amstat.org).

Using confidence intervals (CIs), the trend of NH winters 2001-2013 is not significantly different from the 1975-2000 trend, while the prediction interval seems to show otherwise.

If the trend is significantly different when the two periods share data (e.g. NH winters 1999-2013 vs 1975-2000), does that indicate a significant change, if we do not adjust the number of degrees of freedom for the shared data?

Thank you!

[Response: There are some aspects of your questions I’m not sure I understand (I think there’s a bit of a language barrier).

However, to test whether two trends are different one should use the uncertainties of the individual trends themselves (which define the confidence intervals). Use these individual uncertainties to define the uncertainty in their *difference* (which is larger than either individual uncertainty), then use that to test whether the difference is significantly non-zero.]

Thank you very much. I understand that the right method is to compare the confidence intervals of the two trends; with your example:

HN winters CI trend 1975-2000 (GISS)
> confint(reglm)
2.5 % 97.5 %

x 0.01828873 0.03821041

HN winters CI trend 2001-2013
> confint(reglm)
2.5 % 97.5 %

x -0.03985252 0.02183055

So, the difference between trend 2001-2013 and trend 1975-2000 is not significant.

But with HN winters CI trend 1999-2013
> confint(reglm)
2.5 % 97.5 %

x -0.03287723 0.01977833

the difference between the 1999-2013 trend and the 1975-2000 trend is significant, but the two periods share two of the same anomalies (1999 and 2000), i.e. they overlap.

I would like to know whether (I suspect not) we can say the change is significant with an overlapping period, without recalculating the number of degrees of freedom to account for the shared data?

The data seems to support a linear increase. Don’t the CO2 models claim an accelerating increase?

I’ve looked at the collection of predictions made over the past decades, and the charts always start increasing in slope just to the right of today. In particular the one in the Inconvenient Movie, which got my attention first.

If the proposal were for more of the same, I don’t see much problem with it.

Isn’t that what Willard Tony would say? Maybe not — maybe he wouldn’t blame the extra-fast warming on natural variability at all, he’d just claim that the temperature record isn’t reliable.
————–
Well, Anthony is attributing the rising temperature trend to urbanization. So the 15-year temperature plateau must have come about because urbanization came to a dead halt for the last 15 years. So Anthony will at any minute be pointing to a complete standstill in the world economy as vindication of his ideas.

But hold on, he’s not! I wonder why?
