Fake Predictions for Fake Skeptics

Note that there’s a decided hot fluctuation in 2003. So we’ll “predict” the time span 2003 to the present, based on data from 1993 to 2003.

First we’ll make a real prediction using those data. Then we’ll fake it. Here’s the data from 1993 to the present, together with a trend line estimated from 1993-2003 data:

Here’s a common-sense prediction: the trend continues. That looks like this:

We can see that observations don’t follow the prediction exactly — of course! The main difference is that during 2003, the observations were hotter than the prediction. For that time span at least, the oceans had more heat than predicted.

If you’re a fake global warming skeptic then that won’t do. Fake time! We’ll take the actual prediction, the one based on common sense, and just move it upward so that it coincides with the first observation of 2003. That looks like this:

Voila! A fake prediction, tailor-made for a fake skeptic.

When you look at reality it’s kinda obvious how fake that is. But if you’re a fake skeptic, that’s O-tay.
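If you want to see just how cheap the trick is, it takes only a few lines to reproduce with made-up numbers. This is a synthetic series, not the real ocean heat content data; the trend, noise level, and size of the 2003 spike are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic annual series, 1993-2010: steady trend plus noise, with
# an artificially hot value planted in 2003. Made-up numbers only.
years = np.arange(1993, 2011)
ohc = 0.6 * (years - 1993) + rng.normal(0.0, 0.3, years.size)
ohc[years == 2003] += 0.8          # the hot 2003 fluctuation

# The real prediction: fit the 1993-2003 trend, then extend it forward.
fit = years <= 2003
slope, intercept = np.polyfit(years[fit], ohc[fit], 1)
prediction = slope * years + intercept

# The fake prediction: same slope, but shifted upward so the line
# passes through the hot first observation of 2003.
offset = ohc[years == 2003][0] - prediction[years == 2003][0]
fake_prediction = prediction + offset

# Against the real prediction the later residuals scatter around zero;
# against the shifted line they differ by exactly the offset, which is
# what makes the observations appear to fall below the "prediction".
print(np.mean(ohc[~fit] - prediction[~fit]))
print(np.mean(ohc[~fit] - fake_prediction[~fit]))
```

The only ingredient is the offset: the slope never changes, yet the shifted line manufactures an apparent divergence out of one hot starting point.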

60 responses to “Fake Predictions for Fake Skeptics”

Wow. I went to see what this was referring to, thought I was reading the correct article, and was treated to a thousand words or so of a horrible attempt to rewrite “Bringing Down the House” that didn’t have much to do with the author’s conclusion that 3,000 instruments couldn’t possibly estimate a rate of change at some arbitrary level of precision.

After finding the correct article, I have to say that it was pretty savvy to cherry-pick a starting point that wasn’t right at the peak.

Lastly, my eyes tell me it’s obvious that there’s a change point in the data at 2004. Did you do a change point analysis?
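For what it’s worth, the crudest form of change point analysis is easy to sketch: scan candidate break years, fit a separate line to each side, and keep the split that minimizes the squared error. This is a toy version with made-up data standing in for the real OHC series; a real analysis would also need a significance test against the no-break model:

```python
import numpy as np

def change_point(years, y):
    """Crude change-point scan: try each split, fit a straight line to
    each side, and keep the split minimizing total squared error.
    A toy illustration only, with no significance testing."""
    best_year, best_sse = None, np.inf
    for k in range(3, len(years) - 3):          # a few points per side
        sse = 0.0
        for seg in (slice(None, k), slice(k, None)):
            coef = np.polyfit(years[seg], y[seg], 1)
            sse += np.sum((y[seg] - np.polyval(coef, years[seg])) ** 2)
        if sse < best_sse:
            best_year, best_sse = years[k], sse  # first year of new regime
    return best_year, best_sse

# Demo on made-up data with a level-and-slope shift beginning in 2004.
years = np.arange(1993, 2012)
y = np.where(years < 2004, 0.8 * (years - 1993), 9.0 + 0.1 * (years - 2004))
print(change_point(years, y)[0])   # -> 2004
```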

First, let’s assume that there is no spatial-temporal autocorrelation in the data. None. Zip. Nada. Zilch. You know, like it’s all independent & random.

Bad assumption.

Next, let’s assume an instantaneous transfer of surface heat flux throughout the entire water volume (yes, as a matter of fact, WE did just that in a previous post, yes the entire ocean depth even, with delta T set to zero even), or over the upper 1,800 meters (current gibberish post).

What’s the rationale for keeping the same slope as earlier in the fake trend (with a higher intercept)? Why not just start a new trend line in 2001, which would give a(-n implausibly) steep trend over the next few years?

I am not following the denier websites, so I have no idea who dreamed up the fake predictions.

BillD – being a non-science/math person, I wondered the same thing for a moment, but I think the point is that it makes the observed OHC look like it’s falling far below the trend when in actuality the observations are in line with the long-term trend. It’s a visual stunt.

KR,
Yeah – that was my bad. I was a) sleep deprived and b) trying to match where Hansen’s scenarios meet as Evans shows it and c) sleep deprived.

I think Evans’s bad is actually that the plots aren’t centred around a common baseline, and that he describes estimates of climate sensitivity as something that has remained at 3C per doubling of CO2 – when Hansen’s models assumed 4C.

And a bunch of other stuff on the other plots.

His comparison of IPCC projections makes no mention of the fact that the IPCC plots are off because the emission scenario doesn’t match reality, not because the sensitivity is overstated.

The big issue with most of the critiques of Hansen 1988 is that actual emissions fell below his Scenario C (given the collapse of the Soviet Union, the Montreal Protocol, economics, etc), and given actual emissions Hansen’s model is off by only about 25%. Which matches the sensitivity in his model, 4.2C/CO2 doubling if I recall, rather than the ~3C/CO2 doubling considered the modern estimate.

I’m sure daddy Pielke will proudly endorse this and suggest it be published, and that Anthony Watts will not even blink twice before posting it and touting it as yet “another” alleged nail in the coffin of AGW. Good luck publishing scientific fraud BT.

ML, don’t forget the widespread claims of “pal review” when BT fails to get it published in The Southeast Luxembourg Journal of Tree Frog Reproduction. He might be forced to go to a less relevant journal (E&E, doubtless).

If BT is criticised for his choice of start point for being too high, has anyone checked to see if Hansen started his at a too-low value?

It may be wrong for BT to ‘zero the data’ in such a way that the trend prediction is offset, but how come it is right for Hansen to ‘zero the data’ in the way that he has?

What if 1993 was a very low anomaly in ocean heat content?

I think BT assumes the models would give much the same slope whether the runs start in 1993 or 2003. Is that a fair enough proposition? Say, for example, Levitus’ data started in 2003 rather than 1993, and Hansen made the comparison from 2003. What would be substantially different between his results and BT’s? Would Hansen ‘zero the data’ differently to BT in this case? And why?

Hansen et al were running a physics-based model. If they started at a year with an anomalously high ocean heat content, and the model had any skill at all, it would tend to lower the ocean heat content down almost immediately, thereby significantly reducing the short-term slope.

BT, on the other hand, is performing a simple time trend-extrapolation forecast, so it has essentially no physics underlying it. The time trend follows the path of “zero noise” and so he should start the extrapolation at a “zero noise” point, not at any arbitrarily selected observed data point.

Hansen’s references for the modelling are obscure (cites G A Schmidt et al and J Hansen et al ‘in preparation’ *), but the five model runs were 1880 to 2003, not 1993 to 2003. They were not bound or tuned in any way to the OHC data (Willis 2004) they were compared against.

The difficulty with probing BT’s rationale WRT Hansen’s “choice” is that we don’t know how the OHC obs/model comparison was done. How Hansen overlaid the OHC data from Willis is completely undescribed. Therefore, it’s hard to demonstrate that the fit was a product of sound methodology rather than a fortuitous result of using the Willis data. It looks as if Hansen et al simply ‘zeroed’ the model and obs at 1993, with no regard for whether 1993 was a low or high anomaly in the observed data. Some of BT’s argumentation implies that if Hansen can be cavalier with his choices, why can’t BT? Hansen used a new improved data set, why, so does BT (ARGO-enhanced). Hansen arbitrarily zeroed data and obs at 1993, why can’t BT do the same at 2003? Etc.

According to OHC data from NOAA – BT’s choice for OHC data – 1993 is a cool anomaly in the record, but the departure is not as huge as the 2003 spike from which BT starts his trend comparison. I do wonder about how to soundly ‘zero’ model runs and observations in this regard, when the model runs were not tuned in any way to the (Willis) obs. This is the point to illuminate, I think, if you care to drag your knuckles through BT’s reasoning. Why is Hansen’s method here not arbitrary?

* I believe the modelling is described in Hansen et al (2007a): Climate simulations for 1880–2003 with GISS modelE

The 1993 start date is arbitrary. It came to be because that was the start of the satellite-derived sea level record and that marks the start of near global sea level coverage. This improved coverage permitted scientists to better estimate/constrain the OHC data. Willis et al. say, “Satellite altimetric height was combined with approximately 1,000,000 in situ temperature profiles to produce global estimates of upper ocean heat content, temperature, and thermosteric sea level variability on interannual timescales”.

From Hansen et al. (2011), “The 1993-2008 period is of special interest, because satellite altimetry for that period allows accurate measurement of sea level change.”

As it happens 1993 was not too long after Pinatubo and there was a local minimum in OHC (albeit not significant) around then because of that.

For 1993-2003 Willis et al. (2005) estimated increase in 0-700 m OHC to be 0.85 W/m^2 [+/- 0.12]. For 1993-2003 period Hansen et al. (2005) report a modelled (GISS) increase of ~0.60 W/m^2 in the top 750 m of the oceans. Lyman et al. (2010) estimate a rate of 0.63 [+/- 0.28] for 1993-2003 for the top 700 m.

Lyman et al. (2010) estimated that the increase in OHC between 1993 and 2008 in the top 700 m at of 0.64 W m-2 [+/- 0.11].

von Schuckmann and Le Traon (2011) estimated an increase in OHC for 0-700m between 2005 and 2010 of 0.45 W/m^2, and 0.60 W/m^2 for 0-2000 m.

Fast forward to 2012, and Loeb et al. (2012) estimate that for the period 2001 to 2010 the increase in OHC was ~0.5 W/m^2 [+/- 0.43]. Note, that is for the top 1800 m plus an estimate for the abyssal ocean.

These values all seem consistent with the decadal GISS model projection of around ~0.6 W/m^2 reported in Hansen et al. (2005), so the fake skeptic claim that “GISS projection is still 3.5 times higher than the observed trend” seems highly dubious to me.
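As a back-of-envelope check on that “3.5 times higher” claim, here are the model/observation ratios implied by the rates quoted above. All values are in W/m^2 and copied from this thread; the periods and depths vary, so this is rough by construction:

```python
# Quick arithmetic check of the "3.5 times higher" claim, using only
# the decadal rates quoted in this thread (all W/m^2).
model_giss = 0.60   # Hansen et al. (2005) modelled rate, 1993-2003
observed = {
    "Willis et al. 2005 (1993-2003, 0-700 m)": 0.85,
    "Lyman et al. 2010 (1993-2003, 0-700 m)": 0.63,
    "Lyman et al. 2010 (1993-2008, 0-700 m)": 0.64,
    "von Schuckmann & Le Traon 2011 (2005-2010, 0-2000 m)": 0.60,
    "Loeb et al. 2012 (2001-2010, incl. abyssal)": 0.50,
}
for name, rate in observed.items():
    print(f"{name}: model/obs = {model_giss / rate:.2f}")
```

None of these ratios is anywhere near 3.5; they cluster around 1.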

I think that Tisdale may be one of the first of the Denialati to move me to tears – whether of frustration or of despair is moot – because it is becoming ever more apparent that humanity as a whole is terminally Stupid…

The guy just doesn’t get it. I suspect that writing a whole dissertation on the basic statistical reasons why Tisdale is Wrong (and illustrating the same with a thousand graphs), would still be insufficient to illuminate this person’s benighted Dunning-Kruger affliction.

And the number of WWWTians who blindly swallow his tripe only adds to the misery. Truly, these people are “Wootians”, given the nonsense that they accept as being “clear”, “devastating” “science”. I reckon that if Tisdale or Watts or Eschenbach wrote that Hansen/Mann/Gore did not “compensate for the readjusted anti-matter tachyon resonance in the quantum flux capacitor vibrational calibration” and therefore that “human-caused global warming is not occurring/is negative/is insignificant/is good for us/(insert meme-of-the-day)”, the masses would say “ah, yes, of course, it’s all so obvious, frauds!, liars!!, Satanists!!1!, hippy-greenie-envro-Nazis!!1!eleventyone!”

I read the link you provided, but it’s not clear to me what he doesn’t get. Figure 3 in his post seems to address the specific complaint that Tamino raised here. The divergence is unchanged even if shifted to conform with an (also) arbitrary base. Of course the question of whether an 8 year divergence is long enough to be meaningful is another matter…

Well he really does shoot himself in the foot when he shows that Fig 2. (from Schmidt) in his “rebuttal” (cough). From that figure, the GISS-EH model predicts an increase of about 12 x10^22 J between 1993 and 2010 (i.e., that 0-700 m OHC in 2010 would be ~12 x10^22 J). How much did the 0-700 OHC increase (and remember this is one out of about seven OHC products)? It increased by ~10×10^22 J.

Looking at the numbers another way– the rate of increase in the GISS-EH model predicts an estimated rate of increase in 0-700m OHC of about 0.67 x10^22 J yr-1 for 1993-2010, while the NOAA data show a mean rate of increase (from OLS fit) of about 0.62 x10^22 J yr-1 for the same period–I used annual global OHC from NOAA.
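For anyone who wants to reproduce that kind of number, the calculation is just an ordinary least squares fit to the annual series. A sketch with a made-up series (since I can’t paste the NOAA values here, a 0.62 slope is planted so the fit has a known answer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up annual 0-700 m OHC series in units of 10^22 J. This is NOT
# the real NOAA data; the 0.62 slope is planted for illustration.
years = np.arange(1993, 2011)
ohc = 0.62 * (years - years[0]) + rng.normal(0.0, 0.5, years.size)

# The OLS fit recovers the rate of increase described above.
slope, intercept = np.polyfit(years, ohc, 1)
print(f"OLS rate of increase: {slope:.2f} x10^22 J yr-1")
```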

So again, the fake skeptics’ claim that the IPCC models (actually just GISS) are diverging from reality seems false to me.

Looking at the numbers, it seems that the GISS-EH (not ER) is what Gavin and others have been looking at.

“Thanks for the opportunity to post it once again up front in this post, Tisdale.”

You know someone has reached the level of babbling cretin status when they start talking to themselves in public with regularity.

As if BT is really stating, I’ll repeat the lie often enough and loud enough, so as to keep all the confirmation biased D-K’ed Wattins in line. As if.

There are five types of Wattins:

(1) The Posers (people that write/post the thread topics)
(2) The Lemmings (people that follow unquestioningly)
(3) The Insane (people who promote themselves with Ancient Aliens theories)
(4) The Futile (people who post counterarguments over there)
(5) The Few (people who go there for the sheer humor of it all)

I have the impression Curious George is not honestly curious, but if some other people here are … this poster (Lyman & Johnson, large pdf) gives estimates of uncertainty by year and source in OHC data:

The “sceptic” tricks are monumentally stupid if the point is to do legitimate science.

If the point is to rally the converted on the blogsphere and perhaps try to win some political points with supposedly “real data”, then unfortunately they are not quite so stupid.

It would be interesting to play similar games with things people are more familiar with than climate change. For example, can one prove the world economy or stock market are booming? Or perhaps one can predict that items in supermarkets will be free by 2015? Such tricks may seem more absurd if they are played with something one is more familiar with.

It would be interesting to play similar games with things people are more familiar with than climate change.

Yep… this is what makes it so important to expose the lies of the denialists about, e.g., where their money comes from, or inconsistencies in their stand on openness, or in what they claim to know and not know. People understand this better than science. This kind of thing was the undoing of the tobacco conspiracy: the bosses denied knowing about nicotine being habit-forming, while their research departments were working on additives to strengthen the effect…

So, let’s have the Global Warming Policy Foundation be open about their funding sources. We don’t even want to see their emails, just their account books :-)

In related news, has anybody heard anything from the George Mason University’s forever investigation into the Wegman affair?

If we outrageously cherry-pick our start date for economic data – say at 2008 – that short term trend sure looks bad. But of course there’s a lot of noise in the economic data (sub-prime loans are the sulphate aerosols of the GDP!), so we need to look at the long term trends to really see the signal. And the long term trends are up, always up, exponentially into the future, higher and higher…

This is unrelated, but I gots to know…
Anyone know why Brett Anderson, the blogmaster of the Accuweather.com climate blog, seems to go out of his way to protect denialists on the comment threads? They seem to have the run of the place!

Bastardi hasn’t been with Accuweather for about a year. The blog itself is actually pretty good. I don’t see any denialist leaning in the articles posted: http://www.accuweather.com/global-warming.asp
As for the comments, they really don’t seem to be regulated much. And since they switched to a facebook posting system, the number of comments has plummeted. I don’t know about the rest of the site, but Anderson does not seem to be a denier.

Other than his hideous manipulation of the OHC data, BT’s analysis is dead in the water for four very good reasons:

1) If he feels compelled to start in 2003 because of the Argo network, fair enough, but then he should use **all the Argo data to below 1500-2000 m**.
2) That start date poses problems, though, because 2003-present is too short a period to calculate robust statistics or trends, and simply too short to evaluate/validate models.
3) Because of internal variability (see Meehl et al. for example) even if he did detect a robust hiatus or even cooling trend in OHC (which he won’t) it is moot and in no way refutes the models because we know that the system can accumulate energy in the long run albeit with sporadic decade-long slow downs or pauses.
4) The GISS-EH simulations he is using are unaware of the prolonged recent solar minimum and increased aerosol loading since ~2000, yet BT and Pielke turn a blind eye to that fact. It will be interesting to see what the CMIP5 model runs show when they are, hopefully, driven with the most recent forcings.

Willis et al. (2004) calculated the OHC data to mid 2003, so perhaps both Tamino and BT got that wrong – perhaps more so BT, because the last two data points would have increased the trend quite a bit. In their abstract Willis et al. (2004) actually say “mid 1993 to mid 2003”, but the content of their paper does not seem to support mid 1993. Confusing to say the least.

But, the end-point issue is moot because what one probably should do is compare what the GISS model predicted the OHC would increase by from 1993 through the end of 2010. That is, we are not interested so much in whether the observed 1993-2003 trend in OHC continued beyond that, but rather how the estimated trend in OHC compares with the observed trend.

When one does that, as I noted above, the model has done a pretty good job for that window of time:
“Looking at the numbers another way– the rate of increase in the GISS-EH model predicts an estimated rate of increase in 0-700m OHC of about 0.67 x10^22 J yr-1 for 1993-2010, while the NOAA data show a mean rate of increase (from OLS fit) of about 0.62 x10^22 J yr-1 for the same period–I used annual global OHC from NOAA.”

A far cry from the alarmist generalization that “the models are failing!”.

There may be a problem with BT’s replication of the Schmidt figure (see BT’s Fig. 9): he shows the OHC increasing by ~15 x10^22 J between 1993 and 2010, while I get at most 13.5 x10^22 J – I have used 1 W yr/m^2 to be 1.06 x10^22 J, versus his 1.13 x10^22 J, and a 2010-1993 difference of 12.75 W yr/m^2. But it is annoying having to eyeball this stuff. Regardless, I do not think his Fig 9 is a “reasonable facsimile” as he claims.
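One possible source of the 1.06-vs-1.13 discrepancy is simply the choice of reference area in the conversion from W yr/m^2 to joules. Which area each author actually used is my guess, not something stated in the thread, but the arithmetic is suggestive:

```python
# 1 W/m^2 sustained for one year is ~3.156e7 J/m^2; multiplying by
# different reference areas gives different "x10^22 J per W yr/m^2"
# conversion factors. The areas below are standard round numbers;
# which one each author used is a guess on my part.
SECONDS_PER_YEAR = 3.156e7
AREAS_M2 = {
    "full Earth surface": 5.10e14,
    "global ocean (~71% of surface)": 3.62e14,
    "93.4% of the ocean (Willis coverage)": 0.934 * 3.62e14,
}
for name, area in AREAS_M2.items():
    factor = SECONDS_PER_YEAR * area / 1e22   # x10^22 J per (W yr/m^2)
    print(f"{name}: {factor:.2f} x10^22 J")
```

The global-ocean figure lands near the 1.13 BT used, and the 93.4%-coverage figure near the 1.06 I used, so the difference may be nothing more than a different choice of reference area.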

Anyhow, BT is making a mountain out of a molehill and probably trying to help his readers forget about his egregious error with the offset.

BT managed to use the word “disciples” eight times in his post… he needs to calm down and tone down the rhetoric if he wants to be taken seriously, that and learn how to do stats properly ;)

1) BT is comparing to Hansen et al, where the depth is 750 meters. Including other data would be (even more) inappropriate.

2 & 3) Hansen’s period is 11 years. Is this a long enough time period for robust OHC trends?

BT’s OHC analysis 2003+ is not credible, but even though he draws attention to Hansen’s fig 3 with some spurious criticisms, it is rather difficult to defend the methodology for the OHC model/obs comparison in Hansen et al 2005. It may well be that the Willis 04 data set is a *convenient* set, and I can find no explanation regarding the baselining of the obs to the model mean. (The model runs were 1880 – 2003.)

Thanks for your thoughts. I was not very clear with my point #1. Of course if anyone is validating the model against 0-2000 m they should look at the same depth in the modelled ocean – I should have made that clear. It should be possible, though probably not trivial, to estimate the 0-2000 m OHC from the model runs.

Prior to Argo, researchers had no choice but to use 0-750 m because most XBTs do not record data below that depth, but they now do have more data thanks to Argo. The rationale for going deeper than ~700 m is studied in Palmer et al. (2011); they made three important findings (which also address your concerns about points #2 and #3):

1) Decadal trends in SST place only a weak constraint on TOA, because “models show substantial decadal variability in SST, which could easily mask the long-term warming associated with anthropogenic climate change over a decade”.
2) As we measure OHC deeper, we gain increasingly good predictions of TOA.
3) There is a trade-off between measuring longer or deeper for a given uncertainty in TOA.

As noted by RealClimate, “there is likely to be a systematic issue if we only look at the 0-700m change – this is a noisy estimate of the total OHC change.”

Hansen et al. (2005) and Willis et al. (2005) provide error bars and quantify the uncertainty; BT does not. From Hansen et al. (2005): “Figure 2 shows that the modeled increase of heat content in the past decade in the upper 750 m of the ocean is 6.0 +/- 0.6 (mean +/- SD) W year/m2, averaged over the surface of Earth, varying from 5.0 to 6.6 W year/m2 among five simulations. The observed annual mean rate of ocean heat gain between 1993 and mid-2003 was 0.86 +/- 0.12 W/m2 per year for the 93.4% of the ocean that was analyzed.”
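The conversion behind those figures is simple: the 6.0 +/- 0.6 W yr/m^2 is heat accumulated over the roughly ten-year window, so dividing by the window length recovers the ~0.60 W/m^2 decadal rate cited earlier in this thread:

```python
# Arithmetic behind the Hansen et al. (2005) numbers quoted above:
# accumulated W yr/m^2 over the ~10-year window, divided by the
# window length, gives the average planetary imbalance in W/m^2.
accumulated = 6.0            # W yr/m^2, modelled, upper 750 m, 1993-2003
spread = (5.0, 6.6)          # range across the five simulations
window_years = 10.0
rate = accumulated / window_years
lo, hi = (s / window_years for s in spread)
print(f"mean rate: {rate:.2f} W/m^2 (range {lo:.2f}-{hi:.2f})")
```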

So one can use ~10 years or less if forced to by the data availability constraints, but then one should limit one’s statements to that period and that period alone, and not make grand statements confidently asserting that the models are wrong and have failed or are on the verge of failing like Pielke and BT have tried to do, especially if they have not looked at all the data.