James Hansen's climate forecast of 1988: a whopping 150% wrong

From their Die kalte Sonne website, Professor Fritz Vahrenholt and Dr. Sebastian Lüning have put up this guest post by Prof. Jan-Erik Solheim (Oslo) on Hansen's 1988 forecast, showing that Hansen was, and is, way off the mark. h/t to Pierre Gosselin of No Tricks Zone and WUWT reader tips.

Figure 1: Temperature forecast by Hansen's group from 1988. The scenarios are a 1.5% annual increase in CO2 emissions (blue), a constant increase in CO2 emissions (green), and stagnant CO2 emissions (red). In reality, CO2 emissions have increased by as much as 2.5% per year, which would correspond to a scenario above the blue curve. The black curve is the actual measured temperature (rolling 5-year average). Hansen's model overestimates the temperature rise by 0.9°C, which is a whopping 150% wrong. Figure adapted from Hansen et al. (1988).

One of the most important publications on "dangerous anthropogenic climate change" is that of James Hansen and colleagues from 1988, published in the Journal of Geophysical Research. The title of the paper is "Global climate changes as forecast by the Goddard Institute for Space Studies."

In this publication, Hansen and colleagues present the GISS Model II, with which they simulate climate change as a result of changes in the concentrations of atmospheric trace gases and particulate matter (aerosols). The scientists consider three scenarios:

A: increase in CO2 emissions by 1.5% per year

B: constant increase in CO2 emissions after 2000

C: no increase in CO2 emissions after 2000

Since 2000, CO2 emissions have increased by about 2.5 percent per year, so according to the Hansen paper we would expect a temperature rise even stronger than in scenario A. Figure 1 shows the three Hansen scenarios together with the actually measured global temperature curve. The arrow extending above scenario A represents the temperature value the Hansen team would have predicted on the basis of a 2.5% CO2 increase. According to Hansen's forecast, the temperature should have risen by 1.5°C compared with its level in the 1970s. In truth, however, the temperature has increased by only 0.6°C.

The temperature prediction modeled by the Hansen group in 1988 thus misses the mark by about 150%. It is extremely regrettable that precisely this type of modeling is still regarded by our politicians as reliable climate prediction.

Irrelevant. The Farmer’s Almanac predicted last year would be colder but it wasn’t. Modeling accuracy changes over time and improvements are obviously made just like with other measurement techniques, climate or otherwise.

chicagoblack says:
“Modelling accuracy changes over time”
Modelling inaccuracies become apparent over time. Nothing could be more relevant than showing the “Team”, then and now, cannot predict climate and have over-estimated the warming from CO2 by 150 percent.
Irrelevant? Give us a break, Dude.

@Chicagoblack
The problem is that, as with the IPCC executive summary, the findings are political in nature. We were already told that catastrophic global warming was imminent based on their finding. Now we see how far off their models were, and continue to be. Yet the finding does not change. The political is driving the scientific. Your comment "Modeling accuracy changes over time and improvements are obviously made just like with other measurement techniques, climate or otherwise." is irrelevant. Even if the models were re-made to show 100% accuracy in hindcast and observed, it would not change the political intent to fundamentally change social progress. Capitalism and the western way of life is the target.
IMHO.

chicagoblack says:
Irrelevant. The Farmer’s Almanac predicted ….
Good comparison, though you didn't mean it that way. Hansen is about as scientific as the Almanac when it comes to predictions, and he's still trying to justify the '88 "projections". They might even have a better track record than him. Can't be any worse than the Brits though.

The graph is mislabeled. The blue line represents the projected increase in carbon taxes, green subsidies and climate science funding and should be labeled in dollars. Hansen was spot on with this model.

I wonder how long before someone in the press says, “Hey, look at this. If we go back 20 years and look at the IPCC predictions, they’re all wrong.”
Silly me. That would only happen if there were still reporters and journalists instead of advocates.

A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing. Not the “A” scenario as argued here. That is a strawman argument (http://en.wikipedia.org/wiki/Straw_man). It might be reasonable to argue that Hansen didn’t predict economics very well, but then again this was a _climate_ model, not an _economic_ model.
Assuming that CO2 is the only active greenhouse gas is a common, but serious, error. CFC decreases were huge.
Hansen did use a 4.2°C per doubling sensitivity – now thought to be too high, with ~3°C the current estimate. That resulted in a slight overestimate of warming, with the model showing an overestimate of ~20% when run with actual forcings. It’s noteworthy that Hansen’s 1988 regional temperature distribution predictions (regional predictions being an issue many folks seem to raise with climate models) are quite accurate (see http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf, Plate 2).
The sensitivity estimate he used in 1988 (considered reasonable then) was rather too high, and considering actual forcings (given political and economic developments) close to the “B” scenario, Hansen’s 1988 model was surprisingly good.

chicagoblack says:
June 15, 2012 at 9:10 am
Irrelevant. The Farmer's Almanac predicted last year would be colder but it wasn't.
The difference is that the FA is basically just a long-range weather forecast. I don't know what their accuracy rating is, but it's a good bet that it's far higher than that of Hansen and his fellow climate prognosticators. The reason for that is that Hansen and crew have the mistaken notion that our CO2 is somehow driving climate. Since the very basis for their models is false, no amount of fiddling with them or adjusting is going to make them any better than what they are and always have been: pure unadulterated horse manure.

It’s more than 150% wrong because the data are made up! No correction for Urban Heat Island effect, but adjusted so to maximize warming trend! It’s actually been cooling since 1998, so that means it’s infinite times wrong during that time frame!

Speaking of 1988, that was the hottest and driest summer since the Dust Bowl in the midwest. Nothing remotely close since then. I’ve seen some talk of drought this year, and the drought monitor does show a lot of drought. But I don’t know what drought they’re monitoring! 2010 and 2011 were the wettest two year period on record, and precipitation is only barely below normal for 2012. How in the heck can we be in a drought? It’s like the fake Minnesota drought last month and the fake UK drought this spring… soon as we get a dry period, they start hyping of a drought. One thunderstorm is all it takes to erase the entire yearly precipitation deficit!

KR says:
June 15, 2012 at 9:50 am
“A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing.”
Just to complete your argument then, could you give in watts per metre squared, the reduction in forcing due to the reductions of CFC’s? I just want to see how the numbers compare.

Not to worry, increased warming will “return with a vengeance” later this decade (or maybe the decade after)! Just you wait and see. It’s not only coming back, but it’ll be mad as hell and looking to kick sceptic rear!

It’s worse than just being wrong. If we had done what Hansen wanted in 1988, even without any impact at this point he could be saying “it’s working, look how much warming we’ve avoided!” and there’d be no way to prove it hadn’t been the mitigation activities. Think of the accolades the team could be glowing in right now if it weren’t for them meddling kids ……. er …. skeptics. No wonder they’re PO’d.

KR says:
June 15, 2012 at 9:50 am
“The sensitivity estimate he used in 1988 (considered reasonable then) was rather too high, and considering actual forcings (given political and economic developments) close to the “B” scenario, Hansen’s 1988 model was surprisingly good.”
So now forcings are political in nature? All this time I though forcing referred to a physical process happening in real time. Silly me! Well, as long as there was a consensus….

KR, thanks for that explanation. If I may summarize:
1. Hansen used sensitivity that was way too high (ie was wrong)
2. Hansen assumed all other factors would remain constant (ie was wrong)
3. Yet proclaimed the science to be settled (ie was wrong)

Olen says:
June 15, 2012 at 9:19 am
Evidently being right is not in his job description.
I take it you mean the German author of this hatchet job; he gets just about everything wrong about the Hansen et al. paper. I find it hard to believe that they actually read it. It's fine to criticize the work by comparison with the actual data, but at least get the facts right: at the time, Hansen described scenario B as 'perhaps the most plausible of the three cases'. So to linearly (not logarithmically) increase the scenario A value as if it were due to 1.5% growth in CO2 emissions (which it wasn't) and claim that Hansen's prediction was out by 150% is nonsense.

KR says:
June 15, 2012 at 9:50 am
A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing. Not the “A” scenario as argued here. That is a strawman argument (http://en.wikipedia.org/wiki/Straw_man). It might be reasonable to argue that Hansen didn’t predict economics very well, but then again this was a _climate_ model, not an _economic_ model.
Assuming that CO2 is the only active greenhouse gas is a common, but serious, error. CFC decreases were huge.
The IPCC seems to have declared that chlorofluorocarbons were a “greenhouse gas” without any observations to back that up.

@Chicagoblack You said “Modeling accuracy changes over time and improvements are obviously made just like with other measurement techniques, climate or otherwise.”
No, it doesn’t. “Climate” models will always be wrong – there are just too many variables to accurately predict climate – they can’t even accurately predict weather yet, lol!!!

Warmists would argue that it’s all that natural/Chinese SO2 cooling that is offsetting the warming and if we hadn’t had the cooling we’d be in trouble. Of course the opposite would also apply, if we hadn’t had the warming we’d be what? Colder than the Little Ice Age?

KR says:
June 15, 2012 at 9:50 am
A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing. Not the “A” scenario as argued here. That is a strawman argument.
Hansen’s predictions were based solely on CO2 levels, hence the post is both fascinating and accurate.
Your comment is a red herring.

chicagoblack says:
June 15, 2012 at 9:10 am
Irrelevant. The Farmer’s Almanac predicted last year would be colder but it wasn’t. Modeling accuracy changes over time and improvements are obviously made just like with other measurement techniques, climate or otherwise.
So, by the sheer age of the Almanac's model it should be quite accurate, but by your own argument it isn't.
Q.E.D.

While I have little interest in defending Hansen from anything, I think this "150% wrong" is a math error, and we can do better than that. I assume the 150% comes from "projecting" 1.5°C while observations report 0.6°C. In my book, the observation is 60% lower than the projection. Had the observation reported no change, that would be a 100% error, and the offered 150% error would mean the observation should be a drop of 0.75°C.
We do not have to twist percentages around to show that Hansen’s projections are wrong. Exaggeration is not necessary, that’s for people like Stephen Schneider.
Even better – I claim percentages beyond 20 or 30% are the work of the devil and should be avoided unless you like to play with fire and brimstone.
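The two conventions being contrasted here can be made explicit in a few lines (a minimal sketch; the 1.5°C and 0.6°C figures are simply those quoted in the post and comment above):

```python
# Two ways to express the gap between the projected and observed warming,
# using the figures quoted in the post (1.5 C projected, 0.6 C observed).
projected = 1.5  # deg C rise implied for ~2.5%/yr CO2 emissions growth
observed = 0.6   # deg C rise actually measured

# Convention 1: error relative to the OBSERVATION (the post's "150%").
err_vs_observed = (projected - observed) / observed * 100
print(f"relative to observation: {err_vs_observed:.0f}%")  # 150%

# Convention 2: error relative to the PROJECTION (the commenter's "60% lower").
err_vs_projected = (projected - observed) / projected * 100
print(f"relative to projection:  {err_vs_projected:.0f}%")  # 60%
```

Both numbers describe the same 0.9°C gap; they differ only in which value is taken as the baseline, which is exactly the commenter's point.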

KR says:
June 15, 2012 at 9:50 am
The sensitivity estimate he used in 1988 (considered reasonable then) was rather too high, and considering actual forcings (given political and economic developments) close to the “B” scenario, Hansen’s 1988 model was surprisingly good.
I can tell you have been reading SkepticalScience, haven’t you? Their patently absurd claim is no more true having restated it. When your statistical argument doesn’t even pass the eyeball test you really should not make it. Real world measurement OBVIOUSLY is a much closer fit to Scenario C than it is to A or B. Please look at the graph again.

Hey KR, as someone else asked, what are the Watts per square meter of forcing from CFCs, and by how much has that decreased over time?
Also note how we are beneath Scenario C (even with “corrected” forcing, we’d be right on the C line, and following its shape), which is absolutely no emissions at all after 2000. So what does that tell you?

Come on guys/gals…..you know that we’ve all used a mulligan a time or two in golf….JH just uses one whenever he wants!! (ie GISS temp. record, 1988 testimony, arrest -> release ->arrest -> release…etc….)

davidmhoffer says:
June 15, 2012 at 10:23 am
KR, thanks for that explanation. If I may summarize:
1. Hansen used sensistivity that was way too high (ie was wrong)
If I may summarize: he used the accepted value of the time, which now appears to have been too high compared with the present value.
2. Hansen assumed all other factors would remain constant (ie was wrong)
He didn't, read the paper. He quite explicitly made various assumptions about future emissions, volcanoes etc.
3. Yet proclaimed the science to be settled (ie was wrong)
He didn’t! In fact he said:
“Major improvements are needed in our understanding of the climate system and our ability to predict climate change.
We conclude that there is an urgent need for global measurements in order to improve knowledge of climate forcing mechanisms and climate feedback processes.”
Making such demonstrably false statements as you have done here certainly diminishes any credibility you may have had.

techgm says:
June 15, 2012 at 10:45 am
Are the “Observed” data from satellites, surface stations (complete with lousy sitings, reduced high-elevation and high-latitude sites, disappearing sites, and “adjustments”), or a mix of both?
Probably pulled from thin air like the rest of the piece!

but at least get the facts right, at the time Hansen described scenario B as ‘perhaps the most plausible of the three cases’. So to linearly (not log) increase the scenario A value as if it was due to 1.5% growth in CO2 emissions (which it wasn’t) and claim that Hansen’s prediction was out by 150% is nonsense.

You really don’t understand what Hansen was saying, do you?
Actual CO2 is above scenario A. That means that Hansen is wrong and continues to be wrong until such time as he withdraws the paper and publicly admits to lying to Congress for financial gain.

KR,
If we assume that the reduction in CFCs has the net result of making Scenario B the closest to reality (i.e. the CO2 increase is offset by the CFC decrease), which is what I think you are saying, then I'll agree that the 150% wrong claim may be hyperbole, but it still looks like Hansen is wrong. Real temps are not close to Scenario B predictions.
Greg Tuttle,
CFCs have long been known to have a GHG effect. My personal opinion is that the Montreal Protocol will turn out to be a good thing – even if by accident – with regard to warming. It doesn't seem to have done much for what it was intended to do.

Increasing CO2 amplifies the delusions of the susceptible, plenty of evidence for that since belief in anthropogenic climate change has paralleled banking delusions, European single currency delusions and global government delusions of the UN. All follow the same hockey stick, but global temperature doesn’t.

Hansen will soon be made an FRS. When you get things this wrong, that's how you are rewarded. Look at Paul Ehrlich, who has been made an FRS. I cannot think of anyone whose 'prophecies' have been more wrong than his: in 1980 he predicted 50% of all species would be extinct by 2000 and everything extinct by 2015. Let alone his predictions about starvation etc. He probably thinks that swallows hibernate in winter.

When you compare simulation projections to observations there are several things you have to do in order to complete a competent and objective analysis.
First, you have to understand that simulations provide a conditional prediction:
IF future forcing looks like X, and IF climate sensitivity is Y, then we predict a temperature of Z (± std).
The first problem you face in evaluating a projection is understanding that it is not and cannot be a controlled experiment. That is, we cannot force the IF statements to be true. So we typically do sensitivity testing or scenario testing. The closest thing I can think of is war gaming analysis – the kind of analysis and simulation that goes on prior to an armed conflict.
To evaluate Hansen's projection we first have to find the scenario that is closest to the actual forcings. That is scenario B, not C and not A. Then we have to understand that Hansen used the wrong figure for climate sensitivity. His figure was more likely than not too high.
What does that mean? What can we conclude from the simulation? We can say that a climate sensitivity of 4.2 is likely to be too high. We can say that the simulation needs work. We can also conclude that scenario development is one of the weakest parts of the process.
We can also conclude that testing more scenarios is a good idea if we want to grade simulations and improve them.
What would be interesting is this: take a newer version of ModelE (sensitivity is 2.7) and re-run Hansen's experiment with the following inputs:
1) scenarios A, B and C
2) actual forcings during the observation period.

Reduction in CFC’s has not previously been put forward as a reduction in forcing, given that CFC’s were not originally considered as greenhouse gases and by 1988 the Montreal Protocol was already having an impact in reducing these gases. To use this as an argument for dismissing Scenario A is misdirection.
Yes, Hansen has suggested that Scenario B was the most likely, but that was based (quite specifically, IIRC) on reductions in CO2 emission growth. This was his whole point in arguing that Kyoto was irrelevant: it was not going to be enough of a brake on CO2 emissions, and we (the western world) should have been doing more.
The fact is that the actual emissions on CO2 have outstripped scenario A, yet we have not seen anything like the increase in temperatures that this increase was supposed to have caused. Such a falsification (which is precisely what it is) pretty well demands major revision of the model – not the tinkering around the edges that Hansen and supporters have been trying to do.

My Real Science comment on this: Let’s assume he actually knew roughly that temps would follow scenario C — regardless of what happens with CO2. And thus that he also knew that what we did on CO2 emissions would have no effect at all. So, IF we at the time instituted drastic CO2 cuts, he could say that his cuts worked, because temps would be doing what they are doing now. This would be used to justify even more cuts, as we head then toward the leftist dream of apocalyptic de-industrialization.
What if at the time he didn’t get govts to institute crazy CO2 cuts? For one, at the time, Hansen was doubling down, was trying to rush the people and govts into implementing his whacky economy-busting ideas.
Secondly, he probably just wasn’t going to bother with thinking that far ahead — nearly 25 years now. But now he needs to pay the piper for his demonstrated proven bullshit.

There’s a good discussion of the actual forcings at http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/ – and what actually happened are forcings slightly below Scenario B (within ~0.1 W/m^2). See this figure showing scenario and actual forcings: http://www.realclimate.org/images/Hansen88_forc.jpg
Again, the various scenarios are “what-if”s, the actual model is how the climate would respond to various forcing levels. On that measure Hansen 1988 holds up surprisingly well for a 25 year old model.
And, to repeat – while CO2 has progressed roughly as both scenarios A and B projected, we have not gone through Scenario A, due primarily to CFC reductions and a rather lower than expected amount of methane. Arguing that Hansen’s model was flawed based upon events that didn’t happen is a completely bogus strawman argument.

Hansen does not care about observations or the science. Hansen and his cohorts exaggerate, manipulate, and cherry-pick data to push the extreme AGW agenda.
The planet's response to a change in forcing is to resist the change (negative feedback), as confirmed by Lindzen and Choi's analysis of top-of-the-atmosphere radiation vs ocean surface temperature changes. Hansen and the IPCC require the planet to amplify forcing changes (positive feedback) to create the scary extreme warming scenario that the jihad environmentalists use to justify Western governments spending trillions of dollars on "green scams" that do not significantly reduce CO2 emissions – which is not a problem anyway, as plants eat CO2. Western governments are deeply in debt and losing the battle for jobs with Asia.
When logic, reason, science, and economic reality are removed from public policy, the result is chaos.
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
On the Observational Determination of Climate Sensitivity and Its Implications
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985-1999) and CERES (2000-2008) satellite instruments. …
We argue that feedbacks are largely concentrated in the tropics, and the tropical feedbacks can be adjusted to account for their impact on the globe as a whole. Indeed, we show that including all CERES data (not just from the tropics) leads to results similar to what are obtained for the tropics alone – though with more noise. We again find that the outgoing radiation resulting from SST fluctuations exceeds the zero-feedback response, thus implying negative feedback. In contrast to this, the calculated TOA outgoing radiation fluxes from 11 atmospheric models forced by the observed SST are less than the zero-feedback response, consistent with the positive feedbacks that characterize these models. The results imply that the models are exaggerating climate sensitivity.
http://www.forbes.com/sites/jamestaylor/2012/04/11/a-new-global-warming-alarmist-tactic-real-temperature-measurements-dont-matter/
A New Global Warming Alarmist Tactic: Real Temperature Measurements Don’t Matter
What do you do if you are a global warming alarmist and real-world temperatures do not warm as much as your climate model predicted? Here’s one answer: you claim that your model’s propensity to predict more warming than has actually occurred shouldn’t prejudice your faith in the same model’s future predictions. Thus, anyone who points out the truth that your climate model has failed its real-world test remains a “science denier.”
This, clearly, is the difference between “climate science” and “science deniers.” Those who adhere to “climate science” wisely realize that defining a set of real-world parameters or observations by which we can test and potentially falsify a global warming theory is irrelevant and so nineteenth century. Modern climate science has gloriously progressed far beyond such irrelevant annoyances as the Scientific Method.

Phil. says:
June 15, 2012 at 11:04 am
davidmhoffer says:
June 15, 2012 at 10:23 am
KR, thanks for that explanation. If I may summarize:
1. Hansen used sensistivity that was way too high (ie was wrong)
***************
Phil;
If I may summarize: he used the accepted value of the time, which now appears to have been too high compared with the present value.>>>>
He used a value that he was a big part in arriving at and promoted it heavily, and it was…. wrong.
Phil;
2. Hansen assumed all other factors would remain constant (ie was wrong)
He didn’t, read the paper. He quite explicitly made various assumptions about future emissions, volcanoes etc.>>>>>
The assumptions he made about future emissions were LOWER than actual emissions, so his predictions should have been even HIGHER. Again, he was….. wrong.
Phil;
3. Yet proclaimed the science to be settled (ie was wrong)
He didn’t! In fact he said:
“Major improvements are needed in our understanding of the climate system and our ability to predict climate change.
We conclude that there is an urgent need for global measurements in order to improve knowledge of climate forcing mechanisms and climate feedback processes.”
>>>>>>>>>>>>>>>>
He has been, and repeatedly so, part of the “hockey team” meme that the “science is settled” that action is “urgent” and has even advocated that those who disagree be put into jail. I judge him not by the caveats in this single paper, but his behaviour over all since then which shows quite conclusively that he continues to promote his clearly WRONG paper as if it was right, comes up with weasel word explanations for his errors to spin them as something other than errors, and has become more vociferous about his demands for action on climate issues, not less.
Phil;
Making such demonstrably false statements as you have done here certainly diminishes any credibility you may have had.>>>>
LOL. Like defending Hansen to anyone who has actually taken the time to look into his work and the facts in any amount of detail confers any credibility upon you. I stand by my remarks.

One not so minor note: What were these scenarios, and how likely did Hansen feel they were? From Hansen 1988:
Scenario A: “…assumes that growth rates of trace gas emissions … will continue indefinitely […] since it is exponential, must eventually be on the high side of reality…”
Scenario C: “…a more drastic curtailment of emissions than has generally been imagined, it represents elimination of CFC emissions by 2000 and reduction of CO2 and other trace gas emissions to such a level that the annual growth rates are zero…by 2000”
Scenario B: “…is perhaps the most plausible of the three cases.”
And interestingly enough, we’re running quite close to Scenario B. Pat Michaels was wrong to claim we were on Scenario A 15 years ago, testifying before Congress. And the opening post is just as wrong to claim it now.

KR, you can't differentiate CO2 from the scenario; it's the whole point. If he got the CO2 pretty close in scenarios A and B and the temperature pretty close in C, that doesn't equate to a mostly correct model.
Hansen’s scenarios show quite plainly that his assumed forcing of CO2 is wrong. His actual temperature model didn’t approach reality until he removed CO2 as a forcing.

Hey, KR, would you mind telling me what forecasts wouldn't be "surprisingly good" when you allowed for the things the forecaster was wrong about? Every forecast would be "surprisingly good" under those conditions, which is why your post is useless. Because you see, the whole point of a forecast is to see how good your understanding of a process is based on your forecast at that time. If you're allowed to go back and essentially redo it and incorporate all the things you failed to forecast, it kind of defeats the purpose.
I guess you have to leave it to a ‘climate science’ type to essentially come up with the idea that forecast is “surprisingly good” so long as you incorporate all the things the forecaster failed to, you know, forecast…

Well, it doesn’t explicitly say so, but is this using HadCRU3 as the ultimate real-world temps? Looks like it to me, given the 1998 peak. If that’s the case, it seems a bit odd to compare a projection based on GISS temperature data to anything other than the up-to-date GISS data.

Forcing values for the various gases can be found at http://www.esrl.noaa.gov/gmd/aggi/ – increases shown there from 1988 to 2010 are 0.71 W/m^2 (a bit less than some other references, mind you), of which CO2 represents ~0.56 W/m^2.
Hansen 1988 scenarios for that period were around:
A: 0.6 CO2, another 0.8 from CFC’s, methane, N2O, etc. ~1.45 W/m^2
B: ~0.6 CO2, 0.4 from other gases. A bit under 1 W/m^2.
C: ~0.4 total, mostly CO2. 0.4 W/m^2.
The various trace gases forcings are less than half of what was considered under Scenario B, and 1/4 that of Scenario A.
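The bookkeeping in this comment can be checked directly from the numbers it quotes (a sketch only; every W/m² value below is the rough 1988–2010 figure given in the comment, not independently sourced):

```python
# Rough 1988-2010 forcing increases (W/m^2) as quoted in the comment above.
actual_total = 0.71          # NOAA AGGI increase, 1988-2010
actual_co2 = 0.56            # CO2 share of that increase
actual_other = actual_total - actual_co2   # ~0.15 W/m^2 from trace gases

# Hansen 1988 scenario forcings for the same period, per the comment.
scenarios = {
    "A": {"co2": 0.6, "other": 0.8},   # ~1.45 W/m^2 total
    "B": {"co2": 0.6, "other": 0.4},   # a bit under 1 W/m^2
    "C": {"co2": 0.4, "other": 0.0},   # ~0.4 W/m^2, mostly CO2
}

for name, s in scenarios.items():
    print(f"Scenario {name}: total {s['co2'] + s['other']:.2f} W/m^2")

# Non-CO2 ("trace gas") forcing: actual vs. scenario assumptions.
print(f"actual other / B other: {actual_other / scenarios['B']['other']:.2f}")
print(f"actual other / A other: {actual_other / scenarios['A']['other']:.2f}")
```

On these quoted numbers, the realized trace-gas forcing comes out below half of Scenario B's assumption and around a quarter of Scenario A's, which is the comparison the comment is making.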

Dave H;
If that’s the case, it seems a bit odd to compare a projection based on GISS temperature data to anything other than the up-to-date GISS data.
Dave H;
Also – did Hansen’s 1988 projection use a 5-year moving average? If not, it would seem a bit odd to do so when comparing against temperature data
>>>>>>>>>>>>
What seems a bit odd is that someone can make predictions so completely wrong, and yet people turn up to try and justify them by asking silly questions that, even if they were correct, would make nearly no difference to the end result anyway –
WHICH WAS THAT HANSEN WAS COMPLETELY AND TOTALLY WRONG!

Because Hansen did not include CFCs in his model calculations to produce this infamous graph, he CANNOT now subtract them out to get a better fit with his CO2 graph. If he were to do that properly, then he would have to show a graph that included the warming effect of the CFCs, which would generate a warming curve significantly higher than curve A. Subtraction of the CFCs' effect would then lead right back to his curve A. His graph is specifically calculated for CO2; there is NO effect of CFCs in his graph for which a subtraction would be valid.

I predicted a score in a hockey game in which, had the home team scored three more goals, and the visiting team two fewer, and the game ended in the second period, and the score from the previous game been added to this game, my prediction would have been correct.

Phil & KR (and even Mosh) are bright chaps, they are fully capable of understanding simple arithmetic, pre-industrial levels of CO2 were around 280ppm, today the level is 395ppm, a rise of 115ppm or 41%. But CO2 forcing is logarithmic, a 41% rise represents around 70% of the forcing due to CO2 doubling, most bright chaps, even including Hansen, agree that forcing from a CO2 doubling is in the range 1-1.5ºC. 70% of that range is 0.7-1ºC. I wonder what the observed rise in temperature has been since pre-industrial times?, well blow me down, at 0.8ºC it falls exactly in that range! What does this observation mean for those that argue that climate sensitivity is greater than 1, given a century’s worth of data observations, not a lot!
Are you sitting comfortably?, because there’s more, let’s examine what’s happened since 1997 when CO2 was around 360ppm, the rise since then is 35ppm. Yes, that’s right, almost 10% of ALL THE CO2 in the atmosphere has been added since 1997! Again, CO2 forcing is logarithmic, a 10% rise represents around 18% rise in forcing from CO2 doubling, so assuming no other factors global temperature should have risen 18% of 1-1.5ºC or 0.18-0.27ºC since 1997. Again I wonder what the observed rise has been in those 15 years, strike a light, with baselines adjusted the mean of HadCrut3, GISStemp, RSS & UAH gives 0.05ºC, somewhat outside (lower) than the expected range calculated above. Those that argue that climate sensitivity is greater than 1 now have a greater than decadal observation which appears to suggest precisely the opposite!
Climate sensitivity greater than 1 is not supported by multi decadal observations, I suggest that Mr Nuccitelli and Mr Cook might like to reconsider their support for Mr Hansen, because observations would indicate that temperature, like sea level rise, ocean heat content rise and net ice loss just isn’t following the script…
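The logarithmic-forcing fractions quoted above are easy to check, since the fraction of a doubling depends only on ln(C/C₀)/ln 2. A minimal sketch (the 5.35 W/m² coefficient is the standard simplified fit, an assumption introduced here rather than taken from the comment):

```python
import math

def co2_forcing(c, c0):
    """Radiative forcing from a CO2 rise c0 -> c, in W/m^2, using the
    common simplified expression dF = 5.35 * ln(c / c0)."""
    return 5.35 * math.log(c / c0)

def fraction_of_doubling(c, c0):
    """Fraction of a full CO2-doubling forcing represented by a rise
    from c0 to c; depends only on ln(c/c0) / ln(2)."""
    return math.log(c / c0) / math.log(2)

# The concentrations quoted in the comment above:
since_preindustrial = fraction_of_doubling(395, 280)
since_1997 = fraction_of_doubling(395, 360)
```

For the two rises quoted (395/280 and 395/360) this gives roughly 0.50 and 0.13 of a doubling.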

I do believe nuclearcannoli has delivered the sweet cream with this one: I guess you have to leave it to a ‘climate science’ type to essentially come up with the idea that forecast is “surprisingly good” so long as you incorporate all the things the forecaster failed to, you know, forecast…
On another note, it appears based on the translated post linked above that the European “consensus” is crumbling. Of course, I don’t have the data to back up that statement, but hey, it’s climate science!

KR says:
June 15, 2012 at 11:53 am
There’s a good discussion of the actual forcings at http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/ – what actually happened is that forcings came in slightly below Scenario B (within ~0.1 W/m^2). See this figure comparing scenario and actual forcings: http://www.realclimate.org/images/Hansen88_forc.jpg
Again, the various scenarios are “what-if”s, the actual model is how the climate would respond to various forcing levels. On that measure Hansen 1988 holds up surprisingly well for a 25 year old model.
And, to repeat – while CO2 has progressed roughly as both scenarios A and B projected, we have not gone through Scenario A, due primarily to CFC reductions and a rather lower than expected amount of methane. Arguing that Hansen’s model was flawed based upon events that didn’t happen is a completely bogus strawman argument.
—–
CFC reductions?
Try again:
http://en.wikipedia.org/wiki/File:AYool_CFC-11_history.png
http://ds.data.jma.go.jp/ghg/kanshi/ghgp/cfcs_e.html
CFCs are more or less flat for the past 2 decades.
Why is methane less than expected? If methane is less than expected because the model expected rising temperatures to release more methane, that is a failure of the model.

What I really like about actual temperatures trending lower than Scenario C is that Scenario C was the one with CO2 emissions stopping in 2000.
The take away? Doing nothing has been better than completely stopping CO2!

davidmhoffer says:
June 15, 2012 at 12:04 pm
LOL. Like defending Hansen to anyone who has actually taken the time to look into his work and the facts in any amount of detail confers any credibility upon you. I stand by my remarks.
Your claim to have taken the time to look into his work and the facts is belied by the errors you have made. This post referred to the 1988 paper and is demonstrably seriously wrong; your comments above are wrong as well, and trying to cover up your mistakes doesn’t help.

Another way of looking at it: when Hansen made his projections, he assumed that the only non-zero forcing was CO2 to make his graph. His graph projection had already made the assumption that the effect of CFCs was zero. Scenario A is the warming that should have occurred due to CO2 with a value of ZERO for all other forcings. The additional warming effect of the CFCs should lead to even more warming than predicted by CO2 only (curve A), not less.

KR says:
June 15, 2012 at 9:50 am
“Assuming that CO2 is the only active greenhouse gas is a common, but serious, error. CFC decreases were huge.”
Did you even look at the link you provided? Figure 2 http://www.esrl.noaa.gov/gmd/aggi/aggi_2011.fig2.png
shows: CFC-12 at about 525 ppt in 1998 and 515 ppt in 2011, and CFC-11 at 250 ppt in 1998 and 235 ppt in 2011.
What’s your definition of a huge decrease? And assuming you are right, the logical conclusion is that CFC’s were the problem, and that CO2 has little, if anything, to do with climate change. I’m happy with that result, time to end the CO2 BS once and for all. The science is settled.

I would like to speak out on behalf of the red herrings.
We are not a popular species, and we have had a bad press recently. In fact, we have been smoked
BUT….
when sea levels rise by ten feet, as jimbo has promised, in 20 years, we, the underprivileged, kippers rouge, will have payback.

KR says:
June 15, 2012 at 9:50 am
A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing. Not the “A” scenario as argued here. That is a strawman argument (http://en.wikipedia.org/wiki/Straw_man). It might be reasonable to argue that Hansen didn’t predict economics very well, but then again this was a _climate_ model, not an _economic_ model.
Assuming that CO2 is the only active greenhouse gas is a common, but serious, error. CFC decreases were huge.
Hansen did use a 4.2°C per doubling sensitivity – now thought to be too high, with ~3°C the current estimate. That resulted in a slight overestimate of warming, with the model showing an overestimate of ~20% when run with actual forcings. It’s noteworthy that Hansen’s 1988 regional temperature distribution predictions (regional predictions being an issue many folks seem to raise with climate models) are quite accurate (see http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf, Plate 2).
The sensitivity estimate he used in 1988 (considered reasonable then) was rather too high, and considering actual forcings (given political and economic developments) close to the “B” scenario, Hansen’s 1988 model was surprisingly good.
=============================================================
Soooooo …… The takeaway is that the climate model from the Wizard of COz is wrong because it didn’t (couldn’t?) account for everything that could affect climate … even out to just a couple of decades. And we should trust his work to the tune of trillions of dollars and the surrender of freedoms to the UN?
What climate model prediction do you suggest we trust that accounts for all the possible variables, known and unknown, out to a hundred years? Mann’s “postdictions” based on synthetic (Mann-made) inputs?

Extrapolation (projecting out into the data nowhere) will get you in trouble every time. Always a bad idea! In fact, Extrapolation herself should be arrested with Hansen next time they are out predicting-about together, always on co2 ‘death trains’ of course.
The real fact is that those “death trains” and the vast energy they contain have extended human life expectancy more than any other single factor in the history of humankind, through inexpensive and continuous electricity.

If thirty years ago we had gathered all of the climate modelers into a room, given them coins to flip (heads the climate warms, tails the climate cools), and then given each of them a million dollars to go away, half of them would have been right, and we would have saved billions that could have been put to productive use.
[Moderator’s Note: It is not a good idea to use an e-mail address as a screen name, so I’ve taken the liberty of removing the identifying portion of your screen name. -REP]

Alcheson says:
June 15, 2012 at 12:53 pm
Because Hansen did not include CFCs in his model calculations to produce this infamous graph, he CANNOT now subtract them out to get a better fit with his CO2 graph. To do that properly, he would have to show a graph that included the warming effect of the CFCs, which would generate a warming curve significantly higher than curve A; subtracting the CFCs’ effect would then lead right back to his curve A. His graph is specifically calculated for CO2; there is NO effect of CFCs in his graph for which a subtraction would be valid.
Why don’t you actually read the work you criticize? Hansen used the data on CFCs from the Chemical Manufacturer’s Association for F-11 and F-12 in scenario A ( a growth rate of 3%/yr).
To make such incorrect statements just removes all credibility you might have had!

Alcheson says:
June 15, 2012 at 1:13 pm
Another way of looking at it: when Hansen made his projections, he assumed that the only non-zero forcing was CO2 to make his graph. His graph projection had already made the assumption that the effect of CFCs was zero. Scenario A is the warming that should have occurred due to CO2 with a value of ZERO for all other forcings. The additional warming effect of the CFCs should lead to even more warming than predicted by CO2 only (curve A), not less.
Again total rubbish, try reading page 9361!

Yet Hansen is prepared to be arrested based on his beliefs rather than observed data. Thank God we didn’t take drastic action back then otherwise he would be revered today as being correct. FAIL! No panic over a non-problem.

burkebri says:
June 15, 2012 at 1:24 pm
“If thirty years ago we had gathered all of the Climate modelers into a room and given them coins to flip, heads the climate warms, tails the climate cools, and then given each of them a million dollars to go away, half of them would have been right and we would have saved billions that could have been put to productive use”
xxxxxxxxxxxxxxxxx
Ha ha ha ha but TRUE! 🙂

Or if you maintain that Hansens’s projections included as KR says
A: 0.6 CO2, another 0.8 from CFC’s, methane, N2O, etc. ~1.45 W/m^2
B: ~0.6 CO2, 0.4 from other gases. A bit under 1 W/m^2.
C: ~0.4 total, mostly CO2. 0.4 W/m^2.
Then curves A and B are NOT showing projected warming due simply to CO2, but to CFCs and other trace gases as well. It was sold to the public as the projected warming due to CO2 only, and we were told that if we didn’t drastically cut CO2 this was our future. If you show a graph that includes many variables but claim only one (CO2) is important, isn’t that fraud?
Analysis of the graph shows that warming due to CO2 increases is NOT very much, since the reduction in CFCs alone resulted in a near-zero net forcing for the past 15 years.

So I take it that Hansen’s model results are worse than a “random walk”, but some people here seem to think Hansen’s faulty projection based on his AGW theory is more acceptable than monkeys churning out numbers, even though the monkeys would on average be closer to the answer. Talk about faith!! You guys are true believers. No matter how wrong, and consistently so, Hansen and his fellow AGW climate advocates are, you find a way to excuse their failure. So now, after a decade of negative temperature trend for the world at large, you practice cognitive dissonance, rationalizing away why it is acceptable for the results to be the opposite of what’s happening in the real world.
Funny, in the real world, when you consistently forecast in error you cease to have credibility. But then, every day, weather people display forecasts that more often than not turn out to be wrong over a seven-day period. It just goes to show that people want to know what the future holds, to the point that they are held captive by it and are suckers for the prognosticators.
Junk Science Week: Climate models fail reality test
http://opinion.financialpost.com/2012/06/13/junk-science-week-climate-models-fail-reality-test/
A 2011 study in the Journal of Forecasting took the same data set and compared model predictions against a “random walk” alternative, consisting simply of using the last period’s value in each location as the forecast for the next period’s value in that location. The test measures the sum of errors relative to the random walk. A perfect model gets a score of zero, meaning it made no errors. A model that does no better than a random walk gets a score of 1. A model receiving a score above 1 did worse than uninformed guesses. Simple statistical forecast models that have no climatology or physics in them typically got scores between 0.8 and 1, indicating slight improvements on the random walk, though in some cases their scores went as high as 1.8.
The climate models, by contrast, got scores ranging from 2.4 to 3.7, indicating a total failure to provide valid forecast information at the regional level, even on long time scales. The authors commented: “This implies that the current [climate] models are ill-suited to localized decadal predictions, even though they are used as inputs for policymaking.”
The bottom line here is that even when people are dying by the thousands of cold and hunger, you will not relent, not until you are the ones doing the dying of cold and hunger. Nature has a cruel but efficient way to weed out the foolish, unwise, unfit and defective: REALITY. Nature will herself delete you from the gene pool on her own timetable; I therefore need do or say nothing but observe the inevitable. And the monkeys crank on…
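For the curious, the Journal of Forecasting skill test described above can be sketched in a few lines. This is a hypothetical single-series illustration, not the paper’s code; the paper scores many locations at once, and its exact error metric may differ from the absolute error assumed here:

```python
def random_walk_score(observed, predicted):
    """Skill score in the spirit of the test described above: total
    absolute model error divided by the total absolute error of a
    'random walk' forecast that simply repeats the previous observation.
    0 = perfect; 1 = no better than persistence; >1 = worse than
    uninformed guessing. Scoring starts at the second period, since the
    persistence forecast for period t is observed[t-1]."""
    model_err = sum(abs(p - o) for p, o in zip(predicted[1:], observed[1:]))
    walk_err = sum(abs(prev - o) for prev, o in zip(observed[:-1], observed[1:]))
    return model_err / walk_err
```

A model that reproduces the observations exactly scores 0; one whose errors merely match those of the persistence forecast scores 1.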

dana1981 says:
June 15, 2012 at 1:23 pm
I don’t know what Prof. Jan-Erik Solheim is a professor of, but that is one exceptionally shoddy analysis. A professor should be able to do a better analysis than an amateur like me, not vice-versa.
xxxxxxxxxxxxxxxxxxxxxx
With bated breath we await your analysis / debunking.

Thanks REP. As someone with an astrophysics degree myself, allow me to apologize on behalf of the astrophysics field for this absolutely amateurish error-riddled analysis by Solheim. Though I see he’s one of Humlum’s buddies – that explains the poor analysis. Let’s see how many errors we can identify in this very short post:
1) Comparing the supposed rate of emissions increase since 2000 to Hansen’s scenarios (which begin in 1984, as I recall)
2) Thinking that a ~2 ppm annual CO2 increase is 2.5% of ~390 ppm (arithmetic fail!)
3) Ignoring all non-CO2 GHGs in Hansen’s Scenarios
4) Ignoring all other forcings (i.e. aerosols)
5) Screwing up the temperature plot (sorry no, in no data set I can find is 1998 hotter than 2005 in the 5-year running average).
Then having the balls to wrongly claim Hansen was off by 150%, and criticizing people who listen to Hansen when you make such a high density of errors yourself – wow. I’d be utterly embarrassed to have written this post.

[Moderator’s Note: It is not a good idea to use an e-mail address as a screen name, so I’ve taken the liberty of removing the identifying portion of your screen name. -REP]
===============================================================
Applause for the site’s Mod Squad!

The world is made of four archetypes of human beings, Attila, the Witch Doctor, Ballast, and the Producer.
Ballast are those who “go through life in a state of unfocused stupor, merely repeating the words and the motions they learned from others.” (p. 200)
Attila rebels against reason by focusing on the physical means for survival. Specifically, Attila chooses to physically conquer those who choose to conquer nature. Attila tries to take by force the things that others produce. Attila bases his/her actions on sensations (urges, desires, aversions). Attila focuses on materialistic pleasures, those things that do not require much use of reason, specifically conceptions. Since the conceptual aspect of consciousness cannot be wholly neglected, Attila searches for something that gives his/her life meaning or a sense of being right. The Witch Doctor provides Attila with a code of values.
The Witch Doctor rebels against reason by attempting to conquer those who conquer the Producers. The Witch Doctor does not conquer Attila physically, but psychologically. The Witch Doctor rebels against reason by denying the ability to change nature. The Witch Doctor views the objects of sense as immutable. He/she believes that his/her feelings and senses can provide infallible knowledge of the universe. If reality clashes with what the Witch Doctor believes is true, then the Witch Doctor ignores reality. The Witch Doctor sets him/herself up as the authority on truth. This is how he/she conquers others, by convincing them that they ought to deny their own thoughts and ideas and blindly accept the Witch Doctor’s ideas as true. The Witch Doctor sets him/herself up as the authority on right and wrong.
While Attila focuses on the concrete and neglects abstraction, the Witch Doctor focuses on the abstract and neglects the concrete. Neither is adequate to deal with existence. So, the Witch Doctor and Attila become dependent upon each other. Together, they use fear and guilt to control the Ballast and undermine the Producers.

I think Hansen’s graph is extremely accurate. The problem for the AGW crowd is that it’s accurate for the C scenario only – no increase of CO2 from 2000 onwards. What it proves, quite strongly, is how insignificant the effect of man’s CO2 output really is.

What a few posters don’t seem to realize is that when predictions are made no one really cares about the details. You are either right or you are wrong. You’re a hero or you’re just another failure. In fact, one could say that all the assumptions are really predictions in and of themselves. So, bottom line is Hansen’s predictions were an epic fail. All the whining by apologists won’t change that fact.

dana1981 says:
June 15, 2012 at 1:57 pm
allow me to apologize on behalf of the astrophysics field for this absolutely amateurish error-riddled analysis by Solheim.
xxxxxxxxxxxxxxxxxxx
Your apology would be better served if you and yours, apologized for Hansen….first.

KR says, “Again, the various scenarios are “what-if”s, the actual model is how the climate would respond to various forcing levels.” And you admit the forcing used was incorrect, but claim that does not invalidate the model technique.
KR, correct me if I am wrong, but what you are saying is that models are just what-if scenarios, and the values input into them, such as forcing levels, are also what-ifs.
So the models can be “accurate”, but since they are only “what-ifs” and many of the inputs are not really known, it should not be surprising that the models do not match the observations. And, furthermore, it is not surprising that they fail as predictors of future climate.
Methinks you are in violent agreement with most of the people commenting, and with Anthony. Climate models are not accurate predictors of future climate. Guess you are a skeptic like me.

I would hate to see what would have happened, if we had actually halted CO2 emissions. According to their erroneous theory and the anticipated cooling effect of zero CO2, we would already be in a LIA.
Thank goodness we did nothing and we don’t need to evacuate Canada. Now can we have a refund on AGW measures considering CO2 seems to have zero actual effects to GMT. Hansen was just another religious prophet nutcase predicting the end is near. GK

So, Jan-Erik Solheim claims that we should be above scenario A because CO2 emissions have been higher. But Hansen considered several GHGs and in reality total emissions have tracked below scenario B. Pretty clear error from Solheim there.

For apologists who try to argue that the 1988 model is “irrelevant” because the science has “moved on”, note that in the latest issue of Nature, Professor Mark Maslin and Dr Patrick Austin claim (BTW, Patrick is from the UCL Environmental Change Research Centre and Mark from the Geology Department, and they are not sceptics) that:
“None of this means that climate models are useless….Their vision of the future has in some ways been incredibly stable. For example, the predicted rise in global temperature for a doubling of CO2 in the atmosphere hasn’t changed much in more than 20 years.”
http://www.nature.com/nature/journal/v486/n7402/full/486183a.html
The standard rhetorical ploy is that climate models represent “applied physics” or “basic physics” so therefore in some unspecified sense contain a degree of infallibility. When it’s then pointed out that a particular model we can actually evaluate did a poor job, the response will be that the model is old and that the “science has moved on”. Apparently therefore “basic physics” has changed a lot since 1988. 😉

G. Karst says:
June 15, 2012 at 2:57 pm
Thank goodness we did nothing and we don’t need to evacuate Canada. Now can we have a refund on AGW measures considering CO2 seems to have zero actual effects to GMT. Hansen was just another religious prophet nutcase predicting the end is near. GK
=================================================================
As long as they caught on to the fact that cars generally have the right-of-way in parking lots, they’d be quite safe in the US. (As long as we could send our loonies to Canada first!)

Let’s try this in a different way. Hansen’s defenders are stating that he wasn’t wrong because the reduction of other greenhouse gases compensated for the rise in CO2. Therefore there is no reason to try to reduce CO2, because we stopped the rise in temps by other means. I guess CO2 isn’t as important a greenhouse gas as they’ve been saying.

Gras Albert says: June 15, 2012 at 12:56 pm
— — —
Time for equilibrium is a factor you have not used in your analysis. This appears to be an unknown quantity and is a fudge factor used to claim that the missing heat is in the pipeline. I suppose one day when CO2 levels do stop rising, it will be slightly easier to estimate what this value might be.

We seem to be on the “stopped” / stagnant growth track. Makes sense. Stagnant fecundity, stagnant economy. That means stagnant waste heat, stagnant albedo modification and oh yeah, the GHG thingey. The real horror will not be Malthusian. It will be something quite the opposite. Free fall.

A couple of further observations. The two standard defences of the Hansen 1988 prediction are:
(1) Other greenhouse gases did not rise at the levels forecast by Hansen
(2) CO2 has not increased at the level predicted by Hansen’s scenario (a)
I have in the past researched both claims. For (1), the IPCC does describe the contribution of various greenhouse gases to estimated warming. The total of all other gases combined amounts to about 10% of the contribution of CO2, if my memory is correct. (If I am wrong here, please refresh my memory.) It was, anyway, a very minor component of the forecast. If you want to be as fair as possible to Hansen, then reduce the 150% over-estimate by around 10%.
Regarding (2) there has been one academic claim I found, that the global financial crisis caused an economic downturn and this was used as an explanation for why the planet did not warm as expected. However, the majority of other studies I have looked at seem to be in agreement that CO2 increases have been larger than originally anticipated.
“The IEA assumed in 2008 that future emissions would grow from 2005 to 2030 at 1.5% per year. Actually, from 2005-2010 emissions increased by 2.4% per year (data from PBL in this PDF). The 1990 to 2010 average was a 1.9% increase per year, and 2009 to 2010 was a whopping 5.8% increase.”
http://www.iea.org/etp/explore/#d.en.27418
http://rogerpielkejr.blogspot.com.au/2012/06/lowballing-carbon-dioxide-emissions.html
There is no basis for the claim that emissions have been lower than expected or predicted, especially recently.
The final point to observe is that the last time I checked, even Real Climate had given up on defending the 1988 Hansen forecast. Those who still do, one might describe as the last bastion of “true believers”. For this group, observational data will never get in the way of their passionate convictions.
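To see how much the growth-rate differences quoted above matter, compound them over a forecast horizon; a quick sketch (the 25-year horizon is an illustrative assumption, and the rates are the IEA-assumed 1.5%/yr and observed 2.4%/yr from the quote):

```python
def compound_growth(start, annual_rate, years):
    """Level reached after `years` of compound growth at `annual_rate`
    (e.g. 0.015 for an assumed 1.5%/yr, 0.024 for an observed 2.4%/yr)."""
    return start * (1 + annual_rate) ** years

# Relative emissions after an illustrative 25 years under the two rates:
assumed = compound_growth(1.0, 0.015, 25)   # ~1.45x the starting level
observed = compound_growth(1.0, 0.024, 25)  # ~1.81x the starting level
```

Over 25 years the observed rate yields roughly 25% more emissions than the assumed one, which is why a seemingly small annual difference matters when judging which scenario applies.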

What I see is an enormous amount of rubbishing details to try to prove that people who were completely wrong were slightly less completely wrong than claimed.
Which is all a dodge to avoid the “completely wrong” part.
When I was a kid, I was told we’d all live in a waterless toxic wasteland or be frozen over by now. By people who were completely wrong.
If we had heeded their cries, we would have wrecked ourselves trying to avoid a disaster that wasn’t even *plausible*.
And here we are again. Meet the new scare, same as the old scare.
IT DIDN’T HAPPEN. That’s what matters.

dana1981 says:
June 15, 2012 at 1:57 pm
Thanks REP. As someone with an astrophysics degree myself, allow me to apologize on behalf of the astrophysics field
>>>>>>>>
Wow! You speak for all the astrophysicists in the world? You must be a VERY important person. Tell me, please: were you elected to this position? Or self-appointed?

Dana1981 is strong on passion but seems weak on facts. None of these claims (accusations?) includes citations. I do note that he offered a new explanation for why the Hansen forecast is still possibly valid:
“4) Ignoring all other forcings (i.e. aerosols)”
Please pay attention to “Fig. Aerosol optical depth” in the link below, which shows that aerosol optical depth has decreased steadily since 1988, the opposite of what would be needed if the above explanation made sense.
http://web.me.com/uriarte/Earths_Climate/Appendix_3._The_climatic_effects_of_natural_atmospheric_aerosols.html

Dana1981
“2) Thinking that a ~2 ppm annual CO2 increase is 2.5% of ~390 ppm (arithmetic fail!)”
Wow, what an epic fail for claiming epic fail on this.
The 2.5% increase does not refer to total atmospheric CO2, but to an increase in human CO2 emissions. Since human CO2 emissions are dwarfed by natural emissions, this does not translate into a 2.5% increase in actual atmospheric CO2 content. It’s way smaller than that. Hansen at least understood this, but you seem blissfully ignorant of these simple facts. Where did you get your degree? I have no degree at all, but I spotted that looper in an instant. And sure, everyone makes mistakes, but not usually when claiming that other people have made obvious and stupid epic fails!
I appreciate the entertainment value of this site more and more as time goes on.
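The distinction drawn above, growth in emissions versus growth in atmospheric concentration, can be made concrete with rough numbers. Every constant below is an approximate assumption for illustration, not a figure from the thread:

```python
GTC_PER_PPM = 2.13        # GtC of carbon corresponding to 1 ppm of CO2 (approx.)
EMISSIONS_GTC = 9.5       # recent annual human emissions in GtC/yr (approx.)
AIRBORNE_FRACTION = 0.45  # share of emissions remaining in the atmosphere (approx.)

# ppm/yr that would accrue if every tonne emitted stayed airborne:
gross_ppm = EMISSIONS_GTC / GTC_PER_PPM
# ppm/yr actually added, close to the ~2 ppm/yr observed:
net_ppm = gross_ppm * AIRBORNE_FRACTION
```

So emissions can grow at 2.5% per year while the concentration itself rises only about 0.5% per year (roughly 2 ppm on ~390 ppm), which is the commenter’s point.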

The timing of Hansen’s “Global climate change, according to the prediction of the Goddard Institute for Space Studies” is interesting. It was 1988, two years after the Challenger space shuttle disaster and the postponement of further shuttle flights. NASA were in crisis and no longer sure of their mission.
In August 1987 former astronaut Dr Sally Ride wrote a report for NASA entitled “NASA Leadership and America’s Future in Space”:
“The U.S. civilian space program is now at a crossroads…in the aftermath of the Challenger accident, reviews of our space program made its shortcomings starkly apparent. The United States’ role as the leader of space-faring nations came into serious question.
“Mission to Planet Earth is an initiative to understand our home planet, how forces shape and affect its environment, how that environment is changing, and how those changes will affect us.
“Global-scale changes of uncertain impact, ranging from an increase in the atmospheric warming gases, carbon dioxide and methane, to a hole in the ozone layer over the Antarctic, to important variations in vegetation covers and in coastlines, have already been observed with existing measurement capabilities. The potentially major consequences, either detrimental or beneficial, suggest an urgent need to understand these variations.
“We currently lack the ability to foresee changes in the Earth System, and their subsequent effects on the planet’s physical, economic, and social climate. But that could change; this initiative would revolutionize our ability to characterize our home planet, and would be the first step toward developing predictive models of the global environment.
“Space-based observations would also be coordinated with ground-based experiments and the data from all observations would be integrated by an essential component of this initiative: a versatile, state-of-the-art information management system. This tool is critical to data analysis and numerical modelling, and would enable the integration of all observational data and the development of diagnostic and predictive Earth System models.
“This global observational system would be designed to operate for decades, serviced either by astronauts or robotic systems to ensure long life and to provide the continuing data collection, integration, and analysis required by this initiative.
“NASA’s responsibilities would include the information management system and platforms and experiments described previously. Most important, NASA would also provide the supporting technology, space transportation, space support services, and much of the scientific leadership.”
http://history.nasa.gov/riderep/cover.htm
Then the following year, in June 1988, Hansen testified before a Senate sub-committee about global warming and “sounded the alarm with such authority and force that the issue of an overheating world has suddenly moved to the forefront of public concern”, as reported by The Times-News, August 30, 1988.
“At a 46-nation Conference on the Changing Atmosphere in Toronto shortly after Hansen’s testimony, scientists and policy makers urged development of energy consumption policies that would drastically reduce carbon emissions.
“One of his major projects (after joining NASA) was the spacecraft study of the Venusian atmosphere, where a rampant greenhouse effect has produced surface temperatures hot enough to melt lead.
“He recalled that on previous occasions the Office of Budget Responsibility, which reviews official statements that have implications for the budget, had forced him to delete from the text any recommendations…rather than remove such statements he testified as a private citizen….not as a government employee.
“NASA is an agency without a mission,” Oppenheimer said. “If it was smart, NASA would treat Hansen as a star. Here is a problem and a mission that the public might really get behind.”
http://news.google.com/newspapers?id=ukgaAAAAIBAJ&sjid=YiYEAAAAIBAJ&pg=6677,7151240&dq=james+hansen+1988&hl=en

RC update 2012: “We noted in 2007 that Scenario B was running a little high compared with the forcings growth (by about 10%), using estimated forcings up to 2003 (Scenario A was significantly higher, and Scenario C was lower)…
“As we stated before, the Hansen et al ‘B’ projection is running warm compared to the real world (exactly how much warmer is unclear).”
http://www.realclimate.org/index.php/archives/2012/02/2011-updates-to-model-data-comparisons/
Is it really unclear after another 8 years of observations? When in 2007 they also said, “Maybe with another 10 years of data, this distinction will be possible. However, a model with a very low sensitivity, say 1 deg C, would have fallen well below the observed trends.”
Perhaps they just don’t want to say. Is this evidence for a low sensitivity?

A fascinating, and incorrect, post. Given actual CO2 emissions and the reduction in greenhouse-active CFC’s due to the Montreal Protocol, forcings have been closest to the “B” scenario – about 5-10% below “B” scenario total forcing. Not the “A” scenario as argued here. That is a strawman argument
…
The sensitivity estimate he used in 1988 (considered reasonable then) was rather too high, and considering actual forcings (given political and economic developments) close to the “B” scenario, Hansen’s 1988 model was surprisingly good.

Seriously?
You mean to say that the observed temperature line is anything at all like the scenario B line? Sorry, I just reject that. You can wiggle all you like, it is just not comparable.

@dana1981
“conradg – you are correct, my mistake. That’s 1 error for me and 5 for Solheim.”
============================
Do you want to itemise the ‘errors’ that you think you see here? Because all I can see from your postings are speculations/opinions without citations.
Although I do agree with you that I can’t see how you get to a 1.9 ºC difference, unless possibly you take the difference from the actual temperature at a particular point in time rather than the trend, and then try to estimate the presumed model output from the actual exponential CO2 increase. (The real difference is closer to 1 ºC.) Neither tactic is fair, though, and plays into the hands of apologists. I would expect (and have seen) that sort of behaviour from Real Climate or Skeptical Science, but sceptics should be held to a higher standard.

Another error, maybe due to an incorrect translation – the temperature plotted in black is not a 5-year rolling average. It is annual data, from HadCRUT3 (which has of course been replaced by HadCRUT4), as near as I can tell.
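Since the dispute here is whether the black curve is annual data or a 5-year running mean, here is a minimal sketch of the smoothing in question (plain Python, centered window):

```python
def rolling_mean(series, window=5):
    """Centered running mean over `window` points (5 years here);
    the output is window-1 points shorter than the input."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```

Under a 5-year mean, a single warm spike such as 1998 is damped by a factor of the window length, which is why a lone hot year should not stand above its neighbours in a properly smoothed series.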

Here’s a relevant paper. (I think there was a WUWT thread on it, back in the day):

tonyc (21:30:19) :
A friend posted this note about a recent peer reviewed paper in Physics Reports detailing that CFCs are to blame for warming observed in the 20th century. http://network.nationalpost.com/np/blogs/fpcomment/archive/2010/01/09/the-ozone-hole-did-it.aspx
The abstract for the paper: http://network.nationalpost.com/np/blogs/fpcomment/archive/2010/01/09/the-ozone-hole-did-it.aspx
Cosmic-ray-driven electron-induced reactions of halogenated molecules adsorbed on ice surfaces: Implications for atmospheric ozone depletion
Qing-Bin Lua
Department of Physics and Astronomy and Departments of Biology and Chemistry, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Accepted 26 November 2009.
editor: S. Peyerimhoff.
Available online 3 December 2009.
Abstract
The cosmic-ray driven electron-induced reaction of halogenated molecules adsorbed on ice surfaces has been proposed as a new mechanism for the formation of the polar ozone hole. Here, experimental findings of dissociative electron transfer reactions of halogenated molecules on ice surfaces in electron-stimulated desorption, electron trapping and femtosecond time-resolved laser spectroscopic measurements are reviewed. It is followed by a review of the evidence from recent satellite observations of this new mechanism for the Antarctic ozone hole, and all other possible physical mechanisms are discussed. Moreover, new observations of the 11 year cyclic variations of both polar ozone loss and stratospheric cooling and the seasonal variations of CFCs and CH4 in the polar stratosphere are presented, and quantitative predictions of the Antarctic ozone hole in the future are given. Finally, new observation of the effects of CFCs and cosmic-ray driven ozone depletion on global climate change is also presented and discussed.
Keywords: Cosmic rays (CRs); Dissociative electron transfer (DET); Chlorofluorocarbons (CFCs); Ice surfaces; Ozone hole; Climate change
PACS classification codes: 94.20.Wq; 82.30.Fi; 82.30.Lp; 34.80.Ht; 92.60.hd; 92.60.Ry

There has not actually been such a 0.6 degree Celsius temperature rise since the 1970s.
Rather, as can be seen looking at UAH satellite data, such as http://www.drroyspencer.com/latest-global-temperatures/ , meaningful global temperature increase since 1979 has been 0.3 to 0.4 degrees Celsius at most.
Compared to Hansen’s prediction (even his scenario B being about 1+ degrees Celsius, let alone scenario A and above), that is multiple times less. That would fit a climate sensitivity accordingly less than his claims and also far less than the climate sensitivities still used by alarmists today in their current predictions. That is even before what would happen if one rather takes into account natural influences, like how the AMO/PDO predominantly increased over that period, leaving less of a temperature signal to be accounted for by manmade causes, let alone also considering solar and GCR changes. (Actual climate sensitivity: http://www.sciencebits.com/OnClimateSensitivity ).
Without high climate sensitivity, without being able to claim future warming greater than the beneficial Holocene Climate Optimum, the basis of CAGW collapses.
The global warming movement utterly hates the term CAGW because it blows the dishonest binary thought fallacy they love* by rather breaking the matter down into its true nature: For what they want, global warming has to be:
Catastrophic, not beneficial like the Minoan Warm Period, etc. … else nobody cares.
Anthropogenic primarily … else that defeats the point**
Global
Warming
* (That was the whole basis of the Doran & Zimmerman 2009 dishonesty, for example: getting a 97% “consensus” from 2 survey questions asking whether the Earth has warmed since what was the Little Ice Age, and whether humans have a scientifically significant, as in non-zero, effect — when even having a black or white roof technically has a non-zero effect).
** The point is trying geoengineering via an agent (CO2) chosen for exceptionally low, ineffective radiative forcing per ton (some other agents have orders of magnitude more cooling radiative effect per ton, just as some have orders of magnitude more warming effect per ton), for maximum harm from its rationing to the industrial basis of the prosperous high-consumption modern civilization which these guys hate, and for maximum complex biological side effects, as in harm to agriculture and plant growth from its reduction.

Phil. says:
June 15, 2012 at 1:28 pm
Why don’t you actually read the work you criticize? Hansen used the data on CFCs from the Chemical Manufacturer’s Association for F-11 and F-12 in scenario A ( a growth rate of 3%/yr).

So the Montreal Protocol came out in 1987, yet Hansen in 1988 assumed a growth rate of 3% per year for CFCs? That Hansen guy is really sharp.

@Bill Illis says:
June 15, 2012 at 5:08 pm
GHG assumptions (Scenario A had CO2 at 393 ppm for 2011 while B was at 391 ppm – Actual was 390.44):
===================
While I sometimes only read WUWT posts in the hope of locating a comment or two from you… 😉
I note that NOAA has CO2 levels at 2011 @ 394 ppm and 2012 @ 396.78 ppm: http://www.esrl.noaa.gov/gmd/ccgg/trends/
Which still implies that actual CO2 output has been above scenario A if I read you correctly.

The warmists here are arguing that based on “actual” forcings scenario B is the correct one, and that it is pretty close if you correct for a mistakenly too-high sensitivity.
But one should look at the source of the “forcing” corrections. Since Hansen’s 1988 predictions it has been clear that climate models overstate the temperature increase. The choices were to re-evaluate the models and therefore reduce sensitivity, or to find other factors which would change the forcings. So since 1988 alarmist climate scientists have been searching for anything which could plausibly lower the net forcing. They have come up with Chinese aerosols and now CFCs. Basically they are tuning the inputs to force the output to get closer to reality.
The logic is, “The models are right but reality does not match their output, so what other factor have we missed that might fix the growing divergence?” Of course the idea that the models are wrong is not even considered.
This mind set is a classic example of confirmation bias.

Will Nitschke says:
June 15, 2012 at 3:40 pm
A couple of further observations. The two standard defences of the Hansen 1988 prediction are:
(1) Other greenhouse gases did not rise at the levels forecast by Hansen
(2) CO2 has not increased at the level predicted by Hansen’s scenario (a)
I have in the past researched both claims. For (1) the IPCC does describe the contribution of various greenhouse gases to estimated warming. The total of all other gases combined amounts to about 10% of the contribution of CO2 if my memory is correct. (If I am wrong here, please refresh my memory). It was, anyway, a very minor component of the forecast. If you want to be as fair as possible to Hansen, then reduce the 150% over-estimate by around 10%
To refresh your memory check out Fig 2 in the 88 paper, you’ll see that the forcing for the ‘other trace gases’ is far more than “10% of the contribution of CO2”.

jknapp says:
June 15, 2012 at 5:54 pm
“The warmists here are arguing that based on “actual” forcings scenario B is the correct one and that it is pretty close if you correct for a mistakenly too-high sensitivity.”
I know you’re just remarking on that in passing and may not have seen my recent prior comment yet. However:
Even if just pretending that temporarily for the sake of simple argument, Hansen’s scenario B is about 1 degree. Actual meaningful net warming since the end of the 1970s is 0.3 to 0.4 degrees Celsius, as can be seen from this: http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2012.png
If absurdly, we pretended there was zero warming from natural causes meanwhile (no PDO/AMO rise existing, etc.), that would still make Hansen’s climate sensitivity around a factor of 3+ times too high. The climate sensitivities used by alarmists today are not 3+ times less than Hansen’s. That’s still just blatant creative lying and no excuse.
“Basically they are tuning the inputs to force the output to get closer to reality.”
Indeed. It would get even more ludicrous if they were to try to explain relatively non-fudged temperature data like the pattern in this with such: http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif
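The back-of-envelope logic in the comment above can be sketched in a few lines of Python. The numbers are the commenter's own figures plus the 4.2°C-per-doubling sensitivity commonly attributed to Hansen's 1988 model, so treat them all as illustrative assumptions rather than sourced values:

```python
# Sketch of the commenter's reasoning: scale the model's assumed climate
# sensitivity by the ratio of observed to predicted warming.
hansen_sensitivity = 4.2   # deg C per CO2 doubling (assumed 1988 model value)
predicted_warming = 1.0    # deg C, roughly Scenario B by 2012 (commenter's figure)
observed_warming = 0.35    # deg C, midpoint of the 0.3-0.4 UAH range cited above

# If warming scales roughly linearly with sensitivity, the data imply:
implied_sensitivity = hansen_sensitivity * observed_warming / predicted_warming
overestimate_factor = predicted_warming / observed_warming
print(round(implied_sensitivity, 2), round(overestimate_factor, 2))
```

With these inputs the overestimate factor comes out a little under 3, which is where the commenter's "factor of 3+" claim comes from; the result obviously moves with whichever warming figures one chooses to plug in.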

William,
On the Observational Determination of Climate Sensitivity and Its Implications
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea
surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985-1999) and CERES (2000-2008) satellite instruments. …
I think the importance of climate sensitivity is that it provides the missing link in heat absorption. If you build in the sensitivity, then there is an unexplained sink somewhere to frighten politicians.
However, satellite imagery shows the extra CO2 is being absorbed by plants, such as old forest trees growing wider in their girth. Other studies tend to confirm this, so that countries like Australia are actually net CO2 absorbers. http://www.john-daly.com/co2-conc/ahl-co2.htm
In that respect, this has been a brilliant manoeuvre to keep people off the track about the actual effect of increasing CO2.

Henry Clark says:
June 15, 2012 at 6:19 pm
Even if just pretending that temporarily for the sake of simple argument, Hansen’s scenario B is about 1 degree.
Why would one pretend that instead of using the actual value he shows in Fig 3?

Phil. says:
June 15, 2012 at 6:06 pm
To refresh your memory check out Fig 2 in the 88 paper, you’ll see that the forcing for the ‘other trace gases’ is far more than “10% of the contribution of CO2″.
Then why should we be bothered about worrying about CO2 at all. It appears to be harmless. Hansen said this: (see section 4.1 of the paper).
“Scenario A assumes that growth rates of trace gases emissions typical of the 1970s and 1980s will continue indefinitely;”
CO2 actually did increase at a greater rate from 1988-2011 than from 1970-1988 (if only slightly), and yet no warming. Much ado about nothing, it seems.

[like with Dana Nuccitelli’s sanctimonious lecture, yours is even worse. I’m just not interested in engaging you anymore given the way you treat me and other WUWT denizens elsewhere – be as upset as you wish – Anthony]

dana1981 says:
June 15, 2012 at 6:16 pm
The problem with ‘skeptics’ here is that all you care about is “Hansen was wrong” (one commenter above admitted exactly that). Well of course his model was wrong – all models are wrong. The useful question is not whether it was “wrong”, but what we can learn from it.
**********
I think we have learned that we shouldn’t be spending money on this nonsense. People can believe whatever they want, but when governments and NGO’s start imposing their belief system on private citizens, through taxes and other costs, based on little more than wild assumptions and doomsday scenarios, we have a right and a duty to question them.
You admit, “Hansen was wrong.” Has Hansen ever admitted this? Why are we wasting billions of dollars on this folly? It boggles the mind.

dana1981 says:
June 15, 2012 at 6:16 pm
“If you actually check to see by how much it overestimated the temperature change and calculate the equivalent sensitivity for real-world temperature changes, it’s right around 3°C.”
===============
You sound like my investment adviser.

If, as KR and others say, Hansen’s predictions used too high a sensitivity etc., how do they continue to remain silent against claims from the alarmist community that “It’s worse than we thought”, when clearly newer data is revealing it to be very much better “than they thought”?

scarletmacaw says:
June 15, 2012 at 5:52 pm
Phil. says:
June 15, 2012 at 1:28 pm
Why don’t you actually read the work you criticize? Hansen used the data on CFCs from the Chemical Manufacturer’s Association for F-11 and F-12 in scenario A ( a growth rate of 3%/yr).
So the Montreal Protocol came out in 1987, yet Hansen in 1988 assumed a growth rate of 3% per year for CFCs? That Hansen guy is really sharp.
It didn’t go into effect until 1989, the scenario A assumed the continuation of the existing growth rate, scenario B assumed a reduction in emissions with elimination of CFC emission by 2010, scenario C assumed an earlier elimination by 2000.

dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model were, and continue to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
The problem with ‘skeptics’ here is that all you care about is “Hansen was wrong” (one commenter above admitted exactly that). Well of course his model was wrong – all models are wrong. The useful question is not whether it was “wrong”, but what we can learn from it. And what we can learn from it is that climate sensitivity is probably less than 4.2°C, and probably close to 3°C. The problem with ‘skeptics’ is that you get stuck at “Hansen was wrong” and thus don’t learn anything useful from the exercise. And if you’re not going to learn anything useful, then why do it?
================================================================
ME: I admit. I haven’t learned anything new. I already knew that it’s foolish to bet a few trillion dollars on a surely wrong thing.
=================================================================
Well, of course the answer to that question is to reaffirm your belief that “Hansen was wrong”. But if you were real skeptics, you would actually try to learn something useful from the exercise. The problem is, what you learn is very inconvenient for your beliefs, so you just stop thinking before it becomes inconvenient.
===================================================================
ME: (See preceding.) I already knew not to bet on a losing proposition before it became “inconvenient”.
Should we shut down coal powered power plants based on a climate model that is WRONG?
Should we mandate mercury filled light bulbs in children’s bedrooms based on a climate model that is WRONG?
Should North Carolina spend millions of dollars to protect their coast from sea level rise predictions based on a climate model that is WRONG?
Should we surrender authority to the UN based on a climate model that is WRONG?
Have the warmists learned the answer yet?
We already knew.

Hansen’s 1988 Scenario B forecasts are really no different than the IPCC forecasts from its first report in 1990. So, while some people like to “believe” the sensitivity was too high or “make up any other rationale”, the global warming forecasts are farther off the more you go back in time.
The IPCC AR5 forecast submitted this year is even off by 0.2C even though they knew the actual temperatures for 2010/2011.
The forecast from one year ago is too high and the forecast from 20 years ago is 100% too high. Play with the numbers any way you like. But ALL the global warming forecasts are too high, whenever they were made.

All of this discussion appears to miss some very important points. Forcings, as observed, as driven by economics, are around Scenario B, the most likely outcome according to Hansen 1988.
The data is available (http://www.esrl.noaa.gov/gmd/aggi/), as I noted before, for anyone taking the effort to look. Comparing temperature observations to Scenario A is really quite absurd, and IMO reveals those making such a strawman comparison as more interested in rhetoric than science.
Enough said.

As soon as I see the name “Hansen” I lose interest. He has been wrong about everything I have ever investigated.
What has the man been right about? Can anyone name a single thing?
The man manages to be wrong about things it is very hard to be wrong about. For example, I hate strip mining, but as soon as Hansen speaks, he makes strip mining look saintly.
The very fact this guy is not in jail, and instead has status and wealth, seems proof that being wrong pays.
Meanwhile, where does honesty get the average guy? Fired, more likely than not, and therefore the honest guy buttons his lip. However, the ballot is still a secret ballot. It is still the place an honest guy is allowed to speak the truth.
I hope and pray Truth prevails, and decent and honest people vote out those who allowed the likes of Hansen to —self snip—.

“What would be interesting is this. It would be interesting to take a newer version of ModelE (sensitivity is 2.7) and re-run Hansen’s experiment with the following inputs”
1) scenario A, B and C
2) actual forcings during the observation period.
—
What would be more interesting is if we actually knew what differential equations Model E was solving (along with their numerical methods). The code is poorly documented junk…

“The IEA assumed in 2008 that future emissions would grow from 2005 to 2030 at 1.5% per year.” “Actually, from 2005-2010 emissions increased by 2.4% per year (data from PBL in this PDF). The 1990 to 2010 average was a 1.9% increase per year, and 2009 to 2010 was a whopping 5.8% increase.”
Just a couple of questions. Would that include the “missing” 1.4-billion tonnes here in 2010?
And if China got it so wrong in 2010, what other years did they get wrong? http://www.scientificamerican.com/article.cfm?id=china-emissions-study-suggests-clim

Reg Nelson says:
June 15, 2012 at 7:01 pm
dana1981 says:
June 15, 2012 at 6:16 pm
The problem with ‘skeptics’ here is that all you care about is “Hansen was wrong” **********
============
I think we have learned that we shouldn’t be spending money on this nonsense. People can believe whatever they want, but when governments and NGO’s start imposing their belief system on private citizens, through taxes and other costs, based on little more than wild assumptions and doomsday scenarios, we have a right and a duty to question them.
You admit, “Hansen was wrong.” Has Hansen ever admitted this? Why are we wasting billions of dollars on this folly? It boggles the mind.
================
Sorry, Reg. I didn’t see you’d already made the same point I went for.

Gunga Din says:
June 15, 2012 at 8:07 pm
Sorry, Reg. I didn’t see you’d already made the same point I went for.
******
No worries. The more people that push back against this BS the better.
Good on ya, Gunga Din.

[snip. By now you should know that labeling those who don’t share your view as “denialists” violates this site’s Policy. It almost appears to be deliberate, thus your entire comment is deleted. Feel free to go on the internet and complain. But not here, you are wearing out your welcome. ~dbs, mod.]

KR says:
June 15, 2012 at 7:56 pm
All of this discussion appears to miss some very important points.
Forcings, as observed, as driven by economics, are around Scenario B, the most likely outcome according to Hansen 1988.
=======================================================
I think you’re right. Hansen’s WRONG model has led to burning money and is driving the economy into the ash heap. Is that the outcome he wanted?

Ric Werme says:
June 15, 2012 at 4:44 pm
No, he’s 60% wrong – see my 10:55 am comment. 60% of 1.5 ° C is 0.9 ° C, and his projection was 0.9 ° C too high.
However, below the diagram we read: In reality, the increase in CO 2 emissions by as much as 2.5%, which would correspond to the scenario above the blue curve. The black curve is the ultimate real-measured temperature (rolling 5-year average). Hansen’s model overestimates the temperature by 1.9 ° C, which is a whopping 150% wrong.
I must admit this part is very unclear, but it seems that since scenario A is for a 1.5% increase, then another line should have been drawn higher up to show what should have happened according to Hansen if the increase was 2.5%. Had that been done, we may have seen the 1.9 C at 2011. Or am I missing something?
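Part of the "150% vs 60%" disagreement in this exchange is simply a question of which number the 0.9°C overshoot is divided by. A minimal sketch, using the post's own figures (1.5°C predicted, 0.6°C observed) as inputs:

```python
# Two conventions for "percent wrong", using the post's figures.
predicted = 1.5  # deg C rise the post attributes to an extrapolated Scenario A
observed = 0.6   # deg C actually measured since the 1970s, per the post
error = predicted - observed  # 0.9 deg C overestimate

pct_vs_observed = 100 * error / observed    # error relative to observed warming
pct_vs_predicted = 100 * error / predicted  # error relative to the prediction
print(round(pct_vs_observed), round(pct_vs_predicted))
```

Dividing by the observed 0.6°C gives 150% (the post's convention); dividing by the predicted 1.5°C gives 60% (Ric Werme's convention). Both describe the same 0.9°C overshoot, which is why the two commenters can quote such different percentages from the same graph.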

Phil. says:
June 15, 2012 at 7:22 pm
It didn’t go into effect until 1989, the scenario A assumed the continuation of the existing growth rate, scenario B assumed a reduction in emissions with elimination of CFC emission by 2010, scenario C assumed an earlier elimination by 2000.

Reagan signed the Montreal Protocol in 1987. There were no major CFC producing countries opposing it. By 1988 there was no reason to expect CFC production to continue as usual. Like I said, Hansen was clearly not the sharpest tack in the shed. Either that or he purposely misled the public by displaying a scenario he knew was not going to happen.

Climate science has become a religion for the modern age. You must believe, because modeling-as-theorizing is secular “theology.” Now that the crucified 1988 report has been found dead, the true believers want to believe so passionately that they poke at it again and again looking for signs of life. The scientific method never asserts that theories and models are facts, but rather propositions which subsequent observations MUST validate. When the validation fails, the proposition was false. This is as true of population worriers like Ehrlich, whose “theology” has been repeatedly proven wrong, and still true believers gather together to lie under their purple blankets and wait for the mother ship. Science demands verified proof by those who would disagree; when propositions are verified, they then become fact. Climate science operates the other way around, and then they pass the collection plate hoping for more money as the choir sings “Praise Hansen”.

i really admire the mods here. responsive rather than reactive. helpful, thoughtful, perspicacious.
sometimes they’re the best part of a thread – at least, the part that provides some rational basis for optimism in a world gone stupid. thanks for being.

In the REAL WORLD, if sales projections were this far off of actual sales, the product would be deemed a failure and would be abandoned.
If the CEO kept pouring $millions into the failed project on focus group studies, new ad campaigns, package redesigns, product tweaking, etc., the project could eventually bankrupt the company.
In business and in science, you can’t always get it right. Moreover, if you don’t have an utter failure every now and then, it means you’re complacent and your competition will eventually eat your lunch. The key is knowing when to pull the plug on a failed project.
The bottom line is that the CAGW theory is complete and utter failure. Just because the product still sells well in Washington DC, is no justification to keep selling the failed product worldwide, as it’s tanking in all other markets.
It’s time to cut bait, abandon the CAGW project and FIRE the managers responsible for keeping a failed project going for as long as it has, while wasting $TRILLIONS needlessly.
Dr. Hansen…..YOU’RE FIRED!!!!

Jerome
“You mean to say that the observed temperature line is anything at all like the scenario B line? Sorry, I just reject that. You can wiggle all you like, it is just not comparable.”
See http://tinyurl.com/29e53y, Figure 2. Forcings are in Figure 1. Observations have been very close to Scenario B. You are, quite frankly, contradicted by the data.

That’s just a wiggle. A “Let’s change some parameters so it ‘fits’ a bit better, and we’re still right”, sort of wiggle.
What’s even more telling is that in the last 5 years (since that ‘wiggle’ post on Unreal Climate), we are still following or lower than Scenario C, that was the one with NO emissions after 2000. I fail to see how that ‘data’ contradicts what I said, really.

“SAMURAI says:
June 15, 2012 at 8:58 pm
It’s time to cut bait, abandon the CAGW project and FIRE the managers responsible for keeping a failed project going for as long as it has , while wasting $TRILLIONS needlessly.”
Regardless of our opinions of Hansen, the money is certainly not being wasted by the people on the receiving end. I bet the eyes of the alarmists lit up like the sun when they thought this scam up.

dana1981 says:
June 15, 2012 at 1:57 pm
“2) Thinking that a ~2 ppm annual CO2 increase is 2.5% of ~390 ppm (arithmetic fail!)”
conradg says:
June 15, 2012 at 4:11 pm
“The 2.5% increase does not refer to total atmospheric CO2, but to an increase in human CO2 emissions.”
===========================================================================
I don’t think either of you is right. If you read Appendix B of Hansen’s 1988 paper, you find his 1.5% per year refers to the RATE OF CHANGE OF THE INCREASE in CO2. The second derivative, if you will.
Part of the problem is that Hansen is not really clear about what he means. In the text, under paragraph “4.1 Trace Gases”, he says “Scenario A assumes that growth rates of trace gas emissions….” In air pollution studies, “emissions” generally refers to the amount produced by some source, say a smokestack. Units are mass per time or mass per power output. So it would be easy to think he means the mass increase of anthropogenic CO2.
However, in Appendix B it becomes clear he is talking about the RATE OF CHANGE OF THE YEARLY INCREASE in atmospheric CO2. For instance in Appendix B, when talking about Scenario B, he says:
“In scenario B the growth of the annual increment of CO2 is reduced from 1.5% per year today to 1% per year in 1990, 0.5% per year in 2000, and 0 in 2010; thus after 2010 the annual increment in CO2 is constant, 1.9 ppmv per year.”
So the question is: which scenario best matches the actual rate of change of the increase in the atmospheric concentration of CO2 over the past 30 years?
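The Appendix B schedule quoted above can be sketched as a small simulation: the growth rate of the annual CO2 increment steps down by decade until it hits zero in 2010. The starting increment of 1.5 ppmv/yr is an illustrative assumption (the paper states only the growth rates and the 1.9 ppmv/yr post-2010 constant), so this is a sketch of the scheme, not a reproduction of the paper's numbers:

```python
# Sketch of Scenario B's CO2 schedule as quoted from Appendix B: the growth
# rate of the *annual increment* steps down by decade, reaching zero in 2010.
def scenario_b_increments(start_year=1988, end_year=2015, increment0=1.5):
    """Return {year: annual CO2 increment in ppmv} under the stepped schedule.

    increment0 is an illustrative assumption, not a value from the paper.
    """
    def growth_rate(year):
        if year < 1990:
            return 0.015   # 1.5%/yr "today" (late 1980s)
        if year < 2000:
            return 0.010   # 1%/yr from 1990
        if year < 2010:
            return 0.005   # 0.5%/yr from 2000
        return 0.0         # constant increment after 2010

    increments, inc = {}, increment0
    for year in range(start_year, end_year + 1):
        increments[year] = inc
        inc *= 1 + growth_rate(year)
    return increments

inc = scenario_b_increments()
```

Under these assumptions the increment rises from 1.5 to roughly 1.8 ppmv/yr by 2010 and is flat thereafter, mirroring the shape of the schedule the paper describes. Answering the commenter's question then means comparing the observed year-on-year increments at Mauna Loa against this kind of schedule, not against emissions growth.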

KR says:
June 15, 2012 at 7:56 pm
All of this discussion appears to miss some very important points.
Forcings, as observed, as driven by economics, are around Scenario B, the most likely outcome according to Hansen 1988.
The data is available (http://www.esrl.noaa.gov/gmd/aggi/), as I noted before, for anyone taking the effort to look. Comparing temperature observations to Scenario A is really quite absurd, and IMO reveals those making such a strawman comparison as more interested in rhetoric than science.
Enough said.

Not quite enough. Supposing for the moment that the forcings are indeed closest to Scenario B, the temperatures are at Scenario C and there is quite a divergence from the Scenario B temperatures. When Gavin put up his post in 2007, you could (if you were generous) believe that the temps and forcings were somewhat close. With 5 more years of data, well, not so much.
The bigger issue is the own goal you scored by breaking down the forcing contributions. It was immediately apparent that CO2 wasn’t the main player, and given the temp record to date, it is STILL overestimated.

KR mentions,
“Forcings, as observed, as driven by economics, are around Scenario B, the most likely outcome according to Hansen 1988.”
What is most amusing in your desperate defense, KR, is that:
1) You know very well that CO2 is the primary claimed driver of global warming in that paper. This is obvious since its growth is the main factor that shapes those scenario curves in the paper.
2) You try to blame reductions in CFCs as the reason why the temperatures have not gone according to predictions. Yet CFCs have very little contribution to climate change, and furthermore their abundance in the atmosphere has changed little since 1988.
At the end of the day, the observed CO2 emissions growth has been well ABOVE scenario A and yet the temperature is below scenario C. You really can’t get it any more wrong than this paper.
Ultimately, the CO2 forcing was vastly over-estimated, as the IPCC tells us. It’s just unfortunate that billions of dollars have been wasted on these rubbish models.

OssQss says:
June 15, 2012 at 8:32 pm
Hey, this is America. Everyone has a chance.
Even those who were Rock Stars in their time!
‘Fess up — you picked that vid because of the hockey stick at 1:12, didn’t you?

Steven mosher: To evaluate Hansen’s projection we first have to find the scenario that is closest to the projected forcings. That is scenario B, not C and not A.
Steven, that is incorrect.
Here is from the original paper
Scenario A => Annual greenhouse growth rate of 1.5% of the 1980s emission
Scenario B => Annual greenhouse growth rate constant at the 1980s level
Scenario C => Annual greenhouse growth rate decreases after the 1980s such that it ceases to increase after 2000
Steven, CO2 emissions have not been “constant at the 1980s level”, so Scenario B does not match observation. It is Scenario A that matches observation.
And here is comparison of Hansen’s projections with observations => http://bit.ly/JPvWx1

Scenario B => Annual greenhouse growth rate constant at the 1980s level
The greenhouse growth rate has been increasing by about 1.64% since the 1980s, slightly more than Scenario A’s growth rate of 1.5%.
Scenario A is the one closest to the reality.
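The "about 1.64% since the 1980s" figure quoted above is a compound annual growth rate. A minimal sketch of that computation, with illustrative (not sourced) emissions numbers chosen only to show how a rate near Scenario A's 1.5%/yr falls out:

```python
# Compound annual growth rate (CAGR) -- the kind of quantity Scenario A's
# 1.5%/yr emissions growth figure refers to.
def cagr(start_value, end_value, years):
    """Compound annual growth rate, as a fraction per year."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative only: hypothetical emissions rising from 20 to 31 GtCO2 over
# 27 years give a rate a little above Scenario A's 1.5%/yr.
rate = cagr(20.0, 31.0, 27)
```

Whether the observed rate lands nearer 1.5% (Scenario A) or flat (Scenario B) depends entirely on which emissions series and endpoints one feeds in, which is much of what this thread is arguing about.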

D. J. Hawkins says:
“It was immediately apparent that CO2 wasn’t the main player, and given the temp record to date, it is STILL overestimated.”
So long as they stay above ZERO they are over-estimating !!

I read through this entire post with all the to-ing and fro-ing, and it would seem to me that two things are evident. 1. There are some true believers that will go to any length to defend their hero.
2. Their hero ceased to be a real scientist many years ago.

KR says:
June 15, 2012 at 11:59 am
@ me: “These experiments begin in 1958 and include measured or estimated changes in atmospheric CO2, CH4, N2O, chloroflourocarbons (CFCs) and stratospheric aerosols…”
I have no idea where you got that misconception. But it’s quite wrong.
Good catch — I read the caption on Fig. 1 of the post, rather than Hansen’s paper. Which brings up an interesting point: Hansen’s control run was based on estimated 1958 values, in which CFC-12 (Freon) was 50.3 parts per trillion (pptv). In 1986, CFC-12 was estimated to be 400 pptv. http://www.ideaconnection.com/solutions/516-Chlorofluorocarbons-as-an-environmental-hazard.html
Hansen used a baseline CFC figure from a decade before when the figure from only two years prior was eight times as large, and he still came up short in all three scenarios.
But wait! CFC-12 is the most potent GHG *evah*! Oh, look — it has “20,000 times carbon dioxide’s capacity to trap heat in the atmosphere”! http://www.ideaconnection.com/solutions/516-Chlorofluorocarbons-as-an-environmental-hazard.html
Oh, wait — “Molecule for molecule, chlorofluorocarbons (CFCs) are the most potent of greenhouse gases. One type of CFC, CFC-12 or ‘Freon-12’ as it is known by its trade name, is 17,700 times more potent than carbon dioxide” http://education.arm.gov/studyhall/ask/past_question.php?id=407
Oh, wait again — CO2 has 72.369% of the heat retention characteristics of water vapor while CFCs only have 1.432% of the heat retention characteristics of water vapor! http://www.geocraft.com/WVFossils/greenhouse_data.html
Oh, and wait yet again — UNESCO-EOLSS doesn’t even consider CFC a greenhouse gas — http://www.eolss.net/ebooks/sample%20chapters/c06/e6-13-01-01.pdf
So, when you guys can’t even decide if CFC is an actual greenhouse gas, let alone how much effect it has (in parts per friggin’ *trillion*, no less), color me less than compelled by the narrative.
KR says:
June 15, 2012 at 11:53 am
Arguing that Hansen’s model was flawed based upon events that didn’t happen is a completely bogus strawman argument.
Who are you trying to kid? Hansen based all three scenarios on things that either would or wouldn’t happen.

Error! Error! “Hansen used a baseline CFC figure from a decade before when the figure from only two years’ prior was eight times as large…” should have been “Hansen used a baseline CFC figure from three decades before…”
That’s what I get for typing while the security guys are talking about a cappella doo-wop songs…

chicagoblack says:
June 15, 2012 at 9:10 am
Irrelevant. The Farmer’s Almanac predicted last year would be colder but it wasn’t. Modeling accuracy changes over time and improvements are obviously made, just like with other measurement techniques, climate or otherwise.
Irrelevant? Modeling accuracy improves over time and improvements are made?
What’s irrelevant are Climate Change models. Period. If their accuracy improves over time then wake me up when they aren’t a giant FAIL.

AUSphysicist says:
June 15, 2012 at 10:50 pm
//////////////////////////////////////////////.
A succinct and very good summary highlighting the flaws in the paper/projections. In a nutshell all that one needs to know.
Real-time empirical observation strongly supports the view that CO2 sensitivity has been widely overestimated by the IPCC and its team, and real-time empirical observation presently leads to the conclusion that it is extremely doubtful that CAGW exists and that any future CO2-caused warming will be modest and hence of no great concern.

So:
– The article claims emissions were above the A scenario by considering *only* CO2 emissions, and not all of the emissions actually included in scenarios A, B and C. Once that is considered, scenario B is the closest. So, the article contains a demonstrable error there.
– The article doesn’t make it clear which temperature series is being used, so we are left to guess. It looks like HadCRUT3 to me, which would be a poor choice given that Hansen’s projections were based on GISS. A probable error from the article there, since using the same dataset as Hansen would be the obvious thing to do.
– The article claims that Hansen was out by 150% by comparing to a fantasy emissions scenario that bears no relation to any of the actual scenarios, based on the author’s CO2-versus-total-GHG misunderstanding. He was actually only out by ~25% when compared to scenario B. Another demonstrable error from the article there.
So, what do we take from this? That climate sensitivity to doubling of CO2 is likely in the 2-3C range, rather than Hansen’s ~4C.

Dana1981: “2) Thinking that a ~2 ppm annual CO2 increase is 2.5% of ~390 ppm (arithmetic fail!)”
Well, they are talking about emissions and not the increase in CO2 concentration. Don’t you know the difference, and haven’t you heard about the missing sink? Keyword: ocean buffer.

Bill Tuttle says:
June 16, 2012 at 1:37 am
…..So, when you guys can’t even decide if CFC is an actual greenhouse gas, let alone how much effect it has (in parts per friggin’ *trillion*, no less), color me less than compelled by the narrative….
==========================================================================
Al Gore said CFCs make the ozone holes bigger. Maybe the “missing heat” slipped out the ozone hole?
There’s a win-win! Stave off CAGW by bringing back Freon. Whenever the predicted heat decides to finally show up, build more air conditioners releasing more CFCs, making the ozone holes bigger, letting out all the nasty heat. We can all stay calm and cool no matter what happens!
(I better add a “sarc” tag or a climate scientist might give this serious consideration.)

Girma says:
June 16, 2012 at 12:32 am
Steven Mosher:
To evaluate Hansen’s projection we first have to find the scenario that is closest to the projected forcings. That is scenario B, not C and not A.
Steven that is incorrect.
Here is from the original paper
Scenario A => Annual greenhouse growth rate of 1.5% of the 1980s emission
Scenario B => Annual greenhouse growth rate constant at the 1980s level
Scenario C => Annual greenhouse growth rate decreases after the 1980s such that it ceases to increase after 2000
Steven, CO2 emission has not been “constant at the 1980s level”, so Scenario B does not match observation. It is Scenario A that matches observation.
Reading comprehension problems again!
Hansen in the 88 paper explicitly states a rate of increase of 1.9 ppm/yr for Scenario B from 2010 onwards; last decade it averaged 2.07 ppm/yr. http://co2now.org/
Why don’t you guys who want to criticize Hansen read the paper and get the facts straight, starting with the dog’s breakfast that the original author of this post made. To get the facts so consistently wrong just makes the critic look stupid.

AUSphysicist says:
June 15, 2012 at 10:50 pm
KR mentions,
“Forcings, as observed, as driven by economics, are around Scenario B, the most likely outcome according to Hansen 1988.”
What is most amusing in your desperate defense KR is that,
1) You know very well that CO2 is the primary claimed driver of global warming in that paper. This is obvious since its growth is the main factor that shapes those scenario curves in the paper.
No, unlike you he’s read the paper and has his facts straight! I suggest you look at Fig 2, which quite clearly shows that you are wrong!

2) You try to blame reductions in CFCs as the reason why the temperatures have not gone according to predictions. Yet, CFCs have very little contribution to climate change and furthermore their abundance in the atmosphere has changed little since 1988.
Again, read the paper! The point is not that they’ve gone down since 1988 (which of course they have), rather that they stopped increasing at the former rate, which is what Scenario A was based on.

At the end of the day, the observed CO2 emissions growth has been well ABOVE scenario A and yet the temperature is below scenario C. You really can’t get it any more wrong than this paper.
Again you’re wrong, Scenario B had CO2 increasing by 1.9ppm/yr after 2010 compared with the last decade’s average of 2.07ppm/yr

When one reads a hundred or so comments by informed people arguing about whether something with an indeterminate greenhouse-gas effect measured in parts per trillion that didn’t change much validates the assumptions that underlie a prediction of rising temperatures that didn’t rise, one tends to think they are locked in a loop arguing about how the tail (specified greenhouse gases) really did wag the dog.
Bottom line, as some have less bluntly put it: the only way you can say Hansen’s assumptions involving a group of gases might still be somewhat close – despite the fact that the main prediction of temperature is way off of all implied scenarios – is to admit that his main assumption, that CO2 is the primary driver, isn’t important at all, even within the narrow range of greenhouse factors to which he limited himself. It’s more than a little analogous to restricting one’s arguments to whether the St. Louis Cardinals won the 1964 World Series because I first became a fan that year.
What this Hansen justification loop strongly indicates is that CO2 is the gnat on the elephant’s — and that greenhouse gases as a whole don’t hold most of the answers to even the small climatic temperature changes we have experienced in our climatically short lifetimes.

scarletmacaw says:
June 15, 2012 at 8:45 pm
Phil. says:
June 15, 2012 at 7:22 pm
It didn’t go into effect until 1989, the scenario A assumed the continuation of the existing growth rate, scenario B assumed a reduction in emissions with elimination of CFC emission by 2010, scenario C assumed an earlier elimination by 2000.
Reagan signed the Montreal Protocol in 1987. There were no major CFC-producing countries opposing it. By 1988 there was no reason to expect CFC production to continue as usual. Like I said, Hansen was clearly not the sharpest tack in the shed. Either that or he purposely misled the public by displaying a scenario he knew was not going to happen.
The people who are misleading the public are those like the original poster who presents such an incorrect summary of Hansen’s paper. Hansen presented three scenarios, what would happen if we continued doing what we had been for the previous decade, what would happen if emissions were reduced somewhat (which he called the most plausible), and what would happen with more drastic restrictions. No misleading of the public by him, rather by those like Pat Michaels and Solheim.

Werner Brozek says:
June 15, 2012 at 8:29 pm
Ric Werme says:
June 15, 2012 at 4:44 pm
No, he’s 60% wrong – see my 10:55 am comment. 60% of 1.5 °C is 0.9 °C, and his projection was 0.9 °C too high.
However below the diagram, we read:
In reality, the increase in CO 2 emissions by as much as 2.5%, which would correspond to the scenario above the blue curve. The black curve is the ultimate real-measured temperature (rolling 5-year average). Hansen’s model overestimates the temperature by 1.9 ° C, which is a whopping 150% wrong.
I must admit this part is very unclear, but it seems that since scenario A is for a 1.5% increase, then another line should have been drawn higher up to show what should have happened according to Hansen if the increase was 2.5%. Had that been done, we may have seen the 1.9 C at 2011. Or am I missing something?
You’re missing the fact that the original poster, Solheim, is wrong about just about everything he said about the 88 paper.

Gunga Din says:
June 15, 2012 at 7:24 pm
dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model were, and continue to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.

So much misinformation (from both sides) I feel like my head is going to explode….
From the original AGU publication: “We make a 100-year control run and perform experiments for three scenarios of atmospheric composition. These experiments begin in 1958 and include measured or estimated changes in atmospheric CO2, CH4, N2O, chlorofluorocarbons (CFCs) and stratospheric aerosols for the period from 1958 to the present. Scenario A assumes continued trace gas growth, scenario B assumes reduced linear growth of trace gases, and scenario C assumes a rapid curtailment of trace gas emissions such that the net climate forcing ceases to increase after the year 2000.” http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf
Basically, the 4 steps in the Hansen (1988) model experiment are as follows:
For assumed GHG emission scenario » Δ atmospheric composition » Δ forcing » Δ temperature
Step 1: Assume future GHG emissions path (3 scenarios)
Step 2: Project GHG concentrations in atmosphere for each scenario
Step 3: Calculate forcing for resulting atmospheric composition
Step 4: Project regional and global temperature changes for change in forcing.
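The last two steps of the chain above can be sketched numerically. This is only a back-of-envelope illustration, not Hansen’s GISS Model II (which computes a transient response from full radiation physics): it assumes the simplified CO2-only forcing expression from Myhre et al. (1998) and a pure equilibrium response, and the 351 ppm (1988) and 391.6 ppm (2011) concentrations are approximate Mauna Loa values.

```python
# Toy version of Steps 3-4: concentration change -> forcing -> warming.
# Uses the simplified CO2 expression of Myhre et al. (1998), not Hansen's
# 1988 radiation code; equilibrium response only, so it overstates the
# warming realized by any given date.

import math

F_2X = 5.35 * math.log(2)  # forcing from a CO2 doubling, ~3.71 W/m^2

def co2_forcing(c_ppm, c0_ppm):
    """Step 3: CO2 radiative forcing in W/m^2 for concentration c vs. baseline c0."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing_wm2, sensitivity_per_doubling):
    """Step 4 (equilibrium only; the real model computes a transient response)."""
    return sensitivity_per_doubling * forcing_wm2 / F_2X

# ~351 ppm (1988) to ~391.6 ppm (2011), with Hansen's 4.2 C/doubling sensitivity:
f = co2_forcing(391.6, 351.0)
print(round(f, 2))                            # ~0.59 W/m^2
print(round(equilibrium_warming(f, 4.2), 2))  # ~0.66 C
```

Even at Hansen’s high 4.2 °C/doubling, this crude equilibrium calculation gives only ~0.66 °C for the CO2 contribution alone over the period; a transient calculation would give less, since the ocean delays the response.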
In evaluating how well Hansen’s projections have stacked up against the 24 subsequent years of observed reality, I see both sides making serious mistakes (that unsurprisingly favour their own arguments about whether the model projections were good or bad).
First, I’ll challenge the skeptics (since I consider myself to be one).
1. Everyone here is automatically comparing observed warming to Hansen’s Scenario A. This is more than a bit too convenient as it yields the greatest possible difference. Keep in mind… Hansen wrote in his paper “Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though growth of emissions in scenario A (~1.5%/yr) is less than the rate typical of the past century (~4%/yr)… Scenario B is perhaps the most plausible of the three cases” The question to ask is: “How did actual emissions during the last 24 years compare against the 3 scenarios?”
2. Justifications for using Scenario A as a reference have focused narrowly on CO2 emissions increases (observed 2.5%/yr vs. assumed 1.5%/yr). To be fair, it would be necessary to focus on ALL GHG emissions, not just CO2. I see some of this above (references to methane and CFC’s, but not a comprehensive inventory of all emissions).
Now, for the warmists (dana1981, Phil, KR etc.)
1. SkS presents a very detailed dissection of the Hansen scenarios by GHG, calculates the actual GHG forcings, and concludes that forcings were closest to Scenario B. However, this analysis ignores changes in EMISSIONS and jumps straight to FORCINGS. Yes, forcings ended up different from the various scenarios in the model projections… however, nobody bothers to ask: A) why CO2 forcing is in line with projections despite even higher emissions, B) why methane forcing is so far below projections.
If methane forcings were less than projections, and CO2 forcings were in line with projections despite higher emissions, that implies at least one of three things… A) methane emissions fell far below assumptions all by themselves, despite no coordinated effort to reduce emissions, B) methane and CO2 emissions don’t reside in the atmosphere for as long as assumed, so a given level of emissions results in a lower than projected atmospheric concentration, or C) feedbacks (such as the release of additional methane or CO2 which further increases concentrations beyond what’s caused by primary emissions) just aren’t happening as modeled. If it was A) then Hansen can only be accused of making lousy assumptions, but if it was B) or C) that would indicate serious weaknesses in the model.
2. The warmists start their argument with forcing values closer to Scenario B and then conclude the projections weren’t so bad if you just use a lower climate sensitivity value (3.0 vs. 4.2 degC /doubling). The problem I have with this argument is that climate models don’t just plug in a desired value for sensitivity. They attempt to model physical processes and feedback mechanisms and the equilibrium climate sensitivity value is a derived output. So, it’s pretty lazy to just say… “well if you assume a lower sensitivity the projections would have been decent” without addressing the deficiencies in the model that led to the overstated climate sensitivity. It’s basically saying… “if the model projections weren’t so bad, they’d be pretty good”.

old engineer says:
June 15, 2012 at 10:13 pm
So the question is: which scenario best matches the actual rate of change of the increase in the atmospheric concentration of CO2 over the past 30 years?
Somewhat lower than Scenario B. http://co2now.org/

Russ R. says:
June 16, 2012 at 8:02 am
2. Justifications for using Scenario A as a reference have focused narrowly on CO2 emissions increases (observed 2.5%/yr vs. assumed 1.5%/yr). To be fair, it would be necessary to focus on ALL GHG emissions, not just CO2. I see some of this above (references to methane and CFC’s, but not a comprehensive inventory of all emissions).
Which is what I have been asking for here; continuing to focus on CO2 only in this thread amounts to lying, since ample references have been provided to show that this is not true. In fact the claim that the CO2 levels are higher than Hansen predicted is also wrong: Scenario B expected a decline in CO2 growth to 1.9 ppm/yr by 2010, whereas the last two decades have shown average growth rates of 1.6 ppm/yr and 2.07 ppm/yr. The 2.5% issue raised by Solheim is, to be charitable, a misunderstanding by him.

2. The warmists start their argument with forcing values closer to Scenario B and then conclude the projections weren’t so bad if you just use a lower climate sensitivity value (3.0 vs. 4.2 degC/doubling). The problem I have with this argument is that climate models don’t just plug in a desired value for sensitivity.
Well we’re discussing Hansen’s paper not ‘climate models’ and he explicitly uses a model that has an equilibrium climate sensitivity of 4.2ºC/doubling of CO2. So the problem you mention is moot. Hansen’s summary concluded that there was a need for improvements “in our understanding of the climate system and our ability to predict climate change”. It is a popular ‘sceptic’ meme to try to portray this paper as ‘the last word’ in their criticism (and to misrepresent it) and use any deviation of the predictions from the observations (real or imagined) as a critique, whereas it is quite clear that this was not Hansen’s intent! The original post in this thread is just another in a long line starting with Michaels to adopt this approach (and a rather poor one at that).

Russ R. – Thank you for a rather balanced post. Scenario B is the closest to actual emissions, and hence the only reasonable starting point.
Some things I would point out, however:

“…this analysis ignores changes in EMISSIONS and jump straight to FORCINGS”. Forcings are, after all, the most important thing to discuss. At any one point in time the forcings are what affect the climate, regardless of the emissions that make up those forcings.

“A) methane emissions fell far below assumptions all by themselves” You might look at the literature, such as Dlugokencky 1994 (http://www.agu.org/pubs/crossref/1994/93GL03070.shtml), which notes that “…a sharp decrease in the growth rate in the Northern Hemisphere during 1992…the most likely explanation is a change in an anthropogenic source such as fossil fuel exploitation, which can be rapidly and easily affected by man’s activities.” Fossil fuel production is a major source of anthropogenic CH4, and between more economic recapture of waste gas and some increased regulations on releases, that factor has decreased significantly. We’ve reduced CFCs with regulation, and CH4 with what is most likely a combination of economics and regulation.

“B) methane and CO2 emissions don’t reside in the atmosphere for as long as assumed…” Actually, they do. CO2 has a residence time of 75–100 years to a 1/e decrease, while methane has an atmospheric lifetime of 12 ± 3 years or so, before breaking down into more CO2 with a lower forcing.
Both Hansen’s high 4.2C/doubling sensitivity (the state of the science in 1988, mind you) and an actual forcing somewhat below Scenario B make his 1988 model overestimate temperatures somewhat. So yes, the 1988 model is far from perfect, as he himself stated multiple times in the paper itself. But certainly not by the strawman 150% presented here. Or the strawman “science is settled”, which Hansen clearly did not claim in this paper.
Furthermore, taking that very same model and using actual emissions, Hansen’s model’s best fit to observations is found when run with a climate sensitivity of ~3.2C/doubling, with regional temperature anomaly patterns very much in line with observations (see Plate 2 of his paper). Which is just one more piece (of a great many) of evidence supporting the current estimate of climate sensitivity: a range of 2-4.5C, most likely 3C per doubling.
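Re-evaluating an old projection at a lower sensitivity is often approximated by a simple linear rescaling of the projected warming. A hypothetical sketch: the 0.9 C input below is an illustrative number, not a figure from the paper, and linear scaling is only a first-order approximation for transient runs.

```python
# First-order rescaling of a modeled warming trend to a different equilibrium
# climate sensitivity. Assumes warming scales linearly with sensitivity,
# which is only approximately true for transient (time-dependent) model runs.

def rescale_warming(dt_model, s_original, s_new):
    """Scale a projected temperature change from sensitivity s_original
    (degC per CO2 doubling) to s_new."""
    return dt_model * (s_new / s_original)

# Hypothetical: a 0.9 C projected trend at Hansen's 4.2 C/doubling,
# rescaled to a ~3.2 C/doubling best fit:
print(round(rescale_warming(0.9, 4.2, 3.2), 2))   # ~0.69 C
```

The proper approach is of course to rerun the model, since feedbacks are not perfectly linear, but this rescaling shows the direction and rough size of the correction.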

It is hard for me to believe that commenters that I have always considered knowledgeable are commenting on Hansen’s paper without reading Appendix B of his paper.
I know that in the text he says “Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely: the assumed annual growth rate averages about 1.5% of current emissions…” That is what he says, but that is not what he means. In Appendix B he clearly indicates that he is talking about the ppmv increase from year to year increasing by 1.5% each year.
If you take the 1986 atmospheric CO2 level as 347.39 ppmv, and use the average yearly increase from 1970 to 1986 as 1.34 ppmv, then increase the 1.34 ppmv by 1.5% per year until 2011, you will get a value of 388.3 ppmv for the atmospheric CO2 . The measured CO2 concentration for 2011 is 391.6 ppmv.
So yes, Scenario A actually underestimates the measured atmospheric CO2 concentration, but is the closest to the actual. Therefore, considering only CO2, Scenario A is the one that should be used for comparison.
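The arithmetic in this extrapolation is easy to check with a short script. The compounding convention assumed here (the k-th year’s increment is the 1986 increment grown by 1.5%, compounded k times) is inferred, since it reproduces the quoted 388.3 ppmv.

```python
# Check the extrapolation above: start at 347.39 ppmv (1986) and add an annual
# increment that starts at 1.34 ppmv and grows by 1.5% per year through 2011.
# The exact compounding convention is inferred from the quoted result.

def extrapolate_co2(c0_ppmv, increment_ppmv, growth, years):
    c = c0_ppmv
    for k in range(1, years + 1):
        c += increment_ppmv * (1 + growth) ** k
    return c

c_2011 = extrapolate_co2(347.39, 1.34, 0.015, 2011 - 1986)
print(round(c_2011, 1))   # ~388.3 ppmv, vs. 391.6 ppmv measured
```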

A suggestion for Hansen and his supporters.
It is time for Hansen of NASA to take his million+ dollars of compensation & awards that he received from alarmist science institutes & NGOs which he earned by being an obedient policy influenced scientific alarmist and use it to buy billboard space globally. He could finally achieve some integrity if he put on the billboards the message “I WAS WRONG ABOUT AGW CLIMATE ALARMISM, James Hansen NASA”. It would be what a person who is concerned about trust in science would do.
John

My apologies – in my previous post I erred on one of the numbers: Hansen’s model fits the data best with a sensitivity in the 3.4-3.6C/doubling range, not 3.2C/doubling (as shown at http://www.skepticalscience.com/Hansen-1988-prediction-advanced.htm), as opposed to the 1988 sensitivity figure of 4.2C.
Again, a piece of evidence (one of many) for sensitivities somewhere in the 2-4.5C/doubling range, most likely around 3C.

old engineer @ 9:07am,
1. You’re measuring concentrations, not emissions. Apples & Applesauce. You need to look at what annual emissions were in 1986, and apply the 1.5% growth rate to that figure.
2. You’re only looking at CO2. You need to include methane, N2O and CFCs.

old engineer says:
June 16, 2012 at 9:07 am
So yes, Scenario A actually underestimates the measured atmospheric CO2 concentration, but is the closest to the actual. Therefore, considering only CO2, Scenario A is the one that should be used for comparison.
Since you have read Appendix B you will be aware that Scenario A should not be used for comparison because it includes some more speculative effects which were excluded from B and C. As far as CO2 alone is concerned there was a minimal difference in the expected level between A and B by the present (see fig 2).

Bottom line, Phil:
Did Hansen get it right with Scenario A? No. Did he get it right with Scenario B? No. Did he get it right with Scenario C? No.
What you’ve essentially been saying is that, if Hansen had some bread, he could make a ham sandwich with mustard, if he had some ham and a jar of mustard.

Alcheson says:
June 15, 2012 at 12:53 pm
Because Hansen did not include CFCs in his model calculations to produce this infamous graph, he CANNOT now subtract them out to get a better fit with his CO2 graph.
More lies Alcheson, now you’re really looking stupid.

Bill Tuttle says:
June 16, 2012 at 9:42 am
Bottom line, Phil:
Did Hansen get it right with Scenario A? No. Did he get it right with Scenario B? No. Did he get it right with Scenario C? No.
Did he do what he intended, i.e. bracket the future conditions? Yes. Try not to make such a fool of yourself.

KR,
“Forcings are, after all, the most important thing to discuss. At any one point in time the forcings are what affect the climate, regardless of the emissions that make up those forcings.”
You seem to forget that people can only take action on emissions… which is why Hansen framed his scenarios on whether or not the world is able to either slow or reduce emissions.
But back to evaluating the projections in Hansen (1988). I agree with you that calling it “150% wrong” is ridiculous and probably embarrassing.
But, it’s also insufficient to confirm the validity of Hansen’s projections (given an assumed level of emissions) by only comparing differences in modeled vs. observed forcings, as you’re doing.
Remember Hansen’s 4 steps: Assume emissions growth rate scenarios >> project atmospheric composition >> calculate change in forcing >> project regional and global temperature variance.
You’re giving Hansen a free pass on step 2, which I’d argue is a pretty important part of modeling climate. If you can’t accurately project atmospheric composition from a given amount of emissions, you’re not going to be very convincing. It gets to important questions about how much of emitted GHGs are absorbed by sinks, how quickly they decompose, how long they persist, and how much additional GHGs are released due to feedbacks, etc.
So, the right thing to do would be to calculate 1986 (or 1988) total emissions in CO2-equivalent terms, and apply Hansen’s three assumed emissions growth paths to that starting point. Then compare this to the cumulative emissions that actually occurred since then to see which of the three scenarios is most relevant for comparison. Only then can you compare projected forcings for that emissions scenario to observed forcings.
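The comparison proposed here can be sketched in a few lines. The baseline emissions value and the per-scenario growth rates below are placeholders for illustration only, not Hansen’s actual figures.

```python
# Sketch of the proposed check: apply an assumed annual growth rate to a
# baseline year's total CO2-equivalent emissions and accumulate over 24 years
# (1988-2011), one total per scenario. All numbers are illustrative.

def cumulative_emissions(base, annual_growth, years):
    """Sum emissions over `years`, with annual emissions changing by
    `annual_growth` per year starting from `base` (arbitrary CO2-eq units)."""
    total, e = 0.0, base
    for _ in range(years):
        e *= 1 + annual_growth
        total += e
    return total

base_1988 = 100.0   # placeholder total CO2-eq emissions in the baseline year
for name, g in [("A-like", 0.015), ("B-like", 0.0), ("C-like", -0.02)]:
    print(name, round(cumulative_emissions(base_1988, g, 24), 1))
```

The observed cumulative total would then be compared against the three scenario totals to pick the relevant scenario, and only afterwards would projected and observed forcings be compared.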
Now, on explaining the lower than projected forcing, you cited a 1994 paper that measures a decrease in methane concentrations. This is insufficient to make your case as it only speculates on an anthropogenic cause. I didn’t see any actual data on a change in man-made emissions, nor any accounting for the variance in CH4 forcing at present vs. what was projected.
But you haven’t addressed the CO2 problem at all. How could CO2 emissions have increased at a greater rate than assumed, but forcings remain in line with what was projected?
Look, I’m not yet arguing that Hansen’s projections were good or bad compared to what’s been observed since 1988. I’m saying that I haven’t yet seen anyone here present an honest, fair analysis in order to come to an objective conclusion.
As to the estimation of climate sensitivity, I’m becoming increasingly skeptical that there’s a single CS value that describes the climate system under all conditions. I’d argue that the sensitivity value is materially different under different climatic conditions (ice age vs. interglacial vs. transition period) because the climate system is being driven by a completely different assortment of forcing and feedback mechanisms, but that’s a discussion for a different thread.

Russ R. says:
June 16, 2012 at 9:20 am
“old engineer @ 9:07am,
1. You’re measuring concentrations, not emissions. Apples & Applesauce. You need to look at what annual emissions were in 1986, and apply the 1.5% growth rate to that figure.”
============================================================================
Okay Russ R, I took my own advice and reread Hansen’s Appendix B. Here’s what Hansen says about Scenario A:
“Specifically in scenario A, CO2 increases as observed by Keeling for the interval 1958-1981 [Keeling, et al., 1982] and subsequently with a 1.5% per year growth of the annual increment.”
The Keeling reference in his paper is titled “Measurements of the concentration of carbon dioxide at Mauna Loa observatory, Hawaii. “
Yes indeed, there is a difference in atmospheric concentrations and emissions, and Hansen clearly used atmospheric concentrations from Mauna Loa. Note also he used a 1.5% growth of the annual increment each year.
I did make an error in my calculations. I assumed that when Hansen said in the text that he used growth rates typical of the 1970s and 1980s, that is what he meant. As can be seen in the quote above, he used the measured increase from 1958 to 1981, and an increase in the yearly increment of 1.5% for the remaining years of the study.
My calculations are therefore:
If you take the 1981 atmospheric CO2 level as 340.10 ppmv, and use the average yearly increase from 1958 to 1981 in atmospheric CO2 as 1.05 ppmv, then increase the 1.05 ppmv by 1.5% per year until 2011, you will get a value of 379.0 ppmv for the atmospheric CO2 . The measured CO2 concentration for 2011 is 391.6 ppmv.
Thus Hansen underestimated the atmospheric concentration of CO2 by 12.6 ppm, or about 3%.
Since I haven’t looked at the actual trends of the other trace gases he considered, I can’t comment on how well he estimated those.

Phil.,
“Well we’re discussing Hansen’s paper not ‘climate models’ and he explicitly uses a model that has an equilibrium climate sensitivity of 4.2ºC/doubling of CO2.”
So we’re both in agreement that Hansen (1988) overestimated future warming. Done.
But, he said it best himself in Section 6.1: “Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity. Therefore climate sensitivity would have to be much smaller than 4.2ºC, say 1.5-2ºC, in order to modify our conclusions significantly.”

Russ R. – WRT methane emissions, I would suggest you start by googling “trends in methane emissions”, and reading some of the considerable work in this area. Numerous papers and presentations discuss this, including Wuebbles et al 2000 (http://www.atmosresearch.com/NCGG2a%202002.pdf), noting that anthropogenic sources have not increased much in the last 20 years or so:

“In the absence of any mechanism to explain a long-term decrease in natural sources such as wetlands, the answer appears to lie with anthropogenic emissions.”

In regards to the estimations of emissions, well, if Hansen was studying economics you might have a point. But he doesn’t – he studies climate. The relevant question is whether the model correctly ties forcings to climate state. If so, then the model is useful, allowing us to judge the results of our actions.

“But you haven’t at all addressed the CO2 problem at all. How could CO2 emissions have increased at a greater rate than assumed, but forcings remain in line with what was projected?”
They haven’t! CO2 forcing for this period as per Scenarios A and B was estimated at ~0.6 W/m^2, and according to figures you can look at on http://www.esrl.noaa.gov/gmd/aggi/ it was actually ~0.55 W/m^2, slightly lower. Adding in trace gases, slightly lower stratospheric aerosols, etc. (http://tinyurl.com/8xw3dtp is directly relevant to this paper), our total forcings have been ~16% lower than Scenario B.
Meaning that despite Hansen not being an economist, his mid-line, most-likely projections for forcings are actually fairly close to what has actually occurred over the past 25 years.

“Look, I’m not yet arguing that Hansen’s projections were good or bad compared to what’s been observed since 1988. I’m saying that I haven’t yet seen anyone here present an honest, fair analysis in order to come to an objective conclusion.”
I would have to strongly disagree. See http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/ for just that evaluation, and http://www.atmos-chem-phys-discuss.net/11/22545/2011/acpd-11-22545-2011.pdf for forcing histories.
Slightly different topic, side note, while I would agree that there won’t be a single climate sensitivity that applies to all climate regimes, ~3C/doubling is the value supported by the evidence for our current state.
—
But finally – this thread really centers on Solheim harping on a 25-year-old paper, using inappropriate strawman arguments. Arguments that I feel have been shown to be unjustified – Hansen’s 1988 sensitivity was too high (the state of the art back then), and hence his model overestimates, but even with that it does a good job of predicting regional temperature anomaly distributions 25 years out. Comparing observations to scenario A, which did not happen, is just plain silly.
Wouldn’t it be nice if the discussion could instead center on current work, current data? Rather than nit-picking quarter-century-old works whose authors clearly stated they were works in progress?

old engineer,
My apologies, your method for extrapolating CO2 growth appears to be equivalent to Hansen’s. (Though it’s now apparent that Hansen’s own method differs somewhat from how it was summarized in the abstract… though not by enough for me to nit-pick.)
As for the other GHGs in the analysis, you might find this helpful: http://www.realclimate.org/data/H88_scenarios.dat
It’s the concentration projections from Hansen (1988) for each of the 5 GHGs (CO2, N2O, CH4, CFC11 and CFC12), for each of the 3 scenarios, for each of the 93 years from 1958 to 2050.
FWIW, Hansen’s CO2 projections for 2011 were:
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
Why his numbers differ from those you get by following the method he spelled out in Appendix B is entirely beyond me.

Russ R. said
I’m saying that I haven’t yet seen anyone here present an honest, fair analysis in order to come to an objective conclusion.

What’s desperately needed is a magisterial review-type paper that addresses itself to clearing away the misconceptions, identifying the points in dispute, evaluating the outcomes assuming one grants certain in-dispute points, etc. Then everyone can refer to it and use it as a take-off point.
But who’ll peer-review it? Maybe it should just be posted and then modified in light of comments posted below it (or defended against them, where warranted).

KR:
Several points:
1. It doesn’t trouble me at all if Hansen wasn’t accurate in his estimates for emissions (Step 1). He’s a scientist, not a clairvoyant. I’m entirely happy to work from whatever emissions assumptions he laid out. My concern is that, from a given assumption about emissions, you’re granting him a pass on projecting atmospheric concentrations (Step 2). I don’t consider this an immaterial step (and he’s made it a bit complicated by being inconsistent in how he made assumptions… sometimes talking about changes in annual concentration increments and at other times talking about actual increases in emissions). For Step 3 (calculating forcing from a given atmospheric composition) I’m happy to take this part at face value, since the math appears uncontroversial (Myhre et al. (1998)). But in the end, you’re trying to account for all of the remaining divergence in Step 4 (temperature response for a given change in forcing), and pinning it all on a simple little overestimate of climate sensitivity. I’ll refer you back to what Hansen himself said about sensitivity in Section 6.1.
2. “In the absence of any mechanism to explain a long-term decrease in natural sources such as wetlands, the answer appears to lie with anthropogenic emissions.” To me that sounds like “We can’t account for the gap, so we’ll just assume that less methane was emitted.” In other words… there’s no data to allocate the variance between Step 1 (lower-than-assumed emissions) and Step 2 (lower concentrations given assumed emissions), but you’re arbitrarily attributing all of the gap to Step 1. In the end, the distinction may very well prove trivial… but until there’s some evidence, we can’t just assume away the difference.
3. I agree with you that Gavin did a very respectable job of evaluating Hansen (1988) in his 2007 post on RC. However, that was 5 years ago, and you’ll note that Scenarios B and C have diverged since then, while observations have tracked much closer to Scenario C.
4. Thank you for sending the AGGI link… this is excellent. I’ll refer it to others in future when such questions arise.
5. Minor point: As for my CO2 emissions question… you replied: “CO2 forcing for this period as per Scenarios A and B was estimated at ~0.6W/m^2, and according to figures you can look at on http://www.esrl.noaa.gov/gmd/aggi/ it was actually ~0.55W/m^2, slightly lower”.
That doesn’t quite jibe with Hansen’s projected numbers for 2011:
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
(http://www.realclimate.org/data/H88_scenarios.dat)
and observed average concentration for 2011:
– Mauna Loa: 391.57 ppmv
(ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_annmean_mlo.txt)
It looks to me like CO2 came in higher than Scenario B but lower than A, but still a very good estimate. Nonetheless, it’s a long way from Scenario C.
6. “Wouldn’t it be nice if the discussion could instead center on current work, current data? Rather than nit-picking quarter-century old works that the authors of clearly stated were works in progress?”
That would be nice indeed, except that “temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.” – Santer et. al (2011).
(Sorry, I couldn’t resist.)
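The Step 3 forcing calculation mentioned in point 1 above is the simplified expression from Myhre et al. (1998); a minimal sketch, assuming the usual 280 ppmv pre-industrial baseline:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 gives 5.35 * ln(2), roughly 3.7 W/m^2.
```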

KR says: June 15, 2012 at 9:50 am
Hansen did use a 4.2°C per doubling sensitivity – now thought to be too high, with ~3°C the current estimate. That resulted in a slight overestimate of warming, with the model showing an overestimate of ~20% when run with actual forcings.
Don’t throw misinformation about like that, please. The model we are talking about is GISS GCM Model II, right? Developed in the 1970s / early 1980s and described in Hansen 1983, which is referred to as paper 1 in Hansen 1988.
However, the NASA GISS GCM Model II page explicitly says: “Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available.” They do have an improved, value-added version by the same name available to download, though. It means there are unknown and (publicly) undocumented differences between the computational climate model used by Hansen et al. in the eighties and the current, publicly available version, developed and maintained by the Columbia University EdGCM project.
Therefore it is impossible to run the model with actual (or whatever) forcings, at least not the one which was used to produce Hansen’s 1988 predictions. It may be possible for someone having access to NASA code archives (provided they exist), but the original model is certainly not publicly available for independent, third-party checking.
It means your claim the model shows an overestimate of ~20% is not a scientific proposition, just hot air defying verification attempts. The proper term for it is journalism, which used to have nothing to do with science in a pre-postnormal epoch.

@KR:
So, Hansen’s busted *theoretical predictions* don’t matter, because someone’s thought up some *theoretical reasons* that *might* partly explain why they failed. And that means we can talk as if they weren’t busted at all?

ON VERIFICATION OF AGW
Now comes the moment of verification and truth: testing the theory back against protocol experience to establish its validity. If it is not a trivial theory, it suggests the existence of unknown facts which can be verified by further experiment. An expedition may go to Africa to watch an eclipse and find out if starlight really does bend, as relativity predicts, as it passes the edge of the sun. After a Maxwell and his theory of electro-magnetism come a Hertz looking for radio waves and a Marconi building a radio set. If the theoretical predictions do not fit in with observable facts (http://bit.ly/JPvWx1), then the theorist (Hansen) has to forget his disappointment and start all over again. This is the stern discipline which keeps science sound and rigorously honest.
Note that CO2 emission growth rate since 1980s is 1.84% (http://bit.ly/P1dXaB), which is greater than the 1.5% for Scenario A of Hansen et al.

Phil. says:
June 15, 2012 at 6:39 pm
“Why would one pretend that instead of using the actual value he shows in Fig 3?”
While reading the graph is already sufficient to show scenario B was claiming about 1 degree Celsius rise over that period, digging around elsewhere to get beyond the paywalled paper link finds http://www.realclimate.org/data/scen_ABC_temp.data showing scenario B as going from 0.121 degrees Celsius on the temperature anomaly scale in 1979 to 1.065 degrees in 2012. That is +0.944 degrees 1979->2012. Slightly different years like 1980->2012 also give similar results for his prediction.
The overall curved black line at http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2012.png shows under 0.3 degrees Celsius meaningful temperature rise over that timeframe. If particularly generous, pretend up to 0.4 degrees.
The claim that Hansen merely erred by assuming a 4.2 degrees Celsius climate sensitivity per doubling, and that observations fit the 3 degrees used by warmists now, doesn’t work.
Observations were not 3/4.2, or 71%, of his scenario B prediction. 0.3 / 0.944 is not remotely close to that, nor is even 0.4 / 0.944. The observed temperature increase was at most roughly 32% to 42% of his scenario B prediction.
That would already imply something more like 1.3 to 1.8 degrees Celsius of climate sensitivity per doubling.*
* BUT that is if doing the warmist fallacy of neglecting the warming component from natural sources, falsely pretending there was zero effect from the rise in the AMO/PDO meanwhile, etc.; among other examples, natural factors are well illustrated in http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif
Without that dishonest fallacy, concluded climate sensitivity is less.
And the preceding is all also just pretending scenario B for the sake of argument, to highlight how even that pretense wouldn't actually save their bacon.
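The back-of-envelope scaling in the comment above (which assumes warming scales roughly linearly with sensitivity, itself an approximation) can be sketched as:

```python
def implied_sensitivity(observed_rise, projected_rise, model_sensitivity=4.2):
    """Scale the model's sensitivity by the observed/projected warming ratio.

    Rough linear scaling only, as in the comment's back-of-envelope argument.
    """
    return model_sensitivity * observed_rise / projected_rise

# 0.3 C observed vs 0.944 C projected implies roughly 1.3 C/doubling;
# 0.4 C observed implies roughly 1.8 C/doubling.
```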

I remember vividly the build up to the Montreal protocol and it was the ability of CFC’s to destroy ozone that was the motivating factor.
This is the quote of the stated purpose:
“Recognizing that worldwide emissions of certain substances, including ST, can significantly deplete and otherwise modify the ozone layer in a manner that is likely to result in adverse effects on human health and the environment, … Determined to protect the ozone layer by taking precautionary measures to control equitably total global emissions of substances that deplete it, with the ultimate objective of their elimination on the basis of developments in scientific knowledge … Acknowledging that special provision, including ST is required to meet the needs of developing countries…”
It worked too. CFCs have a half-life of between 60 and 650 years, so almost all of those made are still in the biosphere. http://www.ciesin.org/docs/003-006/fig1.gif
So how come the global warming argument wasn’t used? Why didn’t they state that CFCs are six or more orders of magnitude better GHGs than CO2?
If you think about it, if it was getting rid of CFC production that caused the leveling off of temperature in the face of rising CO2, would it be better to remove CO2 from the biosphere than to stop generating CO2?

Girma:
You wrote: “Note that CO2 emission growth rate since 1980s is 1.84% (http://bit.ly/P1dXaB), which is greater than the 1.5% for Scenario A of Hansen et al.”
Not quite. The figures you cite from CDIAC are only the fossil-fuel contribution to CO2 emissions. There’s still additional CO2 from cement production and a land-use component.
Try this: http://www.tyndall.ac.uk/global-carbon-budget-2010#Jump to Data.
Using these more comprehensive data I calculated a 1.33% compounded annual increase from 1988 to 2010.
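The compounded annual increase quoted above is a standard CAGR calculation; a minimal sketch of the computation:

```python
def cagr(start_value, end_value, years):
    """Compounded annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0
```

For example, a doubling over 10 years corresponds to a compounded rate of about 7.2% per year.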

Russ R:
Thanks for the information. I was looking for Hansen’s 2011 CO2 values to see if I really understood what he was doing. I think the reason my 2011 CO2 value didn’t agree with Hansen’s was that I used a different value for the starting yearly increase in CO2. I used the actual average annual increase over the period from 1958 to 1981 from this website: ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_annmean_mlo.txt
which may, or may not, be the data that Hansen used back in the 80’s. I came up with 1.05 ppm. From the 1988 paper’s Introduction section, Hansen may have used a value close to 1.5 ppm (the quote is “…with current annual increments of about 1.5 ppmv…”). When I plug 1.5 into my spreadsheet I get 395.7 for the 2011 concentration, compared to the value you give of 393.74 ppm. That’s close enough for me to think I have his methodology correct; I just don’t have the exact numbers he used for the 1981 concentration and annual increment.
After spending the afternoon rereading his paper, I think there is great deal to learn in a revisit to all of the papers that culminated in the 1988 paper. The 1988 paper at the end of section 4.1 states “The forcing for any other scenario of atmospheric trace gases can be compared to these three cases by computing DeltaT0(t) with the formulas provided in Appendix B.”
I must agree with those that said that the post that started this thread is essentially a puff piece. What is needed is a real revisit with the forcings recalculated for the actual trends of the GHG’s Hansen used.

Phil. says:
June 16, 2012 at 10:28 am
Bill Tuttle says:
June 16, 2012 at 9:42 am: “Did Hansen get it right with Scenario A? No. Did he get it right with Scenario B? No. Did he get it right with Scenario C? No.”
Did he do what he intended i.e. to bracket the future conditions, yes.
No, he *failed* to establish a bracket, Phil — kim2ooo June 15, 2012 at 9:38 am: “Temperatures are lower than Hansen forecast they would be if humans disappeared off the planet twelve years ago.” http://stevengoddard.wordpress.com/2012/06/15/clarifying-hansens-scenarios-worse-than-it-seems/ Try not to make such a fool of yourself.
Physician, heal thyself.

An earlier comment pointed out that Hansen’s paper states that a significantly reduced climate sensitivity (i.e. 1.5C/CO2 doubling or smaller) is needed (they used 4.2 C/CO2 doubling) to make a significant impact on the predicted delta T over just a few decades. We now have data that shows this.
But its worse than we thought:
1. The surface temperature record is assumed to be error-free. This is not true. There is arguably a 50% contribution to the measured delta T due to UHI alone. This halves the delta T that can be attributed to trace gas increases.
2. The GISS surface temperature record has been corrupted/adjusted since this paper was published. This invalidates the control runs and all of the parameter fitting that was used to initiate the model. This also raises the question about which version of corrections of whose surface temperatures should be used.
3. The surface temperature record anomalies from 1988 to present are assumed to have negligible contributions from natural variability. Since the 1970’s thru 1990’s was the positive half of a 50 – 60 year natural climate cycle, some of the observed temperature anomalies are due to natural variations. This reduces the delta T that can be attributed to trace gas increases.
4. The 1988 paper has predictions of the mid-troposphere hotspot becoming very pronounced, whereas measurements show it to be nonexistent still. This points to a fundamental flaw in the climate model.
Based on these issues alone, any agreement between predicted T anomalies from this paper and observed temperatures should be attributed to a fortuitous cascade of compensating errors, also known as dumb luck.

old engineer:
“I must agree with those that said that the post that started this thread is essentially a puff piece. ”
I’m similarly in agreement. I’d never heard of Prof. (emeritus) Solheim prior to this piece but his “analysis” here doesn’t make a very good first impression.
“What is needed is a real revisit with the forcings recalculated for the actual trends of the GHG’s Hansen used.”
See here: http://www.esrl.noaa.gov/gmd/aggi/ (KR kindly linked to this above.)
Anyway, this has been a fun thread. I’d say the lower half of it is a good deal more reasoned and less polarized than the top half. And I can happily say, I’ve learned a fair bit along the way.

Phil. says:
June 16, 2012 at 8:01 am
Gunga Din says:
June 15, 2012 at 7:24 pm
dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model was, and continues to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
PHIL: Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.
================================================================
ME: I can read (really!) but I hadn’t read some of what’s been said. Apologies.
But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter. Either way it’s not trustworthy. My main beef is that policies, very expensive policies both in lost dollars and freedoms, have been made based on this and other faulty predictions and “postdictions”. Example, CO2 is ruled a pollutant because the Wizard of COz said it was.

KR says:
June 15, 2012 at 9:50 am
“However, the NASA GISS GCM Model II page explicitly says: Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available.. They do have an improved, value-added version by the same name available to download though. It means there are unknown and (publicly) undocumented differences between the computational climate model used by Hansen et al. in the eighties and the current, publicly available version, developed and maintained by the Columbia University EdGCM project.”
I just checked out Model II. Man – I thought Model E was bad… yikes! I encourage everyone with a scientific programming background to check out the Model II source code at the links provided in KR’s post. Count all the GOTOs. And documentation… uh… what documentation? What differential equations? What numerical methods? NASA can do much better than this…

Gunga Din says:
June 17, 2012 at 1:33 pm
Phil. says:
June 16, 2012 at 8:01 am
Gunga Din says:
June 15, 2012 at 7:24 pm
dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model was, and continues to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
PHIL: Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.
================================================================
ME: I can read (really!) but I hadn’t read some of what’s been said. Apologies.
So you asserted that the model was based on CO2 only, without any facts to back it up; you should apologize for such misleading statements! But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter.
No we don’t; the “By 150% or 60%?” was based on ridiculous mis-statements which had no basis in fact! What is true is that the model used a very good estimate for the upcoming emissions over the next 25 years, while the sensitivity value, although reasonable at the time, has proved to be slightly high.

Question to dana1981:
In your “rebuttal” you had shown that the amount of greenhouse gases/CFCs that have contributed to the recent warming was around 0.7 W/m^2 over the last ~22 years, which you showed was higher than Scenario C: http://www.skepticalscience.com/pics/SolheimForcings.jpg
You then showed a graph that depicted the average of surface temperature stations vs. Hansen’s forecast: http://www.skepticalscience.com/pics/HansenSolheim.jpg
The temperatures are lower than Scenario C, yet we have an alleged higher energy imbalance than what was depicted in Scenario C? Something is not adding up.
It seems that your analysis unintentionally confirmed that Dr. Hansen DID overestimate Climate Sensitivity quite substantially.

And, to repeat – while CO2 has progressed roughly as both scenarios A and B projected, we have not gone through Scenario A, due primarily to CFC reductions and a rather lower than expected amount of methane.
At this point the most interesting aspect of climate science is what the next excuse will be.
On the plus side, we can predict with 100% certainty that we’ll be told we need to spend trillions of dollars to reduce CO2 emissions, no matter what temperatures actually do.

It was interesting to see a comment by our little buddy dana1981 that was dead center – re: “an amateur like me”. As best as I can tell, amateurish pretty much fits dana like a glove.
Anthony, as one who learned a long time ago to ignore name calling, the whole denialist label isn’t something that bothers me that much. I think you are right to point out its use and remind people that there is an obvious agenda behind its use, but I’d argue that deleting comments because they use the term isn’t necessary. When dana uses it he reminds everyone not only how mean-spirited and hateful an individual he is, but what lack of credibility he brings. He might as well be a walking, talking billboard for what Skeptical Science really is about. REPLY: Well you see, Dana Nuccitelli is rather immature (he’s a kid that rides a scooter) in his emotional view of the issue. He complained that commenters and contributors on WUWT were referring to Skeptical Science with the abbreviation “SS”, due to the Nazi connotation it carried, so I made it a policy not to use that abbreviation. I asked him not to use “denier” anymore, but he’s so full of hatred he can’t help himself. So, I just don’t have much sympathy for somebody who makes demands but won’t reciprocate – Anthony

Why didn’t they state that CFC’s were six or more orders of magnitude better GHG’s than CO2?
Well, obviously they needed to save that for 2012 when temperatures hadn’t increased and they needed to explain why. Clearly you are not a climate scientist.

That doesn’t quite jibe with Hansen’s projected numbers for 2011:
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
and observed average concentration for 2011:
– Mauna Loa: 391.57 ppmv
———————————————————————————————————-
Emissions were still worse than Scenario A, though — remember, CO2 absorption was among the many, many things Hansen was wrong about. China has been starting a coal plant every week (oddly we haven’t seen Hansen in front of any of them; I guess he heard what happened to that guy in Tiananmen Square around the time he was issuing failed predictions).

REPLY: Well you see, Dana Nuccitelli is rather immature (he’s a kid that rides a scooter) in his emotional view of the issue. He complained that commenters and contributors on WUWT were referring to Skeptical Science with the abbreviation “SS”, due to the Nazi connotation it carried, so I made it a policy not to use that abbreviation. I asked him not to use “denier” anymore, but he’s so full of hatred he can’t help himself. So, I just don’t have much sympathy for somebody who makes demands but won’t reciprocate – Anthony
But he’s cool with the abbreviation “SkS” and the Soviet weapon connotation.
Sheesh. Kids today…

Phil. says:
June 18, 2012 at 12:03 pm
PHIL: Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.
================================================================
ME: I can read (really!) but I hadn’t read some of what’s been said. Apologies.
But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter. Either way it’s not trustworthy. My main beef is that policies, very expensive policies both in lost dollars and freedoms, have been made based on this and other faulty predictions and “postdictions”. Example, CO2 is ruled a pollutant because the Wizard of COz said it was.
==========================================================================
PHIL:So you asserted that the model was based on CO2 only without any facts to back it up, you should apologize for such misleading statements!
===================================================
ME: Feel free to reread my first sentence.
++++++++++++++++++++++
ME from previous comment: But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter.
===================================
ME: I did misread this. Dana admitted Hansen was wrong. You never did.
==================================================================
PHIL responding to “ME from previous comment”: No we don’t, the “By 150% or 60%?” was based on ridiculous mis-statements which had no basis in fact! What is true is that the model which used a very good estimate for the upcoming emissions over the next 25 years, the sensitivity used a value which although reasonable at the time has proved to be slightly high.
========================================================================
So how “wrong” would you say he was (and is)? If you want to claim he was right for the wrong reasons (CFC reductions etc.) then why all the fuss about reducing CO2? Why claim CO2 is a pollutant when it do what he said it would? Why harness the world’s economies to its reduction?

Gunga Din says:
June 18, 2012 at 3:01 pm
Corection: “when it do what ” should be “when it doesn’t do what”.
===============================================================
I can’t read. I can’t type. What can I say?

Anthony,
Your site, your prerogative. I happen to believe that by allowing people to see the sort of comments people like Dana Nuccitelli make, you serve the purpose of marginalizing his ilk. One advantage of “hate speech” is it makes it easy to identify the haters.
On the other hand, it isn’t as if anyone who regularly reads this blog or who has ever ventured over to Skeptical Science doesn’t already know everything one needs to about Dana Nuccitelli’s behavior, and I can see why you might be tired of his brand of crap. It is perhaps perverse on my part to watch him make an ass of himself.

Phil,
How’s this for “bracketing future conditions”:
Linear Trend 1984-2012
Hansen’s temperature projections:
Scenario A: 3.37 deg C / century
Scenario B: 2.87 deg C / century
Scenario C: 2.06 deg C / century
Observed trend:
GISTEMP: 1.75 deg C / century
As you say above… I think it’s very clear and borne out by the data.
And here are those data in excel format in case you want to check my math:https://dl.dropbox.com/u/78507292/Hansen%20%281988%29%20Scenario%20B%20vs.%20GISTEMP.xlsx
And for a bonus treat, the spreadsheet also includes a statistical analysis of the annual differentials between Scenario B and GISTEMP from 1984-2012 showing a difference with ~86% significance.
Enjoy.
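The deg C/century trends listed above come from ordinary least-squares fits to the annual anomaly series; a stdlib-only sketch of that computation (any sample input here is synthetic, not GISTEMP data):

```python
def trend_per_century(years, anomalies):
    """Ordinary least-squares slope of anomaly vs. year, in deg C per century."""
    n = len(years)
    mean_y = sum(years) / n
    mean_a = sum(anomalies) / n
    slope = (sum((y - mean_y) * (a - mean_a) for y, a in zip(years, anomalies))
             / sum((y - mean_y) ** 2 for y in years))  # deg C per year
    return slope * 100.0  # convert to deg C per century
```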

Phil. says:
June 18, 2012 at 3:02 pm
Really, I think it’s very clear and borne out by the data: e.g.
Hansen’s projected numbers for CO2 in 2011 :
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
and observed average concentration for 2011:
– Mauna Loa: 391.57 ppmv
You contend that Hansen merely “[d]id…what he intended i.e. to bracket the future conditions, yes” — future conditions, plural. What he actually did was postulate a series of “if/then” scenarios, but if you consider Scenario A the upper bracket for future conditions (if CO2=x, then temperature=y) and Scenario C the lower bracket, then “it’s very clear and borne out by that data” that he failed to establish a bracket for future conditions.
A bracket is supposed to enclose something. While the observed CO2 concentration from Mauna Loa falls within the upper bracket (Scenario A) the observed temperature falls *outside* the lower bracket (Scenario C).
If it was solely Hansen’s intention to bracket future conditions, he didn’t even get that right.

Gunga Din says:
June 18, 2012 at 3:06 pm
Corection: “when it do what ” should be “when it doesn’t do what”.
===============================================================
I can’t read. I can’t type. What can I say?
You’ve got the same problem I do — you think faster than you type.
Fortunately, I still move faster than I think, otherwise I’d have been delivered to the coroner quite a while ago…
