

What do we learn from James Hansen's 1988 prediction?

What the science says...

Although Hansen's projected global temperature increase has been higher than the actual global warming, this is because his climate model used a high climate sensitivity parameter. Had he used the currently accepted value of approximately 3°C warming for a doubling of atmospheric CO2, Hansen would have correctly projected the ensuing global warming.

Climate Myth...

Hansen's 1988 prediction was wrong
'On June 23, 1988, NASA scientist James Hansen testified before the House of Representatives that there was a strong "cause and effect relationship" between observed temperatures and human emissions into the atmosphere. At that time, Hansen also produced a model of the future behavior of the globe’s temperature, which he had turned into a video movie that was heavily shopped in Congress. That model predicted that global temperature between 1988 and 1997 would rise by 0.45°C (Figure 1). Ground-based temperatures from the IPCC show a rise of 0.11°C, or more than four times less than Hansen predicted. The forecast made in 1988 was an astounding failure, and IPCC’s 1990 statement about the realistic nature of these projections was simply wrong.' (Pat Michaels)

Hansen et al. (1988) used a global climate model to simulate the impact of variations in atmospheric greenhouse gases and aerosols on the global climate. Unable to predict future human greenhouse gas emissions or model every single possibility, Hansen chose 3 scenarios to model. Scenario A assumed continued exponential greenhouse gas growth. Scenario B assumed a reduced linear rate of growth, and Scenario C assumed a rapid decline in greenhouse gas emissions around the year 2000.

Notice that Michaels erased Hansen's Scenarios B and C despite the fact that as discussed above, Scenario A assumed continued exponential greenhouse gas growth, which did not occur. In other words, to support the claim that Hansen's projections were "an astounding failure," Michaels only showed the projection which was based on the emissions scenario which was furthest from reality.

Gavin Schmidt provides a comparison between all three scenarios and actual global surface temperature changes in Figure 3.

As you can see, Hansen's projections showed slightly more warming than reality, but clearly they were neither off by a factor of 4, nor were they "an astounding failure" by any reasonably honest assessment. Yet a common reaction to Hansen's 1988 projections is "he overestimated the rate of warming, therefore Hansen was wrong."

In fact, when skeptical climate scientist John Christy blogged about Hansen's 1988 study, his entire conclusion was "The result suggests the old NASA GCM was considerably more sensitive to GHGs than is the real atmosphere." Christy didn't even bother to examine why the global climate model was too sensitive or what that tells us. If the model was too sensitive, then what was its climate sensitivity?

This is obviously an oversimplified conclusion, and it's important to examine why Hansen's projections didn't match up with the actual surface temperature change. That's what we'll do here.

Hansen et al. only modeled the temperature response to greenhouse gas changes (and a few simulated volcanic eruptions). So in his simulations, the greenhouse gas (GHG)-only forcing and 'all forcings' are the same. In reality, they are not, with the main non-GHG forcing involving human aerosol emissions, whose effects remain one of the biggest uncertainties in climate science.

In our analysis here, we're interested in the changes since 1988, particularly through 1998. The radiative forcing changes since 1988 are shown in Figure 4.

Both the GHG-only and net anthropogenic forcing changes between 1988 and 1998 were very close to Hansen's Scenario C, consistent with Figure 1 above, primarily due to the CFC emissions reductions as a result of the Montreal Protocol.

Recreating Michaels' Congressional Testimony Graphic

As Figure 4 shows, Hansen's Scenario B is currently closest to the actual forcing (according to Skeie et al.), though running about 16% too high since 1988. Figure 5 reproduces Hansen's Scenario B with a 16% reduction in the warming trend, to crudely correct for the discrepancy between it and the actual radiative forcing. This is roughly what Michaels' graphic would look like if he were to give an accurate version of his presentation today:

In Figure 3 we've included both GISTEMP data, and GISTEMP with solar, volcanic, and El Niño Southern Oscillation effects removed by Foster and Rahmstorf (2011). The 1988 to 2010 trends are similar: 0.20°C per decade with the natural effects, 0.18°C per decade without. Scenario B has a 0.23°C per decade trend, but when a simulated volcanic eruption in 1996 is removed, the trend decreases to about 0.22°C per decade.

As the figure above shows, Hansen's 1988 model overpredicted the ensuing global warming. However, it only overpredicted the warming by approximately 15 to 25%, which is a far cry from the 300% overprediction claimed by Michaels in his 1998 congressional testimony.
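As a sanity check, the "15 to 25%" figure can be reproduced directly from the decadal trends quoted above. This is a rough illustration; the trend values come from this post, but the pairing of projected versus observed trends is my own:

```python
# Rough check of the 15-25% overprediction figure, using the 1988-2010
# trends quoted above (all in °C per decade). The pairing of model vs.
# observed trends is an illustrative assumption, not from Hansen's paper.
trend_pairs = {
    "Scenario B vs. GISTEMP": (0.23, 0.20),
    "B without 1996 eruption vs. natural-removed GISTEMP": (0.22, 0.18),
}
for label, (projected, observed) in trend_pairs.items():
    overprediction = (projected - observed) / observed * 100
    print(f"{label}: projected trend ~{overprediction:.0f}% too high")
```

This yields overpredictions of roughly 15% and 22%, consistent with the 15 to 25% range.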

Climate Sensitivity

Climate sensitivity describes how sensitive the global climate is to a change in the amount of energy reaching the Earth's surface and lower atmosphere (a.k.a. a radiative forcing). Hansen's climate model had a global mean surface air equilibrium sensitivity of 4.2°C warming for a doubling of atmospheric CO2 [2xCO2]. The relationship between a change in global surface temperature (dT), climate sensitivity (λ), and radiative forcing (dF), is

dT = λ*dF
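To make the relation concrete, here is a minimal sketch evaluating dT = λ*dF for a doubling of CO2. The forcing value uses the standard simplified fit F = 5.35*ln(C/C0) from Myhre et al. (1998); expressing λ in °C per W/m² is my own framing of the article's equation:

```python
import math

# Radiative forcing for doubled CO2, using the Myhre et al. (1998) fit
# F = 5.35 * ln(C/C0) with C/C0 = 2.
dF_2x = 5.35 * math.log(2)   # ~3.7 W/m^2

# Sensitivity parameter lambda implied by an equilibrium response of
# 3°C per doubling (the IPCC best estimate quoted in this article).
lam = 3.0 / dF_2x            # ~0.81 °C per (W/m^2)

# dT = lambda * dF recovers the equilibrium warming for that forcing.
dT = lam * dF_2x
print(f"dF(2xCO2) = {dF_2x:.2f} W/m^2, lambda = {lam:.2f}, dT = {dT:.1f} °C")
```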

Knowing that the actual radiative forcing was slightly lower than Hansen's Scenario B, and knowing the subsequent global surface temperature change, we can estimate what the actual climate sensitivity value would have to be for Hansen's climate model to accurately project the average temperature change.

What we find is that Hansen's results add to the long list of evidence that climate sensitivity is not low. As noted above, Hansen's model overpredicted the ensuing global warming thus far by approximately 15 to 25%. Thus if we estimate that the sensitivity of his model was 15 to 25% too high (which is an oversimplification, but will give us a reasonably accurate back-of-the-envelope estimate), this suggests the actual climate sensitivity is approximately 3.4 to 3.6°C for doubled CO2, which is close to the IPCC best estimate of 3°C.
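The back-of-the-envelope rescaling described above can be written out explicitly. Dividing the model sensitivity by the overprediction factor is my own reading of the estimate, and as the article notes, this is an oversimplification:

```python
model_sensitivity = 4.2  # °C per doubled CO2, Hansen's 1988 model

# If the model ran 15-25% hot, the implied real-world sensitivity is
# roughly the model value divided by that overprediction factor.
for overprediction in (0.15, 0.25):
    implied = model_sensitivity / (1 + overprediction)
    print(f"{overprediction:.0%} too high -> ~{implied:.2f} °C per doubling")
```

This gives roughly 3.4 to 3.7°C per doubling, in the neighborhood of the article's 3.4 to 3.6°C estimate and well above low-sensitivity values.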

The argument "Hansen's projections were too high" is thus not an argument against anthropogenic global warming or the accuracy of climate models, but rather an argument against climate sensitivity being as high as 4.2°C for 2xCO2, but it's also an argument for climate sensitivity being around 3°C for 2xCO2, which is consistent with the range of climate sensitivity values in the IPCC report.

Spatial Distribution of Warming

Hansen's study also produced a map of the projected spatial distribution of the surface air temperature change in Scenario B for the 1980s, 1990s, and 2010s. Although the decade of the 2010s has just begun, we can compare recent global temperature maps to Hansen's maps to evaluate their accuracy.

Although the actual amount of warming (Figure 5) has been less than projected in Scenario B (Figure 4), this is because, as discussed above, we're not yet in the decade of the 2010s (which will almost certainly be warmer than the 2000s), and Hansen's climate model projected a higher rate of warming due to a high climate sensitivity. However, as you can see, Hansen's model correctly projected amplified warming in the Arctic, as well as hot spots in northern and southern Africa, west Antarctica, more pronounced warming over the land masses of the northern hemisphere, etc. The spatial distribution of the warming is very close to his projections.

Hansen's Accuracy

Had Hansen used a climate model with a climate sensitivity of approximately 3°C for 2xCO2 (at least in the short-term; it's likely larger in the long-term due to slow-acting feedbacks), he would have projected the ensuing rate of global surface temperature change accurately. Not only that, but he projected the spatial distribution of the warming with a high level of accuracy. The take-home message should not be "Hansen was wrong therefore climate models and the anthropogenic global warming theory are wrong;" the correct conclusion is that Hansen's study is another piece of evidence that climate sensitivity is in the IPCC stated range of 2-4.5°C for 2xCO2.

Hi there - I'm relatively new to commenting here so apologies if I'm missing something. I've read through dana1981's Advanced and Basic versions of this rebuttal, and something important appears to be omitted from this Basic version - namely that Pat Michaels was misleading in saying that "That model predicted that global temperature between 1988 and 1997 would rise by 0.45°C." Together with Peter Hogarth's updated chart (above), it appears that even though Hansen overestimated the sensitivity parameter, his Scenario C projection is not far off from the GISS measured temperatures. I'm not sure if it's too late to make any updates to the rebuttal, but the key conclusion here might be that Hansen's 1988 projections - even though based on far less data than we have now - were within the range of what has actually been observed. Furthermore, the measured warming provides support that Hansen had the fundamentals of climate science correct, namely that human factors are driving GHG emissions and causing global warming that is significant enough that it can be directly measured over just a few decades - not centuries from now.

Also while actual temps are in the range of Scenario C, greenhouse gas emissions have not followed those in that particular projection. It makes more sense to focus on Scenario B, which has been very close to actual emissions, and then determine why the actual temp change has been lower (mainly the climate sensitivity factor difference).

Ok thanks for clarifying about Scenario C. It still might not hurt to explain in the "Basic" version that: 1) Michaels was misleading by focusing on Scenario A and ignoring Scenario B, and 2) Hansen had less data in 1988 and got the sensitivity wrong, but his overall theory (GHG and temp increases) has been borne out by observations in the last decades. Thanks again for the great post here!

Climate sensitivity isn't an input, it's built into the model based on how various feedbacks react to a given forcing. I think understanding ocean interactions was one of the big challenges that took a while, perhaps the amount of CO2 uptake by the oceans.

The article says "we find that in order to accurately predict the global warming of the past 22 years, Hansen's climate model would have needed a climate sensitivity of about 3.4°C for a doubling of atmospheric CO2."
Can you show me those results? I'd love to see just how well the model worked for the 3.4 degree forcing.

RealClimate has a new post describing an article from Hansen 1981 and its prediction of future warming. Hansen was about 30% lower than observed warming for this 30 year validation. Perhaps a review of this article could be added to the predictions link.

Unfortunately, that prediction calls for a rapid increase in global warming in the near future.

"Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity. Therefore climate sensitivity would have to be much smaller than 4.2ºC, say 1.5-2ºC, in order to modify our conclusions significantly." Hansen (1988)

Russ - I guess that depends on what's considered 'significant'. Transient climate response tends to vary fairly proportionately to equilibrium sensitivity, so a lower sensitivity also means a lower transient response, and a smaller short-term warming. Not a huge difference, but like I said, it depends what you consider 'significant'.

A minor note, inspired by re-reading Myhre et al 1998 on the radiative forcing of various greenhouse gases:

The forcing from a change in CO2 is estimated as F = α * ln(C/C0) - this is a shorthand fit to what is calculated from a number of line-by-line radiative calculations.

The 1990 expression, which is what I presume Hansen used in the 1988 model, had α = 6.3, while Myhre et al 1998, using better radiative estimates, has α = 5.35. And that value has been used ever since in modeling estimations.

I suspect that difference in estimating radiative forcing may be responsible for much of the 4.2°C/doubling sensitivity Hansen 1988 (over)estimated, as opposed to the roughly 3°C/doubling value used now.
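That suspicion can be quantified with a quick sketch. The rescaling-by-forcing-ratio step is my own back-of-the-envelope assumption, not a rerun of the model:

```python
import math

# CO2-doubling forcing under the two simplified fits F = alpha * ln(C/C0).
f_1988 = 6.3 * math.log(2)    # older constant: ~4.37 W/m^2
f_1998 = 5.35 * math.log(2)   # Myhre et al. 1998: ~3.71 W/m^2

# Crudely rescale Hansen's 4.2°C/doubling sensitivity by the forcing ratio.
rescaled = 4.2 * (5.35 / 6.3)  # ~3.57 °C per doubling
print(f"old forcing: {f_1988:.2f} W/m^2, new forcing: {f_1998:.2f} W/m^2")
print(f"rescaled sensitivity: ~{rescaled:.2f} °C per doubling")
```

The rescaled value comes out near 3.57°C per doubling, close to the roughly 3°C figure used now.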

Tamino has updated Hansen's 1988 prediction by swapping in actual values of forcings (except volcanic) more recently than was done by RealClimate seven years ago. The forcings are closest to Hansen's Scenario C forcings. So actual temperatures should have been closest to Hansen's Scenario C model projection. Guess what?

Several obvious problems here.

First, Hansen's 1988 presentation was about emissions, as the Congressional record clearly shows. Actual global emissions look like Scenario A (if mainly because of China). Yes, the forcings in the model depend on concentrations, but the fact that methane or CO2 concentrations didn't do what the model expected is a failure of Hansen 1988 to accurately model the relationship of emissions to concentrations. This is part of what critics have accurately labelled the "three-card monte" of climate science: make a claim, then defend some other claim while never acknowledging the original claim was false. This is not how good science is done.

Second, the graph purports to measure whether Hansen 1988 made accurate predictions, but it doesn't start in 1988. To show the hindcast (which the model was deliberately tuned to!) is another kind of three-card monte.

One can quibble with the choice of the GISS surface temperature record, given that Hansen himself administered it, but the divergence from satellite doesn't really affect the conclusions, so suffice it to note the 1988 prediction looks slightly worse using UAH or RSS.

Another minor quibble is that the article is now somewhat out of date: we're well into the 2010s and the predicted warming is still not materializing. This also makes the prediction slightly more wrong.

Last, and most risibly, the resemblance of actual temperatures to Scenario C predictions is now being held up as vindication. Even if that were defensible on the merits, it would certainly have come as a great shock to Hansen and his audience in 1988. Imagine a time traveler arriving from 2014, barging into Senate hearings brandishing the satellite record and yelling "Hansen was right! Look, future temperatures are just like Scenario C!" The obvious conclusion would have been "oh good, there's no problem then."

As this is an advocacy site, I do not expect these errors to be corrected; I merely state them for the record so visitors to the site can draw their own conclusions.

TallDave, you say: "Actual global emissions look like Scenario A (if mainly because of China)."

Except that, if you read the link in the post directly above your own, you find information to the contrary. Perhaps you'd care to present your alternative data that demonstrate that emissions (or even better, forcings) do in fact look like scenario A.

Those emission scenarios were "what-if" cases (not predictions) of economic activity, which are not in and of themselves climate science. Any mismatch between those scenarios and actual events would only be an issue if Hansen were writing on economics, not on climate. He could hardly have predicted the Montreal Protocol on CFCs, for example.

"suffice it to note" - Strawman WRT satellite temps, as Hansen was discussing surface temperatures and not mid-tropospheric temperature; you're attempting to attack something that wasn't discussed in his paper.

Yes, that 1988 paper is now out of date. Amazingly, though, it's frequently brought up and attacked on 'skeptic' websites, as if it represented current information - so it's quite reasonable to discuss these attacks on SkS as the current denial myths they are.

The real question is whether the model performs well in replicating temperatures given particular forcings - and since it does, it was a pretty decent model. The predictions of the paper were not "either Scenario A, B, or C will occur", but rather "Given a set of stated emissions, here's what the temperatures are likely to be", establishing a relationship between concentrations of forcing agents, and temperatures.

The major problem with his 1988 predictions came down to slightly too high a sensitivity to forcings (4.2 C/doubling of CO2, rather than ~3.2). This was strongly influenced by the radiative models and available data of the time, with the simplified expression at the time for CO2 forcing being 6.3*ln(C/C0). That constant was updated a decade later by Myhre et al 1998 to 5.35*ln(C/C0). Hansen used the data he had at the time, and his sensitivity estimate was a bit high.

If Hansen had time-traveled a decade and used that later estimate, his input sensitivity would have been ~3.57 C/doubling of CO2 - and his emissions to temperature predictions for the next 25 years would have been astoundingly accurate.

Your comment is a collection of strawmen arguments, and rather completely misses the point of the Hansen 1988 paper. It drew conclusions regarding the relationship of greenhouse gases and temperatures, not on making economic predictions. If you're going to disagree with it, you should at least discuss what the paper was actually about.

"I would like to draw three main conclusions. Number one, the Earth is warmer in 1988 than at any time in the history of instrumental measurements. Number two, the global warming is now large enough that we can ascribe with a high degree of confidence a cause and effect relationship to the greenhouse effect. And number three, our computer climate simulations indicate that the greenhouse effect is already large enough to begin to effect the probability of extreme events such as summer heat waves."

Curiously, he makes no mention of emissions at all, when enumerating his three conclusions.

Later, and talking explicitly about the graph from Hansen et al (1988) which showed the three scenarios, he says:

"Let me turn to my second point which is the causal association of the greenhouse effect and the global warming. Causal association requires first that the warming be larger than natural climate variability and, second, that the magnitude and nature of the warming be consistent with the greenhouse mechanism. These points are both addressed on my second viewgraph. The observed warming during the past 30 years, which is the period when we have accurate measurements of atmospheric composition, is shown by the heavy black line in this graph. The warming is almost 0.4 degrees Centigrade by 1988. The probability of a chance warming of that magnitude is about 1 percent. So, with 99 percent confidence we can state that the warming trend during this time period is a real warming trend.

The other curves in this figure are the results of global climate model calculations for three scenarios of atmospheric trace gas growth. We have considered several scenarios because there are uncertainties in the exact trace gas growth in the past and especially in the future. We have considered cases ranging from business as usual, which is scenario A, to draconian emission cuts, scenario C, which would totally eliminate net trace gas growth by year 2000.

The main point to be made here is that the expected global warming is of the same magnitude as the observed warming. As there is only a 1 percent chance of an accidental warming of this magnitude, the agreement with the expected greenhouse effect is of considerable significance. Moreover if you look at the next level of detail in the global temperature change, there are clear signs of the greenhouse effect. Observational data suggests a cooling in the stratosphere while the ground is warming. ..."

(My emphasis)

Hansen then goes on to discuss other key signatures of the greenhouse effect.

As you recall, Talldave indicated that Hansen's testimony was about the emissions. It turns out, however, that the emissions are not mentioned in any of Hansen's three key points. Worse for Talldave's account, even when discussing the graph itself, Hansen spent more time discussing the actual temperature record, and the computer trend over the period in which it could then (in 1988) be compared with the temperature record. What is more, he indicated that was the main point.

The different emission scenarios were mentioned, but only in passing in order to explain the differences between the three curves. No attention was drawn to the difference between the curves, and no conclusions drawn from them. Indeed, for all we know the only reason the curves past 1988 are shown was the difficulty of redrawing the graphs accurately in an era when the pinnacle of personal computers was the Commodore Amiga 500. They are certainly not the point of the graph as used in the congressional testimony, and the congressional testimony itself was not "about emissions", as actually reading the testimony (as opposed to merely referring to it while being careful not to link to it) demonstrates.

Ironically, Talldave goes on to say:

"This is part of what critics have accurately labelled the "three-card monte" of climate science: make a claim, then defend some other claim while never acknowledging the original claim was false."

Ironic, of course, because it is he who has clearly and outrageously misrepresented Hansen's testimony in order to criticize it. Were he to criticize the contents of the testimony itself, mention of the projections would be all but irrelevant.

As a final note, Hansen did have something to say about the accuracy of computer climate models in his testimony. He said, "Finally, I would like to stress that there is a need for improving these global climate models, ...". He certainly did not claim great accuracy for his model, and believed it could be substantially improved. Which leaves one wondering why purported skeptics spend so much time criticizing obsolete (by many generations) models.

Tristan/KR — as I pointed out in my original post, you're making the usual mistake of confusing "emissions" for "concentrations" or "forcings." Again, Hansen made predictions explicitly based on emissions to Congress, see his full remarks in the pdf below. Indeed, the purpose of the hearing was to persuade Congress to take action on emissions.

Obviously because they're the only ones that can be tested on any meaningful time scale. Contra this site, the ability of a model to hindcast a highly complex phenomenon gives little confidence in its forecast (something painfully well-known in other fields).

Tom C is not saying Hansen did not mention emissions, just that they were not central to Hansen's testimony, and did not figure into his main points which were about whether observed warming had happened and whether it could be linked to GHGs and therefore be tied to humans.

What matters in any climate model is the GHG forcing, which is a function of concentrations. Emissions affect that, but indirectly and in a lagged fashion, so the timing of emissions and their makeup affect the realized concentration.

Hansen could not know that methane would show its odd pattern over time, that CFC production would be curtailed by the Montreal Protocol a few years later, and that the Soviet bloc would collapse, along with its industry. Current emissions of CO2 in particular could be pretty high, but if the timing of those emissions was backloaded, you will not see the overall forcing.

Your argument for your fixation on old models doesn't ring true. More sophisticated GCMs produced later have plenty of new data they can be compared against. Nobody gauges what you can do with a current computer based on what the Commodore Amiga could do in 1988. That is a crazy idea.

Stephen — Of course emissions are central to his testimony. Without the assignment of emissions scenarios, A B and C are just random points on a graph, chosen for no particular reason, with no relevance to policy.

"Hansen could not know that methane would show it's odd pattern over time"

Exactly my point, thank you. Hansen made predictions about things he could not know.

"You're argument for your fixation on old models doesn't ring true. More sophisticated GCMS produced later have plenty of new data they can be compared against."

I don't think you quite understand the problem. Models may produce any arbitrary amount of data, but reality produces one year of results to compare their predictions against per year, and that only after the prediction.

TallDave - You appear to have rather completely misunderstood my comment and the graph therein (not putting the graph in this comment, as it's right up there in @22). Concentrations of GHGs are well below what would result from Hansen Scenario A, and in fact below Scenario C, and the concentrations listed in that graph for the scenarios are what result after emissions and after accounting for the carbon cycle in the model.

CO2 has been reasonably close to the Hansen scenarios, in fact to all of them, because there is little difference (at this early date) between A, B, C, and observed CO2. But there have been far fewer emissions of CFCs, CH4, and N2O, and hence lower total GHG concentrations remaining, than in any of the Hansen scenarios. To a large extent the 1987 Montreal Protocol limiting CFCs is responsible for that difference, rather than cuts in fossil fuels.

"Hansen made predictions about things he could not know" Bzzzzt!!! You are attacking something other than the subject of Hansen's climate model. Hansen made projections of climate response, not predictions of economic development, demonstrating the modeled climate responding to various GHG changes. The scenarios were presented to map the response space. He wasn't, and isn't, speaking in the business of economics, but rather in the science of climate. If the relationship between observed emissions and climate change matches that of his model, then it's skillful.

Attacking a climate model because economic development and the ensuing emissions didn't exactly match the economic scenarios posed to map the input/output of those climate models is just absurd. That criteria would only be applicable to economic models, to economists, not climate science.

The best test is to run the model against observed emissions and see whether it matches observed temperature response, and [with the more correct CO2 direct forcing incorporated from Myhre 1998] Hansen's model does that quite well.

"Stephen — Of course emissions are central to his testimony. Without the assignment of emissions scenarios, A B and C are just random points on a graph, chosen for no particular reason, with no relevance to policy."

Not if his main point, as Tom C points out, was establishing the role of humans in climate change to that point, and not the future projections. TC has deliberated on the testimony itself. You should address his points, not simply make an assumption based on your impression of the role the graph must have played based on what is in it. I'm not saying that his presentation was not intended to get congress to act on emissions, mind you, just that his testimony focused on whether the observed temp change was human caused, and not on those scenarios.

"Exactly my point, thank you. Hansen made predictions about things he could not know."

As KR points out, he used what-if scenarios precisely because he could not predict future GHG development. That is the whole point of scenarios, and lack of certainty about the future is implicit in their use. Tell me, what would you do in this situation? Assume no change? That would be highly unlikely.

BTW...one scenario he didn't have involved a management decision (the Montreal protocol) that he himself would have approved of because of his model results. Does that mean he thought it was a bad idea, or that he believed such an action would have no effect on GHG forcing? Of course not. That would be akin to saying that because the model was right the model was wrong. Scenarios can't be thought of in that way.

What distinguishes all four from TallDave is that they have actually consulted the concentration data for the three scenarios, and done the calculations and compared them to observed changes in radiative forcing. All show actual forcings due to greenhouse gases slightly less than that for Scenario B, with the exception of Tamino who compares to all forcings (except volcanic) and finds the result slightly less than scenario C. (Note: he is not in disagreement with the others, he merely makes a different comparison.)

As can be seen from Steve McIntyre's graph, and in the following graph from Dana, while growth in CO2 (and N2O) was close to that predicted in Scenario A, growth in other greenhouse gases was way below that predicted for scenario A, so that the total forcing was significantly less than that in Scenario A.

(Note with respect to Dana's graph: Hansen 1988 included the value of a host of minor greenhouse gases by the expedient of doubling the concentration of CFC 11 and CFC 12. Because Dana compares to the actual values of CFC 11 and 12, he leaves out these other minor gases. The actual growth in GHG radiative forcing is slightly greater than shown in Dana's graph.)

The growth in CO2 concentration is close, but not the same as that in Hansen's scenario A. Specifically, throughout the 1990s growth in CO2 was less than projected in scenario C. Since then, the growth rate has exceeded that in Scenario A so that concentrations have recently risen to about the scenario A level (and will soon exceed it if they have not already) - a pattern that can be seen in the EPA graph. The lower initial growth results in a lower initial radiative forcing, and hence a lower initial temperature growth that will not be eliminated for several years due to the thermal inertia of the ocean.

This is one of many topics in climate science where the common pseudo-skeptical opinion (as presented by Dave) cannot be honestly sustained except by the expedient of not checking the data. Comments such as Dave's are therefore always either insincere or misinformed. Given the copious sources of information to the contrary, if misinformed by somebody who maintains some knowledge on the topic (as TallDave clearly does), then they are negligently misinformed.

2) TallDave quotes a small portion of the congressional testimony from a section of which I have already quoted at length. It comes just before the section I bolded, a section which makes quite clear that the purpose in mentioning the scenarios was simply to explain the features of the graph, not to draw any conclusions from it. In other words, in response to my extensive quotation, TallDave's only response is a small out-of-context quotation that fails to address any of the points I raised. Therefore it requires no further refutation.

His rhetorical question regarding Scenario C is shown to be less than candid by the fact that the common opinion of those who have analysed the data is that the observed GHG forcings most closely match scenario B.

3)

"Obviously because they're the only ones that can be tested on any meaningful time scale. Contra this site, the ability of a model to hindcast a highly complex phenemonen gives little confidence in its forecast (something painfully well-known in other fields)."

Contrary to TallDave's misinformed epistemology, there is no logical difference between forecasting and hindcasting. The only additional epistemic support to be obtained from successful forecasting is that forecasting is, by its nature, immune to overfitting the data. With GCMs, the number of parameters is very small relative to the number of predicted variables. That is not the case if you only pay attention to GMST, which is why pseudo-skeptics only consider GMST (plus a few other cherry-picked data) for comparison to models, whereas climate scientists validate models against a large range of observed data. That is also, by the way, why there is an approximately 15% mismatch between hindcast and observed GMST trends over the last thirty years. The models are not fitted to obtain that result (for if they were, they could get a better match), but obtain that near match anyway.
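The overfitting point can be illustrated with a toy sketch (a hypothetical polynomial fit to invented data, nothing to do with an actual GCM): a model with more free parameters always matches its own training data at least as well as a simpler nested model, which is precisely why closeness of fit to one curve, by itself, is weak evidence of skill.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)  # a linear "truth" plus noise

def train_error(degree):
    # Mean squared error of a least-squares polynomial fit on its own training data.
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = train_error(1)    # 2 free parameters
flexible = train_error(5)  # 6 free parameters: partly chases the noise
```

Because the 2-parameter model is nested inside the 6-parameter one, `flexible` can never exceed `simple` on the training data, yet that says nothing about predictive skill; hence validation against many independent observables.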

Response:

[PS] To all commentators on this thread. Please note the comments policy:

Concentrations are not emissions. If Hansen's explicitly described emissions scenarios didn't result in the concentrations he expected them to, that's a failure of his model. Pretending otherwise is just the kind of three-card monte that is causing the public to justifiably lose faith in climate science as an objective enterprise.

"Contrary to TallDave's misinformed epistemology, there is no logical difference between forecasting and hindcasting."

It is incumbent upon you to translate the EPA emissions data (shown as Tg of CO2 emitted over time) into radiative forcing over time. Otherwise, you have no ground on which to claim that the EPA data supports you.

When even the work of a self-styled "climate skeptic" refutes your claim, IMO you should probably look at your own position for error instead of repeating it ad nauseam without any modification - a behaviour which, by the way, is prohibited by the Skeptical Science comments policy.

I might add that characterising the work of tens of thousands of researchers across the world as a common con game is, at best, dancing on the line of an enormous accusation of deception.

The fact of the matter is that climate science is based first and foremost on rather basic thermodynamics (net radiative energy in vs. net radiative energy out), the radiative properties of the greenhouse gases, and the thermal properties of the atmosphere and ocean. Speaking bluntly, you're setting yourself up for a hopeless task if you think that your attempt at refuting Hansen's 1988 work will unravel the physics establishing the reality of global warming.

Also speaking bluntly, arguing against climate science in 2014 by reference to Hansen's work in the 1980s is equivalent to, say, arguing against oncology in 2014 by reference to where the knowledge base of that discipline stood in the 1980s. Such a line of argument says little to nothing about the science but much about the person who resorts to it.

"Concentrations are not emissions. If Hansen's explicitly described emissions scenarios didn't result in the concentrations he expected them to, that's a failure of his model. Pretending otherwise is just the kind of three-card monte that is causing the public to justifiably lose faith in climate science as an objective enterprise." (my emphasis)

And "scenarios" are not "models," as has been described ad nauseam. This is the kind of doublespeak that makes people use the phrase "climate denier" instead of "climate skeptic."

People run scenarios all the time, with finances, with their shopping, with personal life choices. They choose among scenarios using "models" that can allow them to determine which scenario produces the most favorable or viable outcome.

If I choose to save more money and my bank account increases as a result, that does not invalidate the same accounting model that has me go broke if I spend far beyond my income. The scenario (what I chose to do with my money) is different from the accounting model that turns that choice into a result. The same is true of Hansen's study: he tried to capture a range of possible scenarios so people could understand the consequences of different actions.
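The distinction can be made concrete with a toy sketch (purely illustrative, with hypothetical numbers): one accounting "model" evaluated under two different "scenarios". An outcome differing between runs reflects the scenario chosen, not a flaw in the model.

```python
def balance_model(start, monthly_net, months):
    # The "model": the same accounting rule regardless of the choice made.
    return start + monthly_net * months

# Two "scenarios": different choices fed through the identical model.
saving = balance_model(1000, 200, 12)         # save $200/month
overspending = balance_model(1000, -300, 12)  # overspend $300/month
```

One run ends in the black and the other deep in the red, yet both outcomes validate the same model; only the inputs differed.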

TallDave would have us accept Lindzen's model, in which temperature stays the same, rather than Hansen's model, in which temperature rises. He ignores that a new record high global temperature has been set at least every five years since 1988, and that 2014 is on track to be the warmest year ever recorded and the hottest the Earth has been in thousands of years.

TallDave wants to wish away the temperature data and claim that the temperature is not rising. Apparently, if Hansen could not predict in advance when people would reduce pollution, then the temperatures cannot be rising.

This is a bit off topic, but TallDave's assault on 'cutting edge 1980s science' helps illuminate a pattern I've seen before. Essentially, there seems to be a mindset amongst many deniers which reasons that belief in scientific principles derives from faith in individuals... so the goal becomes an effort to discredit individuals they see as the 'preachers' (?) of science rather than the actual facts.

For example, even if Hansen got some things wrong in the 80s (and he did, ironically just not the things TallDave is harping on) that would do nothing to invalidate modern global warming science. Over the past 30 years we've accumulated vastly more confirmation than Hansen had in the 80s... yet the mindset remains that if you can bring down someone viewed as the 'original actor' then all the facts just go away.

We saw the same thing with Beck's assault on Callendar's atmospheric CO2 level analysis... decades after Callendar had died and Keeling had proven him correct. Ditto the endless assaults on Michael Mann despite dozens of subsequent studies having confirmed the findings of the 'hockey stick'. Similarly, whenever I've discussed evolution with creationists they have been endlessly fixated on supposed errors by/flaws of Darwin... as if there hadn't been more than a century and a half of additional confirmation since then.

I don't know what this pattern of fixation on individuals / 'originators' means, but it shows up reliably enough that I suspect there must be some underlying reason. Does any of the 'psychology of denial' type research shed any light?

The moderation policy at SkS and the overall tone is far better than most other internet forums on the subject. I have read through this thread and I could not find an ad hominem argument against you TallDave. Can you quote or link the specific post that contained that particular logical fallacy?

Sorry folks, but you guys discredit yourselves by acting like a bunch of radicals defending your prophet. TallDave is absolutely right. You guys need to be clear about emissions and concentrations... Emissions are what are emitted. The EPA and the IPCC both conclude emissions have increased more than the 1.5% that Hansen classified as business as usual.

Now, if concentrations are lower than Hansen expected based on those emission scenarios... then you should clearly state that and admit, yes, the ocean (most probably) absorbed much more CO2 than Hansen expected and therefore CO2 concentrations are more in line with Scenario blah blah blah. ...

That is a calm, coherent argument. But claiming Hansen wasn't wrong... Hansen can't be wrong... that's just irrational and makes you look anything but scientific.

planet8788 - See this comment above. If you include all emissions, including in particular differences in CFC emissions due to the Montreal Protocol and in methane levels, the actual forcings are far closer to Hansen's Scenario C than A or B.

Note that Hansen did use a larger-than-currently-estimated direct CO2 forcing (largely due to early radiative mis-estimates of that forcing, corrected by subsequent research in 1998), but when you account for that issue his model was indeed quite good.

Your claim that "The EPA and the IPCC both conclude emissions have increased more than the 1.5% that Hansen classified as business as usual" [i.e. Scenario A?] is only close for CO2, not for all emissions of GHGs that were incorporated into his model. It is hence quite incorrect.

planet8788 @37, Hansen describes Scenario A in Appendix B of his paper, saying with regard to CO2:

"CO2 increases as observed by Keeling [at Mauna Loa] for the interval 1958-1981, and subsequently with 1.5% yr-1 growth of the annual increment."

For somebody lecturing us on needing to "...be clear about Emissions and Concentrations", it is astonishing that you have not noticed that the scenario is specified with respect to the concentrations as measured at Mauna Loa, not according to emissions data. You may think it was specified according to emissions because Hansen wrote in the main body of the paper:

"Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially."

That statement, however, gives the average growth across all greenhouse gases specified - not just of CO2. Therefore it cannot be interpreted as specifying the growth rate of CO2 uniquely. Further, it only gives an approximate value ("about"), and so is no basis for claiming any growth rate more exact than somewhere between 1.25% and 1.75%. In any event, the detailed statement in Appendix B takes precedence.

Based on the detailed statement in Appendix B, it is trivial to get the Mauna Loa data from 1958-1981 (actually 1959 forward for annual averages) and project from 1981 forward. It is not clear whether the increment is expressed as a percentage or as an absolute value. Taking the former possibility, Hansen's specified CO2 growth compared to actual values is as follows:

The final value in the Hansen projection represents 410 ppmv in 2015 (compared to an actual 401 ppmv). Incrementing on the absolute value gives a lower figure of 407 ppmv in 2015. In either case, Hansen's projected CO2 increase for Scenario A is comfortably larger than the actual increase.
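The absolute-value reading of the Appendix B specification can be reproduced in a few lines. The starting concentration and increment below are approximate Mauna Loa figures around 1981, used here only for illustration, not Hansen's exact inputs:

```python
# Scenario A CO2 per Hansen's Appendix B: after 1981, the annual increment
# itself grows by 1.5% per year. Starting values are approximate Mauna Loa
# figures (illustrative assumptions).
conc = 340.0   # ppmv, annual mean around 1981
inc = 1.5      # ppmv/yr, annual increment around 1981
for year in range(1982, 2016):
    inc *= 1.015   # the increment grows 1.5% per year
    conc += inc    # the concentration accumulates the growing increment
```

With these assumed starting values the projection ends near 407 ppmv for 2015, comfortably above the observed ~401 ppmv, consistent with the figures quoted above.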

For what it is worth, the tabular concentration data for the three scenarios and all included greenhouse gases, as used in Hansen 88 and supplied by Gavin Schmidt, shows a CO2 concentration for 2015 of 403 ppmv. Again, this is more than has actually occurred.

This denier talking point is based entirely on:

Ignoring four of the five greenhouse gases;

Ignoring Hansen's explicit specification of the scenarios in Appendix B;

Ignoring the tabular data as supplied by Gavin Schmidt;

Ignoring that 1982 (not 1989) is the first year of projection; and

Loudly bewailing the fact that annual emissions of CO2 have grown 1.62% per annum when Hansen only specified emissions growth, averaged across all GHGs, of "about" 1.5%.

If they shut their eyes any tighter against the facts of the case they would go permanently blind.

planet8788 @41, based on Gavin Schmidt's calculation, the Hansen 88 GHG concentration trajectories would have resulted in a net forcing increase relative to 1983 of 3.35 W/m^2 for Scenario A, 2.33 W/m^2 for Scenario B, and 1.41 W/m^2 for Scenario C. The actual increase was 2.2 W/m^2, or just below Scenario B and 56% greater than scenario C. More importantly, Scenario C has a slightly declining forcing from 2000, while anthropogenic forcings have continued to rise at an approximately linear rate:

Therefore it is seriously misleading to say "we're still at Scenario C".
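Using Gavin Schmidt's forcing numbers quoted above, the placement of the observed increase between the scenarios is easy to verify:

```python
# Forcing increases relative to 1983, W/m^2, per Gavin Schmidt's calculation.
scenario = {"A": 3.35, "B": 2.33, "C": 1.41}
actual = 2.2

excess_over_C = 100 * (actual / scenario["C"] - 1)  # percent above Scenario C
shortfall_from_B = scenario["B"] - actual           # W/m^2 below Scenario B
```

The observed forcing comes out roughly 56% above Scenario C but only 0.13 W/m^2 below Scenario B, which is why "we're still at Scenario C" is misleading.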

Further, and importantly, we are in our present position of a forcing increase slightly below Scenario B in part because of a significant and ongoing effort to reduce GHG emissions. The correct conclusion, therefore, is not that everything will be fine, but that we need to continue, and indeed substantially strengthen, those efforts. In the medium term (30 to 50-odd years), we need to bring net emissions to effectively zero. BAU will not do that. Even a continuation of current mitigation efforts will not do that.

Finally, even if we do that, we will reach a mean global temperature close to 2°C above the preindustrial average. Likely even that increase will be significantly harmful, and certainly it will be catastrophic for some. It is just a much better scenario than genuine BAU which, if pursued in the long term, would see the tropics become seasonally uninhabitable for large mammals (ie, humans, sheep, cattle, and dogs would die of heat prostration within a day or so of unairconditioned exposure to 'normal' heatwaves under that scenario).

That we are doing very slightly better than what Hansen considered the most likely scenario in 1988 is hardly a great comfort.

I had thought I was about to disagree with the post, but I might only be disagreeing with the myth. Anyway, I disagree with any premise that a computer simulation such as these can have "climate sensitivity" as an input, because it is an output, a result. I've written computer simulation programs. It is more problematic to have a pre-determined output from a computer simulation than it is to compute a trapdoor function in reverse; I think it is impossible. The only way would be iterative simulations, and even then sensitivity would still not be an input. You run a few simulations with diverse settings for parameters such as forcings, cloud effects and many others, then compare the "climate sensitivity" results. There is a technique I've forgotten, which I programmed in 1970 (ironically, for oil exploration), in which the parameter set with the least preferred result is discarded and the remaining results are projected from worst to best to yield a new set of preferred inputs, and so on iteratively until the desired result is produced; those inputs are then the ones to use. That is the only way that a simulation could produce an output such as "climate sensitivity" as a pre-determined result.
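The half-remembered technique described above, discarding the worst parameter set and projecting through the rest, sounds like the Nelder-Mead simplex method. A minimal sketch of that generic search, on a toy objective standing in for any "misfit between simulation and target" (nothing here is a climate model), might look like:

```python
import numpy as np

def objective(params):
    # Hypothetical stand-in for a simulation's misfit against a target output.
    a, b = params
    return (a - 1.2) ** 2 + (b + 0.5) ** 2

def simplex_search(f, points, iters=300):
    # Simplified Nelder-Mead: reflect the worst vertex through the centroid of
    # the rest; if that does not improve on the worst, shrink toward the best.
    pts = [np.asarray(p, dtype=float) for p in points]
    for _ in range(iters):
        pts.sort(key=f)                       # best first, worst last
        centroid = np.mean(pts[:-1], axis=0)
        reflected = 2.0 * centroid - pts[-1]  # project worst through the rest
        if f(reflected) < f(pts[-1]):
            pts[-1] = reflected
        else:
            pts = [pts[0]] + [pts[0] + 0.5 * (p - pts[0]) for p in pts[1:]]
    return min(pts, key=f)

best = simplex_search(objective, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

This only illustrates the generic search; in an actual GCM the parameters are physically constrained and climate sensitivity emerges from the simulated physics rather than being tuned toward as a target.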

Sorry, I have two eyes, and the data clearly shows me it's closer to Scenario C, not Scenario B.

Response:

[PS] It is not clear to me that you have read the article that carefully. Read the comment policy - you are risking sloganeering. Scenario B is about an emissions scenario. Hansen considered what would happen for 3 different emissions scenarios. What we actually emitted is closest to Scenario B, so we should compare his prediction for Scenario B to actual temperatures. His prediction is too high for reasons the article discusses.