Hansen 1988: Details of Forcing Projections

During our discussions of the differences between Hansen Scenarios A and B – during which the role of CFCs in Scenario A gradually became clearer – the absence of a graph clearly showing the allocation of radiative forcing between GHGs stood out rather starkly to me. When Gavin Schmidt revisited the topic in May 2007, he only showed total forcing, both in his graphic here (see below) and in the data set http://www.realclimate.org/data/H88_scenarios_eff.dat .

Figure 1. Forcing totals from Schmidt (2007)

Schmidt summarized the differences as follows:

The details varied for each scenario, but the net effect of all the changes was that Scenario A assumed exponential growth in forcings, Scenario B was roughly a linear increase in forcings, and Scenario C was similar to B, but had close to constant forcings from 2000 onwards. Scenario B and C had an ‘El Chichon’ sized volcanic eruption in 1995.

While it is true that the total forcing in Scenario A is “exponential” and the forcing in Scenario B is “linear”, the graphic below shows my estimates of how the forcing breaks down between GHGs within each of the three scenarios using contemporary or near-contemporary “simplified expressions”.

Figure 2. Radiative forcing for three Hansen scenarios and calculations based on observed and A1B GHG concentrations.

Obviously one point sticks out like a sore thumb: Scenario A increases are dominated by CFC greenhouse effect. In Scenario A, the CFC contribution to the Earth’s greenhouse effect becomes nearly double the CO2 contribution during the projection period. This is not mentioned either in Hansen et al 1988 or in Schmidt (realclimate, 2007).

The allocation between GHGs also clarifies why Scenario B is “linear” and Scenario A “exponential”. In Scenario B, the main forcing comes from CO2 increase (not CFC increase). The “simplified expression” for the relationship between CO2 concentration and temperature change is logarithmic; thus, even though CO2 growth is (modestly) exponential in Scenario B, the composite of the exponential increase in GHG concentration and a logarithmic expression relating CO2 concentration to forcing yields linear growth in forcing.

On the other hand, the simplified expression relating CFC concentration to forcing is linear. Thus the exponential growth in CFC concentration in Scenario A (and the CFC growth rate is hypothesized to be much stronger than for CO2), combined with a linear relationship leads to an exponential growth in total forcing – driven primarily by CFCs.
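The shape argument can be sketched numerically. This is an illustrative toy calculation, not Hansen’s actual expressions: the CO2 coefficient 6.3 is the IPCC 1990 value, but the CFC coefficient, baselines, and growth rates here are placeholders chosen only to exhibit the shapes.

```python
import math

# Compare the shape of forcing growth when an exponentially growing
# concentration is passed through a logarithmic response (CO2-like)
# versus a linear response (CFC-like).

def co2_forcing(c, c0=315.0, k=6.3):
    """Logarithmic simplified expression, W/m^2 (k from IPCC 1990)."""
    return k * math.log(c / c0)

def cfc_forcing(x, x0=0.0, k=0.25):
    """Linear simplified expression, W/m^2 (illustrative coefficient)."""
    return k * (x - x0)

years = range(0, 61, 10)
co2 = [co2_forcing(315.0 * 1.01**t) for t in years]        # 1%/yr growth
cfc = [cfc_forcing(0.2 * 1.03**t, x0=0.2) for t in years]  # 3%/yr growth

# Logarithm of an exponential is linear in t: decadal increments are equal.
co2_steps = [round(b - a, 3) for a, b in zip(co2, co2[1:])]
# A linear map of an exponential stays exponential: increments grow.
cfc_steps = [round(b - a, 3) for a, b in zip(cfc, cfc[1:])]
print(co2_steps)  # constant steps -> linear forcing
print(cfc_steps)  # growing steps -> exponential forcing
```

Even with CO2 growing exponentially, the log response flattens it to a straight line in time, while the linear CFC response preserves the exponential shape intact.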

Calculation Method
The GHG concentrations used for the above calculations are taken from the file posted at realclimate on Dec 22, 2007 ( http://www.realclimate.org/data/H88_scenarios.dat ). Something close to these concentrations could be calculated from the verbal descriptions of Hansen et al 1988 (as I had done here […] prior to becoming aware of this file at realclimate).

Hansen et al 1988 contained a series of “Simplified Expressions” relating forcing (expressed as delta-T) to GHG concentrations for each of the GHGs. Usual current practice is to express forcing in W m-2; IPCC 1990 (p 52) said that a factor of 3.35 was needed to convert the Hansen equations to W m-2, and this has been applied here. (For the CFC11 and CFC12 equations, which could be checked directly, this conversion factor reconciles the Hansen expressions to the IPCC 1990 expressions.) I was unable to get the Hansen et al 1988 expressions for CH4 and N2O forcing to yield sensible results, and for these two gases I used the expressions in IPCC 1990 to convert GHG concentration to radiative forcing. If anyone can get the Hansen et al 1988 expressions for CH4 and N2O to work, I’d be interested; I spent a fair bit of time on this before abandoning the effort and going with the IPCC 1990 expressions.
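For reference, the IPCC 1990 simplified expressions can be coded up along the following lines. The coefficients below are restated from my recollection of the report (CO2 logarithmic; CH4 and N2O square-root terms with a band-overlap correction; CFC11/12 linear), so check them against IPCC 1990 before relying on them.

```python
import math

# IPCC 1990 simplified expressions (coefficients as I recall them; verify
# against the report). Units: CO2 in ppm, CH4 and N2O in ppb, CFCs in ppb.

def overlap(m, n):
    """CH4/N2O band-overlap term, W/m^2."""
    return 0.47 * math.log(1 + 2.01e-5 * (m * n) ** 0.75
                             + 5.31e-15 * m * (m * n) ** 1.52)

def f_co2(c, c0):
    return 6.3 * math.log(c / c0)

def f_ch4(m, m0, n0):
    return 0.036 * (math.sqrt(m) - math.sqrt(m0)) - (overlap(m, n0) - overlap(m0, n0))

def f_n2o(n, n0, m0):
    return 0.14 * (math.sqrt(n) - math.sqrt(n0)) - (overlap(m0, n) - overlap(m0, n0))

def f_cfc11(x, x0):
    return 0.22 * (x - x0)

def f_cfc12(x, x0):
    return 0.28 * (x - x0)

# Example: forcing from doubled CO2 over a 280 ppm base, and the 3.35 factor
# used above to convert between Hansen's delta-T form and W/m^2.
print(round(f_co2(560, 280), 2))         # prints 4.37 (W/m^2)
print(round(f_co2(560, 280) / 3.35, 2))  # prints 1.3 (deg C, no feedbacks)
```

The 4.37 / 3.35 ≈ 1.3 deg C figure is consistent with the no-feedback radiative-convective numbers discussed later in the post.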

In all cases, I’ve used the “pre-industrial” concentrations used by NOAA in their calculations. In some Hansen calculations, a 1958 base is used (and these differences can be readily reconciled, at least to a first approximation).

Discussion

Hansen et al 1988 said that “resource limitations” ultimately checked the expansion of Scenario A. While this is true for CO2, I’m a bit dubious that resource limitations come into play in connection with CFC emissions. Obviously CFC emissions have not increased anywhere near as fast as in Hansen Scenario A. I can’t comment on whether this is due to the Montreal Protocol or other factors, but “resource limitations” seem highly unlikely as a limiting factor in the CFC growth history.

Second, while Hansen et al 1988 disclosed that they doubled the CFC11 and CFC12 contributions to account for minor CFCs, this seems like a pretty aggressive accounting, especially in a context of very strong CFC11 and CFC12 growth. Without an illustration of the allocation of forcing by GHG, an innocent reader of this article could easily assume that the doubling was a simple and reasonable way to deal with a minor effect and immaterial to the results. If the assumption is material (as it is), then the treatment and analysis of this assumption seems far too casual.

Third, one wonders how much subsequent controversy might have been avoided if Hansen et al had clearly shown and discussed the allocation between GHG in the clear form shown above. Here is how Hansen et al 1988 Figure 2 showed the results:

In my opinion, this graphic does not clearly show that the CFC contribution to Hansen’s greenhouse effect in Scenario A becomes double that of CO2. Aside from the graphic, the running text does not clearly state that CFC greenhouse contributions exceed CO2 contributions during the projection period. Had there been a clearer graphic together with an explicit recognition of the CFC contribution, people would have been able to look past Hansen’s unfortunate description of Scenario A as “Business As Usual” in his 1988 testimony, see that it was really an implausible upper bracket scenario (just as Scenario C was an implausible lower bracket scenario), and place no weight on the “Business as Usual” label. In passing, Hansen’s 1987 testimony, not previously discussed here, provided the following further information on Hansen’s views on the respective merits of these scenarios:

Scenario A assumes that CO2 emissions will grow 1.5% per year and that CFC emission will grow 1.5% per year. Scenario B assumes constant future emissions. If populations increase, Scenario B requires emissions per capita to decrease. Scenario C has drastic cuts in emissions by the year 2000, with CFC emissions eliminated entirely and other trace gas emissions reduced to a level where they just balance their sinks. These scenarios are designed to cover a very broad range of cases. If I were forced to choose one as the most plausible, I would say Scenario B. My guess is that the world is now probably following a course that will take it somewhere between A and B. (p. 51)

If one is trying to evaluate Hansen’s skill as a forecaster of GHG concentrations, I think that this is probably the most reasonable basis – thus, some sort of weighted average of A and B, with somewhat more weight on B would seem appropriate.

Fourth, one has to distinguish between Hansen’s abilities as a forecaster of future GHG concentrations and the skill of the model, with Hansen himself obviously placing more weight on his role as modeler than as a GHG forecaster. To the extent that “somewhere between A and B” represented Hansen’s GHG forecast, and given that GHG increases appear to have been closer to B, it is more reasonable to use B to assess the model performance. (It would be more reasonable still for NASA to re-run the 1988 model with observed results.)

Fifth, Hansen argued vehemently that the skill of his results should not be assessed on Scenario A results. Fair enough. The difference between Scenario A and Scenario B points to the need to look carefully at the GHG concentration projections embedded in forecasts, which in 2007 are the IPCC SRES projections. The evaluation of the IPCC SRES projections becomes an important and perhaps under-appreciated activity. If one is prepared to agree with Hansen’s position that he should not be assessed on Scenario A (and I, for one, am prepared to agree on this), then it points to the need for caution in publicizing results from today’s version of a Scenario A (e.g. perhaps IPCC A2), a point raised by Chip Knappenburger at RC.

Sixth, none of these calculations deal with feedbacks. The sort of numbers that result from these calculations are in the range of 1.2 deg C for doubling CO2, depending on the precise radiative-convective model. In this case, the “simplified expressions” are based on the Lacis et al 1981 radiative-convective model, not on GCMs. Similar results are obtained with other radiative-convective models, and someone seeking to dispute the results would need to show some systematic over-estimation in the radiative-convective models.
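The feedback point can be made concrete with the standard zero-dimensional feedback algebra (textbook material, nothing specific to Hansen 1988): a no-feedback response dT0 is amplified to dT0 / (1 - f) by a net feedback factor f.

```python
# Schematic feedback amplification: the ~1.2 deg C no-feedback doubling
# response is the f = 0 case; positive net feedback f < 1 amplifies it.
def equilibrium_warming(dT0, f):
    assert f < 1, "f >= 1 would be a runaway regime"
    return dT0 / (1 - f)

dT0 = 1.2  # no-feedback 2xCO2 response from a radiative-convective model, deg C
for f in (0.0, 0.4, 0.6):
    print(f, round(equilibrium_warming(dT0, f), 2))
# f = 0 recovers the 1.2 deg C value; f = 0.6 gives a GCM-like ~3 deg C
```

The dispute over sensitivity is thus a dispute over f, not over the no-feedback numbers discussed in this post.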

Seventh, Hansen et al 1998, not cited in Schmidt (2007), contains an interesting and reasonable discussion of the 1988 scenarios ten years later; I’ll review this discussion in a subsequent post.

Comments

The OTGs are described in IPCC reports and are mostly CFCs and similar compounds other than CFC11 and CFC12. I am unaware of any rational basis for the doubling. There is a list of other CFCs in IPCC 1990, with estimates of the radiative forcing of each; as I recall, IPCC 1990 estimated that they contributed about 1/3 additional – which would still yield a strong CFC effect in a Scenario A variation with the extrapolated CFC consumption rates used in that Scenario.

3, it would be interesting to know if that also included the HCFCs such as R-22 and HFCs such as R-134a, which were the substitutes phased in to replace the CFCs. As the CFCs decreased in the western world (but are still to this day widely manufactured and used in developing countries), the volume of the substitutes, particularly 134a, would increase. Of course, 134a has one fifth the “greenhouse potential” of R-12, so we could be releasing significant quantities of it while still seeing a reduction in effect.

A curious side-note: one of the leading contenders to replace 134a in the next generation of refrigeration equipment is CO2.

Steve, now you’re exaggerating, how about showing the top half of fig 2 from Hansen 88, it clearly shows that CO2 contribution would be less than CO2+trace gases. This post by you is gross misrepresentation.

6, it’s quite simple; the gases shown are (to a first approximation) “inert”; if you put a ton of one in the air, the air has one more ton than you started with (which is actually not true of CO2, but that’s another discussion). Water vapor, otoh, is determined (again to an approximation) by temperature, and if you put a ton in the air, you get a ton more rain out of it somewhere else. It doesn’t accumulate.

Steve M cut off two of the three panels, including the legend in the top panel. The 3 lines are, top to bottom, scenarios A,B,C. Steve M’s point is still 100% correct, however: the CFCs are not broken out in Hansen’s Fig. 2.

So Phil is just being Phil.

I will say what Steve M won’t. Hansen was trying to sell an implausible scenario A as BAU. He was being deceptive in his effort to move his agenda forward.

#8
Isn’t it true, though, that according to AGW theory water vapor has to increase in order to amplify the effect of CO2, because CO2 can’t raise the temperature much on its own, its ability to absorb IR being logarithmic? Meaning if you double it you don’t get double the heat. So we would have to have an increase in water vapor to increase GHG-induced warming, but that is not happening. You should also have the greatest increase in temperature higher in the atmosphere, where water vapor is considerably less and where the GHGs have more influence; but once again that is not happening. This was shown by Roger Pielke Sr. in this paper: http://climatesci.colorado.edu/2007/12/18/climate-metric-reality-check-3-evidence-for-a-lack-of-water-vapor-feedback-on-the-regional-scale/

It would appear that BAU was nothing more than exponential increases of everything, and Scenario B was actually based on serious forecasting. Which raises the question: what was the purpose of the BAU scenario, since the Montreal Protocol was all but a done deal when this was published? It was a scenario that couldn’t happen. Who’s being dishonest, Phil.?

Steve has done himself a huge disservice with this post; the other blogs will have a field-day showing that he’s prepared to falsify the record to further his agenda against Hansen, and frankly he’s got no out this time. You can’t cut off the top of a graph in a paper, do calculations that confirm the part you’ve excised, and then turn round and say on your blog that the original author was remiss in not showing that data. ‘Gross misrepresentation’ is probably the mildest term that will be used to describe it; it’s a huge ‘own goal’.
Yes bender, I am being Phil.; if a grad student of mine did this he’d be out on his ear.


Phil, please explain where on Figure 2 of Hansen et al 1988 you see CO2 forcing broken out from CFC forcings?

Where in the paper did Hansen et al break out the forcings by GHG in the same detail Steve did?

Where in Steve’s posting does he do anything other than demonstrate that Hansen’s graphs could have been better?

Steve: In response to your point, I amended the figure to show all three panels. I had showed the excerpt so that people could directly reconcile from the totals in my three panels to the shape of the total forcings in the Hansen middle panel, thereby showing apples and apples, but that was for convenience of the comparison and I’m happy to show the full Figure as you suggest.

I reject your allegations. This blog shows more original materials than any other place that you can identify. It’s absolutely against my policies to deny people original materials; I show them all the time. Do you have any issues with the restated version?

#14 The full figure is available in the url in #10 I provided. Steve M’s argument stands.

I don’t understand why you insist on being such a turkey when you are obviously a capable commenter.

1. Steve M is not your grad student.
2. This blog is his lab notebook. It’s ok to paste one panel of a figure into your lab notebook. If you punish your grad students for that, you’re a psychopath.
3. No one is going to a field day with this because it’s inconsequential.
4. You accuse other people of imagining things, making things up, taking things out of context. But you do it more than anyone else!

I’ve edited the post in response to Phil’s observation. I try to show the underlying materials so that people can judge things for themselves. I have amended the post and shown the complete Figure 2. I showed the excerpt so that people could observe for themselves that the shape of the plots matched the shape of the plots in my Figure (thereby demonstrating a reconciliation of sorts and that I was comparing apples and apples – I was looking at the figure in that context). I’ve also re-arranged the points somewhat to respond to comments. For people worried about crossings-out, think of it as responding to review comments. I’ve also added a direct display of Gavin Schmidt’s figure, previously only linked.

The fact that Hansen et al 1988 has been discussed for 20 years without people observing the CFC contribution to Scenario A is proof of the validity of my observation.

Jeez, this post isn’t particularly critical of Hansen. I observed that Scenario A is not really a Business As Usual scenario, but an upper bracket scenario and that some subsequent confusion could have been avoided with a clearer graphic. Is there anything incorrect about this?

Covered up? Isn’t it obvious by the commentary here that covering up in the blogosphere is not possible? Steve M posted one third of a graph. You whined. He posted the full graph. The argument does not change, no error was made, there is no cover-up.

One thing Hansen et al forgot when making all these projections is that CFCs, OTGs, or anything man-made that might have a GHG effect can only have this effect if it is released into the atmosphere.

They projected that it would all be released to the atmosphere; it wasn’t, so the projection was and is wildly wrong.

I infer from Phil’s nit-picking that he agrees with the substantial remark:

Had there been a clearer graphic together with an explicit recognition of CFC contribution, people would have been able to look past Hansen’s unfortunate description of Scenario A as “Business As Usual” in his 1988 testimony and see that it was really an implausible upper bracket scenario, just as Scenario C was an implausible lower bracket scenario, and place no weight on Hansen’s “Business as Usual” label.

Aside. It’s quite an interesting forensic job in coming to understand how the alarmists got control of the agenda using shaky science. I sure hope their models aren’t wrong.

Jeez, this post isn’t particularly critical of Hansen. I observed that Scenario A is not really a Business As Usual scenario, but an upper bracket scenario and that some subsequent confusion could have been avoided with a clearer graphic. Is there anything incorrect about this?

No Steve, you’re fine.

Hansen habitually makes lousy graphs. He’s not alone and it’s certainly no sin, but it is annoying.

Phil didn’t even read your post in enough detail to distinguish between Hansen’s reference to “Trace Gases” and your reference to CFCs.

So, Steve, given Hansen’s projections (warts and all … as presented here, that is), should we be worried at all about CO2 and global warming … or not? Or should we just focus on the OTGs … or some blend thereof? Have you formed an opinion? For example, does CO2 present enough of a risk that Hansen’s current call for a moratorium on (what I call) free range coal (i.e., non-sequestered burning) is justified by his ’88 projections (leaving current models and projections aside)? I am frankly not smart enough (or qualified enough) to draw any real conclusions from all of this. Any thoughts or comments would be greatly appreciated.

RE: #5 – One of the other significant uses of CFCs, specifically, Freon, was for removal of solder flux and other process residues in soldering processes, specifically in electronics manufacturing. With the advent of water soluble fluxes and no clean fluxes, that usage has essentially just gone away.

#24 Yes, that could well be it. Phil wasn’t paying attention and mistook OTG in the graph for CFC in the text. Fact is, CFC is not in the Hansen graph. If that’s the case he will quiet down on this one.

General trend noticed. The alarmists squeal with delight whenever they think they’ve routed out another denialist, misrepresenting data. But that’s the nature of witch hunting: look hard enough and you see them everywhere.

26, actually, there are a lot of “other” uses of CFCs. In the ’70s, they were used as aerosol can propellants (something that not-so-bright people to this day confuse with atmospheric aerosols). I think other non-CFC compounds that had electrical/industrial applications (such as SF6) were also lumped into that “OTG” category, but there were ready substitutes for most of them.

#27 You’re not dense. Phil’s quieted down because he’s realized he made his own mistake, as #24 suggests. You won’t get him to clarify because he would have to admit it was a false alarm. He’s got his moral victory getting Steve to change the graph. That’s the most he can hope for out of this one.

#25
That’s the question, Eric. The answer depends on the effects of these things. And that’s why Steve M is after an engineering quality report of how the effect of these things is computed. The purpose of CA is to audit climate science – to figure out how reliable the numbers really are. When that’s done, you’ll have your answer. Meanwhile, it’s the precautionary principle.

Gleaning anything in 2008 from Hansen’s 1988 scenarios is fairly pointless, but if you insist I would take this from them: The closest modeled temperature outcome to observed temperature results from Scenario C. However, the GHG concentrations assumed in those model runs are significantly lower than the observed, and actually assumed decreasing GHG concentrations after about 1999. One of two conclusions can be drawn. Either the model itself is wildly inaccurate and incapable of yielding meaningful results, or else the actual warming effect of CO2 and the other GHGs is significantly lower than reported by the prevailing consensus. Nothing else should be taken from the 1988 report given the observed temperatures and atmospheric gas concentrations over the last twenty years.

#25. I try not to discuss policy. I’ve said that, if I had a big policy job, I’d be guided by the major institutions; also I would not require perfect certainty to make decisions. People make decisions in business all the time with unquantifiable uncertainties based on best judgement in cases where it’s irrelevant to talk about 95% confidence or anything like that.

In practical terms, I’m glad that Ontario is 50% nuclear. I don’t see any reason for Western countries not to be building nuclear plants as fast as they can. If it costs more than coal, so be it. I suspect that I’d agree with Hansen more often about coal than I’d disagree with him.

I think that people are going to have to come to grips with China and India. It doesn’t make sense to me to try to deal with things without them; at the same time, one has to recognize their needs. It’s not easy, but nothing is accomplished by ignoring them. If there’s a big problem, I suspect that any of the present Kyoto-type policies are merely nibbling around the edges of what really needs to be done.

It would be more reasonable still for NASA to re-run the 1988 model with observed results.

Exactly. This is the only way to determine the ‘skill’ of the model in forecasting GMT. On the face of it, using observed levels of CFCs etc. should improve the model’s forecast accuracy. Not re-running the models just leads to the suspicion that there are other problems with the models that they would prefer to shield from scrutiny.

Take your time and carefully consider the matter, educate yourself, ask questions, etc. If you believe that there is insufficient time, then your decision is already made, but not by you. My advice: make your own decisions.

Steve, really, your third claim that “how much subsequent controversy might have been avoided if Hansen et al had clearly shown and discussed the allocation between GHG in the clear form shown above” is silly. First, in 1988 almost no scientific journal regularly published color figures like your proposed replacement. And second, Hansen’s figure 2 which you show in full here clearly shows the CO2 contribution in the first panel (where scenarios A and B are close) and “CO2 + trace gases” in the second panel (where scenario A goes way above scenario B), which should make it obvious to anybody looking at the figure that the divergence is due to “trace gases”. In 1988 I doubt these were even directly computer-generated graphs; papers I published at the time used a skilled artist to render an image suitable for publication. One didn’t generally try out multiple different layouts of a graph to improve clarity, once the basic message was there.

And then your sixth claim “none of these calculations deal with feedbacks. ” – none of *what* calculations? Hansen’s 1988 paper described results of his model calculations at the time, a model that, as I understand it, had a 4 K sensitivity (adding a forcing corresponding to doubling CO2 led to 4 K long-term increase in temperature). That clearly does include the feedbacks (I’m sure mostly water vapor) as they were understood at the time.

Steve: I used the phrase “clearly shown and discussed”. Your point about color graphics is fair enough. However, note bender’s observation about gray scale (or crosshatch) etc. The graphics could and should have been rendered more clearly. More importantly, an author in 1988 needed to work harder with his text to describe the figures. With a good color graphic, some obvious points might be left to the reader (though ideally this shouldn’t happen). With a bad graphic, there is all the more reason for a clear statement that, in Scenario A, CFCs overtake CO2 in their contribution to the greenhouse effect.

“These calculations” are the forcing estimates – I thought that that was self-evident.

33, Steve, I’m still a little mystified by what precisely these other trace gases could be. The PDF that bender linked was a scan, so I couldn’t search it and see if there was anything specific, but visually scanning it, I didn’t see any mention of what those compounds could be. And it matters a lot, because every different chemical has its own very specific commercial dynamics and atmospheric dynamics, and you can’t just apply an exponential growth to them and call it business as usual. And absent any evidence to the contrary, it appears like that’s what Hansen did. Someone here please prove me wrong.

Potential effects of several other trace gases (such as O3, stratospheric H2O, and chlorine and fluorine compounds other than CCl3F and CCl2F2) are approximated by multiplying the CCl3F and CCl2F2 amounts by 2.

42, That answers that. I’m a little taken aback by the arbitrariness of that. Especially since, as I said earlier, the Montreal protocol was being signed as he issued this report. And how does he figure that O3 goes up in concert with CFCs? Wasn’t that the whole point of Montreal, that O3 goes down with them?

The GISS website says that the code used in the 1988 simulation runs is not available, although other Model II versions appear to be:

Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available. Please address all inquiries about the EdGCM project and about implementing Model II on modern personal computers to Dr. Mark Chandler.

I presume that the available editions of Model II would give relevant results if actual values were prescribed. You’d think that someone would have done this analysis already.

47, that’s what I said. With Montreal, the case “A” couldn’t happen. It was science fiction. Apart from that, case “A” does imply that O3 goes up with CFC, since it’s part of the bundle of substances that they simply assumed would go up as justification for the doubling. The implication is inescapable: O3 in case “A” goes up with CFCs.

Larry, wow, er, that’s a weird assumption to make. They obviously hadn’t been up on the latest in ozone chemistry. O3 shouldn’t go up as the CFCs do. Was Hansen an Anthropogenic Ozone Hole Denier?

48, that’s what I thought. With the single exception of SF6, they’re all halocarbons. Only a naive fool would assume that all of them would increase in step with CFCs. They’re not all used as refrigerants, and many actually are competing with others, so their usage and production are at each other’s expense.

50, that’s where you end up when your thought processes have all of the nuance and sophistication of “icky pollution bad”. When you think like that, it’s easy to rationalize lumping their dynamics together like that.

An earlier commenter in this thread pointed out that the CFC data is based on reported production, and *assumes* it all goes into the atmosphere. I have scanned some of the papers, but I do not recall any which included actual detailed measurements of CFC concentrations in the atmosphere. I recall one sea level series (but don’t recall the author) but no systematic measurements of upper levels. Can someone please point me to the seminal paper which details the cataloging of actual concentrations? I would very much appreciate it.

Wait a minute: the CO2 forcing is shown to be mildly logarithmic in Scenario B, but isn’t it upside down? Isn’t there supposed to be a decreasing incremental change in forcing with increased CO2? For all practical purposes, the CO2 forcing shown is linear in Scenarios A and B.

Steve: In response to your point, I amended the figure to show all three panels. I had showed the excerpt so that people could directly reconcile from the totals in my three panels to the shape of the total forcings in the Hansen middle panel, thereby showing apples and apples, but that was for convenience of the comparison and I’m happy to show the full Figure as you suggest.

I reject your allegations. This blog shows more original materials than any other place that you can identify. It’s absolutely against my policies to deny people original materials; I show them all the time. Do you have any issues with the restated version?

I’m glad to see that you saw sense and corrected the post; the original was most ill-advised.
Do I have issues with the restated version? Yes, please see below.

Scenario A increases are dominated by CFC greenhouse effect. In Scenario A, the CFC contribution to the Earth’s greenhouse effect becomes nearly double the CO2 contribution during the projection period. This is not mentioned either in Hansen et al 1988 or in Schmidt (realclimate, 2007).

It’s there and obvious to anyone who can read a graph:
The difference between A & B in ’88 (due only to OTGs) isn’t reached until 2020 by CO2 alone (Fig. 2).
Fig. B2 indicates that by the 1980s more than half of the total forcing was due to trace gases (i.e., forcing by trace gases already exceeded forcing by CO2), and of those trace gases ~40% are CFCs. By 2020, CO2 + trace gases = 8x CO2 alone! That would mean that the total due to trace gases is 7x CO2; given that CFCs have a linear response, are growing at 3%/yr, and already constituted a substantial portion of the forcing in the ’80s, I’m surprised that it’s only double CO2. By the way, your assessment appears to assume that all of the OTGs are CFCs; in fact they are not, and the largest forcing in that group is due to ozone. Your calculations are interesting in that they give a little more detail, but they don’t warrant your hyperbole.

It’s been a long day, so that will have to do for now; if you hadn’t changed the post you’d be being hammered everywhere by now!
Steve: Please note that I had already posted Hansen 1988 Figure 2 in the preceding post http://www.climateaudit.org/?p=2611 .


“Climate forcing scenarios are essential for climate predictions, including communications with the public about potentially dangerous climate changes. But if only one forcing scenario is used in climate simulations, as has been a recent tendency, the scenario itself is likely to be taken as a prediction, as well as the calculated climate change. Moreover, the single scenarios tend to be ‘business as usual’ or 1% CO2/yr forcings [1.5% in 1988 paper], which are approximately double the actual current climate forcings. As one of the purposes of simulations is to allow consideration of options for less drastic change, and as there is large uncertainty in present and future forcings, we recommend the use of multiple scenarios. This will aid objective analysis of climate change as it unfolds in coming years.”

#60. Please bear in mind that I had already posted Hansen Figure 2 in full a couple of days earlier http://www.climateaudit.org/?p=2611 where I made the following observations on this figure (shown in full) based on what I could deduce from it at the time:

Also, Hansen Figure 2 shows that a noticeable difference between Scenarios A and B had already arisen by 1987 – something that will need to be examined closely to see exactly which gases are contributing to it, as well as to the increasing differences between the two scenarios for gases other than CO2 by 2010. Right now, based on the review of GHG concentrations, it’s hard to see exactly what is accounting for the difference in radiative forcing. Update: As noted in a subsequent post, the handling of Other CFCs and Other Trace Gases accounts for the near-term difference.

So, Phil, this post was in the context of the previous post. As my comment shows, it wasn’t obvious to me that CFCs were accounting for the difference – based on a careful reading and graphing of available data. I didn’t notice you or anyone else stepping in to solve the problem at the time. Contrary to your allegation, I had noticed the effect in Figure 2 – which I had already presented and which I was trying to figure out.

You seem to think that Figure 2 makes it obvious that the difference was due to OTGs. Well, it wasn’t then obvious to me. It might have been methane or N2O or CFC11 – how could anyone say?

The only way that I knew to resolve this was to replicate the graphs from first principles, which I did. This wasn’t all that easy, as Hansen’s 1988 equations require a conversion to be consistent with Schmidt’s, and the conversion, to my knowledge, is only shown in IPCC 1990 as a pers. comm. I didn’t notice anyone stepping in with this information. Plus I wasn’t able to get the CH4 equation in Hansen 1988 to work and surmise that there is some typo in it (one is noted in IPCC 1990, but it wasn’t enough for me to make the equations work). So it was a lot of work to redo the graphs from scratch and allocate the contributions of individual trace gases.

When I finished, I was very much of the view that Hansen’s description of his results (and the relatively uninformative Figure 2) did not properly state the role of CFCs and OTGs in Scenario A, and that Scenario A was not a Business As Usual scenario but an upper bracket limit. I don’t think that either of these is a particularly severe criticism.

Working through the original “simplified expressions” wasn’t a trivial exercise.

I realize that there are a lot of posts here, and you have to allow for the fact that the posts are often interconnected.

I have been following this blog quite closely for the last few weeks, and find most of it both very interesting and educational. However, most threads seem to be polluted by a user with the nick bender. Most of his/her postings don’t have anything to do with the subject, but everything to do with other people’s postings, and in a very agitated manner.

In my opinion this blog would benefit greatly from users sticking to the subject, and from removing postings that have nothing to do with the blog topic but everything to do with other users.

But all in all a very nice place to search for knowledge in the climate debate!

1984 is a good starting point to assess the various scenarios because their temperature predictions are very close to each other for that year and also very close to the 0.0 anomaly (not exactly, but within 0.1C).

The various temperature records are likewise very close to the 0.0C anomaly for 1984 (it was a cool year).

So all the Scenarios and the actual temperature records are very close to the same starting base conditions and we don’t have to worry about different starting points etc.

– Scenario A projected temps would increase 0.8C from 1984 to 2007.

– Scenario B projected temps would increase 0.7C.

– Scenario C projected temps would increase 0.4C.

– Hansen’s GISS temp record shows an increase of 0.45C.

– The (revised) RSS and UAH lower atmosphere annual averages both increase 0.55C from 1984 to 2007. (they are down during 2007 but the annual averages still increased between 1984 and 2007.)

So the actual forcings ended up between Scenario B and C (and so did the actual temperature increases.)
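The comparison above can be tabulated in a couple of lines (values taken straight from this comment; a sketch, not an analysis):

```python
# 1984-2007 temperature increases (deg C) as quoted above
scenarios = {"A": 0.8, "B": 0.7, "C": 0.4}
observed = {"GISS": 0.45, "RSS/UAH": 0.55}

# Quick check that both observational records fall between Scenarios C and B
for name, dt in observed.items():
    between = scenarios["C"] <= dt <= scenarios["B"]
    print(f"{name}: +{dt}C, between C and B: {between}")
```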

Please don’t intentionally read me wrong. I think all would welcome bender’s comments as long as they stick to the subject. His comments on scientific stuff are great. However, you should know that statements like “Go back to RC!” (I honestly can’t remember whether that one was from bender, but I think so) etc. don’t bring any quality to this site.

That would mean that the total due to trace gases is 7x CO2; given that CFCs have a linear response, are growing at 3%/yr, and already constituted a substantial portion of the forcing in the 80s, I’m surprised that it’s only double CO2. By the way, your assessment appears to assume that all of the OTGs are CFCs; in fact they are not: the largest forcing in that group is due to ozone.

1. Where does that 3% data come from?
2. By what logic does ozone increase proportionally to CFCs?

Same source; both increase as a result of industrial activity. Note that the paper refers to those trends as ‘hypothetical or crudely estimated’ and they are only used in the worst case scenario (A).

Steve: I don’t believe that the term “worst case scenario” is used in Hansen et al 1988. (The article is not word-searchable at present and I may have missed such use; if so, I apologize). Hansen used the term Business As Usual (not “worst case”) to describe this scenario and said that his own forecast was “somewhere between A and B”. On page 9357 he observed that there may not be sufficient time for many biosystems to adapt to the rapid changes forecast for scenarios A and B, a theme we hear more of over the next two decades. I observed that calling Scenario A an upper-bracket worst case would have been more accurate; while you’ve taken exception to every such comment, here you’ve adopted the terminology that I proposed as being appropriate.

The first is stratospheric. It is created by UV acting on O2. This is the ozone that is destroyed by CFCs.
The second is ground-level. It is created by a different mechanism. (I believe it has something to do with NOx, various hydrocarbons, and sunlight.)

Stratospheric ozone is, according to some computer models (but never proven in real life), destroyed by CFCs.
Ground-level ozone is unaffected by CFCs, since CFCs don’t break down until they reach the stratosphere and are exposed to UV light.

I’m not sure where the bulk of the atmosphere’s ozone resides, though I suspect it is in the stratosphere, since ground-level concentrations are low and highly localized. Additionally, ground-level ozone had started falling well before 1984, as a result of the Clean Air Act and other laws.

I wish I could claim that I was surprised that Hansen was not aware of this.

Steve: Please don’t assume Hansen made foolish errors or put words in his mouth. He may have overlooked something or chosen to emphasize one thing rather than another or slipped somewhere in his calculations, but don’t assume that he is unaware of something obvious.

re 74. I like bender on or off topic. On topic I learn something, off topic I laugh.

What did Aristotle say?

Anybody can become angry – that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way – that is not within everybody’s power and is not easy.

It is the mark of an educated mind to be able to entertain a thought without accepting it.

No excellent soul is exempt from a mixture of madness.

Pleasure in the job puts perfection in the work.

Quality is not an act, it is a habit.

The gods too are fond of a joke.

The least initial deviation from the truth is multiplied later a thousandfold.

76, but the paper doesn’t justify the numbers or assumptions. And it’s flatly ignorant to assume that ozone is a product of “industrial activity”. The mechanism for surface ozone production is well known; it’s a product of unburnt hydrocarbons, which have been dropping dramatically in the developed world in the last two decades of the 20th century. And stratospheric ozone is supposed to be inversely related to CFCs, remember? You guys need to keep your doomsday theories straight.

Steve: I don’t believe that the term “worst case scenario” is used in Hansen et al 1988. (The article is not word-searchable at present and I may have missed such use; if so, I apologize). Hansen used the term Business As Usual (not “worst case”) to describe this scenario and said that his own forecast was “somewhere between A and B”. On page 9357 he observed that there may not be sufficient time for many biosystems to adapt to the rapid changes forecast for scenarios A and B, a theme we hear more of over the next two decades. I observed that calling Scenario A an upper-bracket worst case would have been more accurate; while you’ve taken exception to every such comment, here you’ve adopted the terminology that I proposed as being appropriate.

If I’d intended my words to be attributed to Hansen I would have put them in quotes; clearly, of the three scenarios, A is the worst case from a forcing point of view. For an ‘auditor’ your habit of changing your past posts without acknowledgement is truly astounding; auditing this blog would be impossible. Your argument for a significant period was that scenario B was not Hansen’s most plausible scenario, with convoluted arguments about the color of lines, whether they were dotted or not, etc., while denying that he actually described scenario B as such until forced to accept it (claiming to have missed it even though you quoted adjacent paragraphs). I have never taken exception to your describing scenario A as a worst case; in fact you had to be dragged kicking and screaming to the admission that it was such!

Steve: I treat comments here as a form of peer review and will edit articles to reflect sensible comments. Sometimes what I’m saying doesn’t come across the way that I intended, and to deal with a comment it’s not merely a matter of changing a number; I need to re-arrange things. I try to improve the posts for subsequent readers. Journal articles are typically sent out for private peer review. Authors don’t publish with chicken scratches all through them to show changes that they made, or preserve an “audit trail” of edits to their article. I don’t understand why you would permit academics the right to publish journal articles without showing their changes and deny me the right to do similar edits to blog posts.

I believe that the salient “audit trail” was an “audit trail” for the actual calculations. I frequently provide R scripts and reference data and try to make the actual results as transparent as possible. I haven’t shown scripts for every calculation and, in retrospect, I wish that I’d done this more often. But if you have any issues with how I got from A to B in any graphic or calculation, I’m generally pretty responsive.

As to the idea of climate scientists being righteous about “audit trails”, give me a break. There are negligible or non-existent audit trails for virtually any climate article. Key data remains unarchived. Why don’t you fix the beam in your own eye and get proper audit trails for articles being used for public policy. Puh-leeze.

78, Steve, I usually don’t assume that Hansen or Mann or anybody are fools, but looking at this from a chemical standpoint, he made a pretty crass assumption in scenario “A”. I don’t think that it was an error, it appears that scenario “A” was simply built upon a crass assumption that he may have thought was reasonable, but upon close inspection it clearly isn’t. I’m not claiming that he’s stupid, just careless.

76, but the paper doesn’t justify the numbers or assumptions. And it’s flatly ignorant to assume that ozone is a product of “industrial activity”. The mechanism for surface ozone production is well known; it’s a product of unburnt hydrocarbons, which have been dropping dramatically in the developed world in the last two decades of the 20th century. And stratospheric ozone is supposed to be inversely related to CFCs, remember? You guys need to keep your doomsday theories straight.

Actually it does, but since you clearly haven’t read it you wouldn’t know that. The paper was written in the 80s, and the assumption of a continuing trend in tropospheric ozone was clearly indicated as unreliable; hence it wasn’t used in scenarios B & C. Stratospheric depletion of ozone primarily occurs at the poles; elsewhere it’s less dramatic. However, it’s responsible for cooling of the lower stratosphere, as correctly predicted by GH theory.

78, Steve, I usually don’t assume that Hansen or Mann or anybody are fools, but looking at this from a chemical standpoint, he made a pretty crass assumption in scenario “A”. I don’t think that it was an error, it appears that scenario “A” was simply built upon a crass assumption that he may have thought was reasonable, but upon close inspection it clearly isn’t. I’m not claiming that he’s stupid, just careless.

83, in 1988, catalytic converters had been in use for 13 years (at least in the US), and HCs had already dropped dramatically, and along with them terrestrial ozone. You can’t claim that Dr. Hansen wasn’t aware of this. It’s a big enough stretch for him to ignore the impact of the Montreal protocol (which was being signed as the paper was published) in the “business as usual” scenario, but to ignore a trend that had already been in place for 13 years is pushing the envelope on honesty.

#69/#85
This analysis needs to be supported by an actual run of that 1988 model code. To judge whether the re-projections would be on track (i.e. 1988 parameterization ok), I think it is just too close to call by casual reckoning. But John Lang’s #69 point is well taken.

83, in 1988, catalytic converters had been in use for 13 years (at least in the US), and HCs had already dropped dramatically, and along with them terrestrial ozone. You can’t claim that Dr. Hansen wasn’t aware of this. It’s a big enough stretch for him to ignore the impact of the Montreal protocol (which was being signed as the paper was published) in the “business as usual” scenario, but to ignore a trend that had already been in place for 13 years is pushing the envelope on honesty.

The US is not the world, as a visit to Beijing will quickly confirm! The effectiveness of the Montreal Protocol at that time was certainly open to question; there was significant opposition to its implementation in the US. Kyoto is a case in point about the success of such treaties! Your portrayal of Hansen’s position in the paper is dishonest; being charitable, I assume that you haven’t read it and are getting your information second-hand.

92, bender, my question is more specific. I question the rationale for claiming that:

a) under BAU, CFC-11 and 12 in the atmosphere will increase geometrically, and more egregiously that

b) all other “trace gases” can be lumped together and their total effect is exactly equal to the effect of CFC-11 and 12. That’s arbitrary, and creates a false linkage between the two primary CFCs and this laundry list of other compounds that have completely different commercial and atmospheric dynamics.

I don’t think that such a back-of-the-envelope method would be cause for a lot of controversy if the total effect were a few percent, but when it becomes the majority of the effect, we can’t be so cavalier.

What you’re driving at is was it ever intended to be considered a plausible scenario? There are two possible answers; yes and no. If the answer is yes, it’s junk science that Dr. Hansen should be ashamed of. And if the answer is no, it shouldn’t have been published.

I don’t see any evidence that ozone is a significant greenhouse gas on a global scale, or that it’s increasing.

Good, so you agree with Hansen’s position in 88!

Re #92

Obviously it was ‘BAU’; it was defined as a continuation of current growth rates in trace gases. How is that not business as usual (i.e. what if we continue doing what we’re doing now)?
Whether it was likely to happen is another issue and one that was addressed by Hansen in the paper, his opinion being that it would not, despite Larry’s attempts to say otherwise.

I found the excerpted comment given below from the Hansen et al 1988 “scenario” paper most interesting in terms of what Steve M likes to call the provenance of the data, and in this case the nuanced thinking behind the data presentation and its timing relative to the rates at which GHGs were being emitted into the atmosphere.

Our transient climate experiments were initiated in early 1983, being run as a background job on the GISS mainframe computer, a general-purpose machine (Amdahl V-6) of mid 1970s vintage. Results for Scenario A were reported at a conference in June 1984 [Shands and Hoffman, 1987], and the results from all scenarios were presented at several later conferences.

I find it interesting that the most unlikely scenario was reported first (which does not necessarily indicate it was run first) and then followed by what was later indicated as “probably” the most plausible scenario (B). I have been attempting to track down the pre-1988 conferences at which Scenario A was discussed, without success to date. The two models used in the runs are discussed in papers as early as 1983, as I recall. Maybe someone here can track this information down.

The power of the mainframe computer used in the model runs could be compared to modern desktop computers to determine the feasibility of running it on a PC as I assume Steve M has been contemplating here.

Looking at how the rate of increase in GHGs was progressing in the 1980s (graph is shown below), I can certainly see where a late-1980s extrapolation of those rates for scenario study might change from one done in the early part of the 1980s. I think it also demonstrates how difficult it is to get the scenario inputs correct in these “experiments”, be they in the 1980s or today.

Further digging into the background of these model runs and scenarios has reinforced my first impression of them as being throwaway runs to get policy makers “on the right track way back then”, with Hansen et al probably as surprised as anyone that, at least for a while and given the “correct” temperature data set, Scenarios B and C approximately tracked actual temperature. Need I remind that Hansen et al have stated that the closeness was accidental and that significantly more years of data are required to determine how close the “accident” comes to reality.

If you think Beijing is a mess, stay in GuangZhou for a week. Suffice it to say that airline pilots who fly routes into GuangZhou have no problem keeping their instrument approach currency, visibility is NEVER 3 miles (5km) in the afternoon.

I will say what Steve M won’t. Hansen was trying to sell an implausible scenario A as BAU. He was being deceptive in his effort to move his agenda forward.

Hansen said:

Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns even though the growth of emissions in scenario A (~1.5%/yr) is less than the rate typical of the past century (~4%/yr)

I’m surprised you don’t realize that the point of having different scenarios is to in some way “bracket” reality. Furthermore, Hansen explicitly states that scenario A is unlikely. But he also notes that it is BAU in that the increases are in line with (actually lower than) what had been happening.
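For reference, compounding the two growth rates Hansen cites shows how different they are over a model-run horizon (a toy calculation; the 50-year horizon is an assumption of this sketch, not a figure from the paper):

```python
# Compound growth: Scenario A's ~1.5%/yr versus the ~4%/yr Hansen
# describes as typical of the past century.
def growth_factor(rate, years):
    return (1.0 + rate) ** years

for rate in (0.015, 0.04):
    print(f"{rate:.1%}/yr over 50 yr -> x{growth_factor(rate, 50):.1f}")
```

Even the “exponential” Scenario A rate compounds to roughly a doubling over 50 years, versus about sevenfold at the historical rate, which is the sense in which A is lower than past behavior.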

I’m really struggling with the relevance of this. Hansen employed a bracketing method when he guessed at future emissions: high, med, low. Calling Scare-nario A “business as usual” was not the best choice of words. I’d cut Hansen some slack and just move on to the real issues.

99, that’s nice, but the way he tied OTGs to CFCs, the scenario gets even more implausible without CFC caps. The OTGs (particularly ozone) don’t move with the CFCs. That’s the whole point. If Montreal hadn’t passed, that huge OTG increase wouldn’t have happened.

101, “business as usual” means nothing changes in the regulatory environment. It doesn’t mean nothing changes. His scenario implies that nothing changes. The fact that he verbally stated that it can’t do that doesn’t let him off the hook; it highlights the fact that he did something highly unprofessional.

re 104. Skunk!! As an old knob twister you know I can’t resist asking about the third knob.

On a serious note: when I first started looking at this climate stuff I was drawn to two areas, the SRES and the non-linear character of the projections. It was clear that the SRES had a spread of assumptions that could fit a universe of elephants; moreover, the non-linear aspect of some scenarios made my spidey senses tingle. Extrapolating an exponential without a net is a dangerous circus stunt. Combine that with a non-linear response to forcings and you get this.

#107 If he wasn’t already, he should have been collaborating with an economist to set up useful and relevant boundaries A and C. And the people taking his advice should have had his scenarios scrutinized by an array of expert opinion.

Quoting is always selective, otherwise it would be called duplicating. I’m pretty sure Hansen’s quote reflects his view and is relevant. If you say your quote doesn’t reflect your position, I believe you.

Newsflash: maybe, Boris, I was suggesting that the bit you selected wasn’t representative? That it was taken out of context?

I’m pretty sure Hansen’s quote reflects his view and is relevant

I’m pretty sure you do not know Hansen’s view.
I’m pretty sure you do not know what has been said off the record.
I’m pretty sure you don’t want to focus on mosher’s two knobs, are fixated on hansen’s third.

#69 – It appears to me that scenario C only comes down to tickle either of the measured data series graphs during the entire period and that the other two scenarios consistently track much higher. Not so impressive from where I’m sitting.

Based on the actual GHG emissions Hansen built into his Scenarios, it seems likely that he didn’t expect A would happen or that even C would happen.

Scenario B appears to be the guess he was making.

He may also have been thinking of A and C as being the choice we, mankind, could make. Keep growing emissions and temps skyrocket. Put the brakes on emissions by the year 2000 and we can stabilize the temperature at the year 2000 level. Separating out the trace gases might have fallen out of that argument as well since it was easier for us, mankind, to stop CFC emissions than CO2.

On Anthony Watts at #122. Wow, Roy really did stick his neck out there on that one.

It has been clear for some time that the oceans were absorbing at least half of the carbon emissions from humans.

Secondly, the long-term ice-age-scale temperature and CO2 record shows that cooling oceans absorb a lot of CO2 and warming oceans release CO2. The 800-year lag between temperature increases in the ice ages and increasing CO2 points to this conclusion as well.

But what is causing the oceans to warm right now and release CO2?

Is that not contradictory to the fact that cooling oceans absorb CO2 and half of human’s emissions are being absorbed by the “warming” oceans right now?

#122
This may explain the hostile treatment I received asking pointed questions about sun-ocean interactions at Real Climate. Perhaps they were already aware that their cookie is starting to crumble. I look forward to some discussion of these papers at Climate Audit.

ref 122 Anthony Watts. Both are very interesting. The second, comparing the PDO+AMO, TSI and US average temperature, is extremely interesting. But doesn’t using the US average temperature instead of the global temperature average open it up to criticism? Also, the TSI reconstruction used has been questioned.

The results jibe well with A. A. Tsonis’ teleconnections paper. I believe the various decadal/multidecadal oscillations are the main drivers of climate with their various synchronizations and de-synchronizations responsible for the extended warming and cooling trends. The trick is proving it. With the US temperature average and TSI reconstruction used, I am not sure these will have that much impact.

They are not ‘drivers’. Not of global climate anyways. They are more like passengers. What they ‘drive’ are regional climate anomalies.

In order for there to be a measurable ‘global climate’ distinct from an aggregate of regional climates, ‘regional climate’ effects have to be removed from the temperature record. I’m not aware anyone has done this or even suggested a methodology for doing it.

According to Hansen, about 84% of the solar radiation absorbed by Earth in the last half century went into the oceans. Despite popular belief, it is the oceans which warm the atmosphere, not vice versa. Now, keeping in mind that the oceans hold about 2000 times more heat energy than the atmosphere, even a slight fluctuation in heat transfer from oceans to atmosphere could obviously alter climate on both regional and global scales. To be more precise, what we call “global climate” is the globally averaged min/max daily mean air temperature measured 1 meter above the surface, however stupid this metric is.

In practical and measurable terms, this translates into globally cool or warm years due to well-known phenomena like ENSO, or on a multidecadal scale due to the PDO, AMO, and the like.

Oceans are not drivers of global climate; the sun is. But oceans are not passengers either. I think of the oceans as the transmission in the vehicle of global climate, where the sun is the engine. And it is quite a sloppy transmission, tending to abruptly change gears, hesitate, slip, and override.

Do you need other explanation of internal chaotic variability of the “global climate”?

They are not ‘drivers’. Not of global climate anyways. They are more like passengers. What they ‘drive’ are regional climate anomalies.

The question is: what drives *them*?

Energy from the sun fuels them, but we are the passengers on the bus. The bus routes change with time due to a number of natural and possibly a few anthropogenic causes, Aerosols and Black Carbon for example. Even if a driver has to detour once in a while, he is still a driver.

If CO2/GHG were behind the wheel, I would expect both poles to react more uniformly. If solar alone were the driver I would expect definitive 10/11-year temperature patterns. Synchronization/de-synchronization of global oscillations, I think, are the drivers. Just my thoughts.

Does anyone know if the NASA GCM(s) has (have) been significantly rewritten since 1988? In other fields, I know people who have been using their favourite model for years. I wonder if it’s the same at NASA: could they be using the same tools over and over, with superficial changes to “correct” perceived deficiencies or to upgrade to a new platform/operating system, and giving it a new name? They have, after all, been using the same space ship for as long…

What is the incentive to rewrite something like a GCM from the ground up? It costs a lot to do something like this, so it must be more cost-effective to rotate the tires on the old model regardless of how archaic some of the underlying assumptions might be.

Boris, it is fairly obvious that I can not back up what Hansen might have said off the record. If you feel the need to state the obvious, be my guest.

I admit I broke a blog rule by surmising what Hansen might have been advocating off the record. Maybe some blog rules were made to be broken. This is a guy who’s writing to the Queen trying to get her off coal? I am tempted to speculate that he’s probably trying to sell her on doomsday scenario A as we speak. But I won’t.

I speculate that he might have been overselling scenario A, perhaps not knowing himself what it implied. I think that’s more self-deception than deception. It happens, you know.

Are you backing down from that statement?

I am backing away from it, but not retracting it. Reshaping it, to point out that it is a speculation that breaks a blog rule.

If not, I fail to see how I quoted you “selectively.”

You quoted the most speculative half of what I said, ignoring the more reasonable half.

If so, how often do you say things you don’t mean?

I mean what I say 100% of the time. I sometimes regret saying things. But I mean them. I sometimes say things in haste that could be worded better. But we all do that.

My view is this: whether or not Hansen oversold scenario A is an undecidable proposition; it is a matter of faith. As I indicated earlier, I don’t think it’s worth discussing by people who aren’t in the know.

I do commend you for calling me to task on this one. I should not have been so dismissive.

Maybe you’re barking up the wrong tree, trying to discern whether the scenario was oversold. That requires determination of intent. The bigger issue is that scenario “A” was flatly bogus, and it’s charitable to say that it was a back-of-the-envelope calculation. It’s ok to do such low-certainty calculations for bracketing scenarios, provided that it’s communicated as such. The fact that it wasn’t made clear that this was a quick-and-dirty, low-certainty estimate is a failure to communicate uncertainty.

So it all comes back to uncertainty, and the team’s complete failure, for whatever reason, to talk about it.

Maybe you’re barking up the wrong tree, trying to discern whether the scenario was oversold.

Yes, that’s what I’m saying.

Although I’m not really ‘barking up any tree’ concerning Hansen. I’m not interested in playing gotcha with anyone. I have a general concern that uncertainty-free pseudoscience is being oversold to influence policy and promote particular agendas. I see this as just another instance. There was no assessment done on the relevance of the bracketing scenarios.

That requires determination of intent.

Hansen’s intent – and I am sure he would admit to it – was clear: to present policymakers with a clear directional choice: something like A vs. something like C. Nothing wrong there. But someone surely must have asked at some point how relevant A and C were. What was Hansen’s reply? This is not ‘intent’. It is a matter of what was on the record vs. off the record.

The bigger issue is that scenario “A” was flatly bogus, and it’s charitable to say that it was a back-of-the-envelope calculation.

A general comment too on the whole approach used here, which I will call 1970s imbalanced. You have the brilliantly detailed GCMs computing all kinds of things measured with tremendous accuracy over on one side. And into it you are feeding junk economics scenarios. You spent, what, a billion dollars on the GCM side, and how much on the scenario development? Does this kind of imbalance make sense in a modern world? Nowadays I am sure things are done more integratively. But I don’t think back then that they were taking the input scenarios all that seriously. One can say ‘they’re just scenarios’. Yes, but they’re one half of the overall computation! If I’m a policy-maker I want balance among the various scientific components that are going into my decision.

And btw, I can’t place the blame for understating or not stating the uncertainty entirely at Hansen’s feet. Media and certain NGOs wouldn’t state it even if Hansen had put it in capital bold 36-pitch letters on the first page of the paper. Like it or not, scientists need to understand that media and activists will take whatever they say and use it for their purposes, and the nuances end up on the floor. In this day and age, it’s a scientist’s responsibility to understand this and conduct himself accordingly.

Not that I see any evidence that Hansen was in any way displeased with the way this was reported.

I can’t place the blame for understating or not stating the uncertainty entirely at Hansen’s feet

I believe I made that point as well. The policy makers have advisors whose job it is to take in what Hansen says and parse it. They should know when they need additional expertise to evaluate some component of the science. When you have a climatologist talking economics as input scenarios, you want to have an economist at the table who can assess the relevance of the scenarios. I have no idea if they did that or not in 1988. They should have.

152, That’s another issue entirely, but yes. A huge failing of climatologists in particular, and scientists in general, is their inability to effectively collaborate in multidisciplinary teams. They’d rather just run off and fake it themselves than do the difficult management work of seeking out experts in other fields, communicating the problem to them, and persuading them to collaborate. Judith C. once remarked that most statistical experts in academe would find this kind of work too boring to be interested. There seems to be a fetish for novelty in academe, which means that if the part of the problem for discipline x isn’t sufficiently novel, you can’t interest anyone. So the investigators wing it themselves.

Similarly, it would be difficult to find a professor of computer science who would be interested in helping out with the models and other computer code used by these climatologists, because the task isn’t sufficiently novel. So instead of seeking out a grad student or a professional coder, they wing it themselves, and they end up with a pile of Fortran noodles.

All the more reason why the task of pulling the information together should be assigned to a consulting firm rather than a consortium of academics. Most of the tasks are simply too mundane to interest the best and brightest, and so end up being handled by people who are experts in their narrow fields, but don’t even make good dilettantes outside of their fields.

The bigger issue is that scenario “A” was flatly bogus, and it’s charitable to say that it was a back-of-the-envelope calculation.

Since I have no inhibitions about repeating myself, I will.

Scenario A was evidently publicized first back in 1984, as I noted previously from an excerpt in the Hansen 1988 scenario paper. Scenario A, in total for GHGs, appears to closely approximate the rate of increase that existed in the early 1980s, as noted from an excerpted graph in a later Hansen paper. The 1980s were a period where the rate of increase in GHG forcing peaked and started to decline. I cannot determine from the GISS literature data base when Scenarios B and C were run, but if they were run later, then that rate change in GHG forcings in the 1980s may well have been intentionally captured by Scenario B. Hansen’s later papers complain about this situation and note that it caused many modelers/scenario producers to get it wrong in the early 1980s.

Instead of playing with the details and intentions of the modelers, why not take it as an object lesson in how far these scenarios can be from future reality, and as another uncertainty to add to the overall climate prediction equation. Of course the other uncertainty with these early and primitive models is how well they reproduced the actual temperatures out-of-sample using out-of-sample inputs.

In going back to the GISS literature I found papers discussing runs on climate Models I and II, in attempts in the mid-1970s to determine which matched the real world best (which appeared to be Model II). I believe the GISS experiment, as they called it, used the first available Amdahl V/6 mainframe computers. It also appeared that at that time competing models were about evenly divided on getting the ratio of temperature increases correct for the high and low latitudes, and that a higher ratio of low to high latitude was needed to obtain a global temperature increase. None of these papers on the early runs said anything about scenarios. My guess is that the scenario runs were an afterthought, probably related to a concern with the increasing rates of GHG forcings at that time and a need to jolt the policy makers.

Instead of playing with the details and intentions of the modelers, why not take it as an object lesson in how far these scenarios can be from future reality, and as another uncertainty to add to the overall climate prediction equation. Of course the other uncertainty with these early and primitive models is how well they reproduced the actual temperatures out-of-sample using out-of-sample inputs.

Agreed. Two knobs, two uncertainty assessments.

KF, Can you re-state the 1984 publication in which scenario A was first put forth?

More importantly, did the Bears make it to the Super Bowl this year? I haven’t been following ;)

re 150. If one merely looks at the uncertainty spread within a model (+-.5C) and between scenarios (+-4C), that is, if one looks at the sensitivity of knob1 (doubling CO2) and the wide range of knob2 (emissions), it’s abundantly clear that you would STOP funding GCMs and start funding economic projections. ECMs.

Put another way, people are more complex than radiative physics.

As it stands we argue about the settings of knob1 (it’s a nice pastime) while the huge uncertainties of knob2 are ignored.

On a related note: are we more likely to have success with a two-knob model that is revised every year, or a 52-knob model that is argued about every 10 years or so?
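The +-.5C vs +-4C point can be sketched in a few lines using the standard simplified expression F = 5.35 ln(C/C0). The sensitivities and concentration ratios below are illustrative assumptions picked only to show the shape of the comparison, not anyone's published numbers:

```python
# Rough comparison of the two "knobs": uncertainty in climate sensitivity
# (knob1) vs. uncertainty in the emissions scenario (knob2).
# All numbers are illustrative assumptions.
import math

F2X = 5.35 * math.log(2)  # forcing for doubled CO2, W/m^2

def warming(sensitivity, co2_ratio):
    """Equilibrium warming (C) for a sensitivity (C per doubling)
    and a ratio of final to initial CO2 concentration."""
    return sensitivity * 5.35 * math.log(co2_ratio) / F2X

# Knob 1: hold the scenario fixed (CO2 doubles), vary the sensitivity.
knob1_spread = warming(3.5, 2.0) - warming(2.5, 2.0)

# Knob 2: hold the sensitivity fixed, vary the scenario (1.5x vs 3x CO2).
knob2_spread = warming(3.0, 3.0) - warming(3.0, 1.5)

print(knob1_spread, knob2_spread)  # the scenario knob dominates
```

With these made-up inputs the scenario spread is three times the sensitivity spread, which is the commenter's point in miniature.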

#12 It’s a question that gets argued about more and more these days: do we want reduced uncertainty on knob 1 or 2? Both, yes, but what’s the appropriate balance? The 52-knob model is a historical legacy. That model is used for much more than debating AGW. You probably need a set of models, simple vs. complex. This is what Isaac Held has been arguing for years. He doesn’t like a lot of knobs. You use the 52-knob model for scientific investigation and the 2-knob model for policy planning.

Ok, so now we have a 2-knob model. You argue for more effort on knob 2, that humans are complex, and there is a lot of uncertainty there that needs reducing. I agree that individual economic units are complex, but are global societies? You have the law of large numbers working in your favor there, at least. We all need energy. Few are going to make a major change to lifestyle of their own choice. Indochina has aspirations. Are those uncertainties reducible or irreducible? That distinction is critical, I think.

I argue that the physical climatology (knob 1) has more reducible uncertainty than knob 2. GCM research – if it is done properly – should pay. What I fear is that the historical legacy is dominating the research direction. I believe that is Held’s position too.

To keep this on topic I would say that any part of an uncertainty assessment on the Hansen scenarios would have to include explicit consideration of irreducible vs. reducible sources of uncertainty. Policy makers need to know that. Research managers need to know that. That gets the research focused on the areas where it will make a difference.

As to the idea of climate scientists being righteous about “audit trails”, give me a break. There are negligible or non-existent audit trails for virtually any climate article. Key data remains unarchived. Why don’t you fix the beam in your own eye and get proper audit trails for articles being used for public policy. Puh-leeze.

Steve, I find it objectionable that you try to tar me with that brush, you have no idea about my publication record.

Steve: You’re a member of the academic community. I talked about “articles being used for public policy”; unless your publications are being used for public policy (in climate, by implication here), your own publication record, meritorious as it undoubtedly is, is not necessarily relevant to the issue. Have you ever written to any journal publishing climate articles and asked them to ensure proper audit trails? Or to NSF, asking them to have Thompson (or his ilk) properly archive data? Didn’t think so. If you had, you would have mentioned it by now. I presume that you’ve stood idly by.

Steve, I find it objectionable that you try to tar me with that brush, you have no idea about my publication record.

And yet I know some of Phil’s publication record!

Of course, none of the publications you’ve authored and I’ve read have anything to do with climate change… but there ya’ go! :)

On the auditing issue: Yes, Steve, peer-reviewed articles are rarely audited in any conventional sense of the word. Peer review is quite different from auditing and serves different purposes. It’s unfortunate, but the fact is that many people don’t understand the difference in the purposes and results of peer review vs. audits.

RE 163. on the issue of model fidelity. We agree 100%. Long ago, I had to work with models
that spanned great scales in terms of physical fidelity.

The General could never grasp the details and convolutions of the physically accurate models.
He needed the one knob version.

The engineers needed the 52-knob version, and they never believed it.

ON the uncertainty of knob2: It’s been almost a year since I read the whole SRES document. That was my introduction to this whole matter (it’s the input data, so I started there). So, until I reread the stuff, I’ll hold off on defending my claim. On first glance, however, you will find that the parameter space they explore is huge: huge spreads in population, energy choices, economic development.

Grossly oversimplified: the error band around a given scenario (say B1) is much smaller than the difference between scenarios (say B1 versus A1FI).

Take Hansen’s B scenario. The error about that line might be +-.1C, but the difference between B and A dwarfs that, primarily because Hansen is a better guesser about physics than he is about emissions (human behavior), and he is NOT ALONE in that regard.

Put another way, we understand radiative physics better than people and their behavior.
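The "better guesser about physics" point connects back to the head post: because the simplified expression for CO2 forcing is logarithmic, modestly exponential concentration growth produces exactly linear forcing growth. A quick sketch (the baseline and growth rate are illustrative, not Hansen's actual inputs):

```python
# Exponential concentration growth through the logarithmic simplified
# expression F = 5.35 * ln(C/C0) yields a linear forcing ramp.
import math

C0 = 315.0   # ppm, illustrative baseline
g = 0.005    # 0.5%/yr exponential growth, illustrative

# Forcing every decade out to 70 years:
forcings = [5.35 * math.log(C0 * math.exp(g * t) / C0) for t in range(0, 80, 10)]

# Successive decade-to-decade differences are constant, i.e. the forcing
# grows linearly even though the concentration is exponential:
diffs = [b - a for a, b in zip(forcings, forcings[1:])]
print(diffs)  # each ~ 5.35 * 0.005 * 10 = 0.2675 W/m^2
```

The log exactly undoes the exponential, so F(t) = 5.35 g t: a straight line, which is the head post's explanation of why Scenario B is "linear".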

Yes, I stated as much in #163. But then I went further and broke the uncertainty in each down into two components: the reducible and the irreducible. I don’t have a lot of hope for knob #2 – I see it as being dominated by irreducible uncertainty. But then I’m not a socioeconomist.

I think this is on-topic because it shows a structured way of approaching the climate modeling uncertainty problem. And it also helps explain why the GCMers do things the way they do: sensitivity analysis on the GCM itself and scenario analysis on the inputs.

I’ve been doing some additional stuff. I added the monthly volcano data & temperature and the sensitivity is up to 2.0C now.

I’m looking into a few other details that would be consistent with using monthly averages rather than annual averages of volcanic activity.

For example, I’m trying to learn something about monthly oscillations in the solar constant and CO2, and figure out if those monthly anomalies are relative to the particular month. (I think they are. I emailed Jim Hansen last night to find out if we know the absolute temperatures, and he says he doesn’t deal with absolutes when getting GISS temp, and suggested I ask Phil Jones. I emailed Phil, and I’m waiting for an answer.)

Based on the math, it’s really important to get as much stuff that pops out from “zero” forcing or temperature anomaly as possible to get a good answer.

After I get all this, I should use it to get a better sensitivity estimate. (I figure someone somewhere must at some time have figured out oscillations in the absolute temperatures. But I’m asking for such weird stuff, it may take a while to find what I’m looking for!)

I do plan to wind back the model and predict what would have happened with the various scenarios. That’s sort of a goal! :)

But, I need to warn people: the sensitivities could change a lot! Having read a bit, the rms for the oscillations in the solar constant is on the order of 6 W/m^2, compared to 2-3 W/m^2 increases due to GHGs. That’s a big forcing, and if I can deal with that properly, the numbers we are interested in could really change.

RE 171. Lucia. My point and perhaps Benders point, and perhaps St. Mac point is this.

a simple model, well understood, might be better than a complex model not understood.

I’m most interested in what a simple model would have predicted in 1988. A grossly oversimplified model. That’s not a criticism of your lumped parameter model; far too often people add unneeded knobs. (No old fart stories this post, I promise.)

Ps. I found that 1.9C per doubling (CO2 only) fit the historical record quite nicely… (no consideration of other GHGs, nor of aerosols)
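For readers wanting to see the shape of such a lumped parameter model, here is a minimal one-box energy balance sketch of the general kind being discussed. It is not Lucia's or Schwartz's actual model; the heat capacity, growth rate, and the 1.9C sensitivity are plugged in purely for illustration:

```python
# One-box (lumped parameter) energy balance model: C dT/dt = F(t) - lam*T.
# All parameter values are illustrative assumptions.
import math

S = 1.9                    # assumed sensitivity, C per CO2 doubling
F2X = 5.35 * math.log(2)   # forcing for doubled CO2, W/m^2
lam = F2X / S              # feedback parameter, W/m^2 per C
C_HEAT = 13.0              # ~100 m mixed-layer heat capacity, W*yr/m^2 per C (assumed)

def run(years, forcing, dt=0.1):
    """Forward-Euler integration of C dT/dt = F(t) - lam * T."""
    T = 0.0
    for i in range(int(years / dt)):
        T += dt * (forcing(i * dt) - lam * T) / C_HEAT
    return T

# CO2 rising 0.5%/yr (illustrative), so the forcing ramps linearly:
ramp = lambda t: 5.35 * 0.005 * t   # W/m^2

print(run(50, ramp))  # warming after 50 years of this ramp
```

The single time constant here is tau = C/lam, roughly 7 years with these numbers; the response lags the forcing ramp by about that much.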

KF, Can you re-state the 1984 publication in which scenario A was first put forth?

More importantly, did the Bears make it to the Super Bowl this year? I haven’t been following

Lucia has reminded you of the excerpt that I previously posted. Bender, I can forgive you for overlooking my excerpts, but that reminder about the Bears, and particularly at this time of the year and season, was low, totally uncalled for and unforgivable. I do think it would be worthwhile to make an effort to track the GISS modeling and scenarios back to their evident beginnings in the mid-1970s. I think a better understanding of what was done and why is in that history.

After reading the Hansen papers with regard to these scenarios, and some comments by Mosher, I am having second thoughts on the capability of a “simple” model to describe the climate, although I continue to agree with Steve M’s position that there is perhaps a simpler way of explaining, at minimum, the radiative processes.

A simple model may approximate the climate, but we would have to understand the climate processes sufficiently well to understand its limitations. Perhaps a simple model restricted to the radiative processes could produce a global anomaly for a doubling of CO2, but nothing more than that. While educational, a model without regional distributions of the anomaly or temporal outputs, and with nothing to say about extreme and detrimental climate effects, would have little practical value. And, of course, excluding the effects of feedbacks, which could be overwhelming, makes it even more impractical.

How would one go about checking the validity of the simple model? Would it be against the outputs of the more complex models? Or would one compare it with past climate results by somehow zeroing out the effects not being addressed by the simple model? One would think that a simple model should use only the physical processes that have a significant effect on climate and hope that many of the complicating processes could be ignored. Certainly, if the model relied on selective empirical inputs from past climate, it would be a prime candidate for overfitting.

My problem with the overall radiative process is keeping in mind all the processes and how they interact. I thought, from reading other threads, that DeWitt Payne had a good overall picture of the radiative process and could reply quickly on separate parts of it. We (or at least I) need someone like DeWitt to put the picture together better. Since I asked him (perhaps taken as impertinent, which was not my intent) if he might do it, I have not seen a post from him.

Figure 1 shows the observed time series of the solar constant. It is apparent that there is a great deal of natural variability and structure in the solar energy reaching the top of earth’s atmosphere, including a quasi-sinusoidal sunspot cycle at approximately 11 years with higher frequency variability superimposed. This higher frequency variability itself has structure, with higher amplitudes at the crests of the sunspot cycle and less in the troughs. There is also a great deal of random variability (noise) imposed on the more ordered fluctuations, which is evident in the observation that no two fluctuations are exactly alike. The maximum amplitude of these variations is approximately 6 W/m2, which can be compared to the current radiative forcing due to anthropogenic greenhouse gases of approximately 2.3 W/m2.

I don’t know if this information is correct– but I’ve seen that figure several other places. If you could point me to information with more detail, I’d love to read it. (I’m ordering a few references.)

Steve Mosher– I’m a fan of simple models. The fiddling I’m doing isn’t changing the simple model. It’s just that using only annual averages smears out the response to things like volcanic explosions. So, I’m trying to find any features that pop out from the ‘mean’. It’s the excursions from the mean that will help me get precision on the two parameters.

Also, I need to know the relative magnitude of feature that got averaged out. If solar oscillations from month to month aren’t negligible compared to the average change in forcing since 1880, I want to know that, and try to account for it.
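The smearing effect of annual averaging is easy to see with a toy volcanic pulse. The -3 W/m^2 amplitude and six-month duration below are made-up numbers for illustration, not Pinatubo or El Chichon data:

```python
# Annual averaging of a short monthly forcing pulse loses half (or more)
# of the peak amplitude.  Numbers are illustrative.

# Monthly forcing: a -3 W/m^2 pulse lasting 6 months in year 2 of 3.
monthly = [0.0] * 36
for m in range(12, 18):
    monthly[m] = -3.0

# Annual means of the same series:
annual = [sum(monthly[y * 12:(y + 1) * 12]) / 12 for y in range(3)]

print(min(monthly))  # -3.0: the monthly peak
print(min(annual))   # -1.5: what survives annual averaging
```

A pulse shorter than the averaging window always comes out flattened, which is why the monthly series carries more information for fitting the response parameters.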

Phil– I will ask Leif. I hadn’t been that interested in the solar variations until yesterday.

All: BTW, Wikipedia seems to need editing. They suggest the average solar constant over the course of the year varies by 100 W/m^2 due to the earth’s orbit. That’s quite a bit more than 6 W/m^2!

Your information is correct, but short-term fluctuations do not have the same effect as a long-term trend. On an 11-year average (one solar cycle), the solar forcing varies little.
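That averaging point can be shown directly: an 11-year running mean of a sinusoidal irradiance variation is essentially flat. The 3 W/m^2 amplitude here is illustrative, not a measured value:

```python
# An 11-year running mean of an 11-year sinusoidal cycle averages to ~zero.
import math

MONTHS_PER_CYCLE = 132  # 11-year cycle, sampled monthly

# A sinusoidal "solar" variation with a 3 W/m^2 amplitude (illustrative):
tsi = [3.0 * math.sin(2 * math.pi * m / MONTHS_PER_CYCLE)
       for m in range(4 * MONTHS_PER_CYCLE)]

# 11-year (132-month) running mean:
means = [sum(tsi[i:i + MONTHS_PER_CYCLE]) / MONTHS_PER_CYCLE
         for i in range(len(tsi) - MONTHS_PER_CYCLE)]

print(max(tsi))                    # ~3.0: the raw swing
print(max(abs(m) for m in means))  # ~0: the cycle averages away
```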

I understand the appeal of simple models. However, there is just as much danger in a simple model as in a complex one: you might get the right answer for the wrong reasons. As I said on another thread, the main difficulty with the Earth’s climate is the nonlinear interaction that clouds introduce, and no simple model of radiative forcing will help you there. Just consider the onset of glaciation, which is triggered by a small fluctuation of the solar irradiance, through a still poorly understood feedback involving clouds, ice cover, etc. If you have a simple model that can explain that, you’ll be a hero(ine) of climate science…

Francois– I agree that if two fluctuations have equal magnitudes, the long term one results in a larger change in temperature. However, for what I’m doing, it is useful to me to know the magnitude of both the shorter and longer term fluctuations.

So, if anyone has information on the magnitude of shorter term fluctuations in solar constant– or surface albedo, I’d like to read it.

I’d be very surprised if I ever get this model to predict glaciations. I’m trying to do something rather more modest: get an empirical estimate of sensitivity based on things we’ve measured in 1880. In the process, it just happens that I end up with a “predictive model” that predicts temperatures as a function of known forcings.

Summary and conclusions
From physical data generated in the context of satellite “remote sensing” it can be shown that wind-dependent sea water thermal emissivity is a dominating climate parameter, also in comparison with anthropogenic atmospheric greenhouse gas and aerosol concentrations. The importance of this parameter can be traced and clearly identified in paleoclimatological as well as neoclimatological records. Disregard of sea surface emissivity leads to unrealistically high climate sensitivities when these are derived from climate history matches. By positive feedback mechanisms, sea water emissivity characteristically contributes as an amplifier to natural climate fluctuations (glacial/interglacial; other cycles, possibly of solar origin). Sea water emissivity amplified the solar influence on climate during the medieval warm period and little ice age.

Anyway you seem to me to be having way too much fun doing your model and giving us an inside view could be educational regardless of the final version’s validity/practicality. Besides, you seem to fit the image that the boys in trenches had of the company’s first lady senior executive where I worked in another life when they would say: that lady she do what she want.

187, just to be clear, I’m not saying that GCMs produce accurate answers. I’m just saying that if you don’t take circulation into account, you can be guaranteed an inaccurate answer. At this point, it’s up to the models to be proven worthwhile, but there are no viable alternatives.

@KenF–
I am having fun, and you pegged me about right. (I’m not doing this for work, so there really isn’t any other reason.)

@Larry-
I don’t think simple models replace full models or GCM’s. Simple models are heuristic, and can also be useful for seeing what ideas seem to best match measured data.

Still, like all models, simple ones need to pass certain tests for self-consistency. A model may, of course, be self-consistent and wrong, but if it disagrees with itself… well… that’s no good! :)

191, Lucia, I understand that simple models can provide insight even if they don’t provide answers, but the danger here is people (i.e. media) who don’t understand those limitations getting their hands on the results, and reporting them as revealed truth. It all comes back to uncertainty, and the inseparability of any numerical result from its uncertainty.

@Larry–
As far as I can tell, the people who don’t understand the limitations of complex models outnumber the people who don’t understand the limitations of simple models.

So… I think we agree:
1) Simple models have limitations.
2) People often don’t understand the limitations.

But you seem to be suggesting some sort of danger that the media will somehow take my hobby model, documented at a blog in blog quality, tremendously seriously, and see it as revealed truth.

Or am I misunderstanding you? Quite honestly, I don’t think what I’m doing is going to get media attention.

I’m just fiddling with the same simple Energy Balance Model (which I call a lumped parameter model) and seeing what I get if I use real forcings instead of the white noise Schwartz used. Schwartz’s paper is interesting, but his prediction hasn’t caused the world to stop revolving on its axis. Whatever I do will likely have less impact, particularly as it’s documented at a blog. (And not a boring one at that.)

Anyway, how is it likely to mislead so horribly when I’d be the first to point out that it’s an over-simplification? Plus, there are all sorts of other, more credible models backed by people who’ve worked on this their whole lives.

193, no, I’m not concerned about your model. You’re under the radar. I was making a broader point about the general faith in models, relating back to the discussion of simple vs. complex models as produced by the “pros”. You can’t win either way, because if the model is simple enough to work, it’s too simple to be accurate, at least for the time being.

Fellas? Doesn’t #96 have a point here in regards to Hansen C? To me it looks like Hansen had temps dropping off after 2000 if the Montreal Protocol was successful in its targets. What is Hansen’s own explanation of C? Isn’t it in fact what has occurred since 1998? I’m not the scientist here, but isn’t it just as plausible to say that the Montreal Protocol’s real effect was to stop global warming, according to Hansen’s own projections? Hansen admits other gases are warmers by including them; as Steve pointed out, Hansen A was mostly the result of CFC growth with no change to reduce them. Isn’t it plausible to say, based on Hansen A and C, that AGW was CFC-driven and not CO2-driven, thus explaining why no year was hotter than 1998, because of the elimination of R-11 and R-12? I know that correlation is not causation, but in terms of CO2, doesn’t Hansen himself kind of put the stake through the heart of CO2 as the villain?

I personally think AGW is hooey, but you know, to an AGWer, switching from CO2 to CFCs as the villain would be the perfect opportunity to declare victory by claiming 1998 as the peak year and problem solved. No messy CO2 emissions solutions needed. It’s their perfect out.

Actually Steve, isn’t this really a good way to end the debate: give Hansen and the AGW crowd the credit by acknowledging the warming was man-made, but say that now that CFCs have been controlled the problem is going away? It’s like splitting the difference, and everyone’s a winner. Yes, cynical, I know, but then at least science could move back to its more esoteric ways and you would have the peace and quiet you all want, without the rest of us annoying people asking stupid questions or the likes of Al Gore distorting science. LOL

Another point is that a simple model does not mean it’s linear. You could build a “simple” model, with a small number of variables, but with nonlinear behavior. Small changes in a variable may be well captured by a linear approximation, but there may still be thresholds where the behavior departs significantly from linearity (I have a paper that does just that to explain the onset of glaciations). There is some truth, I guess, in the thinking that if you’re interested in a global annual mean temperature, a simple model with a small number of variables may be good enough. Anyway, have fun and let us know!

Sam– yes, but if you read, you’ll see they warn you to only use that to shift annual averages. It doesn’t work for monthly averages, because each month is zeroed on the average for that month. So that means I can’t find the difference in the average between, say, Jan and July that way.

I’ve figured out I don’t absolutely need this information. But it would have made a nice consistency check. (Getting the monthly information seems to fall in the category of ‘difficult’. I understand why they don’t really have it, but it still means I can’t get it.)
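The reason per-month anomalies can't recover the seasonal cycle is that each calendar month is zeroed against its own baseline, which removes the Jan-vs-Jul difference by construction. A synthetic illustration (the seasonal amplitude and trend are invented numbers):

```python
# Per-month anomaly baselines remove the seasonal cycle by construction.
import math

YEARS, MONTHS = 30, 12

# Synthetic monthly temps: a +/-10 C seasonal cycle plus a 0.02 C/yr trend.
temps = [[10 * math.cos(2 * math.pi * m / MONTHS) + 0.02 * y
          for m in range(MONTHS)] for y in range(YEARS)]

# Baseline: mean of each calendar month over all years.
baseline = [sum(temps[y][m] for y in range(YEARS)) / YEARS for m in range(MONTHS)]
anoms = [[temps[y][m] - baseline[m] for m in range(MONTHS)] for y in range(YEARS)]

# The seasonal swing survives in the raw data but not in the anomalies:
raw_swing = max(temps[0]) - min(temps[0])    # ~20 C
anom_swing = max(anoms[0]) - min(anoms[0])   # ~0 C: only the trend remains
print(raw_swing, anom_swing)
```

So from anomaly data alone there is no way back to the absolute monthly climatology, which is exactly the consistency check being given up on here.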

Steve Mosher– since we began discussing this here, John Kennedy pointed me to the CRU 2.0 series with various warnings about uncertainties. Those are absolute and would, in principle, be what I want.

So, since yesterday, I know where, in principle, to find them. But, having looked at it, I’m not sure those data are going to help me a lot either. I found the land temperatures, and at least when I looked, they don’t go above 80N latitude or below 60S! I think they don’t have a nice blended set (which is what I need).

Alas… unless on further looking I find “the perfect set”, I think the absolute temperature records that exist will be way too difficult and confusing. (Though I may change my mind about that.)

FWIW, my main concern is always that even for simple-minded models, one should make sure they don’t work great with data set “A” and then totally fail with data set “B”. But if a good, complete data set “B” doesn’t really exist… well, I’m not going to try to do what Hadley thinks is not quite possible just to do the consistency check. I can explain why the method of developing the model should work based on the anomaly data. I’d just have liked the other stuff too! :(

204, Exactly, so here we have the observations following Hansen C, not A or B. The conclusion from looking at all three Hansen scenarios, IF we are to buy into AGW, is that due to the Montreal Protocol, the manufacturing discontinuance of CFC 11 & 12 has halted the warming. Is anyone claiming that 1998 was not the hottest year to date? Is anyone claiming that the temperature trend from 1998 to 2007 is positive? Think about it.

Actually GISS says that 2005 is the hottest year to date, for what it’s worth.
While the Montreal Protocol has forestalled some additional warming, it’s not as clear cut as you imply, since CFC-12 has plateaued (although it’s higher than it was in 1998) and CFC-11 has only dropped ~7%, while their replacements are actually more IR-active. The stabilization of the methane concentration has also helped (it’s notable that methane and other hydrocarbons are significantly higher in the N hemisphere). See the CDIAC data.


I seem to pick up something that presents a question, at least for me, every time I go back to the 1988 Hansen scenario paper.

Lucia, since you are working with global climate time constants can you explain what you think Hansen et al mean when they say that, since for the control run they wanted to permit several integrations over several time constants, they do not allow heat exchange across the level defined by the annual maximum mixed layer depth? They then say that the isolated mixed layer response time is 10-20 years for a climate sensitivity of 4 degrees C for a doubling of CO2. What would that make the time constant for the control run, based on the conditions for GHGs prevailing in 1958?

On looking at the control run results in the 1988 Hansen paper one has to wonder about finding a climate warming signal in there — and evidently with the time constant artificially reduced. I am not sure what this all implies. Lucia what does the control run for your model show?

The Hansen paper talks of another short cut, that would affect the time constant, I assume, for the scenario runs, in dealing with the annual mean temperatures by restricting the heat exchange depth into the oceans below the annual mixed layer depth.

A general question on my reread of Hansen 1988 is why the graph showing the extended curves for Scenarios A, B and C shows a track to 2060 for the least likely scenario, A, but only to 2027 for the most likely scenario, B, and to 2037 for Scenario C?

Lucia, since you are working with global climate time constants can you explain what you think Hansen et al mean when they say that, since for the control run they wanted to permit several integrations over several time constants,

I think they have a time constant for the mixed layer, and ran the control run for a bunch of those. My model is simplified and only has one time constant for “everything”. (Actually, I’ll explain how this relates to more complicated problems if my model doesn’t break down. But, the planet doesn’t have one time constant for everything.)
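For what it's worth, the 10-20 year figure quoted from Hansen et al. for the isolated mixed layer is roughly what tau = C/lambda gives. The 100 m depth and the seawater heat capacity below are my own assumptions for a back-of-the-envelope check, not numbers taken from the paper:

```python
# Back-of-the-envelope mixed-layer response time: tau = C / lam,
# for an isolated ~100 m mixed layer at 4 C per doubling sensitivity.
# Depth and heat-capacity constants are assumed, not from Hansen et al.
import math

DEPTH = 100.0            # assumed mixed-layer depth, m
RHO_CP = 4.18e6          # seawater volumetric heat capacity, J/m^3 per K
SECONDS_PER_YEAR = 365.25 * 86400

c_heat = RHO_CP * DEPTH                 # J/m^2 per K
lam = 5.35 * math.log(2) / 4.0          # feedback at 4 C per doubling, W/m^2 per C

tau_years = c_heat / lam / SECONDS_PER_YEAR
print(tau_years)  # lands inside the quoted 10-20 year range
```

With these assumptions tau comes out around 14 years, so the 10-20 year statement is at least dimensionally consistent with a simple one-box view of the mixed layer.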

A general question on my reread of Hansen 1988 is why the graph showing the extended curves for Scenarios A, B and C shows a track to 2060 for the least likely scenario, A, but only to 2027 for the most likely scenario, B, and to 2037 for Scenario C?

I’ll speculate, and you can ask Hansen how close I am. :)

I suspect the true answer has nothing to do with physics, or even with what Hansen thought was most likely. Scenario A ran longest because of the way projects and programs get funded. Let’s begin with the fact that Hansen ran Scenario A first. (It says so in the paper.)

Of course you think of Hansen as a big-wig now, but this was not always so. My guess is that Hansen and his group ran A as a numerical experiment, in the background, using discretionary resources which they’d begged others to let them use. (You’ll read they were using an Amdahl V6, running in the background of other jobs.) These exist at national labs and government agencies, but there is lots of competition, because all scientists want to do ‘fun’ stuff.

Hansen also wished to write further proposals. (In fact, saying you hope to write further externally funded proposals is part of the justification for getting access to internal discretionary resources.) To write a proposal likely to be funded, you need to show there may be some effect worth studying, to convince funding agencies to pay for further experiments. Since you don't know what results you'll get until you run them, scientists always pick a "worst-plausible" case scenario. That's A. To truly convince people there might be a problem, they needed to extrapolate out to a ridiculous time frame. So, they ran to 2060.

As these were running, and shortly afterwards, Hansen presented results, discussed things and persuaded others to give him snippets of additional funding to run more numerical experiments. Taking guidance from the questions he got, he ran the other cases for comparison. Obviously, it made sense to try some more plausible scenarios.

Still, knowing that extrapolation is not all that likely to give good results, they decided not to run these out to 2060 before writing the paper. (Had they done so, the paper would probably be Hansen 1992 or something.)

I'd also guess that at some point the group got real funding and turned their attention and resources to improving the model rather than just running the thought experiments out to infinity and beyond. Running the thought experiments forever might have been nearly impossible in any case: that old Amdahl computer may have been on its last legs, scheduled to be retired, etc. If the computer vanished, then, given changed compilers and so on, it might have been a ridiculous amount of work to continue with the older code.

But it may turn out that one time constant is dominant. As a rough approximation, you could say that there's an energy flux associated with each time constant, and one of them (I'm thinking the ocean mixed layer) may turn out to account for the vast majority of the flux, in which case the one-lump model may actually be good. The problem is knowing this in advance. Once you start fiddling with multiple knobs, as I'm sure you well know, you find that a lot of very different combinations end up producing very similar and plausible results.

Larry– I’m waiting to see what happens as I fiddle. I was surprised when I first tried this and the method of fitting the data worked quite well immediately. Does that mean I won’t find holes? No. But, I’m looking for the obvious ones. If I don’t find them, I’ll be documenting. But I can’t disagree with the general misgivings. Fully modeling the climate is difficult, and the physics are complex.

But yes, it may turn out that one time scale is dominant for certain purposes– and this may be one of them!
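The "one dominant flux" point can be illustrated with a toy two-box model — mixed layer plus deep ocean — where exchange with the deep box is weak. All the parameter values are illustrative assumptions; the point is only that when the mixed-layer flux dominates, a one-lump model with the mixed-layer constants stays fairly close to the two-box surface response on century scales:

```python
import numpy as np

# Toy two-box model: mixed layer (fast) weakly coupled to a deep ocean (slow),
# compared against a one-lump model. Illustrative parameter values only.
lam = 1.25        # feedback, W/m^2/K
gamma = 0.2       # mixed-layer <-> deep exchange, W/m^2/K (assumed weak)
Cm = 8.0e8        # mixed layer heat capacity, J/m^2/K
Cd = 1.0e10       # deep ocean heat capacity, J/m^2/K

dt = 3.15e7 / 12                  # monthly steps, in seconds
n = 200 * 12                      # 200 years
F = 4.0                           # constant forcing, W/m^2

Tm = np.zeros(n)                  # two-box surface temperature
Td = np.zeros(n)                  # two-box deep-ocean temperature
T1 = np.zeros(n)                  # one-lump surface temperature
for i in range(1, n):
    Tm[i] = Tm[i-1] + dt * (F - lam*Tm[i-1] - gamma*(Tm[i-1] - Td[i-1])) / Cm
    Td[i] = Td[i-1] + dt * gamma * (Tm[i-1] - Td[i-1]) / Cd
    T1[i] = T1[i-1] + dt * (F - lam*T1[i-1]) / Cm

# With gamma << lam, the two surface responses differ by only a modest amount
# over two centuries, even though the deep box is still far from equilibrium.
print(f"two-box: {Tm[-1]:.2f} K, one-lump: {T1[-1]:.2f} K, deep: {Td[-1]:.2f} K")
```

It also shows the identifiability problem in miniature: quite different (gamma, Cd) pairs would produce nearly the same surface curve over this period.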

What you say, Lucia, about how the different-length runs came about in almost reverse order to their stated likelihood makes sense. I would like to throw in my previous conjecture that, from the time Hansen et al. ran Scenario A in BAU mode, BAU changed to become more like Scenario B. Scenario C has a longer run time than B, and I would guess it might well have been run before Scenario B, perhaps to show the significant differences one could obtain between BAU and mitigation. Though Hansen was not as big a shot back then as he is now, I think he was as sensitive to policy issues then as he is now.

Also the shortened time period graph for Scenario B shows an almost immediate leveling off effect that seems to go counter to Hansen’s later assertions that stopping GHGs at current levels would have a residual gain of 0.6 degrees C over a century’s time. The extended graph does show a 0.1 to 0.2 degree C blip for Scenario B.

In the excerpt from the paper linked below, vintage the latter half of the 1990s, I was surprised to learn that, as you noted in your post, Lucia, the longer runs on the Amdahl V/6 would have been no small or easy task for Hansen et al. using a 3-D model. The linked paper describes the modifications MIT workers made to a GISS 2-D model (and the output results) in order to do longer scenario runs. And the paper is describing not the computer capabilities of the early 1980s, when Hansen et al. made their scenario runs, but the capabilities a decade later.

A variety of scenarios for changes in greenhouse gas (GHG) concentrations also have to be considered. As a result, a significant number of climate simulations, each for 50 – 100 years, are to be carried out. This would be impossible with the use of GCMs, due to their enormous requirements of computer time, even on the most powerful super computers now available. An alternative approach is to use simplified models. The two-dimensional (2-D) statistical/dynamical model developed at the Goddard Institute for Space Studies (GISS) is 23 times faster than the GISS GCM with the same latitudinal and vertical resolutions (Yao and Stone, 1987).

In this same linked paper, I was a bit taken aback by what seems a rather arbitrary change in the RH level at which precipitation is allowed (needed to compensate for the model's low spatial resolution), and by the fact that it can change the cloud feedback from negative to positive.

In the original version of the 2-D model, condensation occurs when relative humidity reaches 100%. As a result, the amount of precipitable water in the atmosphere obtained in the simulations with this version turns out to be larger than the observed value. At the same time, even in some GCMs with low horizontal resolution, condensation is allowed to occur in partly saturated areas, in order to take into account subgrid-scale variations of relative humidity. Such an approach seems to be even more appropriate in a zonally averaged model: moreover, a similar approach is used in the parameterization of moist convection (Yao and Stone, 1987). Therefore, the value of hcon = 90% has been chosen as the criterion for condensation. This small change has a very profound impact on the model’s sensitivity, namely, if hcon = 100%, the model produces a negative cloud feedback; however, when hcon = 90%, the cloud feedback becomes positive.
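As a schematic of what that criterion does (this is not the actual GISS 2-D scheme, and the RH field here is synthetic), lowering the condensation threshold from 100% to 90% simply lets many more partly saturated cells precipitate:

```python
import numpy as np

# Schematic of the hcon condensation criterion: a grid cell is allowed to
# condense when its grid-mean relative humidity reaches hcon. The RH values
# are synthetic; this is not the GISS 2-D model's parameterization.
rng = np.random.default_rng(0)
rh = rng.uniform(0.6, 1.0, size=10_000)    # synthetic grid-mean RH values

def condensing_fraction(rh, hcon):
    """Fraction of cells in which the criterion permits condensation."""
    return float(np.mean(rh >= hcon))

for hcon in (1.00, 0.90):
    print(f"hcon = {hcon:.0%}: {condensing_fraction(rh, hcon):.1%} of cells condense")
```

That large jump in how often condensation triggers gives some intuition for why such a "small" threshold change can flip the sign of a feedback in the full model.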

My reference to Scenario B in my previous post in the paragraph excerpted below should be Scenario C. Sorry for any confusion this caused.

Also the shortened time period graph for Scenario B shows an almost immediate leveling off effect that seems to go counter to Hansen’s later assertions that stopping GHGs at current levels would have a residual gain of 0.6 degrees C over a century’s time. The extended graph does show a 0.1 to 0.2 degree C blip for Scenario B.

“James Hansen told Congress on Monday that the world has long passed the “dangerous level” for greenhouse gases in the atmosphere and needs to get back to 1988 levels.

He said Earth’s atmosphere can stay this loaded with man-made carbon dioxide for only a couple more decades without changes such as mass extinction, ecosystem collapse and dramatic sea level rises.

“We’re toast if we don’t get on a very different path,” said Hansen, director of the Goddard Institute of Space Sciences who is sometimes called the godfather of global warming science. “This is the last chance.”
