Hansen’s 1988 projections

At Jim Hansen’s now famous congressional testimony, given in the hot summer of 1988, he showed GISS model projections of continued global warming assuming further increases in human-produced greenhouse gases. This was one of the earliest transient climate model experiments, and so it rightly gets a fair bit of attention when the reliability of model projections is discussed. There have, however, been an awful lot of mis-statements over the years – some based on pure dishonesty, some based on simple confusion. Hansen himself (and, for full disclosure, my boss) revisited those simulations in a paper last year, where he showed a rather impressive match between the recently observed data and the model projections. But how impressive is this really? And what can be concluded from the subsequent years of observations?

In the original 1988 paper, three different scenarios were used: A, B, and C. They consisted of hypothesised future concentrations of the main greenhouse gases – CO2, CH4, CFCs, etc. – together with a few scattered volcanic eruptions. The details varied for each scenario, but the net effect of all the changes was that Scenario A assumed exponential growth in forcings, Scenario B was roughly a linear increase in forcings, and Scenario C was similar to B but had close to constant forcings from 2000 onwards. Scenarios B and C had an ‘El Chichon’-sized volcanic eruption in 1995. Essentially, a high, middle and low estimate were chosen to bracket the set of possibilities. Hansen specifically stated that he thought the middle scenario (B) the “most plausible”.

These experiments were started from a control run with 1959 conditions and used observed greenhouse gas forcings up until 1984, and projections subsequently (NB. Scenario A had a slightly larger ‘observed’ forcing change to account for a small uncertainty in the minor CFCs). It should also be noted that these experiments were single realisations. Nowadays we would use an ensemble of runs with slightly perturbed initial conditions (usually a different ocean state) in order to average over ‘weather noise’ and extract the ‘forced’ signal. In the absence of an ensemble, this forced signal will be clearest in the long term trend.

How can we tell how successful the projections were?

Firstly, since the projected forcings started in 1984, that should be the starting year for any analysis, giving us just over two decades of comparison with the real world. The delay between the projections and the publication is a reflection of the time needed to gather the necessary data, churn through the model experiments and get results ready for publication. If the analysis uses earlier data, i.e. from 1959, it will be affected by the ‘cold start’ problem – i.e. the model starts from a radiative balance that the real world was not in. After a decade or so that is less important. Secondly, we need to address two questions – how accurate were the scenarios, and how accurate were the modelled impacts?

So which forcing scenario came closest to the real world? Given that we’re mainly looking at the global mean surface temperature anomaly, the most appropriate comparison is for the net forcings for each scenario. This can be compared with the net forcings that we currently use in our 20th Century simulations based on the best estimates and observations of what actually happened (through to 2003). There is a minor technical detail which has to do with the ‘efficacies’ of various forcings – our current forcing estimates are weighted by the efficacies calculated in the GCM and reported here. These weight CH4, N2O and CFCs a little higher (factors of 1.1, 1.04 and 1.32, respectively) than the raw IPCC (2001) estimate would give.
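As a rough illustration of how such efficacy weighting works, here is a minimal sketch. The forcing values are invented for the example; only the efficacy factors (1.1, 1.04 and 1.32, with CO2 at 1 by definition) come from the numbers quoted above.

```python
# Hypothetical per-gas forcings (W/m^2) -- illustrative only, NOT the GISS values.
raw_forcing = {"CO2": 1.50, "CH4": 0.48, "N2O": 0.15, "CFCs": 0.34}
# Efficacy factors quoted in the post; CO2 is the reference (efficacy 1).
efficacy = {"CO2": 1.00, "CH4": 1.10, "N2O": 1.04, "CFCs": 1.32}

# Effective forcing = raw forcing weighted by the GCM-derived efficacy
effective = {gas: f * efficacy[gas] for gas, f in raw_forcing.items()}

net_raw = sum(raw_forcing.values())
net_eff = sum(effective.values())
print(f"net raw forcing:       {net_raw:.3f} W/m^2")
print(f"net effective forcing: {net_eff:.3f} W/m^2")
```

The weighting nudges the net forcing up by a few percent, which is why the comparison with and without efficacy factors differs slightly.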

The results are shown in the figure. I have deliberately not included the volcanic forcing in either the observed or projected values since that is a random element – scenarios B and C didn’t do badly since Pinatubo went off in 1991, rather than the assumed 1995 – but getting volcanic eruptions right is not the main point here. I show three variations of the ‘observed’ forcings – the first includes all the forcings (except volcanic), i.e. solar, aerosol effects, ozone and the like, many aspects of which were not as clearly understood in 1984. For comparison, I also show the forcings without solar effects (to demonstrate the relatively unimportant role solar plays on these timescales), and one which just includes the forcing from the well-mixed greenhouse gases (WM-GHGs). The last is probably the best one to compare to the scenarios, since they only consisted of projections of the WM-GHGs. All of the forcing data have been offset to have a 1984 start point.

Regardless of which variation one chooses, the scenario closest to the observations is clearly Scenario B. The difference in scenario B compared to any of the variations is around 0.1 W/m2 – around a 10% overestimate (compared to > 50% overestimate for scenario A, and a > 25% underestimate for scenario C). The overestimate in B compared to the best estimate of the total forcings is more like 5%. Given the uncertainties in the observed forcings, this is about as good as can be reasonably expected. As an aside, the match without including the efficacy factors is even better.

What about the modelled impacts?

Most of the focus has been on the global mean temperature trend in the models and observations (it would certainly be worthwhile to look at some more subtle metrics – rainfall, latitudinal temperature gradients, Hadley circulation etc. but that’s beyond the scope of this post). However, there are a number of subtleties here as well. Firstly, what is the best estimate of the global mean surface air temperature anomaly? GISS produces two estimates – the met station index (which does not cover a lot of the oceans), and a land-ocean index (which uses satellite ocean temperature changes in addition to the met stations). The former is likely to overestimate the true global surface air temperature trend (since the oceans do not warm as fast as the land), while the latter may underestimate the true trend, since the air temperature over the ocean is predicted to rise at a slightly higher rate than the ocean temperature. In Hansen’s 2006 paper, he uses both and suggests the true answer lies in between. For our purposes, you will see it doesn’t matter much.

As mentioned above, with a single realisation, there is going to be an amount of weather noise that has nothing to do with the forcings. In these simulations, this noise component has a standard deviation of around 0.1 deg C in the annual mean. That is, if the models had been run using a slightly different initial condition so that the weather was different, the difference in the two runs’ mean temperature in any one year would have a standard deviation of about 0.14 deg C (the independent noise in the two runs adds in quadrature), but the long term trends would be similar. Thus, comparing specific years is very prone to differences due to the noise, while looking at the trends is more robust.
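A quick synthetic-data sketch shows why the year-by-year difference between two single realisations has a spread of roughly sqrt(2) times the single-run noise, and why fitted trends are far more stable than any individual year. All numbers here are illustrative, not GISS output.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1    # annual-mean weather noise (deg C), as quoted above
n_years = 23   # 1984-2006
trend = 0.024  # deg C/yr forced signal, chosen for illustration

years = np.arange(n_years)
# Two "realisations": identical forced trend, independent weather noise
run1 = trend * years + rng.normal(0, sigma, n_years)
run2 = trend * years + rng.normal(0, sigma, n_years)

# Independent noise adds in quadrature: expect ~ sqrt(2) * 0.1 ~ 0.14
diff_sigma = (run1 - run2).std()
print(f"std of year-by-year difference: {diff_sigma:.3f} deg C")
# The fitted trends agree much better than any individual year does
print(f"trend run1: {np.polyfit(years, run1, 1)[0] * 10:.2f} deg C/decade")
print(f"trend run2: {np.polyfit(years, run2, 1)[0] * 10:.2f} deg C/decade")
```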

From 1984 to 2006, the trends in the two observational datasets are 0.24 +/- 0.07 and 0.21 +/- 0.06 deg C/decade, where the error bars (2σ) are derived from the linear fit. The ‘true’ error bars should be slightly larger given the uncertainty in the annual estimates themselves. For the model simulations, the trends are, for Scenario A: 0.39 +/- 0.05 deg C/decade, Scenario B: 0.24 +/- 0.06 deg C/decade and Scenario C: 0.24 +/- 0.05 deg C/decade.
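For readers curious where such error bars come from, here is a sketch of the standard ordinary-least-squares calculation on a synthetic anomaly series (the data are made up; only the method is the point).

```python
import numpy as np

def trend_with_2sigma(years, temps):
    """OLS slope and its 2-sigma uncertainty, derived from the fit residuals."""
    A = np.vstack([years, np.ones_like(years)]).T
    coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
    resid = temps - A @ coef
    n = len(years)
    s2 = resid @ resid / (n - 2)  # residual variance (2 fitted parameters)
    slope_var = s2 / np.sum((years - years.mean()) ** 2)
    return coef[0], 2 * np.sqrt(slope_var)

# Synthetic anomaly series for illustration (not the actual GISS data)
rng = np.random.default_rng(1)
years = np.arange(1984, 2007, dtype=float)
temps = 0.022 * (years - 1984) + rng.normal(0, 0.1, years.size)

slope, err = trend_with_2sigma(years, temps)
print(f"trend: {slope * 10:.2f} +/- {err * 10:.2f} deg C/decade")
```

As the post notes, these fit-derived bars slightly understate the true uncertainty, since each annual value carries its own measurement error.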

The bottom line? Scenario B is pretty close and certainly well within the error estimates of the real world changes. And if you factor in the 5 to 10% overestimate of the forcings in a simple way, Scenario B would be right in the middle of the observed trends. It is certainly close enough to provide confidence that the model is capable of matching the global mean temperature rise!

But can we say that this proves the model is correct? Not quite. Look at the difference between Scenario B and C. Despite the large difference in forcings in the later years, the long term trend over that same period is similar. The implication is that over a short period, the weather noise can mask significant differences in the forced component. This version of the model had a climate sensitivity of around 4 deg C for a doubling of CO2. This is a little higher than what would be our best guess (~3 deg C) based on observations, but is within the standard range (2 to 4.5 deg C). Is this 20 year trend sufficient to determine whether the model sensitivity was too high? No. Given the noise level, a trend 75% as large would still be within the error bars of the observations (i.e. 0.18 +/- 0.05), assuming the transient trend scales linearly with sensitivity. Maybe with another 10 years of data, this distinction will be possible. However, a model with a very low sensitivity, say 1 deg C, would have fallen well below the observed trends.
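The back-of-envelope scaling argument above can be written out explicitly. Assuming (crudely) that the transient trend scales linearly with equilibrium sensitivity:

```python
model_trend = 0.24       # Scenario B trend, deg C/decade
model_sensitivity = 4.0  # deg C per CO2 doubling in the 1988 model

# Crude assumption: transient trend scales linearly with sensitivity
for s in (1.0, 3.0, 4.0):
    print(f"sensitivity {s:.1f} C -> implied trend "
          f"{model_trend * s / model_sensitivity:.2f} C/decade")

# Span of the two observed trend estimates, including their 2-sigma bars
obs_low, obs_high = 0.21 - 0.06, 0.24 + 0.07

scaled3 = model_trend * 3.0 / model_sensitivity  # a ~3 C model
scaled1 = model_trend * 1.0 / model_sensitivity  # a very low 1 C model
print("3 C model consistent with obs:", obs_low <= scaled3 <= obs_high)
print("1 C model consistent with obs:", obs_low <= scaled1 <= obs_high)
```

The 3 deg C case lands at 0.18 deg C/decade, inside the observational error bars, while the 1 deg C case falls well below them – which is exactly why the 20-year record constrains the low end of sensitivity but not the 3-vs-4 distinction.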

Hansen stated that this comparison was not sufficient for a ‘precise assessment’ of the model simulations and he is of course correct. However, that does not imply that no assessment can be made, or that stated errors in the projections (themselves erroneous) of 100 to 400% can’t be challenged. My assessment is that the model results were as consistent with the real world over this period as could possibly be expected and are therefore a useful demonstration of the model’s consistency with the real world. Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get.

293 Responses to “Hansen’s 1988 projections”

Mr. Bender, this thread isn’t current on modeling, it’s not about current models. Plans made back in the late 1990s led to then new models. That’s been repeated over and over, for a long time, since the 1988 model.

Look at this week’s EOS ($20 AGU membership; the newsletter presents current science in terms the nonscientist reader can follow). Climate model requirements are discussed in a good review by Hibbard, Meehl, Cox and Friedlingstein.

That article begins:

“Climate models used for climate change projections are on the threshold of including much greater biological and chemical detail ….” The background given covers models from about 2000 to the present.

The authors there write “We welcome comments and ideas on the proposed strategy for the experimental design and related questions for the research questions …. ”

Just for the record. For things like the direct GHG forcing, for all practical purposes, there is no ‘tuning’ involved in GCMs. CO2, H2O, O3 etc. concentrations all feed directly into the radiative transfer calculations which then produce the associated atmospheric heating/cooling rates (including surface heating/cooling effects). For this aspect of GCMs there is no room for fitting anything.
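GCMs compute this forcing from the full radiative transfer equations, but the direct, tuning-free nature of the relationship can be illustrated with a widely used simplified fit for CO2 alone (Myhre et al. 1998). Note that this log formula is an approximation to the radiative transfer output, not what the models themselves solve.

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing fit (Myhre et al. 1998), in W/m^2.
    GCMs derive the forcing from full radiative transfer; this log fit is
    a common approximation to that output, not the model calculation."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Pre-industrial, late-1980s, and doubled-CO2 concentrations (ppm)
for c in (278, 355, 556):
    print(f"{c} ppm -> {co2_forcing(c):.2f} W/m^2")
```

A doubling of CO2 gives 5.35 × ln 2 ≈ 3.7 W/m^2 – the concentration goes in, the forcing comes out, with nothing free to fit.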

I appreciate the correction. I always will.

In any case, it wasn’t the forcing which I regarded as problematic so much as the water vapor feedback due to higher levels of CO2 in the atmosphere. As I understood it, this was primarily what is meant by “climate sensitivity” to the forcing of carbon dioxide. However, if we are able to accurately calculate the effects of carbon dioxide on the levels of water vapor, and then the effects of both upon temperature, I am impressed. What would then be left would be primarily the positive feedbacks due to the carbon cycle, the cryosphere “albedo” feedback, and the effects of aerosols, but the last of these is quickly becoming amenable to calculation.

Re #247: No, it doesn’t have legs, except in the sense that it has the potential for serving as bender’s hobby for some time to come. IMHO it is of no value for a climate modeler to interact with him. For evidence see the thread below (intentionally not hot-linked) wherein leading modeler Isaac Held (starting with comment #20) slowly loses his patience with the ClimateAstrology yahoos. Bender’s rather conclusory and self-satisfied comment (#58) is par for the course.

Look at this week’s EOS ($20 AGU membership; the newsletter presents current science in terms the nonscientist reader can follow). Climate model requirements are discussed in a good review by Hibbard, Meehl, Cox and Friedlingstein.

Ah – the near-term future of climate modeling.

Greater vertical resolution, in line with what David was mentioning, and more detailed analysis of the carbon cycle – due to its strong, positive feedback – something I was hoping for. And we were already including representative species at the microbial and floral levels in various climate models – that is something I was unaware of.

The lateral resolution is still rather wide for the atmosphere-ocean global models (approximately two degrees – a big grid), but then there are the regional models, which have higher resolution and include more variables. For example, NASA recently released projections for major eastern cities in the 2080s which showed higher temperatures (100-110 F averages July through August during the drier summers) and more frequent droughts, since NASA was taking into account that it tends to rain later in the day, giving the ground a chance to warm up beforehand, resulting in increased evaporation and the consequent degradation of plants and soil.

Might be interesting to see whether the shotgun analysis of microbial colonies will come into play within the next couple years. It is supposed to give you a low resolution of metabolic functions at a level which is substantially broader than individual species. We might not have to rely upon representative species to the same extent – at least in the microbial realm, and it could prove valuable in the analysis of the carbon cycle – given the considerable importance of both archaea and eubacteria to the cycle at numerous key points of our climate system.

Re #251
So, you would have me retract my question because I’m, in your opinion, off-topic? Hokay, Hank. Carry on without me. Good luck to you and your presumptuous lot. May your arrogance not be your downfall.

Re #254
I know you feel the need to stomp on any discussion of uncertainty because you fear the influence it could have on public opinion. That’s ok, I understand. It’s your job. I applaud your faith.

But if your faith in GCM accuracy is so strong, then why don’t you just sit back patiently, keep quiet, and let the experts defend it? Then my concern will be proven to be unfounded. Then I will leave, we can stay friends, and all will be well again.

Instead, you ratchet up the rhetoric with your inexpert and superfluous commentary. Can’t you see how this harms your agenda? But you can’t resist, can you? May your selfishness not be your downfall.

Your notion of model “accuracy” is perhaps not as nuanced as it needs to be. With a stochastic dynamic system model containing a hundred free parameters, do you have any idea how easy it is to get the right answer … for the *wrong* reasons? A valid model is one which gives you the right answers … for the *right* reasons. There is much more to model validation than an output correlation test, especially over a meagre 20 years.

However many variables it takes, it is not that easy to get the “right results” if you do not have the right results beforehand. Back in 1988, Jim Hansen was not in possession of a time machine, and I seriously doubt that he had a crystal ball. What he did have, though, were models which were fairly good at projecting a future which he did not already know.

If your faith in the model’s accuracy is so strong, then why don’t you just sit back patiently, keep quiet, and let the experts defend it? Then my concern will be proven to be unfounded. Then I will leave, we can stay friends, and all will be well again.

I do not require faith. I can and do look up the technical papers whenever I have the time: it is a habit I developed while dealing with contrarians who were focused not so much on attacking climate science but evolutionary biology. Moreover, I enjoy understanding things – going as deep as I can go, and I am willing to put forward the effort. However, after Dover, things have slowed down a bit, so now I have a little extra time. Besides, there really isn’t that much of a difference. The ideological commitment to one’s own views without regard for the facts is more or less the same whether people are attacking evolutionary biology or climatology. It is probably more a matter of human nature.

In any case, I wouldn’t care to see a young earth creationist preacher “debate” an evolutionary biologist who was being patient enough and kind enough to offer free lectures to anyone who cared to listen. I wouldn’t care to see a pseudo-scientist try and “debate” various theoretical points with a specialist in quantum theory or general relativity – particularly if it were obvious that he had no grasp of the subject he was trying to debate.

I have similar reservations here.

You don’t have the background which would be required to make any debate with these specialists interesting except at the most superficial level. Now this is not to say that you could not learn much of what they already know, but it would take time and there would likely be limits – even assuming that you had the motivation. But from what I can tell, you have chosen to devote all of your time and energy to the fine art of heckling. Nothing left for learning what you don’t already know, it would appear.

But as some may have already guessed, there has been a purpose to my exchange with you: I wished to have someone illustrate in grisly detail the sort of ideological commitments that we run into among the contrarians when defending modern science, and more generally, what practices people should avoid in their own lives as a matter of simple, commonsense cognitive and psychological hygiene. You have performed both functions quite admirably, and I appreciate the service.

Now should I send a check, and if so, who should I make it out to? Would you prefer “bender” or simply “anon”?

I’m just sayin’ — there’s an invitation to a discussion of your interests — it’s at the AGU.
You could put your ideas in a thread where people are inviting it, where the discussion’s about the topic, not about you.
If you wanted to improve automotive design, you wouldn’t be insisting it happen in a Henry Ford Model T thread, would you?
If you did that, odds are people discussing the Model T would think you were trolling.

Reading online involves interacting with the writers, trying to make a sensible discussion. Everyone’s doing their best here.
What we do is what the next reader along gets.

I had noted earlier that it has been my experience that many Objectivists believe that the discoveries of modern science are philosophically incorrect, and as such, they find it necessary to abandon and even attack such discoveries in areas like quantum mechanics, general relativity and special relativity. You seemed to think that this was extremely unfair, and were no doubt surprised to discover that I knew exactly what I was talking about – having been an Objectivist myself for quite some time, albeit one who was able to avoid such pitfalls of the movement. (It might have helped that I fell in love with modern science first before becoming acquainted with “Objectivist philosophy.”)

However, despite your protestations that my criticism of the movement was unfair, you never got around to explaining why it was unfair. For example, do you accept quantum mechanics, general relativity or special relativity? Assuming you do, can you honestly say that among the Objectivists you know, there isn’t a strong element of dogmatism with the associated ideological blinders that prevents them from being able to accept such discoveries?

Or is it that when you stated that I was making certain presumptions, you actually meant that I was “assuming” that modern science is true when you know for a fact that it is some sort of horrible Kantian abomination designed to prevent the ordinary man from seeing the truth of the Omphalosian-spun version of Objectivism which you hold above all else?

I am sorry, but if you actually addressed this at some point, I seem to have missed it.

1) Science in general, and modeling/simulation (whether in science or engineering), have long progressed by creating approximations to reality, and then improving them over time.

Improvements can come from:
– New scientific understanding, e.g., when people started to understand global dimming effects from volcanoes and other sulfate sources.
– New data sources, like ice-cores.
– Longer data series of types that already existed, like glacier records.
– Better algorithms.
– Computers with more TFLOPs, memory, and I/O, since a lot of simulations use methods that require modeling something physical with a large collection of elements small enough to get accurate enough. It is quite common for important effects to be visible only when elements get down to some small-enough size, which of course requires more of them, and more compute power.
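The cost of refining the grid compounds quickly, which is why the TFLOPs matter. A back-of-envelope sketch: halving the grid spacing in three dimensions multiplies the cell count by 8, and an explicit time-stepping scheme must also shrink the timestep in proportion (the CFL condition), for roughly a factor of 16 overall.

```python
def relative_cost(refinement, dims=3):
    """Relative compute cost when grid spacing is divided by `refinement`.
    Cell count grows as refinement**dims, and an explicit scheme must also
    shrink the timestep proportionally (CFL condition): one extra factor."""
    return refinement ** (dims + 1)

for r in (1, 2, 4):
    print(f"{r}x finer grid -> ~{relative_cost(r)}x the compute")
```

This is only a scaling heuristic; real model costs also depend on I/O, memory bandwidth, and the physics packages involved.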

I used to help design supercomputers, and worked with the people who bought them, and had more than one researcher complain to me that we didn’t yet offer a Terabyte of main memory [I told them: Next Year], but of course that was years ago.

(The first 4 are life-and-death, and anyone who reflexively distrusts simulations should avoid those).

2) *Anybody* I ever talked to (a lot!) understood perfectly well that there were various degrees of approximation and uncertainty involved, including, for sure, the climate modelers. Nevertheless, as soon as people started getting useful results (which has happened at different times for different problems), they started using them (and trying to improve accuracy and bound uncertainty), rather than waiting for perfection.

Of course, some people seem to be able to be absolutely *sure*, but then spend their time poking at scientists to demand certainty, and claim that anything less than certainty is no good.

I doubt that my car is as perfectly safe as it could be, but I’m virtually certain that simulation made it a lot better than it would have been otherwise. (I know for sure it was simulated by safety-fanatics. They even did events like shedding moose, of which we have none locally, but we do have lots of deer, and although I’m not sure deer were simulated, I suspect the moose results may help somewhat.)

Car crash codes got good enough in the mid-1990s for car companies to do most of their crashes virtually. Of course, climate modeling is one of the tougher ones, along with some of the biological modeling problems.

3) For people new to this, and interested in good background, I recommend the ancient (1993), but fine book by Larry Smarr and William Kaufmann “Supercomputing and the Transformation of Science.”

It has a good overview of the (still-relevant) issues, beautiful illustrations, and can be gotten very cheaply via Amazon.

4) Once again, I miss the existence of the equivalent of USENET KILLFILEs for blogs in general.

Maybe looking up the technical papers is not such a great idea after all.

I might take up lessons in staring at my navel from bender, or alternately weave back-and-forth while repeating the mantra “This is not happening! This is not happening!” Life would no doubt be so much simpler if I could faithfully practice some form of radical skepticism with respect to modern science in order to protect the dogmatism of some philosophic belief…

As I have said before, we are not talking runaway greenhouse effect.

However, it would appear that the positive feedback from the carbon cycle may substantially increase the ppm of CO2 – enough to set off my personal alarms. Habitable, of course, but it would appear that the world will be changing quite substantially – and the long-term effects may be more significant than “global warming.”

I just hope that we have some semblance of an economy at the end of this century: we are going to need it to deal with the after-effects of all of our carbon binging. What we have seen happening to the arctic cap over the past several decades would appear to be just the appetizer – and through some odd association brings to mind a skit when I think of what lies ahead…

Bender, I think that you might get further if you devoted a little time to understanding climate science rather than jumping right into the instabilities of the models. If you do, you will find that although the problem of modeling global climate is very difficult, it is much easier to get the general trends right–as Hansen did back in 1988. I do not know your background, but it would seem that you are not a geoscientist. Every field has its own approach to quality control, and if you really want to make a contribution to the field, you first need to understand in detail how their process works, how what they are doing relates to your experience and see where the two may be complementary, where they are antagonistic and where your methods might supplement theirs.
Unfortunately, climate science has lots of experience with impatient outsiders who want to delve into details without understanding the general features. Michael Crichton comes to mind–a man who feels that he should be able to understand a complex field with 30 minutes of effort even though he doesn’t understand even the basics of how science works (as should be obvious to anyone who has suffered through one of his novels).
This site is a resource–mostly for laymen–to come and learn about climate science. There are discussions of models, but it is not a forum for delving into the guts of the models. My recommendation to you would be to learn what you can here–about both climate and the climate models–but keep in mind that the contributors to this blog all have day jobs. They may not feel they have the time to delve into the guts of the models with someone who has not shown that they have mastered the basics.

So much verbiage, so little worth responding to. For those who do not understand how GCMs are built or parameterised, please see today’s post. It provides some (but not all) of the necessary context to understand why my questions in #153 and #232 are (a) valid and (b) directly on-topic. Re: #259 Thank you for the warm welcome here and the invitation to speak at AGU. But unfortunately I don’t have anything to discuss. All I have is a simple question – a question that is not “MY” topic, but THE topic: when we say GCMs are “skillful”, what is it that they are skillfully reproducing: the global circulation in all its possibilities, or the circulation as we have observed and characterized it over the last few decades? It’s a statistical question, going back to the reply #153 in regards to the phrase “out-of-sample”: what is the sample and what is the population?

Please, no amateur responses. It’s the inline replies I want to read. All else, it seems, is noise.


[Response: I wouldn’t be so dismissive of commenters – there’s some good stuff sprinkled about there, but let me briefly step in. The models are tuned on climatology – that is the mean observed climate – usually over the satellite period. That will include the seasonal cycle and, to some extent, the unforced variability (ENSO amplitude etc.). They are not tuned to trends, events (such as Pinatubo), paleo-climates (6kyr BP, LGM, 8.2kyr event, D/O events, the PETM, the Maunder Minimum or the Eocene), other forcings (solar, orbital etc.) – thus every match to those climate changes is ‘out of sample’ in the sense you mean. Read a GCM background paper (like Schmidt et al 2006 for instance) to see what is tuned and what isn’t. The resulting code is identical to the one used for all the paleo-climate experiments, none of which were started prior to the final tuning for IPCC AR4 runs. The sample is therefore the present day mean climate, the population is the history of all paleo-climates.

There will be some clear failures where there are reasons to suspect that some of the (up to now) excluded physics is dominant (i.e. Heinrich events that rely on ice sheet dynamics), but pretty much everything else is fair game – as long of course there is a good hypothesis to test. The 8.2kyr event is a great example. -gavin]

Ray, I’m trying to learn the answer to my question. I don’t see it addressed here and I’m hoping it will be. Presumptuous Hank is way off the mark as to my purpose. I am not here to assert. I have no beliefs. I have only questions. Valid questions. Where shall I ask them if not here?

Ray, in #266 you say:
“it is not a forum for delving into the guts of the models”
but attribution is fundamentally a modeling exercise. If you can’t discuss the models, you can’t discuss attribution. It seems there may be a double-standard here: the posters can discuss the models, but the commenters can not.

My presumed “background” (#263) and imagined “interests” (#259), “beliefs” and “assertions” (#266) are immaterial to the issue, and are, frankly, offensively prejudicial. If you yourself have read the climatology literature, as you ask me to do, then you tell me straight out: what precisely is wrong with my question?

[Apologies to all for the redundancy in the double-posts. Some get held up, and some don’t. I will be more patient if Hank Roberts, Steve Bloom, and Ray Ladbury will.]

Presumably a climate model would assume a convergence of all moments in each relevant probability distribution function (pdf), although presumably higher order components may be truncated according to particular assumptions of a given model. Perhaps the climate modellers could elaborate on moment convergence and truncation of relevant pdfs in relation to structural stability. Some of the models also involve ensemble calculations, and again it may be instructive for the climate modellers to describe something about the use of these, especially as the public has been involved in some ensemble calculations being run on their PCs at home.

In the field of nonequilibrium dynamics, there are interesting phenomena, such as the Belousov-Zhabotinsky systems, which exhibit complex ordered features (going under the general field of self-organising systems). The work of pioneers such as Prigogine and Haken comes to mind, although their work is not directly in climate change modelling. The book “The self-made tapestry. Pattern formation in nature” by Philip Ball has lots of photographs of pattern formation in diverse systems, although not climate systems. Systems can switch from one state to another quite abruptly, and one might ask whether such sudden state changes are predicted by the climate models. For example, do climate models predict jumps between states of climatic circulation, either atmospheric or oceanic? If so, do these jumps have anything to do with moment convergence and pdf stability/instability? The formation of large-scale mills in the southern oceans is an interesting phenomenon in this respect, although presumably oceanic computational fluid dynamical models may not necessarily reveal such complex vortex-type phenomena.

It would seem that the question of structural stability and morphogenesis is very important in any model. Just because a particular pattern has not been seen before does not necessarily mean that it may not form soon or some time in the future. In any case, there would also be questions about the completeness of each subsystem model. For example, Hansen’s recent paper on Scientific Reticence is quite explicit that much of the important physics of ice sheets is not included in the models, hence his raising of matters to do with nonlinear behaviour (eg disintegration) of ice sheets. One would have thought that if a model (eg for ice sheet disintegration) doesn’t include essential physics, then that would naturally compromise other parts of GCM systems, such as oceanic circulation and oceanic biotics, to which it may link and take inputs/outputs.

In systems far from equilibrium, the phenomenon of self-organised criticality (SOC) is frequently observed. Examples which are often quoted to illustrate this are sandpiles, avalanches, forest fires, etc, and an essential ingredient for such phenomena seems to be interaction-dominated thresholds. Do any of the climate models exhibit systems which self-organise to criticality? Does the steadily rising level of GHG emissions trigger SOC phenomena in any of the models? It would be strange if it did not, given that SOC is apparently observed in so many situations and many authors in diverse fields associated with climate change discuss thresholds. The matter of thresholds, eg where bifurcation takes place and systems can move along diverse paths/patterns, is clearly a very important area, and it is not necessarily obvious that such thresholds may emerge directly from model calculations. Are there examples of thresholds which are output by the models?
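For readers unfamiliar with SOC, the standard toy example is the Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time, any site holding four or more grains topples onto its neighbours, and after a transient the pile organises itself into a critical state with avalanches of all sizes (a heavy-tailed size distribution). A minimal sketch, purely illustrative and not a claim about what GCMs do:

```python
import random

random.seed(0)
N = 20                          # side length of the square grid
grid = [[0] * N for _ in range(N)]

def topple():
    """Relax every site at or above the threshold of 4; return avalanche size."""
    size = 0
    unstable = [(i, j) for i in range(N) for j in range(N) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size

avalanches = []
for _ in range(20000):          # drop grains one at a time at random sites
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    avalanches.append(topple())

small = sum(1 for s in avalanches if 0 < s <= 10)
large = sum(1 for s in avalanches if s > 100)
print(small, large)             # many small avalanches, few large ones
```

No avalanche size is dialled in anywhere: the broad distribution of event sizes emerges from the local threshold rule alone, which is exactly the point of SOC.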

With respect to the “warm” welcome you received, you should have seen the “warm” welcome I received when I first joined DebunkCreation. I wasn’t being antagonistic or demanding that someone who has spent eight or more years becoming an expert in their field defend it while standing on one foot – on one of the more esoteric issues, no less. I simply popped up my head making a silly creationist argument as a joke – in a way that I figured everyone would realize was a joke. But given what they had experienced in the past with other individuals, it took some time to convince them that that was how it was intended.

Now with respect to getting one’s questions answered, there was a question at one point that I had, I needn’t go into the details here, but I realized there had to be an answer – and it took ten years before I was able to answer the question for myself. You have spoken of stochastic processes presumably making climate prediction near to impossible, but why would they? Bell-shaped distributions result with great regularity from stochastic processes, and in this sense stochastic processes are quite predictable. Moreover, climates themselves are in essence a form of probability density function – for the weather patterns which exist within them. The weather is what is particular, whereas the climate itself is a statistical description of what particulars we can expect to find as the result of the stochastic process by which a climate system evolves.
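The point about bell curves arising from stochastic processes is just the central limit theorem, and it is easy to demonstrate: sum many small independent random shocks and the totals come out Gaussian, with a mean and spread you can predict in advance. A quick sketch (the shock count and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(0)

# Each 'observation' is the sum of 48 small independent random shocks.
samples = [sum(random.uniform(-1, 1) for _ in range(48)) for _ in range(20000)]

mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)
print(round(mean, 2), round(sd, 2))

# Theory: a sum of n uniform(-1, 1) shocks has mean 0 and sd sqrt(n/3),
# here sqrt(16) = 4.  If the totals are Gaussian, about 68% of them
# should fall within one standard deviation of the mean.
within = sum(1 for x in samples if abs(x - mean) <= sd) / len(samples)
print(round(within, 2))
```

So a process built entirely from randomness is nevertheless highly predictable in its statistics – which is the sense in which climate (the distribution) can be predictable even when weather (the particular draw) is not.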

Beyond this, in terms of the basics (at least from my perspective), what we are dealing with is a problem of induction. Any scientific theory reasons from what is known to the as of yet unknown. This is essentially what Gavin was getting at when he stated in the inline to #153,

One can never know that you have spanned the full phase space and so you are always aware that all the models might agree and yet still be wrong (cf. polar ozone depletion forecasts), but that irreducible uncertainty cuts both ways. So far, there is no reason to think that we are missing something fundamental.

Making inductions on the basis of what limited evidence we might have is part of the discovery process. It permits us to make predictions, and when those predictions fail, we know that it is time to replace or modify the theory. Now if the theory has worked very well in the past or with respect to a large body of evidence, chances are that we will not want to discard the theory altogether, but will realize that while that theory has worked with regard to earlier contexts, there is something new about this context – and then it is time to investigate what is new.

Moreover, you will note that Gavin specifically stated that while the climate models are parameterized in some contexts, they typically continue to work quite well outside of those contexts. We are reasoning from the known to the unknown, and Hume’s arguments to the contrary notwithstanding, it generally works. If it didn’t, human cognition would be entirely impossible.

Incidentally, I have been here a few weeks, and I have yet to receive an inline response, so you are ahead of me on that count.

In any case, it is quite possible that my opinions on climate science are as worthless as you appear to think. My only defense is that, like the scientific endeavor itself, this is part of how I learn, and while I hope that people will correct me when I get something wrong, it has often been my experience that I will have to do this myself.

Nevertheless, I would remind you that just because someone is not one of the leading climatologists themselves doesn’t mean that their opinions are entirely without merit. I have looked up some of the people who are participating in these discussions. They often have some fairly relevant experience and even expertise. Now I am not about to embarrass them or single them out by pointing out who they are. You can look some of them up yourself, or alternatively, judge them by their arguments and the rest of what you already know – to varying degrees of justification.

Ray, you say:
“contributors to this blog all have day jobs. They may not feel they have the time to delve into the guts of the models with someone who has not shown that they have mastered the basics”

First, I reject your notion of a hierarchy of qualification. I agree that a climate modeler must master the basics before building a climate model. An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail. His questions are necessarily going to seem awkward to non-climatological specialists; however they should not be dismissed on the basis of the way they sound to a non-professional.

Second, I am not seeking feedback from contributors who all have non-climatological day jobs. I am seeking feedback from qualified professionals who do climatological research on a daily basis. Thank you all for trying, however.

Lastly, others posting here clearly do not understand themselves how the GCMs are built and parameterised, yet make pretenses of having read “the literature” – which is HUGE. Can I assume you are as critical of their amateur commentary as you are of mine? Or do you hold a double-standard in how you apply your criticism?

Thank you for the reply in #265. I will read the cited paper and those cited in it. Expect to hear back in a month. Meanwhile, if you can manage a post on the topic of “structural instabilities” in theoretical models and in the real climate system, that would be very much appreciated. [For the record, some of the comments by commenters were, as you suggest, apropos. It’s the additional noise and the overall prejudicial tone from multiple commenters that disturbed me.]

Bender, I am not an expert in climate change. I am a physicist with enough physics knowledge–and enough knowledge of how physics gets done–to follow the field. About a decade ago, I was an editor at a physics magazine. Do you know how many times I would get calls from people who were absolutely sure they had disproved relativity? Do you know how often people like the contributors here are confronted–often forcefully and rudely–by people who are absolutely sure that they’ve found a fundamental error and we can all go back to driving our gas-guzzling SUVs without guilt? I will describe these people to you. They often have some technical background–engineer, doctor, computer programmer… that they feel gives them some special understanding. They often have no background in climate science, so they cannot understand the culture of the field or how and why it has developed in the way it has. And finally, they are monomaniacal–refusing to discuss any issue other than the one they are fixated on. If you want help understanding a field, look as little as possible like a monomaniac. Accept the help as it is offered. There is usually a reason why people present issues in the order they do. One may be a prerequisite to understanding what you are really interested in. For instance, I would think it would be very useful to understand the general approaches taken by various modeling groups before trying to delve into the instabilities of the models.

I am a physicist with 20 years experience. I accept that Gavin et al. will teach me a whole helluva lot more about climate science than I will ever teach him about anything–unless I want to set up a website about radiation physics and he for some reason wants to know something about it. This is not their day job–which they are succeeding admirably at, by the way. Rather it is a community service trying to increase understanding of what they do. I applaud them for it.

[[First, I reject your notion of a hierarchy of qualification. I agree that a climate modeler must master the basics before building a climate model. An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail.]]

Cf my earlier comments about engineers thinking they’re scientists. No, bender, just understanding computer modeling doesn’t mean you understand climatology as well, any more than someone who can program a Runge-Kutta differential equation solver but never took an astronomy course can do solar modeling.

Ray, perhaps you could help me with a problem I am having with radiation physics. What will be the energy of the radiation emitted by a CO2 molecule in the atmosphere at NTP, and does it depend on the temperature of the air?

Once again, what is a scientist and what is an engineer in your view?

Is a scientist someone doing research and an engineer someone applying research?

“An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail.”

I agree with this. Barton, there is a huge difference between a background in modeling and a skill in computer programming.

I haven’t read all the posts, but bender’s concern about parametrizations seems to be a valid question, as is well known in system identification, but Gavin wrote a good answer and it seems like the parametrizations are done in a good way.

I have noticed that there is a great deal which goes into climate modelling, and not just the ability to model weather, either.

Obviously you have the physics, a keener interest in atmospheric conditions at higher altitudes and the like, but there is also the understanding of ocean flow, chemistry (e.g., the effects of water vapor on the ozone and carbon dioxide on ocean chemistry), soil, the role of particulates in the nucleation of precipitation, geology (e.g., permafrost and methane hydrates), and now even ecology and microbiology and their effects upon carbon sequestration. I have never seen a discipline quite like this. Nowadays much of the science being done is interdisciplinary, requiring the formation of teams from various disparate fields – which at times will even have difficulty communicating with each other due to their specialized approaches to viewing problems and even their vocabularies. However, I suspect that climatology has been at the forefront of such an integrative approach – and that there is no other branch of science which requires this sort of integration to as great an extent.

I myself fully realize that I am an amateur in this and a number of other areas. But it is something that I would like to understand to the extent that I can – for a variety of reasons – including simple curiosity. For this reason, I greatly appreciate the work being done by Real Climate to communicate the principles of climatology with the general public, although sometimes I can’t even imagine why they would want to do it.

I am a physicist with 20 years experience. I accept that Gavin et al. will teach me a whole helluva lot more about climate science than I will ever teach him about anything–unless I want to set up a website about radiation physics and he for some reason wants to know something about it.

Yep.

You are one of the people I looked up and decided had some fairly relevant knowledge and expertise. (See post #270, last paragraph.) I would likewise include, to a lesser degree, those who have had to work closely with climatologists over an extended period of time.

But you are right: having advanced knowledge in a relevant discipline doesn’t make one an expert in climatology, and it would be a grave mistake to think otherwise. In an advanced economy, there exists a well-developed division of cognitive labor, particularly in the sciences. It is often difficult for one to be an expert even in their own field, which is part of the reason why new disciplines keep branching off of earlier ones. Moreover, it is a virtual impossibility for one to become an expert in a field for which they never trained. In fact the phrase “contradiction in terms” springs to mind. And one of the mistakes which it is far too easy to make is to gaze at another field from a distance and assume that the only expertise or knowledge it requires is that which immediately seems obvious. However, I suspect that in the case of the pseudo-scientist or monomaniac, there is usually a bit more to it.

In the case of evolutionary biology, engineers will often be the worst groups of offenders, although this isn’t meant to disparage them in their own fields. They will often be the creationists who see something which works as an integrated whole – and have considerable difficulty seeing how that whole may have come into being gradually and as the result of a blind process. The concept of systemic causation will often evade their grasp. In fact, it would sometimes appear that they haven’t even grasped the principles of organic growth and development at their most basic level. However, it should also be recognized in passing that there are generally certain religious motives involved, a personal framework through which they understand and live their lives which they then seek to impose upon the evidence and empirical science.

As for myself, I will pick up articles in prebiotic chemistry, virology as it relates to retroviruses or bacterial viruses, or molecular biology, such as those dealing with new discoveries related to promoters, transcription factors (proteins and RNAs which regulate the expression of other proteins and RNAs by binding to promoters), but even after having read a fair amount, trying to fit together pieces in a puzzle, I will be lucky if I have actually understood more than half of a given article at a relatively basic level. My understanding of advanced fields in science will always be superficial, and I will always be something less than a novice. But this isn’t something I can resent any more than the law of gravity. It is simply in the nature of things, one of the essential aspects of the human condition.

Recognition of expertise is not a form of blind faith, nor does it prevent you from understanding a given field as best you can, but it requires a recognition of one’s own limitations which is itself an expression of rationality.

I haven’t read all the posts, but bender’s concern about parametrizations seems to be a valid question, as is well known in system identification, but Gavin wrote a good answer and it seems like the parametrizations are done in a good way.

Having blundered part way through the article that Gavin suggested, what I have noticed is the extreme level of specification which exists in the climate model.

Every detail has to be justified, particularly when it deviates from earlier models. And at every point, the climatologists seek to ground whatever mathematical assumptions they might make in that which has been empirically demonstrated. I even saw mention of experiments where humidity determined the size at which dust clustered – and how this further affects what is called the “imaginary index of refraction” at different wavelengths – which no doubt plays an important role in determining the associated albedo and greenhouse effect. In some ways it reminded me of all the math and physics which goes into creating realistic computer-generated images, but it is far more sophisticated, dealing with a far larger variety of phenomena than the effects of angle, distance, surface properties and atmosphere which must be understood in order to recreate how things appear.

Sometimes the formulae used will be approximations in one form or another for the sake of simplifying calculations, but they will be good approximations, not something which was simply thrown in there to get the results one wants. For their models to be accepted and used by other climatologists, they have to be rationally defensible at each and every step. They have to meet high standards. Moreover, the results of such models must be replicable – which in this case means that the models, it would appear, have to be made available to other researchers – including even the source code.

From what I can see, there is a great deal of art to this in the sense of ingenuity, but there is very little latitude for fudging either the model or the results – and I strongly doubt that it would be looked upon kindly by the rest of those within the discipline.

Ew. I hope you’re wrong about this guy being connected to that. Deltoid’s debunking most of those stories.

I hope so, too.

I still remember a time when I looked up to one of his colleagues – somebody who I thought had the desire and the ability to change the Objectivist movement. He changed. After witnessing what happened to the main group and three splinter groups – well, that is when I decided that I am very distrustful of ideologies as such.

Incidently, I never was much into the politics – and my understanding of the ethics became much more dialectical. But everything took a backseat to epistemology and my attempt to understand it within the context of twentieth century developments.

In any case, it was probably a bad idea to post that – since I was just guessing. But at the same time, if it was that Michael, I would want you guys to know who you are dealing with. They are fairly intelligent, and rhetoric (in a variety of senses) is a large part of what they do.

You see what I mean about presumptuousness? Here you are, accusing me of practising ideology when what I am actually practising is actuarial science. i.e. How much should I charge as an insurance premium so that I maximize my profit while maintaining a monopoly on the business of the alarmed? Nothing ideological about it. I can’t maximize my profit unless I know how much cost I can expect to incur, and with what degree of uncertainty. Just as you can’t propose a rational CO2 mitigation scheme without knowing the expected cost-benefit ratio. Throw your philosophy texts in the recycling bin and get into some economics. But be sure to start with Lorenz and Smale. (Hint: it’s not an insurance firm.)

Yes, I know, I have wounded you. I should have been more trusting, but I am suspicious by nature, superstitious and paranoid. It is probably biochemical. If you still feel like giving me a piece of your mind, my email is:

timothychase AT gmail.com

But don’t worry, I am not expecting anything, I won’t be especially hurt if you don’t write, and I don’t check my email that often anyway, so I probably won’t even notice. In the meantime, I might try finding someone to talk with who values my opinion above lint.

Re 276: Hi Alistair, What specifically do you want to know? Basically, increased temperature means increased motion of the molecules, so the main thing that happens is the absorption lines get broadened. The relevant IR lines for CO2 have to do with vibrational states, so they ought to couple pretty efficiently to the kinetic degrees of freedom. Any dependence, though, is going to be pretty much negligible for small temperature rises – remember it’s Kelvin that is the appropriate scale, not Centigrade.
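To put a rough number on “negligible”: the thermal (Doppler) contribution to a line’s width scales as the square root of the absolute temperature, so a 3 K warming changes it by only about half a percent. (This is a back-of-envelope sketch of the Doppler term only; pressure broadening, which dominates low in the atmosphere, is not treated here, and the choice of band centre is just for illustration.)

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
C = 2.99792458e8                   # speed of light, m/s
M_CO2 = 44.0 * 1.66053906660e-27   # CO2 molecular mass, kg

def doppler_fwhm(nu0_hz, temp_k):
    """Doppler (thermal) full width at half maximum of a spectral line."""
    return nu0_hz * math.sqrt(8 * math.log(2) * K_B * temp_k / (M_CO2 * C**2))

# Centre of the CO2 15-micron bending-mode band.
nu0 = C / 15e-6

w288 = doppler_fwhm(nu0, 288.0)
w291 = doppler_fwhm(nu0, 291.0)    # a 3 K warming
print(round(100 * (w291 / w288 - 1), 2))   # percent change in line width
```

The ratio of the two widths is just sqrt(291/288), about 1.005, regardless of which line one picks, which is why the Kelvin scale is the relevant one.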

Re 288: As long as the matter is in equilibrium with the radiation field, I believe Planck’s law is the appropriate one to use – and I believe that that is the case for the frequency of the vibrational transition.
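For concreteness, Planck’s law gives the spectral radiance B(nu, T) = 2 h nu^3 / c^2 / (exp(h nu / k T) - 1). A quick evaluation at the 15-micron CO2 band (the band and the two temperatures are illustrative choices only):

```python
import math

H = 6.62607015e-34      # Planck constant, J s
K_B = 1.380649e-23      # Boltzmann constant, J/K
C = 2.99792458e8        # speed of light, m/s

def planck(nu_hz, temp_k):
    """Planck spectral radiance B(nu, T) in W m^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (K_B * temp_k)
    return 2 * H * nu_hz**3 / C**2 / math.expm1(x)

nu = C / 15e-6          # frequency of the 15-micron CO2 band
# Radiance at the band centre for two near-surface temperatures.
print(planck(nu, 288.0), planck(nu, 291.0))
```

Note the use of `math.expm1` for exp(x) - 1, which stays accurate when h nu / k T is small; at 15 microns and near-surface temperatures the exponent is about 3.3, so either form would do.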

Ray, I assume that when you wrote “As long as the matter is in equilibrium with the radiation field, …” you meant that molecular collisions were causing equal amounts of excitation and relaxation.

Although that will be true in the mid atmosphere, do you agree that is not the case near the surface of the Earth where the greenhouse molecules are being excited by blackbody radiation from the Earth’s surface, but are being relaxed by collisions with other air molecules such as N2 & O2?

I find it difficult to see how the greenhouse gases, which create lines and bands in the blackbody spectrum of the Earth’s radiation field, can also be radiating with an intensity determined by Planck’s function.

Is there some reason that the curve of observed GAT is not continued past 2003? That’s 4 years ago.

Don’t know.

But 2005 beat out 1998 as the warmest year from 1890 to 2005. And while 1998 had the benefit of an unusually strong El Nino, 2005 did not. Moreover, 2005 saw the lowest solar activity since the mid-1980s. 2005 was pretty unusual compared with the twentieth century – but part of a trend for the past five, ten, fifteen and twenty years.

The general direction was up – just like for the whole of the twentieth century.

In a discussion of Hansen’s latest paper over at CS, Willis E. made the following comment regarding the veracity of current climate models:

“Then we should perform Validation and Verification (V&V) and Software Quality Assurance (SQA) on the models. This has not been done. As a part of this, we should do error propagation analysis, which has not been done. Each model should provide a complete list of all of the parameters used in the model. This has not been done. We should make sure that the approximations converge, and if there are non-physical changes to make them converge (such as the incorrectly high viscosity of the air in current models), the effect of the non-physical changes should be thoroughly investigated and spelled out. This also has not been done.”

I asked him for references showing that the above have not been done. His reply was that he has no source, but that he can find no reference to any of the above having been done.

Does anybody know how true the above is? If it is in any part not true, are there any supporting references?
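One of the checks on Willis’s list – that “the approximations converge” – can at least be illustrated in miniature (this is a toy integrator, not anything taken from a GCM): for a first-order scheme such as forward Euler, halving the step size should roughly halve the error, and a simple convergence test verifies exactly that.

```python
import math

def euler_decay(h, t_end=1.0):
    """Integrate dy/dt = -y, y(0) = 1, with forward Euler and step h."""
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * (-y)
    return y

exact = math.exp(-1.0)
errors = [abs(euler_decay(h) - exact) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print([round(r, 2) for r in ratios])   # near 2.0 for a first-order scheme
```

If the ratios came out far from 2, the implementation (or the claimed order of accuracy) would be suspect; this kind of order-of-convergence check is a standard ingredient of the V&V practice the quoted comment is asking about.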