AR4: "ad hoc tuning of radiative parameters"

Chapter 1 of AR4 contains some surprisingly interesting comments about models, points that, to the extent they are disclosed in the body chapters at all, are disclosed so opaquely that they would be undecipherable to anyone other than a few. Here are some interesting comments about flux adjustment, an issue that must surely raise civilian eyebrows. A “flux adjustment” in a GCM is defined below as an “empirical correction that could not be justified on physical principles”, i.e. a fudge factor, and one of the claimed accomplishments of recent GCMs is to have apparently gotten past that. AR4:

The strong emphasis placed on the realism of the simulated base state provided a rationale for introducing flux adjustments or flux corrections (Manabe and Stouffer, 1988; Sausen et al., 1988) in early simulations. These were essentially empirical corrections that could not be justified on physical principles, and that consisted of arbitrary additions of surface fluxes of heat and salinity in order to prevent the drift of the simulated climate away from a realistic state. The National Center for Atmospheric Research model may have been the first to realise non-flux-corrected coupled simulations systematically, and it was able to achieve simulations of climate change into the 21st century, in spite of a persistent drift that still affected many of its early simulations. Both the FAR and the SAR pointed out the apparent need for flux adjustments as a problematic feature of climate modelling (Cubasch et al., 1990; Gates et al., 1996).

By the time of the TAR, however, the situation had evolved, and about half the coupled GCMs assessed in the TAR did not employ flux adjustments. That report noted that some non-flux adjusted models are now able to maintain stable climatologies of comparable quality to flux-adjusted models (McAvaney et al., 2001). Since that time, evolution away from flux correction (or flux adjustment) has continued at some modelling centres, although a number of state-of-the-art models continue to rely on it.

This raises an obvious question: which “state-of-the-art models” continue to rely on flux adjustments? One of the annoying aspects of IPCC WG1 reports is their refusal to make such identifications, which might put one of the group in hot water with his funders, I suppose. I’d like to know which models make flux adjustments so that I can keep an eye out when the “ensemble” results are reported.
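To make the “fudge factor” concrete, here is a toy sketch (plain Python, invented numbers, and obviously not taken from any actual GCM code) of what an explicit flux adjustment amounts to: a constant extra surface flux, diagnosed from a control run, that is added back in every subsequent run simply to stop the model drifting away from a realistic state.

```python
# Toy illustration of an explicit "flux adjustment" (not from any real GCM):
# a slab-ocean temperature equation driven by a biased atmospheric heat flux.
# The adjustment is diagnosed in a control run as whatever constant flux is
# needed to stop the drift, then added back in every subsequent run.

RHO_CP_H = 4.1e8   # heat capacity of a 100 m mixed layer, J m-2 K-1 (assumed)
LAMBDA = 1.5       # net feedback parameter, W m-2 K-1 (assumed)
T_OBS = 288.0      # "observed" equilibrium temperature, K

def atmos_flux(T):
    """Biased atmospheric heat flux into the ocean (W m-2)."""
    bias = 3.0                      # model error we cannot remove physically
    return -LAMBDA * (T - T_OBS) + bias

def run(years, T0=T_OBS, flux_adjustment=0.0):
    dt = 365 * 86400.0
    T = T0
    for _ in range(years):
        # The flux adjustment is an arbitrary extra surface flux, not physics.
        T += dt * (atmos_flux(T) + flux_adjustment) / RHO_CP_H
    return T

# The control run drifts warm because of the 3 W m-2 bias...
drifted = run(200)
# ...so the "adjustment" is diagnosed as whatever cancels the residual flux:
adjustment = -atmos_flux(T_OBS)          # = -3 W m-2 here
corrected = run(200, flux_adjustment=adjustment)
print(drifted, corrected)                # the corrected run stays near 288 K
```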

They go on to make the following interesting comment that I’ve not seen in print elsewhere:

(1.5.3) The design of the coupled model simulations is also strongly linked with the methods chosen for model initialisation. In flux adjusted models, the initial ocean state is necessarily the result of preliminary and typically thousand-year-long simulations to bring the ocean model into equilibrium. Non-flux-adjusted models often employ a simpler procedure based on ocean observations, such as those compiled by Levitus et al. (1994), although some spin-up phase is even then necessary. One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).

No reference is given for this powerful statement. This is exactly what Gavin Schmidt denies and yet here’s IPCC WG1 worrying about “ad hoc tuning”. Does anyone know anything more about this?
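For what it’s worth, here is an equally toy sketch of what “ad hoc tuning of radiative parameters” might look like in the simplest possible setting: a zero-dimensional energy balance with a made-up cloud-albedo knob that is turned until the top-of-atmosphere budget closes at the temperature you want. The parameter names and numbers are mine, not the IPCC’s or any modelling group’s.

```python
# Toy sketch of an "implicit flux adjustment": instead of adding an arbitrary
# surface flux, a radiative parameter (here a made-up cloud-albedo knob) is
# tuned until the model's top-of-atmosphere budget closes. Purely illustrative.

S0 = 1361.0 / 4.0          # mean incoming solar, W m-2
SIGMA = 5.67e-8            # Stefan-Boltzmann constant
EPS = 0.61                 # effective emissivity (assumed)
T_TARGET = 288.0           # temperature we want the model to sit at, K

def toa_imbalance(albedo, T=T_TARGET):
    """Net TOA flux (W m-2) for a zero-dimensional energy balance."""
    return S0 * (1.0 - albedo) - EPS * SIGMA * T**4

def tune_albedo(lo=0.25, hi=0.35, iters=40):
    """Bisect on the albedo knob until the TOA budget closes at T_TARGET."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if toa_imbalance(mid) > 0.0:   # still gaining energy -> raise albedo
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

albedo = tune_albedo()
print(albedo, toa_imbalance(albedo))   # imbalance driven to ~0 by the knob
```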

Steve M, you misused the term “begs the question.” Begging the question does not mean the same as “invites the question.” Begging the question means assuming your conclusion, i.e. answering the question in a tautological manner.

I know that most journalists and journalism professors cannot tell the difference, but a “climate auditor” needs to watch his language.
😉

Having read a little bit of GCM source code and some GCM readme files, it would not surprise me at all if most of these GCMs have a common lineage, or at least share common code. As Steve has demonstrated with numerous proxy studies, there is a limited number of fresh starts. A lot of published proxy studies use rehashed proxies, perhaps adding a few new ones, or applying a new technique.

I suspect it is the same with many GCMs. I would like to see a GCM family tree. Anyone know where I can find the ancestry?

If you put into your model that for a 1 degree C increase in temperature due to CO2 the feedback mechanisms add a further 1.5 degrees C, you have an unstable system, and you will need to add fudge factors to make your model stable.
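To put numbers on that (purely illustrative, no climate data involved): if each degree of direct warming induces g further degrees of feedback warming, the total response is the geometric series 1 + g + g² + …, which converges to 1/(1 − g) only when g < 1.

```python
# The comment above as arithmetic: a feedback gain g < 1 gives a finite total
# response 1/(1-g); a gain of 1.5 makes the partial sums blow up (runaway).

def total_warming(direct, gain, rounds=50):
    total, increment = 0.0, direct
    for _ in range(rounds):
        total += increment
        increment *= gain          # feedback acts on the previous increment
    return total

print(total_warming(1.0, 0.5))     # ~2.0 C: stable, converges to 1/(1-0.5)
print(total_warming(1.0, 1.5))     # astronomically large: runaway/unstable
```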

I ran across this web page the other day and thought it was applicable. The IPCC does have a policy for submitting model information. If you have developed a GCM, and then submit your results to the IPCC for perusal, your model must contain specific features before your outputs will be reviewed.

There is a lot of info here but some of the more interesting stuff is way down at the bottom of the page. I have not seen this link come up thus far in the recent discussions and thought it might add something to the debate.

I remember reading that there is no reliable data about the effects of aerosols, so the GCMs are required to estimate their effect. The usual technique for estimating the effect is: calculate the difference between reality and the theory and assume the difference is due to aerosols. I can’t find the link right now, but there are lots of papers on uncertainties in aerosol modelling. Given those uncertainties, it is unlikely that any GCM could have reproduced the past without tuning aerosols to produce the desired result.
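As a sketch of the procedure being described (made-up series and a made-up one-parameter aerosol term, not any group’s actual method): whatever part of the record the greenhouse-gas-only run fails to match is attributed to aerosols, and an aerosol scaling is fitted to that residual.

```python
# Hypothetical illustration of "tune aerosols to the residual": the part of
# the observed record the GHG-only simulation fails to explain is attributed
# to aerosols, and the aerosol scaling is fitted to it by least squares.

import numpy as np

years = np.arange(1900, 2001)
observed = 0.006 * (years - 1900) + 0.1 * np.sin((years - 1900) / 8.0)  # fake
ghg_only = 0.009 * (years - 1900)                                       # fake

residual = observed - ghg_only             # the difference between "reality"
                                           # and "the theory"
aerosol_shape = -0.004 * (years - 1900)    # assumed aerosol forcing pattern

# Least-squares scaling of the aerosol pattern onto the residual:
scale = np.dot(aerosol_shape, residual) / np.dot(aerosol_shape, aerosol_shape)
tuned = ghg_only + scale * aerosol_shape   # now much closer to "observed"
print(scale)
```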

For the French non-flux-adjusted model LMD/IPSL (the team of Hervé Le Treut, lead author of Chapter 1 of WG1 AR4), here is the archive of internal correspondence between the modellers.

It seems they have divergence problems of their own; for example, some quite “funny” and illustrative translations from this letter:
– “Olivier has mentioned that the problem of snow accumulation reaching several km must be resolved”
– “Flux comparisons between the top and the bottom of the atmosphere show a discrepancy of about a dozen W/m2; that’s too much”
– “Zonal means show a big cold bias (5 to 15°C) at the tropopause”
…

Thank you for the reference to figs 13 & 14. They are educational. They also alarm this scientist. I think we used a different terminological description (rather shorter), but we did not use the method.

Fairly amazing how much modelling eventually goes back to Phil Jones. We know that there are still questions to be answered about the reliability of these data, yet here they spring up in yet another paper as if they were gospel.

It would be neat if the science followed the usual process of postulation, experimental design, data gathering, evaluation, reporting, etc. I believe that this process is incomplete for global temperature data such as Jones’s, so the cart has been before the horse for a long time.

As always, using econometric terminology is probably more revealing when describing how these models are used.

Macroeconometric models struggle with the same problems, but don’t use “flux adjustments”. Instead, they simply have “errors”, i.e. Actual - Predicted = error (or residual). Then, if you want to forecast, clearly you need to “forecast the errors”. How you do that in any credible way is one of the most endearing features of macroeconometric-based forecasting.
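A minimal sketch of that “forecast the errors” step, sometimes called an add factor (fictitious numbers, and the AR(1) choice is mine for illustration only):

```python
# Sketch of an "add factor": fit the model, compute residuals, fit an AR(1)
# to the residuals, and bolt the projected residual onto the forecast.

import numpy as np

actual = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.5])     # fake series
predicted = np.array([2.0, 2.2, 2.1, 2.3, 2.4, 2.5, 2.6])  # model output

error = actual - predicted                      # Actual - Predicted = error
rho = np.dot(error[1:], error[:-1]) / np.dot(error[:-1], error[:-1])

next_structural_forecast = 2.7                  # whatever the model says
add_factor = rho * error[-1]                    # "forecast" of the next error
print(next_structural_forecast + add_factor)    # adjusted forecast
```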

My recollection from research done some time ago is that some of the regional GCMs, i.e. GCMs used for studies local to a particular part of the world, e.g. Europe and/or the UK, still use flux adjustments. I’ll double-check and get back.

One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).

As I recall, when Dr. Isaac Held made his first visit to CA and was queried about the use of fluxes in climate models, his reply indicated that flux adjustments could be, and were, made implicitly by being included in other parameterizations. In fact, my take on his reply was that the boundaries between flux use and parameterization were fuzzy. I do not have the exact post or comment, but I think I could find it if anyone feels it is important.

On the other hand, part of Held’s argument about parameterizations was that tuning was difficult to do with parameters, which he indicated were not “flexible” in changing the end results; apparently, tuning the model and using parameters are different concepts to him.

I found the Isaac Held comment I was referencing in my post above, at Post #64 on the thread Truth Machines. As I reread these comments, I think my interpretation is one of many that could be taken away from Held’s remarks. Unfortunately, Dr. Held never returned to comment on the tuning question and provide details and insights into his comments at Post #64.

I would argue, as do some of the modelers interviewed, that there is no sharp distinction between flux adjustments and tuning of other sorts. Both are attempts by pragmatists to get a model that has potential relevance, one hopes, to the problem at hand. My personal preference is for tuning, since the latter results in a model that is a testable hypothesis for how the climate system behaves, while a flux-adjusted model is not. Anyway, I don’t think there is any point in focusing on the flux adjustment issue in isolation.

I do want to comment on the tuning question. I haven’t the time right now, but I will get back to this in a day or two.

On doing some rereading of the thread, I noted that Dr. Held used the term “stiff” to describe why parameterizations are difficult to use in “tuning” a climate model.

In this thread Steve M had questioned Dr. Held about climate model results for the tropics, where measured differences between troposphere and surface warming seem to go against the model results. Dr. Held implied that the “stiffness” of the physical parameters used in the climate models in this case would indicate that the measurements are wrong. When Steve M pointed out that Emanuel used the measured differences, and not the climate model results, in his exposition of a potential intensity model for explaining hurricane intensities, Held more or less laughed it off with a quip.

So this confirms my postulate from the other day: after a sigma occurs (in this case it was said to be ten years), they will just go back and tweak (ad hoc) the forcings to make darn sure that the sigma disappears. Yet adding that kind of negative value should be readily apparent, in that it would show temperatures decreasing without man’s help. Have I got this correct?

Bender, I tried, but after almost an hour I gave up the ghost. I would think the last two postings by SM squarely answer my question; I was just asking whether I’m reading that information correctly. I’m still a bit new to the debate and haven’t read everything there is out there, but I am trying (and trying out the math, which is giving me a headache).

1. Gavin comes across as obtuse in his response.
2. His answer relied on circular logic and was thus completely nonsensical.

Let me see if I can articulate why:

1. He states that you cannot cherry-pick a specific year to start a trend analysis, but that is illogical. You have to start somewhere, and under his own criteria you’d actually start 5 years ago, not ten. Regardless of your starting point for the trend, 1998 or 1948, your slope is significantly lower if the last ten years have remained static than it would have been had temperatures actually gone up over those ten years. Period. (See the numeric sketch at the end of this comment.)

2. You can’t arbitrarily call data insignificant without defining the parameters, as he did in his response, and then use a single year (2005) in the same paragraph to prove that there was an upward trend. That is circular logic, and not just evasive but a blatant end run.

3. You can’t claim that any one year’s negative temperature trend can be explained away by looking for a significant negative forcing while not doing the same for positive trends, which is what he’s doing in his response.

In short, he gives an answer that is no answer and suggests that no matter what date you use, and no matter what the trend is over a given time frame, he reserves the right to change the parameters. The only way to circumvent his illogical argument, it seems, would be for the temperature data to drop to pre-satellite-era levels, which, while possible, isn’t likely.
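A quick numeric illustration of point 1 (synthetic series, not real temperature data): an ordinary least-squares trend fitted over 1979–2008 comes out noticeably lower when the last ten years are held flat than when the warming simply continues, whichever start year you choose.

```python
# Synthetic example: holding the last ten years flat drags down the fitted
# trend relative to a series where the warming simply continues.

import numpy as np

years = np.arange(1979, 2009)
warming = 0.02 * (years - 1979)                  # 0.2 C/decade, made up

continued = warming.copy()
flat_decade = warming.copy()
flat_decade[years >= 1999] = flat_decade[years == 1998]  # hold last 10 yr flat

for label, series in [("continued", continued), ("flat last decade", flat_decade)]:
    slope = np.polyfit(years, series, 1)[0] * 10.0       # C per decade
    print(label, round(slope, 3))
```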

RE: #32 – That sort of alludes to a global superstormish output. While Bell and Strieber are masters of junk science, I would not completely discount what I refer to as the “overshoot syndrome” – namely, a perturbation of any type, anthropogenic or otherwise, is eventually bound to cause the system to snap back down into ice mode. Someone on another blog put it as “the climate system is currently biased toward lossiness and cooling.”

AJ Abrams, indeed a sudden drop-off is unlikely. Indeed, it’s nearly statistically impossible that the Hadley Centre’s prediction that 2008 will be at least the 10th warmest year ever will be wrong. It’s actually not going to happen short of a volcanic eruption or the Sun disappearing, or some kind of super La Niña. But that only happens after a super El Niño. Next year will be warm, but what about the next decade?

Does anyone know why the “Clear Sky Anomaly” can be so easily dismissed by the modeling community? They blame the 20% difference between satellite data and radiative transport codes on aerosols. To me this seems to be a huge problem for people who are constantly saying that the “physics” underlying the models is correct.