More on Functional Forms: Wigley 1987

Over the last week or so, I’ve reported on my efforts to locate the provenance of the functional forms for the relationship between levels of CO2 and other greenhouse gases and temperature. Lubo has also chipped in on the topic from a different perspective proposing a derivation of a log formula from first principles.

The other leg of their argument was Wigley 1987, published in Climate Monitor, a CRU house organ where Wigley was then employed. I doubt whether this was severely “peer reviewed”. However, the CRU authors are leaders in their field and I see no reason to disrespect Wigley 1987 merely because it appeared in a house organ. It has not been easy to locate, though: the University of Toronto did not have a copy, and Wigley himself said that he did not have a copy. Fortunately, a CA reader has located one and kindly emailed me a scanned version, enabling this source to be tracked down a bit further.

Once more there’s rather a dead end. Wigley 1987 simply stated his results, rather than deriving them, as shown below. Wigley also had some interesting comments about GCM performance in this article, which I’ve also excerpted at length below.

The Logarithmic Formula

Wigley simply states the results without deriving them:

On theoretical grounds it can be shown that the relationship between radiative forcing change at the top of the troposphere and concentration change is linear at low concentrations, square root at intermediate values and logarithmic at higher concentrations. Because of this, the results of detailed radiative transfer calculations for the various trace gases give a linear concentration dependence for CFCs, square root for CH4 and N2O and logarithmic for CO2.

For CO2 and CH4, I have used results from the Kiehl and Dickinson 1987 model, supplied by Jeff Kiehl.

For CO2 over the range 250 ppmv to 600 ppmv, the Kiehl-Dickinson model gives a change in radiative forcing ΔQ, resulting from a concentration change from C_0 to C which can be described by:

CO2: ΔQ = 6.333 ln(C/C_0)    (14)

to within 0.01 wm-2. Note that the precision of this fit should not be confused with the accuracy of the implied ΔQ values. The equation is probably accurate to about ±10% with similar accuracy for the results for other trace gases given below. Equation (14) implies a ΔQ value of 4.39 wm-2 for a doubling of CO2 concentration.
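Equation (14) is simple enough to check numerically; a one-line Python function reproduces Wigley's 4.39 wm-2 figure for a doubling (only the coefficient 6.333 is taken from the excerpt above):

```python
import math

def delta_q_co2(c, c0):
    """Radiative forcing change (wm-2) from Wigley's equation (14),
    the Kiehl-Dickinson fit for CO2 over 250-600 ppmv."""
    return 6.333 * math.log(c / c0)

# A doubling of concentration, e.g. 300 -> 600 ppmv: the ratio is all
# that matters in a logarithmic form.
dq_doubling = delta_q_co2(600.0, 300.0)
print(f"Forcing for 2xCO2: {dq_doubling:.2f} wm-2")  # 4.39
```

Since only the ratio C/C_0 enters, any doubling gives the same 4.39 wm-2, which is the point of a logarithmic form.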

Lubo also believes that the relationship is logarithmic and this idea is plausible. This may very well be, but I would be surprised if Wigley had precisely the same proof in mind. Hans Erren has written in saying that a logarithmic form was stated by Arrhenius but Arrhenius’ results were not derived “on theoretical grounds” within the terms of Wigley’s above assertion. I suspect that there is some folk history to the linear-square root-logarithm rule within the climate community of the 1980s – Ramanathan probably has something on it, but this is a dead end in terms of tracking IPCC references. There is more to be said on Lacis et al 1981 and Myhre et al 1998 methods, which I will get to.

Wigley on GCMS
Wigley 1987 contained an interesting discussion on a topic that continues to this day: the divergence between the warming predicted by GCMs and the historical record. Wigley:

The accepted range of equilibrium warming due to a doubling of CO2 concentration (or its radiative equivalent) is 1.5-4.5 deg C. Most recent GCM studies have given values of 4 deg C or more. A 4 deg C warming for doubled CO2 corresponds to a climate sensitivity of about 1 deg C/wm-2, i.e. an equilibrium warming of about 1.7 deg C for the 1880-1985 radiative forcing of 1.7 wm-2. This is very much larger than the observed global mean surface air temperature change. This discrepancy, which is partly accounted for by oceanic lag effects, has been noted earlier by other authors e.g. Gilliland and Schneider 1984; Wigley and Schlesinger 1985. It has a number of possible explanations: the magnitude of global warming may have been considerably underestimated; the damping or lag effect of the oceans may be much greater than is currently believed; large additional forcings may be operating on the century time scale; and/or the most recent GCM studies may have overestimated the climate sensitivity.
Uncertainties in the observational temperature record are discussed by Wigley et al 1985 and Jones et al 1986. Current opinion is that, if anything, the amount of warming has been overestimated, an opinion not shared by myself and my colleagues. I will not consider this option further, but, instead concentrate on the other three possibilities.

Wigley then goes on to consider oceanic lag as an explanation for the non-response, concluding against this on the basis that the “only way that one could obtain the observed warming would be for vertical ocean mixing to be much greater than could be obtained with a pure diffusion or upwelling-diffusion ocean model”.

He then discusses the possibility of an overlooked forcing, noting that one would have to reduce the 1880-1985 radiative forcing by 0.7 wm-2 or more, given GCM sensitivity. He canvassed the possibility of a decline in solar over the 20th century as being an explanation (something that all parties would now seem to agree is exactly opposite to what was going on):

This would occur if some other external forcing agent were operating on the century time scale. The obvious possibilities are solar irradiance changes and/or long time scale changes in the volcanogenic aerosol loading of the stratosphere. …Solar variations of this magnitude cannot be ruled out. A decline of 0.7 wm-2 would correspond to a 0.3% reduction in solar output, well within the uncertainty in historical (pre-satellite) measurements of the solar constant. Recent satellite data show a decline of about 0.1% in irradiance over the 1979-85 period (Kyle et al; Willson et al 1986) attesting to the feasibility of a 0.3% decline over the past century or so.

He dismissed the potential forcing from volcanic aerosols as being anything other than transient. His other suggestion was planetary albedo:

Another possibility is a long-term increase in planetary albedo. Since the incoming radiation is about 240 wm-2, an increase of only 0.002 in the planetary albedo i.e. about 0.7% would be sufficient to reduce the net radiation balance by 0.7 wm-2.
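The albedo arithmetic can be checked, though it takes one assumption not stated in the excerpt: the 0.7 wm-2 works out if the 0.002 albedo change is applied to the full top-of-atmosphere insolation of roughly 342 wm-2, rather than to the 240 wm-2 absorbed flux that Wigley quotes. A sketch:

```python
# Back-of-envelope check of Wigley's albedo figures. The insolation and
# present-day albedo values below are my assumptions, not from the text.
S4 = 342.0          # mean top-of-atmosphere insolation, wm-2 (assumed)
albedo = 0.30       # present planetary albedo (assumed)
d_albedo = 0.002    # Wigley's postulated increase

d_forcing = S4 * d_albedo               # reduction in absorbed shortwave
pct_change = 100.0 * d_albedo / albedo  # fractional change in albedo itself

print(f"Forcing reduction: {d_forcing:.2f} wm-2")  # ~0.68, i.e. ~0.7
print(f"Albedo change: {pct_change:.1f}%")         # ~0.7%
```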

Notably and surprisingly missing from this inventory were manmade aerosols. I think that Charlson (Hansen) et al 1990 was seminal in putting these into the mix as an explanation for the divergence. As I noted in my comments on Ramanathan’s AGU presentation, while the “discovery” of manmade aerosols seems to be somewhat opportunistic, the aerosols themselves are real enough and the effect has to be considered in a historical context. (Of course, opportunism can creep in, as one notes by the inverse relationship between GCM sensitivity and aerosol history, so there’s a lot of softness in this topic.)

Wigley’s third alternative is that GCMs are too sensitive:

A third possibility is that the climate sensitivity of about 1 deg C/wm-2 implied by recent GCMs is too high. If one accounts for the ocean damping effect using either a PD or UD model, and, if one assumes that greenhouse gas forcing is dominant on the century time scale, then the climate sensitivity required to match model predictions is only about 0.4 deg C/wm-2. This corresponds to a temperature change of less than 2 deg C for a CO2 doubling. Is it possible that GCMs are this far out? The answer to this question must be yes. Feedbacks involving sea-ice and cloud variations are still relatively poorly handled by all climate models and the feedback due to changes in cloud optical properties (e.g. Somerville and Remer 1984) has not been included in any GCM studies. This latter factor alone could possibly reduce the climate sensitivity by a factor of two.
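Wigley's arithmetic here is easy to reproduce by combining the 0.4 deg C/wm-2 figure with the doubling forcing from his equation (14):

```python
import math

# Wigley's low-sensitivity scenario: 0.4 deg C per wm-2, combined with
# the ~4.39 wm-2 doubling forcing from equation (14).
sensitivity = 0.4                 # deg C / (wm-2)
dq_2x = 6.333 * math.log(2.0)     # forcing for a CO2 doubling, ~4.39 wm-2

dt_2x = sensitivity * dq_2x
print(f"Warming for 2xCO2: {dt_2x:.2f} deg C")  # 1.76, i.e. "less than 2 deg C"

# For comparison, the GCM figure of ~1 deg C/wm-2 gives ~4.4 deg C,
# matching the "4 deg C or more" cited earlier in the article.
print(f"GCM-sensitivity warming: {1.0 * dq_2x:.2f} deg C")
```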

The model uncertainties described in (1) [oceanic lag] and (3) [sensitivity] are of course well known. Their existence is the reason that, in spite of recent GCM results, the equilibrium temperature change due to a CO2 doubling is still generally given as lying in the range 1.5-4.5 deg C. The lower limit is entirely compatible with observations.

It’s interesting to see once again the references to cloud feedback as a major uncertainty and the possibility that a particular cloud feedback could reduce climate sensitivity. I wonder how IPCC represented the uncertainties, as stated here by Wigley, in their contemporary reports. I’ll look at that some time.

Steve, for once the internal Google search found that post, but this is my first success in my last 3 or 4 tries. To my mind, this is the most urgent software issue facing this blog. Without a reliable search function, what use is an archive?

Cheers — Pete Tillman
Steve: I have a reliable search function available to me in editor mode. If you can give me instructions on how to make it available to others in a reader mode, I’m happy to do it.

Grant Petty in A First Course in Atmospheric Radiation (Second Edition) explains why there is linear, square root and logarithmic (exponential) behavior for different gases at different concentrations. It has to do with line shape, number of lines and overlap. For any line shape at very low optical density the behavior is linear. For an isolated Lorentzian line at high optical density, the limiting behavior is square root. If the line width and number of lines is so high that they severely overlap over a broad wavelength range, Beer’s Law applies and transmission decays exponentially with concentration or stated another way, absorption increases logarithmically.
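Petty's limiting behaviours for an isolated Lorentzian line can be verified numerically. The sketch below is my own illustration, not Petty's: it integrates the equivalent width of the line directly and checks that it grows linearly with optical depth in the weak-line limit and as the square root in the strong-line limit.

```python
import math

def equiv_width(tau0, gamma=1.0, half_range=5000.0, step=0.05):
    """Equivalent width of an isolated Lorentzian line with peak optical
    depth tau0: the integral of 1 - exp(-tau0*gamma^2/(nu^2 + gamma^2))
    over frequency offset nu, evaluated by the midpoint rule."""
    n = int(half_range / step)
    w = 0.0
    for i in range(-n, n):
        nu = (i + 0.5) * step
        tau = tau0 * gamma ** 2 / (nu ** 2 + gamma ** 2)
        w += (1.0 - math.exp(-tau)) * step
    return w

# Weak-line (optically thin) limit: doubling tau0 doubles W -> linear.
r_weak = equiv_width(0.002) / equiv_width(0.001)
# Strong-line limit: quadrupling tau0 only doubles W -> square root.
r_strong = equiv_width(4.0e4) / equiv_width(1.0e4)
print(round(r_weak, 2), round(r_strong, 2))  # both close to 2
```

The third regime, Beer's Law exponential decay of transmission when many lines overlap, is the simple `exp(-a*c)` case and needs no integration.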

You can find this stuff in textbooks, but it’s so basic you are unlikely to find it in the primary literature unless you dig down a very long way. If there ever is an explication of the 2.5 C/doubling CO2 climate sensitivity, it will probably first appear in a textbook. But that won’t happen until the level of understanding is much higher than it is now.

Perhaps we are finally getting down to a definitive explanation. As a practicing chemist many years ago I was familiar with the practical applications of Beer’s Law in determining concentrations of absorbing species in solutions, but even that straight line relationship could “bend down” at higher concentrations and as I recall become almost flat. I had never given much thought to what the relationships would be with gases at various concentrations, but after a lapse in adding to my knowledge base perhaps it is resuming.

Are the ranges at which linear, square root and exponential relationships apply to optical transmission of an absorbing gas all that distinct?

Are the ranges at which linear, square root and exponential relationships apply to optical transmission of an absorbing gas all that distinct?

I don’t think so.

The deviation from Beer’s Law at high absorbance in a real spectrophotometer, at least in the UV/Vis range, is generally considered to be due to stray light intensity becoming significant compared to light transmitted through the cell rather than some fundamental problem with the theory. Unless you have a really good spectrophotometer, absorbances over about 2 will likely not be in agreement with Beer’s Law. You might be able to go to 3 (99.9% absorption) under the best conditions.
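The stray-light cap described here is easy to model. A minimal sketch, assuming a constant stray-light fraction `s` of the incident intensity reaching the detector, so that the measured transmittance becomes (T + s)/(1 + s):

```python
import math

def apparent_absorbance(a_true, stray=1.0e-3):
    """Apparent absorbance when a fraction `stray` of the incident
    intensity reaches the detector without passing through the sample."""
    t_true = 10.0 ** (-a_true)           # Beer's Law transmittance
    t_meas = (t_true + stray) / (1.0 + stray)
    return -math.log10(t_meas)

for a in (1.0, 2.0, 3.0, 6.0):
    print(a, round(apparent_absorbance(a), 3))
# With 0.1% stray light the reading tracks Beer's Law up to A ~ 2, then
# saturates near -log10(stray) ~ 3, whatever the true absorbance is.
```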

RE: Anthropogenic aerosols. The conventional wisdom has typically been very Euro- and Amero-centric, namely, that since pollution controls ramped up hard during the 1970s and 1980s, anthro aerosols are a diminishing issue. Of course, by now, the Asian Brown Cloud is common knowledge. One hedging tactic seen is to mention the short residence time of the subject particles. But that is a red herring. The emission rate is such that there is a continual plume covering a good portion of Asia, the North Pacific and parts of the Arctic. Yes, individual particles’ residence times are much shorter than GHGs’, but the plume made up of the train of such particles is persistent and growing.

If the line width and number of lines is so high that they severely overlap over a broad wavelength range, Beers Law applies and transmission decays exponentially with concentration or stated another way, absorption increases logarithmically.

If transmission decays exponentially (E_total*exp(-a*c)), absorption increases as E_total*(1-exp(-a*c)). They aren’t inverse functions. The total energy is equal to the sum of the energy transmitted and the energy absorbed.
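The distinction can be made concrete in a couple of lines: transmitted and absorbed energy are complements that sum to the incident energy, not inverses of one another (the coefficient and concentration below are arbitrary illustration values):

```python
import math

E_total = 1.0     # normalised incident energy
a, c = 0.5, 3.0   # arbitrary absorption coefficient and concentration

transmitted = E_total * math.exp(-a * c)
absorbed = E_total * (1.0 - math.exp(-a * c))

# The two always sum to the incident energy; "absorption increases
# logarithmically" is loose shorthand for the absorbed fraction
# saturating toward 1 as concentration grows.
print(abs(transmitted + absorbed - E_total) < 1e-12)  # True
```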

The grass roots derivation of Beer’s Law is purely logarithmic. In lab use it is confined to low optical densities, usually well below 1, because noise becomes too large in the signal. Great care has been taken with spectrometers to get the relationship between reference and test specimens correct. These conditions simply do not apply in the atmosphere. There are complications, for example, from particulate scattering (several variations), from high optical densities and from temperature and pressure and co-compositional changes in the optical path, which might not be parallel with the radiative path in any case.

A relationship that fits acceptably over a certain concentration range should not be extrapolated beyond that range and turned into a general rule.

I have not read the whole article now cited, but the abstract gives an indication of some of the complexity. Given that some of the variables are as yet unable to be modelled or properly measured, great care should be taken in arriving at conclusions. Reference follows – I’d go broke if I had to pay the opening fee of every article of interest.

L Wind and W W Szymanski
Institute of Experimental Physics, Aerosol Laboratory, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria
Abstract. We present a modelled approach of scattering contribution to the radiation transmission through a scattering medium, such as an aerosol, yielding a correction term to the Lambert-Beer law. The correction is essential because a certain amount of the forward scattered light flux is always overlaid on the transmitted radiation. Hence it enters together with the attenuated beam into the finite aperture of any detector system and therefore constitutes a potential problem in the inversion of measured data. This correction depends not only on the geometry of the measuring system but also substantially on the optical depth of the medium. We discuss the numerical analysis of the magnitude and functional behaviour of the scattering correction for a number of important measuring parameters and we give a simple approximation for the determination of the range of applicability of the scattering correction for single scattering conditions. Finally, we show that the derived expressions yield useful values of optical depths at which non-negligible multiple scattering effects occur.

I may be wrong but I think the derivation of the logarithmic relation for CO2 is similar to the ‘curve of growth’ concept in astronomy – an example derivation can be found here. As you increase the column depth of an absorber, the increase in equivalent width of the absorption line is directly proportional at small column depths, proportional to the logarithm at intermediate column depths, and proportional to the square root at high column depths.

In stellar atmospheres, the relation between line strength change and concentration (abundance) is linear for small concentrations, almost constant for intermediate concentrations and square-root for high concentration, see for example http://www.physics.uq.edu.au/people/ross/phys2080/spec/photo.htm
Is not line strength proportional to radiative forcing? Or do perhaps other physical laws apply for the atmosphere of the Earth and atmospheres of other objects in the universe?
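The curve-of-growth behaviour described in these two comments can be reproduced for a pure Doppler (Gaussian) line. The sketch below is my own, not from the linked derivations; the square-root branch at high abundance requires Lorentzian damping wings, which this toy model omits, so only the linear and "almost constant" (saturated) parts appear:

```python
import math

def equiv_width_doppler(tau0, half_range=10.0, step=0.001):
    """Equivalent width of a pure Doppler (Gaussian) line with central
    optical depth tau0: the integral of 1 - exp(-tau0 * exp(-x**2))
    over x (frequency offset in Doppler widths), by the midpoint rule."""
    n = int(half_range / step)
    w = 0.0
    for i in range(-n, n):
        x = (i + 0.5) * step
        w += (1.0 - math.exp(-tau0 * math.exp(-x * x))) * step
    return w

# Linear part of the curve of growth: W doubles with the abundance.
r_linear = equiv_width_doppler(0.02) / equiv_width_doppler(0.01)
print(round(r_linear, 2))  # ~2.0
# Saturated ("flat") part: a tenfold abundance increase barely moves W,
# growing only roughly as the square root of the log of abundance.
r_flat = equiv_width_doppler(100.0) / equiv_width_doppler(10.0)
print(round(r_flat, 2))
```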

Further on SteveSadlov’s contribution: Are the Sahara dust aerosols which drift across the Atlantic and partly end up in the US considered anthropogenic or natural? A case could be made for either, depending on the time scale. Apparently there is some evidence that they have existed since long before satellite photography, but the loss of ground cover in the Sahara has some anthropogenic roots in the long view. Ah, those pesky aerosols.

The correction involved appears to be because the measurement system is imperfect (finite angle of acceptance), not a problem with the theory itself. It’s similar to how stray light in a spectrophotometer limits the range of linear behavior of absorbance with concentration. Exponential extinction of transmitted radiation should continue for many more orders of magnitude than it is possible to measure in the lab.

Spencer seems to have found the source of an artificial positive bias in diagnosing cloud feedback which, if corrected, can reduce climate sensitivity to a mere 0.8 deg K. If Spencer is right (no one has seriously disputed his conclusions thus far; he even presented his results at Colorado University with 40 climatologists attending, and none objected) then all IPCC science is completely wrong, and your demand that AGW supporters provide an engineering-quality study of 2XCO2 leading to 3 deg of warming is unnecessary. They will be shown to be wrong, not merely to be stating uncertain propositions without engineering quality.

One concern I have with Spencer’s recent study is that when he uses a 7-day moving average there are no linear striations apparent in the data. When he uses a 91-day moving average then the striations appear. But what would happen if he used a 60-day moving average? Or a 120-day moving average?

Could these striations somehow be an artifact of the data being auto-correlated and the time scale of the moving average?
I have no idea if this is or could be the case but it would be nice to rule this out.
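The artifact question is straightforward to probe with synthetic data. The toy sketch below (my own; it has nothing to do with Spencer's actual data) applies several windows to an autocorrelated AR(1) series; scatter-plotting the smoothed series for different windows would show whether striation-like structure depends on the averaging choice:

```python
import random

def moving_average(xs, window):
    """Centred running mean, shortened at the series ends."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        seg = xs[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def ar1(n, phi=0.9, seed=42):
    """Simple AR(1) series, a stand-in for autocorrelated daily data."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(n - 1):
        xs.append(phi * xs[-1] + rng.gauss(0.0, 1.0))
    return xs

series = ar1(2000)
var_by_window = {}
for window in (7, 61, 91, 121):
    smooth = moving_average(series, window)
    m = sum(smooth) / len(smooth)
    var_by_window[window] = sum((v - m) ** 2 for v in smooth) / len(smooth)
    print(window, round(var_by_window[window], 3))
# Wider windows progressively damp the variance of purely stochastic
# autocorrelated data, so structure that appears only at one window
# length deserves suspicion.
```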