..We first look at the RHS. We believe that the atmosphere will also increase in temperature by roughly the same amount, so there will be no change in the conductive term. The increase in the radiative term is roughly 5.5 W/m² per °C of surface warming.

The increase in the evaporative term is much more difficult to pin down, but is believed to be in the range 2-7% per °C. So the increase in the evaporative term is 1.5 to 5.5 W/m² per °C, for a total change on the RHS of 7 to 11 W/m² per °C.

Since balance is assumed, the LHS must change by the same amount. The surface sensitivity is therefore 0.095 to 0.15 °C per W/m².
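As a quick check of that arithmetic, here is the sum and reciprocal in a few lines of Python (a sketch using the round numbers above):

```python
# Change in RHS terms per degC of surface warming (W/m^2 per degC)
radiative = 5.5                  # increase in the radiative term
evap_low, evap_high = 1.5, 5.5   # evaporative increase, from the 2-7%/degC range

total_low = radiative + evap_low     # 7 W/m^2 per degC
total_high = radiative + evap_high   # 11 W/m^2 per degC

# Surface sensitivity is the reciprocal: degC per W/m^2 of surface forcing
sens_low = 1.0 / total_high
sens_high = 1.0 / total_low
print(round(sens_low, 3), round(sens_high, 3))  # 0.091 0.143
```

This matches the quoted 0.095 to 0.15 °C/(W/m²) range after rounding.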

Note that this is the sensitivity to changes in surface forcing, whatever the source. It is NOT the response to radiative forcing at the tropopause – the surface cannot respond directly to that; it can only respond to the sunlight and back-radiation actually arriving at it.

Why is it at the tropopause and not at the surface? The great Ramanathan explains (in his 1998 review paper):

..Manabe & Wetherald’s [1967] paper, which convincingly demonstrated that the CO2-induced surface warming is not solely determined by the energy balance at the surface but by the energy balance of the coupled surface-troposphere-stratosphere system.

The underlying concept of the Manabe-Wetherald model is that the surface and the troposphere are so strongly coupled by convective heat and moisture transport that the relevant forcing governing surface warming is the net radiative perturbation at the tropopause, simply known as radiative forcing.

In essence, the reason we consider the value at the tropopause is that it is the best guide to what will happen at the surface. The idea has been established for over 40 years, although to some it might sound bizarre. So we will try to make sense of it here.

Here is a schematic originating in Ramanathan’s 1981 paper, but extracted here from his 1998 review paper:

From Ramanathan (1998)

Figure 1

The first thing to pay attention to is the right hand side – 1.CO2 direct surface heating – which is shown as 1.2 W/m².

The surface forcing from a doubling of CO2 is around 1 W/m², compared with around 4 W/m² at the tropopause. The surface forcing is a lot less than the forcing at the tropopause!

Before too much joy sets in, let’s consider what these concepts represent. They are essentially idealized quantities, derived from considering the instantaneous change in concentrations of CO2.

As CO2 shows a steady increase year on year, the idea of doubling overnight is clearly not in accord with reality. However, it is a useful comparison point and helps to get many ideas straight. If instead we said, “CO2 increasing by 1% per year”, we would need to define a time period for this 1% annual increase, plus how long after the end before a new balance was restored. It wouldn’t make solving the problem any easier, and it would make the results harder to understand. By contrast, GCMs do consider a steadily rising CO2 level according to whatever scenario they are considering.
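For concreteness: at a steady 1% per year, a doubling takes about 70 years (a side calculation, not from the article):

```python
import math

# Years for CO2 to double at a compound 1% per year growth rate
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double, 1))  # ~69.7
```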

So, with the idea of an instantaneous doubling, if the surface increase in radiative forcing is less than the tropopause increase in radiative forcing, this must mean that the balance of energy is absorbed within the atmosphere. And also, we have to consider what happens as a result of the surface energy imbalance.

The numbers I use here are Ramanathan’s numbers from his 1981 paper. Later, more accurate numbers have been calculated, but they don’t affect the main points of this analysis. The reason for reviewing his analysis is that some (but not all) of the inherent responses of the climate system are explicitly calculated – making it easier to understand than the output of GCMs.

Immediate Response

The immediate result of this doubling of CO2 is a reduced emission of radiation (OLR = outgoing longwave radiation) from the climate system into space. See the Atmospheric Radiation and the “Greenhouse” Effect series for detailed explanations of why.

At the tropopause the OLR reduces by 3.1 W/m², and downward emission from the stratosphere into the troposphere increases by 1.2 W/m².

This results in a net forcing at the tropopause of 4.3 W/m². Most of the extra radiation from the atmosphere to the surface (as a result of more CO2) is absorbed by water vapor. So at the surface the DLR (downward longwave radiation) increases by only 1.2 W/m² – this is the (immediate) surface forcing. Here is a simple graphical explanation of why the OLR decreases and the DLR increases:
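The bookkeeping of these immediate numbers, as a sketch:

```python
# Immediate response to an instantaneous 2xCO2 (Ramanathan's numbers, W/m^2)
olr_reduction = 3.1        # less OLR at the tropopause
strat_down_increase = 1.2  # more downward emission from the stratosphere

tropopause_forcing = olr_reduction + strat_down_increase  # 4.3 W/m^2
surface_forcing = 1.2      # immediate increase in DLR at the surface

# The difference is deposited within the troposphere itself
absorbed_in_troposphere = tropopause_forcing - surface_forcing
print(round(tropopause_forcing, 1), round(absorbed_in_troposphere, 1))
```

After the stratospheric adjustment described below, the tropopause value drops slightly to 4.2 W/m², leaving the ~3 W/m² that heats the troposphere.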

Figure 2

Response After a Few Months

The stratosphere cools and reaches a new radiative equilibrium. This reduces the downward emission from the stratosphere by a small amount. The new value of radiative forcing at the tropopause = 4.2 W/m².

Response After Many Decades

The surface-troposphere warms until a new equilibrium is reached – the radiative forcing at the tropopause has returned to zero.

The Surface

So let’s now consider the surface. Take a look at Figure 1 again. The values/ranges we will consider are calculated by a model. This doesn’t mean they are correct. It means that applying well-understood processes in a simplistic way gives us a “first order” result. The reason for assessing this kind of approach is that our mental models are usually less accurate than a calculated result which draws on well-understood physics.

As Ramanathan says in his 1998 paper:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

Process 1 is as already described – the surface forcing increases by just over 1 W/m². But the balance of 3 W/m² goes into heating the troposphere.

Process 2 – The warming of the troposphere results in increased downward radiation to the surface (because the hotter the body, the higher the radiation emitted). The calculated value is an additional 2.3 W/m², so the surface imbalance is now 3.5 W/m² and the surface temperature must increase in response. Upward surface radiation and/or sensible and latent heat will increase to balance.

Process 3 – The surface emission of radiation increases at around 5.5 W/m² for every 1°C of surface temperature increase. But this is almost balanced by increased downward radiation from the atmosphere (“back radiation”). The net effect is only about 10% of the change in upward radiation. So latent heat and sensible heat increase to restore the energy balance, but this also heats the troposphere.
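The ~5.5 W/m² per °C figure is just the derivative of the Stefan-Boltzmann law evaluated at a typical surface temperature (288 K here is an assumed global-mean value, with unit emissivity assumed):

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
T_SURFACE = 288.0  # assumed global-mean surface temperature, K

# E = sigma * T^4, so dE/dT = 4 * sigma * T^3
dE_dT = 4 * SIGMA * T_SURFACE**3
print(round(dE_dT, 2))  # ~5.42 W/m^2 per K
```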

Process 4 – The tropospheric humidity increases. This increases the emissivity of the atmosphere near the surface, which increases the back radiation.

So essentially some cycles are reinforcing each other (=positive feedback). The question is about the value of the new equilibrium point.

From Ramanathan (1981)

Figure 3

In Ramanathan’s 1981 paper he gives some basic calculations before turning to GCM results. The basic calculations are quite interesting because one of the purposes of the paper was to explain why some model results of the day produced very small equilibrium temperature changes.

Sadly for some readers, a little maths is necessary to reproduce the result. It is simple maths because it is based on simple concepts – as already presented. As much as possible I follow the equation numbers and notations from Ramanathan’s 1981 paper.

To give an idea of typical values: for every 1°C difference between the surface and the air at the reference height, SH changes by 8.5 W/m² per K, and with a relative humidity of 80% at the reference height (and 100% at the ocean surface), LH changes by 55 W/m² per K.

Now we consider changes.

TM‘ is the change in the surface temperature of the ocean as the result of the increased CO2, and similar notation for other changes in values. Missing out a few steps that you can read in the paper:

TM‘ = [ΔR(0) + ΔF↓(2) + ΔF↓(3)] / { [∂LH/∂TM + ∂SH/∂TM + 4σTM³] + [∂LH/∂TS + ∂SH/∂TS]·(TS‘/TM‘) } ….[13]

This probably seems a little daunting to a lot of readers.. so let’s explain it:

The parameter on the top line, ΔR(0), is the surface radiative forcing from the increase in CO2

The red terms, ΔF↓(2) and ΔF↓(3), are the changes in downward radiation as a result of processes 2 and 3 described above

The blue terms, ∂LH/∂TM and ∂SH/∂TM (along with 4σTM³), are the changes in upward flux due to only the ocean surface temperature changing

The green terms, ∂LH/∂TS and ∂SH/∂TS, are the changes in upward flux due to only the atmospheric temperature near the surface changing

The smaller the total in the denominator, the larger the increase in temperature. And there are two competing terms:

As the surface temperature of the ocean increases the heat transfer from the ocean to the atmosphere increases

As the atmospheric temperature (just above the ocean surface) increases the heat transfer from the ocean to the atmosphere decreases

As an interesting comparison, Ramanathan reviewed the methods and results of Newell & Dopplick (1979), who found a surface temperature change of TM‘ = 0.04°C as a result of CO2 doubling. Effectively, very little change in surface temperature as a result of doubling of CO2.

Ramanathan states that the calculations of Newell & Dopplick had ignored the red terms and the green terms. Ignoring the red terms means that the heating of the atmosphere is ignored. Ignoring the green terms means that the effect of the ocean surface heating is inflated – if the ocean surface heats and the atmosphere just above somehow stayed the same then the heat transferred would be higher than if the atmospheric temperature also increased as a result. (Because heat transfer depends on temperature difference).
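To see how dropping terms collapses the result, equation [13] can be evaluated with placeholder numbers. Only ΔR(0) = 1.2 and ΔF↓(2) = 2.3 W/m² come from the article; ΔF↓(3), the partial-derivative values and the TS‘/TM‘ ratio below are purely illustrative assumptions, not values from the paper:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def tm_prime(dR0, dF2, dF3, dLH_dTM, dSH_dTM, dLH_dTS, dSH_dTS,
             TM=288.0, ratio=1.0):
    """Evaluate eq. [13]: ocean mixed-layer temperature change TM'.
    `ratio` stands in for TS'/TM'."""
    numerator = dR0 + dF2 + dF3                    # surface forcing + red terms
    blue = dLH_dTM + dSH_dTM + 4 * SIGMA * TM**3   # response to ocean warming
    green = (dLH_dTS + dSH_dTS) * ratio            # response to air warming (<0)
    return numerator / (blue + green)

# Illustrative values only (W/m^2/K for the derivatives):
full = tm_prime(1.2, 2.3, 0.5, 4.0, 2.0, -3.0, -1.5)
# Newell & Dopplick style: red and green terms ignored
nd = tm_prime(1.2, 0.0, 0.0, 4.0, 2.0, 0.0, 0.0)
print(full > nd)  # True: dropping those terms gives a much smaller warming
```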

I expect that many people doing their own estimates will be working from similar assumptions.

Later Work

Here is a graphic from Andrews et al (2009), reference and free link below, which shows the simplified idea:

From Andrews et al (2009)

Figure 4

The paper itself is well worth reading and perhaps will be the subject of another article at a later date.

Conclusion

I haven’t demonstrated that the surface will warm by 3°C for a doubling of CO2. But I hope I have demonstrated the complexity of the processes involved and why a simplistic calculation of how the surface responds immediately to the surface forcing is not the complete answer. It is nowhere near the complete answer.

The surface temperature change as a result of doubling of CO2 is, of course, a massively important question to answer. GCM’s are necessarily involved despite their limitations.

Re-iterating what Ramanathan said in his 1998 paper in case anyone thinks I am making a case for a 3°C surface temperature increase:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

Notes

Note 1: The equation ignores the transfer of heat into the ocean depths

Note 2: The “bulk aerodynamic formulas” – as they have become known – are more usable versions of the fundamental equations of heat and water vapor flux. Upward sensible heat flux, SH = ρcp<wT>, where w = vertical velocity, T = temperature, so <wT> is the time average of the product of vertical velocity and temperature. However, turbulent motions are so rapid, changing on such short time intervals, that measurement of these values is usually impossible (or requires intensive measurement with specialist equipment in one location). Instead we can write, approximately, SH = ρcpCHU(TM – TS) and LH = ρLCEU(qM – qS), where U is the mean horizontal wind speed, CH and CE are bulk transfer coefficients, and the temperature and humidity differences are between the ocean surface and the reference height.

By various thermodynamic arguments, and especially by lots of empirical measurements, an estimate of heat transfer can be made via the bulk aerodynamic formulas shown above, which use the average horizontal wind speed at the surface in conjunction with the coefficients of heat transfer, which are related to the friction term for the wind at the ocean surface.
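As a numerical sketch of the bulk formulas (every input below is a typical textbook value, assumed for illustration, not taken from the paper):

```python
# Bulk aerodynamic estimates of surface heat fluxes.
# All input values are typical/illustrative assumptions.
rho = 1.2      # air density, kg/m^3
cp = 1004.0    # specific heat of air at constant pressure, J/(kg K)
Lv = 2.5e6     # latent heat of vaporization of water, J/kg
C_H = 1.3e-3   # bulk transfer coefficient for heat (dimensionless)
C_E = 1.3e-3   # bulk transfer coefficient for moisture (dimensionless)
U = 7.0        # mean horizontal wind speed near the surface, m/s
dT = 1.0       # ocean surface minus reference-height air temperature, K
dq = 1.5e-3    # specific humidity difference, kg/kg

SH = rho * cp * C_H * U * dT  # sensible heat flux, W/m^2
LH = rho * Lv * C_E * U * dq  # latent heat flux, W/m^2
print(round(SH, 1), round(LH, 1))
```

With these assumed values SH comes out near 11 W/m² and LH near 41 W/m², the same order as the typical numbers quoted earlier in the post.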

Note 3: The calculation of each of the partial derivative terms is not shown in the paper, these are my calculations. I believe that ∂LH/∂TS = 0, most of the time – this is because if the atmosphere at the reference height is not saturated then an increase in the atmospheric temperature, TS, does not change the moisture flux, and therefore, does not change the latent heat. I might be wrong about this, and clearly some of the time this assumption I have made is not valid.

The second law of thermodynamics proves beyond doubt that there must be a greenhouse effect, because:

1. A portion of the Earth’s emission into space takes place from the upper layers of the atmosphere.

2. Heat transport from a source (here the surface of the earth) to a sink (here the upper layers of the atmosphere) requires a temperature difference (second law of thermodynamics) – and this holds for heat transport whether by convection or by radiation.

3. At a given temperature, a black body has the maximum possible emission.

4. At the wavelengths at which a body emits, it also absorbs.

5. Radiation passing through an absorbing body is more or less absorbed and replaced by the self-emission of the absorbing body according to its temperature (“substitute radiation”).

Without absorbing gases (greenhouse gases – GHGs), the earth’s surface would radiate almost as a black body at the relevant wavelengths, i.e. in the infrared range. A gas cannot radiate more at a given temperature than a black body. The black-body condition together with the required temperature difference must lead to the greenhouse effect: the radiation into space must be as large as it would be at a uniform temperature – despite the differences in temperature. This requires temperatures such that part of the system is below the uniform temperature (here the upper atmosphere) and another part is above it (here the earth’s surface).


The net forcing at the surface following an instantaneous doubling of CO2 is less than 1 W/m². That’s because the increase in CO2 causes a decrease in incoming solar radiation at the surface because more is absorbed by the 4 μm CO2 band. So the atmosphere is heated from above as well as from below.

I’m having some trouble following this. I’d like to ask you a question I’ve asked in numerous places. So far no one has been able to provide an answer.

The often quoted ‘intrinsic’ or ‘no-feedback’ warming from 2xCO2 of about 1.1 C – where is the +6 W/m^2 flux into the surface that causes this coming from? Specifically, where do the watts come from?

I tried to download the Ramanathan 1981 paper from your link, but it says I don’t have permission to access that page. Do you have another link?

The radiative forcing methodology would be more palatable if the forcings (basically local derivatives of a complex curve) were not extrapolated to twice the value of a given quantity (CO2 concentration).. that just seems wrong somehow. I would think that method would be good for around 10 to 20% of a starting point, not 200%. But it is great for figuring what increases when something decreases, etc.

Do you all know how much the optical thickness of the atmosphere (to longwave radiation) would increase with just the doubling of CO2? Could you point me to that info? I can’t just double the thickness, since CO2 is only a component of the total.

Nice article ! The analysis presented however does still assume that during the energy balance adjustment of the ocean/atmosphere system to a doubling of CO2, the incident solar SW flux remains constant. This is not the case if the % of low cloud cover increases (with specific humidity). This will then raise slightly the global albedo thereby reducing solar forcing. Another effect which surely acts as a negative feedback is the reduction in lapse rate caused by condensation heating as happens in the tropics, particularly during thunderstorms. Naively this then should increase overall OLR from the upper atmosphere as the temperature of the average height for emission of CO2 increases.

The fact that liquid oceans have survived on Earth for 4 billion years, during which time solar output has increased by 30%, is a strong indicator that overall the effect of the oceans must be to stabilise the Earth’s temperatures.

SOD wrote: “So, with the idea of an instantaneous doubling, if the surface increase in radiative forcing is less than the tropopause increase in radiative forcing, this must mean that the balance of energy is absorbed within the atmosphere.”

I think this statement could be wrong. The reason that the surface forcing for 2X CO2 is a little less than 1 W/m2 while the forcing at the tropopause is 3.7 W/m2 is that part of the CO2 absorption overlaps with the absorption of water vapor. If water vapor is already absorbing 99% of surface OLR at a wavelength absorbed by both water vapor and CO2, doubling CO2 can’t appreciably increase the amount of energy absorbed within the atmosphere. Clouds covering approximately 30% of the sky also compete with CO2 for OLR, reducing surface forcing compared with the tropopause. (Somewhere in your long series of posts calculating upward and downward radiation, I learned that the maximum change in upward radiation upon doubling CO2 occurs at wavelengths where about 50% of OLR is absorbed.) At the tropopause and above, there are no clouds and little water vapor to compete with CO2 for outgoing photons, so doubling CO2 has a much bigger effect there.

If the above paragraph is correct, why is your apparently sensible conclusion about the balance of energy wrong? I’m not sure. Perhaps the assumption of a fixed lapse rate with radiative-convective equilibrium converts the missing energy flux into convection. Perhaps CO2’s role as an emitter (as well as absorber) hasn’t been properly handled.

This prompts an interesting thought experiment. Imagine a lab laser tuned to a strong absorption of both CO2 and water vapor. The laser is directed into a transparent container filled only with CO2 long enough to absorb 99% of the incoming energy. The system will warm until outgoing power (radiation, convection and conduction) equals 99% of incoming power from the laser. Next we start adding water vapor in the container. Absorption can increase a maximum of 1%, but there is no limit on how many water vapor molecules that can be added to radiate energy away from the container. Can adding more of a greenhouse gas appears cool the system, not warm it?

I think your thought experiment demonstrates exactly how water vapor acts to stabilise temperatures. Any additional forcing must lead to more water evaporation from the effectively infinite sinks held in the Earth’s oceans. What happens next? Enhanced greenhouse from more water vapor can only affect bands that don’t overlap with CO2. Direct transport of latent heat to higher levels in the atmosphere through evaporation short-circuits the CO2 greenhouse effect by lowering the lapse rate, leading to more OLR. More clouds then increase albedo, which offsets solar heating.

The fact that the Earth’s temperature has remained so stable for 4 billion years in my view effectively rules out a simple linear positive water vapor feedback of 2 W/m² per K in response to external forcing.

“The fact that the Earth’s temperature has remained so stable for 4 billion years….”
Stable? You might want to check the veracity of that claim. “Stable” certainly wouldn’t be my choice to characterize an atmosphere of noxious fumes, or a deep-freeze giving way to the alternating temperature cycles of recent eons.

The radiative forcing methodology would be more palatable if the forcings (basically local derivatives of a complex curve) were not used to extrapolated to twice the value of a given quantity (Co2 concentration).. that just seems wrong somehow.

Radiative forcing calculations are not extrapolations based on local derivatives. They are calculated using the radiative transfer equations at a new set of conditions, like doubled CO2 concentration.

A useful tool for learning about how changes to the atmosphere affect radiative transfer in the IR is David Archer’s MODTRAN page. MODTRAN is not a full line-by-line radiative transfer program. It uses a moderate resolution band transmission model. It’s somewhat crude in that you can’t change the lapse rate, but that’s a reasonable first order approximation.

In the section on Surface Warming and in the conclusion, SOD cites Ramanathan:

“As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.”

We must understand why this caveat is necessary and what limitations it places on our “education”. Or is “indoctrination” a more accurate term? “Doctrine” describes a set of beliefs to be accepted, not subject to scrutiny. We can’t tell if this “highly simplified linear system” is education or indoctrination until we examine the caveats. How misleading can this system be?

Obviously, the major caveats involving feedbacks exist, and our host – an honorable scientist who always strives for scientific accuracy – has always approached water vapor and cloud feedbacks with appropriate caution. However, he often ignores lapse-rate feedback. (The term lapse rate is not used in this post.) It is impossible to make ANY predictions about what will happen at the surface unless one postulates that the slope of the red line in Figure 2 running from the tropopause to the surface doesn’t change as CO2 and temperature increase. This unexamined postulate permits the amount of warming of the surface (and lower troposphere) to be set equal to the amount of warming at the tropopause. Water vapor and cloud feedbacks can develop only AFTER warming at the tropopause has been transferred to the lower troposphere and the surface by a fixed lapse rate. The assumption that lapse rate remains constant is the CRITICAL CAVEAT behind every attempt to project what should happen after radiative forcing warms the tropopause.

Fundamental physics tells us the steepest (most negative) lapse rate consistent with stability is -9.8 degK/km for dry air and -4.9 degK/km for air saturated with water vapor at the surface. These are called the dry and wet adiabatic lapse rates and SOD has shown elsewhere how these quantities can be calculated from theory. Unfortunately, on the real earth, lapse rate is NOT controlled by theory – we have an “environmental” lapse rate which is controlled by convection. The environmental lapse rate varies from location to location and time to time. No fundamental physics tells us what the global average environmental lapse rate must be and nothing tells us that it can’t change. Unlike the instantaneous radiative response to temperature change, convection develops slowly and doesn’t immediately stop when the local adiabatic lapse rate has been restored. (It takes many hours of sunlight for summer thunderstorms to develop and they continue long after the sun – their driving force – has set. In many locations, the sign of the lapse rate changes at night at the bottom of the atmosphere because the ground cools by radiation faster than air.) And, unlike radiation, upward convection requires downward convection somewhere else on the planet. No one ever seems to calculate the net upward energy flux of sensible heat in terms of W/m2, as everyone does for radiation. In the K&T energy balance diagram, sensible heat transfer is determined by subtracting all other upward fluxes from all other downward fluxes. Finally, we can’t rely on GCMs to tell us about convection, because it occurs on spatial scales far smaller than the grid cells of these models.
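The residual calculation mentioned for the K&T diagram looks like this, using the widely cited Kiehl & Trenberth (1997) global-mean values (W/m²):

```python
# Sensible heat as the residual of the surface energy budget
# (Kiehl & Trenberth 1997 global-mean values, W/m^2)
solar_absorbed_surface = 168.0  # SW absorbed by the surface
surface_emission = 390.0        # upward LW from the surface
back_radiation = 324.0          # downward LW from the atmosphere
latent_heat = 78.0              # evapotranspiration

net_lw = surface_emission - back_radiation  # 66 W/m^2
sensible_heat = solar_absorbed_surface - net_lw - latent_heat
print(sensible_heat)  # 24.0, the KT97 sensible heat value
```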

Fundamental physics tells us that the tropopause should warm about 1.2 degK due to the forcing from 2X CO2. How much must the lapse rate change to prevent half of this change (0.6 degK) from reaching the earth’s surface? If the tropopause averages about 11 kilometers above the surface of the earth (Wikipedia) and its temperature averages 217 degK, the average environmental lapse rate is -6.45 degK/km. If the tropopause warms 1.2 degK and the surface only 0.6 degK, the average lapse rate changes to -6.40 degK/km, a <1% change. HALF of projected surface warming can be negated by a 1% CHANGE in lapse rate – a fundamental parameter we can't calculate from basic physics and can't reliably model. If the lapse rate is really this poorly understood, isn't CAGW more of a doctrine than a science?

(A closer examination of Figure 2 suggests that the lapse rate between the average height of upward emission and the average height of downward emission may be the relevant distance, and these altitudes are significantly closer than 11 km. Unfortunately, the IPCC calculates radiative forcing at the tropopause, technically the altitude at which the lapse rate increases to -2.0 degK/km, not at the altitude of average upward emission.)
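Frank's arithmetic can be checked directly (using only the values stated in the comment):

```python
# Check of the lapse-rate arithmetic in the comment above
z_trop = 11.0   # tropopause height, km
t_trop = 217.0  # tropopause temperature, K
lapse0 = 6.45   # environmental lapse rate magnitude, K/km

t_surf = t_trop + lapse0 * z_trop  # ~287.95 K

# Tropopause warms 1.2 K while the surface warms only 0.6 K:
lapse1 = ((t_surf + 0.6) - (t_trop + 1.2)) / z_trop
pct_change = 100.0 * (lapse0 - lapse1) / lapse0
print(round(lapse1, 2), round(pct_change, 2))  # ~6.4 K/km, under a 1% change
```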

Frank: You have hit the nail right on the head ! The environmental lapse rate changes in response to extra radiative forcing. This happens every day in the tropics as afternoon and evening thunderstorms develop when the sun’s radiance peaks and then falls. Lapse rate changes alter the level of free convection above which warm moist air then explodes up to the tropopause and beyond. It is like a pressure cooker blowing open the lid. Thunderstorms release huge amounts of energy ~ 10**15 joules and there are something like 50,000 occurring every day somewhere or other (Wikipedia). One tropical storm releases around 10**20 joules per day and there are 100 or so every year. For comparison the rise in CO2 since 1750 is predicted to have increased radiative forcing by 1.6 watts/m2. So by my reckoning this amounts to 7*10**19 joules of extra energy per day for the whole Earth. Furthermore small changes in tropical boundary layer and other clouds can change the albedo also cooling the surface.

The sun’s output has risen by about 85 watts/m2 over the last 4 billion years and temperatures have varied rather little over that time. The presence of liquid oceans for all that time seems to be too much of a coincidence to me. My naive suggestion is that the tropical oceans demonstrate that the Earth’s oceans self-regulate its surface temperature. No doubt SOD or someone else will correct me for this heresy! :-)

“Thus, system response to a forcing depends not only on (1) the size of a forcing, and (2) its duration (affecting the accumulation of heat), but also (3) the forcing depth in a system. For example, long-wave forcing of the low AR, high loss atmospheric level by GHGs would differ from shortwave solar radiation forcing the surface layers of the land and ocean. Geothermal heating in the deep ocean would have the highest intrinsic gain, due to reduced losses.”

A shorter version and discussion of the accumulation/depth model are here:

Clive: If you want to draw conclusions about climate sensitivity from how the planet’s climate has changed over the last 4 billion years, it is only fair to consider what the last glacial maximum (LGM) tells us about climate sensitivity. Estimates of climate sensitivity based on the LGM are high and aren’t often challenged by skeptics – except to say that we don’t know enough about conditions during the LGM. If we don’t know enough about conditions during the LGM (20,000 ya) when the continents were in their current locations and we have ice cores preserving atmospheric composition, how can we have any confidence about 2,000,000,000 ya? Can you compare LGM forcings and temperatures to those estimated for billions of years ago (but after the heat from formation of the earth had dissipated) or provide some references?

By convention, the lapse rate is a positive number. The initial moist adiabatic lapse rate isn’t a constant, it decreases as the surface temperature increases. It also varies with altitude as the specific humidity decreases and eventually approaches the dry lapse rate at sufficiently high altitude.
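The temperature dependence described above can be sketched with the standard moist-adiabat formula; the Clausius-Clapeyron saturation-pressure approximation and all constants below are textbook values, and the formula is the simplified (mixing-ratio) form:

```python
import math

def moist_lapse(T, p=1.0e5):
    """Approximate saturated adiabatic lapse rate (K/km) at temperature T (K)
    and pressure p (Pa), using the simplified standard formula."""
    g, cp = 9.81, 1004.0   # gravity, specific heat of dry air
    Rd, Rv = 287.0, 461.5  # gas constants for dry air and water vapor
    Lv, eps = 2.5e6, 0.622 # latent heat of vaporization, Rd/Rv
    # Clausius-Clapeyron approximation for saturation vapor pressure (Pa)
    es = 611.0 * math.exp((Lv / Rv) * (1.0 / 273.15 - 1.0 / T))
    qs = eps * es / p      # saturation mixing ratio (approximate)
    num = 1.0 + Lv * qs / (Rd * T)
    den = 1.0 + (Lv**2) * qs * eps / (cp * Rd * T**2)
    return 1000.0 * (g / cp) * num / den

print(round(moist_lapse(300.0), 1), round(moist_lapse(273.0), 1))
```

With these assumptions the rate is roughly 3.7 K/km at 300 K but about 6.5 K/km at 273 K, approaching the dry rate (~9.8 K/km) as the saturation mixing ratio shrinks – i.e. the moist adiabatic lapse rate decreases as the surface warms, as the comment says.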

A negative lapse rate feedback would show up as a difference in rate of increase of temperature with altitude. In fact, that is modeled. It’s what predicts what’s sometimes called the big red spot in the equatorial upper troposphere for doubled CO2. The only problem is that it hasn’t been observed. The evidence is that the environmental lapse rate hasn’t changed significantly over the period of satellite temperature profile observations.

Here’s where your radiative transfer program would be useful. We can’t change the lapse rate in MODTRAN or SpectralCalc. But you should be able to do it in your program. It would be interesting to see exactly how much the lapse rate would have to change to reduce the temperature increase at the surface from doubling CO2 by half, absent other feedbacks. I’m betting it’s a lot more than Frank calculated, as emission from the upper troposphere isn’t all that large.

It is important to note that just the change in optical thickness doesn’t give you “the whole answer”.

This is because “the whole answer” is also about emission, not just absorption. And if it is about emission we need to know the temperature of the atmosphere from which the radiation is emitted (or, more accurately, the integral of emitted radiation across the whole depth of atmosphere).

The analysis presented however does still assume that during the energy balance adjustment of the ocean/atmosphere system to a doubling of CO2, the incident solar SW flux remains constant. This is not the case if the % of low cloud cover increases (with specific humidity). This will then raise slightly the global albedo thereby reducing solar forcing. Another effect which surely acts as a negative feedback is the reduction in lapse rate caused by condensation heating as happens in the tropics, particularly during thunderstorms. Naively this then should increase overall OLR from the upper atmosphere as the temperature of the average height for emission of CO2 increases.

The overall calculation of surface temperature change for a doubling of CO2 does of course depend upon the feedbacks, including the cloud feedback, the lapse rate feedback and other feedbacks.

As I said in the conclusion to the article:

I haven’t demonstrated that the surface will warm by 3°C for a doubling of CO2. But I hope I have demonstrated the complexity of the processes involved and why a simplistic calculation of how the surface responds immediately to the surface forcing is not the complete answer. It is nowhere near the complete answer.

Does this clarify my position?

The fact that liquid oceans have survived on Earth for 4 billion years, during which time solar output has increased by 30%, is a strong indicator that overall the effect of the oceans must be to stabilise the Earth’s temperatures.

This is kind of true. Let’s agree that that something appears to have stabilized the global temperatures. It doesn’t have to be the oceans, although of course, covering 70% of the earth and storing most of the climate system’s heat they must be massively important in the complete picture.

A small comment on this in Ghosts of Climates Past which will be followed one day with more articles when I feel like I understand something useful on this challenging subject.

..The assumption that lapse rate remains constant is the CRITICAL CAVEAT behind every attempt to project what should happen after radiative forcing warms the tropopause..

No one assumes this in climate science – excepting the “here is what happens before any feedback is taken into account, beginners school of climate science”.

If I have given this impression (that the lapse rate remaining constant is a critical component of climate science forecasts) then please accept my apologies.

..The environmental lapse rate varies from location to location and time to time. No fundamental physics tells us what the global average environmental lapse rate must be and nothing tells us that it can’t change..

This is correct.

..No one ever seems to calculate the net upward energy flux of sensible heat in terms of W/m2, as everyone does for radiation..

If you want to say “seems” then of course, as a subjective assessment, it is, by definition, correct.

SOD: As usual, your science is technically correct, but directs the readers attention away from problems with the traditional forcing/feedback analysis. Your post discusses water vapor and cloud feedbacks without discussing how or why warming at the tropopause should or should not result in an equal amount of warming at the surface. Equal warming means a fixed lapse rate. Hiding the mechanism(s) of surface warming behind “caveats”, assumptions, or terms like “highly simplified linear system” or “no-feedbacks” doesn’t do justice to our knowledge of physics. A more appropriate discussion might proceed as follows:

We know from energy balance considerations and the absorption properties of GHGs that 2X CO2 will require warming at altitudes near the tropopause where temperature is controlled by radiative equilibrium. The tropopause is expected to warm an average of about 1 degK.

Question 1: What else does fundamental physics tell us MUST happen when the tropopause warms?

Answer 1: The tropopause can’t warm unless it gets additional energy flux from below. (The definition of radiative forcing already accounts for changes in flux from above. A small fraction of the needed energy is absorbed from the incoming SWR by doubled CO2.)

Question 2: How can increased flux be delivered from below?

Answer 2: By radiation and/or convection.

2a) Increased Radiation from Temperature Change: Warming of the troposphere and/or the surface will deliver increased flux from below. IF we know how much warming occurs at all locations, basic physics would allow us to calculate the additional flux arriving at the tropopause. If we ASSUME that: a) all locations warm equally (ie a fixed lapse rate), b) all of the needed flux is delivered by radiation, and c) atmospheric components and the surface do not change (no feedbacks); a 1.2 degK temperature rise is needed everywhere to deliver the required radiative flux from below.

2b) Increased Radiation from Clouds: Surprisingly, upward radiative flux from below can increase without warming. If cloud tops are lower, increased radiation should be emitted upwards towards the tropopause without warming. Lower clouds might result from increased evaporation and convection, both of which might be driven by a surface warming that is smaller than tropopause warming.

2c) Convection: Basic physics tells us little about the oft-ignored possibility that some of the needed energy flux could be delivered by convection. If convection didn’t exist (and radiative equilibrium existed down to the surface), surface temperature would be about 65 degK warmer than present. In that case, however, the temperature gradient (lapse rate) in the lower troposphere would be too steep and therefore easily changed by the buoyancy-driven convection that does exist. Convection reduces the actual gradient to an average of 6.5 degK/km (between the surface and the lowest altitudes where radiative equilibrium controls temperature), but physics (to my knowledge) doesn’t tell us why this value is 6.5 degK/km rather than 6.4 or 6.6 degK/km. When the tropopause temperature is determined by radiative equilibrium, a 0.1 degK/km change in the lapse rate translates into about a 1 degK change in surface temperature. Increased convection – driven by some surface warming – should result in a smaller lapse rate and a surface temperature rise smaller than at the tropopause.
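The sensitivity claimed in 2c is easy to check with a few lines. The tropopause temperature and the 10 km depth below are illustrative assumptions, not values from the comment thread:

```python
# Sketch of the arithmetic above: with tropopause temperature fixed by
# radiative equilibrium, the surface temperature follows from the mean
# lapse rate over the depth of the troposphere. Numbers are illustrative.

TROPOPAUSE_T = 223.0   # degK, assumed fixed by radiative equilibrium
DEPTH_KM = 10.0        # assumed surface-to-tropopause depth

def surface_temp(lapse_rate_k_per_km):
    """Surface temperature implied by a given mean environmental lapse rate."""
    return TROPOPAUSE_T + lapse_rate_k_per_km * DEPTH_KM

delta = surface_temp(6.5) - surface_temp(6.4)  # a 0.1 degK/km change
print(delta)  # about 1.0 degK change at the surface
```

Over a 10 km deep troposphere, any change in lapse rate is simply multiplied by ten at the surface, which is the whole point being argued.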

Question 3: All of these mechanisms require some warming of the surface and lower troposphere. That warming can be less than or equal to that calculated for the tropopause. What changes do we expect to accompany the warming occurring at the surface and lower troposphere?

Answer 3: Surface warming – which may be less than or equal to tropopause warming – will be amplified or suppressed by traditional water vapor, cloud and lapse rate feedbacks within months followed by much slower ice/snow albedo feedbacks.

Confusion arises from the forcing/feedback terminology. The traditional feedbacks (water vapor, clouds, albedo) don’t operate on warming at the tropopause. So the mechanisms (2a, 2b and 2c) that initially spread warming to the surface are forcings, just like the small radiative forcing (1 W/m2) operating near the surface. If increased convection is an important means of getting needed energy to the tropopause and this reduces the lapse rate; this is a forcing, not a feedback. The traditional explanation used by this post combines all processes that change the lapse rate into a lapse-rate feedback. (Often a combined water vapor/lapse rate feedback is used, falsely implying that the environmental lapse rate is changed only by humidity and not by convection.) This sleight-of-hand demands that feedbacks amplify a surface warming equal to the tropopause warming.

SOD: I did use the word “seems” because only a small fraction of climate research penetrates the blogosphere. My impression was mostly based on K&T’s energy balance diagram, which treats sensible heat flux as a fudge factor. They included two old references showing that their fudge factor appears reasonable. Your link did allow me to find the right search terms to find who has been analyzing the sensible and latent heat fluxes over the ocean estimated from reanalysis. Of course, calculating long-term trends in convection from reanalysis data is as problematic as calculating humidity trends from the same data. You may find these links interesting:

You can’t look at the lapse rate in isolation either. If the lapse rate changes, the radiative flux upward and downwards changes too. If you calculate forcing (excess of absorption over emission) for doubling CO2 by altitude, it’s lowest at the surface and highest at the tropopause. The instantaneous response for a one dimensional atmosphere would be for the lapse rate to decrease (the tropopause warms faster than the surface) with little change in surface temperature. But that results in an increase in downward radiation at the surface. The surface is now absorbing more than it emits and must also warm. That will tend to drive the lapse rate back towards the environmental value because the increased surface temperature will cause the lower atmosphere to warm. One dimensional radiative/convective models always tend to converge on about the same lapse rate profile.

Claiming that convection will increase to maintain a lower surface temperature increase and lower lapse rate raises the question of why isn’t convection higher and the lapse rate and surface temperature lower now. The simplest explanation is that the lapse rate is at its optimum value now and has little variation with surface temperature.

DeWitt: Thanks for the comments. This is an issue I’d like to understand better. You said:

“The instantaneous response for a one dimensional atmosphere would be for the lapse rate to decrease (the tropopause warms faster than the surface) with little change in surface temperature. But that results in an increase in downward radiation at the surface.”

How many of the additional photons emitted downward from the tropopause after a 1 degK warming actually reach the surface? If the tropopause emitted like a blackbody, a temperature rise from 220 to 221 degK would increase the downward flux by only 2.4 W/m2. However, CO2 emits at only some wavelengths and there probably isn’t enough CO2 at many of these wavelengths for the tropopause to be optically thick enough for blackbody assumptions. Those wavelengths that are optically thick are precisely those that are strongly absorbed on the way to the surface. So the only way significant amounts of additional energy are likely to reach the surface is through a series of emissions and absorptions. That process requires warming through the troposphere and is contrary to the standard formulation for radiative forcing: after equilibration with the stratosphere and before equilibration with the troposphere. Feedbacks can’t exist until the forced warming at the tropopause has been transmitted to the surface and lower troposphere. When you assert that downward radiation is a critical mechanism by which forced warming is transmitted downward, you are contradicting the definition of radiative forcing. (The standard 1D model collapses due to internal contradictions.)
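The 2.4 W/m2 blackbody figure quoted above follows directly from the Stefan-Boltzmann law; here is the arithmetic, with the same caveat as in the comment that blackbody emission may not even apply at these wavelengths:

```python
# Increase in blackbody flux when a 220 degK emitter warms to 221 degK,
# via the Stefan-Boltzmann law F = sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2/K4

def bb_flux(t_k):
    return SIGMA * t_k**4

extra = bb_flux(221.0) - bb_flux(220.0)
print(round(extra, 2))  # roughly 2.4 W/m2, as quoted
```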

When discussing lapse rate, you wrote: “The simplest explanation is that the lapse rate is at its optimum value now and has little variation with surface temperature.” When the surface warms (before thunderstorms, at the ITCZ, at frontal boundaries?), convection increases. Surface winds are driven by increased convection. Evaporation increases with surface wind speed. Therefore, increased surface temperature may drive more water vapor aloft and thereby reduce the environmental lapse rate. Above the equator (the warmest and moistest place on the planet), convection produces the coldest place in the atmosphere. Convection seems to be driven by temperature and the lapse rate appears to be controlled by convection. How sure are you that lapse rate doesn’t vary with temperature?

Figure 1 is ‘so’ misleading that I hardly know where to start to show that the graphic and concept shouldn’t be used (even for educational purposes).

‘Tm’ is represented ‘graphically’ as ‘ocean surface’, so how can ‘Tm’ add to “Evaporation and sensible heat flux”? The ocean is (very nearly) ‘always’ colder than the atmosphere, so “Evaporation” ‘only’ can be the ‘sole’ “heat flux” there and this energy doesn’t add to temperature until ‘condensation’ within a cooler atmosphere (probably a higher atmospheric altitude because ‘water vapour’ is a much less dense molecule than other atmospheric constituents)! Oh! It (water vapour) does add to ‘DLR’ (back radiation), but ‘DLR’ is converted directly, once again, into “Evaporation” by the ocean surface ‘skin’.

When the ‘ocean surface’ is ‘colder’ than the ‘surface atmosphere’, any “sensible heat flux” from ocean to atmosphere is absolutely impossible! In fact the ‘heat flux’ is in the opposite direction, from atmosphere to ocean by way of ‘surface contact conduction’ (not to mention, again, DLR’s conversion to “Evaporation”)!

Why do you associate yourself with this ‘diatribe’ SoD? Surely you’re above this. I’d have thought that the ‘caveat’ was a ‘strong hint’.

DeWitt: Thanks for explaining the proper convention for lapse rates. I’ve seen negative signs in many places, but I see that textbooks include the negative sign in the definition.

You are correct that the wet adiabatic lapse rate is not a constant. The number I cited was for saturated air near the surface, but I should have specified a temperature (and pressure?). The few radiosonde plots I have seen didn’t appear to get dramatically closer to 4.9 degK/km at higher altitudes. Is there a practical limit to the lapse rate near the top of the tropopause where the temperature is 230 degK? (I don’t remember the name of the plot that shows theoretical adiabatic curves.)

Do you know of any physics that constrains the mean global environmental lapse rate? The average specific humidity? If my understanding is correct, the “hot spot” (apparently missing) in the upper tropical troposphere is attributed to the combined water vapor/lapse rate feedback which is supposed to be strongly positive despite the fact that increased water vapor reduces the lapse rate for “buoyant stability”. The presence of a “hot spot” is mathematically equivalent to a decreasing environmental lapse rate.

My main complaint is the assumption that the environmental lapse rate must be fixed. The warming of the tropopause demanded by radiative forcing and energy balance considerations requires the tropopause to get additional energy flux from somewhere below. That additional flux could be supplied by an enhanced Hadley circulation driven by a surface warming less than 1.2 degK, thereby producing a shallower lapse rate. It doesn’t seem appropriate to consider water vapor and cloud feedbacks BEFORE possible changes in lapse rate – those feedbacks depend on how much warming the lapse rate transfers from the tropopause to the surface and lower troposphere.

The scale height for water vapor is ~2 km compared to ~8 km for dry air. So at 8 km, the air pressure has been reduced to ~1/e of the surface pressure. But the saturated water vapor pressure has gone down by ~(1/e)^4. At that point, the adiabat is going to be closer to the dry adiabat.
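The scale-height comparison above is a one-liner with the exponential pressure law; the ~8 km and ~2 km scale heights are the approximate figures quoted in the comment:

```python
import math

# Exponential fall-off with assumed scale heights: ~8 km for dry air,
# ~2 km for water vapour. At z = 8 km the vapour is down by ~(1/e)^4.
H_DRY_KM = 8.0    # assumed dry-air scale height
H_H2O_KM = 2.0    # assumed water-vapour scale height
z = 8.0           # km

dry_fraction = math.exp(-z / H_DRY_KM)    # ~1/e of surface pressure
h2o_fraction = math.exp(-z / H_H2O_KM)    # ~(1/e)^4 of surface value

print(dry_fraction, h2o_fraction)  # ~0.368 and ~0.018
```

With so little vapour left, latent-heat release is weak and the local adiabat approaches the dry value, as the comment says.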

Some examples are shown in Fig. 3.5. To compute each of these curves, we start with a saturated parcel of specified temperature at the surface and integrate upwards. As the parcel rises, water condenses releasing latent heat, and so temperature decreases more slowly than in the dry case. This effect is stronger the warmer (and hence moister) the initial conditions. At typical Earth-like surface temperatures, the effect is very strong: for a starting temperature of 5C, the mean lapse rate over the first 5 km is 7.2 K km⁻¹, for 20C it is 4.8 C km⁻¹ and for 35C it is 3.3 C km⁻¹ (compare with 9.8 C km⁻¹ for the dry adiabat).
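The temperature dependence in the quoted passage can be reproduced, at least locally near the surface, with the standard textbook expression for the saturated adiabatic lapse rate. The Magnus-type saturation vapour pressure approximation used here is my choice, not from the quoted text:

```python
import math

# Saturated adiabatic lapse rate at a given temperature and pressure, using
# the standard formula. These are near-surface point values; the textbook
# figures quoted above are averages over the first 5 km.
G = 9.81        # m/s2, gravitational acceleration
L = 2.5e6       # J/kg, latent heat of vaporisation (taken constant)
RD = 287.0      # J/kg/K, gas constant for dry air
EPS = 0.622     # ratio of molecular weights, water vapour / dry air
CPD = 1004.0    # J/kg/K, dry-air specific heat capacity

def sat_vapour_pressure(t_c):
    """Magnus-type approximation, Pa."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def moist_lapse_rate(t_c, p_pa=101325.0):
    """Saturated adiabatic lapse rate in degK/km."""
    t_k = t_c + 273.15
    es = sat_vapour_pressure(t_c)
    r = EPS * es / (p_pa - es)  # saturation mixing ratio, kg/kg
    num = 1.0 + L * r / (RD * t_k)
    den = CPD + (L**2) * r * EPS / (RD * t_k**2)
    return 1000.0 * G * num / den

for t in (5, 20, 35):
    print(t, round(moist_lapse_rate(t), 1))
```

These near-surface values come out somewhat below the quoted 5 km averages, as expected: moisture is greatest at the surface, so latent-heat release (which flattens the lapse rate) is strongest there and weakens with altitude.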

A surface temperature of 35C with 100% humidity is not likely to happen anywhere on the planet.

If the water is allowed to condense and rain out as the parcel rises, you get a pseudo-adiabat. For purposes of calculation, mixing isn’t allowed. Obviously it does happen in the real atmosphere. In the real atmosphere, potential temperature generally rises with altitude rather than remaining at zero as it would if it were following an adiabat.

“This is not correct. However, given that you are so certain in your mistake it seems unclear what evidence to bring forward to demonstrate this very basic fact of climate.”

Then let’s use the ‘Clausius-Clapeyron relationship’ to show this. Water will always produce ‘water vapour’ (WV) at Earthly surface ‘PVTs’ (pressures, volumes and temperatures), and takes the energy for the ‘change of phase’ from the liquid state, unless the near surface ‘relative humidity’ (RH) exhibits a measure of ‘100% RH’ (absolute atmospheric saturation) where no ‘change of phase’ to WV can be made (or energy robbed from the liquid phase) because the local atmosphere is already ‘saturated with WV’.

How many regions of open ocean surface can you disclose that boast a 100% RH that allows the ocean surface ‘temp’ (temperature) to exceed the near ocean surface atmospheric temp (tornadoes and cyclones excluded)?

You’ll realise that this is why I said “The ocean is (very nearly) ‘always’ colder than the atmosphere”, though a ‘sea fog’ region may come to your rescue.

“The atmosphere is mostly transparent to solar radiation. The ocean absorbs most of the solar radiation. How therefore does the atmosphere become warmer on average than the ocean surface?”

I’ve been privy to convincing data from Ferenc Miskolczi that this is due to the ‘latency’ property of WV (as one would expect when Earth’s overall surface is cooled by ‘evaporative cooling’), but, since his return home from Europe, Ferenc found that a hurricane has caused ‘a lot’ of flood damage to his property and home. Thus, I’m unhappy to trouble him with permissions for disclosure at this point.

However, I don’t think I’m disclosing anything if I suggest you research radiosonde data from measurements taken where ocean surface isn’t compromised by adjoining ‘dry land surface’, as this will ‘throw up’ the ‘clearest signal’ of ‘evaporative cooling’.

If someone else also believes that the air above the ocean is generally warmer than the ocean, and wants to see evidence that disproves this view, please throw in your comment and I will put aside some time to provide evidence.

In the meantime perhaps Ray will produce some measurements of temperature profiles above the ocean surface to demonstrate his case.

The transfer of enthalpy is from the ocean surface to the air as shown in Ramanathan’s Figure 1, not the other way around. If the air above the surface is moving and isn’t saturated with water vapor, the enthalpy transfer can be from the water to the air even if the temperature of the water is lower than the air temperature. The result is cooler moister air with increased enthalpy and cooler water with lower enthalpy. If that weren’t the case, then the wet bulb temperature would not be lower than the dry bulb temperature. I’m sure the meteorologists have some sort of modified temperature measure, like equivalent potential temperature, that explains this.

“If someone else also believes that the air above the ocean is generally warmer than the ocean, and wants to see evidence that disproves this view, please throw in your comment and I will put aside some time to provide evidence.”

I need to add a caveat to that SoD! It seems popular with ‘Climate Science’ to take ‘averaged figures’ and, as an engineer, I’ve assumed ‘averaged diurnal’ observation. Otherwise, my conclusion is from ‘first principles’.

However, interestingly the ‘day : night’ ratio for ocean surface evaporation looks to be controlled by near ocean surface atmospheric temp. This governs the local RH and alters the ability of the ocean surface to ‘produce’ (propel) WV into the atmosphere. This should be seen as an ‘energy density’ disparity between ‘ocean’ and ‘atmosphere’.

On a more ‘open approach’, perhaps a ‘day : night’ ratio for RH could be used as a ‘mark’ of ‘type of climate’ for ‘land surface’ (it’s a more complete observation than just ‘diurnal temp variation’). What’s your POV on this?

“In the meantime perhaps Ray will produce some measurements of temperature profiles above the ocean surface to demonstrate his case.”

The only “profiles” I have were provided by Ferenc. However, my conclusions from his ‘profiles’ were influenced by ‘first principles’. I can ‘Google’ for some “profiles” if you want, but so can you (I’m best convinced by my own research, aren’t you?). BTW. You’ll also need the ‘ocean surface skin “profile”‘ that leads to the “profile” of “above the ocean surface” activity as well. The two are linked by the main ‘interface attractor’ (the Clausius-Clapeyron relationship).

SOD wrote: “From another point of view, the vertical temperature profile turns out to be – on average – a fairly predictable value from a simple equation.”

Having a “fairly predictable value” for the environmental lapse rate isn’t good enough. As shown above, a <1% change in the environmental lapse rate can negate half of the surface warming.

How does one actually predict a value for the environmental lapse rate? We know how to calculate the dry and saturated adiabatic lapse rates, but these aren't the environmental lapse rate. Use of the ambiguous term "lapse rate" creates the impression that we are dealing with a fixed value we know (the dry adiabatic lapse rate), or a value we can calculate from temperature (the saturated adiabatic lapse rate), when we are dealing with something more complicated (the environmental lapse rate). The term you may want to use is the "parcel lapse rate", which is a combination of the dry rate up to the condensation level and the saturated rate above. If so, you probably need to consider the rise in the condensation level that will accompany warming. A 100 m change in the condensation level produces roughly a 0.5 degK temperature change (0.1 km*(dry-saturated)), a calculation that resembles those used to determine how high the characteristic emission level rises due to radiative forcing. Then you need to explain why the parcel lapse rate is a good model for the mean global environmental lapse rate, because temperature profiles from radiosonde data show large chaotic deviations from the parcel lapse rate.
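The parenthetical estimate above works out as follows; the 4.9 degK/km saturated rate is the representative near-surface value used earlier in this thread:

```python
# Raising the lifting condensation level by 100 m means that layer follows
# the dry adiabat rather than the saturated one, changing the temperature
# at the top of the column by the difference between the two rates.
DRY = 9.8   # degK/km, dry adiabatic lapse rate
SAT = 4.9   # degK/km, representative saturated rate (temperature dependent)

delta_t = 0.1 * (DRY - SAT)  # 0.1 km switched from saturated to dry adiabat
print(delta_t)  # roughly 0.5 degK
```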

You might consider conceding that: 1) A fixed environmental lapse rate must transfer warming at the tropopause to the surface before feedbacks can amplify the surface warming. 2) So far, a fixed ENVIRONMENTAL lapse rate (before water vapor feedback) is a postulate of the 1D model you present, not something that has been properly derived from basic physics and supported by observation.

Until you can show that a reduction in lapse rate causing a smaller change in surface temperature can produce the same TOA IR emission as no change in lapse rate and a larger change in surface temperature, your conjecture is unproven. I have serious doubts that it is true.

From McGuffie and Henderson-Sellers “A Climate Modelling Primer”, when a 1D R/C model uses 6.5 K/km as the critical lapse rate for convective adjustment and constant relative humidity, doubling CO2 from 300-600 ppmv causes an increase of surface temperature of 1.94 C. Using the moist adiabatic lapse rate as the critical lapse rate, the surface temperature increases 1.37 C. That’s a significant change, but not 50%, and the moist adiabatic rate is more than 1% lower than 6.5 K/km at a surface temperature of 288 K. It’s also highly unlikely that the critical rate is as low as the moist adiabatic rate because, outside of clouds and rain and the near-surface layer over open water, 100% humidity is rare.

Very glad to see you acknowledge that this is a hard problem. It is not a simple calculation to derive the temperature response.

One problem is that formulas 3b and 3c are crude approximations, and in fact wrong since they imply that without wind you can have no evaporation and no thermal convection. Thermals exist on sunny windless days. We have discussed this on one of your earlier threads I think.

One other point – would it not make sense to distinguish between a land surface and a sea surface? The processes will be quite different.

And thermals are not wind because? You can’t have sensible and latent heat transfer without air movement, i.e. convection. Any vertical air movement, like a thermal, creates horizontal air movement to maintain hydrostatic equilibrium. Any air movement has a non-zero velocity V.

I’m only guessing, but I suppose PaulM is thinking “geostrophic winds” when he says “thermals are not winds”, because when you get convection you definitely get horizontal air movement associated with it.

Here in Mississippi, on a “windless” day in the middle of the afternoon, we’ll get about 3 m/s winds that vary over about 360° in roughly a 20-minute period. It’s actually pretty cool to watch a met tower with a weather fin on the top and bottom. When the wind shifts, the top one moves first, then the bottom one.

I’ve got some data sets using 3-axis sonic anemometers (these guys) where you can see the relationship between the horizontal and vertical components over time.

PaulM, I’m also guessing in your reference, that they are referring to the mean flow, when they say V=0 (e.g., geostrophic flow). In my measurements, V was near zero, but there was definitely a net vertical flux of heat and air: the mean value of the vertical component of wind velocity, “w”, was very much nonzero. In fact this process typically happens on a daily basis… during the day, w is positive (upwards) as the atmospheric boundary layer expands in height, and during the night it is negative. (Typical is the correct word, because atypical conditions like frontal boundaries influence w too of course.)

DeWitt wrote: “Until you can show that a reduction in lapse rate causing a smaller change in surface temperature can produce the same TOA IR emission as no change in lapse rate and a larger change in surface temperature, your conjecture is unproven.”

This appears mistaken. Radiative/convective models of the troposphere POSTULATE that the required net outward flux of energy at any altitude (needed to balance incoming SWR) is provided by net LWR (calculated by radiative transfer equations) and convection – with the convective flux being whatever additional outward flux is needed for energy balance. Below the tropopause, such models automatically take care of net outward energy flux including the TOA IR flux.

I’m not sure what the McGuffie and Henderson-Sellers model that you discussed in your next comment includes, but the assumption of constant relative humidity is relevant to calculating water vapor feedback. Before one can calculate water vapor feedback, one needs to know how much warming has occurred at the surface and in the lower troposphere before feedbacks. If the tropopause warms from 223.0 degK to 224.0 degK due to radiative forcing, how much does the surface roughly 10 kilometers below warm? From 288.0 degK to 289.0 degK? That prediction says the environmental lapse rate over that distance remained constant at 6.50 degK/km. If the environmental lapse rate on the warmer planet shrinks just 1% to 6.435 degK/km due to the increase in convection needed to compensate for the increased opacity of the troposphere to LWR, the surface would be 288.35 instead of 289.0. 65% of the expected surface warming has disappeared before feedbacks have been considered. Water vapor feedback (including feedback on the lapse rate) now amplifies a 0.35 degK surface temperature increase rather than a 1.0 degK increase.
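The numbers in that paragraph can be reproduced directly; the 10 km depth is the figure assumed in the comment:

```python
# Effect of a 1% reduction in the mean lapse rate between a tropopause
# warmed 1.0 degK by forcing and the surface 10 km below.
DEPTH_KM = 10.0
tropopause_after = 224.0          # degK (was 223.0 before forcing)

fixed_lapse = 6.50                # degK/km, unchanged lapse rate
reduced_lapse = 6.50 * 0.99      # 6.435 degK/km, a 1% reduction

surface_fixed = tropopause_after + fixed_lapse * DEPTH_KM      # 289.0 degK
surface_reduced = tropopause_after + reduced_lapse * DEPTH_KM  # ~288.35 degK
print(surface_fixed, surface_reduced)
```

Starting from a 288.0 degK surface, the warming falls from 1.0 degK to about 0.35 degK, i.e. 65% of the expected pre-feedback surface warming disappears, as stated.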

1D models assume that the warming that occurs at the tropopause will occur throughout the troposphere before feedbacks are considered. That would be fine if we had a good theory for why the temperature difference between these two locations is 65 degK rather than 64 or 66 degK – or put in different terms, why the lapse rate is 6.5 degK/km rather than 6.4 or 6.6 degK/km. When the factors that determine the global mean environmental lapse rate are understood (not just the dry and wet adiabatic lapse rates), we will be in a position to decide if the lapse rate should remain constant to within <1% when CO2 doubles. If you can't tell me why the temperature difference between the surface and tropopause is currently 65 degK, why should I believe that it is going to remain at 65 degK when CO2 doubles?

“This appears mistaken. Radiative/convective models of the troposphere POSTULATE that the required net outward flux of energy at any altitude (needed to balance incoming SWR) is provided by net LWR (calculated by radiative transfer equations) and convection – with the convective flux being whatever additional outward flux is needed for energy balance. Below the tropopause, such models automatically take care of net outward energy flux including the TOA IR flux.”

Nope. An R/C model calculates the temperature at a given altitude based on balancing radiative absorption and emission at a fixed surface temperature. Then it checks to see if the lapse rate is greater than some critical value. If it is, then the temperature profile is adjusted and absorption and emission are recalculated. The calculation eventually converges on a particular temperature profile. The critical lapse rate for adjustment has only a small effect on the calculated lapse rate in the troposphere.
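The adjustment step in that iteration can be caricatured in a few lines. A real scheme conserves energy when it adjusts the profile and recomputes radiative absorption and emission between sweeps; this toy (with hypothetical level spacing and temperatures) does neither and is purely illustrative:

```python
# Toy convective adjustment: given temperatures on evenly spaced levels,
# relax any layer whose lapse rate exceeds a critical value back to that
# critical rate, sweeping until no layer is super-critical. A real R/C
# scheme would also conserve energy and redo the radiative calculation.

def convective_adjust(temps, dz_km, critical=6.5, n_sweeps=50):
    """temps: list of degK from the surface upward; dz_km: level spacing."""
    t = list(temps)
    for _ in range(n_sweeps):
        changed = False
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / dz_km
            if lapse > critical + 1e-9:
                t[i + 1] = t[i] - critical * dz_km  # pin layer to critical rate
                changed = True
        if not changed:  # converged: no layer exceeds the critical lapse rate
            break
    return t

# A deliberately unstable profile: 10 degK/km on levels 1 km apart.
profile = [288.0 - 10.0 * k for k in range(5)]
print(convective_adjust(profile, 1.0))  # no layer steeper than 6.5 degK/km
```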

Radiation upward at the TOA depends only on the temperature and ghg concentration profiles. The amount of convective energy transfer only comes into the game if you do an energy balance for a given temperature profile.

If you don’t do the radiative transfer calculations, your adjustment of lapse rate and surface temperature are nothing but hand waving.

Feedbacks can’t amplify warming until warming at the tropopause has been transferred to the surface and lower troposphere. The temperature difference between the tropopause and the surface is controlled by the environmental lapse rate, not the dry or wet adiabatic lapse rates. For 1D models, we need a theory explaining why the mean global environmental lapse rate has a particular value (that agrees with observation) and then we need to know what that theory predicts will happen when the tropopause warms 1 degK due to 2X CO2.

In upwardly convecting regions, the “parcel lapse rate” may be the best estimate for calculating a mean global environmental lapse rate. The parcel lapse rate is the dry adiabatic lapse rate up to the lifting condensation level and the saturated adiabatic lapse rate above. The lifting condensation level depends on temperature and both lapse rates have terms that are somewhat temperature dependent. (Even the heat of vaporization of water varies with temperature.) Upwardly convecting regions are usually cloudy, so the parcel lapse rate could apply to approximately 30% of the surface where convection is important.

In regions where downward convection occurs, the dry adiabatic lapse rate may be the best estimate for calculating a mean global environmental lapse rate. This would apply to 70% of the surface where convection is important.

In the Arctic, Antarctic and temperate zones in the winter, radiative cooling is capable of removing all of incoming solar energy without any assistance from convection. In these regions, horizontal convection (including ocean currents) from warmer latitudes and the melting of frozen water are important to determining the temperature difference between the tropopause and the surface.

Can we combine all of this information to create a method for determining the mean global environmental lapse rate and its temperature dependence so that we can use it in 1D models? Is this theory consistent with observations of the real world, which contains distortions from chaotic motions of the turbulent atmosphere, meridional transport of energy, the inversions that develop near the surface at night, and who-knows-what-else? Should we believe that surface warming due to 2X CO2 before feedbacks will be equal to warming at the tropopause – where it is 65 degK colder?

“Should we believe that surface warming due to 2X CO2 before feedbacks will be equal to warming at the tropopause – where it is 65 degK colder?”

Yes, to a first approximation. You have provided zero evidence that this isn’t a good approximation. A calculation that a 1% change in the lapse rate will reduce the surface temperature change by half for doubling CO2 is meaningless until you show that radiative balance has, in fact, been restored by that change. You haven’t. Hand waving about increased convection doesn’t cut it. Convective heat transfer sufficient to reduce the surface temperature increase by half is, IMO, going to change the temperature profile by a lot more than 1%.

DeWitt: We seem to disagree about the fundamental principles of the simple 1D models based on radiative-convective equilibrium presented by SOD. It is my understanding that: a) High in the atmosphere, temperature is determined by radiative equilibrium. b) Low in the atmosphere (where the atmosphere is relatively opaque to OLR), radiative cooling is not effective enough to balance all incoming SWR without creating an unstable lapse rate. If you remove the latent and sensible heat fluxes from the KT energy budget, surface temp would need to rise to about 350 degK before radiative equilibrium would be achieved. In the troposphere, therefore, temperature is determined by the maximum lapse rate consistent with stability and the temperature at the tropopause, the lowest altitude where radiative cooling is capable of providing all of the needed upward flux. The models POSTULATE that convection provides whatever upward flux is needed for energy balance below the tropopause. (If I’m way off base here, a reference would help. Perhaps it’s time to buy Petty’s text.)

I am confused every time you ask me if “radiative balance has, in fact, been restored”. In these simple models, radiative balance in the troposphere does not need to be restored. Changes in GHGs and temperatures in the tropopause will change the results of radiative transfer calculations, but energy balance in these models will automatically be restored by an appropriate change in convection. Whatever energy flux is needed (in upward W/m2) automatically flows the moment surface warming causes the environmental lapse rate to exceed the lapse rate for buoyant stability. This means the environmental lapse rate doesn’t change (before feedbacks) despite the fact that convection must increase to balance upward and downward energy flux in a more opaque atmosphere.

You said: “Hand waving about increased convection doesn’t cut it. Convective heat transfer sufficient to reduce the surface temperature increase by half is, IMO, going to change the temperature profile by a lot more than 1%.” I’m not saying that convection itself will reduce surface warming by half. Convection will be whatever is required to maintain a fixed lapse rate and balance downward and upward energy flux. Once the tropopause warms, surface temperature in a radiative-convective model is determined by only one other factor – the lapse rate between the tropopause and the surface. A small change in this lapse rate has a big impact on surface warming. No one has shown me why the global lapse rate is 6.5 degK/km. No one has calculated the theoretical lapse rate for a realistic atmosphere with some downward convection regions (where the DALR should apply) and some upward convection regions (where the parcel lapse rate and condensation level appear relevant). No one pays any attention to the fact that the lifting condensation level and the saturated and dry adiabatic lapse rates change with temperature. Everyone simply says a fixed environmental lapse rate through the troposphere is an adequate approximation.
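The temperature dependence of the adiabatic lapse rates raised here is straightforward to illustrate. A minimal sketch using standard textbook formulas (Bolton’s saturation vapour pressure approximation; the pressure level and temperatures are my own illustrative choices):

```python
import math

G = 9.81        # gravity, m/s^2
CP = 1004.0     # specific heat of dry air, J/(kg K)
LV = 2.5e6      # latent heat of vaporization, J/kg
RD = 287.0      # gas constant, dry air, J/(kg K)
RV = 461.5      # gas constant, water vapor, J/(kg K)
EPS = RD / RV

def saturation_vapor_pressure(T):
    """Bolton (1980) approximation; T in kelvin, result in Pa."""
    tc = T - 273.15
    return 611.2 * math.exp(17.67 * tc / (tc + 243.5))

def moist_adiabatic_lapse_rate(T, p=85000.0):
    """Saturated adiabatic lapse rate, K/km, at temperature T and pressure p."""
    es = saturation_vapor_pressure(T)
    rs = EPS * es / (p - es)          # saturation mixing ratio
    num = 1.0 + LV * rs / (RD * T)
    den = 1.0 + LV**2 * rs * EPS / (CP * RD * T**2)
    return (G / CP) * num / den * 1000.0

dalr = G / CP * 1000.0                # dry adiabatic rate, ~9.8 K/km, no T dependence
for T in (260.0, 280.0, 300.0):
    print(T, "K:", round(moist_adiabatic_lapse_rate(T), 2), "K/km")
```

With these numbers the saturated rate falls from roughly 8 K/km at 260 K to under 4 K/km at 300 K, which is exactly the point: the “expected” lapse rate is strongly temperature dependent.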

Yes, you are confused. You clearly don’t understand how radiative transfer in the atmosphere actually works. The atmosphere isn’t totally opaque to LW radiation even at the surface. Some surface radiation escapes directly to space. So if the surface temperature is lower, the temperature at the tropopause would have to be higher, significantly higher, than it would be if the lapse rate didn’t change. That would require a large increase in convective energy transfer. There is no reason to believe that the rate of convective energy transfer is particularly sensitive to temperature.

When greenhouse forcing increases, the emission must increase until radiative balance at the top of the atmosphere is restored. That means an increase in surface temperature that propagates all the way through the atmosphere. The change in surface temperature is relatively easy to calculate assuming a fixed lapse rate. And you don’t do it by calculating the increase in temperature at the tropopause and extrapolating back to the surface. You can do it with MODTRAN on the web, for example. But if you change the lapse rate, it’s more complicated. You need your own radiative transfer program because all the ones available to the average person either for free like MODTRAN or for a fairly reasonable amount of money like Spectralcalc, don’t allow you to change the lapse rate.
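The fixed-lapse-rate calculation described here can be caricatured with the familiar effective-emission-height toy model. This is a sketch with illustrative round numbers, not a substitute for a line-by-line code like MODTRAN:

```python
# Toy model: surface warming from a fixed lapse rate and a rising
# effective emission height. All numbers are illustrative assumptions.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
OLR = 240.0          # outgoing longwave radiation, W/m^2
LAPSE = 6.5          # assumed fixed lapse rate, K/km

T_emit = (OLR / SIGMA) ** 0.25        # effective emission temperature, ~255 K
h_emit = 5.0                          # assumed effective emission height, km
T_surf = T_emit + LAPSE * h_emit      # ~288 K with these numbers

# A more opaque atmosphere raises the emission height (say by 150 m),
# but T_emit must return to the same value to restore TOA balance,
# so with a fixed lapse rate the whole profile (surface included) warms:
dh = 0.15
dT_surf = LAPSE * dh                  # surface warming, K
print(round(T_emit, 1), round(T_surf, 1), round(dT_surf, 2))
```

Changing the lapse rate changes the multiplier between emission-height rise and surface warming, which is why a different lapse rate makes the calculation much more complicated.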

SOD says: “No, it is not a statement that CO2 is at a higher temperature than the surface”

Of course it is. No “direct surface heating” can be done to the surface unless CO2 is at a higher temperature. Whether conductive, convective or radiative, heat only goes in one direction, and since the statement is that CO2 provides 1.2 W/m^2 of “direct surface heating”, it says that T1 (CO2) is higher than T2 (surface).

If the word “direct” means something different in climate science than in the dictionary please let me know.

SOD says: “Both are artificial distinctions but both mean addition of energy to the climate system.”

No energy can be added by forcing. The energy is already there at the surface by the sun. Surely you didn’t mean “addition of energy” and you just mistyped.

The equations in part six (which I reread) contain no proper radiative heat transfer equation showing CO2 imparting 1.2 W/m^2 to the surface. With a CO2 and water vapor mix, Hottel’s charts would be used, and I’ve seen none of that. If there are none, say so; but if they exist and I have yet to see them, please show them.

CO2 molecules do not have more kinetic energy on average than the nitrogen and oxygen molecules in the near vicinity. So CO2 isn’t warmer than the local atmosphere. That’s the basis for local thermal equilibrium which is required for the validity of Kirchhoff’s Law.

I left out a step on Leckner’s emissivity calculation. You calculate emission for different path lengths with the same mass path (total number of CO2 molecules) and extrapolate to infinite path length and zero concentration.

I’m unable to compile, then copy and paste from Word Pro here when answering within a ‘nest’ now (it’s OK for an ‘original post’ to the ‘nest’ like this one though)! Any idea why (Win XP home o/s with MS IE 8 browser)?

I fully agree with you that applying insulation will delay the cooling of an object by slowing conduction or convection.

I don’t consider “direct surface heating” to be done by insulation.

However, the dR in Figure 1 is for radiation, not conduction or convection. So how does radiation insulate, if you wish to use that comparison?

You say:”It prevents heat transfer from the inside to the outside and so as a result the temperature differential must increase.”

Once a photon has left the surface it has “cooled” and it is irrelevant how far above the surface the photon went. The only way that the surface temperature increases is if a photon from a higher-temperature source strikes. If the photon that left comes back (an illustration here, I know the same one won’t come back) then we only maintain the same temp, not increase it.

With the combination of water vapor and CO2 I still have not seen a proper radiative heat transfer equation showing that the surface is heated by CO2 radiating back to the surface. Hottel shows emissivity of the combination to be far less than unity, so CO2 would have to be at a higher temperature.

The interpretation of this equation in words – which is never as good as appreciating the equation itself – is that the change in outgoing radiation depends on a difference between the original source temperature and the atmospheric temperature where the absorption/emission is taking place.

Solve the equation for an atmosphere which interacts with radiation and where the temperature decreases with height and you find that the atmosphere “prevents” more energy leaving via radiation compared with the situation where:
a) the atmosphere doesn’t interact with radiation OR
b) where the temperature stays the same with height.

If you can understand the above sentence then you can see that it is the same effective result as an increased insulator effect that most people can understand.

I realize that many people can’t understand the above sentence and I can’t work out how to write it simpler.

Well, I can work out how to write it simpler, but only by reducing critical elements that make it less accurate.. which then opens it up to the criticism that it is less accurate.. so we are back to more accurate statements that people can’t understand.. which attracts the criticism that the readers don’t understand it and so therefore it must be flawed..

The statement approximately matches the equation. Yet equations are always better than words as they are precise and unambiguous.

The equation can be solved, with boundary conditions of course. Realistic boundary conditions result in more heat being accumulated in the system until the surface temperature increases to the point where radiative balance again applies.

dIλ/dτ = Iλ – Bλ(T)

Prove this equation wrong and undermine stellar physics and 60 years of atmospheric physics, which in turn will lead to a revolution in fundamental physics.

Prove this equation has a solution where a more opaque atmosphere doesn’t lead to a higher surface temperature and you will have something totally new in maths.

Most people can’t understand this equation. That’s totally fine as well, but is a different proposition from this equation not existing.

The equation does exist, the solution exists and the solution to the equation proves the point.

Proving otherwise means proving it wrong, or finding a different solution.
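For readers who want to see a solution rather than take it on faith, here is a minimal numerical sketch. It integrates the Schwarzschild equation upward through a gray atmosphere with a linear temperature profile (written as dI/dτ′ = B − I with τ′ measured along the upward path; the uniform optical-depth distribution and all parameter values are illustrative assumptions, not a real atmosphere):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def olr_gray(T_surf, tau_total, lapse=6.5, h_top=15.0, n=2000):
    """Upward flux at the top of a gray atmosphere with a linear
    temperature profile, by stepping the Schwarzschild equation.
    Optical depth is assumed uniformly distributed with height."""
    dh = h_top / n
    dtau = tau_total / n
    I = SIGMA * T_surf**4              # emission from a black surface
    for i in range(n):
        h = (i + 0.5) * dh
        T = T_surf - lapse * h         # temperature falls with height
        B = SIGMA * T**4               # local Planck source term
        I += (B - I) * dtau            # explicit Euler step of dI/dtau' = B - I
    return I

# A more opaque atmosphere emits less to space at the SAME surface
# temperature, so the surface must warm to restore balance:
print(round(olr_gray(288.0, 1.0), 1), round(olr_gray(288.0, 2.0), 1))
```

The output drops as the total optical depth increases, which is the “more opaque atmosphere leads to a higher surface temperature” result in numerical form: to hold the outgoing flux fixed, T_surf must rise.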

No one has shown me why the global lapse rate is 6.5 degK/km. No one has calculated the theoretical lapse rate for a realistic atmosphere with some downward convection regions (where the DALR should apply) and some upward convection regions (where the parcel lapse rate and condensation level appear relevant). No one pays any attention to the fact that the lifting condensation level and the saturated and dry adiabatic lapse rates change with temperature. Everyone simply says a fixed environmental lapse rate through the troposphere is an adequate approximation.

“Everyone” and “no one” are big calls.

Actually not everyone but most people do pay attention to the fact that the “expected” environmental lapse rate changes with temperature and water vapor content.

c) see how the chaotic fluctuations in lapse rate change this picture – the important point being that even if the averages in the random fluctuations equal an expected “climatological” lapse rate, that doesn’t mean that the average flux to space can be calculated from the average lapse rate.
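The point in (c), that the average flux cannot be computed from the average state, follows from the nonlinearity of emission. A toy illustration with σT⁴ and two equally likely temperatures (the numbers are arbitrary):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

T_cold, T_hot = 278.0, 298.0          # two equally likely states (illustrative)
T_mean = (T_cold + T_hot) / 2.0       # mean state is 288 K

flux_of_mean = SIGMA * T_mean**4                             # flux at the mean T
mean_of_flux = (SIGMA * T_cold**4 + SIGMA * T_hot**4) / 2.0  # mean of the fluxes
print(round(flux_of_mean, 1), round(mean_of_flux, 1))
```

Because T⁴ is convex, the mean of the fluxes exceeds the flux of the mean temperature; fluctuating lapse rates do the same thing to the flux to space.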

SOD wrote: “‘Everyone’ and ‘no one’ are big calls.” Yes, but they are mostly signs of frustration. In these 1D models, feedbacks can’t amplify warming until that warming has been transferred to the surface and lower troposphere. If one commenter had acknowledged this obvious truth, I might have avoided such ridiculous generalizations. IMO, we should recognize that setting surface warming equal to tropopause warming before considering feedbacks is an essential postulate of 1D models. Then we are forced to consider whether this is a reasonable postulate or (another) significant source of uncertainty that has been ignored in the search for a scientific consensus.

A post on observed environmental lapse rates would be great. Hopefully observations will come with some way of distinguishing between lapse rates from convecting and non-convecting regions. Separating regions with upward convection from downward convection would also be interesting, but I suspect horizontal mixing smears out the differences one might expect to find.

That would be a good basis for an article. The TIGR-3 database has all sorts of different surface temperatures, lapse rates, humidity and ozone profiles. They also calculate brightness temperatures, as might be observed by a satellite, for each profile. That data should at least serve as a reality check on your own calculations, as well as having information on the observed range of lapse rates.

Sorry for the delayed response. I seem unable to post within a ‘nest’ here, so I hope you are able to recognise my response to your post # comment-13258 of Sept. 13, 2011 at 3:12 pm.

“The transfer of enthalpy is from the ocean surface to the air as shown in Ramanathan’s Figure 1, not the other way around.”

Your ‘terms’ are confusing! Enthalpy is defined as causing ‘work’ and the only ‘work’ that ‘phase change’ achieves in this scenario is altering the ‘phase’ of a substance. In the case of ‘water evaporation’ all the energy comes from ‘the liquid phase’, but none of that ‘added energy’ (entropy (robbed from) to the liquid phase) is seen in the ‘gas (WV) phase’ as a temperature change. Thus, no ‘sensible temp’ change to the atmosphere during a ‘change of phase’ other than the atmosphere adding a small kinetic ‘kick’ to the newly generated WV molecule. There’s much more thermal energy taken from the near surface atmosphere by ocean absorption of DLR.

This leads me to doubt the rest of your post because;

“The result is cooler moister air with increased enthalpy and cooler water with lower enthalpy.”

is again confusing in physical terms. The enthalpy value only accounts for the kinetic energy used to push the water molecule through the vapour pressure barrier into the atmosphere. However, this does have other repercussions in ‘radiative analyses’, but doesn’t alter the fact that the “water” is left with a ‘higher entropy’ and not a “lower enthalpy”. A “lower enthalpy” just means that the local potential kinetic energy has, for some reason, now got to expend less energy for the same ‘work’ effect. In truth, entropy is increased in the water and entropy is decreased in the air by this phase change.

This may seem confusing, but, as energy ‘seeps away’ from a ‘system’ (system A), its energy reduction is taken to be an increase of ‘entropy’ (increasing towards disorder). However, if there is a recognised ‘sink’ for this energy it can be termed an ‘attractor’ to the energy loss, and if we look at individual ‘packets of energy loss’ to that attractor we can discover how the lost energy is used. When the ‘lost energy’ is used to do ‘work’ on ‘another system’ (system B) we can call this “system A’s” ‘increased entropy’ towards “system B’s” ‘decreased entropy’ by way of “system A’s” ‘enthalpy’ (work done) to “system B”. Wiki isn’t clear on this, but here’s the link to entropy;

Enthalpy is defined as causing ‘work’ and the only ‘work’ that ‘phase change’ achieves in this scenario is altering the ‘phase’ of a substance.

That’s not how enthalpy was defined in my Physical Chemistry book, nor in Feynman’s lectures, nor for that matter in Wikipedia. Enthalpy, H, was defined originally in Chemistry because the real world doesn’t usually operate under constant volume conditions. Constant pressure works better. ΔH = ΔQ + VΔP. There is a work term there, but if the pressure doesn’t change, then enthalpy can change without work being done. Possibly you’re confusing enthalpy with the Helmholtz potential F (or free energy) or the Gibbs potential (or Gibbs free enthalpy).
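The constant-pressure point is easy to check numerically. A sketch with one mole of an ideal monatomic gas (toy numbers): heating at constant pressure does expansion work, yet the enthalpy change ΔH = ΔU + Δ(PV) still equals exactly the heat added:

```python
R = 8.314      # gas constant, J/(mol K)
n = 1.0        # moles of ideal monatomic gas
cv = 1.5 * R   # constant-volume molar heat capacity
cp = cv + R    # constant-pressure molar heat capacity

P = 101325.0                               # constant pressure, Pa
T1, T2 = 300.0, 310.0                      # heat the gas by 10 K
V1, V2 = n * R * T1 / P, n * R * T2 / P    # ideal gas law at fixed P

Q = n * cp * (T2 - T1)          # heat added at constant pressure
dU = n * cv * (T2 - T1)         # change in internal energy
W = P * (V2 - V1)               # expansion work done by the gas
dH = dU + P * (V2 - V1)         # dH = dU + d(PV) at constant P

assert abs(dH - Q) < 1e-9       # at constant P, enthalpy change equals heat
print(round(Q, 2), round(dU, 2), round(W, 2))
```

So ΔH tracks the heat added even though work is being done, while at constant pressure (ΔP = 0) the VΔP term contributes nothing.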

Of course “swamp coolers” work, but only if the RH is below 100%! ‘Swamp coolers’ work by evaporating sprayed water droplets into the atmosphere (unless you speak of the ‘refrigerant’ type). The net result is that all the water evaporates, and in doing so, removes all the value of the ‘latent heat of evaporation’ from the surrounding air (mostly by ‘contact conduction’). Thus, cooling the air, but adding to humidity (RH at the lower temperature).

I said wiki isn’t clear on this and you’ve picked up on this point. We obviously come to this ‘perspective’ from divergent disciplines.

“Enthalpy, H, was defined originally in Chemistry because the real world doesn’t usually operate under constant volume conditions.”

No! The term was defined for useful work from the thermodynamic cycle of ‘steam engines’. The ‘volume’ within the cylinder of a steam engine is rarely ‘constant’ whilst the piston is moving (aside from TDC and BDC)! Chemistry disciplines may well have taken ‘on board’ this ‘definition’ for their purposes, but it originates from the ‘Carnot Cycle’ et al.

“Constant pressure works better. dH=dQ + VdP. There is a work term there, but if the pressure doesn’t change, then enthalpy can change without work being done.” My “d” for “delta symbol”.

As always, this depends on the ‘attractor’ under observation, but, if a ‘chemical reaction’ is involved, you can bet that ‘enthalpy’ is a ‘player’ at some level or other (where the work is done)! Why should you expect this to necessitate a pressure change?

The Carnot cycle was proposed by Carnot in 1824 and expanded by Clapeyron in the 1830’s and 40’s. The Carnot cycle is about heat and work, not enthalpy.

The concept of enthalpy wasn’t proposed until 1875 by Gibbs. Gibbs was the founder of physical chemistry and chemical thermodynamics.

Josiah Willard Gibbs (February 11, 1839 – April 28, 1903) was an American theoretical physicist, chemist, and mathematician. He devised much of the theoretical foundation for chemical thermodynamics as well as physical chemistry.

The “Carnot Cycle” is a ‘theoretical construct’ introduced to improve understanding within the field of thermodynamics. That’s why I placed “et al” after it (there are many physicists and engineers that have added to the concept) and I’m not here to discuss the etymology of ‘enthalpy’ either.

‘Enthalpy’ shows where ‘all’ the energy went and not just the ‘work output’ that you were expecting from the energy input, thus, ‘indicates efficiency’ (‘eta’ h) for the system under observation, or ‘the system of your choice’. From this, it’s easier to investigate ‘other attractors’ that are able to draw energy from the energy source provided ‘to do work’ in the system of your choice.

As an aside, enthalpy isn’t about ‘heat’ per se, it’s about ‘energy’ and its distribution.

The Greek letter ‘eta’ isn’t an ‘h’, it’s an ‘n’ with the RHS trailing below the ‘line of text’. It’s hard to communicate here sometimes, but as you see, I’m now able to ‘nest a post’ here again (must have been a quirk with my laptop). :)

When formulating the energy balance at the surface, why isn’t the back radiation due to direct UV absorption (78 W/m^2 in the Trenberth diagram) taken into account? Since the UV reflected due to albedo gets a second pass through the atmosphere, it would seem the 78 + .23*23 = 83 W/m^2 should cause a significant warming of the atmosphere, which should then lower the surface temperature required to achieve TOA equilibrium (right?). It seems in all the references I have found this effect is ignored, so I’m probably missing something.

The solar radiation “reflected due to albedo” = the amount that is not absorbed in the climate system – this is measured at about 100 W/m2.

Are you thinking of solar radiation reflected from the ground that is then absorbed in the atmosphere?

(I’m assuming that by “UV” you mean “solar radiation”?)

I’m sure some is – but the solar radiation that makes it to the surface is at wavelengths where (mostly) the atmosphere does not absorb:

And so the proportion of surface reflected solar radiation absorbed by the atmosphere is much lower than the proportion of incoming solar radiation absorbed by the atmosphere.

In any case because we measure the total reflected solar radiation by satellite we have a pretty accurate number for the total value. This is in contrast to all values of downward radiation at the surface which cannot be measured by satellite and instead must be measured by ground stations, which are expensive and so sparsely located.

As a result, total solar radiation absorbed by the atmosphere (for example) was a value with significant uncertainty. I believe that much of this uncertainty has been resolved over the past decade, but the point is that the numbers in the Kiehl and Trenberth diagram are not certain. As they said in their 1997 paper:

The purpose of this paper is not so much to present definitive values, but to discuss how they were obtained and give some sense of the uncertainties and issues in determining the numbers.

All of the post albedo gets to the surface, albeit not all directly. What doesn’t get there directly in the form of SW is either absorbed by the atmosphere and emitted down as part of the downward LW flux received at the surface or falls to the surface as part of the energy in the temperature component of precipitation, etc.. Whatever part of the post albedo that ends up leaving the system without ever reaching the surface is just traded off for energy emitted from the surface absorbed by the atmosphere that would otherwise be leaving the planet. The bottom line is Conservation of Energy dictates that the full post albedo gets to the surface one way or another, because the atmosphere cannot create any energy of its own – all it can do is redirect it.

This is what is so confusing about Trenberth’s depiction of 78 W/m^2 designated as ‘absorbed by the atmosphere’. He brings this all to the surface as part of the downward LW flux of about 333 W/m^2 designated as ‘back radiation’. This is highly misleading and why so many seem to be confused. If you (or anyone) really think Trenberth’s depiction is accurate, where is the energy from the temperature component of precipitation in the return path from the atmosphere to the surface? It’s not there, as he brings all the latent heat back to the surface in the form of downward LW designated as ‘back radiation’, which is wrong.
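For reference, the bookkeeping being argued about can be laid out with the round numbers from the Kiehl & Trenberth 1997 diagram (the individual values carry the uncertainties the thread has already discussed; this is just the arithmetic):

```python
# Approximate global-mean surface energy budget, W/m^2, in the style of
# Kiehl & Trenberth (1997). Individual numbers are illustrative, not exact.
solar_absorbed_surface = 168.0   # SW reaching and absorbed by the surface
back_radiation = 324.0           # downward LW from the atmosphere
surface_emission = 390.0         # upward LW from the surface
latent_heat = 78.0               # evapotranspiration
sensible_heat = 24.0             # thermals

gain = solar_absorbed_surface + back_radiation
loss = surface_emission + latent_heat + sensible_heat
print(gain, loss, gain - loss)   # the budget closes
```

Whatever one thinks of how the 78 W/m^2 absorbed by the atmosphere is routed, the surface gains and losses in the diagram do balance to within the stated uncertainties.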

You still don’t understand what I’m claiming or have claimed. The net convective flux from the surface is not zero, as there is net convective loss from the surface to the atmosphere. This is why the surface is cooler than it would be without convection. You also don’t seem to understand the concept of net energy flux. If the surface is emitting 390 W/m^2 in the steady-state, it has to also be receiving a net energy flux of 390 W/m^2. The fact that there is non-radiative flux from the surface to the atmosphere, from the atmosphere to other parts of the atmosphere and from the atmosphere back to the surface doesn’t change this. These fluxes are in addition to the radiative flux at the surface.

The non-radiative fluxes are just moving energy around so the planet’s energy balance is what it is – about a net 390 W/m^2 flux into the surface. If there is an imbalance (i.e. more is leaving the surface than is returning on average), non-radiative flux is just being traded off for radiative flux at the surface, requiring the surface to emit less to achieve equilibrium output power at the TOA. Why is this so hard to understand? The energy flux entering and leaving the TOA is all radiative (i.e. all photons), so all the non-radiative fluxes are in between the surface and the TOA.

If you really think Trenberth’s depiction is accurate, then what is the origin of the energy in the temperature component of precipitation from the atmosphere to the surface? This is not accounted for in the diagram.

And yes, COE dictates that the full post albedo must get to the surface one way or another. Of course, it is entirely possible for a photon from the Sun to be absorbed in the atmosphere (mostly clouds) and eventually be emitted to space without ever reaching the surface. But the same is true of a photon emitted from the surface absorbed by the atmosphere that otherwise would need to be leaving the planet and subsequently getting back to the surface. In such a case, energy from the surface absorbed by the atmosphere is just being traded off for energy from the Sun absorbed by the atmosphere that would otherwise be getting to the surface. So indirectly, the post albedo energy from the Sun gets to the surface.

Are you forgetting maybe that clouds are thermally connected to oceans and energy absorbed by them is really no different than energy directly absorbed into the oceans?

SoD,
Didn’t mean to stir up a hornets’ nest. I was referring to the 78 W/m^2 of SW power (about 22% of the solar insolation) that never makes it to the ground but is absorbed in the atmosphere. Presumably 22% of SW energy reflected due to surface albedo is similarly absorbed on the way back up. This adds up to about 83 W/m^2 that I assume is converted to IR radiation, both out to space and down to the surface. How is this radiation accounted for in the energy balance calculation?

Also SoD can correct me if I’m wrong, but I believe that by definition the albedo is all reflected SW radiation from the Sun, so if some of the reflected SW at the surface ends up absorbed by the atmosphere it by definition is part of the post albedo that enters the system.

I would also take Trenberth’s 78 W/m^2 figure with a grain of salt, as it’s not a directly measured or derived value. While no doubt some of the post albedo is absorbed by the atmosphere, I frankly doubt it is actually this much.

Take UV as an example. The atmosphere absorbs almost all the UV on the journey to the ground. So no UV will be absorbed in the atmosphere from surface reflected solar radiation – because all those wavelengths have already been absorbed. Likewise for the other wavelengths where the atmosphere absorbs.

Take another look at the spectrum of solar radiation measured at the ground. Note the notches (which represent absorption). If you shine this spectrum through the atmosphere which wavelengths will be absorbed?

Having said that no doubt there will be papers which have quantified total absorption of solar radiation. Without checking – which I might do later – the 78 W/m2 may well be the calculation including this reflected & absorbed portion.

SoD,
But doesn’t the absorbed UV or SW energy heat the atmosphere and contribute to the radiation balance? Imagine the Earth was a white body. It seems to me the TOA energy balance would require Ta to rise.

Yes, of course any solar radiation absorbed by the surface or the atmosphere causes the temperature of the climate system to rise.

We know how much radiation is absorbed – in total – by the climate system because we measure the reflected solar radiation at TOA by satellite. We know what went in, we know what came out, so we know quite accurately how much was absorbed.

So there is no “extra energy” that might have been missed, the question remaining is how much is absorbed by the atmosphere and how much by the surface.

We know how much radiation is absorbed – in total – by the climate system because we measure the reflected solar radiation at TOA by satellite. We know what went in, we know what came out, so we know quite accurately how much was absorbed.

That being the case, why is it so difficult to ascertain the climate sensitivity? It would seem at first blush that it would be easy to determine the solar cycle amplitude in the surface temp data and to ratio this with the known input amplitude to get the dT/dS which would include all (quick) feedback mechanisms. This could then be converted to dT/dF given the albedo and atmospheric absorption. Since in the IPCC formulation all forcings are considered to see the same gain (dT/dF), couldn’t one get an empirical estimate of sensitivity this way?

Because the climate sensitivity we want to know is very low frequency, hundreds to thousands of years, and the frequency behavior is complex. If you have an RC circuit, you can’t characterize it with a measurement of the impedance at one frequency, unless you’re very lucky and have a phase shift near 45 degrees at the measurement frequency. And the climate isn’t anywhere close to being as simple as an RC circuit.
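The RC analogy can be made concrete. A sketch with arbitrary component values: the gain measured at one high frequency says almost nothing about the DC (equilibrium) gain, which is the analogue of equilibrium climate sensitivity:

```python
import math, cmath

def rc_gain(f, R, C):
    """Complex transfer function of a single-pole RC low-pass filter."""
    return 1.0 / (1.0 + 2j * math.pi * f * R * C)

R, C = 1.0, 10.0                 # time constant RC = 10 "years" (illustrative)
g = rc_gain(1.0, R, C)           # probe at a 1-year period

print(round(abs(g), 4), round(math.degrees(cmath.phase(g)), 1))
print(abs(rc_gain(0.0, R, C)))   # DC gain is 1.0: very different
```

At the probe frequency the gain is tiny and the phase is near −90°, far from the ~45° region where a single measurement would pin down both R·C and the DC gain.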

“Because the climate sensitivity we want to know is very low frequency, ”

We’re interested (at least for the AGW issue) in time responses on a decadal time scale. Feedbacks with time constants of 100kyrs are completely decoupled – their poles would have a negligible effect on the 2x CO2 step response. And to use your analogy we need not characterize the impedance to get the transfer function. Drive it with white noise, cross correlate the measured output variance with the known input and FFT. Voila – G[s]

I see. So it’s accounted for in the radiation totals but there is uncertainty about how much to attribute to the various contributors. Thanks. BTW, I can’t figure out how to nest a response. I just see one response box and it always puts it at the bottom of the page.

“We know how much radiation is absorbed – in total – by the climate system because we measure the reflected solar radiation at TOA by satellite. We know what went in, we know what came out, so we know quite accurately how much was absorbed.

That being the case, why is it so difficult to ascertain the climate sensitivity? It would seem at first blush that it would be easy to determine the solar cycle amplitude in the surface temp data and to ratio this with the known input amplitude to get the dT/dS which would include all (quick) feedback mechanisms. This could then be converted to dT/dF given the albedo and atmospheric absorption. Since in the IPCC formulation all forcings are considered to see the same gain (dT/dF), couldn’t one get an empirical estimate of sensitivity this way?”

Yes, one can. The ratio of surface emitted power to post albedo solar power is only about 1.6 to 1 (390/240 = 1.625), meaning it takes about 1.6 W/m^2 of surface emission to allow 1 W/m^2 to leave the system at the TOA. This is also the origin of the so-called ‘Planck’ response or ‘zero-feedback’ response in the IPCC’s formalism (3.3 W/m^2 x 1.625 = 5.4 W/m^2 = 1 C via Stefan-Boltzmann).

The fundamental problem is that this ratio of power densities already includes the lion’s share of all the feedbacks in the system from hundreds, thousands, millions of years or more of solar forcing. How could it not? To get a 3 C rise from the 3.7 W/m^2 of ‘forcing’ from 2xCO2 requires an amplification factor of 4.5 (16.6/3.7 = 4.49), which is way outside the system’s bounds. If the 3.7 W/m^2 of ‘forcing’ is supposed to be equivalent to post albedo solar power, how can watts of GHG ‘forcing’ be more effective at warming the surface than watts from the Sun? Moreover, if 3.7 W/m^2 is to be amplified into the 16.6 W/m^2 required for a 3 C rise, why doesn’t it take 1077 W/m^2 at the surface to offset the 240 W/m^2 from the Sun? (16.6/3.7)*240 = 1077.
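The Stefan-Boltzmann arithmetic behind the figures quoted above can be checked directly (a sketch; 288 K and 240 W/m^2 are the usual round numbers, and whether the ratio argument itself is valid is exactly what the thread is disputing):

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SURF = 288.0      # approximate mean surface temperature, K
SOLAR = 240.0       # approximate post-albedo solar absorption, W/m^2

# dF/dT = 4 sigma T^3: change in blackbody surface emission per 1 K
dF_dT = 4 * SIGMA * T_SURF**3
print(round(dF_dT, 2))                      # ~5.4 W/m^2 per K at 288 K

# ratio of surface emission to post-albedo solar power
print(round(SIGMA * T_SURF**4 / SOLAR, 3))  # ~1.63
```

So the ~5.4 W/m^2 per degree and the ~1.6:1 ratio both follow from the round numbers; the disagreement is over what the ratio is allowed to tell us about feedbacks.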

>Yes, one can. The ratio of surface emitted power to post albedo solar power is only about 1.6 to 1 (390/240 = 1.625), meaning it takes about 1.6 W/m^2 of surface emission to allow 1 W/m^2 to leave to the system at the TOA.

Hmm. I doubt whether the absolute power relationship has much bearing on the sensitivity – the system certainly is not that linear. What we really want is the gain to a small perturbation around the nominal forcing. Seems like the solar cycle variations should do the trick nicely. As for your other point re the time lags, cross spectral density studies can get around this problem. For unknown system G[s] presumed linear for small perturbations and known input PSD Sxx and output PSD Syy, G[s] can be estimated quite accurately from Sxy/Sxx where Sxy is the cross power spectral density. G[s] is the frequency domain response which includes the poles from the system time lags. |G[0]| gives the small signal sensitivity.
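The Sxy/Sxx estimator described here can be sketched in a few lines: a synthetic single-pole system driven by white noise, with Welch-style averaged periodograms (the time constant, segment length and all other parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 200000, 1.0
tau = 20.0                        # system time constant (illustrative)

# Simulate y' = (x - y)/tau driven by white-noise forcing x
x = rng.standard_normal(N)
y = np.zeros(N)
for i in range(1, N):
    y[i] = y[i-1] + dt * (x[i-1] - y[i-1]) / tau

# Welch-style averaged periodograms -> G(f) ~= Sxy / Sxx
seg = 4096
nseg = N // seg
Sxx = np.zeros(seg // 2 + 1)
Sxy = np.zeros(seg // 2 + 1, dtype=complex)
for k in range(nseg):
    X = np.fft.rfft(x[k*seg:(k+1)*seg])
    Y = np.fft.rfft(y[k*seg:(k+1)*seg])
    Sxx += (X * np.conj(X)).real
    Sxy += Y * np.conj(X)
G = Sxy / Sxx
print(round(abs(G[1]), 3), round(abs(G[200]), 3))  # gain falls off above the pole
```

|G| at the lowest resolved frequency recovers the equilibrium gain (≈1 here), while the gain well above the pole is much smaller, illustrating why low-frequency sensitivity cannot be read off a high-frequency probe.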

Yes, the absolute response doesn’t give the actual sensitivity, but more an upper bound on the sensitivity to incremental ‘forcings’ because net negative feedback is required for basic stability and maintenance of the current energy balance.

The ultimate point is one cannot arbitrarily separate the physical processes and feedback mechanisms that maintain the net 390 W/m^2 surface flux relative to the 240 W/m^2 solar flux (the 1.6 to 1 ratio) from those feedback mechanisms that will act on incremental GHG ‘forcing’ in the system. This is where the IPCC’s ‘forcings and feedbacks’ formalism breaks down and where the ultimate starting point of climate science – the 1.1 C ‘zero-feedback’ response to 2xCO2 – is not accurate.

I’ve been thinking a lot about this lately and I’m starting to agree with your assessment of the IPCC’s formulation of the problem. It is hard to find validity in the concept of a “no-feedback” climate sensitivity. A climate without any feedback whatsoever (except of course radiant balance) is a very different looking system than one with feedback.

How would you parametrize such a system? How much water vapor and aerosols are in the air? What are the equilibrium GHG concentrations? What’s the “no feedback” optical depth, absorption and albedo? Even more, the equilibrium temperature of such a system will likely move substantially once the feedbacks are in place. Since the system is highly non-linear, by definition the no-feedback sensitivity is a function of the operating point (equilibrium temperature). Simple one-dimensional modeling should convince anyone how sensitive this no-feedback gain is to operating point (3:1 with a 10 degree change in Te). Thus the no-feedback sensitivity must be calculated (because it can’t be measured) at the “with feedback” equilibrium temperature, which in turn depends on the no-feedback sensitivity. Seems like a very tough nut to crack.

I think it’s more of a marketing gimmick to convey the notion of amplification than a valid model of reality. But that then begs the question: what _is_ the amplification factor, and what empirical methods might be employed to narrow the uncertainty? The climatologists turn to GCMs for answers, but without empirical verification, I doubt policy makers will ever act based on modeling alone.

I do not see (in the post or the comments) the terms “increase of the tropopause height” and “partial pressure of CO2 at the tropopause”. The partial pressure of CO2 at the Earth’s tropopause is about 0.12 mbar, at the Venusian tropopause about 0.4 mbar. At a doubling of CO2, the partial pressure of CO2 at the Earth’s tropopause stays approximately constant, or increases at most slightly. This increases the thickness of the troposphere and, at a constant lapse rate, the temperature difference across the troposphere. This temperature increase is distributed roughly 1/4 to an increase in the surface temperature and 3/4 to a decrease in stratospheric temperature – this is how it is measured, and it can easily be derived from the possible changes in the spectrogram of the Earth’s emission. Water vapor feedback is not helpful.

Here is a dissertation that attempts the frequency domain analysis I alluded to above (http://www.princeton.edu/~marsland/Junior_Paper.pdf ). The author was unsuccessful due to errors in both his assumptions and his data processing techniques. He assumed from the outset (eq 1) that the feedback could be modeled as a simple delay, which would exhibit a linear phase vs frequency relationship. The result in figure 5 shows clearly that the assumption was incorrect. That curve is characteristic of a system with a single pole at approximately 1/2.5 years. As for the analysis, he should have re-sampled both data sets via interpolation rather than zero padding, recognizing that the effect of the interpolation function cancels in the ratio. This would have allowed greater spectral resolution and solved his phase wrapping problem.

I would like to attempt a correct analysis but don’t know where to get the time series he uses. Can anyone point me to someplace where these can be downloaded?

There are a couple of problems with his approach. First, his time records are far too short to provide meaningful results. Imagine the climate system’s impulse response is h(t) = A e^(-t/tau1) + B e^(-t/tau2) where tau2 >> tau1 and B >> A. The difference in time constants is assumed large enough that the two paths are decoupled, so that the superposition of the individual impulse responses adequately approximates the system response, as in the equation above. Over a short time record, the second term will be nearly constant and so undetectable when we differentiate to get the gain. This leads to deriving the system gain from the first term alone when in fact it is dominated by the second. In his analysis he sees a 60 degree phase shift in-to-out at the annual periodicity. This corresponds to a pole at ~1/4 months. The time constant associated with the ocean is measured in years (about 7 as I recall), so the problem I outlined above is manifest.
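To sketch this point numerically (a toy model with the impulse response in the conventional e^(-t/tau) form; the gains and time constants are illustrative assumptions, not fits to any data):

```python
import numpy as np

# Toy two-path climate response: a fast, small-gain path and a slow path
# whose equilibrium gain dominates. All numbers are illustrative.
A, tau1 = 1.0, 0.3   # fast path (time constant in years)
B, tau2 = 0.5, 7.0   # slow path; equilibrium gain B*tau2 >> A*tau1

def step_response(t):
    """Response at time t (years) to a unit step in forcing at t = 0."""
    return A * tau1 * (1 - np.exp(-t / tau1)) + B * tau2 * (1 - np.exp(-t / tau2))

equilibrium_gain = A * tau1 + B * tau2   # what the system eventually does
apparent_gain = step_response(1.5)       # what an 18-month record can see

print(f"equilibrium gain: {equilibrium_gain:.2f}")
print(f"apparent gain after 18 months: {apparent_gain:.2f}")
```

With these assumed numbers the 18-month record recovers only about a quarter of the equilibrium gain; the slow term masquerades as a near-constant offset.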

The other problem is doing the analysis in the time domain. Time lags cause phase shifts which can add destructively, reducing the apparent sensitivity. Such analysis should be carried out in the frequency domain where the effects of the system leads and lags are easily sorted out.

I don’t understand your criticism. The direction of the feedback is independent of the length of the response time. And 25 years worth of data is plenty long enough for longer-term feedbacks to manifest themselves, globally averaged. The only exception is the ice albedo feedback, which is very small even by the IPCC’s quantification.

The point is that as radiative forcing increases, the ratio of emitted surface power to incident solar power decreases. This is the exact opposite of the behavior predicted by the IPCC’s computer models.

His time series are only 18 months. And neither the sign nor the magnitude of the feedback (if any) can be discerned from this type of analysis. It’s the total transfer function he’s attempting to measure, which includes the feedback(s) lumped in.

His time series are 18 months (look at his plots). Averaging multiple cycles from 25 years of data to create a single 18 month time record reduces the amplitude variance, but the time series length is still 18 months.

“His time series are 18 months (look at his plots). Averaging multiple cycles from 25 years of data to create a single 18 month time record reduces the amplitude variance but the time series length is still 18 months.”

Not for the global average response from which the sensitivity estimate is derived. It’s also not 18 months, but one year, with a repeated, extended plot of the first 6 months added on to make the pattern relationships easier to see. It’s true that using 25 years of averaged data reduces the amplitude variance slightly, but the fundamental behavior is not altered as a result of this. Nor is the direction of the feedback dependent on the ultimate response time of the system to changes in forcing. Also, the data reveals that the response time of the system (its time constant) must be significantly shorter than your estimate, or else the magnitude of change that occurs could not occur.

“Not for the global average response from which the sensitivity estimate is derived.”

Slicing the data into 25 yearly records and averaging them by month does not give you the total global average response. As his own description (limited as it is) of his method makes clear, he didn’t de-trend the data because his method is insensitive to trends. This means it is also insensitive to (or greatly attenuates) slow responses from feedbacks whose time constants are large w.r.t. 1 year.

“but the fundamental behavior is not altered as a result of this”

Of course the method of analysis does not alter the behavior of the system being measured. The question is whether the method can accurately characterize the behavior. For the reasons shown in my first post, his method is insensitive to long term feedbacks which may (or may not) be significant to the question at hand.

“Also, the data reveals the response time of the system (time constant) must be significantly shorter than your estimate or else the magnitude of change that occurs could not occur”

You’re making my point here. Perhaps you’re thinking the climate system has only one feedback path when in fact there are many, each with its own gain and time constant. He can only measure the response of short time constant feedbacks because he has averaged out the longer ones.

The global response is the globally averaged surface emitted power divided by the average incident post-albedo solar power. This is from 25 years of averaged data, which is plenty long enough for longer-term feedbacks to fully manifest themselves. The only exception is the ice albedo feedback, which, as I mentioned before, is very small even by the IPCC’s quantification. What greater-than-25-year long-term feedbacks are you referring to? Most of the ‘enhanced’ positive feedback required for the 3 C rise hypothesis comes from positive water vapor and cloud feedback, which operate on relatively short time scales (weeks to months) – not years or decades.

What I mean by the fundamental behavior not being altered is the direction of the net feedback that occurs from changes in forcing. There is a delay between initial forcing and final effect which is longer than the measured yearly changes, due to the thermal inertia of the oceans, but there is no physical basis why this fundamental measured shorter-term behavior would change long-term, let alone switch to the 300% net positive required for a 3 C rise.

The bottom line is that, globally averaged, it only takes about 1.6 W/m^2 of radiative surface emission to allow 1 W/m^2 to leave the system, and as the radiative forcing and surface temperature increase, the ratio of surface emitted power to incident solar power decreases, which means increases in forcing are being opposed by the system – the exact opposite of the behavior predicted by the IPCC models.

To get 3 C requires a gain of 4.5 (16.6/3.7 = 4.5), which is so far outside the system’s bounds as to be completely unsupportable, especially given the measured behavior to changes in forcing is exactly the opposite required.

The only way to reconcile this discrepancy is if watts of GHG ‘forcing’ have a far greater ability to warm the surface than watts from the Sun. I can’t accept this.

If you are going to simply restate your position without responding to the counter-points raised, it’s probably a good sign that this discussion has reached a terminus but here’s one final attempt then I’ll give you the last word.

You have to show how a method that is insensitive to trends can nevertheless be sensitive to slow but potentially large feedbacks. I claim such a method doesn’t exist, because over short time scales large feedback trends are indistinguishable from a trend in the input data.

The scale that matters here is the scale of the resultant time series (12 months; as you point out, beyond that it just repeats), not the scale from which the aggregate was created. Here’s why. Imagine the output series is composed of the annual sinusoid and a long slow exponential, as might be encountered due to a step change in forcing. If we slice the data into yearly time slices and average all the Januaries, all the Februaries etc., we have lost an important temporal relationship, namely that, due to the exponential, Jan’88 < Jan’89 < Jan’90 etc. Once lost, the true sensitivity cannot be found.
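A quick numerical sketch of this loss of information (the 7-year time constant and the amplitudes are assumed purely for illustration): build 25 years of monthly data as an annual sinusoid plus a slow exponential, then average by month across years.

```python
import numpy as np

years, tau = 25, 7.0
t = np.arange(years * 12) / 12.0     # monthly time axis, in years
annual = np.sin(2 * np.pi * t)       # the yearly cycle
slow = 1 - np.exp(-t / tau)          # slow exponential response to a step in forcing

# Average all the Januaries, all the Februaries, etc.
annual_by_month = annual.reshape(years, 12).mean(axis=0)
slow_by_month = slow.reshape(years, 12).mean(axis=0)

survives = annual_by_month.max() - annual_by_month.min()  # annual cycle range
retained = slow_by_month.max() - slow_by_month.min()      # slow trend left over
actual = slow[-1] - slow[0]                               # slow trend in raw data

print(f"annual cycle range after averaging: {survives:.2f}")
print(f"slow response retained: {retained:.3f} of {actual:.3f}")
```

The annual cycle survives the averaging intact, but only a few percent of the slow response’s trend remains, so a long-time-constant feedback is effectively invisible to the method.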

First, you have a very informative and well written blog, my compliments.

There are a few assumptions you mention that I think are poorly considered. From a more classical perspective, they appear to be inconsistent with expectations and observations. I am attempting to write a more detailed description of my visualization of the atmospheric effect, which I hope you will critique.

For one thing, the net surface energy flux isn’t 390 W/m², it’s nearly zero when averaged over the entire planet for a year. If it weren’t nearly zero, then energy would either be accumulating in the ocean or it would be decreasing. ARGO measurements show at most a very small increase, less than 1 W/m². Now if you mean the surface energy flux in the thermal IR, the net flux still isn’t 390 W/m², it’s about 70 W/m². If you’re talking about the gross energy flux upward including convection, it is ~490 W/m². The gross flux upward isn’t 4900 W/m² because the upward and downward fluxes wouldn’t balance at that level.

That would be gross, not net and the incoming flux is ~490 (324 + 168 ) W/m², not 390 W/m². The reason is because that’s the flux required to maintain an approximate energy balance at the top of the atmosphere given the current composition of the atmosphere and the planetary albedo. This has been explained to you more than once.
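As a bookkeeping check, the gross/net distinction can be verified with the Kiehl & Trenberth (1997) global annual mean surface values referred to throughout this thread (all in W/m²):

```python
# Kiehl & Trenberth (1997) global annual mean surface fluxes (W/m^2).
absorbed_solar   = 168   # SW absorbed at the surface
back_radiation   = 324   # downward LW absorbed at the surface
surface_emission = 390   # upward LW emitted by the surface
latent           = 78    # evapotranspiration
sensible         = 24    # thermals (conduction/convection)

gross_in  = absorbed_solar + back_radiation      # the "~490" gross figure
gross_out = surface_emission + latent + sensible
net_ir    = surface_emission - back_radiation    # the "~70" net thermal IR figure

print(f"gross in:  {gross_in}")                   # 492
print(f"gross out: {gross_out}")                  # 492
print(f"net (in - out): {gross_in - gross_out}")  # 0
print(f"net thermal IR: {net_ir}")                # 66
```

The gross input (~490 W/m²) balances the gross output, the net surface flux is ~zero, and the net thermal IR comes out near the ~70 W/m² mentioned above.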

The interesting thing with both K&T drawings is the 20-24 Wm-2 difference from the NASA data in atmospheric absorption of OLR. David Turner, of the UW, did his PhD thesis on minimal local emissivity variance in the Arctic and noted 1 RU, or ~20K, not accounted for in instrument error or calibration, that appeared to be absorbed by the atmosphere and not included in the calculations. 20K is roughly 80 Wm-2, with a range of 60 to 100 depending on the surface or TOA frame of reference.

So my pondering relativistic relationships aside, that is a fairly significant error if it is indeed an error.

Since the ratio of atmospheric absorption versus surface absorption of incoming solar appears to be an important factor in climate stability, it is a bit unrealistic to believe the value of the K&T downwelling Longwave versus Angstrom’s early measurement, which made sense and was properly compensated for local temperature.

Which NASA data? K&T and KT&F somewhat arbitrarily picked 40 W/m² as the amount of OLR that is transmitted directly to space. They took the clear sky transmission of ~100 W/m² and reduced it to 40 W/m² because 60% of the surface is covered on average by clouds, and clouds are completely opaque to OLR. If you calculate OLR by ignoring clouds and assuming that all the water in the atmosphere is present only as vapor, you get a different number, on the order of 65 W/m². But that’s really minor in the overall energy balance. It means less is radiated to space by the atmosphere and clouds (because the assumption means there are no clouds), so the total amount radiated up and down still balances the amount of OLR and DSW absorbed by the atmosphere. It only matters if you’re calculating an average value for τ. And if you ignore clouds, you get the wrong answer.

No, clouds seem to be in there as far as incoming. The OLR net in both K&Ts are close to NASA but off by ~20 Wm-2. Since the thermal and latent match, it has to be OLR. The NASA budget also considers OLR absorbed by liquid water and ice, it looks like, while K&T do not. I have gone through the text and K&T have huge differences in measured radiant flux, it kinda looks like they just picked the wrong number. ~20Wm-2 missing or 20K used instead of Flux? It is a big miss.

That’s when I started looking at the Satellite calibrations, “In summary, the approximate 1 RU bias between the AERI and the LBLRTM in clear sky conditions is probably not due to calibration errors in the instrument, but is most likely atmospheric absorption that is not accounted for in the calculation.”

I can figure approximate DWLR a dozen ways that agree with Angstrom, but no way that agrees with K&T at the surface, TOA, yes, but not at the surface or top of the troposphere. It looks like a simple mistake with a pretty big impact.

The Turner dissertation is dated 2003. The latest version of the HITRAN molecular spectral database is 2008. It’s very likely that the problem noted by Turner was corrected in the new version of HITRAN. Try looking for a comparison of line-by-line model calculations that use the latest version of HITRAN and AERI or other FT-IR atmospheric spectra.

“Yes, I do, because that’s what the measurements show and that’s what’s required to close the energy balance.”

Well that does simplify things. We can just neglect that 390 Wm-2 minus 174 Wm-2 solar absorbed at the surface is 216 Wm-2, and assume that it takes 320 Wm-2 at the surface to create that difference at the surface. It would not really matter that 320 Wm-2 at the TOA would be reduced to 216 Wm-2 at the surface due to the increasing opacity of the atmosphere with increasing density and the addition of water vapor. Of course, since the average surface temperature is 0.8 C warmer than the air 1 meter above the surface, we could just assume that DWLR is 390 Wm-2 minus a touch. Since CO2 happens to have a greater impact at the upper troposphere, where there is less competition with water vapor, we can assume there is perfect energy transfer from the upper troposphere to the surface, like opacity is a two way mirror that only interacts with infrared in one direction.

Or we can use the tropopause for a reference, that would give a DWLR value of ~160Wm-2. Since the atmosphere is a two way mirror, we can pick any number we wish.

Now, if DWLR does obey basic thermodynamics, where it occurs could matter, unless we model the atmosphere as a two way mirror. That does simplify things.

It is way too complicated to have the surface balance, the atmosphere balance and the TOA balance.

The measured values in Figure 1 are just that – measurements; they give no reason why the values are what they are. The question being asked is “why these values?”. The surface temperature is determined and changed by absorption of radiation, emission of radiation and convective energy transfer, until the temperature gradient in the troposphere is about 6.5 K/km. This has disappeared from this discussion, and that just hides the “why”.

“I can figure approximate DWLR a dozen ways that agree with Angstrom, but no way that agrees with K&T at the surface, TOA, yes, but not at the surface or top of the troposphere. It looks like a simple mistake with a pretty big impact.”

Even better than your ideas are measurements of DLR from a network of high quality calibrated base stations around the globe.

I am working on that. The BSRN paper you linked is one I have been looking at for a while. Most of the low humidity environment readings agree well with what I am getting, 200 to 240 ish. Even Boulder in January. It is a bit difficult to spatially average the sparse readings and adjust for humidity, which can cause higher than expected readings. Mid-troposphere temperatures tend to agree with what I would expect. With a better estimate of minimum local emissivity variance, I may be able to use the mid-troposphere to estimate a global average, converted to actual temperature instead of anomaly. Lots of paywalls though.

What I meant by the question is why does the surface receive a net energy flux of about 390 W/m^2?

And DeWitt Payne’s response explained yet again and added the all important point:

..This has been explained to you more than once.

Let’s not count them up or link to the many places where basic questions of yours have been answered or elementary mistakes of yours have been corrected.

Future repetitions of basic questions by you will be deleted. You have had your air time in the comments of more than one article. This blog is littered with your questions where you demonstrate that you don’t understand absolute basics.

Fair enough.

But not much point adding to the list. For all our sakes, and especially for readers who would like to learn something useful.

If you have a new subject or have finally understood something that has been explained and now want to ask the next question fire away. Otherwise why not start your own blog where you can proclaim to the world whatever it is that you don’t understand.

I re-read your post on DWLR. Excellent as usual. I am not one who cares to “ditch” laws of physics or “invent” relationships to force data to match my theories.

The average temperature of Antarctica is -49C (224K), which corresponds to 143 Wm-2 assuming emissivity = 1. This temperature is courtesy of DWLR created globally and distributed by the potential energy of the atmosphere, pressure. At any particular point on the Globe, California for example, the value of DWLR will likely exceed this value; it appears the global average might be 216 Wm-2. 320 Wm-2 is a bit difficult to reconcile, though if one wishes to “Force” data to match personal “theory”, any old rationalization will do.

I was under the impression you were curious about why nature agrees with classical physics and seems to be thumbing its nose at post-modern theory. Another year or two and it will be obvious.

Oh, I nearly forgot, CO2 has a non-linear impact on the conductivity of a mixed gas. Craziest thing you have ever seen. CO2’s maximum conductivity as a gas is at -20C. It seems that as the Antarctic warms from ~-49C, its ability to cool conductively increases. Of course, the conductivity of air changing from ~0.0238 W/(m·K) to ~0.024 W/(m·K) is negligible :)

“The average temperature of Antarctica is -49C (224K), which corresponds to 143 Wm-2 assuming emissivity = 1. This temperature is courtesy of DWLR created globally and distributed by the potential energy of the atmosphere, pressure. At any particular point on the Globe, California for example, the value of DWLR will likely exceed this value; it appears the global average might be 216 Wm-2. 320 Wm-2 is a bit difficult to reconcile, though if one wishes to “Force” data to match personal “theory”, any old rationalization will do.”

[Emphasis added]

Perhaps you believe Antarctica is a larger part of the globe than is generally calculated.

The observations you provide do not necessarily include corrections for minimum local emissivity variance. The Arctic and Antarctic may be small areas compared to the total surface, but they benefit the most from the atmospheric effects. The radical surface pressure changes in the polar regions are indications of the net flux magnitude and direction due to convective heat transfer, a non-trivial percentage of the atmospheric effect, which is not directly a radiant impact but an indirect impact due to energy transferred in and near the tropics.

The changes in pressure and density associated with polar vortices impact the accuracy of spectral analysis, as referenced in the thesis by David Turner, or the MLEV issue.

While the spectral analysis you provide indicates a higher value of DWLR, these measurements are potentially the least accurate of all direct measurements available. Using basic thermodynamics, the indicated value of DWLR, after allowing for latent shift in the atmosphere, would be ~216. There are several ways this can be estimated, i.e. Kimoto, Lindzen, and others.

However, Kimoto, Lindzen and others have not allowed for the ~20 Wm-2 missing in the K&T analysis, which, if you review the paper, was confounded by the obvious inaccuracy of direct flux measurement.

Since direct measurements of temperature and pressure are the more accurate of the observational data, the thermodynamic approach, using the surface with respect to potential temperature at altitude, appears to allow for changes in minimum local emissivity variance. A useful analytical tool in my opinion.

The question you should ask, is which data is inherently the most accurate?

You have an enormous amount of stuff here to consider, but it occurs to me that if the temperature of the surface has increased, then, since the incoming energy (from the Sun[1]) seems not to change much, it would seem that either the heat capacity of the atmosphere and hydrosphere has changed/increased, or the insulating ability of the atmosphere has changed/decreased, or both.

Bringing the discussion back to everyone’s favorite topic, climate sensitivity to a doubling of CO2[2] in the atmosphere, it would seem that what we have to determine is:

What is the impact of a doubling of CO2 on the heat capacity of the atmosphere, and

What is the impact of a doubling of CO2 on the thermal conductivity of the atmosphere.

However, perhaps I am wrong.

1. There have been suggestions that while the variance in the energy present in the visible spectrum is small, there might be larger variance in the IR spectrum, which can be absorbed by the atmosphere.

2. A recent article on WUWT points out that with higher concentrations of CO2 in the atmosphere, plants need less water, and thus the amount of H2O in the atmosphere over land, a possibly more potent greenhouse gas, is reduced, probably more than offsetting the effect of the increased CO2.

I have purchased a copy of Thermodynamics by Sears (1953 edition, but I doubt that this stuff has changed much) which I am currently reading so that I understand this stuff better. It was cheap at $12.95 from a local second-hand book store, and I remember Sears and Zemansky or some such as authors or another text on Thermodynamics.

I hope that my recent question does not betray too much ignorance … I have only gotten to chapter 3 but now I have to fix my garbage disposal …

As far as IR thermometers and wavelength filtering go, the ones that measure the ratios of emission at two wavelengths do indeed use narrow bandwidth filters and don’t have to be corrected for the emissivity of the source. Narrow bandwidth means the detector has to have high sensitivity. That means big bucks. The ~$50 hand-held IR thermometers work much like a pyrgeometer, because thermistors to measure the temperature of the detector and thermopiles to measure heat flow into or out of the detector are relatively inexpensive. If the hand-held IR thermometers were insensitive to emission from water vapor, they couldn’t be used to measure total column water vapor, and they can: http://journals.ametsoc.org/doi/pdf/10.1175/2011BAMS3215.1

Nobody takes note of the layer structure of the atmosphere. The radiative transfer equation, together with radiative balance, is valid in the stratosphere. Below it is the troposphere, where one can indeed write down the radiative transfer equation – but there is no radiative balance; it is negligible there. In the troposphere the adiabatic laws act. More CO2 means a thinner stratosphere and a thicker troposphere. Water vapor feedback is only an auxiliary to explain the change in temperature at the surface. How much CO2 is in the upper troposphere is of no interest – what matters is how much CO2 is in the lower stratosphere.

“The simplest method for doing this is to calculate the adiabatic lapse rate from the temperature and calculate the radiative energy flow from the temperature.”

That’s all well known – but this does not help as a starting point for a feedback factor, at most as a result. The question is: how does the increase of CO2 act? Constant lapse rate in the troposphere and radiative balance in the stratosphere. The increase of CO2 therefore acts mainly through the displacement of the tropopause.

To explain the effect of increasing CO2, a description of the state is not enough – what matters is the cause of the increase of the surface temperature: namely, the change of the tropopause.

Gold already suspected in 1908 ( http://www.archive.org/details/philtrans05311580 p. 66) the principle of the increase in the height of the tropopause: “It appears therefore that the greater the absorbing power of the atmosphere for terrestrial radiation, the greater will be the height at which the isothermal condition begins, apart from other considerations.” The transition from the stratosphere into the troposphere happens where the Brunt–Väisälä frequency (http://en.wikipedia.org/wiki/Brunt%E2%80%93V%C3%A4is%C3%A4l%C3%A4_frequency) at the bottom boundary of the stratosphere becomes real, according to the radiative transfer equation, so as to maintain radiative equilibrium.

I still note a lot of discussion seemingly implying high accuracy of radiative transfer calculations. Is this not what is quantified in this paper? http://www.agu.org/pubs/crossref/2004…/2003JD004457.shtml

I keep asking a simple question: if calculations of theoretical upwelling and downwelling radiation each have uncertainties of 1% to 2% say, then how can you possibly assume that a net difference (eg 100.2 – 99.7 = 0.5) is accurate enough to actually calculate whether that 0.5 really is positive?

And, after all, in any one location it varies between positive and negative both in daily cycles and in annual (seasonal) cycles. How can all possible world-wide 24/365 calculations be added to give, with any reasonable certainty, a net positive downwelling flux?

In the real world, the net cumulative downwelling radiative flux at TOA from January 2003 to the present has actually been slightly negative, I suggest, because there has been a net fall in sea surface temperature – 12 mo to Sept 30, 2011 SST being cooler than 2003. Where are the calculations that show why that happened and produce a similar result?

“I keep asking a simple question: if calculations of theoretical upwelling and downwelling radiation each have uncertainties of 1% to 2% say, then how can you possibly assume that a net difference (eg 100.2 – 99.7 = 0.5) is accurate enough to actually calculate whether that 0.5 really is positive?”

Because the two numbers aren’t independent and the error isn’t random, it’s systematic. Systematic errors cancel when computing a difference. That should have been obvious even to you.
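A quick simulation makes the distinction concrete (the flux values and the 2% error size are illustrative assumptions): a shared scale error leaves the sign of the difference intact, while independent random errors of the same size flip it a large fraction of the time.

```python
import random

random.seed(0)
up_true, down_true = 100.2, 99.7    # illustrative up/down fluxes (W/m^2)

# Systematic case: both numbers come from the same model, so the same
# 2% scale error multiplies both terms and the difference simply scales.
k = 1.02
diff_systematic = up_true * k - down_true * k
print(f"difference with shared 2% bias: {diff_systematic:.3f}")  # sign preserved

# Independent random 2% errors, by contrast, frequently flip the sign.
n = 100_000
flips = sum(
    up_true * random.gauss(1, 0.02) - down_true * random.gauss(1, 0.02) < 0
    for _ in range(n)
)
print(f"sign flips with independent errors: {100 * flips / n:.0f}%")
```

With independent errors the 0.5 W/m² difference changes sign in roughly four out of ten trials; with the shared systematic error it never does.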

Thanks – just came across this 2 months late. I’ve got a Master’s degree in electrical engineering, am older than young, and you could probably consider “acquiring information and trying to make sense of reality” as my hobby.

I’m one of those who sit in “Climate Change” (or whatever we call it this month) no man’s land, wonder about the more extreme rantings from both sides, try to make sense of the arguments, note that nobody is usually quite as convincing as they think they are, and want to see us “do things properly and with integrity”. (E.g. I’m not ‘wholly enamored’ of either Lord M or the IPCC.)

SO – it’s articles like this one that give me some hope we may yet get somewhere. That its finer conclusions are wrong is a given (“All models are wrong, some models are useful ..” – G. Box) but it seems a much more honest than usual attempt to really understand what’s going on and to intelligently provide reasoned counterpoints to opposing reasoned counterpoints. Not without its occasional lip curl (” … who might think Bryan …”) but freer of invective than some.

Do I understand it? – at a first skim, only in general thrust, BUT it looks useful enough to be worth printing (65 pages (17 sheets booklet) with reader comments – which seemed useful enough to include) and adding to the (slightly heavier than most) casual reading pile.

More like this from all positions in the spectrum may yet get us somewhere.

The top spectrum is the satellite-measured clear sky planetary emission to Space (frequency is the bottom axis). The large hole between wavenumbers 650 and 700 is caused by CO2. The atmosphere is opaque to these frequencies, so in this region all the emissions are from high level CO2 molecules. [Note that we cannot estimate the height of CO2 emissions from the “blackbody temperature” of the emissions of 215DegK. Low temperature and low pressure gases do not radiate a broad spectrum, they emit lines which are nothing like a BB spectrum. See http://www.spectralcalc.com for a CO2 spectrum.]

The bottom spectrum shows emission to Space when a thundercloud is present. Most of the emissions are consistent with a blackbody (the top of the thundercloud) at approximately 210DegK. This puts the top of the cloud at around 14km.

In the CO2 opaque area, however, the emissions are unaffected by the thundercloud, meaning that these are from above the cloud top, so over 14km, and probably higher. (for the wavenumber 700 region, the emissions are probably a little lower, but still around 14km.)

2. CALCULATION
The figures I have for absorption of wavenumber 650 by CO2 are the percentage of photons remaining after passage through carbon dioxide. CO2 depth is in atmosphere-centimetres (atmcm, equivalent to a distance of 1 cm through pure CO2 at 1 atmosphere and 20°C; 1 atmcm is the same amount of CO2 as in approximately 25 m of atmosphere at ground level).

CO2 depth (atmcm):  0.2   0.5   1     5     10    100
Percent remaining:  75%   61%   48%   16%   8%    0.1%

This attenuation can be modelled. I found that using the following works well:

17 iterations per atmcm of:

P_n → P_n (1 − a_n)

where P_n is the remaining power at each offset n from the centre frequency (integers from −250 to +250), and a_n is the initial fraction of total line power at that offset frequency.

I then modelled what happens to a wavenumber 650 line radiated to Space from various altitudes. The results are below:

These results imply that a very significant fraction of the emissions are from the very high altitudes – probably about 2/3 of the emissions are from above 15km. This is consistent with the measured spectra cited above.
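For what it’s worth, the iterative model described above can be sketched in code. The line shape, its width, and the incident spectrum below are my own assumptions, so the absolute percentages will not reproduce the table exactly; only the update rule (power at each offset reduced by that offset’s fraction of line power), the ±250 offsets and the 17 iterations per atmcm are taken from the description:

```python
import numpy as np

# Offsets from line centre, as in the description above (-250 to +250).
offsets = np.arange(-250, 251)
shape = 1.0 / (1.0 + (offsets / 25.0) ** 2)   # assumed Lorentzian-like profile
a = shape / shape.sum()                        # fraction of line power per offset

def fraction_remaining(atmcm, iters_per_atmcm=17):
    """Fraction of photons remaining after a given CO2 depth (atmcm)."""
    power = a.copy()                           # incident power follows the line shape
    for _ in range(round(iters_per_atmcm * atmcm)):
        power = power * (1 - a)                # P_n <- P_n * (1 - a_n)
    return power.sum()                         # a sums to 1, so this is the fraction

for depth in (0.2, 0.5, 1, 5, 10, 100):
    print(f"{depth:6g} atmcm -> {100 * fraction_remaining(depth):5.1f}% remaining")
```

With a narrower assumed line the attenuation per atmcm is faster; the mechanism, not the exact percentages, is the point: the line centre saturates quickly while the wings leak through.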

IMPLICATIONS

An emission height of 14km changes to about 19km when CO2 is doubled.

In polar regions both heights are well into the stratosphere, and so radiative cooling is the likely result.

In temperate regions, both heights are in the tropopause, so no temperature change will be experienced. There will be neither cooling nor heating.

In tropical regions, the doubled emissions (19km) are in the Stratosphere. The temperature difference is about 4DegK. This is unlikely to result in much “forcing”.

Thanks for these comments. I will have another look at these posts, however:
1. The spectralcalc site shows the CO2 emission lines for the region 100um to 1000um to be 11 orders of magnitude less intense than the 15um region. I think that means in absorption terms that the probabilities of absorption are similarly very low – there is very little attenuation of photons because the CO2 molecules aren’t very interested in 100um to 1000um, they’d rather (much rather) have 15um.
2. The emissions by CO2 to Space are mostly 15um – of the 18W/m^2 radiated to Space by CO2 about 12W/m^2 is in the 15um band. (my numbers are rubbery, perhaps DeWitt Payne has more accurate figures?)

Your calculation is far too simplistic. When observed from space, emission in the range of 630-710 cm-1 includes emission from CO2 in the stratosphere where it’s warmer than the tropopause. So your estimate of the altitude of emission compared to emission from a thunderhead is fundamentally flawed.

The rule of thumb is that effective emission altitude is the altitude where the optical density reaches 1.0 or a transmittance of 1/e = 0.367879. Using MODTRAN and averaging transmittance over the range 630-710, at 280 ppmv CO2 the OD is 1.0 at about 16.6 km. Doubling the CO2 to 560 raises the altitude to 18.8 km. The temperature at 17 km is ~195 K and ~203 K at 19 km. But the small change in emission intensity at the bottom of the valley isn’t the important part. As SoD illustrated above, it’s the fact that the valley gets wider as the CO2 concentration increases.

But that was for the band on average. Suppose we look at the peak at 668 cm-1. At 280 ppmv, an OD of 1.0 occurs at about 33 km altitude where the temperature is about 240 K. At 560 ppmv CO2 the effective emission altitude increases to 36 km at a temperature of about 245 K.
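The effect of the ~5 K warmer emission level at the 668 cm-1 peak can be checked directly against the Planck function. This is standard physics, not anything specific to MODTRAN; the temperatures are the ones quoted above.

```python
import math

H_PLANCK = 6.626e-34  # J s
C_LIGHT = 2.998e8     # m/s
K_BOLTZ = 1.381e-23   # J/K

def planck_per_wavenumber(nu_cm, temp_k):
    # Spectral radiance B(nu, T) in W m^-2 sr^-1 (cm^-1)^-1
    nu = nu_cm * 100.0  # cm^-1 -> m^-1
    b = 2 * H_PLANCK * C_LIGHT**2 * nu**3 / (
        math.exp(H_PLANCK * C_LIGHT * nu / (K_BOLTZ * temp_k)) - 1)
    return b * 100.0    # per m^-1 -> per cm^-1

cold = planck_per_wavenumber(668.0, 240.0)
warm = planck_per_wavenumber(668.0, 245.0)
print(round(warm / cold, 2))  # the warmer emission level radiates measurably more
```

So at the very peak the higher, warmer emission level actually exports somewhat more energy per cm-1, consistent with the point that the widening of the valley, not its bottom, is what matters.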

Thanks for the above replies.
I entirely agree with DeWitt Payne’s “When observed from space, emission in the range of 630-710 cm-1 includes emission from CO2 in the stratosphere where it’s warmer than the tropopause.” but don’t understand the next sentence “So your estimate of the altitude of emission compared to emission from a thunderhead is fundamentally flawed.”

Just to outline my method again:
1. The thundercloud is roughly a blackbody, with temperature ~210DegK.
2. This puts its top at about 14 (or 20)km.
3. The emissions in the wavenumber 650-700 band (attributed to CO2), unlike the whole of the rest of the spectrum, are unaffected by the thundercloud
4. Therefore emissions in that band are coming from above the top of the cloud, ie above 14 (or 20!) km.
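Step 2 can be rough-checked with a lapse-rate calculation. The surface temperature and lapse rate below are assumptions (a 288 K surface and the standard 6.5 K/km rate); real tropical profiles are warmer at the surface and so put the same cloud-top temperature higher, closer to the 14 km quoted.

```python
# Rough check of step 2: altitude at which a standard lapse rate reaches
# the assumed cloud-top temperature. All inputs are illustrative.
T_SURFACE = 288.0    # K, assumed mean surface temperature
LAPSE = 6.5          # K/km, standard tropospheric lapse rate (assumed)
T_CLOUD_TOP = 210.0  # K, the blackbody temperature quoted in step 1

cloud_top_km = (T_SURFACE - T_CLOUD_TOP) / LAPSE
print(round(cloud_top_km, 1))  # ~12 km with these assumed inputs
```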

I would be grateful if my logical error can be nailed.

DeWitt Payne correctly points out that I did not take temperature into account in my estimates of emission strength from various altitudes (though I used it in my estimates of absorption). Had I done so I would have estimated higher emission heights, strengthening the case that the emissions to Space in the main CO2 band are from the Stratosphere. [The reason for not doing so, is that I do not know the effect of temperature on the strength of an emission line].

So looking at DeWitt Payne’s numbers, in the main emission band (wavenumbers 630 to 710) the emission height is 16.6km. This is at the Tropopause (Lat 40N to 40S), in the Tropopause (Lat 40-60), and in the Stratosphere (Lat 60-90). This means, when CO2 is doubled, the TOA imbalance from this majority part of the spectrum is Surfeit, Unchanged, Surfeit respectively, ie there is MORE energy being exported to Space than there used to be. There is cooling forcing in this band.

To get a deficit, one must rely on the out-of-main-band emissions, as pointed out by S_O_D. There are 2 considerations:
1. More CO2 = more absorption of Surface radiation. I don’t know the W/m^2 value.
2. More CO2 = more radiation from CO2 to space. I don’t know the W/m^2 value, but it is less than the decrease at 1 above, ie a deficit.

It would therefore be useful to know the proportion of CO2 emissions which are in the main band, and the proportion outside that band, and the increase in emissions from the main band and the decrease in emissions outside that band.

[IPCC AR4 gives 3.7W/m^2 as the “Radiative Forcing”, ie the imbalance at the Tropopause. Not sure if that is a helpful number…]

In the tropics (roughly 40N to 40S) it is about 16.5km
In the Temperate Zone (Lat 40-60) it is about 11km, with a depth of 9km
Above 60Deg it is around 8km.

The boundaries between these zones vary seasonally.
[“The height of the tropopause depends on the location, notably the latitude … It also depends on the season. Thus, it is about 16 km high over Australia at year-end, and between 12 – 16 km at midyear, being lower at the higher latitudes. At latitudes above 60°, the tropopause is less than 9-10 km above sea level; the lowest is less than 8 km high, above Antarctica and above Siberia and northern Canada in winter. The highest average tropopause is over the oceanic warm pool of the western equatorial Pacific, about 17.5 km high, and over Southeast Asia, during the summer monsoon, the tropopause occasionally peaks above 18 km.” (Source: http://www-das.uwyo.edu/~geerts/cwx/notes/chap01/tropo.html )]

The height of the Tropopause varies considerably from day to day. On the 7th, 8th, and 9th of September this year, the height at Guam (13N) was 16642, 15506 and 16033 metres.

I looked at The Pas in Canada on the 5th, 6th and 7th of every month for a year. The largest change in Tropopause height and temperature was from 8,100m and -46DegC on the 5th of May to 10,500m and -61DegC on the 6th, but in ten of the twelve sets of sampled dates there were excursions of 1km or more.

I don’t know why the IPCC definition of Radiative Forcing, which involves imbalance at the Tropopause, is useful. The Tropopause is nowhere stable – it could be more usefully thought of as the fuzzy boundary between the orderly Stratosphere and the rather more chaotic regime of the Troposphere. So I am unhappy with Radiative Forcing. It defines the effect at a boundary which is unfixed, highly variable and difficult to measure. It is impossible to confirm the predicted Radiative Forcing by measurement. Better measurements are available at the Surface or at the Space boundary. A vague and variable boundary in the middle of the atmosphere is not where most people would choose to define such an important value. Surely Surface Forcing or (to coin a term) TOA Forcing would be more appropriate and testable concepts? I favour Surface Forcing – after all, the problem being examined is the change to Surface Temperature, and Surface Temperature is dependent on Surface Forcing, not Radiative Forcing.

The very large variability of the Tropopause raises the question of what model is being used to calculate Radiative Forcing.

While you can construct a definition of the tropopause that results in a fixed altitude precise to the meter, in reality the tropopause is a boundary layer with significant volume between the troposphere and the stratosphere. It isn’t a bright line with stratosphere 1 m above the line and troposphere 1 m below the line.

There are very good reasons for the radiative forcing to be defined at the tropopause after allowing the stratosphere to equilibrate to the new ghg concentration. If you spent any time researching the problem, you could find out why. It’s been mentioned on this site multiple times. But you would apparently rather believe in your own fictions.

If the concentration of CO2 changes, the average altitude of emission at CO2 active frequencies increases. This causes changes in the amount of energy exported to Space from each altitude.

For much of the CO2 band (the portion from wavenumber 630 through 710) this imbalance is in the Stratosphere – more energy is being exported to Space than there was previously. For the weakly active frequencies from wavenumbers 500 to 800 the imbalance is in the Troposphere, and less energy is being exported to Space than there was previously.

We therefore have two effects:
a. A cooling of the Stratosphere, and
b. A heating of the Troposphere.

We can expect the tropospheric imbalance to be gradually greater towards the Tropopause, then tail off through the Tropopause, then reverse sign and reach a maximum somewhere in the Stratosphere before tailing off at some very high altitude.

I don’t much like that diagram, even though it is in accord with S_O_D’s explanation of the mechanism for translation of a weak upper-atmospheric energy imbalance into a very large surface forcing.

Since we live in a world which is allegedly experiencing changes in the atmosphere due to catastrophic GHG emissions, I have looked carefully at the weather balloon measurements to see if I can find evidence of the process outlined in the IPCC diagram, in particular evidence that increased Tropopausal temperatures change the temperature further down the Lapse Rate tree.

I haven’t seen it. What I do see, and it is particularly clear in the Guam incident of the 7th, 8th and 9th of September this year [I get data from http://weather.uwyo.edu/upperair/sounding.html ], is that a heating of the upper Troposphere is much more likely to result in a LOWERING of the tropopause, and a change in tropopause depth – the Tropopause “slides” down the lapse curve, and both the Tropopause and stratospheric temperatures increase, so that there is now an area of constant temperature from the Tropopause until the height at which the Stratosphere begins to warm up.

Rather than the IPCC diagram, which implies a raising of the tropopause, I expect a lowering and an increased depth. This then provides a rectification of imbalances which warms the high troposphere and cools the stratosphere, without changing the lapse rate or the surface temperature.

Colin,
Does this relate to the ‘tropospheric hot spot’ – increased heating of the upper troposphere over the tropics with simultaneous cooling of the stratosphere?
My understanding of the empirical data is that there is no actual evidence for the predicted ‘hot spot’ but there is some cooling of the stratosphere. Is that correct? Also, there is some discrepancy between the radiosonde data and the satellite data. Is that your understanding? Either way the IPCC ‘consensus’ claimed this would be the ‘fingerprint’ of GHG warming. My view: even if they found this to be true, water vapour is the dominant GHG and would be the likely cause for the most part.

“While you can construct a definition of the tropopause that results in a fixed altitude precise to the meter, in reality the tropopause is a boundary layer with significant volume between the troposphere and the stratosphere. it isn’t a bright line with stratosphere 1 m above the line and the troposphere 1 m below the line.”

Sometimes that is a true statement. Sometimes not (for example see Guam on 07Sep11; the three readings around the Tropopause are:
Height 16,620m, Temp -82.1DegC; 16,642m, -82.3DegC; 16,764m, -77.9DegC).

In general the tropical region (below 40 Deg lat) mostly has a sharp inflection in temperature. The Temperate and Polar zones have a very wide Tropopause. So I think DeWitt Payne’s observation is true for about half the planet, but not very true for the tropics.

DeWitt Payne was responding to my statement that pressure and height are both very variable at the lowest point of the Tropopause. I assume that he concedes that point, which seems to me to be entirely consistent with the balloon data.

DeWitt Payne (0152, 9Dec11) went on to say:
“There are very good reasons for the radiative forcing to be defined at the tropopause after allowing the stratosphere to equilibrate to the new ghg concentration. If you spent any time researching the problem, you could find out why. It’s been mentioned on this site multiple times. But you would apparently rather believe in your own fictions.”

I agree I need a haircut.
I agree that I think there is a flaw in the GHG hypothesis, and I am trying to tease it out, and am therefore hard to convince. (The reason why I take that position is that when I look at all the evidence, particularly the MWP and LIA evidence, I cannot discern any CO2 signal in the current behaviour of the system. Only if all the warming since the LIA was due to CO2 would a sensitivity of 3DegC be realistic. One must assume that the processes which caused the MWP and LIA are still in action. That would put the sensitivity at a very low value indeed. Most of the 20th century warming must be natural if those processes, whatever they are, are still in place. At least that is the conservative, scientific, conclusion. If one is bold, an extremist, one can postulate other causes, and assert that the MWP/LIA process is no longer active. Hmmmmmmmm.)

A radiative imbalance, however small, is quickly corrected in the thin fluid at the top of the atmosphere, whether Stratosphere or Troposphere. The fluid heats up if there is an excess, cools if there is too little.
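As a rough feel for “quickly”, here is a back-of-envelope warming rate for a thin layer. Every number below is an assumption chosen for illustration (a 1 km layer near 20 km altitude under a 0.5 W/m^2 imbalance), not a measurement.

```python
# Back-of-envelope warming rate of a thin upper-atmosphere layer under a
# small radiative imbalance. All numbers are assumptions for illustration.
CP_AIR = 1004.0   # J/(kg K), specific heat of air at constant pressure
RHO = 0.089       # kg/m^3, approximate air density near 20 km
DZ = 1000.0       # m, layer thickness
IMBALANCE = 0.5   # W/m^2, assumed radiative imbalance on the layer

heat_capacity = RHO * CP_AIR * DZ              # J per m^2 per K
warming_k_per_day = IMBALANCE / heat_capacity * 86400.0
print(round(warming_k_per_day, 2))  # ~0.5 K/day: thin layers adjust fast
```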

And that is what we are dealing with, with a very slowly accumulating CO2 population. The Stratosphere cools in the appropriate spots (probably after the Troposphere warms – after all, the increased concentration is coming upwards from the surface…) and the upper Troposphere warms. And this should be happening constantly and consistently, year in, year out.

Is it? What do the models predict and what do the measurements say? Or is it all very ambiguous? What measurements of the Stratosphere and Tropopause are convincing proof of the hypothesis? That’s where theory says the action is.

It is wrong to assume back radiation from the atmosphere warms the surface.

New physics (of Einstein-like significance) proves there is no mechanism by which imaginary “photons” from a cold atmosphere can warm the Earth in some fictitious “greenhouse” effect. Tonight just try holding a large mirror above the ground sending the radiation back down – does the ground get warmer?

Without solar radiation there would be no back-radiation. The back-radiation does heat, of course. Gerlich and Tscheuschner have even conceded this (http://arxiv.org/pdf/1012.0421.pdf p 12):

We never claimed – allegedly with reference to Clausius – that a colder body does not send radiation to a warmer one. Rather, we cite a paper, in which Clausius treats the radiative exchange [19, 20]. The correct question is, whether the colder body that radiates less intensively than the warmer body warms up the warmer one.

Nobody has denied, that the warmer body radiates more intensely than the cooler body.

As an aside, I applaud your policy of not entertaining discussions about whether the physics of the twentieth century was fallacious. I appreciate the clarity and (in my judgement) the trustworthiness of the information you make available, as well as the courtesy afforded to all commenters.

Please excuse my asking a question whose answer may well already be available on your site.

Wikipaedia gives a formula that I have often seen referred to for the forcing due to CO2 (said to be a 1st order approximation):

Delta F = 5.35 ln ( C / C0 ) W m^-2

It gives a reference [Myhre et al., New estimates of radiative forcing due to well mixed greenhouse gases, Geophysical Research Letters, Vol 25, No. 14, pp 2715–2718, 1998] which is inaccessible to me – I have no library access and no budget to access paywalled papers.

I remember that this formula is quoted by IPCC, who give the same reference.

My question: Is the derivation of this formula available on Science of Doom, or elsewhere online? I am interested to know the method used to derive it and what assumptions and approximations were used in its derivation.

Each point on the graph was calculated from solving the radiative transfer equations through the atmosphere. These equations can only be solved via numerical methods – no analytical solution can be derived.
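A sketch of how such a logarithmic coefficient could be extracted from numerically computed points. The sample points below are illustrative only, constructed to be consistent with ~3.7 W/m^2 per doubling; they are not actual radiative-transfer output.

```python
import math

# Illustrative (C/C0, forcing) pairs, NOT real radiative-transfer output;
# constructed to be consistent with ~3.7 W/m^2 per doubling of CO2.
points = [(1.5, 2.17), (2.0, 3.71), (3.0, 5.88), (4.0, 7.42)]

# Least-squares fit of dF = a * ln(C/C0), a no-intercept regression:
# a = sum(x*y) / sum(x*x) with x = ln(C/C0).
num = sum(math.log(ratio) * forcing for ratio, forcing in points)
den = sum(math.log(ratio) ** 2 for ratio, _ in points)
a = num / den
print(round(a, 2))  # close to the 5.35 coefficient of Myhre et al. (1998)
```

The published coefficient was obtained by fitting exactly this kind of logarithmic form to a set of line-by-line and band-model calculations, which is why it is described as a first-order approximation rather than a derived law.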

“..The change in net (down minus up) irradiance (solar plus longwave; in W/m2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values.”

Let me ask the question another way. Do you agree and/or think that an instantaneous reduction in direct surface radiation to space of 3.7 W/m^2 is equal to or will result in an equal and opposite downward flux or ‘net (down minus up)’ of 3.7 W/m^2?

If yes, then on what physical basis?

If no, then what amount or fraction will be re-directed downward and based on what physics or data?

With an instantaneous reduction of 3.7 W/m^2 in the surface radiation going directly to space, the radiation from the atmosphere to space increases by 3.7 W/m^2. The back-radiation then falls by, for example, 6 W/m^2, whereby the surface also emits 6 W/m^2 less. But of these, 3.7 W/m^2 more go into space, which means, in other words, that the atmosphere now absorbs 9.7 W/m^2 less.

With more greenhouse gases it goes the other way round. An instantaneous increase of 3.7 W/m^2 in the surface radiation going directly to space lowers the radiation from the atmosphere to space by 3.7 W/m^2. The back-radiation then rises by, for example, 6 W/m^2, whereby the surface also emits 6 W/m^2 more. But of these, 3.7 W/m^2 more go into space, which means, in other words, that the atmosphere now absorbs 9.7 W/m^2 more – because there is more greenhouse gas.

Error. Of the 6 W/m^2 of additional radiation, 3.7 W/m^2 more go into space and 2.3 W/m^2 are absorbed by the atmosphere. Because 3.7 W/m^2 less goes into space, the atmosphere absorbs a total of 6 W/m^2 more – which goes into the counter-radiation.

Thanks, Nick. I suggest everyone read it. There seems to be a lot of muddled notions as to what exactly the RT simulation is calculating. It’s my understanding, based on numerous conversations I’ve had (including with Myhre himself), that what specifically is being calculated are changes in direct surface radiation to space for changes in GHG concentrations – like the 3.7 W/m^2 from a doubling of CO2. The RT calculation itself does not say or imply anything about what happens after absorption, in particular whether it’s all downward re-directed or not.

I’m well aware of the IPCC’s definition applied to the RT calculation as cited by SoD, but it’s arbitrary and has no actual physical basis I’m aware of. That is unless one is claiming the RT simulation itself directly calculates ‘net (down minus up)’ for changes in GHG concentrations, but I’m pretty sure it doesn’t.

“I’m well aware of the IPCC’s definition applied to the RT calculation as cited by SoD, but it’s arbitrary and has no actual physical basis I’m aware of. That is unless one is claiming the RT simulation itself directly calculates ‘net (down minus up)’ for changes in GHG concentrations, but I’m pretty sure it doesn’t.”

“That I’m aware of..” being the important caveat. And in this (only) I’m sure you are spot on.

The radiative transfer calculations do directly calculate the net change in climate energy balance at the tropopause (after allowing for radiative adjustment in the stratosphere as already noted).

Any calculation of changes are “arbitrary” because they have to be with reference to something. Once they are with reference to a definition then they are valuable. One can change the reference point and the value will change but the new calculation is equally valuable.

It’s a bit like saying that gravitational potential energy is “arbitrary”. It’s defined as zero at infinity. That’s arbitrary. But useful. And real. And physical.

“No physical basis..” ? A change in energy balance of a system is a very physical basis.

Mind you, as regular readers know I have realized there is no point discussing any subject with RW. Expect only occasional comments for the benefit of new readers.

Radiative forcing and water vapor feedback are auxiliary crutches used to obtain the values of the real facts. The radiation balance in the stratosphere yields a rising of the height of the tropopause with more greenhouse gases. The tropopause is the level at which the temperature gradient of the stratosphere becomes critical and the circulation (troposphere) begins. The temperature gradient in the troposphere is hardly affected by the greenhouse gases; it is a gas property. The thicker troposphere leads to a higher surface temperature. The higher tropopause together with the constant temperature gradient in the troposphere leads to a sensitivity of 3 K at doubled CO2 concentration.

“The radiative transfer calculations do directly calculate the net change in climate energy balance at the tropopause (after allowing for radiative adjustment in the stratosphere as already noted).”

I never said they didn’t. However, this is not the same as ‘net (down minus up)’ or a change equal to an opposite downward flux as arbitrarily inferred by the IPCC. Or are you claiming the net absorption increase per CO2 doubling is more than 3.7 W/m^2 and the 3.7 W/m^2 is only the downward emitted portion?

The radiative transfer calculations for CO2 doubling are fictitious – because with real increases in greenhouse gas, the radiation from the thinner and colder stratosphere decreases. Only the pairing of fictitious radiative forcing and water vapor feedback provides verifiable results. A quarrel about the fictional calculations of radiative transfer with no real boundary values is meaningless (the height of the tropopause stays the same, and the upward radiation from the troposphere is unchanged).

“Or are you claiming the net absorption increase per CO2 doubling is more than 3.7 W/m^2 and the 3.7 W/m^2 is only the downward emitted portion?”

You really need to think about the meaning of the terms ‘radiative imbalance’ and ‘instantaneous’. When a radiative imbalance exists, as it would after an instantaneous change in ghg concentration, what’s absorbed doesn’t have to be emitted in any direction. There isn’t a balance, and there isn’t time for a balance to even begin to be established.