Measuring Climate Sensitivity – Part One

Can we measure the top of atmosphere (TOA) radiative changes and the surface temperature changes and derive the “climate sensitivity” from the relationship between the two parameters?

First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.

Climate Sensitivity Is All About Feedback

Climate sensitivity is all about trying to discover whether the climate system has positive or negative feedback.

If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.

Why is this zero feedback?

If somehow the average temperature of the surface of the planet increased by 1°C – say due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux = 3.3 W/m² it would indicate that there was no negative or positive feedback from this solar forcing (note 1).

Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.

Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.

The key value is the “benchmark” no feedback value of 3.3 W/m². If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.

Essentially, the higher the radiation to space as a result of a temperature increase, the more the planet is able to “damp out” temperature changes that are forced via solar radiation, or via increases in the inappropriately-named “greenhouse” gases.

Consider the extreme case where, as the planet warms up, it actually radiates less energy to space – clearly this will lead to runaway temperature increases (less energy radiated means more energy absorbed, which increases temperatures, which leads to even less energy radiated…).

As a result we measure sensitivity in W/m².K, which we read as “watts per square meter per kelvin” – and a 1 K change is the same as a 1°C change.
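The benchmark logic above can be summarized in a few lines (the 3.3 W/m².K zero-feedback value is taken from the text; the function and its name are my own illustration):

```python
# Classify feedback from the change in outgoing flux per 1 K of surface
# warming, relative to the ~3.3 W/m^2.K zero-feedback benchmark in the text.
ZERO_FEEDBACK = 3.3   # W/m^2 per K of surface warming, approximate

def feedback_sign(flux_change_per_K):
    """Return the sign of the climate feedback implied by the measured
    increase in radiation to space per 1 K of surface temperature rise."""
    if flux_change_per_K > ZERO_FEEDBACK:
        return "negative"   # extra outgoing flux damps the warming
    if flux_change_per_K < ZERO_FEEDBACK:
        return "positive"   # too little outgoing flux to rein in the warming
    return "zero"

print(feedback_sign(5.0), feedback_sign(0.0))
```

The two example calls correspond to the 5 W/m² (negative feedback) and 0 W/m² (positive feedback) cases above.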

Theory and Measurement

In many subjects, researchers’ algebra converges on conventional usage, but in the realm of climate sensitivity everyone has apparently adopted their own notation. As a note for non-mathematicians, there is nothing inherently wrong with this, but it makes each paper confusing – especially for newcomers, and probably for everyone.

I mostly adopt the Spencer & Braswell 2008 terminology in this article (see reference and free link below). I do change their α (climate sensitivity) into λ (which everyone else uses for this value), mainly because I had already produced a number of graphs with λ before starting to write the article.

The model is a very simple 1-dimensional model of temperature deviation into the ocean mixed layer, from the first law of thermodynamics:

Heat capacity times the rate of change of temperature equals the net change in energy:

Cp · dT/dt = F + S ….[1]

– this is a simple statement of energy conservation, the first law of thermodynamics.

The TOA radiative flux anomaly, F, is a value we can measure using satellites. T is the average surface temperature, which is measured around the planet on a frequent basis. But S, the remaining source of flux in [1], is something we can’t measure.
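As a concrete sketch of the first-law statement above, a single forward-Euler time step of the mixed-layer energy balance might look like this (the 50 m depth and the water properties are illustrative assumptions, not values from the article):

```python
# One forward-Euler step of the first law for an ocean mixed layer:
# heat capacity x temperature change = net energy gained over the step.
RHO_WATER = 1000.0        # density of water, kg/m^3 (approx.)
C_WATER = 4200.0          # specific heat of water, J/(kg.K) (approx.)
SECONDS_PER_DAY = 86400.0

def step_temperature(T, net_flux, depth_m=50.0, dt=SECONDS_PER_DAY):
    """Advance the mixed-layer temperature anomaly T (K) by one time step,
    given the net downward flux anomaly (W/m^2)."""
    heat_capacity = RHO_WATER * C_WATER * depth_m   # J/(m^2.K)
    return T + net_flux * dt / heat_capacity

# A sustained 100 W/m^2 imbalance warms a 50 m layer by ~0.04 K per day
print(step_temperature(0.0, 100.0))
```

The large heat capacity of even a shallow mixed layer is why daily temperature changes are tiny compared with the flux anomalies driving them.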

What is F made up of?

Let’s define:

F = N + f − λT ….[1a]

where N = random fluctuations in radiative flux, f = “forcings”, and λT is the all important climate response or feedback.

The forcing f is, for the purposes of this exercise, defined as something added into the system which we believe we can understand and estimate or measure. This could be solar increases/decreases, it could be the long term increase in the “greenhouse” effect due to CO2, methane and other gases. For the purposes of this exercise it is not feedback. Feedback includes clouds and water vapor and other climate responses like changing lapse rates (atmospheric temperature profiles), all of which combine to produce a change in radiative output at TOA.

And an important point is that for the purposes of this theoretical exercise, we can remove f from the measurements because we believe we know what it is at any given time.

N is an important element. Effectively it describes the variations in TOA radiative flux due to the random climatic variations over many different timescales.

The feedback term is λT, where λ – the climate sensitivity – is the value we want to find.

Forster & Gregory (2006) attempted to measure λ from observations. Their result indicates positive feedback, or at least a range of values which sit mainly in the positive feedback space.

On the method of calculation they say:

This equation includes a term that allows F to vary independently of surface temperature… If we regress (−λT + N) against T, we should be able to obtain a value for λ. The N terms are likely to contaminate the result for short datasets, but provided the N terms are uncorrelated to T, the regression should give the correct value for λ, if the dataset is long enough…

[Terms changed to SB2008 for easier comparison, and emphasis added].

Simulations

Like Spencer & Braswell, I created a simple model to demonstrate why measured results might deviate from the actual climate sensitivity. The model includes:

– radiative feedback calculated from the temperature and the actual climate sensitivity

– daily temperature change calculated from the daily energy imbalance

– regression of the whole time series to calculate the “apparent” climate sensitivity

In this model, the climate sensitivity, λ = 3.0 W/m².K.

In some cases the regression is done with the daily values, and in other cases the regression is done with averaged values of temperature and TOA radiation across time periods of 7, 30 & 90 days. I also put a 30-day low pass filter on the daily radiative noise in one case (before “injecting” into the model).

Some results are based on 10,000 days (about 30 years), with 100,000 days (300 years) as a separate comparison.

In each case the estimated value of λ is calculated from the mean of 100 simulation results. The second graph shows the standard deviation, σλ, of these simulation results, which is a useful guide to the likely spread of measured results of λ (if the massive oversimplifications within the model were true). The vertical axis (for the estimate of λ) is the same in each graph for easier comparison, while the vertical axis for the standard deviation changes according to the results, due to the large changes in this value.
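A minimal version of this kind of simulation can be sketched as follows. The parameters here are my own illustrative choices (a 20 m mixed layer, 1 W/m² daily radiative noise, and fewer simulations and time steps than the article uses), not the article’s exact setup:

```python
import numpy as np

# Sketch of the simulation: a mixed-layer ocean driven by daily random
# radiative noise N, with feedback -lambda*T, then a regression of the TOA
# flux anomaly F = N - lambda*T against T to recover an "apparent" lambda.
LAMBDA = 3.0                     # true climate sensitivity, W/m^2.K
SIGMA_N = 1.0                    # std dev of daily radiative noise, W/m^2
CP = 1000.0 * 4200.0 * 20.0      # mixed-layer heat capacity, J/(m^2.K)
DT = 86400.0                     # time step: one day, in seconds
DAYS, SIMS = 20000, 20

def estimate_lambda(avg_days, rng):
    """One simulation: step the mixed-layer temperature with the first law,
    then estimate lambda by regressing F against T, optionally averaging
    both series over avg_days before the regression."""
    noise = rng.normal(0.0, SIGMA_N, DAYS)
    T, F = np.empty(DAYS), np.empty(DAYS)
    temp = 0.0
    for day in range(DAYS):
        T[day] = temp
        F[day] = noise[day] - LAMBDA * temp   # TOA flux anomaly, with f = 0
        temp += F[day] * DT / CP              # first-law temperature update
    if avg_days > 1:
        n = DAYS // avg_days
        T = T[:n * avg_days].reshape(n, avg_days).mean(axis=1)
        F = F[:n * avg_days].reshape(n, avg_days).mean(axis=1)
    return -np.polyfit(T, F, 1)[0]            # "apparent" climate sensitivity

rng = np.random.default_rng(0)
daily = np.mean([estimate_lambda(1, rng) for _ in range(SIMS)])
monthly = np.mean([estimate_lambda(30, rng) for _ in range(SIMS)])
print(f"daily regression: {daily:.2f}  30-day averages: {monthly:.2f}")
```

The daily regression lands in the neighborhood of the true 3.0 (on short runs it can drift somewhat above it, a finite-sample regression artifact), while regressing 30-day averages drags the estimate far below the true value.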

First, the variation as the number of time steps changes and as the averaging period changes from 1 (no averaging) through to 90-days. Remember that the “real” value of λ = 3.0 :

Figure 1

Second, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The daily temperature and radiative flux is calculated as a monthly average before the regression calculation is carried out:

Figure 2

As figure 2, but for 100,000 time steps (instead of 10,000):

Figure 3

Third, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The regression calculation is carried out on the daily values:

Figure 4

As figure 4, but with 100,000 time steps:

Figure 5

Now the estimate against averaging period, and also against low pass filtering of the “radiative flux noise”:

Figures 6 & 7

And with autocorrelated radiative flux noise:

Figures 8 & 9

Discussion of Results

If we consider first the changes in the standard deviation of the estimated value of climate sensitivity we can see that the spread in the results is much higher in each case when we consider 30 years of data vs 300 years of data. This is to be expected. However, given that in the 30-year cases σλ is similar in magnitude to λ we can see that doing one estimate and relying on the result is problematic. This of course is what is actually done with measurements from satellites where we have 30 years of history.

Second, we can see that mostly the estimates of λ tend to be lower than the actual value of 3.0 W/m².K. The reason is quite simple and is explained mathematically in the next section which non-mathematically inclined readers can skip.

In essence, it is related to the idea in the quote from Forster & Gregory. If the radiative flux noise is uncorrelated to temperature then the estimates of λ will be unbiased. By the way, remember that by “noise” we don’t mean instrument noise, although that will certainly be present. We mean the random fluctuations due to the chaotic nature of weather and climate.

If we refer back to Figure 1 we can see that when the averaging period = 1, the estimates of climate sensitivity are equal to 3.0. In this case, the noise is uncorrelated to the temperature because of the model construction. Slightly oversimplifying, today’s temperature is calculated from yesterday’s noise. Today’s noise is a random number unrelated to yesterday’s noise. Therefore, no correlation between today’s temperature and today’s noise.

As soon as we average the daily data into monthly results which we use to calculate the regression then we have introduced the fact that monthly temperature is correlated to monthly radiative flux noise (note 3).

This is also why Figures 8 & 9 show a low bias for λ even with no averaging of daily results. These figures are calculated with autocorrelation for radiative flux noise. This means that past values of flux are correlated to current values – and so once again, daily temperature will be correlated with daily flux noise. This is also the case where low pass filtering is used to create the radiative noise data (as in Figures 6 & 7).

Maths

x = slope of the line from the linear regression

x = Cov[- λT + N, T] / Var[T] ….[3]

It’s not easy to read equations with complex terms in the numerator and denominator on the same line, so breaking it up:

x = Cov[−λT, T]/Var[T] + Cov[N, T]/Var[T]

x = −λ + { E[NT] − E[N]·E[T] } / { E[T²] − (E[T])² }

And we see that the regression of the line is always biased if N is correlated with T. If the expected value of N = 0, the last term in the numerator drops out, but E[NT] ≠ 0 unless N is uncorrelated with T.

Note of course that we will use the negative of the slope of the line to estimate λ, and so estimates of λ will be biased low.
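This bias formula is easy to check numerically. In the sketch below the synthetic series and the 0.5 correlation coefficient are arbitrary choices for illustration:

```python
import numpy as np

# Verify numerically that the regression slope of F = -lambda*T + N on T
# equals -lambda + Cov[N, T] / Var[T], so any correlation between N and T
# biases the recovered sensitivity.
rng = np.random.default_rng(1)
lam = 3.0
T = rng.normal(0.0, 1.0, 200_000)
N = 0.5 * T + rng.normal(0.0, 1.0, 200_000)   # N deliberately correlated with T
F = -lam * T + N

slope = np.polyfit(T, F, 1)[0]
predicted = -lam + np.cov(N, T)[0, 1] / np.var(T)
print(slope, predicted)
```

Here Cov[N, T]/Var[T] ≈ 0.5, so the recovered sensitivity −slope comes out near 2.5 instead of 3.0 – biased low, exactly as stated above.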

As a note for the interested student, why is it that some of the results show λ > 3.0?

Murphy & Forster 2010

Murphy & Forster picked up the challenge from Spencer & Braswell 2008 (reference below but no free link unfortunately). The essence of their paper is that using more realistic values for radiative noise and mixed ocean depth the error in calculation of λ is very small:

From Murphy & Forster (2010)

Figure 10

The value ba on the vertical axis is a normalized error term (rather than the estimate of λ).

Evaluating their arguments requires more work on my part, especially analyzing some CERES data, so I hope to pick that up in a later article. [Update, Spencer has a response to this paper on his blog, thanks to Ken Gregory for highlighting it]

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Stephens (2005), reference and free link below:

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO… then clearly measuring overall climate response is a more difficult challenge.

Conclusion

Measuring the relationship between top of atmosphere radiation and temperature is clearly very important if we want to assess the all-important climate sensitivity.

Spencer & Braswell have produced a very useful paper which demonstrates some obvious problems with deriving the value of climate sensitivity from measurements. Although I haven’t attempted to reproduce their actual results, I have done many other model simulations to demonstrate the same problem.

Murphy & Forster have produced a paper which claims that the actual magnitude of the problem demonstrated by Spencer & Braswell is quite small in comparison to the real value being measured (as yet I can’t tell whether their claim is correct).

The value called climate sensitivity might be a variable (i.e., not a constant value), and it might turn out to be much harder to measure than it seems (and it already doesn’t seem easy).

Notes

Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA.

Note 2 – This is effectively the same as saying f=0. If that seems alarming I note in advance that the exercise we are going through is a theoretical exercise to demonstrate that even if f=0, the regression calculation of climate sensitivity includes some error due to random fluctuations.

Note 3 – If the model had one random number for last month’s noise which was used to calculate this month’s temperature then the monthly results would also be free of correlation between the temperature and radiative noise.


292 Responses

For a pure greenhouse effect the overall change in the TOA radiation is equal to 0, because emitted power = absorbed power. What the greenhouse effect changes is the wavelength distribution of the emitted radiation: in the wavelength range of the atmospheric window the emission increases, while in the wavelength range of stratospheric emission it decreases.

In equilibrium there is an increased back-radiation. As long as there is no balance, the overall change in the TOA radiation is unequal to 0. From the fluctuations at the TOA due to temperature variations one can compute a sensitivity to fluctuations in surface temperature – but not a sensitivity to changes in CO2 concentration.

On the way to equilibrium there is also the flow into the ocean depths, which does not change so quickly. Strictly speaking, this sensitivity measures just how quickly the heat spreads into the ocean depths, because a change at the TOA reflects only the difference between the slow change of the heat flow into the depths and the rapid changes of the emission to space.

SOD, I think climate sensitivity must be a variable because isn’t that a measure of how effectively the atmosphere blocks outgoing terrestrial radiation (positive feedback) or reflects incoming solar radiation (negative feedback)?

In this case, it is a function of cloud formation, evaporation and condensation of water, outgassing of CO2 from oceans, and melting of ice sheets in response to a 1C increase in temp. Aren’t these things chaotic and unpredictable in principle? Even if we can determine their exact historical relationships, history will not repeat itself.

The denominator in this equation is just Var[T], and E[N] = 0 by construction, so the equation becomes:

x = -λ + { E[NT] – E[N].E[T] } / { E[T²] – (E[T])² }

x = -λ + E[NT]/Var[T]

However, E[NT]/Var[T] can have either sign, so this represents an error term (and one that likely depends on the measurement period, see my prior comments on the effect of measurement periods on variance for red/pink noise). So I really don’t see how this conclusion follows:

Note of course that we will use the negative of the slope of the line to estimate λ, and so estimates of λ will be biased low.

The math makes sense to me but the interpretation seems to have problems.

The other thing to note is that E[NT] may change sign in a very smooth fashion as you change your window length…that’s a pretty common feature when the “noise” is dominated by coupled internal oscillations like the Earth’s atmosphere.

It’s also likely that E[NT]/Var[T] → 0 for a large enough observation time. Something worth testing I suppose.

It’s also interesting to me how many blogs float ad homs when they don’t like the conclusions of a particular author. (Referencing the title of that blog post.)

It’s also likely that E[NT]/Var[T] → 0 for a large enough observation time. Something worth testing I suppose.

I don’t think this is the case. This is the assertion in Forster & Gregory 2006 assuming as they do that N is uncorrelated with T. If N is uncorrelated with T, then this is true, as short time series won’t have zero E[NT].

I have run some of the simulations for 1,000,000 time steps (days) and there is no tendency to zero error. In fact, the mean of estimated λ stays the same while the standard deviation of the results reduces dramatically, as would be expected – and as can be seen from the comparisons between 10,000 and 100,000 time steps in the article.

Regarding the first point, if T is assumed to be related linearly to N, then we can have E[NT] > 0, but that seems like a consequence of an assumption, not a real consequence of the equation, or of the underlying physics for that matter.

Regarding the second point, I was being a bit too thin on details I think. I was considering the spectral characteristics of the “realistic” temperature fluctuations, and assuming “N” follows these too, in making that statement.

In that case, as you increase your time window, E[NT] should get smaller relative to Var[T]. (Var T grows without bounds, while E[NT] being band limited is bounded.) This is the sort of thing one would need to do a proper Monte Carlo to separate out.

If you are using something akin to a realistic spectral distribution for T and N, then of course I concede the point.

Gaussian white noise is a completely horrible assumption for climate fluctuations of course. I haven’t read Murphy and Forster in any detail, but it does appear this is what they are doing.

Editorial comment here: I think all of the models posed by the various published authors are way over simplified. You don’t have to go (I think anyway) to 5-d hyperdimensional space to get it right, but I think you do need to start with assumptions that more carefully reflect the underlying physical behavior of the system.

I think another problem is the short duration of the CERES dataset. I’m not sure that can be fixed by anything but more observation time.

“Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA.”

The ‘no feedback’ response is actually derived from the surface response to solar forcing, which is about a 1.6 to 1 power to power ratio (390/240 = 1.625) – meaning it takes about 1.6 W/m^2 of radiative surface flux to allow 1 W/m^2 to leave the system, or it takes about +5.4 W/m^2 at the surface to allow 3.3 W/m^2 to leave at the TOA (3.3 x 1.625 = 5.4). +5.4 W/m^2 at the surface = +1C rise in temperature.

This is the origin of the so-called ‘Planck response’ or ‘no-feedback’ response of about 1.1 C from 2xCO2 (3.7/3.3 = 1.1). The problem is this is not a ‘no feedback’ or ‘pre-feedback’ response, but an upper limit on sensitivity because net negative feedback is required for basic stability. The 1.6 to 1 power densities ratio already includes the lion’s share of feedback in the system from decades, centuries, millennia, millions of years of solar forcing from which the feedbacks in the system have already manifested themselves.

If you really think that net positive feedback of 300% or more from 2xCO2 is possible for a 3 C rise (+16.6 W/m^2), you should explain why it does not take 1077 W/m^2 of surface power to offset the 240 W/m^2 of incident post albedo solar power. (16.6/3.7)*240 = 1077.

If watts are watts, how can watts of GHG ‘forcing’ have a greater ability to warm the surface than watts from the Sun? The 3.7 W/m^2 of ‘forcing’ from 2xCO2 is supposed to be the equivalent of +3.7 W/m^2 of post albedo solar power, is it not?

Attempts to determine climate sensitivity by looking at the correlation between surface temperature (anomaly) and TOA flux (anomaly) seem to be based on hopelessly simplistic models. Only about 15% of OLR is emitted from the surface of the earth. The remaining OLR is emitted from higher in the atmosphere. If atmospheric temperature at various locations in the troposphere moved in parallel with surface temperature, then we might observe a reasonable correlation between surface temperature and TOA flux. Unfortunately, surface temperature anomalies could require a significant amount of time to rise to higher altitudes, so most of the TOA response could lag surface temperature anomalies. Spencer and others have shown there is a modestly stronger (but still weak) correlation between TOA flux and surface temperature 2-3 months earlier. However, the rise of surface temperature anomalies through the troposphere could also be inhomogeneous, with some parts moving more slowly than others. The TOA response to a surface temperature anomaly today might be spread out over the next six months. That would partially explain why correlation coefficients between TOA flux and Ts are so poor at all time lags.

How long should it take for a temperature anomaly to rise through the atmosphere and escape to space? This would depend on the mechanism of energy transfer. It apparently takes months for radiative equilibrium to be established between the tropopause and the stratosphere, so radiation is slow. The residence time of water vapor in the atmosphere is only 9 days (after the time required for the evaporative response to a temperature anomaly). At the UAH website (http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps), the warmest sea surface temperatures occur during March (ca +0.3 degK above mean), but the warmest temperatures at 4.5 km (and 7.5 km) occur during July (+1.0 degK above mean). This data suggests that it could take four months for the annual SST anomaly to rise into the upper troposphere, but isn’t proof of mechanism. However, it certainly shows that sea surface temperature and upper troposphere temperature don’t move in parallel.

It is a matter of definition as to whether you define Stefan–Boltzmann radiation as energy balance or negative feedback. The effective temperature of the Earth as measured by radiation to space (240 W/m²) results in a value of Teff = 255 K. Differentiating the Stefan–Boltzmann equation we get dS/dT = 4σT³. Therefore a 1 degree rise in the “effective” temperature would be balanced by an increase in radiation of 3.7 W/m² – or a negative feedback.

Earth’s average surface temperature is T = 288 K because of the greenhouse effect. The normalised greenhouse effect works out at ~0.3 – i.e. resulting in Tsurf⁴·(1−0.3) = Teff⁴

Therefore, assuming zero other feedbacks, if the surface temperature rises by 1 degree, Teff will rise by 0.9 degrees. This then works out at 3.3 W/m².K!
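This arithmetic can be checked directly (a sketch using the comment’s own inputs: Teff = 255 K, Ts = 288 K, and dTeff/dTs ≈ Teff/Ts ≈ 0.9):

```python
# Reproduce the comment's estimate of the ~3.3 W/m^2.K zero-feedback response.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2.K^4)
TEFF, TS = 255.0, 288.0

planck = 4 * SIGMA * TEFF**3       # dS/dT at Teff: ~3.76 W/m^2 per K of Teff
sensitivity = planck * TEFF / TS   # per K of *surface* temperature: ~3.3
print(round(sensitivity, 2))
```

The derivative of the Stefan–Boltzmann law at 255 K gives about 3.76 W/m².K, and scaling by the ~0.9 ratio of effective to surface temperature brings it down to roughly 3.3, matching the comment and Note 1.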

Science of Doom: Thank you for providing this excellent resource on the fundamentals of climate science.
I’m still a learner in most things to do with climate, but from my reading so far I’m inclined to agree with Frank, that “Attempts to determine climate sensitivity by looking at the correlation between surface temperature (anomaly) and TOA flux (anomaly)…” seem at best optimistic, and possibly doomed to failure. There are just too many ‘forcings’ (an ugly and ill-defined term in my view, but there it is) and too many feedbacks. Yes, there is a well-established and generally agreed figure for the no-feedbacks change in average surface temperature resulting from a given change in atmospheric CO2 content. But there are feedbacks, and we seem to be little nearer evaluating those than we were 20 or more years ago. See for instance Mitchell et al, 1989: ‘CO2 and climate: a missing feedback’ (Nature, vol 341, 132-4). I’m tempted to ask, what is the point of all this? Apart that is from keeping climate modellers, journal editors and the remainder of the climate change industry legitimately occupied? Are we actually learning anything? But maybe I should drop such questions and try to make a more constructive suggestion.
The feedbacks, both known and unknown (or ‘missing’), seem to be imponderable. It’s fun (and profitable) studying them, but if what we are ultimately interested in is the net effect of small changes in the earth’s energy budget on changes in the earth’s surface air temperature, why don’t we look for a method that takes into account the existing feedbacks, known and unknown, (as well as the forcings)?
A number of scientists have come up with something like this, but only a small number seem to have made it into the print journal literature, and I can find no reference to this type of approach in those IPCC reports I have searched (WG1 of TAR 2001 and AR4 2007).
The following example is from an article by Hug and Barrett, which can be found at http://www.john-daly.com/forcing/hug-barrett.htm
“The Stefan-Boltzmann equation linking the energy of emission of a cavity radiator to its temperature:
E = σT⁴
may be differentiated with respect to temperature:
dE/dT = 4σT³
and inversion gives a value for the sensitivity:
dT/dE = 1/(4σT³)
If a value of 288°K (a mean value for the troposphere at sea-level) is inserted into the equation the value for the sensitivity is 0.18 K (W m⁻²)⁻¹.”
That would as I understand it translate to about 0.7K per doubling of atmospheric CO2.
The example is a simple one, perhaps over-simplistic, but is it any less valid than the no-feedbacks approach with added feedbacks? Or is it just nonsense?

The important point to understand here is that climate science is attempting to measure the actual climate sensitivity.

So no assumptions about negative or positive feedback are necessary or involved in this method.

In fact, that is what the measurements are attempting to ascertain.

The challenge is if climate sensitivity is a variable, e.g. a function of location, surface temperature, season, and phase of ENSO. In that case, the current measurement attempts will not work.

Another challenge, explained in the article in more detail, is where the “radiative noise” (random fluctuations in flux) prevents accurate measurement of climate sensitivity.

In respect of the link you provided the writers clearly have so little understanding of what climate science accepts and understands as the basics that I wouldn’t know where to start. Anyone who has read a few textbooks on atmospheric physics will be able to pick apart the confusion.

Otherwise it might sound convincing.

Section 2 – Applicability of Kirchhoff’s Law is a nice example of a mishmash of true, false and strawman ideas all stirred together into a pot of confusion. And in so few sentences.

Let’s pick one sentence and challenge you to demonstrate its truth:

The misunderstanding arises from the IPCC regarding the processes operating under conditions of true equilibrium, where the Kirchhoff law operates, whereas under the non-equilibrium conditions occurring daily in any part of the globe at any time, the law is inapplicable.

Where does the IPCC claim to believe the atmosphere is in thermodynamic equilibrium?

This would be absurd. The only published paper in which I have found such a confused notion is Miskolczi’s, which is not exactly mainstream as it claims to overturn anything close to consensus.

Over to you to find the relevant IPCC reference.

I expect the writers of this article have no idea what Kirchhoff’s law is, or why emissivity = absorptivity is true (at equal wavelengths) even when the system is not in thermodynamic equilibrium.

Science of doom: Thank you for your reply. I will indeed read the post you refer to. However I wasn’t asking you to comment on the remainder of Hug and Barrett’s article, merely on the validity/meaning of the particular passage I quoted.
I have by the way no problem with your statement that
”The important point to understand here is that climate science is attempting to measure the actual climate sensitivity. So no assumptions about negative or positive feedback are necessary or involved in this method. In fact, that is what the measurements are attempting to ascertain. The challenge is if climate sensitivity is a variable, e.g. a function of location, surface temperature, season, and phase of ENSO. In that case, the current measurement attempts will not work.”
Everybody active in this field is presumably trying to measure or estimate a quantity which as you point out may well be a variable function of a number of variable factors. In what direction would you like to see the research going?

Coldish wrote: “The Stefan-Boltzmann equation linking the energy of emission of a cavity radiator to its temperature:
E = σT⁴
may be differentiated with respect to temperature:
dE/dT = 4σT³
and inversion gives a value for the sensitivity:
dT/dE = 1/(4σT³)
If a value of 288°K (a mean value for the troposphere at sea-level) is inserted into the equation the value for the sensitivity is 0.18 K (W m⁻²)⁻¹.”

Until you inserted 288 degK as the temperature, everything appears to be right (and agrees with SOD’s earlier posts). These equations derive the relationship between temperature and radiation – but only when temperature is controlled solely by radiation. The surface of the earth is cooled by convection of latent and sensible heat as well as by radiation (and warmed by incoming solar and LWR radiation from the atmosphere). So the answer you get from using 288 degK doesn’t tell us anything useful about the surface (IMO).

Above the altitudes where convection occurs, temperature is controlled by radiative equilibrium and it makes sense to apply your equations. If you use 237 degK as the temperature, you get SOD’s value of 0.33 K (W m⁻²)⁻¹ for the NO-FEEDBACKS climate sensitivity.

Calculations show that 2XCO2 will reduce outgoing radiation by about 3.7 W/m2 at the tropopause (where it is probably a little colder). This is your dE term. This gives a no-feedbacks climate sensitivity for doubling CO2 of about 1 degK; 1.2 degK using SOD’s numbers.

This part of climate science might be termed “settled”. The controversies begin when we attempt to predict how 1 degK of warming at the tropopause affects surface temperature and how feedbacks may amplify warming. In this post, SOD is discussing observational evidence relating outgoing radiation to surface temperature as a way of predicting how radiative forcing at the tropopause will change temperature at the surface (rather than the tropopause). Unfortunately, the relationship is very noisy, possibly for the reasons I proposed above. (I’m hoping someone will tell me why I’m wrong.)

“Calculations show that 2XCO2 will reduce outgoing radiation by about 3.7 W/m2 at the tropopause (where it is probably a little colder). This is your dE term. This gives a no-feedbacks climate sensitivity for doubling CO2 of about 1 degK; 1.2 degK using SOD’s numbers.

This part of climate science might be termed “settled”. The controversies begin when we attempt to predict how 1 degK of warming at the tropopause affects surface temperature and how feedbacks may amplify warming. In this post, SOD is discussing observational evidence relating outgoing radiation to surface temperature as a way of predicting how radiative forcing at the tropopause will change temperature at the surface (rather than the tropopause). Unfortunately, the relationship is very noisy, possibly for the reasons I proposed above. (I’m hoping someone will tell me why I’m wrong.)”

The calculated reduction of 3.7 W/m^2 at the tropopause is assumed to cause a net increase in energy flux into the surface of 3.7 W/m^2; the surface then has to warm enough to emit an additional 2.3 W/m^2 (62% more) so that the full 3.7 W/m^2 is re-emitted at the TOA and equilibrium is restored. This arises because about 38% of what is emitted from the surface is ‘blocked’ by the atmosphere and returned or re-circulated back to the surface.

The ‘no-feedback’ sensitivity of 1.1-1.2 C is derived from the 1.625-to-1 ratio of radiative power emitted from the surface to power emitted at the TOA (390/240 = 1.625). This includes all the non-radiative energy transport from the surface to the atmosphere, from the atmosphere to other parts of the atmosphere, and from the atmosphere back to the surface. All of these fluxes are in between the surface and the TOA. At the TOA, it’s all photons entering and leaving.
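This ratio argument reduces to a few lines of arithmetic; a quick check using the 390 and 240 W/m² fluxes quoted here and the 288 K surface temperature used elsewhere in the thread:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

surface_flux, toa_flux = 390.0, 240.0
ratio = surface_flux / toa_flux                   # 1.625
blocked_fraction = 1.0 - toa_flux / surface_flux  # ~0.38 'blocked' by the atmosphere

# If the surface must emit ratio * 3.7 W/m^2 extra so that 3.7 W/m^2 leaves at the TOA:
extra_surface_emission = ratio * 3.7              # ~6.0 W/m^2
dT = extra_surface_emission / (4.0 * SIGMA * 288.0 ** 3)
print(round(dT, 1))  # ~1.1 K, the 'no-feedback' sensitivity quoted above
```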

RW: The definition of radiative forcing used by the IPCC (TAR) cited by Wikipedia is:

“The radiative forcing of the surface-troposphere system due to the perturbation in or the introduction of an agent (say, a change in greenhouse gas concentrations) is the change in net (down minus up) irradiance (solar plus long-wave; in Wm-2) at the tropopause …

… AFTER allowing for stratospheric temperatures to readjust to radiative equilibrium, …

… but with surface and tropospheric temperatures and state held fixed at the unperturbed values.”

Upward and downward LWR fluxes are calculated for various layers in the atmosphere using: pressure, mixing ratio of GHGs, temperature (emission is temperature dependent) and absorption/emission data for all GHGs at relevant wavelengths. Changes in TOA flux (often calculated at the tropopause) and changes in surface DLR are NOT directly linked: the radiative forcing for 2XCO2 is 3.7 W/m2 at the tropopause and a little less than 1 W/m2 at the surface. For a demonstration, see SOD’s post on this subject. https://scienceofdoom.com/2011/02/06/understanding-atmospheric-radiation-and-the-“greenhouse”-effect-–-part-five/ Although these calculations appear to contradict the law of conservation of energy, radiative-convective models automatically get conservation of energy correct by assuming that convection increases or decreases enough to balance incoming and outgoing radiative fluxes in the troposphere. At the tropopause, there is no convection. Using dW = 4σT³·dT and knowing dW = 3.7 W/m2, we can calculate the temperature rise at the tropopause needed to restore the balance between incoming and outgoing radiation.

Hopefully, this will explain the rationale for the calculations in my comment. I don’t understand the basis for your calculations, possibly because you are using a different meaning for 3.7 W/m2 than the radiative forcing for 2XCO2.

The problem is the ambiguity of the definition. What matters ultimately is the net change in energy flux into the surface as a result of the perturbation from 2xCO2. The IPCC is suspiciously vague on exactly what the 3.7 W/m^2 means. My interpretation is they are assuming it is equal to an increase in post albedo solar power of 3.7 W/m^2. I do not agree with this interpretation or assumption, and as best I can derive the 3.7 W/m^2 is the reduction in ‘window’ transmittance (the amount of surface radiative flux that passes straight through the atmosphere to space as if the atmosphere wasn’t even there) or is the incremental atmospheric absorption.

For example, using Trenberth’s ‘window’ transmittance of 70 W/m^2 (40 W/m^2 through the clear sky and 30 W/m^2 through the cloudy sky), a doubling of CO2 reduces this value to 66.3 W/m^2 and the atmosphere absorbs an additional 3.7 W/m^2 that previously went straight from the surface to space.

I suggest you ask SoD specifically where the watts are coming from to cause the +6 W/m^2 flux into the surface that causes the claimed 1.1 C ‘zero-feedback’ temperature increase from 2xCO2, as when asked no one ever seems to know. He won’t talk to me anymore.

They have just assumed it causes a +3.7 W/m^2 flux into the surface; the surface then has to warm enough to emit an additional 2.3 W/m^2 (62% more) in order to re-emit the 3.7 W/m^2 at the TOA (or tropopause) back out to space and restore equilibrium. This arises because about 38% of the radiative flux from the surface is ‘blocked’ by the atmosphere and returned or recirculated back to the surface – only 62% of what’s emitted is allowed to leave at the TOA, i.e. the planet’s emissivity of about 0.62 (240/390 = 0.62; 390-240 = 150; 150/390 = 0.38).

Even though I don’t really know what the graphs are saying it did immediately stand out to me that results were generally less than 3 rather than greater. Have you had any insights yet why this is so? Do you think this is a quirk of your methodology or a wider problem?

1) MF10 essentially does what the new Dessler11 paper does (in one part), which is to determine the value of S (F_Ocean) based on the mixed-layer heat capacity times the temperature fluctuations (C * dT/dt) and then subtracting from that the TOA flux. The problem is that using (C * dT/dt) substantially overestimates the energy flux of those top 100 meters compared to actual Argo measurements, as you can see in Dr. Spencer’s post or here. The reasons for that might be that even though the mixed layer is near uniform in temperature, small transfers of energy from the sea surface to lower parts of the mixed layer will result in a supposed energy loss from the whole mixed layer when nothing of the sort is actually happening. Furthermore, one is then aliasing all errors from calculations of C * dT/dt and TOA flux, which may be substantial relative to the fluctuations, in with the S term. Thus even white noise will cause a bias in the ratio of S/N.
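The point that noise aliases into the diagnosed feedback can be illustrated with a toy mixed-layer energy balance. Everything below (the feedback of 3 W/m²/K, the heat capacity, the noise amplitudes and its redness) is an illustrative assumption, not a value from MF10 or Dessler11:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, C, n = 3.0, 7.0, 1200  # true feedback (W/m^2/K), heat capacity, months

# Unforced radiative noise (e.g. cloud variability), red rather than white:
N = np.zeros(n)
for t in range(1, n):
    N[t] = 0.9 * N[t - 1] + rng.normal(0.0, 0.5)

# Mixed-layer temperature responds to the noise: C * dT/dt = N - lam * T
T = np.zeros(n)
for t in range(1, n):
    T[t] = T[t - 1] + (N[t - 1] - lam * T[t - 1]) / C

# The measured TOA flux anomaly mixes feedback and forcing: R = lam*T - N
R = lam * T - N
slope = np.polyfit(T, R, 1)[0]
print(slope)  # far below the true feedback of 3.0 -- the regression is biased low
```

The regression recovers much less than 3 W/m²/K because the temperature anomalies are themselves driven by the radiative noise, which is exactly the confounding being described.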

2) To the point that Frank raised above relative to the atmospheric temperature lag, I went over this a bit here. If there is a two-month lag between sea surface and TLT in the real world, and we can see that, at least for the Planck response, 80% of the OLR is coming from the atmospheric temperature changes that occur 2 months AFTER sea surface temperatures, how can we properly diagnose the climate feedback from TOA fluxes occurring in sync with the surface temperatures? Both the Spencer and Forster camp seem to agree that feedbacks occur simultaneously with surface temperatures, which I cannot understand given the atmospheric temperature lag time. Seems to be a fundamental problem with the model. Any thoughts?

Troy wrote: “Both the Spencer and Forster camp seem to agree that feedbacks occur simultaneously with surface temperatures, which I cannot understand given the atmospheric temperature lag time.” Can the data be analyzed with a model that says that X% of the TOA flux anomaly varies with current surface temp, Y% lags by M months, Z% lags by N months, etc? There is controversy about what observations of dW/dTs mean in terms of the “climate sensitivity” expected for future forcings. However, a useful “climate sensitivity” is more likely to be the change in TOA flux integrated over some period of time with surface temperature, rather than the instantaneous response at any particular time. Is such a statistical analysis impossible when monthly Ts anomalies are undoubtedly correlated? Could principal components help?
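The multi-lag model suggested here is straightforward to pose as an ordinary least-squares problem. This is a hypothetical sketch — the helper name and the choice of lags are illustrative, not from the thread:

```python
import numpy as np

def lagged_feedback_fit(T, R, lags=(0, 2, 4)):
    """Regress TOA flux anomalies R on surface temperature T at several lags.

    Returns one coefficient (W/m^2/K) per lag; if the radiative response is
    spread over those lags, their sum approximates the total feedback.
    """
    maxlag = max(lags)
    # Column for lag l holds T[t - l], aligned with R[t] for t >= maxlag:
    X = np.column_stack([T[maxlag - l : len(T) - l] for l in lags])
    y = R[maxlag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(lags, coef))
```

This does not by itself solve the problem of correlated monthly anomalies (the standard errors would need something like an effective sample size), but it shows the distributed-lag model is easy to set up.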

Some bright person needs to figure out how to “tag” energy from a known source so it can be tracked as it circulates in the system. Yeah, sounds crackpot, but right now it’s too fungible to know what is really happening.

RW: In your comment dated October 1, 2011 at 2:11 am, you complain about the ambiguity of the definition of radiative forcing. The IPCC’s definition is not ambiguous, it is quite precise; you simply appear to prefer a non-conventional meaning. When people use different definitions for different concepts, rational discussion is impractical. I’m not surprised that our host has lost patience.

To calculate radiative flux through the atmosphere, one needs to break the atmosphere up into layers with a defined pressure, temperature and composition for each layer. With this information and the appropriate spectral data for all components of the atmosphere, scientists can calculate the upward and downward radiative fluxes between the layers, ground and space. (SOD has done a wonderful job of illustrating how this is done in his long series of posts on Understanding the Greenhouse Effect.) Then one can increase the concentration of a GHG and find out how those fluxes change. For regions of the atmosphere where temperature is controlled solely by radiation, one can iterate to find the new equilibrium temperature once it has responded to the new radiative flux. One can’t do that in convective regions, because the temperature in those regions is not controlled solely by radiation. (Convection reduces surface temperature about 60 degK below what it would be if it were controlled by radiative equilibrium alone.)

This provides the rationale for all three sections of the IPCC’s definition of radiative forcing (see above), for how they calculate 3.7 W/m2 of radiative imbalance, and for a temperature rise of about 1 degK AT THE TROPOPAUSE.

You wrote: “I suggest you ask SoD specifically where the watts are coming from to cause the +6 W/m^2 flux into the surface that causes the claimed 1.1 C ‘zero-feedback’ temperature increase from 2xCO2, as when asked no one ever seems to know.”

You are certainly correct when you calculate that a surface temperature rise of 288 to 289.1 degK will increase upward radiative flux from 390 to 396 W/m2 (before correcting for emissivity being slightly less than 1). The KT energy balance diagram also shows 333 W/m2 of DLR, which would come from an atmosphere with an average temperature of 276.8 degK – if its emissivity were 1. Assuming the lapse rate from the surface to this altitude remained constant, the temperature there would also rise 1.1 degK, to 277.9 degK, increasing DLR by 5.3 W/m2. However, because the atmosphere would be optically thicker from 2X CO2, the average photon arriving at the surface would have been emitted from a lower, warmer altitude. So, it is trivial to see where the additional 6 W/m2 (or more) might come from, but it is difficult to calculate properly. (The assumption that the emissivity of the atmosphere is 1 is incorrect. It varies somewhat with CO2 concentration.)
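Every flux in this paragraph follows from σT⁴ with emissivity 1 (the simplification the comment itself flags); a quick check:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux(T):
    """Blackbody flux (W/m^2) at temperature T, emissivity assumed = 1."""
    return SIGMA * T ** 4

print(round(flux(289.1) - flux(288.0), 1))  # ~6.0 W/m^2 extra surface emission
print(round((333.0 / SIGMA) ** 0.25, 1))    # ~276.8 K: the temperature emitting 333 W/m^2
print(round(flux(277.9) - flux(276.8), 1))  # ~5.3 W/m^2 extra DLR for +1.1 K aloft
```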

However, we are NOT required to balance upward and downward radiative fluxes – the total downward flux is about 100 W/m2 greater than the upward at the surface! The difference, of course, is convection. Our theories about atmospheric stability suggest that convection AUTOMATICALLY increases or decreases to correct for any imbalance associated with too steep a lapse rate (radiative-convective equilibrium). Any shortage (or surplus) in your needed 6 W/m2 that doesn’t come from radiation can always be provided by reduced (or increased) convection! Short of GCMs, we don’t know how to calculate the energy flux provided by convection from first principles; we only know what the maximum lapse rate should be after an unspecified and variable amount of energy has been convected upwards. The IPCC avoids the problem of convection by incorporating the lowest altitude where convection is no longer important – the tropopause – into its definition of radiative forcing.

If the definition isn’t ambiguous, then what is the reduction in ‘window’ transmittance or the incremental absorption from 2xCO2? Is it 3.7 W/m^2 or some other amount? If it is some other amount then what’s the amount?

I assume you’re aware that not all the surface-emitted LW absorbed by the atmosphere is emitted back down to the surface, right?

The figure of 3.7 W/m² is a reduction of emission, mostly. The transmittance can also be calculated. Using MODTRAN and the 1976 standard atmosphere with clear sky, the transmittance from 100-1500 cm-1 at 280 ppmv CO2 is 0.2551. At 560 ppmv, it’s 0.2492. At a surface temperature of 288.2 K and an emissivity of 0.98, the upward radiative flux at the surface over this band is 360.472 W/m². So the reduction in transmission at 100 km altitude for clear sky is 2.13 W/m². But that’s clear sky. Clouds cover ~60% of the Earth’s surface and are totally opaque to IR from the surface. It’s much trickier to calculate the reduction in emission from cloud tops.
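The 2.13 W/m² figure is simply the quoted transmittance change applied to the band-limited surface flux:

```python
flux_band = 360.472  # upward surface flux in the 100-1500 cm^-1 band, W/m^2
tau_280, tau_560 = 0.2551, 0.2492  # clear-sky transmittance at 280 / 560 ppmv CO2

reduction = (tau_280 - tau_560) * flux_band
print(round(reduction, 2))  # 2.13 W/m^2 less transmitted straight to space
```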

Trenberth “window” may not be 70 W/m2. They claim that only 40 W/m2 is emitted from the surface and escapes directly to space (mostly through the 8-12 um window). This is the value I always cite for the “window”, but I don’t clearly understand how they arrive at this number. (Miskolczi believes 23 W/m2 may be more accurate.) K&T also claim that 165 W/m2 is emitted by the atmosphere in general and to this they add 30 W/m2 of “long wavelength cloud forcing” which is equal to the difference in TOA LW emission between clear and cloudy sky (but reduce incoming SWR by 50 W/m2). I presume this means that clouds are better LW emitters than the atmosphere (because they have higher emissivity? or are warmer?). If you look carefully at the Figure, the 165 W/m2 originate from both clear and cloud sky, the 30 W/m2 originates only from clouds, and the 40 W/m2 has its own special channel through the atmosphere. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.831&rep=rep1&type=pdf

I presume that the window transmittance doesn’t change after doubled CO2, because CO2 doesn’t have significant absorptions in the window. However, you need to remember that CO2 is both an absorber and an emitter. 2X CO2 emits twice as much radiation as 1X CO2. The radiative forcing from 2X CO2 occurs because the average photon escaping to space is emitted from a higher/colder altitude than when 1X CO2 is present. Oversimplified ideas about trapping or blocking radiation spread by the CAGW crowd are misleading.

Thank you for this post – it is a really good summary of the key issue of the climate debate. If net feedbacks turn out to be zero or negative then a maximum temperature rise of ~1C can be expected from a doubling of CO2 levels, which would have a relatively minor impact. If however IPCC models are correct, with an average net positive feedback of ~2.0 W/m2K-1, then we can expect a 2-5 degree rise for a doubling of CO2, with well known consequences. However, I cannot believe that feedbacks (mainly due to water) can possibly be a constant linear response.

Such assumed feedbacks in IPCC models are incompatible in the long term with the Earth’s early history, because we know that the planet has been continuously covered in liquid water. I find it very difficult to accept that the net effect of water on climate can be a positive feedback. While it is true that water vapor greenhouse effects depend on surface temperatures through the Clausius-Clapeyron equation, high- and low-level cloud changes to the Earth’s albedo seemingly must be more important. The evidence is that global surface temperatures have changed little over the Earth’s history. This is incompatible with a simple linear net positive long-term feedback from water vapor and clouds. During the early lifetime of the Earth the total feedback from water must have been negative to avoid runaway surface heating as the sun brightened.

I agree with your idea that climate sensitivity may be variable. The problem is how do you prove this? If true, the present preoccupation of climate scientists in trying to determine climate sensitivity from historical data is futile. The future climate sensitivity will be different from the past. They cannot construct a model with predictive power.

It seems to me the more productive line of research is trying to prove that climate sensitivity is indeed variable. This may entail combining chaos theory with atmospheric physics. It may be beyond the expertise of climate scientists. They may have to collaborate with mathematicians or mathematical physicists on this research.

Roy and SOD: For large changes in Ts, climate sensitivity is obviously not constant. At a high enough temperature, water vapor feedback is anticipated to produce a runaway greenhouse effect. At low enough temperature, the ocean will freeze. The issue is whether or not climate sensitivity is roughly constant over +/-3 degK around our current temperature.

If one focuses on the surface energy balance (rather than TOA), the change in upward and downward LWR with temperature is very likely to be linear over this temperature range. The 4σT³·dT terms are linear, and the altitude (and temperature) from which the average DLR photon reaching the surface was emitted probably changes roughly linearly. On the other hand, convection appears less likely to change linearly with Ts, since convection often begins when a critical threshold is crossed. Feedback may also play a role: convection increases surface winds, which increase evaporation; evaporation increases convective potential; and clouds diminish SWR.

.. This may entail combining chaos theory with atmospheric physics. It may be beyond the expertise of climate scientists. They may have to collaborate with mathematicians or mathematical physicists on this research.

From the many papers I have read there is a very diverse field of expertise within what is today known as the discipline of climate science. And there doesn’t appear to be a shortage of other opinions within published papers.

I agree with your idea that climate sensitivity may be variable. The problem is how do you prove this? If true, the present preoccupation of climate scientists in trying to determine climate sensitivity from historical data is futile. The future climate sensitivity will be different from the past. They cannot construct a model with predictive power..

Many papers point this out, for example:

..The assumption that a heat perturbation mixes as a passive tracer may break down as the climate warming increases. In the ocean model of Bryan et al. a warm anomaly of 0.5°C penetrates significantly less than a similar cold anomaly.

Furthermore, global warming will be accompanied by changes in evaporation, precipitation, and wind stress over the ocean surface, and possibly by the addition of fresh water from melting ice sheets – all of which may affect the rate of ocean mixing.

There is evidence that some mechanisms of ocean overturning are capable of sudden changes, and the paleoclimate record reveals cases of large warming within periods of no more than several decades. Thus we cannot exclude the possibility that the climate may at some point undergo a rapid transition to the equilibrium climate for current atmospheric composition..

This is from a climate skeptic known as James Hansen, Climate Response Times: Dependence on Climate Sensitivity and Ocean Mixing (1985).

It’s good that Hansen and others recognize the variability of climate sensitivity. If climate is unpredictable in principle, how can IPCC make a forecast with 90% confidence level? If the system is truly chaotic, this is not possible.

Correct me if I’m wrong but a Monte Carlo simulation will only work in a random system but not in a chaotic system. I think the probability distribution of a chaotic system is not a bell curve but a box. There is no most probable value. Each value within the range of possible values is equally probable. This is my guess. Mathematicians may use chaos theory to prove it.

ScienceofDoom,
You have defined sensitivity: “In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature”.

When the average temperature on Earth is changing very slowly, and when storage (to the ocean) is small enough, the radiation out has to closely match (on average) the absorbed solar radiation in. The surface temperature can slowly change even under this condition by the simple process of the effective altitude of the outgoing radiation becoming higher. In fact, once the surface temperature has risen and then leveled off at a new value, the outgoing radiation will be back to the level before the rise, but the ground can be much warmer. The only difference would be the greater altitude of outgoing radiation. The definition you give is flawed.

The lapse rate is the cause of the increase in ground average temperature, if it occurs. If other gases (CH4, etc.) filled the window of direct radiation to space from the ground, and the amount of all radiation-blocking gases slowly increased, back radiation to the ground would increase, as would the ground-level temperature; but the increase in both upward and back radiation would be the result, not the cause, of the increased ground temperature. The average radiative heat transfer is only upward (there can be local reversals, but not on average), and is due to the difference of up less down radiation. However, evaporation and water condensation, and air convection, adjust to maintain the lapse rate at the value dependent only on the composition of the gases and condensation effects.

The presence of blocking clouds raises the ground temperature for some cases, due to blocking the direct radiation window. This and change in the albedo may affect the sensitivity (say due to feedback from change in CO2 increase), but the issue here is definition of sensitivity.

I’ll also cast a vote for sensitivity as change in temperature caused by a certain change in radiative forcing, rather than change in radiative output caused by a change in temperature.

The IPCC (AR4 WGI 8.6.2) sez:

“…the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2 concentration is referred to as the ‘equilibrium climate sensitivity’ (unit is °C), and is often simply termed the ‘climate sensitivity’.”

Which also points to the difference between “transient” and “equilibrium” sensitivity, which is another can o’ worms (as in, which of these is effectively measured by a given calculation?).

SOD,
Delta F from TOA to space is zero when the increase in temperature has leveled out to a new level. If the surface T is constantly increasing rather than leveled out, the delta F at TOA depends on the rate of heating (due to storage terms). You need another choice of sensitivity definition. If it is the impulse response, this may do, but here storage (mainly sea water) complicates the issue.

SOD,
If you are claiming it is the direct radiation through the atmosphere window to space that is the delta F you are referring to, that is a better choice for CO2, but even more wrong for gases that close the window (CH4, etc.).

SOD,
It occurred to me that when you talk about delta F, you may be referring to change in equivalent solar flux intensity that would change the ground temperature by that amount if there were no greenhouse gas. I do not dispute that definition.

The top of atmosphere forcing from double CO2 must be determined using a line-by-line radiation code that computes directional transmittance by integrating over the hemisphere, and which includes the actual measured water vapour profiles. The HARTCODE program with the NOAA water vapour profile was used to calculate the effects of changing surface temperature (Ts), water vapour and CO2 content on the outgoing longwave radiation (OLR). The OLR is broken into two components, the radiation from the surface St and the radiation from the atmosphere Eu.

Line 5 of the spreadsheet shows the unperturbed values.
The red lines (lines 6 – 17) show the effects of changing the surface temperature.
The blue lines (lines 18 – 29) show the effects of changing the water vapour content.
The black lines (lines 30 – 41) show the effects of changing the CO2 content.

SoD writes a 1 Celsius increase in Ts would cause a no-feedback OLR increase of 3.3 W/m^2.
Row 15 shows a 1 Celsius increase in Ts causes the OLR to increase by 3.79 W/m2. This includes 1.39 from the surface (St) and 2.40 W/m^2 from the atmosphere (Eu). The no-feedback climate sensitivity = 1/3.79 = 0.264 K/(W/m^2). The IPCC adopts the value 0.30 K/(W/m^2), which is 13.6% more than estimated by HARTCODE.

Row 41 shows that doubling CO2 causes the OLR to decrease by 2.52 W/m^2. This includes 1.45 from St and 1.05 W/m^2 from Eu. Note that this is only 68% of the 3.71 W/m^2 estimate using the IPCC adopted formula 5.35Ln(2).
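The ratios quoted in these two rows are easy to verify; 5.35·ln(2) is the IPCC simplified forcing expression cited above:

```python
import math

sensitivity = 1.0 / 3.79             # K/(W/m^2), from the HARTCODE row-15 slope
print(round(sensitivity, 3))         # 0.264
print(round(0.30 / 0.264 - 1.0, 3))  # 0.136: the IPCC value is ~13.6% higher

ipcc_forcing = 5.35 * math.log(2.0)  # ~3.71 W/m^2 for 2xCO2
print(round(2.52 / ipcc_forcing, 2)) # 0.68: HARTCODE gives ~68% of the IPCC forcing
```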

Using the proper water vapour content is critical. This graph shows the huge difference in optical depth change with doubled CO2 between assuming the USST 76 versus the actual NOAA water vapour profile. Using the USST 76 water vapour profile overestimates the optical depth change of double CO2 by 94%!

IPCC forcing is not calculated at the TOA but at the tropopause after allowing the stratosphere to adjust to the change in CO2 concentration by cooling. Above the tropopause, an instantaneous change in CO2 causes a decrease in forcing with altitude because the stratosphere warms with altitude and increasing CO2 increases radiation which has a cooling effect.

If you use the global average specific humidity and the global average temperature, you get greater than 100% RH at the surface. The presence of cloud cover explains much of the difference between the total column precipitable water of the clear sky US 1976 standard atmosphere and global average column precipitable water. You can’t simply pretend that clouds aren’t there and assume that all water in the atmosphere is in the form of vapor.

Thanks for your reply. I am aware that the IPCC uses the tropopause for the IPCC forcing definition. I don’t know how to estimate the tropopause forcing from the OLR change. We need to know this because the satellites measure the TOA OLR, not the OLR at the tropopause. If the change in OLR is 2.51 W/m^2 due to double CO2, holding surface temperature constant, how do I calculate the change in OLR at the tropopause, and exactly how do I adjust this to somehow allow for stratospheric cooling? Is there a formula?

The NOAA data for 2010 shows the global average specific humidity is 10.633 g/kg at 1000 hPa. The corresponding 2010 global average temperature is 15.522 C. The NOAA relative humidity is 78.235%. I don’t get a RH greater than 100% at the surface.

I don’t see how cloud cover can explain the difference between US 1976 std atmosphere and the NOAA water vapor profile. The NOAA specific humidity in g/kg is the mass of water vapour per kg of air. Are you suggesting the NOAA specific humidity includes the amount of liquid water in clouds? I don’t think so!

I am not pretending that clouds aren’t there. I am assuming that the cloud cover fraction does not change when CO2 is doubled, which may not be accurate.

Dr. Spencer comments that the diagnosed feedback of the FGOALS model of 0.77 W/m^2/K is much less than the known feedback of 2 W/m^2/K of the model.

M&F used the HadSM3 climate model to show that an accurate feedback could be diagnosed from the model output. But the model experiment used an instantaneous quadrupling (!) of the CO2 content, then held constant, so the feedback signal was huge compared to any radiative forcing noise. In the real world, the radiative forcing noise is large compared to the feedback signal when CO2 increases by only 2 ppm/year.

M&F changed the averaging time of the model output, which Spencer agrees with. But M&F changed the depth of the mixed ocean layer from 50 m to 100 m. For diagnosing feedbacks from satellite data, the time scales of variability affecting the data are 1 to only a few years. On those short time scales, the equivalent mixing depths are pretty shallow. Spencer thinks 50 m may be too deep.

The Figure 2 above shows that changing the mixed layer from 50 m to 100 m would have an insignificant effect on the feedback parameter. So why does Figure 3 of M&F 2010 show a large change to 110 m of mixed ocean layer?

“However, disadvantageously, including non-instantaneous processes clearly blurs the distinction between forcing and feedback as there is no longer a clear timescale to separate the two; further including these processes in the forcing incorporates more uncertain aspects of a climate models response [Forster et al., 2007].”

“Let’s say that if such a thing as climate sensitivity exists and can be measured then climate feedback = 1/(climate sensitivity) by definition.”

I suppose one can define things as one wishes, but this is a large departure from the classical (Bode 1945) definition and is, I think, the source of much confusion. If we have a system with a feed-forward transfer function A (which the IPCC refers to as the no-feedback sensitivity k, taken as a constant independent of frequency) and feedback B(s) as depicted here, the classical definition of system sensitivity is the gain from input to output, which is given by the feedback equation A/(1 – A*B(s)). In “normal” formulations, the feedback is not 1/sensitivity but rather B(s). Equally confusing (perhaps more so) is the re-definition of what constitutes positive feedback. Whereas in the classical formulation the sign of the feedback = sign of B(s), climatologists instead define positive feedback as any B(s) such that A/(1 – A*B(s)) > A. This is a particularly poor formulation, as it obscures the fact that the sign of B(s) can change if it contains two or more poles, because it is a complex function of frequency (s = iω) and each pole contributes 90° of phase shift.
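For concreteness, the classical feedback equation this comment refers to, evaluated with illustrative, roughly IPCC-scale numbers for A and B (these particular values are my assumption, not Bode’s or the IPCC’s):

```python
def closed_loop_gain(A, B):
    """Classical feedback equation: forward gain A, feedback transfer B.

    In the climate analogy, A is the no-feedback sensitivity (K per W/m^2)
    and B the net feedback strength (W/m^2 per K).
    """
    return A / (1.0 - A * B)

A = 0.3  # K/(W/m^2)
print(round(closed_loop_gain(A, 2.0), 4))   # 0.75 -- amplified, since 0 < A*B = 0.6 < 1
print(round(closed_loop_gain(A, -2.0), 4))  # 0.1875 -- damped when B is negative
```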

“While there is a substantial time lag between forcing and the temperature response due to the heat capacity of the ocean, the radiative feedback response to temperature is _nearly simultaneous_ with the temperature change.”

Figure 3 does not dispute this point, but instead notes that at zero lag the unknown radiative forcing (N) will make it difficult to isolate the feedback among the measured TOA flux. So, according to SB11, it’s still expected to be there at zero lag, but the signal is confounded most at that point (LC11 try to go out to further lags to retrieve the signal).

My take on what David is saying is that if TSI is above the TSI mean for a period then temperature will rise; if TSI is below temperature will fall; the rates of temperature response will depend on the direction of the TSI movement; for instance if TSI is above the mean but decreasing towards the mean then temperature will still be increasing but at a slower rate. When TSI crosses the mean the temperature response also changes direction, without lag.

[…] Murphy and Forster 2010 used a similar method in their response to Spencer and Braswell 2008. Science of Doom has been looking into this as well, and gives some background on the origins of why this is […]

Many thanks for researching and writing a detailed post on this important issue. You don’t mention Lindzen and Choi’s recent research, which involves regressions at varying lags. It is very relevant to the issues that you discuss, I think.

I was going to point out that you had terminologically confused climate sensitivity with the climate feedback factor (or, as I prefer, to avoid confusion with usage of the term ‘feedbacks’, climate response factor). But I see that Nick Stokes has already pointed this out.

I would venture that your question is NOT fair. SoD has said that he/she is taking this one piece at a time, and has done a very meticulous job of investigating each piece. A summary of the whole “big issue” in one comment will inevitably be unsatisfactory, and will not include specific, detailed analysis that this site has become known for. As Steve says at CA, OT discussions regarding the “big issue” turn every thread into the same thing.

I agree he’s done a good job investigating the many separate pieces, but I think it’s very easy to lose sight of perspective going into as much micro detail as he has – most of which is ‘noise’ in regards to the fundamental question. In the end, it boils down to the combined net feedback of water vapor and clouds, and this should be the primary focus, IMO.

I certainly don’t dispute that the physics and data supports a likelihood of some effect or that humans can and have influenced climate to some degree, but I firmly believe the magnitude of 3 C or more being predicted cannot be supported by any real science or data – just what amounts to guessing, which is not real science, let alone something any kind of public policy should be based upon.

And I’ve yet to see any pro-AGW proponent provide a falsification test, which is why I’m asking. Maybe SoD doesn’t really believe in the 3C rise or is not convinced by the purported evidence. Maybe it’s not fair to ask for a summary. Anyway, I thought I’d ask. It’s up to him if he wants to answer.

Opinions are often interesting and sometimes entertaining. But what do we learn from opinions?

It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?

Might I ask how it’s possible that watts of GHG ‘forcing’ can have a 3x greater ability to warm the surface than watts from the Sun, especially since each incident watt in the system causes proportionally less and less warming?

That’s because they don’t. GHG forcing is expressed as W/m² precisely so it can be compared to other forcings such as an increase in TSI. When you calculate the change in TSI, you need to correct for the surface area of the sphere and albedo. A change of solar forcing of 3.7 W/m² would be the same as a change in the solar constant of 1368*3.7/239 = 21 W/m² or 1.55%.
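For what it’s worth, the conversion above can be sketched in a few lines (my own illustrative code, using the same TSI and albedo values; the factor 4/(1−A) is equivalent to the 1368/239 ratio used in the comment):

```python
# Convert a TOA radiative forcing into the equivalent change in the
# solar "constant" (TSI). Values follow the comment above.
TSI = 1368.0        # solar constant, W/m^2
ALBEDO = 0.3        # planetary albedo

def absorbed_solar(tsi, albedo=ALBEDO):
    """Globally averaged post-albedo solar flux (divide by 4 for the sphere)."""
    return tsi / 4.0 * (1.0 - albedo)

def forcing_to_tsi_change(forcing, albedo=ALBEDO):
    """TSI change that produces the given globally averaged forcing."""
    return forcing * 4.0 / (1.0 - albedo)

print(round(absorbed_solar(TSI), 1))                      # ~239.4 W/m^2
print(round(forcing_to_tsi_change(3.7), 1))               # ~21.1 W/m^2
print(round(forcing_to_tsi_change(3.7) / TSI * 100, 2))   # ~1.55 %
```

So a 3.7 W/m² forcing corresponds to roughly a 1.5% change in the solar constant, as the comment states.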

I thought watts of GHG ‘forcing’ were supposed to be equivalent to watts of post albedo solar power, or that the 3.7 W/m^2 of ‘forcing’ from 2xCO2 is supposed to be the same as if the post albedo solar power were to increase from about 240 W/m^2 to 243.7 W/m^2. Is this not correct?

I’m not sure I understand what you’re doing there. Are you saying that a doubling of CO2 is not equivalent to an increase in post albedo solar power of 3.7 W/m^2 but instead an increase in post albedo of 21 W/m^2 (or an increase from about 240 W/m^2 to 261 W/m^2)???

Black body radiation from the Earth’s surface is the primary negative feedback on any temperature rise DT. The differential of Stefan Boltzman’s law gives the effective feedback parameter:

DS/DT = 4*sigma*T^3 (= 3.75 W/m2K-1 for the effective emission temperature T = 255K; at the 288K surface it would be ~5.4)

Hence a temperature rise of about 1C for a doubling of CO2. The feedback estimates used by IPCC models concern water vapour and clouds. The total feedbacks used range from +1.6 to 2.5 i.e. an average total positive feedback of ~ 2.0 W/m2K-1. Only with these feedbacks does a doubling of CO2 lead to large temperature increases of between 2 to 5 deg.C. If however it turns out that total feedbacks are actually negative or zero then global temperatures would rise only 0.3 to 1.1 degrees C. So feedback is the make or break issue for significant AGW.
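As an aside, 4σT³ evaluates to ≈3.76 W/m² K⁻¹ at the 255 K effective emission temperature (at the 288 K surface it would be ≈5.4), which is where the quoted 3.75 comes from. A minimal sketch of the no-feedback and with-feedback arithmetic, using the comment’s feedback value of 2.0 W/m² K⁻¹:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2 K^4

def planck_response(t):
    """dS/dT = 4*sigma*T^3, the no-feedback radiative response."""
    return 4.0 * SIGMA * t ** 3

def delta_t(forcing, lambda0, feedback=0.0):
    """Equilibrium warming for a forcing, given a net feedback (W/m^2 K)."""
    return forcing / (lambda0 - feedback)

lam = planck_response(255.0)            # at the effective emission temperature
print(round(lam, 2))                     # ~3.76 W/m^2 per K
print(round(delta_t(3.7, lam), 2))       # ~0.98 C for 2xCO2, no feedback
print(round(delta_t(3.7, lam, 2.0), 2))  # ~2.1 C with +2.0 W/m^2/K feedback
```

The same formula with feedback near the top of the quoted range (2.5) pushes the result toward 3 C, which is the sense in which feedback is the make-or-break issue.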

The sun has brightened 30% over the last 4 billion years and current average solar radiation is 342 watts/m2. Assuming a slow linear increase of solar radiation with time gives a net forcing of about 0.025 watts/m2 every 1 million years. Taking a simple IPCC linear feedback factor of 2.0 W/m2K-1 to calculate past temperatures using ΔT*(4*sigma*T^3 − F) = ΔS then gives non-physical results, because going back in time the temperature eventually falls to the point where F = 4*sigma*T^3.
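A rough sketch of the commenter’s numbers (my own arithmetic, not theirs): the brightening works out to ≈0.026 W/m² per Myr, and with a fixed feedback of 2.0 W/m² K⁻¹ the net response 4σT³ − F crosses zero near 207 K, which is where the simple linear-feedback model diverges going back in time:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2 K^4

# Faint-young-sun forcing rate, per the comment's figures
S_NOW = 342.0        # current globally averaged incident solar flux, W/m^2
BRIGHTENING = 0.30   # fractional increase over 4 Gyr
YEARS = 4.0e9

per_myr = S_NOW * BRIGHTENING / (YEARS / 1.0e6)
print(round(per_myr, 3))    # ~0.026 W/m^2 per million years

# Temperature at which a constant feedback F = 2.0 W/m^2/K cancels
# the Planck response 4*sigma*T^3, making the model non-physical
t_runaway = (2.0 / (4.0 * SIGMA)) ** (1.0 / 3.0)
print(round(t_runaway))     # ~207 K
```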

Therefore this simple model must be wrong. Another possible model is that a balance between cloud albedo and H2O greenhouse effects enables the earth to self regulate its temperature. When solar radiation is below some optimum H2O greenhouse effects dominate while above it cloud albedo effects dominate.

The increase would be 240 + 3.7 W/m² = 243.7 W/m². If you agree that this is correct, why do you think that “watts of GHG ‘forcing’ can have a 3x greater ability to warm the surface than watts from the Sun”? Any increase in TOA or tropopause forcing from any source will increase the surface temperature by much more than the increase in temperature at the tropopause. Or to put it another way, the increase in radiative flux at the surface must increase faster than the increase of flux at the tropopause. Any feedback, such as increased water vapor at higher temperature depends only on the surface temperature, not the reason why the surface temperature increased.

Mind you I fully understand that the solar constant of 1367 W/m^2 has to be divided by 4 to get the global average because the Earth is a sphere (or very close to a sphere). 1367/4*(1-0.3) = 239 W/m^2 globally averaged post albedo.

It’s my understanding that the net change of 3.7 W/m^2 at the tropopause is for the global average or that the outgoing LW flux of about 239 W/m^2 (or whatever the flux is there) reduces by 3.7 W/m^2 (to 235.3 W/m^2). You’re saying this is not correct?

OK, I see the discrepancy. When I refer to post albedo power, I mean globally averaged post albedo power, i.e. TSI/4*(1-0.3) or about 240 W/m^2.

So getting back to the original question, how can 3.7 W/m^2 of additional power from GHG ‘forcing’ have a 3x greater ability to warm the surface than the same amount of additional post albedo power from the Sun, especially since each additional incident watt from the Sun causes proportionally less and less warming in the system?

how can 3.7 W/m^2 of additional power from GHG ‘forcing’ have a 3x greater ability to warm the surface than the same amount of additional post albedo power from the Sun

Please support this claim with some numbers or a literature citation.

When the surface temperature increases, the temperature of the atmosphere increases. That means the surface temperature must increase even more to achieve a net increase in flux. The reason the surface temperature increases doesn’t matter. Forcing of the same magnitude from an increase in TSI or an increase in CO2 will have the same effect on surface temperature. I am completely unaware of any factor of 3 difference between ghg forcing or solar forcing.

For clear sky, US76 atmosphere in MODTRAN, it requires an increase in surface temperature of 1.12 C to increase the TOA emission by 3.7 W/m². That increase in temperature causes an increase in emission from the surface of 6 W/m². That’s in the absence of any feedbacks. Doubling CO2 from 280 to 560 ppmv requires an increase of 0.98 C to restore radiative balance at 13 km. The forcing change calculated by MODTRAN at 13 km (tropopause) is only 3.4 W/m², probably because the stratosphere wasn’t allowed to equilibrate. There is no factor of three here. There is no reason I know of to believe that feedbacks are proportional to anything other than the change in temperature.
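The MODTRAN figures quoted above are consistent with plain Stefan-Boltzmann arithmetic at the surface (a sketch; this checks only the surface-emission step, not the radiative transfer itself):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2 K^4

def emission(t):
    """Black-body emission at temperature t, W/m^2."""
    return SIGMA * t ** 4

# A 1.12 C warming from a 288 K surface, as in the comment
t0, dt = 288.0, 1.12
d_emission = emission(t0 + dt) - emission(t0)
print(round(d_emission, 1))   # ~6.1 W/m^2 extra surface emission
```

So a ~1.1 C surface warming producing ~6 W/m² of extra surface emission while only ~3.7 W/m² escapes at TOA involves no factor of three anywhere.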

“Any increase in TOA or tropopause forcing from any source will increase the surface temperature by much more than the increase in temperature at the tropopause. Or to put it another way, the increase in radiative flux at the surface must increase faster than the increase of flux at the tropopause. Any feedback, such as increased water vapor at higher temperature depends only on the surface temperature, not the reason why the surface temperature increased.”

Let me elaborate on what I’m asking in a series of separate questions:

Do you agree that the 240 W/m^2 post albedo solar flux is forcing the climate system?

Do you agree that the 240 W/m^2 forcing the system from the Sun results in a net flux at the surface of about 390 W/m^2?

Do you agree that this accounts for all the physical processes and feedbacks in the system?

If not, why haven’t the feedbacks fully manifested themselves after billions of years of incident energy from the Sun?

Do you agree that in order to amplify +3.7 W/m^2 from 2xCO2 into +3C requires a net increase of 16.6 W/m^2?

“For clear sky, US76 atmosphere in MODTRAN, it requires an increase in surface temperature of 1.12 C to increase the TOA emission by 3.7 W/m². That increase in temperature causes an increase in emission from the surface of 6 W/m².”

Yes, and 3.7 W/m^2 x 1.625 = 6 W/m^2. Do you see how the so-called ‘no-feedback’ response of about 1.1 C is derived from the surface response to solar forcing (390/240 = 1.625)?

No, I do not agree that the net energy flux at the surface is 490 W/m^2, because if it were, the surface would be emitting 490 W/m^2 and not 390 W/m^2.

I do agree that in addition to the radiative flux at the surface there is about 100 W/m^2 of non-radiative flux from latent heat and thermals. However, this flux originates from the surface, and to the extent that non-radiative flux leaves the surface, it also returns to the surface somewhere else (mostly as the temperature component of precipitation, wind, weather, etc). If there is an imbalance – say more non-radiative flux is leaving the surface than is returning to the surface on average, non-radiative flux is just being traded off for radiative flux at the surface, requiring the surface to emit less to achieve equilibrium output power at the TOA.

“So you agree there is 390 W/m² radiative flux and in addition 100 W/m² non-radiative flux, but do not agree that the total is 490. Good one.”

The net energy flux entering and leaving the surface is 390 W/m^2 in the steady-state. Why is this so hard to understand? If this were not the case, and the net energy flux at the surface was 490 W/m^2, the planet would be in the process of warming to 32 C from 15C. If it were less than 390, it would be in the process of cooling to some temperature lower than 15C.

Do you understand the basic T^4 relationship between temperature and emitted power? Whatever temperature a body is radiating at (in this case the surface of the Earth), the amount of energy it is radiating has to be continually replaced or else the body will gain or lose energy and subsequently warm up or cool down.

I should more correctly say if “the net energy flux entering the surface was 490 W/m^2 with the surface still radiatively emitting 390 W/m^2, the planet would be in the process of warming to 32C from 15C.”

We’ve been through this before. The net energy flux entering and leaving the surface is less than 1 W/m². If you split between incoming solar and outgoing thermal, it’s ~168 W/m² solar in and 66 W/m² radiative and 102 W/m² convective out. Gross energy in and out is 492 W/m² (168 + 324 W/m² in and 390 + 102 W/m² out). Convective heat transfer keeps the surface cooler than it would otherwise be. Convection happens because a radiative heat transfer only atmospheric temperature profile is unstable. Your apparent inability to understand this as well as add and subtract correctly makes further discussion pointless.
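For readers following the arithmetic, here is the bookkeeping from that comment written out (values as quoted in the comment, which roughly follow Kiehl & Trenberth 1997; a sketch, not the paper itself):

```python
# Surface energy bookkeeping, values as quoted in the comment above.
solar_in = 168.0        # solar flux absorbed at the surface, W/m^2
back_radiation = 324.0  # downward longwave from the atmosphere
surface_emission = 390.0
convective = 102.0      # latent heat + thermals leaving the surface

gross_in = solar_in + back_radiation
gross_out = surface_emission + convective
net_radiative_out = surface_emission - back_radiation

print(gross_in, gross_out)   # 492.0 492.0 -- balanced to within ~1 W/m^2
print(net_radiative_out)     # 66.0 net radiative out
```

The gross fluxes balance; the net flux through the surface is near zero, which is the point being made.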

But the interesting discussion is RW’s method. What’s wrong with finding the amplification of today’s climate system (492/168 or whatever it is) and then assuming that the next additional watt, from solar, CO2 or whatever, will see roughly the same amplification?

He can’t work his way through the K&T97 or FK&T2009 energy balances. He insists they are double counting somewhere in spite of all attempts to show that they aren’t. It amounts to an idée fixe so any further discussion is pointless.

Don’t bother restating your point or asking your question again. As I have already promised I will just delete your repetitions. It’s only because others have come and answered that I feel bad about deleting your recent comments.

“Convective heat transfer keeps the surface cooler than it would otherwise be.”

Yes I totally agree with this. I’m not sure exactly why you think this is in conflict with anything I’ve said, but I tried to explain it among other things and apparently failed. I’m no longer ‘allowed’ to discuss this any further. Sorry.

[…] unmanageable if I were to repeat them in every post. However, for now I’ll mention the recent Science of Doom and Isaac Held posts, and then begin listing some of the difficulties I see (particularly without […]

So they are basing global warming on increases of the wind mixing waves, changing the reflectivity of the local surface of water in shoals? Not counting the convectional currents of the streams, or the up/downwelling of mixing, or how the clouds change the apparent reflectivity of the oceans and land masses. Dumb.

I suspect the uncertainty in climate sensitivity falls under Level 3 (without sufficient data) or Level 4. Can this be proven from first principles? There may be a mismatch between climate models and the phenomenon they are describing. If so, the models and their results may not be useful.

How is climate sensitivity as you define it related to the “climate sensitivity” as a change in temperature per doubling of atmospheric CO2? If the former is a variable, does it follow the latter is also variable?

If climate sensitivity is variable, can we determine its upper and lower limits and the shape of the probability curve through statistical analysis?

The MIT paper particularly Sections 3, 4 and 6.1 describes the types of uncertainty amenable to statistical analysis – whether it is possible to determine the correct probability curve. We often take it for granted that this is true, which leads to wrong models and wrong predictions.

[…] speaking a climate science blog; others (notable RealClimate, also notably James Annan, Tamino, Science of Doom) were covering that territory. Nor was it one of the many climate policy blogs. Rather its focus […]

My apologies, I don’t often visit.
I don’t agree with much in the headline post.

1.”First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.”

This assertion assumes a non-equilibrium planet. In an equilibrium planet, flux out=flux in. So unless flux in changes, flux out won’t either.
If you assume a non-equilibrium planet (one which is in the process of warming or cooling to make flux in = flux out) you can get any number you care to choose for sensitivity.

So I don’t much like your definition of this term, which I have always thought of as a consequence/drive, i.e. the equilibrium change in surface temp / change in (ugly term) “forcing”.

2.”If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.”

This amounts to a sensitivity (in my terms) of 0.3DegC/W/m^2. The surface sensitivity is actually half to one third of that – 0.095 to 0.15 DegC/W/m^2 (depending on evaporation change, noting that evaporation change is not a feedback but a direct determinant of surface temperature. This range is supported by direct evidence from the seasonal behaviour of the planet, see below.) Perhaps you could point me to where you calculated your zero feedback sensitivity. (Even eliminating evaporation change from the calculation I only get about 0.2DegC/W/m^2.)

3. It seems to me that TOA (note: NOT tropopausal, so NOT Radiative Forcing) flux changes don’t matter to the Surface, which is largely de-coupled from the TOA by:
a. The highly variable stratosphere, which accounts for 8-30% of the atmosphere. This region seems to be adapting to local flux imbalances at least on an hourly basis.
b. The highly variable tropopause, which from day to day can change by several kilometres in height (temperate/polar zones, and by a kilometre in the tropics) and by kilometres in depth, with consequential temperature variability.
c. clouds
d. the difficulty TOA has in heating the surface. The surface responds only to the following:
1). Absorbed Sunlight
2). Back-Radiation received at the surface. (the change in this is nothing like the same as a radiative imbalance at the TOA.)

[The diagram in IPCC AR4 WG1 Chapter 2 Fig 2.2 purports to show how the Tropopause temperature influences the surface. I have recently looked at Sonde measurements trying to find confirmation of the process illustrated in that diagram, but have found no support for the claimed process, whatever it is (could be magic…) It is apparent from sonde measurements that atmospheric flux imbalances result in temperature change at the point of imbalance, but do not propagate downwards in the manner suggested by the IPCC diagram.]

4. Where I live, the average solar forcing changes by 132W/m^2 from winter to summer, resulting in a change in average temperature from July to January of 16DegC, for a sensitivity of 0.12DegC/W/m^2. This is in my view, strong confirmation that an insignificant change in “Radiative Forcing” (ie stuff happening at the Tropopause not the TOA – how do they calculate that when the Tropopause is bouncing around all the time?) of 3 or 4 W/m^2 will have very little effect down here where the real people live.


In the spirit of fairness, can I at least get you to address my point about how designating the 1.1C as the ‘zero-feedback’ starting point (or so-called ‘Planck response’ of about 3.3 W/m^2 per 1 degree of warming) arbitrarily separates the physical processes and feedbacks in the system that will act on additional ‘forcings’ or imbalances, like from increased GHGs, from those physical processes and feedbacks that maintain and control the planet’s energy balance from the forcing of the Sun?

[…] In Part One I created a Matlab model which reproduced the same problems as Spencer & Braswell (2008) had found. This model had one layer (an “ocean slab” model) to represent the MLD with a “noise” flux into the deeper ocean (and a radiative noise flux at top of atmosphere). Murphy & Forster claimed that longer time periods require an MLD of increased depth to “model” the extra heat flow into the deeper ocean over time: Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010). For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations. […]

If I understand it, the radiative forcing of a gas is the change in W/m2 emitted out of the atmosphere due to the gas being present (all other things being constant), or perhaps not completely present but due to the change in concentration since the year 1750. Here is the fundamental problem with the approach: the value in W/m^2 is proportional to the radiation emitted by the ground surface, and thus it depends on the ground temperature. The values used to measure sensitivity ought to be intrinsic quantities, not the result of a multiplication by the quantity you are trying to find in the first place.

The way I think of it, at the top of the atmosphere you have the net downward solar radiation S = So/4*(1-A) with A=albedo, the transmitted ground radiation G*trans where trans is a transmission factor, and the upwards energy emitted by the atmosphere. This in turn is proportional to a factor Ctop times the ground radiation. Then
So/4*(1-A) = G*trans + G*Ctop = G*(trans + Ctop)

We can then solve for the ratio of G/So as
G/So = 1/4*(1-A)/(Ctop + trans)

This way the terms A, trans and Ctop are independent of the ground radiation or temperature (as well as solar), and one can get the sensitivity of trans and Ctop versus increases in greenhouse gas concentration, and thus the sensitivity of the ground temperature to them.
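The commenter’s one-layer bookkeeping can be sketched directly. The coefficients below are illustrative placeholders I chose so that G comes out near the familiar 390 W/m² surface value, not fitted or published numbers:

```python
# TOA balance from the comment:  So/4*(1-A) = G*(trans + Ctop),
# solved for the ground emission G.
def ground_emission(So, A, trans, Ctop):
    """Ground emission implied by TOA balance in the one-layer sketch."""
    return So / 4.0 * (1.0 - A) / (trans + Ctop)

# Illustrative: So=1368, A=0.3, and trans+Ctop ~ 0.61 gives G near 390 W/m^2
G = ground_emission(1368.0, 0.3, 0.10, 0.51)
print(round(G, 1))   # ~392 W/m^2
```

The point of the formulation survives the arbitrary coefficients: G depends only on So, A, trans and Ctop, so sensitivity can be expressed through changes in trans and Ctop alone.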

In order to analyze the atmosphere I would need to know, from published ‘radiative forcing’ numbers, the ground temperature assumed when the analysis was done, and the ‘base concentration at year 1750’ from which it was derived. It seems that the radiative forcing method obfuscates the problem of calculating a ground temperature rather than bringing light to it.

Wouldn’t help. Forcing alone doesn’t tell you all that much. You need the climate sensitivity. But the climate sensitivity should also be a function of temperature, as it takes a smaller forcing at a lower temperature to increase the temperature by 1 degree. I think. Not to mention that the forcing isn’t all that sensitive to surface temperature. A quick and dirty with MODTRAN tropical atmosphere has the forcing reduced from 4.2 to 3.8 W/m² when the surface temperature is dropped 10C.

Question 1: Since the colder air is, the less water vapor it can hold, is it true that it takes less energy to raise a cold air mass by a given delta T than to raise a warmer air mass by the same delta T, given the same RH?

Question 2: If the answer to Question 1 is yes, should the non-uniformity of the global average temperature anomaly be taken into account to determine climate sensitivity?

1. Assuming constant RH, yes. The converse is true too. It takes more energy to cool warm air by one degree than it takes to cool air at lower temperature by 1 degree.

2. Yes, and it is in the GCM’s. That, IMO, is also one of the reasons for polar amplification, i.e. the temperature changes faster at the poles than at the equator. That would be the absolute local temperature, though, rather than the local anomaly. In fact, it’s one of the reasons you need something as complex as a GCM to estimate climate sensitivity. Unfortunately, GCM’s have other problems, like accounting for the effect of aerosols when we don’t completely understand them and don’t have a historical record that’s worth a dime.

I don’t have any major issues with this article, but I think it can be credibly argued that it’s easier to think of and quantify sensitivity as dimensionless gain, such as dictated by Bode and established in control theory.

The 3.3 W/m^2 of increased TOA flux emission per 1C of warming is basically derived from the dimensionless gain of the system to post albedo solar forcing, which is about 1.6 (385/239 = 1.61). Where +1C equals about +5.3 W/m^2 of net surface gain and 5.3/1.6 = 3.3. An incremental gain less than the absolute gain of 1.6 indicates net negative feedback, and an incremental gain greater than 1.6 indicates net positive feedback.

The consensus sensitivity of 3.3C can be easily converted into dimensionless gain. +3.3C from a baseline of 287K requires about +18 W/m^2 of net surface gain and the so-called ‘radiative forcing’ from 2xCO2 is 3.7 W/m^2; and 18/3.7 equals a gain of about 4.8, which is larger (3x larger) than the absolute gain of about 1.6, indicating net positive feedback.
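RW’s gain arithmetic can be reproduced directly (a sketch using the comment’s own numbers; whether “dimensionless gain” is the right framing is the disputed point in this thread, not the arithmetic):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2 K^4

def surface_flux_change(t0, dt):
    """Extra black-body surface emission for a warming dt from t0."""
    return SIGMA * ((t0 + dt) ** 4 - t0 ** 4)

absolute_gain = 385.0 / 239.0          # the comment's system gain, ~1.61
d_surface = surface_flux_change(287.0, 3.3)
incremental_gain = d_surface / 3.7     # per 3.7 W/m^2 forcing from 2xCO2

print(round(absolute_gain, 2))    # ~1.61
print(round(d_surface, 1))        # ~18 W/m^2 for +3.3 C from 287 K
print(round(incremental_gain, 1)) # ~4.9, i.e. ~3x the absolute gain
```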

Moreover, when sensitivity is quantified as dimensionless gain it’s much easier to see the direction of non-linearity of the system to post albedo solar forcing, which is, probably not coincidentally, in the direction opposite to that consistent with or required for net positive feedback acting on imbalances (i.e. the incremental gain being greater than the absolute gain).

That is, each incremental watt of post albedo solar forcing above the global average results in progressively less than the average gain. This suggests that the net feedback acting on the climate system is not only negative, but actually gets stronger (more negative) as the system warms.

In reading some of my older posts in this thread, I should note that I don’t consider flux to be gained by the surface unless it’s actually being added to the surface, i.e. added in a way that affects or changes its temperature. Admittedly, I don’t know for sure what the formal use of language is to describe this, but I think it’s most commonly referred to as ‘net energy gained’.

This gets to the point I mentioned earlier that this arguably is effectively claiming watts of GHG ‘forcing’ have a significantly greater ability to warm the surface than watts forcing the system from the Sun.

My math skills are not very good, and neither are my skills in physics. I’ve wondered how to find simple functions that can illustrate issues related to global warming. It would be nice if there was a collection of such tools that one could play around with. Just being able to convert various measures of temperature and energy.
I think it is possible to produce some calculations in a more understandable way than is often done. I have tried to look at logarithmic functions to see how doubling of CO2 is illustrated. I will use 1750 as my starting point because I think it is a transient period in the Little Ice Age and before the increase in greenhouse gases. And also because 1750 is seen as the beginning of the industrial age.

And I found this site: Online Logarithmic Regression
This page allows you to work out logarithmic regressions, also known as logarithmic least squares fittings. http://www.xuru.org/rt/LnR.asp

Here you can place datapoints, from the assumption that for a doubling of CO2, you get a temperature increase of 1,1 (degree C): (x=400,y=1,1) (x=800,y=2,2) (x=1600,y=3,3)
The function you get from Xuru.org is then:

y = 1.586964545 ln(x) – 8.408241809

You then just search for this function in Google, and get an interactive diagram. Then you can read the change in temperature from x=200 (and y=0), as 200ppm CO2 and upwards. The next step can be to place 278 as the zero point. Temperature of 1750 as 0 (baseline) for 278 ppm CO2 (IPCC). You get into the Google diagram and find the y value for x=278 as 0,522593372
New function for year 1750 baseline:

y = 1.58696 ln(x) – 8.93008
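The Xuru.org fit can be reproduced without the website: two points determine a ln-curve exactly. A sketch of both steps (the small differences in the last digits versus the quoted coefficients are rounding in the transcribed values):

```python
import math

# Fit y = a*ln(x) + b through the assumed 1.1 C-per-doubling points,
# then shift the baseline so y = 0 at 278 ppm (the assumed 1750 level).
x1, y1, x2, y2 = 400.0, 1.1, 800.0, 2.2
a = (y2 - y1) / (math.log(x2) - math.log(x1))   # = 1.1 / ln(2)
b = y1 - a * math.log(x1)

print(round(a, 5))    # ~1.58696, as in the quoted function
print(round(b, 3))    # ~-8.408

offset = a * math.log(278.0) + b    # value at the new 278 ppm baseline
b_1750 = b - offset
print(round(a * math.log(278.0) + b_1750, 6))   # ~0 at 278 ppm
print(round(a * math.log(396.0) + b_1750, 3))   # ~0.56 C, 1750 to 2013
```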

This is a function of the relationship between temperature and CO2 concentration without any feedback. The next step can be to place the real increase in temperature from 1750 to 2013 into the diagram. This warming is 0,85C (GISS), which is both CO2 driven and natural variation. What we are interested in is the CO2 contribution to the global warming.

I will choose the sea level rise as an equivalent to global warming. The net energy uptake of the climate system will show up in sea level. And there is a close correlation between sea level and global temperature. See Vermeer and Rahmstorf, 2009. “We propose a simple relationship linking global sea-level variations on time scales of decades to centuries to global mean temperature. This relationship is tested on synthetic data from a global climate model for the past millennium and the next century. When applied to observed data of sea level and temperature for 1880–2000, and taking into account known anthropogenic hydrologic contributions to sea level, the correlation is >0.99, explaining 98% of the variance.”

And we can use calculations of how much this increase of sea level comes from an increase in CO2. See S. Jevrejeva, A. Grinsted, and J. C. Moore, 2009. “Anthropogenic forcing has been the dominant factor (with a contribution of more than 70%) in sea level rise (Figure 3c, inset) since 1900, and has been the main driving force (from 50–70%) of sea level rise since 1850. This finding is robust in all our experiments.”
It is then reasonable to use an anthropogenic contribution (CO2) of 60% of the temperature rise since 1750. 60% of 0,85 is 0,51. So we can use 0,51C as the CO2 driven increase in global temperature. As I understand it this is with feedbacks.

The next step is then to look into the logarithmic function y = 1.58696 ln(x) – 8.93008, which is the temperature anomaly without feedback. For year 2013, with CO2 concentration 396ppm, we get a temperature rise from 1750 of 0.562208765, which means that the CO2 part of warming between 1750 and 2013 is 0,56C.

The difference between forcing with and without feedbacks is then 0,05C, and surprisingly it comes out as a negative feedback. To sum up: the CO2 part of warming between 1750 and 2013 is 0,56C, with feedbacks it is 0,51C, and natural variation is 0,34C.

The next step is to calculate a new function of CO2 driven temperatures with feedbacks, and use the reduction of 0,05C, which is an 8,93% reduction. We then get the datapoints: (x = 278, y = 0) (x = 396, y = 0,51) and the function:

y = 1.441520492 ln(x) – 8.112331156

This function comes up interactive in Google. The x-axis is showing the concentration of CO2 in ppm. The y-axis is the temperature change since 1750 (278pph). Then you can read the change in temperature with increased concentration of CO2. With a doubling from 400ppm to 800ppm we would get an increase of 1,0C.
To use this function in prediction we have to assume an increase in CO2 level. This has been ca 0,5% in recent years. To see how this works out yearly in the future, we can calculate in Excel, from 396ppm. Place 396 in A1, (A1*0,005) in B1, (A1+B1) in C1. Then (C1) in A2, (A2*0,005) in B2, (A2+B2) in C2. Drag down the second row, and we get the CO2 ppm level for every year from 2013. The doubling of CO2 concentration from preindustrial level (556 ppm) will be reached in 68 years from 2013, which is in 2081. Then we will have 1,0C plus 0,34C natural temperature increase from 1750, and 0,49C increase from 2013. The 2C increase from preindustrial level will be composed of 0,34C natural warming and 1,66C CO2 forced warming. We use the y-axis value of 1,66 in the graph and find a CO2 ppm of 879. This is a level we will reach in about 160 years, around 2173.
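The Excel drag-down recipe above can be sketched analytically (the closed-form year count lands within a year or so of the comment’s figures because of rounding):

```python
import math

# CO2 grows ~0.5% per year from 396 ppm (2013); the with-feedback
# curve is the fitted function quoted in the comment above.
def years_to_reach(target, start=396.0, rate=0.005):
    """Years of compound growth needed to reach a target concentration."""
    return round(math.log(target / start) / math.log(1.0 + rate))

def warming_since_1750(ppm):
    """CO2-driven warming with feedbacks, from the comment's fitted function."""
    return 1.441520492 * math.log(ppm) - 8.112331156

print(years_to_reach(556.0))                 # ~68 yr to 2x preindustrial (2081)
print(years_to_reach(879.0))                 # ~160 yr to 879 ppm
print(round(warming_since_1750(556.0), 2))   # ~1.0 C at the doubling
```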

Stupid ases.

A calculation like this has many assumptions that will never be quite valid. I will call them stupid assumptions (abbr. Stupid as). I think the climate debate is full of them. All the approximations are the least troublesome. We rely on some temperature records and gas concentration records. We assume the value of 1,1C increase for doubling of CO2 without feedbacks. But when we come into the realm of natural vs anthropogenic driven change the stupidity grows. I don’t think all assumptions are stupid. Some scientists can also bring in more informed assumptions, but I think that I am not the one to judge that. What I think is one of the greatest problems in climate discussions is that most of the stupid assumptions are hidden behind models.
I know that much of what I am writing here is wrong. It doesn’t hold to scientific standards. But at the same time it is the most reasonable argumentation I can see, as an outsider and layman. Some old-fashioned statistical calculations can perhaps give a better insight into probable scenarios than obscure models. I don’t give much for fancy tipping-points and runaway scenarios. What scares me a little in the discussions is that some scientists are abusing their status to bring in their subjective fears dressed up as science. I think that when someone tells us that there will be a 2C increase in temperature from preindustrial level in 2040, or that ECS is 3,5, it says something about their lack of deeper understanding of climate change. And when others jump on their bandwagon it tells us about their inability to reflect soberly. I think we need people with a deeper understanding of climate dynamics.

To be fair, full equilibration in the climate models takes thousands of years. It’s so long that the models don’t appear to have equilibrated to the initial conditions during the approximately one hundred year spin up period. They also assume that much ghg forcing has been offset by sulfate aerosols up to about 1950. That’s getting a lot harder to justify, though.

I admit that I do not fully understand the equilibration matter. I understand that it is about the conditions that must be fulfilled to stop the TOA radiation imbalance. Most of this imbalance goes into the oceans now. I would like to know more about the kind of temperature gradient in oceans and in atmosphere that can change the net TOA radiation to equilibrium (as a kind of idealized state). This is perhaps too complicated to answer, as it depends on changes in time (seasonal) and place (altitude etc.). My thought is that one can follow the TOA imbalance by studying sea level. It would be possible to calculate the storing of energy historically. I would also think that there is something to be learned from this; hopefully what I wrote can be something in that direction. As for aerosols, when it comes to volcanoes, I cannot see that they have had a historical cooling effect. On records of volcano eruptions, the hundred years between 1900 and 2000 were the least eruptive in the last thousand years. I recommend Volcano Cafe as a very interesting site for this.

The problem is that sea level also depends on runoff from melting land based ice. There’s also been a significant contribution recently from removing large volumes of water from aquifers much faster than the refill rate. Unless you have a measure of land based ice volume or ocean temperature profiles, it’s difficult to separate the contribution of thermosteric expansion and increased total mass to the rise in sea level.

I left out isostatic rebound, which Frank included below, as another source of error in sea level measurement. The sea floor is sinking because the parts of the continents that were covered with ice during the last ice age are rising and the total mass of water in the ocean has consequently increased. That correction is the same order of magnitude as the change in sea level and isn’t well known.

I think sea level estimation is a work in progress. That is what science is for. The oldest sea level measurements are from Amsterdam, beginning in the 1680s. According to Shannon, the land level in the Netherlands was very stable over the last 5,000 years, with a subsidence of about 0.5 m per 1,000 years (not rebound elevation, if I understand it right). The tide gauges of Amsterdam are corrected for errors, and may be representative of sea level change for 200 years.

NobodyKnows: Changes in CO2 play the largest role in anthropogenic radiative forcing, but the contributions from other GHGs and aerosols are too big and/or uncertain to ignore as you do in your calculation. See Figure SPM.5 from the Summary for Policymakers in AR5 WG1. Sensible calculations require working with forcing from all sources, currently 2.3 (1.1–3.3) W/m², or about 7/10ths (0.4–0.9) of a doubling of CO2.
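For readers who want to see what "working with forcing from all sources" looks like, the energy-balance method behind Otto (2013) and Lewis & Curry (2014) is essentially one line of arithmetic. The numbers below are illustrative round values, not the papers' exact inputs:

```python
# Energy-balance estimates of TCR and ECS from observed changes.
# Illustrative round numbers, not the exact inputs of Otto (2013)
# or Lewis & Curry (2014).
F_2x = 3.7    # W/m^2, forcing from a doubling of CO2
dT = 0.75     # K, assumed surface warming over the period
dF = 2.3      # W/m^2, total anthropogenic forcing (AR5 best estimate)
dQ = 0.65     # W/m^2, assumed ocean heat uptake (TOA imbalance)

TCR = F_2x * dT / dF         # transient climate response
ECS = F_2x * dT / (dF - dQ)  # equilibrium climate sensitivity
print(f"TCR ≈ {TCR:.2f} K, ECS ≈ {ECS:.2f} K")
```

The whole game is in the inputs: the total forcing dF (aerosols above all) and the heat uptake dQ carry most of the uncertainty, which is why the choice of baseline period matters so much.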

I strongly dislike using 1750 as the starting date/reference point for measuring temperature change and anthropogenic forcing. It falls during the LIA, which may be an example of a large unforced cooling. We don’t have any idea of what the current change of 0.8 degC above “pre-industrial”, or goals of 1.5 or 2 degC above pre-industrial, mean, because no one living today has experienced pre-industrial conditions. No one in their right mind wants to go back to 1750 in terms of climate or energy use. The climate in 1750 is poorly known, and – while CO2 is accurately known – aerosol forcing is not. It makes far more sense to me to calculate backwards or forwards from today. When going backwards to estimate TCR (or ECS) with energy balance models, you can go back only as far as you think the data is trustworthy. Otto (2013) looked at the last four decades (with good evidence for no change in aerosols) and Lewis and Curry (2014) looked at the last one or two cycles of the AMO (65 and 130 years) to eliminate that source of unforced variability. If you haven’t read these papers, they are worth your time, since these papers and your calculations have the same goal. They will also help you begin to understand the difference between transient and equilibrium climate change.

There is probably little benefit from using sea level as a proxy for temperature. The formula that you cite linking temperature and sea level rise comes purely from fairly arbitrary curve fitting of temperature and sea level data, so it can’t tell us any more about past temperature than the temperature data itself. Rahmstorf thinks the IPCC has underestimated the danger of SLR and this formula allowed him to make some scary projections that are not grounded in the physical basis of SLR: thermal expansion, glacier melting, ice sheet melting and flow, storage in reservoirs, ground water depletion, and phenomena associated with glacial isostatic rebound.

Frank: “Changes in CO2 play the largest role in anthropogenic radiative forcing, but the contribution from other GHGs and aerosols are too big and/or uncertain to ignore as you do in your calculation.”

I have deliberately ignored some of the forcings, but I think the contribution of CH4 is important. In the Summary for Policymakers (SPM) the effect of CH4 is 57.7% as strong as the CO2 effect (forcing relative to 1750). This is far more than in other papers I have read. Other forcings are much smaller. But when it comes to the recent global warming, the smaller forcings can have a large effect.

“Greenhouse gases contributed a global mean surface warming likely to be in the range of 0.5°C to 1.3°C over the period 1951 to 2010, with the contributions from other anthropogenic forcings, including the cooling effect of aerosols, likely to be in the range of −0.6°C to 0.1°C. The contribution from natural forcings is likely to be in the range of −0.1°C to 0.1°C, and from natural internal variability is likely to be in the range of −0.1°C to 0.1°C. Together these assessed contributions are consistent with the observed warming of approximately 0.6°C to 0.7°C over this period. {10.3} ”

It looks like a temperature budget has to be made up. I don't have the background to go into this, but what has been written in the SPM makes me sceptical. The great variance allowed for the aerosol effect on temperatures, alongside the neglect and very small variance allowed for natural internal components, poses some questions about their understanding.

The latter part of the period from 1950-1980 was when there was some flap about global cooling and a new ice age. See graph from Wood for Trees. The models blame the relative lack of variation from 1950-1980 on aerosols. I strongly doubt they are correct. That was the negative phase of the AMO index. If that is indeed cyclic, the AMO index should have started declining about now.

Frank: If I really want to make up a disaster scenario with global warming, I would take the period of 1976 to 2007, with a 0.6C trend (call it delta T), and pretend that it is anthropogenic warming. Then I would use this period to calculate some TCS and ECS, call it science, and make up some predictions. And I would use some models, and cut it down into several smaller periods, to mystify it all. Perhaps I should mask it a little, and use the period from 1970 to 2010. I would also calculate the increase of CO2 for this period, a 65 ppm rise from 325 to 390, 1.6 ppm per year, and call it a 1% increase per year. And then I would ask some scientists to put their name on it. I think I would ask Alexander Otto, Friederike E. L. Otto, Olivier Boucher, John Church, Gabi Hegerl, Piers M. Forster, Nathan P. Gillett, Jonathan Gregory, Gregory C. Johnson, Reto Knutti, Nicholas Lewis, Ulrike Lohmann, Jochem Marotzke, Gunnar Myhre, Drew Shindell, Bjorn Stevens and Myles R. Allen.
But: I think I will stick to the period 1750 to 2014.
Ref. graph from Wood for Trees in DeWitt Payne's comment

Nobody knows wrote: “If I really want to make up a disaster … then I would ask some scientists to put their name on it. I think I would ask Alexander Otto … Nicholas Lewis …”

I’m not trying to “make up” disaster scenarios – quite the opposite. Nic Lewis, a prominent skeptic who posts occasionally at ClimateAudit, is among the authors of Otto (2013). Judith Curry is another prominent scientist who is skeptical of the “C” in CAGW. Their views are summarized in a report for the GWPF (an organization of skeptics).

They have estimated climate sensitivity from forcing and warming data and convinced the scientific community that the earth is likely warming about 1/3 less from anthropogenic forcing than expected given the climate sensitivity of the IPCC’s models. About half of the co-authors of Otto (2013) are also authors of the IPCC’s chapter on climate sensitivity. The debate has moved from whether there is a discrepancy between observations and models to the size of the discrepancy, reasons for the discrepancy, confidence intervals, and which approach to believe.

Since your goals appear similar to Lewis and Curry, I hoped you would appreciate links to their state of the art analyses. Then you can see what your ideas about SLR can add to the analysis. LC14 does mention SLR in their estimate of warming over the last 130 years (two cycles of the AMO). Lewis has posted links to data and code for LC14 at his website.

Frank: Thank you for the comment and the link to thegwpf.org. It is nice that Lewis is still working to get better estimates. But what about the others?
“the Otto et al. (2013) study. That study is notable because almost all of its other fifteen co-authors are also lead or coordinating lead authors of those chapters of the AR5 WGI report that are relevant to the question of climate sensitivity.”
If you use the fastest warming period in recent history as a basis for calculation of TCS and ECS, and if you have no idea that there may be some significant natural variation in this change, then I think you would not pass the lowest grade in statistics. But at the same time it makes you qualified to have responsibility for parts of the UN climate report.
I think the layman’s understanding of climate change (and I can speak for myself) is hampered by scientists’ arrogance. First, if you are making some mistakes as a scientist, it is very educational to have some explanation of what went wrong. Most science includes reflection on the process of understanding difficult topics. And second, the models that are used should have some kind of transparency. Every single model should answer some basic questions about what assumptions lie behind it and what the results of calculation are on some important issues (temperature gradients in the ocean and atmosphere, SST after some time has passed, TCS, ECS and so on). The scientific process itself is perhaps hampered by something of the same.
The most fundamental question when it comes to sensitivity is: How can you say something of climate sensitivity without having a good measure of natural variation? It looks like there is a great bias in much of the recent presentation and prediction, in that natural variation is underestimated, and that it goes unnoticed by many people who pretend to understand.

Nobodyknows: “The most fundamental question when it comes to sensitivity is: How can you say something of climate sensitivity without having a good measure of natural variation?”

You’ve got that right. I like to say that either the climate sensitivity of GCMs is too high OR they don’t produce enough unforced variability. On the other hand, those who believe the hiatus proves climate sensitivity must be low are ignoring the problem of unforced variability.

Nobodyknows wrote: “If you use the fastest warming period in recent history as a basis for calculation of TCS and ECS, and if you have no idea that there may be some significant natural variation in this change, then I think you would not pass the lowest grade in statistics. But at the same time it makes you qualified to have responsibility for parts of the UN climate report.”

Otto (2013) looked at the period 1970-2010 (decade by decade and as a whole), because we had good measurements showing that aerosols didn’t change appreciably during this period, eliminating the greatest source of uncertainty in calculating climate sensitivity. Despite the rapid warming in some decades and little warming after 2000 (partly due to a weaker sun and volcanoes), all four decades gave similar results for climate sensitivity. Pinatubo was mostly an “intra-decade” event and the authors discounted the modestly higher sensitivity from that period. So what looks to you like big decadal changes in the warming rate did not produce a large dispersion in calculated climate sensitivity – or at least the dispersion is modest when compared to the confidence interval for calculated climate sensitivity. (See Figure 1.) If I remember correctly, the raw data is available at Nic Lewis’s website – see if you agree.

Rightly or wrongly, Otto (2013) accounted for unforced variability using estimates from the climate model output. Given that the paper was a short communication, the lack of full discussion about this choice isn’t surprising. I personally think the paper was remarkably candid about a conclusion that must have made many of the authors extremely uncomfortable. The large number of co-authors suggests a preference for safety in numbers and a desire to make a consensus statement that both sides could accept.

Lewis and Curry tried to eliminate what they believe is the largest source of unforced variability – the Atlantic Multi-decadal Oscillation – by analyzing warming and forcing over one and two cycles of this oscillation (65 and 130 years). Simple curve fitting suggests that the peak-to-trough contribution of the AMO to GMST is about 0.25 degC. Their analysis is helped by the larger dynamic range of the change in temperature and forcing, but limited by the larger uncertainties associated with older data, particularly aerosols.
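The "simple curve fitting" mentioned above can be illustrated on synthetic data: generate a linear trend plus a 65-year sinusoid, fit both by least squares, and read off the peak-to-trough amplitude. Everything here is synthetic; the generated amplitude (0.25 K peak-to-trough) is only chosen to mimic the figure quoted in the comment.

```python
import numpy as np

# Synthetic GMST: linear trend + 65-year oscillation + noise.
rng = np.random.default_rng(0)
years = np.arange(1880, 2011)
t = years - years[0]
true_trend = 0.007   # K/yr
true_amp = 0.125     # K, i.e. 0.25 K peak-to-trough
temps = (true_trend * t
         + true_amp * np.sin(2 * np.pi * t / 65.0)
         + 0.05 * rng.standard_normal(t.size))

# Least-squares fit of intercept + trend + fixed-period sinusoid.
# Using both sin and cos keeps the problem linear in the parameters.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 65.0),
                     np.cos(2 * np.pi * t / 65.0)])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
amp = np.hypot(coef[2], coef[3])
print(f"fitted peak-to-trough ≈ {2 * amp:.2f} K")
```

The fit recovers roughly the 0.25 K built in. Note the limitation Frank describes: with only two cycles of data, trend and oscillation are not fully separable, and the fit says nothing about whether the oscillation is really periodic.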

Fortunately, Otto (2013) and LC (2014) reach similar values for the best estimate for TCR and ECS. If unforced variability is high, then this agreement is a matter of luck.

Nobodyknows wrote: “I think the layman’s understanding of climate change (and I can speak for myself) is hampered by scientists’ arrogance. First, if you are making some mistakes as a scientist, it is very educational to have some explanation of what went wrong.”

Few ethical scientists are willing to undergo a soul-searching analysis of their mistakes and PUBLISH the results. The clear statements in Otto (2013) about the analysis they currently think is most definitive are about all you should expect of humans – especially in a politically charged atmosphere.

Few ethical scientists are willing to undergo a soul-searching analysis of their mistakes and PUBLISH the results.

Few being a number that is practically identical to zero. Corrigenda are occasionally published and there are even more rare outright retractions. But, IIRC, most retractions are fraud related. Somebody fudged data and the coauthors and reviewers didn’t catch it before publication.

Frank: “There is probably little benefit from using sea level as a proxy for temperature. The formula that you cite linking temperature and sea level rise comes purely from fairly arbitrary curve fitting of temperature and sea level data, so it can’t tell us any more about past temperature than the temperature data itself. ”
I think you are right, and that I have been seduced by the good fit.

The 1.1K of so-called ‘no-feedback’ surface warming per 3.7 W/m^2 of GHG absorption is derived from the roughly 1.6-to-1 ratio of power densities between the surface and the TOA, i.e. 385/239 = 1.61; +1.1K corresponds to +6 W/m^2 of net surface gain, and 6/1.61 = 3.7.

The problem I have, if it’s agreed that so-called ‘no-feedback’ should be a linear increase in aggregate dynamics, is that the 1.6-to-1 ratio of power densities between the surface and the TOA is specifically offsetting post albedo solar power. That is, its physical meaning is that it takes about 1.6 W/m^2 of net surface gain to allow 1 W/m^2 to leave the system at the TOA, offsetting each 1 W/m^2 of post albedo solar power absorbed. In other words, it is specifically quantifying a linear increase in aggregate dynamics offsetting post albedo solar power entering, and is not connected to or quantifying a linear increase in aggregate dynamics offsetting GHG absorption.

The GHE is established to be driven by radiative resistance to cooling to outer space by radiation from the atmosphere into space, or simply the absorption of upwelling surface IR that would otherwise pass into space, which is subsequently re-radiated back downward towards (and not necessarily back to) the surface. That is, the atmosphere must above all make the push toward radiative balance with the Sun at the TOA via re-radiating absorbed IR up towards space, but in order to do that, it must also push back the other way, because absorbed IR is re-radiated by the atmosphere both up and down.

Since all of GHG absorption, aggregate or incremental, is not new energy being added to the system like post albedo solar power, and at any discrete layer the probability of re-emission is 50/50 up or down, the intrinsic surface warming ability of GHG absorption can’t really be considered equal to that of post albedo solar power entering the system, which is not only all new joules added but also continuously radiated downward into the system (i.e. a continuous stream of radiant energy all flowing in the direction of the surface).

The question then is how does one actually quantify a linear increase in aggregate dynamics to offset GHG absorption. I say it should be the additional amount of net surface gain needed to pass the difference of surface IR not instantaneously transmitted into space, into space, in order to achieve radiative balance with the Sun at the TOA. That quantifies a linear increase in aggregate dynamics or a linear increase in adaption offsetting GHG absorption.

BTW, it’s ever perplexing to me why people don’t see this. I assume it’s agreed that the re-emission of GHG absorption, aggregate or incremental, is non directional, i.e. occurs with equal probability up or down, and that none of it is new energy to the system. Based on this alone it would not follow that incremental GHG absorption is equal to incremental GHG ‘forcing’ if GHG ‘forcing’ is specifically quantified as being equal to additional post albedo solar power entering the system (which it is if each is claimed to have the same ‘no-feedback’ surface temperature increase).

BTW, I also assume it’s agreed that IR emitted up in the atmosphere is part of the radiative cooling process of the system, i.e. is contributing to the push toward radiative balance at the TOA, which is achieved by upwelling IR from the atmosphere that eventually passes into space. And that in general the constituents of the atmosphere, GHGs and clouds, both act to warm by radiating downward toward the surface and cool by radiating upward towards space.

RW: When a photon is absorbed, the resulting excited state is relaxed by collisions with other molecules much faster than any other process can occur. This is called Local Thermodynamic Equilibrium (LTE), and it exists up to about 100 km. Everything that happens where LTE exists depends on TEMPERATURE, not the wavelength of the photons being absorbed. 1 W/m2 of radiation of any wavelength is 1 W/m2 of HEAT. You can wave your hands about differences between OLR, DLR, SWR, scattering, re-emission etc.; they don’t make any difference.

On the other post, Pekka unnecessarily brings up Jim Hansen’s concept of “effective radiative forcing” – which is calculated from the output of a climate model of unknown validity. Normally, radiative forcing that is accurately measured in a laboratory can be trusted for homogeneous materials like GHGs. Non-homogeneous aerosols are more difficult to study in the lab. However, both aerosol and GHG forcing are derived from laboratory MEASUREMENTS. We also have accurate measurements of aerosols in the atmosphere made at visible wavelengths, where the atmosphere is transparent.

Two “forcings” can’t really be measured. 1) Aerosols have the ability to change the size of the water drops in clouds and therefore their interactions with SWR and LWR – the indirect aerosol effect. The indirect forcing by aerosols probably hasn’t been measured by observing changes in radiation in a laboratory. (DeWitt often says the indirect aerosol effect could be negligible.) 2) Some forcing agents create free radicals that destroy ozone. Some aspects of that process can be studied in a laboratory, but not the critical ones.

As best I remember, “effective” radiative forcing (from climate models) and “measured” radiative forcing (in the laboratory) are consistent – i.e. the same within uncertainty. Ignoring the indirect aerosol effect and ozone destruction, all W/m2 of forcing can be considered equivalent (IMO).

“When a photon is absorbed, the resulting excited state is relaxed by collisions with other molecules much faster than any other process can occur. This is called Local Thermodynamic Equilibrium (LTE), and it exists up to about 100 km. Everything that happens where LTE exists depends on TEMPERATURE, not the wavelength of the photons being absorbed. 1 W/m2 of radiation of any wavelength is 1 W/m2 of HEAT. You can wave your hands about differences between OLR, DLR, SWR, scattering, re-emission etc.; they don’t make any difference.”

Make any difference to what? Of course, 1 W/m^2 of upwelling IR absorbed by GHGs is still 1 W/m^2. I’m certainly not disputing that. The point is that if 1 W/m^2 of upwelling IR goes into any layer, whether there is LTE or not and whether the absorbed energy is thermalized or not, when that layer re-radiates that absorbed energy, 0.5 W/m^2 will be re-radiated back up in the same direction it was going pre-absorption, i.e. away from the surface towards space. Moreover, the energy of absorbed IR in the atmosphere is very short-lived and gets re-radiated fairly quickly; otherwise the atmosphere would never cool down at night, let alone by as much as it actually does. The dominant flow and exchange of energy is by radiation, not conduction.

The key point is that none of the IR energy captured by GHGs is new energy, i.e. new joules, added to the system, and the probability of a photon emission at any discrete level is 50/50 up or down, independent of the initiating mechanism of re-emission. This means the IR energy captured by GHGs, even though there is no way to trace the specific path of the energy, has (by and large) equal probability to be re-radiated up as it does down — no matter how many times it gets re-radiated, until the initially absorbed energy somehow finds its way out of the atmosphere. For this reason GHG absorption cannot really be considered equal to post albedo solar power in its intrinsic ability to ultimately warm the surface.

Remember, the issue is not whether additional GHG absorption should act to ultimately further warm the surface (I agree it should), but whether its intrinsic ability to do so is equal to post albedo solar power. Do you understand that it is effectively being claimed so if each is established to have the same ‘no-feedback’ surface temperature increase?

The GHE is specifically established to be a radiative resistance to outer space cooling by radiation from the atmosphere into space. From Wikipedia:

“The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases, and is re-radiated in all directions. Since part of this re-radiation is back towards the surface and the lower atmosphere, it results in an elevation of the average surface temperature above what it would be in the absence of the gases.[1][2] “

If GHG ‘forcing’ is specifically quantified as that forcing equal to post albedo solar power entering the system (which it is), how does it follow that incremental GHG absorption is equal to incremental GHG ‘forcing’ if not all of what’s absorbed is re-radiated back downward towards the surface?

Again, I assume it’s agreed that the constituents of the atmosphere, i.e. GHGs and clouds, both act to warm by radiating IR down towards the surface and to cool by radiating up towards space?

The other key point is that post albedo solar power is not only all new joules being added to the system, but is also a continuous stream of downward radiation all flowing in the direction of the surface, i.e. all acting to warm. Of course, a lot of it gets absorbed by the atmosphere, but again, those are all new joules added to the system, whereas the joules captured by GHGs must always be previously absorbed solar joules which have subsequently been re-radiated back up, either by the surface or atmosphere. The constituents of the atmosphere itself, GHGs or otherwise, are not a source of energy to the system. The joules captured by GHGs can only be previously absorbed solar joules that are simply ‘blocked’ from exiting the system through the TOA in the immediate present.

The system ultimately achieves radiative balance with the Sun by continuously re-radiating absorbed IR energy up towards space, right? This means GHGs in the atmosphere must always be making this push toward radiative balance with the Sun, which is ultimately achieved by the required amount of upwelling IR eventually passing through the TOA.

Another point is that the IR emission from the atmosphere not originating from clouds is thought to be narrow-band emission directly from GHGs, and not broad-band emission like that from a dense heated mass or body (where conduction is the dominant mode of energy transfer). In other words, the bulk of the constituents of the atmosphere, i.e. the O2 and N2, aren’t even doing the emitting, which means the IR absorbed by GHGs is moving through quite quickly and doesn’t persist for very long. This is why, as soon as the Sun sets, it starts to cool down almost immediately. It seems too many people think of the atmosphere as a layer of thick insulation wrapped around the Earth, when the Earth is really surrounded by only a very thin gas.
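A side note on the 50/50 re-emission point raised above: it can be checked directly with the textbook toy N-layer gray atmosphere, in which every layer absorbs all IR reaching it and re-emits half up and half down. Even with perfectly isotropic re-emission, solving the energy balances gives a surface flux of (N+1) times the absorbed solar flux, so isotropic re-emission is fully compatible with surface warming. A minimal sketch (a textbook idealization, not a model of the real atmosphere):

```python
import numpy as np

S = 239.0  # W/m^2, absorbed (post albedo) solar flux

def surface_flux(n_layers):
    """Equilibrium surface emission in a toy n-layer gray atmosphere.

    Every layer absorbs all IR reaching it and re-emits half up, half
    down (the 50/50 assumption). Unknowns: e[0] = surface emission,
    e[1..n] = per-side emission of each layer."""
    n = n_layers
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    # Surface balance: e0 = S + (downwelling from layer 1, if any)
    A[0, 0] = 1.0
    if n >= 1:
        A[0, 1] = -1.0
    b[0] = S
    # Layer i balance: 2*e_i = e_{i-1} (from below) + e_{i+1} (from above)
    for i in range(1, n + 1):
        A[i, i] = 2.0
        A[i, i - 1] = -1.0
        if i < n:
            A[i, i + 1] = -1.0
    return np.linalg.solve(A, b)[0]

for n in (0, 1, 2):
    print(n, surface_flux(n))  # 239, 478, 717: surface flux = (n+1)*S
```

The asymmetry does not come from the direction of re-emission; it comes from the boundary conditions: solar energy is deposited at the bottom, but radiation escapes freely only at the top.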

If you wish to present a simple model for understanding (semi)quantitatively the basics of the GHE, you must use physics to decide what you may keep constant when GHG concentrations change. Rather than doing that, you have effectively assumed that the ratio of the LWIR emission from the surface to OLR at TOA is constant (like 1.6). That’s an unjustified assumption, and that assumption is surely wrong. The ratio changes, and the rate of that change can be estimated from a more correct analysis.

The simplest approximate model that can be justified from physics, and that seems to be correct enough to be useful, is based on assuming a constant lapse rate. That’s an approximation to reality, but the assumption is based on the physics of convection and is not arbitrary, as your assumption is. When that assumption is combined with an accurate enough calculation of radiative heat transfer, we get the clear-sky GHE. Including clouds makes the calculation more complex, but much of the calculation remains valid also for the case where the properties and extent of clouds are kept fixed, in the spirit of a no-feedback response.

The assumption of fixed lapse rate is not similarly true for the atmospheric boundary layer near the surface, where diurnal variations are large, but it’s a fair approximation for the free troposphere that extends typically from around 1.5 km to the tropopause. The diurnal temperature variation is typically only about 0.3C in the free troposphere (at the surface the typical diurnal variation is roughly ten times stronger). You can find measurements of the diurnal variability from Seidel et al (2005).

The main properties of GHE are determined by the properties of the free troposphere, what happens in the atmospheric boundary layer is not equally essential (but does certainly influence the details).
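The fixed-lapse-rate picture can be reduced to a few lines: the planet must emit to space from an effective height where the temperature equals the emission temperature, and raising that height (by adding GHGs) while holding the lapse rate fixed warms the surface. The emission height and its rise below are illustrative assumed numbers, not derived values:

```python
# Fixed-lapse-rate sketch of the greenhouse effect. The emission
# height z0 and its rise dz are assumed, illustrative numbers.
sigma = 5.67e-8                  # W/(m^2 K^4), Stefan-Boltzmann constant
S = 239.0                        # W/m^2, absorbed solar flux
T_e = (S / sigma) ** 0.25        # effective emission temperature, ~255 K
lapse = 6.5e-3                   # K/m, assumed fixed lapse rate

def surface_temp(z_emit):
    # With a fixed lapse rate, the surface sits lapse*z_emit above
    # the emission level, which must stay at T_e.
    return T_e + lapse * z_emit

z0 = 5000.0    # m, assumed effective emission height today
dz = 170.0     # m, assumed rise in emission height from added GHGs
print(surface_temp(z0))                           # ~288 K
print(surface_temp(z0 + dz) - surface_temp(z0))   # ~1.1 K of warming
```

This is the sense in which the lapse-rate assumption replaces the fixed 1.6 surface-to-TOA ratio: in this picture the ratio is an output of the calculation, not an input.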

“If you wish to present a simple model for understanding (semi)quantitatively the basics of the GHE, you must use physics to decide what you may keep constant when GHG concentrations change. Rather than doing that, you have effectively assumed that the ratio of the LWIR emission from the surface to OLR at TOA is constant (like 1.6). That’s an unjustified assumption, and that assumption is surely wrong. The ratio changes, and the rate of that change can be estimated from a more correct analysis.

The simplest approximate model that can be justified from physics, and that seems to be correct enough to be useful, is based on assuming a constant lapse rate.”

Everyone seems to be skirting around the fundamental issue, which is whether a watt of GHG absorption really has the same intrinsic surface warming ability as a watt of post albedo solar power entering the system. That is what’s effectively being claimed if each is also claimed to have the same ‘no-feedback’ surface temperature increase.

The 1.1C of so-called ‘no-feedback’ from 2xCO2 is derived from this formulation:

dTs = (Ts/4)*(dE/E), where Ts is the surface temperature, dE is the change in emission (i.e. the change in OLR) and E is the total emission of the planet (total OLR).

Plugging in 3.7 W/m^2 for dE or the change in OLR, we get dTs = (287K/4) * (3.7/239) = 1.11K
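Redoing the arithmetic of the formula quoted above in code, with the same numbers as the comment:

```python
# No-feedback warming from dTs = (Ts/4) * (dOLR/OLR), the differential
# of the Stefan-Boltzmann T^4 relation applied at the TOA.
Ts = 287.0    # K, mean surface temperature
OLR = 239.0   # W/m^2, outgoing longwave radiation at TOA
dOLR = 3.7    # W/m^2, forcing from 2xCO2

dTs = (Ts / 4.0) * (dOLR / OLR)
print(f"dTs ≈ {dTs:.2f} K")  # ≈ 1.11 K
```

The arithmetic itself is uncontroversial; the debate in the comments below is about whether 3.7 W/m² is the right quantity to plug in for dOLR.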

The problem is not with the formulation itself (which is correct), but that there is nothing implicit in the formulation requiring the input variable ‘OLR change’ to be an instantaneous change. The 3.7 W/m^2 of incremental GHG absorption from 2xCO2 is just the instantaneous net absorption increase. All the above formula really does is validate the T^4 relationship between the surface and the TOA, which is a ratio of about 1.6 to 1, i.e. 385/239 = 1.61, meaning that for every 1.6 watts of surface radiation about 1 watt of radiation is emitted out the TOA. This relationship, though, is specifically that offsetting post albedo solar power entering the system (because it’s based on the 239 W/m^2 of post albedo solar power), and is not connected, physically or mathematically, to an amount offsetting aggregate GHG absorption prior to an imposed imbalance. This ratio of IR power densities physically means it takes about 385 W/m^2 of net surface gain to allow 239 W/m^2 to leave the system at the TOA, offsetting the 239 W/m^2 of post albedo solar power absorbed.

Of course, in each case, whether it’s +3.7 W/m^2 of GHG absorption or +3.7 W/m^2 of post albedo solar power, there will be a -3.7 W/m^2 TOA deficit that has to be restored. It is because of this trivial reason that arbitrarily warming the surface and atmosphere by the same amount needed to linearly offset additional post albedo solar power will also restore balance for additional GHG absorption.

I understand that aggregate GHG absorption prior to an imposed imbalance is roughly 300 W/m^2, though I don’t know if SoD has data on this (please correct me if I’m wrong). If at least roughly correct, it means about 90 W/m^2 is transmitted instantaneously through the whole of the atmosphere into space, leaving a deficit of roughly 150 W/m^2 that the constituents of the atmosphere must be passing into space in order to achieve balance with the Sun at the TOA. In order to pass that difference of about 150 W/m^2 into space, it only takes about +150 W/m^2 of net surface gain. The concept of ‘no-feedback’ should be a linear increase in aggregate dynamics offsetting aggregate GHG absorption prior to an imposed imbalance. The 239 W/m^2 entering from the Sun is the minimum surface gain with no GHE (assuming the same albedo), so the net result of the GHE is for the surface to gain an additional 150 W/m^2 that it wouldn’t otherwise gain without the GHE. The underlying mechanism of the GHE is that the atmosphere is largely opaque to upwelling surface emission, and that some part of the amount captured, i.e. ‘blocked’ from passing into space, is re-radiated back downward towards the surface, resisting the push toward radiative balance at the TOA by IR emitted up towards space. This requires the surface and the lower atmosphere to be emitting at higher rates (and thus be warmer) in order to ‘push through’ the required 239 W/m^2 out the TOA to achieve radiative balance with the Sun.

Maybe someone can explain what they think is the actual physical basis, i.e. the actual physics, in support of GHG absorption being equal to GHG ‘forcing’; that is, what the actual physics are that make the intrinsic surface warming ability of each equal to the other. As best as I can tell, it seems few people have ever given this any critical thought whatsoever; it seems loosely based on the notion that by reducing the IR flux out the TOA via added GHGs, the atmosphere and ultimately the surface must warm by some amount in order to restore balance, and that warming everything proportionally by 1.1C would restore that balance. Of course that is true, but I fail to see how that is a proper measure of a linear increase in aggregate dynamics offsetting GHG absorption (though I clearly see it is for post albedo solar power).

There’s much in what you write that doesn’t make any sense to me. I understand the standard theory and concepts needed to describe that, but you add concepts that I do not understand. You write, e.g.:

.. GHG absorption being equal to GHG ‘forcing’

In that, GHG forcing is clear. That’s the imbalance at TOA caused by GHG addition, when the atmosphere is left at its earlier state. But what do you mean by GHG absorption, and how can you imagine that GHG absorption could be equal to forcing? That does not make sense at all, because changes in both absorption and emission contribute strongly to the forcing.

Similarly it’s true that under certain conditions the ratio of emission from the surface is 1.61 times OLR at TOA, but what does that tell about the warming from additional CO2? It tells nothing, because the ratio is going to change and only a totally different analysis can tell, how much the ratio changes.

The heat capacity of the atmosphere is so small that the atmosphere reaches a balance much faster than the surface. Therefore the net energy flux is essentially equal at all altitudes. That means also that the net energy flux must be nearly equal at the surface and at TOA. If the imbalance is 1 W/m^2 at TOA it’s 1 W/m^2 also at the surface. What is not equal is the radiative part of that. At the tropopause and above the radiative net flux is essentially the total net flux; near the surface the share of convective energy transfer is large and net LWIR correspondingly less. The shares of convective and radiative fluxes within the troposphere depend on the temperature profile. Thus it’s always necessary to determine the temperature profile as part of the analysis.

In practice it’s often enough to use a fixed temperature profile that agrees well enough with the observed profile. When that is done, it’s not necessary at all to estimate the shares of radiative and convective processes, it’s enough to calculate only the radiation that reaches TOA, and that can be done directly based on the temperature profile.

RW: Once you accept LTE, then radiative transfer can be calculated using absorption cross-sections measured in the laboratory and the Schwarzschild eqn for a single wavelength:

dI/ds = n*σ*B(λ,T) – n*σ*I_0

This leads to the conventional explanation for the GHE and the radiative imbalance produced by rising GHGs. Since the solution of this differential equation requires numerical integration and evaluation at many wavelengths, you can’t solve it by hand or by intuition*. You can solve it online with MODTRAN or spectracalc.

* Intuition can tell you that if I_0 comes from below where it is warmer, dI/ds will be negative and will increase as n (the density of GHG) increases.
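
For concreteness, here is a minimal numerical sketch of that single-wavelength integration in Python. The absorber profile (a constant n*σ) and the layer temperatures are made-up illustrative values, not a real atmosphere; the point is only that the emergent radiance moves from the surface Planck value toward the Planck value of the topmost emitting layer as opacity increases.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, speed of light, Boltzmann

def planck_lambda(lam, T):
    """Planck spectral radiance B(lambda, T) in W/(m^2 sr m)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def schwarzschild_up(lam, n_sigma, layers):
    """Integrate dI/ds = n*sigma*(B - I) upward through isothermal layers.

    n_sigma: absorption coefficient in 1/m, assumed constant (illustrative).
    layers:  list of (temperature K, thickness m), surface layer first.
    The surface is assumed to emit as a blackbody at the first layer's T.
    """
    I = planck_lambda(lam, layers[0][0])
    for T, ds in layers:
        trans = math.exp(-n_sigma * ds)          # layer transmittance
        # exact solution of the ODE across one isothermal layer:
        I = I * trans + planck_lambda(lam, T) * (1.0 - trans)
    return I

lam = 15e-6                                               # 15 micron, in the CO2 band
profile = [(288.0 - 6.5 * k, 1000.0) for k in range(11)]  # ~6.5 K/km lapse rate
i_transparent = schwarzschild_up(lam, 0.0, profile)  # all surface radiance escapes
i_opaque = schwarzschild_up(lam, 1.0, profile)       # emission from the top layer
```

With zero opacity the emergent radiance equals the surface Planck radiance; with high opacity it collapses to the Planck radiance of the coldest, topmost layer, which is the essence of why adding GHGs reduces OLR in their absorption bands.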

“There’s much in what you write that doesn’t make any sense to me. I understand the standard theory and concepts needed to describe that, but you add concepts that I do not understand. You write, e.g.:

.. GHG absorption being equal to GHG ‘forcing’

In that, GHG forcing is clear. That’s the imbalance at TOA caused by GHG addition, when the atmosphere is left at its earlier state. But what do you mean by GHG absorption, and how can you imagine that GHG absorption could be equal to forcing? That does not make sense at all, because changes in both absorption and emission contribute strongly to the forcing.”

I’m not sure I fully understand your question. The 3.7 W/m^2 from 2xCO2 is incremental GHG absorption after the increased opacity from the added CO2 increases emission both upwards and downwards from all the contributing layers. But those changes are instantaneous, and 3.7 W/m^2 is the net addition of GHG absorption, i.e. it’s 100% a quantification of upwelling IR additionally instantaneously captured by the atmosphere and in no way accounts for what happens to the energy after absorption.

This 3.7 W/m^2 of incremental GHG absorption is what is fully quantified as GHG ‘forcing’. Now, because +3.7 W/m^2 of post albedo solar power and +3.7 W/m^2 of GHG absorption are both claimed to have the same ‘no-feedback’ surface temperature increase, effectively two things (I assume knowingly) are being claimed:

The use of the word ‘incremental’ is important because total GHG absorption prior to an imposed imbalance is not zero. If it were, there wouldn’t be a GHE in the first place.

Does this answer your question?

“Similarly it’s true that under certain conditions the ratio of emission from the surface is 1.61 times OLR at TOA, but what does that tell about the warming from additional CO2? It tells nothing, because the ratio is going to change and only a totally different analysis can tell, how much the ratio changes.”

Yes of course, but that’s out of the realm of the so-called ‘no-feedback’ calculation and the rules of linearity in calculating the intrinsic surface warming ability of forcings (i.e. changes that put the system out of energy balance).

The fundamental point is the 1.6 to 1 ratio does quantify a linear increase in aggregate dynamics offsetting post albedo solar power entering, but (arguably at least) doesn’t really for that offsetting GHG absorption. Take a minute to conceptualize what post albedo solar power actually is and is doing. It’s not only all new energy and the only significant energy source to the system, but is also a continuous stream of radiant energy all flowing towards the surface, i.e. is continuously acting to warm. GHG absorption, aggregate or incremental, is only prior absorbed solar energy that is ‘blocked’ from leaving the system in the immediate present. The atmosphere re-radiates the flux captured by GHGs both up and down (equally both up and down on a photonic level). It’s that some (and not all) of what’s captured by GHGs is re-radiated back downward towards the surface that acts to ultimately elevate the surface temperature above what post albedo solar power could do alone. The amount captured by GHGs which is re-radiated back up in the same direction it was going pre-absorption, i.e. away from the surface towards space, is not resisting the push toward radiative balance at the TOA and is not acting to ultimately warm the surface like post albedo solar power is.

Thus it does not follow that the intrinsic surface warming ability of incremental GHG absorption is equal to that of post albedo solar power entering the system, but this is what is being claimed by each having the same ‘no-feedback’ surface temperature increase. (BTW, I assume you agree that ‘no-feedback’ for +3.7 W/m^2 of post albedo solar power is about 1.1C).

BTW, I should ask — do you agree that the quantification of so-called ‘no-feedback’ is (or at least should be) a quantification of the intrinsic ability of a particular imposed imbalance to elevate the surface temperature above what it would otherwise be?

Climate sensitivity is by definition the increase in surface temperature that results from some specified change (like doubling CO2 concentration). No-feedback sensitivity is an artificial concept that tells about that change when influence of mechanisms that have been labeled as feedbacks is excluded. Estimating the value of such an artificial concept is usually possible only using models, because that’s typically the only way of preventing the feedback mechanisms from contributing to the estimate.

“Estimating the value of such an artificial concept is usually possible only using models, because that’s typically the only way of preventing the feedback mechanisms from contributing to the estimate.”

I don’t really understand this. What kind of models? BTW, I would not say it’s an artificial concept, but more just a theoretical concept.

Let me ask, do you agree it should be linear increase in adaption or a linear increase in aggregate dynamics?

RW: It is easier to think about how the earth behaves whenever it warms – for whatever reason. This is simpler than asking how much it will warm after a forcing.

Suppose the surface of earth were suddenly now 1 degK warmer everywhere. If the planet now radiates 3.7 W/m2 more OLR to space than before, it is behaving like a simple blackbody and ECS will be about 1 degC for a doubling of CO2 (3.7 W/m2). If rising humidity or other feedbacks limit the increase in OLR to 1.85 W/m2 after 1 degC of warming, then ECS will be about 2. If OLR increases by only 0.9 W/m2, ECS will be about 4. And if the surface can warm 1 degC without radiating any more energy to space, a runaway greenhouse effect exists. If 1 degK of warming causes 7.4 W/m2 of OLR to be emitted, ECS will be about 0.5 degC.

This concept is called the climate feedback parameter and is measured in terms of additional W/m2 emitted to space per degK of surface warming (W/m2/K). Its reciprocal is equilibrium climate sensitivity (K/(W/m2)), which is usually multiplied by 3.7 W/m2/doubling to give degK per CO2 doubling.
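
The arithmetic in the two paragraphs above is easy to check. A small sketch (the function name is mine, and 3.7 W/m2 per doubling is the rounded value used in the comment):

```python
def ecs_from_feedback(feedback_param, forcing_per_doubling=3.7):
    """ECS in K per CO2 doubling, from the climate feedback parameter.

    feedback_param: extra W/m^2 emitted to space per K of surface warming.
    """
    return forcing_per_doubling / feedback_param

# The cases discussed above:
ecs_blackbody = ecs_from_feedback(3.7)   # ~1 K: planet behaves as a blackbody
ecs_positive = ecs_from_feedback(1.85)   # ~2 K: feedbacks halve the OLR response
ecs_strong = ecs_from_feedback(0.9)      # ~4 K
ecs_negative = ecs_from_feedback(7.4)    # ~0.5 K: strong negative feedback
```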

After CO2 doubles, will it take 0.5, 1.0, 2.0 or 4.0 degK of surface warming to emit an additional 3.7 W/m2 of radiation to correct the radiative imbalance and be at equilibrium again?

(I’ve oversimplified here by assuming uniform temperature, rounding off some numbers (using 3.7 W/m2 instead of 3.2 W/m2), and ignoring the possibility that emission to space occurs in the reflected SWR channel as well as the OLR channel. Due to the asymmetric distribution of land, GMST warms 3.5 degK every summer in the NH (before anomalies are calculated) and CERES monitors changes of about 10 W/m2 in OLR and SWR in response.)

Frank wrote: “It is easier to think about how the earth behaves whenever it warms – for whatever reason. This is simpler than asking how much it will warm after a forcing.”

RW wrote: “But I’m not asking that.”

Frank continues: You didn’t ask; I volunteered the info. Try it; things may be easier. Many times it is simpler to figure out what equilibrium “must be” than it is to calculate the whole path to a final equilibrium. This is true for equilibrium climate sensitivity (and indirectly feedbacks), which you have asked about.

If I suspend an object at 10 degC in a vacuum chamber at 20 degC, it will be an incredibly painful job to calculate the fluxes of radiation between the object and the walls of the chamber and the time course of the temperature change in both (or one if I make the heat capacity of the chamber much larger than the object). However, I can immediately say that the net flux between the two must be zero at equilibrium and deduce more from that constraint.

If one instantly doubled CO2 and OLR dropped by 3.7 W/m2, equilibrium will be restored once the temperature rises enough that the earth emits or reflects 3.7 W/m2 more than it did immediately after doubling. How much surface warming is required?

The earth emits different amounts of OLR and reflected SWR as GMST rises and falls about 3.5 degC every year. Observations from space tell us how the planet as a whole responds (in terms of radiation to space) to a change in surface temperature and how feedbacks interfere with that response. “Global warming” is different from seasonal warming because both hemispheres warm during the former.

1) The zero feedback model is the black body model with an emissivity of 1. The current conditions are of a gray body whose temperature is the surface temperature and whose emissivity is about 0.62 yielding net emissions of about 240 W/m^2. The fact that the emissivity is not 1.0 is a consequence of feedback and the resulting gain, where the closed loop gain of 1.61 is the reciprocal of the emissivity of 0.62.

2) If gc is the closed loop gain, go is the open loop gain and f is the feedback fraction, control theory tells us that 1/go = 1/gc + f. The open loop gain of the climate system is conventionally 1, which is obfuscated by quantifying sensitivity as degrees per W/m^2 instead of the dimensionless ratio of output power to input power that control theory dictates. In effect, the pedantic feedback model considers the Stefan-Boltzmann relationship as the open loop gain.
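
The numbers in points 1) and 2) can be reproduced in a few lines. This is only a check of the stated arithmetic (288 K surface temperature and 240 W/m^2 OLR are the assumed inputs), not an endorsement of the control-theory framing:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SURF = 288.0         # assumed global mean surface temperature, K
OLR = 240.0            # outgoing longwave radiation, W/m^2

emissivity = OLR / (SIGMA * T_SURF**4)   # effective gray-body emissivity, ~0.62
gc = 1.0 / emissivity                    # "closed loop gain", ~1.61
go = 1.0                                 # "open loop gain" per the comment
f = 1.0 / go - 1.0 / gc                  # feedback fraction from 1/go = 1/gc + f
```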

3) When an energized GHG molecule collides with O2/N2, the most likely result is the re-emission of an absorption band photon by the GHG molecule. At typical atmospheric energy levels, there is no viable mode of converting rotational/vibrational energy into the linear momentum of particles in motion. If an energized CO2 molecule collides with an unenergized CO2 molecule, the respective states of the two molecules may flip, but it still has no effect on the translational velocity of the CO2.

4) The emissivity of O2/N2 is near zero, thus no matter what its temperature is, it has almost no influence on the radiant energy emitted by the planet making concerns about the effect of the lapse rate mostly moot.

5) Line by line simulations show that 3.7 W/m^2 is the approximate decrease in power passed through the transparent window upon doubling CO2 from its pre-industrial level.

6) The IPCC metric of forcing is ambiguous, where an instantaneous change in net solar power at TOT is treated the same as an instantaneous decrease in power passing through the transparent window. The difference is that some fraction of the surface emissions absorbed by the atmosphere (the data says about half) eventually leaves through the top of the atmosphere and has no influence on the LTE temperature of the surface.

7) The tendency to conflate energy transported by photons (Planck emissions, GHG effects, etc.) with energy transported by matter (latent heat, thermals, molecules in motion, etc.) obfuscates the immutable fact that only EM energy matters to the EM balance of the planet.

8) If matter in the atmosphere is in LTE (by definition, it must be to quantify the LTE response to forcing), the photon energy absorbed by this matter (mostly water) is equal to the photons emitted by that matter, thus no net conversion from one form to another is happening.

9) Thermometers measure the combined effect of molecules in motion and photons colliding with the sensor. The molecules in motion have a temperature of their own per the kinetic theory of gases, while the photons are an indication of the temperature of distant matter. It’s a mistake to consider that these two manifestations of temperature must be in LTE with each other for the system to be in LTE. The LTE temperature at some point in the atmosphere is the combined influence of the two effects.

CO2isnoevil wrote: “When an energized GHG molecule collides with O2/N2, the most likely result is the re-emission of an absorption band photon by the GHG molecule.”

This statement is grossly incorrect for the troposphere and for most of the stratosphere – where Local Thermodynamic Equilibrium prevails. By definition, LTE exists where collisional excitation and relaxation are much faster than any other process. This ensures that the fraction of molecules in any excited vibrational state depends only on local temperature and the Boltzmann distribution, not the local radiation field. The rate of spontaneous emission (radiative cooling) depends only on the fraction of molecules in a vibrational excited state and on the lifetime of that state. Emission can be calculated using the Schwarzschild eqn and absorption cross-sections.
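
The Boltzmann-distribution point is quantitative: the excited fraction depends only on local temperature. A sketch for the CO2 bending mode at 667 cm-1 (degeneracy and higher states ignored, so this is order-of-magnitude only):

```python
import math

C2 = 1.4388   # second radiation constant h*c/k, in cm*K

def excited_fraction(wavenumber, T):
    """Boltzmann ratio of first excited vibrational state to ground state.

    wavenumber in cm^-1, T in K. Degeneracy ignored (illustrative only).
    """
    return math.exp(-C2 * wavenumber / T)

f_250 = excited_fraction(667.0, 250.0)   # ~2% of CO2 excited mid-troposphere
f_300 = excited_fraction(667.0, 300.0)   # warmer air, larger excited fraction
```

Note that the radiation field does not appear anywhere: under LTE the excited population, and hence the emission, is set by the local temperature alone.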

In the mesosphere and thermosphere, CO2isnoevil is correct, but our climate is controlled by what happens in the troposphere and stratosphere.

CO2isnoevil offers us no references to back up his statements. I can’t find any good references on this subject at the moment, but here are some lecture notes:

Statements 8) and 9) are also wrong. CO2isnoevil appears to live on a fantasy planet where the only way energy can escape a photo-excited molecule in the atmosphere is by the “re-emission” of a photon. This just isn’t true. Re-emission (technically “photoluminescence”) is faster than collisional relaxation in a few special circumstances: in very thin atmospheres, in fluorescence and phosphorescence, and in lasers.

How does latent heat from condensation of water vapor escape from the troposphere to space? That heat must lead to collisional excitation of a GHG and emission of a thermal IR photon. If the forward process exists, so does the reverse: absorption and collisional relaxation.

You write that you want to “clear things up”, but most of what you write is wrong.

Where did you figure this stuff out? Textbooks? Your own calculations? Experiments?

Let me pick a few points:

5) Line by line simulations show that 3.7 W/m^2 is the approximate decrease in power passed through the transparent window upon doubling CO2 from its pre-industrial level.

No. Line by line simulations show that 3.7 W/m^2 is the approximate decrease in power emitted from the climate system.

The “window” – unless you are using a definition that is unknown in atmospheric physics – means the 8-12 μm band where the clear sky atmosphere absorbs very little from the surface (note 1). Doubling CO2 doesn’t decrease power through the window because there is no CO2 absorption in that band.

4) The emissivity of O2/N2 is near zero, thus no matter what its temperature is, it has almost no influence on the radiant energy emitted by the planet making concerns about the effect of the lapse rate mostly moot.

CO2, O2, and N2 in a local portion of the atmosphere are all at the same temperature. This is because they collide a lot, sharing energy.

I explain it in Planck, Stefan-Boltzmann, Kirchhoff and LTE – under the sub-heading “Local Thermodynamic Equilibrium”. I cite physics textbooks so if you have a different point of view you need to provide a reference or a calculation.

The calculation is relatively straightforward so your calculation of the mean time between collisions and the typical time before emission of a photon will be very interesting.
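
For what it’s worth, that comparison can be sketched from hard-sphere kinetic theory. The collision cross-section, molecular mass, and the ~1 s radiative lifetime of the CO2 bending mode are rough textbook-scale values assumed here for illustration:

```python
import math

KB = 1.381e-23   # Boltzmann constant, J/K

def mean_collision_time(p, T, sigma=3e-19, mass=4.8e-26):
    """Mean time between collisions, hard-sphere kinetic theory.

    p in Pa, T in K; sigma (m^2) and mass (kg) are rough values for air.
    """
    n = p / (KB * T)                                   # number density, 1/m^3
    v_mean = math.sqrt(8 * KB * T / (math.pi * mass))  # mean molecular speed
    return 1.0 / (math.sqrt(2) * n * sigma * v_mean)

tau_coll = mean_collision_time(101325.0, 288.0)  # ~2e-10 s near the surface
tau_rad = 1.0                                    # assumed radiative lifetime, ~1 s
collisions_per_lifetime = tau_rad / tau_coll     # ~5e9 collisions per lifetime
```

With billions of collisions per radiative lifetime in the troposphere, an excited CO2 molecule almost always thermalizes its energy before it can “re-emit” a photon, which is the quantitative content of the LTE argument.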

Why does the central area of the CO2 band have such a low radiance, and why is the very center (667 cm-1) higher?

It’s because the local temperature of the gas and the emissivity of the gas determine its radiance. Therefore, the lapse rate is extremely important.
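
That answer is just the Planck function evaluated at different atmospheric temperatures. A sketch, with ~220 K assumed for the cold upper troposphere/tropopause and ~250 K assumed as a warmer stratospheric emission temperature:

```python
import math

C1 = 1.191e-8   # 2*h*c^2 in W/(m^2 sr (cm^-1)^4), wavenumber form
C2 = 1.4388     # h*c/k in cm*K

def planck_wn(nu, T):
    """Planck spectral radiance at wavenumber nu (cm^-1), W/(m^2 sr cm^-1)."""
    return C1 * nu**3 / math.expm1(C2 * nu / T)

b_band_center = planck_wn(667.0, 220.0)  # opaque band: emission near tropopause
b_very_center = planck_wn(667.0, 250.0)  # most opaque line: warmer stratosphere
```

The warmer the layer doing the emitting, the higher the radiance, so the extremely opaque band center that “sees” the warmer stratosphere produces the small peak at 667 cm-1.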

Lastly:

8) If matter in the atmosphere is in LTE (by definition, it must be to quantify the LTE response to forcing), the photon energy absorbed by this matter (mostly water) is equal to the photons emitted by that matter, thus no net conversion from one form to another is happening.

In common with many people writing blogs and commenting on climate blogs you don’t know what LTE means. I refer you back to the article where I’ve extracted statements from physics textbooks. LTE means that collisions dominate so that different molecules are at the same temperature (note 2) – that is, energy absorbed from photons by CO2 and water vapor gets shared with the N2/O2. It doesn’t mean photons absorbed = photons emitted by one type of molecule. LTE means that there can still be a net imbalance in radiation in a region.

Notes:

Note 1 – The “window” is not a window under cloudy skies and it is not completely transparent. There is a tropospheric ozone region of absorption and the water vapor continuum also absorbs across the band.
Note 2 – Obviously individual molecules don’t have temperatures. I leave aside the finer points in anticipation of you providing some evidence for your many unsupported (and plainly inaccurate) assertions. Some textbooks would be a good start.

Consider the following thought experiment.
If the atmosphere contained only O2 and N2 and I shine a laser through it in a band where the medium is transparent to that wavelength and put a very fast acting thermometer in the beam, the temperature would instantaneously increase when the beam is on and instantaneously decrease when the beam is off.

Now add a GHG with an absorption line matching the wavelength of the laser. The beam will quickly diffuse and the measured temperatures in the local vicinity will increase slightly, falling off with distance, but then quickly return to the kinetic temperature of the atmosphere once the laser is turned off.

The point is that a gas only emits and absorbs specific wavelengths, and the N2/O2 in the atmosphere is mostly transparent to visible light and LWIR. In the liquid or solid state, extreme collisional broadening spreads out the lines of a gas into a typical Planck spectrum, but gases do not act as black bodies in the sense of being broad band absorbers and emitters of energy.

You are also considering only the spontaneous emission of a photon by an energized GHG molecule. What I’m talking about is stimulated emission, which is a likely result either from a collision or when another photon is absorbed. Here is a good reference.

During the time that the electron shells of colliding gas molecules are interacting, an energized molecule will have rotated and/or vibrated so many times that any average kinetic effect is zero, and it’s not like a bat hitting a ball. What is the precise mechanism you propose that can convert vibrational motion of CO2 (there are no rotational modes) into an increase in the velocity of the N2 molecule that collides with it, or its own velocity for that matter? Bear in mind that the energy of a 15u photon is about the same as the 1/2 mv^2 kinetic energy of a gas molecule in motion. For the energy of a collision to increase the state of a GHG molecule, the 1/2 mv^2 energy must be much larger than the equivalent energy of a photon that will do the same thing.

The latent heat from condensation warms the liquid water droplets that the water vapor ultimately condenses upon and is returned to the surface as rain that is warmer than it would have been otherwise.

CO2isnoevil: If you want to get into subjects like stimulated (induced) emission, I recommend the article below. It starts with Einstein Coefficients and rigorously develops everything about radiative transfer in the atmosphere. The author is a physicist who arrives at slightly different answers than the IPCC and isn’t afraid to say so. Grant Petty’s book (A First Course in Atmospheric Radiation) is fairly comprehensive, cheap for a textbook, and probably lacks serious mistakes because it has been widely used.

If you want radiation transfer with and without LTE or with induced as well as spontaneous emission, you can find it in Harde’s paper. (I believe he calculates that about 1% of emission from CO2 is induced rather than spontaneous near the surface of the earth. Equation 60.) Most of the climate science community simply assumes LTE and ignores induced emission. Those turn out to be perfectly acceptable approximations. (I don’t claim to have mastered any of this material.)

A mechanism for collisional relaxation of vibrationally excited CO2 is trivial to imagine. Picture standing on one CO bond so neither attached atom appears to be moving. When vibrationally excited, the remaining oxygen is swinging from side to side through the linear position where the oxygen is found in the ground state. As seen from one CO bond, the swinging oxygen collides with an N2 or O2 and transfers all of its bending energy into translational energy of the N2 or O2. (Not every collision will cause relaxation.)

However, a mechanism for relaxation is totally unnecessary. Experiments prove that diluting vibrationally excited CO2 with N2 prevents “re-emission” of a photon.

The power passing through the transparent window is not just what passes between 8u and 12u (which BTW overlaps ozone absorption) but the integration across the entire spectrum of surface emissions scaled by the probability that a photon emitted by the surface will pass into space without being captured by a GHG. As GHG concentrations increase, more surface photons are intercepted by GHG’s and the integration of power passing through the transparent window decreases. Your concept of the transparent window is a primitive approximation that has no physical significance except relative to very coarse calculations.

BTW, I’ve written a high performance MODTRAN like library for my climate analysis tool (MODTRAN was far too slow and not easily integrated into my tool). It gets the same results as MODTRAN based on the same HITRAN line data, so rest assured that I have a good handle on how this works. My simulation has the same minor peak at 667 cm-1, although my absorption spectrum corresponding to your first picture has a corresponding feature at high altitudes, which is due to stratospheric ozone absorption that has a gap at about 667 cm-1 in a region of the atmosphere with almost no CO2 or water vapor.

Also, the emissions in absorption bands are certainly lower, but if you look carefully, it’s only about 1/2 of the power that would be emitted if there was no GHG absorption at all. What you are seeing is the half of the surface photons absorbed by GHG’s that eventually make their way out the top of the atmosphere.

Here’s another thought experiment.
Consider an Earth like planet whose atmosphere contains only N2 and O2 in the same proportions as Earth and that receives the same 240 W/m^2 from the Sun as Earth does. Its average surface temperature will be 255K and will be independent of the lapse rate of its atmosphere, which is heated from below by convection. Are you trying to claim that the lapse rate in this case would matter? If so, what would the surface temperature be if not 255K?

In your comment of July 20, 2015 at 11:32 pm possibly you were replying to my comment, not to Frank.

The power passing through the transparent window is not just what passes between 8u and 12u (which BTW overlaps ozone absorption) but the integration across the entire spectrum of surface emissions scaled by the probability that a photon emitted by the surface will pass into space without being captured by a GHG. As GHG concentrations increase, more surface photons are intercepted by GHG’s and the integration of power passing through the transparent window decreases. Your concept of the transparent window is a primitive approximation that has no physical significance except relative to very coarse calculations.

It’s not “my concept of the transparent window”. If you decide to use a common term (convention) in a completely different way, you should explain it up front. Your definition is a new one, and also seems to include an unproven (inaccurate) assertion.

I’ve written in other places about the curiosity value only of the “atmospheric window” but nice use of language – “your concept.. primitive approximation”.

Can you go back through your original comment, identify any other conventions, especially “primitive ones” and let us know whether you have used them in a different way from the rest of climate science or general physics.

It will save people here writing responses that turn out to be a waste of time.

“Here is the AFGL Tropical atmosphere. As suggested by Ebel, I changed the code to plot cumulative contribution from each layer:”

Obviously the shape of this curve changes with different atmospheric types, but in this case, most OLR originates from within the atmosphere not from the surface.

In the examples shown earlier of the changes in OLR due to more CO2, the changes are not primarily due to changes in total transmissivity of the atmosphere. That is, the change in OLR from doubling CO2 is not due to changes in surface radiation making it to the top of atmosphere, but primarily from changes in atmospheric emission. (Obviously there is always some change in surface radiation reaching TOA.)

The reason I picked up on this point was because it seemed to be a premise for the next statement I discussed:

The emissivity of O2/N2 is near zero, thus no matter what its temperature is, it has almost no influence on the radiant energy emitted by the planet making concerns about the effect of the lapse rate mostly moot.

This statement is wrong as already highlighted. This also relies on your confusion about LTE.

..so rest assured that I have a good handle on how this works..

It’s not how we work around here.
You make claims that contradict textbooks, papers and the calculations I’ve shown in a line by line model (which match results in textbooks and papers).

You need to go point by point through the claims you’ve made that have been disputed by myself and Frank.

Here’s another thought experiment.
Consider an Earth like planet whose atmosphere contains only N2 and O2 in the same proportions as Earth and that receives the same 240 W/m^2 from the Sun as Earth does. Its average surface temperature will be 255K and will be independent of the lapse rate of its atmosphere, which is heated from below by convection. Are you trying to claim that the lapse rate in this case would matter? If so, what would the surface temperature be if not 255K?

“Are you trying to claim that the lapse rate in this case would matter?” – No it would not matter.

There’s a reason why it wouldn’t matter: 100% of the radiation leaving the climate system would be emitted from the surface rather than from the atmosphere. Therefore the atmospheric temperature has no effect on the radiation emitted. Therefore the lapse rate has no effect on the radiation emitted.
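
The 255 K figure both sides agree on here is just the Stefan-Boltzmann law applied to the surface (emissivity 1 assumed):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiating_temperature(absorbed_flux, emissivity=1.0):
    """Equilibrium temperature when the surface alone radiates to space."""
    return (absorbed_flux / (emissivity * SIGMA)) ** 0.25

t_no_ghg = radiating_temperature(240.0)   # ~255 K for the GHG-free planet
```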

Frank,
If you look carefully at the Iraq view, the thunderstorm anvil has the exact same feature as the clear sky at around 667 cm-1, and in fact all views have the same feature at the same amplitude, indicating no dependence on surface/cloud temperatures or geographical location. I went back to check my simulations and the feature is present, but it’s not nearly as sharply defined, especially in the polar regions, although the one I do see is definitely due to ozone and is dependent on surface/cloud emissions. I suspect that the observed feature could be related to ozone emissions in the far upper stratosphere or ionosphere, perhaps the re-emission of energy captured from the solar wind or even thermal noise from the measuring satellite itself, neither of which I model in my simulations. We certainly see higher energy photons do this as the Aurora Borealis. It would be interesting to see if this feature changes in response to solar activity. In any event, it seems small enough relative to the whole that it can be ignored regarding the effect it has on the LTE surface temperature.

I’m not Frank but anyway.. the reason I pointed out the feature was to ask you to explain why it is there, and I’d like you to explain all the features.

From the perspective of standard atmospheric physics, as explained in the last 40 years of textbooks and papers, and as used in satellites for climate research and weather prediction – the features are easy to explain.

How do we explain them? At wavelengths where the atmosphere is opaque the upward radiance as seen by satellites is coming from the atmosphere.

– The little peak at 667 cm-1 is due to stratospheric temperatures being higher than tropospheric temperatures – and at this specific wavelength the atmosphere is so opaque that the emitted radiation is coming from a place close to the satellite, i.e., well above the troposphere.
– Outside of this peak, but still in the central part of the band the radiation is being emitted from the top of the troposphere.
– In the wings of the band, the radiation is being emitted from lower down in the atmosphere.
– In the “primitively defined” atmospheric window of 8-12 μm, the radiation is being emitted from the surface

Likewise when we look upwards from the surface we see radiation emitted from the atmosphere:

Back to your comment on the little peak at 667 cm-1:

..I suspect that the observed feature could be related to ozone emissions in the far upper stratosphere or ionosphere, perhaps the re-emission of energy captured from the solar wind or even thermal noise from the measuring satellite itself, neither of which I model in my simulations..

Noise from the measuring satellite? That’s a great explanation of something that is always present and doesn’t fit your theory.

But it does fit the theory of the Schwarzschild equation, which is derived from fundamental physics and has the atmosphere absorbing and emitting radiation.

How do you think the AIRS satellite measures atmospheric temperature & water vapor concentration at different heights?

The concept of the transparent window that makes sense to the simulations is as I described. If you want a more precise definition, the power passing through the transparent window is the combined power from those photons that pass from the surface to space in a straight line at the speed of light. The energy of all other photons emitted by the surface is delayed by atmospheric GHG’s or clouds, being re-emitted and re-absorbed until it either makes it out to space or back to the surface.

For the clear sky, the power passing through the transparent window is about 50% of all photons emitted by the surface, and for the average cloudy sky (including partly cloudy) about 12.5% passes through, so about 25% of the OLR originates directly from the surface (2/3 clouds, 1/3 clear). About 40% of the planet’s OLR emissions originate from cloud tops and pass through the atmosphere without interacting with GHG’s, and the remaining 35% are emissions from GHG’s that have intercepted surface or cloud emissions.
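These fractions can be checked for arithmetic consistency, taking the stated 2/3 cloudy, 1/3 clear split and the stated window transmissions at face value:

```python
# Checking the stated surface-window arithmetic:
# clear-sky window passes ~50% of surface emission, average cloudy-sky ~12.5%,
# with 2/3 of the sky cloudy and 1/3 clear (values as quoted, not derived here).
clear_fraction, cloudy_fraction = 1/3, 2/3
window_clear, window_cloudy = 0.50, 0.125

# Area-weighted fraction of surface emission reaching space directly
direct_from_surface = (clear_fraction * window_clear
                       + cloudy_fraction * window_cloudy)
print(direct_from_surface)  # -> 0.25, i.e. ~25% of OLR straight from the surface

# The stated OLR breakdown (surface, cloud tops, GHG emission) should sum to one
olr_shares = [0.25, 0.40, 0.35]
print(sum(olr_shares))  # -> 1.0
```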

Most OLR in the absorption bands of atmospheric GHG’s originates from the GHG’s in the atmosphere, but the O2 and N2 are completely transparent to those wavelengths and contribute almost nothing to the OLR emitted by the planet. Adding a trace component of GHG’s does not change the transparency/absorption of the N2/O2 but changes the bulk transparency and absorption of the atmosphere. The N2/O2 molecules don’t even notice the GHG’s and are just passive constituents of the atmosphere.

When we look at diffuse gas clouds in deep space, we cannot infer their temperature from the emissions of their gases; we can only measure the temperature of the dust in those clouds. We know what gases are there from their absorption spectra.

You agree that the emissivity of an O2/N2 atmosphere completely devoid of GHG’s and clouds is zero. Why would those O2/N2 molecules behave any differently when a trace GHG is added? It seems that you are considering the actions of a trace component as a bulk property and then uniformly attributing the effect to all molecules in the bulk.

I should point out that when you consider the atmosphere as a uniform, macroscopic absorber/emitter you get the correct bulk answer (joules are joules), so the textbooks are not wrong and represent a proper equivalent model; it’s just that the implied microscopic mechanism requires the impossible emission of LWIR photons by N2/O2 molecules. Separating EM from non-EM energy also gets the correct answer and requires only narrow-band emissions by GHG’s, broad-band emissions by the water and ice in clouds, and O2/N2 that is transparent to visible light and LWIR.

Here is the average emission spectra calculated based on the method I’ve outlined. All of the features are present, except that the specific feature at 667 cm-1 is more subdued for the reason that we both seem to agree on which is that the measured one is likely related to ozone absorption in the ionosphere or upper stratosphere that is unrelated to the absorption of photons emitted by the surface. If I smoothed the fine lines a little more, it would look closer to the measurements.

The energy being emitted in the absorption bands comes from the half of surface emission absorbed by GHG’s that eventually makes it out to space. Most originated from GHG’s and managed to avoid another GHG molecule on the way out. The other half returns to the surface and comprises the emissions seen when looking up into the atmosphere. Note that if you measured the energy parallel to the surface, rather than looking straight up, it would look much the same owing to the random directions of re-emissions.

Think of the GHG’s in the atmosphere acting like a transmission line with a VSWR of about 6 (half of the power is reflected and half is transmitted).
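For what it's worth, the numbers in the analogy check out: using the standard transmission-line relation between VSWR and reflection coefficient, a VSWR of about 6 does correspond to roughly half the power reflected.

```python
# Standard transmission-line relations: |Gamma| = (VSWR - 1)/(VSWR + 1),
# and the reflected power fraction is |Gamma|^2.
def reflected_power_fraction(vswr):
    gamma = (vswr - 1) / (vswr + 1)  # magnitude of the reflection coefficient
    return gamma ** 2

print(reflected_power_fraction(6))  # ~0.51: about half the power reflected
```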

Consider the Sun: it takes eons for a photon to get from the core to the corona as it bounces from atom to atom along a path much longer than the distance from the core to space. At a much smaller scale, the energy of a photon absorbed by a GHG molecule takes a longer path through the atmosphere, passing from GHG molecule to GHG molecule until it eventually leaves the atmosphere, either exiting to space or returning to the surface.

All of the features are present, except that the specific feature at 667 cm-1 is more subdued for the reason that we both seem to agree on which is that the measured one is likely related to ozone absorption in the ionosphere or upper stratosphere that is unrelated to the absorption of photons emitted by the surface.

I’m not sure you are paying attention because we don’t agree.

The energy being emitted in the absorption bands comes from the half of surface emission absorbed by GHG’s that eventually makes it out to space. Most originated from GHG’s and managed to avoid another GHG molecule on the way out. The other half returns to the surface and comprises the emissions seen when looking up into the atmosphere.

I’ve got no idea how you come up with this.

The principle of conservation of energy is used to derive the Schwarzschild equation. There is no “principle of conservation of photons” or “conservation of 50% of the direction of photons” or whatever it is you are using to derive your formula. There is no constraint that says “half of surface emission absorbed by GHGs eventually makes it out to space”.

I suspect that you have no idea what you are doing. Anyone that talks about the proportion of energy that “eventually makes it out to space” probably doesn’t know what conservation of energy is, or how it is used to solve heat transfer problems.

Write down your equation, and define your terms.

It should be an equation that looks like this:

first in differential terms:

dIλ/dτ = Iλ – Bλ(T) [12]

now integrated:

Iλ(0) = Iλ(τm)·e^(−τm) + ∫ Bλ(T)·e^(−τ) dτ   [16]

The intensity at the top of atmosphere equals:

– the surface radiation attenuated by the transmittance of the atmosphere, plus
– the sum of all the contributions of atmospheric radiation, each contribution attenuated by the transmittance from that location to the top of atmosphere

This is derived from fundamental physics and has been well-established for over 60 years. If your equation is different you need to explain your derivation. I’ll point you to Nobel prize winner Subrahmanyan Chandrasekhar for his contribution to radiative transfer theory.
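Equation [16] can be evaluated numerically for a toy atmosphere to see the limits described earlier: a nearly transparent atmosphere returns something close to the surface value, while a very opaque one returns the atmospheric value. This is only a sketch with made-up illustrative numbers, not a real radiative transfer code:

```python
import math

def toa_intensity(b_surface, b_profile, tau_max, n=10000):
    """Numerically evaluate Eq [16]:
    I(0) = I(tau_m)*e^(-tau_m) + integral of B(tau)*e^(-tau) dtau,
    with optical depth tau measured downward from the top of atmosphere."""
    dtau = tau_max / n
    # Surface term, attenuated by the transmittance of the whole atmosphere
    total = b_surface * math.exp(-tau_max)
    # Atmospheric contributions, each attenuated from its location to the TOA
    for i in range(n):
        tau = (i + 0.5) * dtau  # midpoint rule
        total += b_profile(tau) * math.exp(-tau) * dtau
    return total

# Illustrative numbers only: surface warmer than an isothermal atmosphere
b_sfc = 100.0
b_atm = lambda tau: 60.0

i_thin  = toa_intensity(b_sfc, b_atm, 0.1)   # nearly transparent: close to 100
i_thick = toa_intensity(b_sfc, b_atm, 10.0)  # opaque: approaches 60
print(i_thin, i_thick)
```

In the opaque limit the satellite "sees" only the atmosphere, which is exactly why the band centres in the measured spectra carry atmospheric rather than surface temperatures.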

The energy being emitted in the absorption bands comes from the half of surface emission absorbed by GHG’s that eventually makes it out to space. Most originated from GHG’s and managed to avoid another GHG molecule on the way out. The other half returns to the surface and comprises the emissions seen when looking up into the atmosphere.

(end co2isnotevil)

I’ve got no idea how you come up with this.

The principle of conservation of energy is used to derive the Schwarzschild equation. There is no “principle of conservation of photons” or “conservation of 50% of the direction of photons” or whatever it is you are using to derive your formula. There is no constraint that says “half of surface emission absorbed by GHGs eventually makes it out to space”.

I suspect that you have no idea what you are doing. Anyone that talks about the proportion of energy that “eventually makes it out to space” probably doesn’t know what conservation of energy is, or how it is used to solve heat transfer problems.

This requires clarification, because you’re misinterpreting co2isnotevil (and he should clarify to avoid confusion). What is meant by the above statement is that it is an emergent property of the data in the steady-state that an amount equal to half the power captured by GHGs is ultimately radiated out into space and an amount equal to the other half is gained by the surface somehow, in some way. Not that literally half is radiated to space and half is radiated to the surface, but only that the flow of energy in and out of the whole system is the same as if half of what’s absorbed by GHGs were radiated to the surface and half into space, as depicted in the box model.

The emergent value of ‘F’ is an abstract concept based on the foundation of black box system analysis and equivalent modeling, which is based on inputs and outputs at the system’s boundaries (in this particular case, for the Earth climate system, the input is 239 W/m^2 of post-albedo solar power and the output is 385 W/m^2 of radiant black body power from the surface, because the net result of all of the effects, radiant and non-radiant, known and unknown, is for the net amount of input at the surface to be 385 W/m^2). For the climate system, save for an infinitesimal amount from geothermal, the entire energy budget is all EM radiation, and that is all that can pass across the system’s boundary between the atmosphere and space (in virtually all other systems this is not the case). Thus it’s valid to consider only EM radiation for the box model exercise. At the surface, the box model only accounts for net flux in, because net flux in is the actual rate of flux input to the surface and is what needs to increase linearly and proportionally if the system is to adapt linearly to increased GHG absorption.

The point of the box model is to quantify the aggregate dynamics of the system prior to an imposed imbalance. Specifically, the aggregate dynamics offsetting GHG absorption, so that a linear increase in those dynamics can be clearly and accurately quantified (though without modeling the actual behavior).

The concept of ‘no-feedback’ should be a linear increase in the adaption of the system prior to an imposed imbalance. It’s what gives a true measure of the intrinsic ability of a particular imbalance to act to ultimately warm the surface (or elevate the surface temperature above what it would otherwise be). The 1.1C calculation of ‘no-feedback’ is based on a linear increase in adaption to +3.7 W/m^2 of post-albedo solar power entering, but really isn’t valid for +3.7 W/m^2 of GHG absorption. This is because, as depicted in the equivalent box model, the flow of energy in and out of the whole system is the same as if half of what’s captured by GHGs flows away from the surface and passes out into space. Meaning, upon a linear adaption, an amount equal to half the power captured by GHGs is ultimately radiated from the atmosphere into space, so the net change in OLR after this adaption is only about 1.85 W/m^2, and this is the amount that’s resisting radiative cooling to space or acting to ultimately warm the surface the same as post-albedo solar power.

The problem is you, DeWitt, Pekka, etc. think the box model is somehow trying to describe the complex, highly non-linear thermodynamic path that manifests the net of 385 W/m^2 gained at the surface, while 239 W/m^2 enters from the Sun and 239 W/m^2 leaves at the TOA. It’s not, it absolutely can’t, and it would be laughably wrong if it were attempting to do so. The model is only claimed to be equivalent to the final, already manifested result of the thermodynamic path, so far as the rates of joules gained and lost at the surface and the TOA. Absolutely nothing more. When all is said and done, the model is just showing that it only takes about +146 W/m^2 of net surface gain to ‘offset’ about 292 W/m^2 of GHG absorption, where by ‘offset’ I simply mean to establish equilibrium with space.
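Whatever one makes of the interpretation, the quoted numbers are at least internally consistent; a quick check taking the claimed window transmittance of ~0.24 and a ~385 W/m^2 surface emission at face value:

```python
# Internal-consistency check of the quoted box-model numbers (as claimed,
# not derived here): spectral window T ~ 0.24, surface emission ~385 W/m^2,
# post-albedo solar input ~239 W/m^2.
solar_in = 239.0   # post-albedo solar power, W/m^2
surface  = 385.0   # surface radiant emission, W/m^2 (~287 K black body)
window_T = 0.24    # claimed fraction passing directly to space

through_window = window_T * surface        # ~92 W/m^2 straight to space
absorbed       = (1 - window_T) * surface  # ~293 W/m^2 captured by atmosphere/clouds
to_space       = absorbed / 2              # the claimed 50/50 equivalent split
to_surface     = absorbed / 2

olr = through_window + to_space
print(olr)       # ~239: matches the solar input if the split is self-consistent
print(to_space)  # ~146 W/m^2, the 'offset' figure quoted above
```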

Another clarification: the COE constraint is that an amount of joules per second equal to what’s going into the atmosphere must be coming out of the atmosphere, otherwise heating or cooling is occurring and there is no steady-state. This COE constraint itself does not require an emergent value of ‘F’ equal to 0.5, but the point is to show that, with this COE constraint, a value of 0.5 is what is necessary to satisfy the required boundary fluxes to match the steady-state.

Emphasis again that an ‘equivalent’ model is attempting to show nothing more than the rates of joules gained and lost would be the same. In this particular case, the same as if half the power absorbed by the atmosphere were radiated into space, never to be gained by the surface (never acting to ultimately warm the surface). There is for sure no way to quantify and/or simplify the immense dynamical complexity, chaos, transient mixing of radiant and non-radiant energy, etc. ultimately driving the climate system towards equilibrium by such means (i.e. equivalent black box modeling, systems analysis etc.). It’s not even close. This includes both the path the system takes to maintain equilibrium as well as the path it takes from one equilibrium state to another when perturbed (i.e. ‘forced’, like from 2xCO2).

Another point worth clarifying is most people think the box model is trying to say something about the amount of IR the atmosphere passes to the surface and the amount it passes into space. It’s not attempting to describe that property or quantify either amount.

Of course, his calculated spectral ‘T’ of 0.24 has to be accurate (and must physically mean what it’s depicted to quantify). I’ve searched and cannot find anything in the literature that attempts to calculate that value on global average. I understand that it is largely a trivial value, and that everyone is focused on incremental effects. Nonetheless, it’s absolutely critical to any possible validity of the analysis.

co2isnotevil does calculate with his RT simulation that the net change in absorption is +3.6 W/m^2 for 2xCO2, which is right in line with what everyone else in the field is getting. Maybe if some of you could see that he knows what he’s doing with these simulations, and is doing them correctly and to a high level of precision, it might lend some credibility to him and some of his still largely not understood or agreed-to claims.

BTW, a better title might be ‘Proof that only half of absorption acts to ultimately warm the surface’.

In a nutshell, the black box model depicting a 50/50 ‘equivalent’ split is claimed to be a kind of macroscopic emergent property of the system for a state of energy balance which is independent of the highly complex and non-linear thermodynamic path manifesting the actual balance.

In no way is it describing or quantifying the complex path the system takes from one equilibrium state to another, nor is it in any way describing or quantifying why the current or prior equilibrium has manifested as it has. That is, in no way is it attempting to describe why the surface energy balance has manifested to a net of about 385 W/m^2 gained. As stated, what is ultimately being deduced from it is that only about half of what’s absorbed by GHGs is ultimately acting to warm the surface (or ultimately resisting radiative cooling to space). From this, it’s being claimed that even though the model doesn’t in any way describe or quantify the path from one equilibrium state to another, i.e. the path to the new equilibrium, one can start and operate as though only half of what’s incrementally absorbed will ultimately be acting as GHG ‘forcing’, whereas the other half will be acting to ultimately cool the surface by contributing to the radiative cooling of the system by radiation from the atmosphere into space.

I understand the equations, and what I’m saying is not inconsistent with them. It seems to me that your explanation of the microscopic behavior requires N2/O2 to emit significant LWIR photons and this is all I have an issue with. I have offered an alternate explanation that does not require the otherwise impossible emission of photons by N2/O2, matches the data, conforms to the requirements of LTE and fits the equations. Why is this so objectionable?

I don’t know where you get ideas like conservation of photons from anything I’ve said. Conservation of energy applies, where whatever does not pass through the transparent window as I’ve defined it must eventually leave the atmosphere and is split between leaving the top and leaving the bottom in roughly 50/50 proportions. The 50/50 split is a macroscopic requirement of absorption and emission, where EM surface emissions are absorbed by the atmosphere (GHG’s and clouds) over half the area (bottom) across which this energy is eventually emitted (top and bottom) and the data confirms that this requirement is met.

Maybe it would be better to start at the beginning of the list.

Do you agree that only the ideal black body with an emissivity of 1.0 represents the unit gain, zero feedback system?

In this article, you assert that the grey body equivalent model of the planet is the zero feedback model, and this is incorrect.

Do you further agree that the equivalent grey body emissivity of the planet is trivially related to gain and feedback as follows:

1/Go = 1/Gc + f

where Go is the open loop gain (assumed to be 1), f is the feedback fraction, and Gc is the closed loop gain, which is equal to 1/e, where e is the effective emissivity of the planet as a grey body whose temperature is the surface temperature.

In this context, gain is the dimensionless ratio dictated by control theory (1.6 for the Earth) and the incremental gain is what the IPCC calls sensitivity.
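Plugging in the numbers implied by this definition (taking 239 W/m^2 out at the TOA and 385 W/m^2 emitted from the surface to get the effective emissivity, without endorsing the framing):

```python
# The stated gain/feedback relation: 1/Go = 1/Gc + f, with Gc = 1/e,
# where e is the effective gray-body emissivity = TOA emission / surface emission.
toa_emission     = 239.0   # W/m^2
surface_emission = 385.0   # W/m^2

e  = toa_emission / surface_emission  # effective emissivity, ~0.62
Gc = 1 / e                            # closed loop gain
Go = 1.0                              # open loop gain, assumed to be 1 as stated
f  = 1 / Go - 1 / Gc                  # feedback fraction implied by the relation

print(Gc)  # ~1.61, the dimensionless 'gain' quoted for the Earth
print(f)   # ~0.38
```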

I understand the equations, and what I’m saying is not inconsistent with them.

Most of what you have said is inconsistent with them.

I don’t know where you get ideas like conservation of photons from anything I’ve said.

I’m just trying to fill in the blanks, wondering where you get your ideas.

Conservation of energy applies, where whatever does not pass through the transparent window as I’ve defined it must eventually leave the atmosphere and is split between leaving the top and leaving the bottom in roughly 50/50 proportions. The 50/50 split is a macroscopic requirement of absorption and emission, where EM surface emissions are absorbed by the atmosphere (GHG’s and clouds) over half the area (bottom) across which this energy is eventually emitted (top and bottom) and the data confirms that this requirement is met.

Conservation of energy means energy is not created or destroyed. Therefore “it” never has to leave the atmosphere. You just add up joules and calculate the corresponding temperature change, you don’t chart their journey. And no, there is no 50/50 split between top and bottom defined by the conservation of energy.

Produce an equation that results in this requirement starting from conservation of energy.

You still have unanswered objections to your ideas about LTE. I’ve produced textbooks.

Where did you get your ideas about LTE? Are you trying to convince me that you know what it is? You are trying to explain why I am wrong about LTE when you haven’t produced one piece of evidence?

Produce a textbook, produce an equation, produce an experimental result. What is LTE? What results does it constrain?

Otherwise I’m going to wonder whether this will be like arguing with RW, who presents your ideas, and between a number of us we have written 100s (maybe even 1000s) of comments trying to explain the conservation of energy with little to show for our time.

I did invite George here to try to explain his work, which I admit I largely misunderstood for a time and did a poor job of explaining. I think I understand it now, but of course, I or it can certainly be wrong (anyone can be wrong). Part of the disconnect is that George is following the protocol used in standard systems analysis, whereas climate science seems to have largely made up its own rules regarding how feedback and sensitivity are defined, thought of, and used.

I understand the COE constraint of energy transported by photons is based on the fact that for the Earth/atmosphere system, EM radiation (from the Sun) is the only significant source of energy, and the surface radiates the same amount of power it’s gaining as a result of all the physical processes in the system, radiant and non-radiant, known and unknown; and EM radiation is all that can pass across the system’s boundary between the atmosphere and space.

For a state of energy balance, the atmosphere has a limited capacity to store energy, and an amount equal to what’s going in must be coming out, otherwise there is no steady-state (because joules would be accumulating or decreasing, resulting in heating or cooling). Any flux or power entering or leaving the surface in excess of the power radiated from the surface must be net zero across the surface/atmosphere boundary. In other words, those joules per second are perpetually ‘in limbo’ in that they are neither adding nor taking away joules from the surface, nor are they adding or taking away joules from the atmosphere.

I surmise you have never had any exposure to black box systems analysis, which is necessary to understand how COE is being applied to radiant energy for this application. I suspect it’s being misinterpreted as the way it generally applies in a raw thermodynamics sense, hence your thinking that COE is not being validly applied and that the whole thing is bunk and makes no sense.

The way it’s applied in a black box system analysis is solely and simply joules going in = joules going out, which in this case is independent of how the joules going in may exit either boundary (the surface or the TOA).

At the TOA boundary, the only way a joule can exit is by radiation, whereas at the surface/atmosphere boundary, a joule can pass from the atmosphere to the surface in both radiant and non-radiant form. Moreover, the actual amount of joules exiting the atmosphere at the surface, i.e. actually gained by the surface, is not in any way quantifiable in terms of radiant and non-radiant joules passing to the surface, because the net amount of joules exiting the atmosphere at the surface is the sum of the gross fluxes in and out, where the additive superposition principle applies to the actual amount of joules leaving the atmosphere and being added to the surface.

Furthermore, in the steady-state, COE limits the power captured by and transmitted through the whole of the atmosphere to be neither less nor more than the power radiated from the surface.

Another way of illustrating this might be: when the transmittance of a particular wavelength is zero, the photons emitted from the surface at that wavelength must be 100% absorbed (i.e. attenuated from passing into space). Since an amount equal to all the photons emitted from the surface must either be transmitted to space or absorbed (i.e. added to the energy stored by the atmosphere), the sum of the transmittance and absorption fractions applied to the power radiated from the surface must be 1. That is, the atmosphere does not have an infinite capacity to store energy, so in the steady-state it contains as much energy as it can hold, and there must be an equal amount of energy continuously coming out as is going in. Thus, COE dictates that the total amount of IR power ‘instantaneously’ transmitted through the whole of the atmosphere and ‘instantaneously’ absorbed by the atmosphere must be equal to the power radiated from the surface.

This is why, as I mentioned prior, the spectral transmittance ‘T’ and subsequently the spectral absorptivity ‘A’ are quantified as fractions of the power directly radiated from the surface even though most of the absorbed and transmitted emission originates from the atmosphere. I also understand that this is what fundamentally establishes the COE constraint George is referring to.

I think this is kind of analogous to light passing through a semi-opaque medium in the steady-state, in that more light can’t simultaneously be absorbed by the medium and transmitted through the medium than is supplied into the medium in the first place, as that would violate COE (assuming the medium has no alternate energy source). The difference is the atmosphere is effectively both supplied light externally and internally re-emits absorbed light at the same time, but it nonetheless can’t have more light, i.e. EM energy, absorbed and transmitted out its opposite side (the TOA) than is being supplied in from the one side (the surface) at the same time, i.e. ‘instantaneously’ or in any one instant.

The other analogous difference is the atmosphere can only lose ‘light’, i.e. EM energy, out its opposite boundary as ‘light’, whereas in the first example, some of the absorbed light can be thermalized and lost from the medium by conduction and/or convection (assuming the medium is surrounded by matter). Since the EM energy emitted from the surface, i.e. the sole source of the ‘light’ entering the medium (the atmosphere), is a direct consequence of the net amount of energy input to the surface, at any one instant there can’t be more ‘light’ transmitted through the whole of the atmosphere and absorbed by the atmosphere than is initially radiated in from the surface, as that would violate COE (in the steady-state).

Of course, for this example the only real difference between using ‘light’ instead of LWIR is the frequency of the emitted photons. I believe it’s analogous because either way it’s still a measure of transparency of EM radiation passing through the whole of a medium.

What are you asking an equation for? As best I know, an equation itself doesn’t establish a COE constraint. Since you agree that the Sun is the only significant source of energy entering the system, quantified in W/m^2 — what specifically establishes a condition of steady-state for a surface temperature of 287K with an emissivity of 1 where COE is fully satisfied?

I suspect a big part of the disconnect here is that you have never had any exposure to black box system analysis and don’t understand the foundation behind equivalent black box modeling.

The fundamental issue here is that the 3.7 W/m^2 from 2xCO2 is only incremental GHG absorption, but the entire 3.7 W/m^2 is quantified as incremental ‘forcing’, where forcing is specifically quantified as being equal to post-albedo solar forcing, i.e. post-albedo solar power entering the system. What this effectively means is that a watt of incremental GHG absorption and a watt of incremental post-albedo solar power both have the same intrinsic ability to act to ultimately warm the surface.

George and I are challenging the validity of this, because none of the GHG absorption, aggregate or incremental, is new energy added to the system, and the re-radiation of the power captured by GHGs is non-directional, i.e. it occurs, by and large, with equal probability up or down until the initially captured energy (or at least an amount equal to it) somehow finds its way out of the atmosphere.

Post albedo solar power is not only all new joules pumped into the system, it is also a continuous stream of radiant energy all flowing the direction of the surface, right?

“And no, there is no 50/50 split between top and bottom defined by the conservation of energy.”

Correct. The COE constraint itself does not require a 50/50 equivalent split between the surface and the TOA, but the point is with the COE constraint I at least tried to lay out before, it’s an emergent property of the satellite data if the spectral transmittance is about 0.24, the surface temperature is about 287K, and the global albedo is about 0.3.

The fundamental problem here is you (understandably) don’t seem to understand that the 50/50 equivalent split George is referring to is not quantifying the amount of IR the atmosphere passes to the surface relative to how much IR it passes into space. That is roughly 300 W/m^2 to the surface and 200 W/m^2 to space, because IR emission decreases with height.

The problem is that all the fluxes, radiant and non-radiant, from the Sun and the surface, once they enter the atmosphere, get all mixed together and are no longer traceable in any way. They mix together and ultimately manifest a highly complex and non-linear thermodynamic path (in the steady-state or otherwise).

The point of the box model is to calculate how much of the power captured by the atmosphere is ultimately resisting radiative cooling to space (or how much of it is acting to ultimately warm the surface the same as post-albedo solar power), even though the system is far too complex and non-linear to trace the specific path of the initially captured energy.

The box model is attempting to show nothing more than that, in the system’s approximate converged equilibrium state, it only takes about +150 W/m^2 of net surface gain to ‘offset’ about 300 W/m^2 of GHG absorption. By ‘offset’, I simply mean to establish equilibrium with space. And the true measure of the ‘intrinsic’ surface warming ability of incremental GHG absorption should be a linear increase in the adaption of the system, or a linear increase in the aggregate dynamics of the system responding to aggregate GHG absorption prior to an imposed imbalance. The div2 model is in no way attempting to describe the complex thermodynamic path that manifests the current equilibrium, but only that it’s equal to the final result of the already physically manifested thermodynamic path (and accurately quantifies the aggregate dynamics even though it’s not modeling the actual behavior). That is, if you were to stop time, remove the actual atmosphere and replace it with the box model atmosphere, and start time again, the rates at which joules are being added to the surface, entering from the Sun, and leaving at the TOA would stay the same. Nothing more.

Note in no way at all is the model attempting to describe the complex thermodynamic path itself. Moreover, if it were, it would definitely be wrong. The model is only claimed to be equivalent relative to the final equilibrium, and in no way describes or quantifies the complex path the system takes to achieve equilibrium.

That’s all for now. You’ve asked George for equations, though I’m not sure what specifically, but that’s fair enough. We’ll wait for George.

SOD, co2isnotevil, and RW: I found the DATA in George White’s “latest proof” very interesting and worthy of discussion at SOD – though I think he has misinterpreted this data. The most obvious and simplest interpretation of the data actually suggests an ECS of about 3.0. (:()

Remove the theoretical S-B curves from George’s graph and picture a best-fit line through all of the orange data points. The slope of that line has the same units as ECS (K/(W/m2)) and could be interpreted as ECS. I estimate the slope is about 0.9 K/(W/m2), or 3.2 K for a 3.7 W/m2 doubling of CO2. The curve in the data might be explained by saying climate sensitivity is not a constant.

Looking more closely at the graph, the overall slope is mostly determined by temperatures from polar regions where Ts is below 273 K. In regions between 273 K and 288 K, the slope is a little lower. In the warmest regions, there may be two domains: one with the lower slope and one with negligible slope. (Two domains suggests to me that the clear skies under the descending branch of the Hadley circulation may behave differently from the ITCZ.) The polar region – a small fraction of the planet – dominates George’s graph and is misleading for that reason.

The planet’s overall ECS may be interpreted as a composite of all of these local slopes/ECSs. This composite over-emphasizes the high climate sensitivity of polar regions. Climate models tell us to expect more warming at the poles and in winter; I see the same features in this plot. If regions of the planet behave so differently, then the idea of one “planetary” climate sensitivity is an oversimplification.

(For the record, George White believes that the ability to fit this data modestly well with an S-B eqn with emissivity of 0.6 means the planet will respond to surface warming as if it were a gray body. The “climate sensitivity” of a gray body at 255 degK is about 1 degK per 3.7 W/m2. I might comment more on this subject later.)

Frank,
The planet’s overall ECS is the slope at the operating point of the system, which is 0.3 C per W/m^2, or about 1 C per 3.7 W/m^2, at (240 W/m^2, 288K), and nowhere near 0.8 C per W/m^2. The data is also pretty clear that the planet behaves like a gray body. Feel free to try and find evidence that shows otherwise. I haven’t been able to identify any, and I’ve tried pretty hard.

For any gray body, the T^4 relationship between temperature and power means that the sensitivity to the next W/m^2 must be less than the average for all those that preceded it. The average is 1.6 W/m^2 of incremental surface emission for each of the 240 W/m^2 of accumulated forcing, and the slope is about 0.3 C per W/m^2 at the current operating point. Note that all W/m^2 of average input are subject to the same average conditions, and each watt has an effect equivalent to the others.
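
The average-versus-incremental distinction being argued here can be reproduced in a few lines. This is only a sketch of the commenter’s gray-body arithmetic (σ = 5.67e-8 and round numbers from the thread), not an endorsement of the framing:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_s = 288.0       # mean surface temperature, K
P_in = 240.0      # post-albedo solar input, W/m^2

P_surf = SIGMA * T_s**4   # ~390 W/m^2 of surface emission

# 'Average gain': W/m^2 of surface emission per W/m^2 of accumulated input.
avg_gain = P_surf / P_in  # ~1.6

# For a gray body P_in = eps*sigma*T^4, so dT/dP_in = T / (4 * P_in):
incremental_sensitivity = T_s / (4.0 * P_in)   # 0.3 K per W/m^2
print(avg_gain, incremental_sensitivity)
```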

The power reflected by clouds and surface ice is reflected as part of the response of the system to forcing; including this power in the total forcing makes the effective gain even smaller. This is obfuscated by the IPCC metrics of forcing and sensitivity, which effectively ignore the negative feedback from the non-vapor forms of water reflecting solar power.

The relevance of the ‘operating point’ of the system is a concept foreign to most climate scientists, but it’s the only way to make sense of a nonlinear system. The fact that you think you can extrapolate the higher sensitivity below 0 C across the entire surface indicates a gap in your understanding of nonlinear system analysis. Few actually grasp these concepts, and even fewer among climate scientists, so it’s not unexpected or unusual.

The dots on the plot are not equally weighted when it comes to calculating the global effect. Each corresponds to the pixels in a 2.5 degree slice of latitude, so polar measurements actually count less toward the overall average; moreover, most of the incident energy arrives at the tropics, whose higher temperatures indicate a local sensitivity less than 0.3 C per W/m^2.

If we superimpose the relationship between input power and temperature, what we see is a translation of the SB relationship at (290 W/m^2, 288 K) up and to the left, in which case the sensitivity is only about 0.2 C per W/m^2. This bounds the true sensitivity somewhere between 0.2 C and 0.3 C per W/m^2. The real question is which of these is the true operating point. I actually think the (390 W/m^2, 288 K) point might be the real operating point.

During periods of glaciation, the effects around 0 C are more important because more of the planet is cold. In an inter-glacial period like today, the cold has been pushed further towards the poles and contributes less to the whole. In addition, most of the ice has melted, so the ‘feedback’ from melting ice gets progressively smaller as the planet warms. This is another case where the sensitivity that applied when the planet was covered in ice is incorrectly projected across the globe.

This next plot illustrates where the consensus sensitivity fits, relative to the SB law and shows the mistake.

The mistake was ‘linearizing’ the sensitivity in units of degrees per W/m^2. Around the operating point the response is approximately linear, but its slope is 0.2 C per W/m^2 – not the slope of a line through the operating point passing through zero. If the sensitivity were properly considered linear in the power domain, with gain (sensitivity) a dimensionless constant as control theory says it should be, none of this would be an issue, as the error would have been obvious.

You still haven’t answered my question about whether or not you believe that the N2 and O2 in the atmosphere emit LWIR. If so, by what mechanism do you propose this happens?

You also haven’t answered my question about whether you understand that only an ideal black body with an emissivity of 1.0 represents the unit-gain/zero-feedback system.

Of course the energy entering the atmosphere must leave it because the atmosphere is not an infinite sink of energy. In the steady state, the flux of energy entering the atmosphere must be equal to that leaving. But before we get to the equations, please answer the questions. I want to identify the common ground so we can work from there.

Now that you acknowledge that O2/N2 doesn’t emit LWIR, can you be more specific about the origin of the LWIR in GHG absorption bands that leaves the top of the atmosphere?

You seem to issue a blanket statement that it comes from the atmosphere without specifying which components of the atmosphere are emitting these photons. The latter is more important when trying to understand the microscopic behavior of the atmosphere.

You also seem to imply that the temperature of the atmosphere drives the temperature of the surface, when it’s actually the other way around. The emissions by GHG molecules that find their way back to the surface increase the surface temperature, but this is not the driving force. Without the Sun supplying the energy in the first place, there would be no GHG effects. GHGs and clouds effectively delay the disposition of some fraction of the energy associated with Planck emission by the surface, returning some of it back to the surface and sending the remainder out to space.

To answer your question about the math, the equation you presented earlier is relevant, although you must consider the optical depth (tau) as a function of wavelength and height in the atmosphere which is what we derive from HITRAN line data individually applied to slices of atmosphere, per the article I referenced you to earlier.
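
The role of tau in that calculation can be sketched with Beer–Lambert attenuation through stacked layers: the column transmittance is the product of the per-layer transmittances, i.e. the exponential of minus the summed optical depths. The layer values below are invented for illustration, not HITRAN output:

```python
import math

# Hypothetical optical depths of successive atmospheric layers at one wavelength.
layer_tau = [0.30, 0.20, 0.10, 0.05]

# Beer-Lambert: each layer transmits exp(-tau_i), so the whole column
# transmits exp(-(tau_1 + tau_2 + ...)).
column_transmittance = math.exp(-sum(layer_tau))
column_absorptance = 1.0 - column_transmittance   # the 'A = 1 - T' of this thread
print(column_transmittance, column_absorptance)
```

In a real line-by-line calculation this would be repeated for every wavelength, with tau for each slice of atmosphere taken from the HITRAN line data.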

Paraphrasing the article, if your ideas in words can’t be written in an equation that is derived from theory or new experimental work then it’s just a bunch of words that sounds nice but isn’t physics. Many people arrive at this blog and write a lot of stuff that fits that description.

Imagine yourself as the 500th person who has arrived with this approach.

I look forward to you writing down your set of equations that demonstrates your claims about LTE and “stuff about direction of energy”. Don’t be the 500th person to disappoint.

It’s old school, and I don’t know about these days, but if you couldn’t prove your propositions (or conversely disprove ideas that needed disproving) by using physics principles, then you failed your exams.

I can demonstrate that energy entering the climate system from space must leave the climate system to space (given insignificant geothermal energy transfer) unless the temperature of the climate is changing. That’s the first law of thermodynamics. The rest of your words are not derived from physics. They are an idea sitting in your head.

There is a story to be told about the climate changing, and I think physics is helping us to tell that story. One of the most interesting stories is the last 1000 years of freezing and warming. For 600 years, to 1650, there was a cooling of the globe, perhaps with a 25 cm fall in sea level. There were of course some ups and downs over these centuries. And the globe was stripped of some of its CO2 blanket. How can physics and equations help us with that? There was a little less solar SW radiation, but perhaps not enough to explain it. Then there came a change, gradually. In a mysterious way the radiation from the sun got access to the oceans (and the atmosphere). The level of CO2 in the air increased. The TOA imbalance shifted, with less radiation out than in. What was the main forcing of this change? I would think it was the sun’s SW radiation, which could begin to warm a planet that had been cooling for 600 years. Can forcing be defined as the relation between a warming source and a cooled-down object? And what about the atmospheric warming? Could the increased OHC climb out of the water, in a Trenberth spirit? Would it then be the OHC that is the forcing agent?

For the second time, I’m not talking about spontaneous emissions. I’m talking about stimulated emissions where the stimulating event is either absorption of another photon or a collision.

Long Term Equilibrium is the state that a system will end up in if the system and stimulus remain constant.

For the case of the Earth, the ‘constant’ stimulus is diurnally, seasonally and orbitally periodic. Many are oblivious to this since this variability is always backed out of the data to generate anomalies. The steady-state response to this periodic stimulus is periodic surface temperatures. This is the LTE response. It’s long term because the planet has had a long time to arrive at equilibrium with the Sun. The average of this periodic response can be considered a scalar metric representing the dynamic state variability that defines the steady state.

This is pretty basic system analysis stuff. Perhaps some simple math will help. We can define LTE with an equation,

Pi(t) = Po(t) + dE/dt

where Pi(t) is the instantaneous input power, Po(t) is the power being emitted, E is the energy stored in the system and dE/dt is its rate of change (see how dE/dt is the same as forcing, per the IPCC definition?). The steady state, or LTE, is defined when the average dE/dt over the period of the stimulus is zero.

If we arbitrarily define an amount of time, tau, such that all of E can be emitted at the rate Po, we can rewrite this as,

Pi(t) = E(t)/tau + dE/dt

which you should recognize as the differential equation describing the LTI system of a resistor and a capacitor, where tau is the time constant. The long-term response to seasonal variability in the weather satellite data fits the solutions to this LTI system almost exactly.
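
That RC analogy is easy to check numerically: integrate dE/dt = Pi(t) − E/tau with a periodic input and confirm that, once the transient has decayed, the cycle-average of dE/dt is essentially zero – the LTE condition as defined above. The time constant and forcing numbers here are arbitrary:

```python
import math

tau = 5.0      # arbitrary time constant (E/Po, in the comment's terms)
period = 1.0   # period of the stimulus
dt = 0.001
E = 0.0        # stored energy, starting far from equilibrium

def Pi(t):
    # Periodic stimulus standing in for diurnal/seasonal input power.
    return 240.0 + 10.0 * math.sin(2.0 * math.pi * t / period)

# Forward-Euler integration of dE/dt = Pi(t) - E/tau, well past the transient.
t = 0.0
while t < 50.0:
    E += (Pi(t) - E / tau) * dt
    t += dt

# Average dE/dt over one full cycle of the periodic steady state.
avg_dEdt = 0.0
for _ in range(int(period / dt)):
    dEdt = Pi(t) - E / tau
    avg_dEdt += dEdt * dt
    E += dEdt * dt
    t += dt
avg_dEdt /= period
print(E, avg_dEdt)   # E hovers near 240 * tau = 1200; avg_dEdt is ~0
```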

The Siegel & Howell definition is specific to the kinetic theory of gases, and certainly the linear momentum of GHG molecules is shared in this manner. Referring to the energy of absorbed photons as being shared in this way is only true when the energy of the collision is >> the energy of the state change, and saying gases emit a Planck spectrum as a black body is wrong. Molecules in the gaseous state only absorb and emit specific wavelengths of energy.

The Vardavas & Taylor definition recognizes that in the absence of collisions, the radiation field is decoupled from the thermodynamic state of the non GHG gases, but again, the specific claim that it is coupled when collisions are frequent is not true at the energy levels of our atmosphere.

None of these explanations is always wrong, but none is always right either, even when the apparently right answer is produced. Go back to my last post and try to identify the mechanism by which this can work, based on the energy levels and the QM requirement that quanta of energy must be absorbed and emitted all in one transaction.

Again, I must emphasize that stimulated emission upon collisions is another mechanism for sharing energy and results in the proper bulk behavior, so in this context, both descriptions can be considered correct.

Control theory defines forcing as the input to the system, which in a strict sense is limited to solar power. Control theory defines sensitivity as the influence some change in the system has on the steady state gain.

Climate science incorrectly calls incremental input ‘forcing’ and calls incremental gain the ‘sensitivity’. This layer of obfuscation seems to be the root of so much confusion that it almost seems purposeful.

The incremental gain varies around 1.6 W/m^2 of surface emissions per W/m^2 of solar input and doesn’t even get close to the 4.3 W/m^2 of surface emissions per W/m^2 required to support a 0.8C rise from only 1 W/m^2 of new input.
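
The 4.3 W/m^2 figure appears to come from the Planck response of the surface: d(σT^4)/dT at 288 K is about 5.4 W/m^2 per kelvin, so a 0.8 C rise corresponds to roughly 4.3 W/m^2 of extra surface emission. This sketch checks that arithmetic only; it takes no position on the gain comparison itself:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_s = 288.0       # mean surface temperature, K

# Planck response of surface emission: d(sigma*T^4)/dT = 4*sigma*T^3
dF_dT = 4.0 * SIGMA * T_s**3    # ~5.4 W/m^2 per K
extra_emission = dF_dT * 0.8    # emission increase for a 0.8 K warmer surface
print(dF_dT, extra_emission)
```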

Consensus climate science defines forcing (incremental input) incorrectly relative to the control theory it’s supposed to be derived from, since an instantaneous change in the post-albedo solar power is considered to have the same effect as an instantaneous change in the optical depth (a change in the transparent window, per my earlier definition) arising from increasing GHG concentrations.

Here are some pointers to Bode’s work that climate feedback analysis is supposed to be based on. Bode’s book is freely available on-line, but I don’t have the link handy.

I’m sensing here that no one understands or knows what the spectral transmittance evaluated at the temperature of the surface is and what it quantifies, whereas to co2isnotevil (since he’s done these simulations from scratch), it’s basic assumed knowledge of atmospheric RT (and is arguably the most basic thing an RT simulation is actually ultimately calculating).

SoD,

I recall asking you if you know what it means and you said you didn’t know. It seems no one (save for Grant Petty) knows what it is, as I’ve never come across anyone else anywhere who had even the slightest clue what it was, let alone what it quantifies in specific physical terms.

I understand the spectral transmittance ‘T’ for a multilayer RT simulation quantifies the transparent window: the fraction of the power radiated from the surface that is transmitted through the whole of the atmosphere into space instantaneously. That leaves the spectral absorptivity ‘A’, i.e. 1-T, which quantifies the power radiated from the surface that is captured by the atmosphere (i.e. attenuated from passing into space). It is this absorbed IR flux which constitutes total GHG absorption in W/m^2, and it is what the atmosphere re-radiates both up and down (and is what initially drives the GHE prior to an imposed imbalance).

co2isnotevil,

Maybe I’m wrong, but I don’t think anyone here understands this at all. Yet it is the most basic starting point that has to be fully understood and agreed to first (assuming it’s correct and can be agreed to). I sense you’re operating as though everyone understands this and it’s basic stuff, but I don’t think that’s the case.

RW,
I’ll give them the benefit of the doubt that they understand that one of the conditions for LTE is that the average rate of energy entering the atmosphere must be equal to the average rate of energy leaving the atmosphere. Otherwise, the atmosphere is not in LTE and is either warming or cooling. Transiently, it’s never in LTE and the atmosphere is always either warming or cooling (diurnal and seasonal variability). But for any warming above average there must be a corresponding cooling so that the average is maintained.

LTE is most accurately quantified as a dynamic steady state whose average integrated over day, night and seasons is a scalar that characterizes the LTE response.

“I’ll give them the benefit of the doubt that they understand that one of the conditions for LTE is that the average rate of energy entering the atmosphere must be equal to the average rate of energy leaving the atmosphere. Otherwise, the atmosphere is not in LTE and is either warming or cooling.”

but was told it wasn’t worth trying to read and understand, so I’m not sure they do. You have the luxury of being the only one here who has really done these simulations in detail entirely from scratch, so naturally you know exactly what everything is doing and what various things mean and quantify. The rest of us have had to try to understand mostly from outsourced knowledge or literature.

I could be entirely wrong, but I don’t think SoD (or anyone else here) understands what the spectral T and A even are, let alone what they quantify relative to what we’re discussing. Yet I sense you’re operating as though it should be assumed basic knowledge.

“I’ll give them the benefit of the doubt that they understand that one of the conditions for LTE is that the average rate of energy entering the atmosphere must be equal to the average rate of energy leaving the atmosphere.”

Can this be established by an equation? I’m not sure it can specifically. As I interpret SoD, he is asking for an equation that establishes it. That and also an equation that says it’s established by COE.

I don’t have any problem with his demanding of equations, but I do think it first makes sense to establish agreement on certain conceptual things independent of equations, which seems to be what you’re trying to do with him. In SoD’s defense, there have been a lot of quacks and loons who have come here spouting reasons and pseudo-physics why the basic accepted elementary tenets of climate change and GHG warming are wrong.

It’s important to note that you (and I) accept the basic physics: increased GHG absorption from increased GHGs puts the system out of equilibrium with the Sun and should push the climate in a warming direction that ultimately requires the surface to warm by some amount in order to re-establish equilibrium with space. What we are discussing and debating is how that push is best and most accurately quantified.

Please correct me if I’m wrong, but my understanding is that the calculation of the spectral ‘T’ and ‘A’ for the steady state does not involve the absorption and subsequent re-emission of IR (as it does for the imposed imbalance state). It’s simply calculated from the upwelling IR from the surface through to the TOA, and from each subsequent layer through to the TOA, and from how much of the spectrum emitted by the surface is ‘seen’ and not ‘seen’. The amount of the spectrum ‘seen’ quantifies ‘A’ and the amount not ‘seen’ quantifies ‘T’.

RW,
Correct. The quantification of T that’s relevant only counts energy that leaves the planet in a straight line from the surface. Everything else is quantified by A, which is equal to 1 – T and represents the EM flux slowed down by the atmosphere, which in the steady state must be exiting the atmosphere at the same rate. Re-emission and subsequent absorption is the mechanism by which this power eventually finds its way out of the atmosphere, and this can be via the narrow-band emission/absorption by GHGs or the broad-band emission/absorption by the water and ice in clouds.

BTW, the fraction of A slowed down by the water and ice in clouds is larger than the fraction slowed down by GHGs. This is because water is much closer to a broad-band absorber/emitter than GHGs are, and of course clouds cover about 2/3 of the surface. This fact is often misrepresented to make all of the surface warming appear to be a consequence of GHGs.

Well, geez. We went round and round here in what must have been several hundred exchanges among many individuals on this point, across several threads. As best I can tell, no one here agrees with or understands this (if anyone does, please say so – so we can move beyond this).

Just to absolutely clarify: if the spectral ‘T’ evaluated at a steady-state surface temperature radiating 400 W/m^2 were calculated to be 0.25, it would physically mean that 100 W/m^2 of the 400 W/m^2 radiated from the surface is transmitted through the whole of the atmosphere instantaneously, and that the difference of 300 W/m^2, i.e. (1-0.25)*400, quantifies the power captured by the atmosphere, or total GHG absorption, right?
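
Under that definition the numbers work out as stated. A sketch, including one way a band-weighted effective transmittance of 0.25 could arise (the three-band split is invented for illustration):

```python
surface_flux = 400.0   # W/m^2 radiated by the surface

# Hypothetical bands: (fraction of surface Planck emission, band transmittance).
bands = [(0.20, 1.00),   # an atmospheric 'window' passing everything
         (0.50, 0.10),   # strongly absorbing GHG bands
         (0.30, 0.00)]   # fully opaque bands

# Effective 'T' is the emission-weighted average of the band transmittances.
T_eff = sum(frac * t for frac, t in bands)     # 0.25 in this example
transmitted = T_eff * surface_flux             # 100 W/m^2 straight to space
absorbed = (1.0 - T_eff) * surface_flux        # 300 W/m^2 captured ('A = 1 - T')
print(T_eff, transmitted, absorbed)
```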

It seems to me that your explanation of the microscopic behavior requires N2/O2 to emit significant LWIR photons and this is all I have an issue with. I have offered an alternate explanation that does not require the otherwise impossible emission of photons by N2/O2, matches the data, conforms to the requirements of LTE and fits the equations. Why is this so objectionable?

No one here is claiming “impossible emission of photons by N2/O2”.

“Why is this so objectionable?” – because your starting point is an incorrect assertion about LTE which you haven’t defended.

The position of atmospheric physics is that (in the troposphere) absorption of photons by CO2 (and other GHGs) is energy that is shared by collisions with other molecules even those that are not radiatively active (like N2/O2). Not energy that is emitted as another photon.

I have provided a link with a number of references from physics textbooks – You have not commented on these.

I have asked you for your calculation of mean time between collisions and mean time for spontaneous emission of a photon by CO2 – You have not provided it.

Clearly your calculations will be wrong if you have as a premise that energy absorbed from photons is emitted as a photon rather than shared, through collisions, with molecules in the vicinity.

This blog isn’t about argument from authority – but generally we take stuff found in physics textbooks as correct unless someone has some proof to the contrary. Or at least textbooks with a different claim.

I expect you don’t have any idea about this subject – so please focus on it, answer the questions asked, otherwise all of your calculations will be wrong. Your calculations will definitely be different to my calculations and to calculations in textbooks and papers.

Or, acknowledge that you don’t know the answer, you have assumed something that is perhaps wrong and you will go and investigate. That will be fine.

“The position of atmospheric physics is that (in the troposphere) absorption of photons by CO2 (and other GHGs) is energy that is shared by collisions with other molecules even those that are not radiatively active (like N2/O2). Not energy that is emitted as another photon.”

Macroscopically, this works fine and is consistent with bulk behavior quantified by the Kinetic Theory of Gases, but breaks at the microscopic level where Quantum Mechanics rules.

How is this energy shared? The only way to share it with N2/O2 is by somehow increasing the velocity of the N2/O2 and/or CO2 molecules.

If the energy from the photon is converted into velocity, quantum mechanics requires all 1.3×10^-20 joules be converted at once. This requires a collision to share the photon energy between colliding molecules in a way that doubles the kinetic energy of each molecule. For CH4 absorption, the energy of the absorbed photons is more than twice as much and you are requiring tripling the kinetic energy of each molecule. Furthermore, if a large fraction of this absorbed photon energy was ‘shared’ it would no longer be in the form of photons and we would see absolutely no emissions in any of the absorption bands since the ‘kinetic temperature’ of N2/O2 has no effect on the emissions by the planet, per the example of a planet with only an N2/O2 atmosphere.

Please explain in quantum mechanical terms how this presumed transfer of energy occurs. You don’t even need to supply the equations, just explain the physical mechanism.

As a hint, consider that the reverse isn’t possible at nominal atmospheric energy levels: if N2 and CO2 collided and the CO2 state increased as if it had absorbed a 15 μ photon, the ending velocity of the colliding molecules would be close to 0. If only it were this easy to get to 0 K. Laser cooling a gas does something like this, but can remove just a tiny amount of energy at a time.

In a gas, the redistribution of absorbed energy occurs by various types of collisions between the atoms, molecules, electrons and ions that comprise the gas. Under most engineering conditions, this redistribution occurs quite rapidly, and the energy states of the gas will be populated in equilibrium distributions at any given locality. When this is true, the Planck spectral distribution correctly describes the emission from a blackbody.

Another definition, which might help some (and be obscure to others) is from Radiation and Climate, by Vardavas and Taylor, Oxford University Press (2007):

When collisions control the populations of the energy levels in a particular part of an atmosphere we have only local thermodynamic equilibrium, LTE, as the system is open to radiation loss. When collisions become infrequent then there is a decoupling between the radiation field and the thermodynamic state of the atmosphere and emission is determined by the radiation field itself, and we have no local thermodynamic equilibrium.

And an explanation about where LTE does not apply might help illuminate the subject, from Siegel & Howell:

Cases in which the LTE assumption breaks down are occasionally encountered.

Examples are in very rarefied gases, where the rate and/or effectiveness of interparticle collisions in redistributing absorbed radiant energy is low; when rapid transients exist, so that the populations of energy states of the particles cannot adjust to new conditions during the transient; where very sharp gradients occur, so that local conditions depend on particles that arrive from adjacent localities at widely different conditions and may emit before reaching equilibrium; and where extremely large radiative fluxes exist, so that absorption of energy and therefore population of higher energy states occurs so strongly that collisional processes cannot repopulate the lower states to an equilibrium density.

———

These guys are all wrong?

You say:

Macroscopically, this works fine and is consistent with bulk behavior quantified by the Kinetic Theory of Gases, but breaks at the microscopic level where Quantum Mechanics rules.

Do you know any physics textbook that agrees with you?

Siegel & Howell didn’t understand quantum mechanics? One of the co-authors of the other reference is Prof F.W. Taylor at Oxford University. How did he get his job? Oxford Science Publications doesn’t vet the basics anymore?

You have come up with this all by yourself? Have you tried to get this revolution published?

If the energy from the photon is converted into velocity, quantum mechanics requires all 1.3×10^-20 joules be converted at once. This requires a collision to share the photon energy between colliding molecules in a way that doubles the kinetic energy of each molecule.

For CH4 absorption, the energy of the absorbed photons is more than twice as much and you are requiring tripling the kinetic energy of each molecule.

Furthermore, if a large fraction of this absorbed photon energy was ‘shared’ it would no longer be in the form of photons and we would see absolutely no emissions in any of the absorption bands since the ‘kinetic temperature’ of N2/O2 has no effect on the emissions by the planet per the example of a planet with only an N2/O2 atmosphere.

What prohibits doubling the speed of a molecule? Some new law you have discovered?

Given that you haven’t provided the time between collisions (previously requested) I will provide it – typical value up in the atmosphere is less than 1ns.
[You still need to provide the typical time for spontaneous emission of a photon for CO2].

Anyway, let’s take a look at the numbers quickly:

Let’s say up in the atmosphere at 273 K, the typical speed (vrms) of an N2 molecule, with mass m = 4.7×10^-26 kg, is 490 m/s. Let’s say 500 m/s. Check.

KE (typical N2) = mv^2/2 = 5.6×10^-21 J

A 15 μ photon (2×10^13 Hz) is absorbed by a CO2 molecule, which increases its energy by hν = 1.3×10^-20 J. Check.

So now this CO2 molecule collides with the N2 molecule and the collision transfers the absorbed photon energy into translational kinetic energy.

KE (this N2) = 1.9×10^-20 J

speed (this N2) = 903 m/s

What is wrong with this?

Now in a lot less than 1ns it collides with another molecule, probably an N2 and loses some momentum, the other N2 molecule gains it.

And in the next few ns, many more collisions occur and the energy is spread out among the local population of molecules, which now share this “extra energy” of 1.3×10^-20 J.
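
Those numbers are straightforward to reproduce (using h = 6.626e-34 J·s, c = 3e8 m/s, and the N2 mass given in the comment):

```python
import math

h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
m_n2 = 4.7e-26   # mass of an N2 molecule, kg

# Energy of a 15-micron photon: E = h*c/lambda (frequency ~2e13 Hz).
E_photon = h * c / 15e-6            # ~1.3e-20 J

# Kinetic energy of a typical N2 molecule at ~490 m/s.
v0 = 490.0
KE0 = 0.5 * m_n2 * v0**2            # ~5.6e-21 J

# Dump the whole photon energy into translation and solve for the new speed.
KE1 = KE0 + E_photon                # ~1.9e-20 J
v1 = math.sqrt(2.0 * KE1 / m_n2)    # ~900 m/s, roughly doubling the speed
print(E_photon, KE0, v1)
```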

I’ll give them the benefit of the doubt that they understand that one of the conditions for LTE is that the average rate of energy entering the atmosphere must be equal to the average rate of energy leaving the atmosphere. Otherwise, the atmosphere is not in LTE and is either warming or cooling. Transiently, it’s never in LTE and the atmosphere is always either warming or cooling (diurnal and seasonal variability). But for any warming above average there must be a corresponding cooling so that the average is maintained.

You are talking about conservation of energy and yes “I understand” that the average rate of energy entering the system must be equal to the average rate of energy leaving the system so long as the system is not warming or cooling.

A cooling atmosphere can be in LTE. A warming atmosphere can be in LTE.

Local thermodynamic equilibrium does not mean static temperature. It does not mean conservation of energy.

I believe he’s not using the term LTE in that way in those comments, but just means LTE as steady-state, in particular at the surface/atmosphere and atmosphere/space boundaries.

My understanding of LTE for the gases of the atmosphere, as it’s generally used, is it means energy is shared equally by collisions on a local level. On Wikipedia, there are different definitions and variants of the term Local Thermodynamic Equilibrium.

“It is important to note that this local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.”

The discussion about micro mechanism, LTE, etc. is kind of moot anyway, because black box system analysis is specifically designed to not allow any assumption of internal mechanism. Not even isotropic emission on a photonic level is an assumption.

co2isnotevil is making an important claim about how radiation interacts with the atmosphere.

Black box analysis is something that looks at the inputs and outputs without knowing the details of what goes in inside.

You guys can choose what argument you want to make. Right now it’s a mish mash of flawed claims. It’s not a black box analysis and it’s not a physics analysis.

It’s like Miskolczi’s thesis. Introduce an unsupported claim that sounds vaguely like something thermodynamic but isn’t. Don’t defend it. But insist on leaving it there because removing it destroys the foundation of the thesis.

If someone makes a flawed claim I’m going to question it.

If the basics of radiative transfer in the atmosphere are not important to the theory then they can be – and should be – left out. Unfortunately you don’t understand what I’m talking about so I’m going to stop.

“co2isnotevil is making an important claim about how radiation interacts with the atmosphere.”

Important to what specifically? I understand his claim is not that the non-GHGs are out of LTE with the GHGs. He’s claiming they are in LTE, but that the flux of photons moving through the GHGs – absorbed and re-emitted by the GHGs – is not necessarily shared by collisions with non-GHGs, and that collisions are not the dominant mechanism causing photons to be emitted from GHGs.

I understand his argument to be that the rate at which GHG molecules capture photons and reach an energy state high enough that the absorption of a photon will ‘excite’ the emission of another photon from that same GHG molecule is faster than the rate at which the energy of absorbed photons can be shared or transferred to the non-GHGs by collisions.

Whether he’s correct or you’re correct, the probability of an absorbed photon’s energy being re-radiated is still 50/50 up or down. This is entirely independent of the initiating mechanism of re-emission, even if the energy is thermalized and re-radiated as broad-band emission.

If 1 W/m^2 of upwelling IR goes into a layer – whether that layer is emitting 300 W/m^2 or 100 W/m^2 – it will re-emit 0.5 W/m^2 upward, in the same direction it was going pre-absorption, and 0.5 W/m^2 downward, in the opposite direction. This is independent of the dominant mechanism initiating the re-emission of the captured IR.

This is the foundation behind what we are discussing and debating here.
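
A toy bookkeeping of that 50/50 split, for a single fully absorbing layer over a black surface (not a real radiative-transfer model): every joule eventually escapes to space, but along the way the surface absorbs an extra unit of returned flux:

```python
# Follow 1 W/m^2 absorbed by a single layer. Each re-emission splits 50/50:
# half escapes upward, half returns to the surface, which re-emits it upward
# into the layer again.
up_to_space = 0.0
back_to_surface = 0.0
in_layer = 1.0
for _ in range(200):          # iterate the bounce until it converges
    up = 0.5 * in_layer
    down = 0.5 * in_layer
    up_to_space += up
    back_to_surface += down
    in_layer = down           # surface re-emits; the layer recaptures it
print(up_to_space, back_to_surface)   # both converge to 1.0
```

The geometric series 1/2 + 1/4 + 1/8 + … is what reconciles “half goes back down” with “all of the energy still leaves in the steady state”.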

“Black box analysis is something that looks at the inputs and outputs without knowing the details of what goes in inside.”

It’s highly counter-intuitive, though, because the actual behavior is not being modeled. The modeling is based on the concept of equivalence as it applies to the joules flowing in and out of the whole system at well-characterized boundaries, where all joules entering at each boundary have to be conserved and embody all of the effects in between – radiant and non-radiant, known and unknown.

“You guys can choose what argument you want to make. Right now it’s a mish mash of flawed claims. It’s not a black box analysis and it’s not a physics analysis.

It’s like Miskolczi’s thesis. Introduce an unsupported claim that sounds vaguely like something thermodynamic but isn’t. Don’t defend it. But insist on leaving it there because removing it destroys the foundation of the thesis.”

“If the basics of radiative transfer in the atmosphere are not important to the theory then they can be – and should be – left out. Unfortunately you don’t understand what I’m talking about so I’m going to stop.”

I think I do. You’re not giving me enough credit here (or co2isnotevil). The basics of radiative transfer in the atmosphere, i.e. what variant of LTE exists, etc., are not germane to the validity of black box analysis, but if not completely correct, may explain why people such as yourself apparently don’t even understand the foundation behind why the black box analysis is being done in the first place.

I’ve tried to establish agreement on the fundamental foundation, independent of whether the final conclusion or hypothesis is right or wrong, but everyone is just ‘shouting down’ the whole thing without giving it fair or proper consideration. This is what disappoints me. None of the basics associated with the physics of GHG warming are in dispute, i.e. that increased GHGs should cause some push in the warming direction that will ultimately further warm the surface. What’s in dispute is how to correctly and most accurately quantify that warming push.

I appreciate your desire to see equations, but maybe we first need to start with some ‘do you agree’ basics, because I genuinely do not understand what you don’t understand (or somehow see as conflicting with what has been established).

1. Do you agree the GHE is established to be a radiative resistance to outer space cooling by radiation from the atmosphere into space? That is, the underlying mechanism is that the constituents of the atmosphere are largely opaque to upwelling IR from the surface, and some of it is absorbed by GHGs and subsequently re-radiated back downward towards the surface, resisting the upward push toward radiative cooling of the system eventually achieved by upwelling IR emitted from the atmosphere that passes into space?

2. Do you agree the GHG absorption, aggregate or incremental, is not new energy to the system? That is, it has to be all prior absorbed solar energy which is ‘blocked’ from leaving in the immediate present?

3. Do you agree that the re-radiation of the energy captured by the GHGs is non-directional? That is, by and large occurs with equal probability up or down, because at any discrete layer the probability of re-emission is 50/50 up or down?

4. Do you agree that radiation emitted up in the atmosphere is in the act of cooling or contributing to the push toward the radiative cooling of the system by radiation from the atmosphere into space?

5. If you agree with numbers 1-4, how does it follow that incremental GHG absorption is equal to incremental GHG ‘forcing’ if not all of what’s absorbed is re-radiated back downward?

How about we assume that most of the absorbed IR is shared with the surrounding gas molecules via collisions, as has been established and claimed in the literature and text books, etc.?

Even if this is totally correct (which it may be), the absorbed IR energy being shared with the non-GHGs can’t be persisting for very long, otherwise it would not cool down at night anywhere near as much as it does. Meaning absorbed surface IR is getting re-radiated fairly quickly, and subsequently absorbed and re-radiated multiple times over, moving through the atmosphere quite quickly. The volumetric heat capacity of air is infinitesimally ‘thin’ – on the order of 1/3000th that of water. The biggest delay provided by the atmosphere would be when surface IR is absorbed by clouds, but even on a cloudy night it still cools down quite a bit, showing the limits of the degree to which the atmosphere can delay the release of absorbed solar energy back out to space.

But even more importantly than this is I understand that the GHE is specifically established to be a radiative induced effect, and not a conductively induced effect. In other words, the GHE is not established to be driven by absorbed surface IR (that would otherwise pass into space) which is thermalized and the energy of which subsequently conducted down to the surface. In fact, the reason why the GHE doesn’t violate the second law is specifically because it’s a radiative induced effect and the surface is not further warmed through a conduction process.

I understand the way GHGs act to elevate the surface temperature is to further delay the release of absorbed solar energy back out into space. The delay is achieved by the fact that a significant amount of surface IR is absorbed by GHGs and subsequently re-radiated back downward towards the surface, resisting the necessary IR push upward toward space required to achieve balance at the TOA. The net result of this effect is that the surface and the lower atmosphere have to be emitting IR at higher rates (i.e. rates higher than 240 W/m^2), and subsequently be warmer, in order to push the 240 W/m^2 that has to be passing into space to achieve balance with the Sun at the TOA.
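This ‘push through’ argument can be sketched as a hypothetical single-layer toy (not the actual multi-layer atmosphere; the absorptivity value below is illustrative only): if a fraction a of surface emission S is absorbed and half of the absorbed part is returned downward, only S(1 − a/2) escapes, so S must exceed the 240 W/m^2 required at the TOA.

```python
def surface_emission_required(olr, absorptivity):
    """Single-layer toy: only the transmitted part plus half the absorbed
    part goes up, so olr = S*(1 - a) + 0.5*a*S = S*(1 - a/2). Solve for S."""
    return olr / (1.0 - 0.5 * absorptivity)

S = surface_emission_required(240.0, 0.8)  # 400 W/m^2 with these toy numbers
```

With an assumed absorptivity of 0.8, the surface in this toy must emit 400 W/m^2 to push 240 W/m^2 out to space – the same qualitative point made in the paragraph above.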

As a physicist, how can you just ignore the fact that the constituents of the atmosphere, i.e. GHGs and clouds, act to both cool the system by re-radiating absorbed IR energy up toward and ultimately out into space as well as act to warm the system by re-radiating absorbed energy back downward toward the surface, thereby resisting radiative cooling to space?

Above all, the system must be making the push toward radiative balance with the Sun at the TOA, right?

Are you aware of measurements of the emitted spectra that are done at night? I’m wondering if the ozone peak is reduced or goes away when the Sun is not energizing the ionosphere.

Clearly, the origin of these peaks is not surface emissions, as the features seem to be completely independent of surface and cloud conditions. It’s also notable that the 666 cm-1 and 1150 cm-1 ozone peaks are reversed in relative strength (in the Antarctic view): the 1150 cm-1 lines are stronger than the 666 cm-1 lines, yet have fewer emissions. This indicates that the O3 was energized by something with a higher energy (uv or high energy particles) whose energy is subsequently emitted as the spontaneous emission of lower energy photons. The probability of spontaneous emission increases dramatically as the molecule achieves higher energy states. It could also be the signature of ozone creation, since when an O2 molecule combines with O1, there’s excess energy that’s often considered to be taken away by a third body, although it might make more sense to emit the excess energy as photons.

In any event, it has nothing to do with the absorption and return of surface emissions, which is all that I’m simulating, except perhaps as an unaccounted for input of energy, where absorbed UV is converted to LWIR and sent towards the surface.

The velocity of the molecules in air sets the speed of sound. If you can show an experiment where you irradiate air with massive amounts of 15u photons and significantly increase the speed of sound, you can make a case.

Another problem is the reverse. Generally, state transitions are reversible and at the current energy levels of the atmosphere, no collision is powerful enough to convert any of its kinetic energy into the state transition of a GHG molecule. If we were operating at energy levels where this could happen, then the reverse could happen as well.

Keep in mind that this is quantum mechanics and everything is probabilistic, so we can’t rule it out entirely, but the probability that it will occur is very, very low.

Also, the rms velocity of the molecules in an ideal gas goes as sqrt(T), thus doubling the velocity quadruples the temperature. There is only about a 5% difference in the rms velocity of air molecules between 0C and 30C.
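The sqrt(T) scaling is easy to check numerically (a quick sketch; the 0C/30C comparison is the one made in the comment):

```python
import math

def rms_speed_ratio(T1, T2):
    # v_rms = sqrt(3*k*T/m), so the ratio of speeds depends only on sqrt(T2/T1)
    return math.sqrt(T2 / T1)

quadrupled = rms_speed_ratio(300.0, 1200.0)   # quadrupling T doubles v_rms
warming = rms_speed_ratio(273.15, 303.15)     # 0 C -> 30 C: only ~5% faster
```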

2.2.3: “Collisional transitions involve a radiating and colliding molecule”
“Absorption and induced emissions involve a radiating molecule and a photon”
“Radiating transitions are accompanied by the appearance or disappearance of a photon”

How is this any different from what I have been saying?
The radiating molecule is the GHG molecule which radiates a photon upon collision and the colliding molecule is N2 or O2.

Mostly not. As SoD pointed out several times above, the lifetime of an excited ghg molecule in the troposphere is quite short, on the order of microseconds (the collision frequency is much higher than that, but most collisions do not cause a change in internal energy). The radiative lifetime of an excited CO2 molecule, for example, in the absence of collisions is about 2 seconds. The vast majority, 99.9+%, of collisions which do produce an excited molecule do not result in the emission of a photon. The excited molecule transfers the energy by collision rather than radiation. The fraction of molecules in the excited state, however, is constant at any given temperature and pressure.
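The competition described here can be sketched with the order-of-magnitude lifetimes quoted above (~2 s radiative, ~microseconds collisional; both figures are from the comment, not precise values):

```python
def fraction_radiating(tau_radiative_s, tau_collisional_s):
    """An excited molecule either radiates or is collisionally de-excited;
    the branching fraction for radiation is tau_coll / (tau_coll + tau_rad)."""
    return tau_collisional_s / (tau_collisional_s + tau_radiative_s)

frac = fraction_radiating(2.0, 1e-6)  # ~5e-7: almost all de-excitation is collisional
```

This rough estimate is consistent with the statement that well over 99.9% of excited molecules hand off their energy by collision rather than by emitting a photon.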

After any collision between an energized GHG molecule and another molecule that does not emit a photon, the final state is an energized CO2 molecule and the colliding molecule moving in different directions after having exchanged momentum during the collision. It’s certainly true that the energy of a typical collision is not enough to ensure that every collision of an energized GHG molecule will emit a photon, but a significant fraction will.

Please read the section from Goody and Yung’s book that SoD referenced. Nowhere does it qualify or quantify the conversion of the energy of a state transition into translational momentum. It only talks about the conversion between the energy of state transitions and photons.

It seems to me that consensus climate science and not radiative physics is the origin of the assertion of significant conversion between the energy of a state change and linear momentum. This is a naive interpretation of the underlying physics, even as it seems to work relative to the observed bulk properties. I’m not surprised as since the IPCC was formed, climate science has been biased towards the preconceived conclusion of a high sensitivity. There are far too many games like this being played to provide the wiggle room necessary to make extraordinary claims seem plausible. For this reason I have little faith in anything claimed in any climate science textbook written since or any other text that emphasizes climate change caused by man, especially those written by anyone with their name on any of the ‘supporting science’ in IPCC AR’s.

If we only believe what’s written in textbooks, science would never advance. Climate science is in dire need of advancement as the presumption of a high sensitivity has broken it in countless ways.

“Mostly not. As SoD pointed out several times above, the lifetime of an excited ghg molecule in the troposphere is quite short, on the order of microseconds (the collision frequency is much higher than that, but most collisions do not cause a change in internal energy). The radiative lifetime of an excited CO2 molecule, for example, in the absence of collisions is about 2 seconds. The vast majority, 99.9+%, of collisions which do produce an excited molecule do not result in the emission of a photon. The excited molecule transfers the energy by collision rather than radiation. The fraction of molecules in the excited state, however, is constant at any given temperature and pressure.”

OK, assuming this is correct, what then is considered to be the dominant mechanism that causes a GHG molecule to emit a photon? Is it not a collision with an N2 or O2 molecule?

BTW, how are you defining an ‘excited ghg molecule’? One or a few photons absorbed above the ground state, or at its ionization potential where the absorption of a photon near instantly ‘excites’ the emission of another photon from the same molecule?

Is it at least agreed that, independent of the initiating mechanism of photon emission from the GHG molecule, the direction is random, i.e. occurs with equal probability up or down?

Are you claiming the speed at which GHGs are capturing photons, to the point where the absorption of a photon will ‘excite’ the emission of another photon, is faster than the speed at which the energy of the absorbed photons can be transferred by collision? Or are you claiming that there is no mechanism by which the absorbed photons’ energy can be ‘collisionally’ transferred into the linear kinetic energy of the O2 and N2 gas molecules?

It seems to be the latter, where you’re claiming the non-GHGs and GHGs are in LTE with each other, i.e. their linear kinetic energy is equalized or fully ‘shared’ amongst each other, but the flux of photons being absorbed and emitted by the GHGs themselves is not being shared, and their absorbed and emitted energy is moving through the atmosphere mostly without being ‘shared’ with non-GHGs.

In other words, you’re not claiming the gases are non-LTE, but instead claiming this version of LTE from wikipedia is what is primarily operating:

“It is important to note that this local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.”

Co2isnotevil and RW have cited work by George White showing that the radiation leaving the planet above a particular location fits reasonably well to the S-B equation where T is the surface temperature (Ts) directly below and emissivity is 0.6. The “climate sensitivity” of a gray body at 255 degK is about 1 degC, so the consensus on climate sensitivity must be wrong. Can we trust this conclusion?

Scientists do use “blackbox” models, but they know that such models can’t be trusted when the model doesn’t include all of the relevant physics. George White’s model assumes that the earth has an emissivity of about 0.6, but we know that the surface of the earth (where the T used in his model is actually measured) is composed of materials with emissivity above 0.9. Therefore, we know the model is flawed. RW and co2isnotevil admit that surface temperature and emissivity do NOT control what comes out of “our blackbox”: except for the atmospheric window, the temperature, quantity of GHGs present, and cloudiness of the atmosphere control the radiation reaching space. Any model that reduces the role of the atmosphere to an unvarying emissivity of 0.6 can’t be trusted to yield a correct answer for the climate feedback parameter – the relationship between changes in surface temperature and outgoing radiation. ECS is the reciprocal of the climate feedback parameter.

George’s data and theoretical curves for different emissivities make it LOOK like a graybody model is almost perfect. However, the dynamic range of the temperature data is about 75 degK, while full scale is 350 degK, so the discrepancies and scatter in the data are compressed into a small space. Magnifying the graph makes these problems easier to see. So would a linear plot of P^(1/4) vs T. When surface temperature is above 290 degK, emission to space varies by a huge 60 W/m2! This would be more obvious with a conventional plot: the cause – surface temperature – as the independent variable on the x-axis and the measured consequence – OLR – as the dependent variable on the y-axis.

George White’s data actually PROVES that the S-B equation is not – as he claims – an “immutable truth” for our planet. It is simply a first-order approximation for the behavior of a planet with a non-homogeneous surface and an atmosphere whose temperature varies by almost 100 degK! If you dig more deeply into the fundamental physics – which starts with the Einstein coefficients for emission and absorption – you learn that the S-B equation itself is not fundamental. The standard derivation of Planck’s Law begins by assuming the existence of an equilibrium between radiation and matter! Given the rapid change in temperature and composition with altitude, such equilibrium doesn’t exist in our atmosphere. Emissivity turns out to be a “fudge factor” used to deal with interfaces and other problems. Strictly speaking, the concepts of blackbody and graybody radiation should not be applied to our atmosphere.

If you are searching for “immutable truth” where local thermodynamic equilibrium exists, try the Schwarzschild equation used in radiative transfer calculations. The lower and middle atmosphere are in LTE with respect to thermal infrared radiation.

There are simpler equations that correctly describe the relationship between radiation and temperature than the ones George White uses. P looks too much like pressure for equations containing T, so I’ll use the traditional symbol for radiated power, W.

Differentiating both sides of the S-B equation, W = εσT^4, with respect to T (assuming emissivity is a constant, which may not always be appropriate) gives:

dW/dT = 4εσT^3

dW/dT = 4W/T

ΔW/W = 4(ΔT/T)

These versions are useful because they don’t depend on emissivity. The third version is particularly useful because it says that small percent changes in power are 4-fold the percent change in temperature. (You need to use the blackbody equivalent temperature for T and restrict yourself to small changes in W and T.) A 3.7 W/m2 change in radiation (1.5% post albedo) will cause a 0.38% or 0.98 K change in temperature. You can substitute for W (in terms of e, o and T) or T (in terms of W, o and e) and presumably arrive at the equations George White derives.
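As a numeric check of the third version, using the blackbody-equivalent values from the paragraph (W = 240 W/m^2, T = 255 K, dW = 3.7 W/m^2):

```python
def dT_from_dW(W, T, dW):
    # from dW/W = 4*dT/T for small changes (emissivity cancels out)
    return T * (dW / W) / 4.0

dT = dT_from_dW(240.0, 255.0, 3.7)  # ~0.98 K, matching the text
```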

The effective emissivity of 0.6 is not that of the surface, but that of the planet, which is the combination of a surface at temperature T and a semi-transparent atmosphere reducing the emissivity by slowing down emissions from the surface to space. In fact, when we talk about surface temperatures from satellite measurements, and this is true with the GISS data I used, an emissivity of 1.0 at the surface is assumed to establish what the equivalent surface temperature will be, the operative word being equivalent. In practice, this equivalent temperature closely corresponds to near surface measurements taken by thermometers shaded from the Sun.

The dynamic range covers the entire span of monthly averages per 2.5 degree slice of latitude which more than spans the range of temperatures implied by ice cores. I’m not sure what your point about this is.

The bulk behavior of the planet converges to the SB relationship of a gray body comprised of a unit emissivity surface (ideal black body) and a semi-transparent atmosphere providing an effective emissivity for the composite gray body. Even the monthly averages don’t deviate far from the gray body response. The more data is averaged, the closer the averages get to the behavior of an ideal gray body. This is the LTE behavior of the planet as seen from space in response to changing accumulated forcing. This is also why 2.5 degree slices are important. Adjacent slices have nearly identical physical properties and the most significant difference between them is the total input power each slice receives from the Sun, or the total accumulated forcing.

You also need to consider that about 2/3 of what is seen from space are cloud tops and not the surface. The averages include both cloudy and clear conditions, and averages are all we care about when predicting how those averages will change in response to a change in input (sun) or a change to the system (co2). It seems to be the instantaneous differences from optimum cloud properties that result in deviation from nominal. But whenever it goes one way, it goes an equal and opposite amount in the other direction. The reason for this is that clouds are the control valve (thermostat) that multiplexes between the warm surface and cold clouds to present the planet emissions required to offset the incoming solar power. Note that this isn’t a thermostat acting on the surface temperature, but a thermostat acting on the planet’s energy balance.

The equations with no dependency on albedo are useless as comprehensive predictors of change, as the only property affected by changes in gain (changes in feedback) is the emissivity, which I’ve shown is equal to 1/gain.

Your assertion of about 1C per 3.7 W/m^2 of forcing is the same value I get, so why are you objecting again?

Co2isnotevil wrote: “Your assertion of about 1C per 3.7 W/m^2 of forcing is the same value I get, so why are you objecting again?”

A blackbody at 255 degK has a “climate feedback parameter” of about 3.7 W/m2 emitted per degK. So does a graybody at 288 degK with emissivity of 0.6.

Unfortunately, the atmosphere is not a blackbody or graybody. Your own data shows modest disagreement between a theoretical graybody and observations. The disagreement is serious above 290 degK, which covers about half of the planet. Your discussion of thermostats and clouds implicitly acknowledges that the physics of graybodies provides an incomplete description of how the planet behaves (and therefore of what its climate sensitivity is).

The physics of black- and graybodies applies only to materials in equilibrium with the local radiation field. The derivation of Planck’s Law begins with this assumption. Blackbody radiation is emitted only when absorption and emission of photons come into equilibrium with each other. There is no such equilibrium on our planet. Graybody radiation is emitted when blackbody radiation is partly reflected at a surface. This is why absorptivity equals emissivity.

At some wavelengths (the atmospheric window), emission to space is appropriate for a blackbody of about 290 degK. Other wavelengths, 260 degK. In much of the CO2 band, 220 degK. At the strongest CO2 line, 250? degK. The planet does not emit as if the radiation escaping to space came from any one temperature. We all know why.

You are closing your eyes to the inconsistencies in your own data, to the misapplication of theory to this situation, and to the role of clouds. I’m happy discussing the real behavior of climate system, but not what an inadequate theory says must be happening.

What is really happening is that the best linear fit to your temperature vs power emitted plot has a slope of about 0.9 degK/(W/m2). I’d like to understand what this means. I’m uncomfortable interpreting it as climate sensitivity, but my reasons are vague. I don’t think a world with high climate sensitivity or low climate sensitivity would look much different on this type of plot.

Kirchhoff’s Law – absorptivity [at a given wavelength] equals emissivity [at that wavelength] – is required by the Second Law. Otherwise, you could construct a perpetual motion device of the second kind. To be specific: if emissivity were greater than absorptivity, the object with that property would lose energy and have a temperature below ambient, and conversely for absorptivity greater than emissivity. This would allow extraction of energy from a single heat source at one temperature.

Here is the relationship between surface emissions and planet output. Again, the change around 0C is a consequence of emerging feedback being applied to all accumulated W/m^2 of forcing. As you can see, it’s quite linear in the power domain.

Start here for plots of the sensitivity of everything on everything else.

I’m still confused about what appears to be the new branch of physics you have come up with.

Can you confirm a few points:

1. The atmosphere does not emit photons according to its temperature. Instead it emits photons that it absorbed. Therefore the temperature of the atmosphere is irrelevant in emission of thermal radiation.
2. The surface does emit photons according to its temperature.
3. (Assuming I am correct in item 1) Liquid water in clouds falls into which category?

First, there is no new branch of physics here. All I am doing is applying the laws of physics we already know to the climate system. Apparently, this is something that consensus climate science has never done with any kind of rigor, otherwise they would not be pushing such an absurdly high sensitivity.

1. The atmosphere does not emit photons according to its temperature. Instead it emits photons that it absorbed. Therefore the temperature of the atmosphere is irrelevant in emission of thermal radiation.

The O2, N2 and other non-GHG gases in the atmosphere are like this. Liquid water, ice and dust are black bodies that absorb and emit equal average power in the steady state.

2. The surface does emit photons according to its temperature.

And in the steady state, it’s absorbing the same photon flux it’s emitting.

3. (Assuming I am correct in item 1) Liquid water in clouds falls into which category?

I explained that in 1). However, while collisions of N2/O2 with liquid water do share momentum, in the steady state the flux entering particles of atmospheric water, ice and dust is equal to the flux leaving them. Thus there is no net conversion of photons into heated matter, and the only thing we should be interested in relative to climate change is how the steady state adapts to change.

First, there is no new branch of physics here. All I am doing is applying the laws of physics we already know to the climate system.

Oh, your definition is new. So far you haven’t provided a single physics reference. (You just appear to think it’s obvious from quantum mechanics).

I wonder when you think climate science lost the plot as far as the emission of thermal radiation from the atmosphere is concerned. And at what time the plot was correct.

dIλ = ( Bλ(T) − Iλ ) dτ

where Bλ(T) is the Planck law of emission at wavelength λ and temperature T.

That is, the change in intensity of monochromatic radiation is the difference between the emission and the absorption. And the emission depends on the local temperature. (The reason that the emissivity/absorptivity coefficient, ελ, doesn’t appear on the right side is because it is on the left side under τ, the optical thickness – you can see this in the derivation in equation 10.)

The key point is emission is not based on “photons absorbed” but on local temperature and emissivity.
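A crude gray, broadband discretization of this equation illustrates the point: the outgoing flux depends on the local temperatures along the path, not on the flux absorbed (a sketch only, with an assumed 6.5 K/km temperature profile and an arbitrary per-layer optical depth):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def march_upward(T_surface, layer_temps, dtau_per_layer):
    """Start from surface emission, then apply dI = (B(T_local) - I) * dtau
    layer by layer, using sigma*T^4 as a gray broadband stand-in for B."""
    I = SIGMA * T_surface**4
    for T in layer_temps:
        B = SIGMA * T**4
        I += (B - I) * dtau_per_layer
    return I

profile = [288.0 - 6.5 * z for z in range(11)]  # 0..10 km at 6.5 K/km
olr = march_upward(288.0, profile, 0.2)
```

The result lands between the warm surface’s emission and that of the cold top layer: the outgoing intensity relaxes toward the local Planck emission, which is the opposite of a model where the atmosphere simply forwards whatever it absorbed.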

Here’s Radiative Transfer, by Nobel prize winner Subrahmanyan Chandrasekhar. This was published in 1950. Here is an extract from the 1960 reprint in books.google.com.

Note the same relationship of emission as I provided, which relies on the local temperature.

CO2isnotevil wrote: “The atmosphere does not emit photons according to its temperature. Instead it emits photons that it absorbed. Therefore the temperature of the atmosphere is irrelevant in emission of thermal radiation… The surface does emit photons according to its temperature.”

Emission in the troposphere does depend on temperature.

Liquid and solid water cover much of the surface of the planet. Water vapor is the dominant GHG in the lower troposphere. Does it really make sense to believe that emission from liquid and solid water depends on temperature, but that emission from water vapor does not?

At 0 degC, molecules in all three forms have the same mean kinetic energy and Boltzmann distribution of energies – some of which are high enough to cause collisional excitation.

When water and other liquids evaporate at one atmosphere, they expand about 1000-fold. Instead of touching other molecules, molecules in the gas phase average about 10-fold further apart in each of three directions. They don’t go far before colliding (about 100 diameters).

“And in the steady state, it’s absorbing the same photon flux it’s emitting.”

BTW, I certainly cannot agree with this statement unless it’s qualified to be absorbing the equivalent of the photon flux it’s emitting.

I assume you just misspoke. In the steady-state, there is certainly no thermodynamic requirement that the replacement of the radiant power emitted from the surface in the steady-state be manifested by the same amount of photon flux the surface is emitting.

The only requirement is for the net flux gained, independent of its physical manifestation, to be equal to the photon flux emitted from the surface as a consequence of its temperature (dictated by S-B).

That is, if the surface in the steady-state emits X Joules per second as a consequence of its temperature, a net of X Joules per second must be somehow gained (i.e. added back), otherwise the surface will cool and radiate less, or warm and radiate more (and not be in a steady-state).

And here’s another plot with that same strange peak in emission of radiation in the very center of the CO2 band:

This is from Richard M Goody & YL Yung, Atmospheric Radiation: Theoretical Basis, 2nd ed. (1989). The 1st edition was published in 1964 and the theory hasn’t changed. In fact he draws a lot on the just-cited Radiative Transfer, by Chandrasekhar.

Obviously your theory can’t explain it, but the theory (universally believed by people in atmospheric physics) that emission of radiation depends on the local atmospheric temperature predicts it – as you see in the above graphic which both has the calculated and measured values.

I think the problem may be there is really no way to distinguish a measured temperature that is the manifestation of both a flux of photons and a kinetic flux of energy shared by collisions. The measured temperature (like from a thermometer) would really be measuring the combined effect of the two, would it not?

Maybe I’m wrong, but I fail to see how this violates Kirchhoff’s law if the linear kinetic energy of the non-GHGs is equalized via collisions with the GHGs, i.e. they are in LTE with each other. Are you claiming it does violate Kirchhoff’s law? If so, can you maybe provide a little more detail explaining that?

You guys seem to be splitting hairs here over what is supposed to be happening at a very, very micro level, which in reality is likely some combination of both purported mechanisms.

I understand there are basically three ways to initiate the emission of a photon from the atmosphere (not emitted from the water or ice in clouds). The first is spontaneous emission; the second is a collision ‘exciting’ the emission of a photon from a GHG molecule; and the third is when a GHG molecule is in a high enough energy state that the absorption of a photon can ‘excite’ the emission of another photon from that same GHG molecule. In all three cases, the direction of the photon emitted is random, i.e. occurs with equal probability up or down.

We know the energy of absorbed photons one way or another are being re-emitted fairly quickly, right? And the emission is non-directional, right? Is this really worth splitting hairs over, especially since black box analysis implicitly does not assume any internal mechanism?

..You guys seem to be splitting hairs here over what is supposed to be happening at a very, very micro level, which in reality is likely some combination of both purported mechanisms..

This is not minutiae. This is “how does radiation interact with the atmosphere?” It’s important if you want to calculate things to do with the greenhouse effect and climate.

You get a completely different result for your calculations of radiative transfer if you use co2isnotevil’s theory. All other theories proposed which rely on a flawed idea are consequently flawed.

In the case accepted since at least 1950, and probably before, radiation is emitted according to the temperature of the atmosphere. Let’s call that case A.

In the case proposed by co2isnotevil, radiation is not a function of the atmospheric temperature at all. Let’s call that case B.

In case A the atmospheric temperature changes affect the “greenhouse” effect.

In case B the atmospheric temperature changes do not affect the “greenhouse” effect (apart from with clouds).
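For concreteness, case A can be sketched numerically. The temperature profile, the per-layer optical depth of 2.3, and the choice of the 667 cm-1 CO2 band center below are all illustrative assumptions, not numbers from any measurement:

```python
import math

H, C, K = 6.626e-34, 2.998e10, 1.381e-23  # Planck (J s), light speed (cm/s), Boltzmann (J/K)

def planck(nu, T):
    """Planck spectral radiance at wavenumber nu (cm^-1), in W/(cm^2 sr cm^-1)."""
    return 2 * H * C**2 * nu**3 / (math.exp(H * C * nu / (K * T)) - 1)

def emerge(I_surface, temps, tau, nu=667.0):
    """Case A: each isothermal layer transmits exp(-tau) of the incoming radiance
    and adds its own thermal emission B(T) * (1 - exp(-tau))."""
    I, t = I_surface, math.exp(-tau)
    for T in temps:
        I = I * t + planck(nu, T) * (1 - t)
    return I

temps = [288 - 10 * i for i in range(7)]           # 288 K at the bottom down to 228 K aloft
I_top = emerge(planck(667.0, 288.0), temps, tau=2.3)
# In this opaque band the emerging radiance tracks B(T) of the cold top layers,
# not the warm surface: emission follows the local atmospheric temperature.
```

In an optically thick band the radiance emerging at the top ends up near B(T) of the coldest, topmost layers rather than near the surface value, which is exactly the behavior case A predicts and case B cannot produce.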

It also finally explains your 1000 comments about 50% up / 50% down! I have to say it made me happy to finally understand the source of this amazing idea. Look again at this graph and you will see – if you concentrate – why the idea is not in accord with measurements.

Which is why I ask your colleague in this endeavor to explain various measurements that conflict with his theory. And of course to provide just one reference from physics in support.

Of course, co2isnotevil is welcome to attempt to demonstrate that regardless of the “differences” in theory proposed by case A (atmospheric physics since at least 1950) or case B (co2isnotevil plus the enormous body of literature he has provided in support of his theory), his other theory is still intact.

I await.

RW, I put it to you – with regret – that given you think this is splitting hairs over unimportant details you understand just about nothing about this subject.

And when you say “..in reality is some combination of both purported mechanisms..“, well, what can I say? Are you having a laugh? Can you even read an equation? Your study of atmospheric physics has led you to this important conclusion? Or are you thinking of Jack Nicholson in “Mars Attacks!”?

Ask co2isnotevil, “if we accept the preposterous ideas of climate science about emission of thermal radiation from the atmosphere, will it affect your theory?” and see what the answer is.

I appreciate your reply, and before reading it I spent all night thinking about all of this. Despite what you may think, I’m not interested in fooling or deluding myself. I’m genuinely trying to understand where you’re going with this (and where co2isnotevil is going with it). In other words, why each of you apparently thinks this is such a big deal, given black box analysis is implicitly designed to not allow any assumption of internal mechanism.

I hope you’ll stick with me on this, and I do respect you and this blog. I also appreciate that you and this blog don’t advocate argument from authority.

“It’s important if you want to calculate things to do with the greenhouse effect and climate.”

OK, but calculate what specifically about the GHE and/or the climate? This seems to be the crux of the matter. It seems to me either the GHE is a radiative effect, i.e. its underlying mechanism is driven by radiative resistance to cooling to outer space by radiation, or it’s not. In other words, it’s not a conductive resistance to cooling to outer space by radiation. What I mean is it’s not driven mechanistically in the same way thermal insulation further slows the heat loss of the object it surrounds, making the object warmer than it would otherwise be.

You seem to want to have it both ways: you’re claiming photonic IR flux is absorbed by GHGs in the atmosphere and the absorbed energy is fully shared and equalized by collisions with non-GHGs, i.e. fully thermalized into the linear kinetic energy of all the gas molecules, and that the atmosphere subsequently emits IR based solely on its temperature in the same way a dense body would emit broadband Planck emission from its outer surface (i.e. in kind of the same way a liquid or solid would emit Planck emission based on its temperature); yet the IR emission by the atmosphere is only narrow band and only emitted by GHGs (the latter point is agreed to by you and co2isnotevil).

Either the dominant way the joules of absorbed photonic flux move through the atmosphere is by re-radiation or it’s by conduction. I understand it’s predominantly by radiation, and it should be if the GHE is in fact driven by radiative resistance to outer space cooling as it’s claimed to be.

“You get a completely different result for your calculations of radiative transfer if you have co2isnotevil’s theory.”

Why and how so? co2isnotevil says he gets the same results and is following the same protocol for RT used in the field. If you want details on how he’s doing his RT simulations, I’m sure he will be more than happy to provide you with anything you want to see. Also, I understand the RT simulations themselves that calculate changes in IR opacity don’t operate as though the emission is LTE. Moreover, co2isnotevil says the mainstream or established model for atmospheric radiation is equivalently valid to his for accurately predicting spectrums. He only really claims his model or hypothesis is more accurate relative to the micro behavior involving the physics of photonic absorption and the initiating mechanism of re-emission.

“In the case accepted since at least 1950, and probably before, radiation is emitted according to the temperature of the atmosphere. Let’s call that case A.”

Understood.

“In the case proposed by co2isnotevil, radiation is not a function of the atmospheric temperature at all. Let’s call that case B.”

Yes, also understood. At least in a dominant way he’s saying.

“In case B the atmospheric temperature changes do not affect the “greenhouse” effect (apart from with clouds).”

No, not understood and not agreed to. How are you defining temperature? More importantly, how are you measuring it? The measured temperature of the gases that make up what we generally refer to as ‘air’ is always going to be a combination of a flux of incident photons and a present kinetic flux of molecules in motion. A thermometer can’t really distinguish or separate the two. It’s not the same as dipping a thermometer in a liquid, where in that case, it’s pretty much entirely a measure of the kinetic energy of molecules in motion.

“It also finally explains your 1000 comments about 50% up / 50% down! I have to say it made me happy to finally understand the source of this amazing idea. Look again at this graph and you will see – if you concentrate – why the idea is not in accord with measurements.”

Regarding 50/50 up or down, I’m referring to emission only on the photonic level, and not the bulk emission behavior of the whole atmosphere or large portions of it. I’m totally aware that IR emission rates decrease with height, because measured temperature decreases with height, i.e. the lapse rate. The result of this is that when emission effects are summed over layers that are emitting at different rates, and because the lower layers are emitting at higher rates, more IR flux is passed out the bottom than out the top. In fact, this is why the atmosphere as a whole passes more IR flux to the surface than it does out to space.
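To sketch the layer-summing point numerically, here is a toy gray-layer model. The layer count, the 8 K per-layer lapse, and the 0.3 per-layer emissivity are purely illustrative assumptions:

```python
SIGMA = 5.670e-8                           # Stefan-Boltzmann constant, W m^-2 K^-4
TEMPS = [288 - 8 * i for i in range(9)]    # layer temperatures, warm bottom to cold top
EPS = 0.3                                  # gray per-layer emissivity (illustrative)

def layer_pass(flux_in, layers):
    """Send a flux through gray layers: each absorbs EPS of what arrives
    and adds its own emission EPS * sigma * T^4."""
    f = flux_in
    for T in layers:
        f = f * (1 - EPS) + EPS * SIGMA * T ** 4
    return f

up_atm = layer_pass(0.0, TEMPS)                  # atmospheric emission escaping to space
down = layer_pass(0.0, list(reversed(TEMPS)))    # atmospheric emission reaching the surface
# Warm low layers dominate the downward sum, so down > up_atm.
```

Because the downward flux at the surface is weighted toward the warm low layers while the upward flux at the top is weighted toward the cold high layers, the atmosphere’s own emission toward the surface comes out larger than its emission to space.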

That you see co2isnotevil’s box equivalent model as conflicting with this elementary property of atmospheric radiation tells me you don’t understand what he’s doing or what the model is attempting to show. I’m still not sure you understand the foundation behind equivalent modeling and black box system analysis. Regarding the amount of IR the atmosphere as a whole passes to the surface and passes into space, the model is not attempting to describe that property or quantify either amount. Again, that you apparently think it is tells me you don’t understand it and in general don’t understand the methods co2isnotevil is using.

BTW, the obvious flag that one doesn’t understand the foundation behind equivalent modeling is when they object to it on the basis that the actual behavior of the real system is different from the equivalent modeled behavior.

“Which is why I ask your colleague in this endeavor to explain various measurements that conflict with his theory. And of course to provide just one reference from physics in support.

Of course, co2isnotevil is welcome to attempt to demonstrate that regardless of the “differences” in theory proposed by case A (atmospheric physics since at least 1950) or case B (co2isnotevil plus the enormous body of literature he has provided in support of his theory), his other theory is still intact.

I await.”

OK, let’s see what co2isnotevil says in response.

“RW, I put it to you – with regret – that given you think this is splitting hairs over unimportant details you understand just about nothing about this subject.”

Well, I think you’re wrong about that. I think that you think they’re critically important details, and that you don’t understand black box system analysis and the immense power of that kind of analysis. But of course, I, like anyone, can be wrong for some reason unbeknownst to me.

And in these measurements, also reproduced in Grant Petty, there’s something I’d like you to explain:

The radiance around 650 cm-1 when you look up is about 100 mW/[m2 sr cm-1].

But when you look down from 20km the radiance at that wavenumber is only about 55 mW/[m2 sr cm-1].

Why is that?

The “confused climate science people” have an explanation: this highly absorbing part of the CO2 band emits radiation dependent on the temperature of the atmosphere. At 20km it’s cold. At the surface it’s hot (relatively speaking, since the surface was the polar ice sheet). Because the atmosphere is opaque at these wavelengths, radiation is reabsorbed in a short distance, so at 20km altitude you are measuring the local emission of radiation.
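This explanation can be checked directly against the Planck function. The temperatures below, roughly 270 K for the near-surface air and 225 K at 20 km, are illustrative values I am assuming, not the measured profile:

```python
import math

H, C, K = 6.626e-34, 2.998e10, 1.381e-23  # Planck (J s), light speed (cm/s), Boltzmann (J/K)

def planck_mw(nu, T):
    """Planck spectral radiance at wavenumber nu (cm^-1), in mW/(m^2 sr cm^-1)."""
    b = 2 * H * C**2 * nu**3 / (math.exp(H * C * nu / (K * T)) - 1)  # W/(cm^2 sr cm^-1)
    return b * 1e4 * 1e3                                             # cm^-2 -> m^-2, W -> mW

print(planck_mw(650.0, 270.0))  # looking up into warm low air: ~106
print(planck_mw(650.0, 225.0))  # looking down at cold ~20 km air: ~52
```

The two Planck values land close to the two measured radiances, which is what you expect if each instrument is seeing emission at the local atmospheric temperature.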

SoD replied to RW’s “..in reality is some combination of both purported mechanisms..” with: “well, what can I say? Are you having a laugh?”

There ARE two competing mechanisms for exciting a molecule and for returning it to its ground state: 1) absorbing and emitting a photon. 2) collisional excitation and relaxation. No fundamental law of physics demands that one mechanism or the other dominate.

It turns out that 99+% of excited CO2 molecules in the troposphere were excited by collision rather than by absorbing a photon simply because collisional excitation AND relaxation occurs so fast there. That means that the fraction of molecules in an excited state – and therefore the rate of emission – depends on the local temperature via the Boltzmann distribution. (Ironically, Planck’s Law depends on the Boltzmann distribution, so anyone discussing black- or graybody radiation is assuming that mechanism 2 is operating.)
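As a rough check on that Boltzmann fraction (ignoring the partition-function normalization; the bending-mode degeneracy of 2 and the temperatures are illustrative):

```python
import math

C2 = 1.4388   # second radiation constant h*c/k, cm K
NU = 667.4    # CO2 nu2 bending-mode band center, cm^-1
G = 2         # degeneracy of the bending mode

def excited_fraction(T):
    """Boltzmann estimate of the fraction of CO2 in the first excited bending state."""
    return G * math.exp(-C2 * NU / T)

print(excited_fraction(288))  # near-surface air: about 7%
print(excited_fraction(220))  # near the tropopause: about 2.5%
```

The excited fraction, and hence the emission rate, falls as the local temperature falls, which is the temperature dependence being argued about.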

In the thermosphere, 99% of the excited CO2 molecules were excited by absorbing a photon because collisions are so infrequent. Somewhere in between, both mechanisms are important. Induced emission is a third mechanism.

The data you show demonstrates that emission in the lower atmosphere depends on temperature.

The ‘extra’ photons you refer to do not originate from the surface. The only place they can be coming from is by excitation of the ionosphere by UV, whatever solar wind gets past the magnetosphere and cosmic rays. This comprises a unique thermodynamic system completely decoupled from the surface and what we observe is the sum of this with the effects from surface emissions. In addition, the strength of the emissions in the different ozone lines is inconsistent with the strength of the absorption lines, which is a strong indicator of secondary emissions consequential to primary absorption in different absorption bands. You do understand that if these features were dependent on surface emissions, they would have to be much smaller in polar regions than at the equator, yet they seem to be independent of surface temperatures.

None of the equations you’ve presented has lent any support to your claim that the vibrational energy of a state change is converted in whole to linear momentum. I am going to insist that you provide this. A reference that just states it to be so is insufficient. We need a QM centric description of the mechanism and equations describing it. Even your reference on radiative physics describes the mechanisms exactly as I do, where the only exchange of the energy of a state change is to and from photons. It’s certainly true that the bulk behavior doesn’t change and even the spectrum matches when you assume the N2 and O2 are emitting photons, which is the implicit assumption of your claim that you seem unable to reconcile.

In my conversations with Perry, he came to that understanding and then said that the conversion from the transfer of GHG captured energy to the ‘atmosphere’ results in narrow band emissions, rather than Planck emissions, which doesn’t seem logical. The conflict arises because if what you say is true, we should be seeing much higher emissions in the transparent regions of the atmosphere and they would not fit the Planck spectrum of surface emissions.

It may be easier for you to see if you consider that the water in clouds is tightly coupled to the water in the oceans via evaporation and rain (70% of the surface we care about) and is part of the same thermodynamic system as the surface. Note that this differs from Venus, where the clouds are completely decoupled from the surface, yet it’s the cloud tops that are in thermodynamic equilibrium with the Sun. A failure to recognize this is why it seems that a runaway GHE is plausible even though control theory precludes this, given that the source of power is the input power itself, rather than an external power supply.

If you have learned about radiative transfer and climate science in the last 2 decades, then most of the material you would have been exposed to has been contaminated by IPCC rhetoric, and this is making your brain shut down when the actual physics defies the rhetoric. But of course, this was the motivation for the IPCC to become the arbiter of climate science by what they published in their reports. They had to be sure to filter the science in a way that didn’t undermine their reason to exist. You are aware of this conflict of interest, right? I hope you are not among those who consider this COI a necessary means to an end.

“In my conversations with Perry, he came to that understanding and then said that the conversion from the transfer of GHG captured energy to the ‘atmosphere’ results in narrow band emissions, rather than Planck emissions, which doesn’t seem logical.”

“If you have learned about radiative transfer and climate science in the last 2 decades, then most of the material you would have been exposed to has been contaminated by IPCC rhetoric, and this is making your brain shut down when the actual physics defies the rhetoric. But of course, this was the motivation for the IPCC to become the arbiter of climate science by what they published in their reports. They had to be sure to filter the science in a way that didn’t undermine their reason to exist. You are aware of this conflict of interest, right? I hope you are not among those who consider this COI a necessary means to an end.”

In SoD’s defense, despite the name of the blog I do not think his motivation is driven by blind support for the IPCC conclusions at all. The purpose of this blog is for people on all sides of this debate, for whatever reason, to discuss and learn climate science so they can better make their own conclusions about it. I have not seen an IPCC bias in presentations and references from SoD at all (or a skeptic bias). This forum is not like Skeptical Science or RealClimate, where posts and alternative viewpoints are censored and only one side is welcomed and ultimately allowed to participate. There are many staunch skeptics who regularly post here and are respected and treated fairly (at least from what I’ve seen).

I’m not suggesting that anyone is necessarily consciously biased by the IPCC, just that most of the material about climate science made available since the IPCC was formed has a high probability of being significantly biased towards the preconceived conclusions of the IPCC. It’s this bias in the available literature that leads to an unconscious bias against anything that disputes the claims of CAGW, artificially setting an unreasonably high bar for anything that might contradict the IPCC’s reason to exist.

For example, in the news about a recent Kepler discovery, they point to a newly discovered planet that is presumed to be heading towards a runaway GHG effect as Earth is expected to do in the far future (and they point to Venus as proof of concept). Meanwhile, accepted and well tested control theory points out that a runaway effect like that claimed is impossible and the biggest effect 1 W/m^2 of solar input at 100% positive feedback can have on the surface is 2 W/m^2 of incremental surface emissions. There’s a far better way to explain Venus.

They neglect the fact that the Venusian atmosphere is heated from above by a cloud layer in equilibrium with the Sun, while the Earth’s atmosphere is heated from below by a surface in equilibrium with the Sun, most of which is not even the actual solid surface of the planet. The difference between Earth and Venus is the direction of the ‘lapse rate’, which is based on the PVT profile of the substance between the source of heat, gravity and the location of the temperature of interest. On Earth, the lapse rate goes up and the kinetic temperature gets cooler with altitude; on Venus, the lapse rate goes down and the temperature gets warmer with decreasing altitude. People often forget that the Venusian CO2 atmosphere has nearly the same mass as our oceans and behaves more like an ocean than an atmosphere, as it separates the Venusian solid surface from the actual surface in equilibrium with the Sun.

“I’m not suggesting that anyone is necessarily consciously biased by the IPCC, just that most of the material about climate science made available since the IPCC was formed has a high probability of being significantly biased towards the preconceived conclusions of the IPCC. It’s this bias in the available literature that leads to an unconscious bias against anything that disputes the claims of CAGW, artificially setting an unreasonably high bar for anything that might contradict the IPCC’s reason to exist.”

Alright, fair enough, but let’s just stick to the science.

“For example, in the news about a recent Kepler discovery, they point to a newly discovered planet that is presumed to be heading towards a runaway GHG effect as Earth is expected to do in the far future (and they point to Venus as proof of concept). Meanwhile, accepted and well tested control theory points out that a runaway effect like that claimed is impossible and the biggest effect 1 W/m^2 of solar input at 100% positive feedback can have on the surface is 2 W/m^2 of incremental surface emissions. There’s a far better way to explain Venus.

They neglect the fact that the Venusian atmosphere is heated from above by a cloud layer in equilibrium with the Sun, while the Earth’s atmosphere is heated from below by a surface in equilibrium with the Sun, most of which is not even the actual solid surface of the planet. The difference between Earth and Venus is the direction of the ‘lapse rate’, which is based on the PVT profile of the substance between the source of heat, gravity and the location of the temperature of interest. On Earth, the lapse rate goes up and the kinetic temperature gets cooler with altitude; on Venus, the lapse rate goes down and the temperature gets warmer with decreasing altitude. People often forget that the Venusian CO2 atmosphere has nearly the same mass as our oceans and behaves more like an ocean than an atmosphere, as it separates the Venusian solid surface from the actual surface in equilibrium with the Sun.”

I have curiously noticed that a lot of people seem to view or internally conceptualize the atmosphere as more like a layer of insulation wrapped around the Earth (hence the blanket analogy), as opposed to the Earth really just being surrounded by a very thin gas. By my estimation, over 99.9% of the stored solar energy in the system is located below the surface (primarily in the oceans) and less than 0.1% is contained within the atmosphere, even though the atmosphere occupies about 3-4 times the volume that an ocean of average depth does. Moreover, the overwhelming majority of the less than 0.1% in the atmosphere is the kinetic energy of the O2 and N2 molecules, which themselves supposedly don’t even emit (or have an emissivity near zero). The fraction comprised of GHGs and clouds, which are doing virtually all of the IR emitting (aside from the surface itself), is an infinitesimally small fraction of the total energy in the system. Given all of this, it doesn’t seem to be all that extraordinary a claim that the constituents of the atmosphere aren’t dense enough and/or aren’t opaque enough in the IR to push the value of ‘F’ away from 0.5 in the direction of 1.0. But I could be wrong.

The sheer rate at which the temperature drops once the Sun goes down, especially if there are no clouds, tells me that the radiant joules of energy pumped into the atmosphere from the surface and Sun must be moving through quite quickly and don’t persist for very long; otherwise it would not cool down at night anywhere near the amount it does. The atmosphere is not really acting the way thermal insulation would, which keeps something warm for a much longer time than without. But again, maybe I’m wrong.

The atmosphere responds quickly and has the shortest time constant. The land reacts slower and has a longer time constant. The intrinsic response of the oceans is much longer, but since clouds are tightly coupled to the oceans via the hydro cycle, the effective time constant is lower than might be expected, but is still much longer than the time constant of land.

The atmosphere is best modelled as a mismatched transmission line, rather than as a blanket of insulation or a greenhouse unless the blanket is half full of holes or half of the glass roof panels in the greenhouse are removed. It’s easy to forget that the glass in a greenhouse reflects nearly 100% of the LWIR back, while the actual atmosphere ultimately lets almost 2/3 of the LWIR pass right on through to space or about 240 W/m^2 of the 390 W/m^2 emitted by the surface.

co2isnotevil and RW: Grant Petty’s textbook, A First Course in Atmospheric Radiation, on Local Thermodynamic Equilibrium:

p 126: “For all common applications in atmospheric radiation, Kirchhoff’s Law can be taken as an absolute. It is therefore only for the sake of completeness that I point out that Kirchhoff’s Law only applies to systems in local thermodynamic equilibrium. This condition applies, for example, when molecules of a substance exchange energy with each other (e.g. through collisions) much more rapidly than they do with the radiation field or other sources of energy. LTE, and therefore Kirchhoff’s Law, breaks down at extremely high altitudes in the atmosphere where collisions between molecules are rare. LTE also breaks down in systems like lasers, fluorescent light bulbs, gas discharge tubes and LEDs, where the average electronic energy levels of the molecules may be artificially “pumped up” by various means to levels far higher than that expected from the thermodynamic temperature of the molecules. Emission from such systems is therefore far greater than expected for a blackbody and therefore does not obey Planck’s Law or Kirchhoff’s Law [or the Schwarzschild eqn].”

p 238. “LTE can be taken as a given for most problems in the lower and middle atmosphere, where atmospheric density is comparatively high and collisions are therefore quite frequent. This assures us that knowledge of the physical temperature is sufficient to accurately predict the distribution of total internal energy among all possible modes … Note however that the immediate consequence of the absorption or emission of any particular photon is usually a change in the internal energy of a single molecule … Yet all of these energy changes eventually get redistributed between all of the molecules in the vicinity, so there is always a predictable distribution of the total energy among all available storage modes.”

If a molecule in LTE absorbs a photon, it rarely “re-emits” that photon. By definition of LTE, the energy of the excited state is exchanged through collisions with other molecules much faster than a photon is emitted from an excited state. In LTE, the fraction of gas molecules in an excited state depends solely on the local temperature, not on the number of photons that may have been absorbed.
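An order-of-magnitude sketch makes the point; the two rate constants below are rough illustrative values, not measurements:

```python
# Rough orders of magnitude: the radiative lifetime of the CO2 15 um excited
# state is of order 1 s, while near-surface collisional (V-T) relaxation
# happens on the order of 1e-5 s.
A_RAD = 1.0     # spontaneous emission rate, s^-1 (illustrative)
K_COLL = 1e5    # collisional relaxation rate near the surface, s^-1 (illustrative)

p_emit = A_RAD / (A_RAD + K_COLL)  # chance an excited molecule radiates before it is quenched
print(p_emit)  # ~1e-5: an absorbed photon is almost never directly "re-emitted"
```

With quenching that much faster than emission, almost every absorbed photon’s energy is thermalized by collision, and emission is repopulated from the thermal pool, which is why the emission rate tracks the local temperature.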

“If a molecule in LTE absorbs a photon, it rarely “re-emits” that photon. By definition of LTE, the energy of the excited state is exchanged through collisions with other molecules much faster than a photon is emitted from an excited state. In LTE, the fraction of gas molecules in an excited state depends solely on the local temperature, not on the number of photons that may have been absorbed.”

Yes, I understand this is how LTE is being defined and accepted in this field, but this seems more akin to how absorbed radiant energy is thermalized in a liquid or solid. It would seem to get much trickier and less clear cut in the realm of a radiating gas (and a relatively thin radiating gas at that). I’ve referenced here a couple of times a definition of LTE from Wikipedia that doesn’t entirely fit, or is different (but is consistent with what co2isnotevil is claiming):

“It is important to note that this local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.”

I understand co2isnotevil is essentially saying he doesn’t think there is a real mechanism by which a photon absorbed by a GHG molecule, whose energy is stored as internal vibrational energy, can transfer this internally stored energy by collision with another molecule in the same way it can and does in a liquid or solid; and that because the energy of photons absorbed by GHGs is not shared by collisions, the GHGs achieve high enough energy states that the absorption of a photon ‘excites’ the emission of another photon from that same molecule a short time after the photon is absorbed.

RW,
Yes. Although the emission of a photon can also arise from a collision. One other point is that while spontaneous emissions are isotropic, induced emissions are somewhat more restricted in the direction of the emitted photon. However, this doesn’t change the isotropic nature, as collisions are random, thus emissions consequential to collisions are random, and this is the mechanism that dominates.

BTW, co2isnotevil claims his dominant mechanism of emission is not non-LTE, and that the GHGs and non-GHGs are in full LTE with each other, i.e. their linear kinetic energy is equally shared amongst each other (the massive particles are in LTE with each other), but that the flux of photons going through the GHGs, i.e. absorbed and re-emitted by the GHGs, is mostly not being shared with non-GHG molecules. I understand co2isnotevil thinks this is still LTE emission, but under the latter definition of LTE I referenced.

Everyone’s making out co2isnotevil’s claimed mechanism to be some massive re-write of decades of understanding that would totally transform the field of atmospheric radiation, when it’s really only a slight refinement of current theory. As I understand it, at least.

It’s not even a refinement of known physics theory, just a refinement in how climate science applies this theory to the climate system. Few understand that consensus (i.e. per the IPCC) climate science is not about physics. It’s all about finding ways to interpret physics and data to justify a high sensitivity. It was specifically not in their charter to determine if CO2 emissions are catastrophic; the organization was formed to determine what to do about the presumed catastrophe. Since the sensitivity required to justify the presumed catastrophe is demonstrably too high per both known theory and data, the only way to support the claimed sensitivity is by misinterpreting physics. I’m simply pointing out where these misinterpretations are.

“It’s not even a refinement of known physics theory, just a refinement in how climate science applies this theory to the climate system. Few understand that consensus (i.e. per the IPCC) climate science is not about physics. It’s all about finding ways to interpret physics and data to justify a high sensitivity. It was specifically not in their charter to determine if CO2 emissions are catastrophic; the organization was formed to determine what to do about the presumed catastrophe. Since the sensitivity required to justify the presumed catastrophe is demonstrably too high per both known theory and data, the only way to support the claimed sensitivity is by misinterpreting physics. I’m simply pointing out where these misinterpretations are.”

OK, but ultimately science is science and physics is physics. Meaning just because the IPCC’s reason to exist is not necessarily to get the science right, i.e. accurately quantify climate sensitivity, it doesn’t necessarily mean their conclusions are wrong just for that reason.

On this site, the IPCC gets little mention and people, including the site’s host, just discuss the science and stick to the science (at least from what I’ve seen). I’ve seen no bias here that only science claiming to support high sensitivity is presented.

I say we should just forget about the IPCC and just talk about the science. Not all the science in the published literature agrees with the IPCC conclusions anyway.

Myself and co2isnotevil fully accept and agree with the established physics of the GHE that increased GHGs in the atmosphere increase the IR opacity of the atmosphere and cause the Earth to be out of radiative balance with the Sun, i.e. OLR is reduced, and that this should require the atmosphere and ultimately the surface to further warm by some amount in order to re-establish equilibrium with the Sun at the TOA.

What we are discussing and disputing involves the magnitude of this effect, in particular how much of a warming push added GHGs really provide, and how the warming push has been quantified intrinsically. We are challenging this and claiming that the 1.1C of claimed ‘no-feedback’ surface warming for 2xCO2, while it would restore balance at the TOA as claimed, is not really an accurate measure of the intrinsic surface warming ability of +3.7 W/m^2 of GHG absorption, and that an amount equal to about half of this (i.e. about 0.55C) is really the correct amount and what should be quantified as what climate science designates as ‘no-feedback’ for 2xCO2.

Independent of whether you agree or disagree (for whatever reason or reasons), does everyone fully understand this? It has occurred to me given the direction of where many of these exchanges have gone that this may still not actually be fully understood.

*Note that we do agree that 1.1C of surface warming is a correct calculation for so-called ‘no-feedback’ for +3.7 W/m^2 of post albedo solar power, and is a proper measure of its intrinsic surface warming ability.

Actually, I believe that the 1.1C is after all feedbacks, positive, negative, known and unknown, have manifested their full effect. The current steady state of the planet has certainly had the time to arrive at this post-feedback, steady-state equilibrium response to solar forcing.

The only possible feedback-related exception is long-term ice melting. But if we calculate the incremental effect of reduced reflection, recognize that 2/3 of the planet is covered by clouds (where reduced surface reflection has no effect), and assume all the ice on the planet were to permanently disappear, it would still comprise less than half of the missing power required to achieve a 3C rise from doubling CO2.

CO2isnotevil: Surface albedo arises from both ice and snow. During the seasonal/annual cycle, SWR reflected from clear skies increases by about 6 W/m2 during winter in the NH because the NH has far greater seasonal snow cover than the SH. As the globe warms, some of that seasonal snow in the NH will fall as rain and/or melt more quickly. This component of ice-albedo feedback is a fast feedback. I don’t know how much seasonal snow cover is expected to diminish for every 1 degC of surface warming, so I don’t know how to calculate the size of the feedback it will produce.

I presented the equation of radiative transfer derived from first principles.
I provided the 1950 textbook (well, the 1960 reprint of the 1950 textbook) by Nobel prize winner Chandrasekhar, which also shows that emission of thermal radiation from the atmosphere is dependent on local temperature.

I asked “Can I confirm that you disagree with Chandrasekhar?”

Your response, best as I can tell, to this question:

If you have learned about radiative transfer and climate science in the last 2 decades, then most of the material you would have been exposed to has been contaminated by IPCC rhetoric, and this is making your brain shut down when the actual physics defies the rhetoric. But of course, this was the motivation for the IPCC to become the arbiter of climate science by what they published in their reports. They had to be sure to filter the science in a way that didn’t undermine their reason to exist. You are aware of this conflict of interest, right?

You haven’t addressed the question. I gave you a 1950 textbook.

Previously I have asked you for a physics textbook as a reference for your ideas. You have yet to provide one.

In fact, I also asked you when presenting Chandrasekhar’s equation: “I wonder when you think climate science lost the plot as far as the emission of thermal radiation from the atmosphere is concerned. And at what time the plot was correct.”

The reason for my questions should be obvious.

Your response is that my brain has shut down due to blah blah blah.

It must be very uncomfortable to find out that 65 years ago a Nobel prize winner published the same equation I’m using now and very uncomfortable to find that you can’t produce a single textbook to back up your theory.

Please can you at least answer the question – “Can I confirm that you disagree with Chandrasekhar?”

Please can you at least provide a physics reference – or confirm that you don’t have one.

This will demonstrate that you are interested in engaging in a science discussion.

If you aren’t interested in the science discussion but just insisting you are right, please go and visit other blogs. There are many that will be very happy to take your ideas on board.

I don’t need any new equations to support my case. The existing equations all work and characterize the behavior of radiative transfer independent of whether the mechanism is collision stimulated emissions, kinetic transfer of power, the action of black body emissions or cow farts. The equations you refer to describe the bulk macroscopic behavior which is the same independent of the mechanism.

Please be specific about what concept you do not believe known physics already properly describes. As far as I’m concerned, nothing I’ve said violates any known principles or any of the equations describing them. Why do you think any new equations are required?

The Goody and Yung pages you copied earlier in no way, shape or form can be considered to support the conversion of the energy of a state change into translational energy. They talk strictly about conversion to and from photons. Why don’t you believe your own reference?

You already agreed that the lapse rate and consequential temperature of the atmosphere is irrelevant for the case of a zero feedback planet (no GHG’s even in the upper atmosphere). The temperature measured anywhere in the real atmosphere will be the combined effect of the irrelevant kinetic temperature of its gases, the temperature of the water and dust within, the photon flux arising from surface emissions and the high energy particles and UV interacting with the upper atmosphere.

And yes, all of these mechanisms, other than the kinetic temperature of the gases, contribute to photons that leave the planet. But in fact, we don’t care about the origin of specific photons, only how much photon flux is needed and how this changes as conditions change.

“I don’t need any new equations to support my case. The existing equations all work and characterize the behavior of radiative transfer independent of whether the mechanism is collision stimulated emissions, kinetic transfer of power, the action of black body emissions or cow farts. The equations you refer to describe the bulk macroscopic behavior which is the same independent of the mechanism.
Please be specific about what concept you do not believe known physics already properly describes. As far as I’m concerned, nothing I’ve said violates any known principles or any of the equations describing them. Why do you think any new equations are required?”

Either you won’t answer the question or are incapable of understanding it.

Basic Science is Accepted – This blog accepts the standard field of physics as proven. Arguments which depend on overturning standard physics, e.g. disproving quantum mechanics, are not interesting until such time as a significant part of the physics world has accepted that there is some merit to them.

Refusing to provide a reference and just claiming you are using established physics fits into that category – despite being presented with evidence to the contrary.

Refusing to confirm or deny that an equation provided from a standard text in the field conflicts with your ideas fits into that category.

Asking me to “be specific” given my very specific formula and very specific question just asked demonstrates we are not going to get along.

You can post one last comment as your final comment.
Interested readers can of course read all of your existing comments and follow the links through to your work.

You have not asked a specific question, other than rhetorically ask for equations, which you already produced and as I keep telling you, are correct. If you think that anything I’ve said is inconsistent with any of the equations, any existing physical laws or requires new laws to explain, then you are misunderstanding what I’ve been saying, and this includes any assertion you make about LTE, which again is characterized as a bulk behavior and has already been explained. It’s not at all clear to me what laws or equations you think are being circumvented. It’s bizarre to me that you keep referring to physics and equations that are in complete agreement with everything I’ve been saying, yet fail to recognize we are in agreement even when I point it out.

Here’s the black box analysis RW is talking about.

The IPCC reports average cloud cover as 2/3 of the planet
The average emissivity of these clouds is about 0.75 (0.25 transparency)
Line by line simulations tell us about 46% of surface emissions pass through to space in a straight line at the speed of light for the standard atmosphere.

Calculate the net T, or fraction of surface emissions that pass through to space in a straight line,

T = (1-0.66)*0.46 + 0.66*0.46*0.25 = 0.232

For 390 W/m^2 of surface emissions, 91 W/m^2 are passing through
This leaves 240-91 = 149 W/m^2 of energy that must pass to space

If 91 W/m^2 pass straight through, 390-91 = 299 W/m^2 is absorbed by the atmosphere.

The missing 149 W/m^2 must come from the 299 W/m^2 being absorbed by the atmosphere, leaving 150 W/m^2 to be returned to the surface, which when added to the 240 W/m^2 replaces the 390 W/m^2 being emitted.

If you believe the transparent window is something else, for example, only 25% (I’d like to see the results of your simulations …),

T = (1-0.66)*.25 + .66*.25*.25 = 0.127

50 W/m^2 is passed and 340 is absorbed. 190 is required to space, leaving 150 added to 240, resulting in the 390 W/m^2 being emitted. This also works, except that the physical requirement of energy arriving across twice the area it’s being emitted from is not being met, and an even smaller fraction of the absorption is returned to the surface.
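For anyone wanting to check this arithmetic, here is a minimal Python sketch of the two cases above. The function and variable names are my own; the inputs (66% cloud cover, 0.25 cloud transparency, the two window values) are the numbers quoted in the comment.

```python
# Net transmittance: clear-sky fraction times the window, plus the cloudy
# fraction times the window attenuated by the clouds' 0.25 transparency.
def net_transmittance(window, cloud_frac=0.66, cloud_transparency=0.25):
    return (1 - cloud_frac) * window + cloud_frac * window * cloud_transparency

surface_emission = 390.0  # W/m^2 emitted by the surface
toa_flux = 240.0          # W/m^2 that must leave at the TOA

for window in (0.46, 0.25):
    T = net_transmittance(window)
    passed = surface_emission * T         # surface emission reaching space directly
    absorbed = surface_emission - passed  # captured by the atmosphere
    needed = toa_flux - passed            # atmospheric emission required to space
    returned = absorbed - needed          # remainder effectively returned to the surface
    print(f"window={window}: T={T:.3f}, passed={passed:.0f}, "
          f"absorbed={absorbed:.0f}, to_space={needed:.0f}, returned={returned:.0f}")
```

For the 46% window this reproduces T ≈ 0.232 and the 91/299/149/150 W/m^2 split quoted above.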

No, it doesn’t work. You’re leaving out the ~100W/m² that is transferred to the atmosphere from the surface by latent and sensible heat transfer. Your energy flows only balance if you leave this out. Your disciple RW has been trying to sweep this under the rug since he first showed up here claiming that the KT97 and TKF09 energy balance diagrams were grossly in error. They’re not, but you and he are. You won’t be missed.

“You’re leaving out the ~100W/m² that is transferred to the atmosphere from the surface by latent and sensible heat transfer. Your energy flows only balance if you leave this out.”

Black box modeling is not attempting to model the actual thermodynamics, i.e. the actual thermodynamic path manifesting the energy balance. That you think it is tells me you don’t understand what co2isnotevil is doing with it and don’t understand the foundation behind equivalent modeling. Which is OK, but I’m just saying.

It’s based on the simple principle that in the steady-state, for COE to be satisfied, the number of joules going in, i.e. into the black box, must equal the joules going out, i.e. exiting the black box, and that this is independent of how the joules going in may exit either boundary of the black box (the surface or the TOA); otherwise a condition of steady-state doesn’t exist.

The surface at a steady-state temperature of about 288K (and a surface emissivity of 1) radiates about 390 W/m^2, which basic physical law dictates must somehow be replaced, otherwise the surface will cool and radiate less (or warm and radiate more). For this to occur, 390 W/m^2, independent of how it’s physically manifested, must exit the atmosphere and be added to the surface. This 390 W/m^2 is what comes out of the black box at the surface/atmosphere boundary to replace the 390 W/m^2 radiated away from the surface as a consequence of its temperature of 288K.

That there is significant non-radiant flux in addition to the flux radiated from the surface is certainly true, but an amount equal to the non-radiant flux leaving the surface must be cancelled by flux entering the surface in excess of the 390 W/m^2 radiated from the surface. Otherwise, you don’t have a condition of steady-state. The fundamental point relative to the black box is that joules in excess of 390 W/m^2 entering or leaving the surface are not adding or taking away joules from the surface, nor are they adding or taking away joules from the atmosphere.

The bottom line is that in the flow of energy in and out of the whole system, a net of 390 W/m^2 is gained by the surface, while 239 W/m^2 enters from the Sun and 239 W/m^2 leaves at the TOA. Really only 390 W/m^2 is coming down and being added to the surface. These fluxes comprise the black box boundary fluxes, or the fluxes going into and exiting the black box. The thermodynamics and the manifesting thermodynamic path involve how these fluxes, in particular the 390 W/m^2 added to the surface, are physically manifested. The black box isn’t interested in and doesn’t care about that, only what comes out at its boundaries.

I should also add the black box only considers the net of 390 W/m^2 gained at the surface to be exiting at its bottom boundary, i.e. actually leaving the box and entering the surface.

Keep in mind that the non-radiant flux leaving the surface and all its effects on the energy balance (which are no doubt huge) have already had their influence on the manifestation of the surface energy balance. In fact, all of the effects have, radiant and non-radiant, known and unknown.

Also, the black box and its subsequent model do not imply that the non-radiant flux from the surface does not act to accelerate surface cooling or accelerate the transport of surface energy to space. COE is considered separately for the radiant parts of the energy balance (because the entire energy budget is all EM radiation), but this doesn’t mean there is no cross exchange or cross conversion of non-EM flux from the surface to EM flux out to space and vice versa.

There also seems to be some misunderstanding that it’s being claimed COE itself requires the value of ‘F’ to equal 0.5, when it’s the other way around: a value of ‘F’ equal to 0.5 is what’s required to satisfy COE for this black box. It also seems no one understands what the emergent value of ‘F’ is actually supposed to be a measure of, or what it means physically. ‘F’ is the free variable in the analysis, which can be anywhere from 0 to 1.0, and it quantifies the equivalent fraction of the power captured by the atmosphere (quantified by ‘A’) that is *effectively* gained back by the surface in the steady-state.
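Assuming I’ve understood these definitions, the emergent ‘F’ follows directly from the boundary fluxes. A sketch in Python, using the flux numbers quoted earlier in this thread (my arithmetic, not necessarily George’s exact procedure):

```python
# Solve for F = (power effectively returned to the surface) / (power captured
# by the atmosphere), using the boundary fluxes quoted in the comments above.
surface_emission = 390.0  # W/m^2 radiated by the surface
toa_out = 240.0           # W/m^2 exiting at the TOA
T = 0.232                 # fraction of surface emission transmitted directly

passed = surface_emission * T      # ~91 W/m^2 straight through to space
A = surface_emission - passed      # ~299 W/m^2 captured by the atmosphere ('A')
returned = A - (toa_out - passed)  # ~150 W/m^2 effectively gained back by the surface
F = returned / A
print(f"A = {A:.0f} W/m^2, returned = {returned:.0f} W/m^2, F = {F:.2f}")
```

With these inputs F comes out at almost exactly 0.5, which is the match to the satellite data being claimed.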

From all of this, the fact that flux exits the atmosphere over 2x the area it enters from, i.e. the areas of the surface and TOA being virtually equal to one another, is supposed to mean that the radiative cooling resistance of the atmosphere is no greater than what would be predicted or required by the raw properties of the photons, i.e. radiant boundary fluxes and isotropic emission on a photonic level. Or that an ‘F’ value of 0.5 is the same IR opacity through a radiating medium that would *independently* be required by a black body emitting over twice the area it absorbs.

I should add that George doesn’t actually claim the value of ‘F’ that matches the satellite data is necessarily exactly 0.5, but only that 0.5 is a very close match to the satellite data, which he said does have an error margin of about +/- 10%; and the spectral ‘T’ of about 0.24 he calculates is only good to about +/- 5%. He also has mentioned that in the short term (maybe annually?) the emergent value of ‘F’ for the system fluctuates a little above and below 0.5, but its long term average converges to something really close to 0.5. From all of this, I believe he is deducing that a value of 0.5 is a good long term average to use and operate with, even though it’s probably not exact.

Anyway, right or wrong, these are some of the claims and justifications.

Because the black box considers only 390 W/m^2 to be actually coming out at its bottom and being added to the surface, and the surface radiates the same amount (390 W/m^2) back up into the box, COE dictates that the sum total of 630 W/m^2 (390+240 = 630) must be continuously exiting the box at both ends (390 at the surface and 240 at the TOA); otherwise COE of all the radiant and non-radiant fluxes from both boundaries going into the box is not being satisfied (or there is not a condition of steady-state and heating or cooling is occurring).

Whatever isn’t transmitted by the surface straight through to space (about 300 W/m^2) must be added to the energy stored by the atmosphere, and whatever amount of the 240 W/m^2 of post albedo solar power entering the system doesn’t pass straight to the surface must be going into the atmosphere (adding those joules to the energy stored by the atmosphere as well). While we can’t quantify the latter as precisely as we can quantify the transmittance of the surface radiation, the COE constraint still applies just the same, because an amount equal to the 240 W/m^2 entering the system has to be exiting the box nonetheless.

The black box equivalent model is only attempting to show that the final flow of energy in and out of the whole system is equal to the flow it’s depicting, independent of the highly complex and non-linear thermodynamic path manifesting it. Meaning if you were to stop time, remove the real atmosphere, replace it with the box model atmosphere, and start time again, the rates at which joules are being added to the surface, entering from the Sun, and leaving at the TOA would stay the same. Nothing more.

“No, it doesn’t work. You’re leaving out the ~100W/m² that is transferred to the atmosphere from the surface by latent and sensible heat transfer. Your energy flows only balance if you leave this out. Your disciple RW has been trying to sweep this under the rug since he first showed up here claiming that the KT97 and TKF09 energy balance diagrams were grossly in error. They’re not, but you and he are.”

With regard to latent heat, evaporation cools the surface water from which it evaporated and as it condenses, transfers that heat to the water droplet the vapor condenses upon, and is the source of energy driving weather. What is left over subsequently falls back to the surface as the heat in precipitation or is radiated back to the surface. The bottom line is in the steady-state an amount equal to what’s leaving the surface non-radiantly must be being replaced, i.e. ‘put back’, somehow at the surface, closing the loop.

With regard to the KT energy balance diagrams, the main objection is they depict that the only way a joule can pass from the atmosphere to the surface is by radiation, which is obviously wrong. Of course, latent heat (and sensible heat) is certainly an input to the atmosphere, but to claim it’s all being offset at the surface by ‘back radiation’, i.e. DLR, is misleading since the latent heat from evaporation is largely offset at the surface by the heat of condensed water in precipitation and clearly not offset at the surface solely by radiation.

I appreciate your reply, and before reading it I spent all night thinking about all of this. Despite what you may think, I’m not interested in fooling or deluding myself. I’m genuinely trying to understand where you’re going with this (and where co2isnotevil is going with it). In other words, why each of you apparently thinks this is such a big deal, given black box analysis is implicitly designed to not allow any assumption of internal mechanism.

I’ve never doubted your sincerity or your desire to understand the subject. I think I have conveyed this more than once.

Black box analysis is not what is provided in the latest document from co2isnotevil, aka George White.

There is an assumption of an internal mechanism.

If instead you make a slightly more realistic assumption about the internal mechanism, you get a different answer.

If you start with the assumption that the relationship being modeled is the relationship between power and temperature from a constant emissivity surface you get one answer. Actually the derivation is a lot quicker than shown in George White’s document and well-known (because differentiating wrt one variable is a one line exercise).

Let’s take a slightly more realistic assumption – still far from the real complexity of the real climate – let’s say:

1. a surface at temperature, Ts, with constant emissivity, εs, radiating through an atmospheric window to space, where the %, α passing through the window will depend on the surface temperature (because the % of radiation at each wavelength depends on the emission temperature)

2. an atmospheric layer at a fixed altitude with emissivity εa, and temperature, Ta radiating to space. Of course, εa is a function of Ta.

I leave as an exercise for you to differentiate the total power to space (OLR) as a function of surface temperature, Ts to calculate climate sensitivity. (Let’s make it simple by having a constant relationship between Ta, and Ts. Ta = Ts – c, i.e. a fixed lapse rate).

You will get a different answer.
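A numeric version of this exercise, for readers who want to try it. The code is a sketch of the two-component model just described (window plus one atmospheric layer); the parameter values (α, εa, the lapse offset c) are illustrative placeholders of my choosing, not fitted values, and εa is held constant for simplicity.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def olr(Ts, eps_s=1.0, alpha=0.23, eps_a=0.9, c=40.0):
    """OLR = surface emission through the window + emission from one
    atmospheric layer at Ta = Ts - c (fixed lapse rate, constant eps_a)."""
    Ta = Ts - c
    window = alpha * eps_s * SIGMA * Ts**4
    layer = (1 - alpha) * eps_a * SIGMA * Ta**4
    return window + layer

def dolr_dts(Ts, h=0.01):
    # central-difference derivative of OLR with respect to surface temperature
    return (olr(Ts + h) - olr(Ts - h)) / (2 * h)

print(f"OLR(288K) = {olr(288.0):.0f} W/m^2")
print(f"dOLR/dTs at 288K = {dolr_dts(288.0):.2f} W/m^2 per K")
```

Even this toy version gives a dOLR/dTs that differs from the constant-emissivity answer, which is the point being made.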

I also leave as an exercise for you to see what happens when you add clouds as a fixed percentage at 3 levels in the atmosphere. You will get a different answer again. How you can think that a model without clouds will get some kind of relevant result is a mystery. Cloudy skies cover about 62% of the total sky and act effectively as black body emitters at much colder temperatures.

If you pick a physics model of what is going on in a box and you get the model wrong you get the wrong result.

“I’ve never doubted your sincerity or your desire to understand the subject. I think I have conveyed this more than once.”

OK. No response to the rest of my post? There was a lot of salient stuff in there I was hoping to get some of your responses to.

“Black box analysis is not what is provided in the latest document from co2isnotevil, aka George White.”

Agreed.

“There is an assumption of an internal mechanism.”

What mechanism? As best I see it, the data is measured and not derived from an assumed mechanism. The point, as I understand it, is to show that the measured data, independent of mechanism, overall best fits the 0.6 emissivity slope, and starts to drop below it toward the 0.8 emissivity slope around the current global average temperature, continuing in that same direction above the global average temperature.

This is totally consistent with the incremental gain being less than the absolute gain of the system, because an incrementally larger emissivity is a direct measure of sensitivity, since the emissivity is just the reciprocal of the gain. 1/0.6 = 1.67, which is about the absolute gain of the system, i.e. 390/239 = 1.63, and 1/0.8 = 1.25, which is less than the absolute gain, indicating the direction of incremental net negative feedback. Note net positive feedback requires an incremental gain greater than the absolute gain, or for the incremental emissivity to be less than the absolute emissivity of about 0.6.

“If instead you make a slightly more realistic assumption about the internal mechanism you get a different answer.

If you start with the assumption that the relationship being modeled is the relationship between power and temperature from a constant emissivity surface you get one answer. Actually the derivation is a lot quicker than shown in George White’s document and well-known (because differentiating wrt one variable is a one line exercise).

Let’s take a slightly more realistic assumption – still far from the real complexity of the real climate – let’s say:

1. a surface at temperature, Ts, with constant emissivity, εs, radiating through an atmospheric window to space, where the %, α passing through the window will depend on the surface temperature (because the % of radiation at each wavelength depends on the emission temperature)

2. an atmospheric layer at a fixed altitude with emissivity εa, and temperature, Ta radiating to space. Of course, εa is a function of Ta.

I leave as an exercise for you to differentiate the total power to space (OLR) as a function of surface temperature, Ts to calculate climate sensitivity. (Let’s make it simple by having a constant relationship between Ta, and Ts. Ta = Ts – c, i.e. a fixed lapse rate).

You will get a different answer.”

I don’t quite see what you’re doing here or where you’re going.

Let me ask you, what does a 0.6 emissivity mean to you in physical terms? To me it simply means that of the 390 W/m^2 radiated from the surface, only about 240 W/m^2 is effectively transmitted into space (239/390 = 0.61), and the difference of 150 W/m^2 (390-240 = 150) is effectively somehow recirculated back to the surface.

The fallacy of using an effective emissivity for the planet as a whole is believing that the effective emissivity does not vary with ghg concentration. Why anyone who has actually studied the physics of atmospheric radiative transfer would think that is completely beyond me. Anyone who is convinced that the effective emissivity doesn’t vary with ghg concentration and is unwilling to believe all evidence to the contrary (and there is a lot) is not worth my time or the bandwidth at this blog.

“The fallacy of using an effective emissivity for the planet as a whole is believing that the effective emissivity does not vary with ghg concentration.”

The effective emissivity is reduced with increased GHGs, because OLR is reduced by increased GHGs. Is that what you mean by the emissivity varying with GHG concentration?

The emissivity of the measured data is taken relative to that of the post albedo solar forcing, and the point is to show how the emissivity varies with surface temperature above and below the global averages throughout the system, and that the behavior is consistent with an incremental gain less than the absolute gain above the current global average temperature.

Do you understand that sensitivity can be fully quantified as dimensionless gain? A sensitivity of 3.3C requires a gain of 4.8, i.e. +3.3C requires about +18 W/m^2 of net surface gain and 18/3.7 = 4.8, which is about 3 times greater than the absolute gain of about 1.6. Why isn’t the emissivity of the planet more like about 0.2, i.e. 1/4.8 = 0.21?
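The gain arithmetic in this paragraph is easy to check. A quick Python sketch (the labels are mine; ‘gain’ is used in RW’s sense of net surface flux divided by post-albedo solar flux):

```python
surface_flux = 390.0  # W/m^2 radiated by the surface at ~288K
solar_flux = 239.0    # W/m^2 of post albedo solar input

absolute_gain = surface_flux / solar_flux  # ~1.63
effective_emissivity = 1 / absolute_gain   # ~0.61

# A 3.3C sensitivity expressed as a gain, per the comment above:
forcing = 3.7          # W/m^2 for 2xCO2
extra_surface = 18.0   # W/m^2 of net surface gain said to accompany +3.3C at ~288K
implied_gain = extra_surface / forcing     # ~4.9

print(f"absolute gain = {absolute_gain:.2f}, emissivity = {effective_emissivity:.2f}")
print(f"implied gain for 3.3C = {implied_gain:.2f}, reciprocal = {1/implied_gain:.2f}")
```

The implied gain of ~4.9 versus the absolute gain of ~1.63 is the factor-of-three discrepancy the comment is pointing at.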

RW wrote above: “With regard to latent heat, evaporation cools the surface water from which it evaporated and as it condenses, transfers that heat to the water droplet the vapor condenses upon, and is the source of energy driving weather. What is left over subsequently falls back to the surface as the heat in precipitation or is radiated back to the surface.”

Let’s clarify by using slightly different terms. Although a group of molecules has a temperature (which is proportional to their average kinetic energy), individual molecules have a wide range of kinetic energy.

At the surface of the ocean, those molecules which have more kinetic energy than average are more likely to escape into the atmosphere, leaving the surface of the ocean colder. The energy lost by the ocean is called latent heat. (Water vapor molecules also return to the surface of the ocean and net evaporation is zero when the atmosphere is saturated.)

In the atmosphere, we have “nuclei” with some water adhering to their surface, surrounded by air saturated or super-saturated with water vapor. Water molecules are constantly leaving the surface of these nuclei, while others are bumping into these nuclei and sticking. More of the molecules with more kinetic energy than average return to the atmosphere and more of those with less than average remain in the nascent water drop. Since the water drop on average retains more slower-moving molecules, the atmosphere is enriched in faster-moving ones – the latent heat released. Latent heat remains in the atmosphere, not the water drop!

As a rain drop falls, some potential energy is converted to kinetic energy. However, it is falling at terminal velocity most of the time, with the force of gravity balanced by friction. So most of the potential energy could be dissipated as friction (heat) in the atmosphere and only a small amount is released when it collides with the surface. The surface of the water drop as well as the air is heated by friction, but the faster-moving water molecules continue to leave for the atmosphere on the way down.

I would think the kinetic energy of falling rain is quite small. Latent heat evaporated from the surface is generally the main source of energy driving weather, with what’s left over returning to the surface as the heat, i.e. the energy of molecules in motion, contained in precipitation (though generally cooler than the surface) or radiated back to the surface. The point though is an amount equal to what’s leaving the surface non-radiantly, must be being exactly replenished at the surface somehow, otherwise a condition of steady-state doesn’t exist.

The key consideration is that all power in excess of 390 W/m^2 flowing out of the surface must be exactly offset by power in excess of 390 W/m^2 flowing into the surface, and the surface emits 390 W/m^2 of radiant black body power as a consequence of its temperature (and emissivity, which is really close to 1). Any flux in excess of 390 W/m^2 flowing out of the surface must be non-radiant, otherwise the surface temperature would be higher, whereas there is no such requirement for the proportions of radiant and non-radiant flux flowing into the surface.

A point of continual misunderstanding is that the box model is not modeling or trying to emulate the actual thermodynamics. Nothing more is being shown other than that the rates of joules gained and lost, i.e. the final flow of energy in and out of the whole system, would be the same.

“The point though is an amount equal to what’s leaving the surface non-radiantly, must be being exactly replenished at the surface somehow, otherwise a condition of steady-state doesn’t exist.”
Is there a real steady state? Or is there a natural state of imbalance? Planet earth has gained energy for about 400 years. The problem is that the natural imbalace has no place in computer models. All imbalance for the last 60 years (and also the last 160 years?) is thought to come from evil oil and coal industry, and bombard us with the net energy of 4 Hiroshima bombs pr second. But every night and every winter planet earth will not take all this, and bombard back again. 19. January every year the planet has the greatest struggle to get rid of all this heat, as it is the coldest day in the year. And there can be small differences over time that can have a great impact.
I don`t know if a box model can add much to our understanding of these dynamics.

The total amount of energy leaving the surface ~492W/m², 390W/m² by radiation and 102W/m² by convection (KT97), is replenished by incoming sunlight, 168W/m², and radiation from the atmosphere, 324W/m². It’s all there in the KT97 energy balance and slightly different numbers in the TKF09 energy balance. The 102W/m² is the NET energy flow. There is no missing non-radiative energy transfer from the atmosphere to the ground.

This has been your fundamental error from the start. The fact that you continue to refuse to accept that you’re wrong after all these years would be, IMO, sufficient ground for banning you from the site. But SoD apparently has more patience than I do.

You’re grasping at straws. Do you really think that because they call it ‘gross’ that they left out another energy flow in the other direction that would balance it? Don’t bother answering. I’ve already wasted too much time on you.

I agree with the basic picture of the surface absorbing more radiant power than it’s emitting, with the difference accounted for by non-radiant flux leaving the surface but not entering the surface as non-radiant flux. But the 102 W/m^2 of non-radiant flux leaving the surface is not the net, i.e. up minus down, but the gross non-radiant flux leaving the surface, which means the 324 W/m^2 of ‘back radiation’ is too high or the 67 W/m^2 of absorbed SW is too low. That is, unless you think the only way a joule can pass from the atmosphere to the surface is by radiation.

Factaganda
Facts as agenda and propaganda. I am struggling to get to some basic understanding of climate change, and one thing that frustrates me is the controversies of facts. Temperature records get adjusted, Temperatures from “instrumental period” are constructed and reconstructed. Proxies are showing different results. And what frustrates me even more is that development of facts are linked to certain interests in the controversies about climate. New BEST records are presented as a support for great impact of volcanic aerosols on climate, and thereby saving some climate models.
I want to get the truest picture of climate change and natural variation over the last 1000 years. This can provide a deeper understanding of climate sensitivity and forcings. What I have found is sea level change since 1750; I hope the uncertainties of earlier times can be cleared up. Another measure I think I can trust is proxies from boreholes: the earth’s crust has some memory of temperatures. I find it interesting that these are facts that have not divided the fronts in the climate war.
The guardians of the holy fire of AGW theories have embraced this kind of science. “The reconstructions show the temperatures of the mid-Holocene warm episode some 1–2 K above the reference level, the maximum of the MWP at or slightly below the reference level, the minimum of the LIA about 1 K below the reference level, and end-of-20th century temperatures about 0.5 K above the reference level.” From AGW Observer, “Underground temperatures as indicators of surface temperatures – part 2”, posted by Ari Jokimäki on March 3, 2010.
This is in contrast to some proxy studies and some hockey stick ideas. And I think it can tell us something of climate sensitivity.

DeWitt Payne, I could not find any article on Climateaudit. JoNova has read the papers from Huang et al. on borehole temperature reconstructions (1997, 2000 and 2008). She interprets them as contradictory, since the three reconstructions differ and show different results. So, as you say: “Borehole temperature records are a large can of worms.”
But there are some interesting things.
The mid-Holocene warming seems to exceed 2°C above the preindustrial level. That brings Mann forward with his friend Gavin Schmidt (“Potential biases in inferring Holocene temperature trends from long-term borehole information”, Michael E. Mann, Gavin A. Schmidt, Sonya K. Miller, and Allegra N. LeGrande).
The clear global climate shifts between the medieval warm period and the little ice age, which in other cases bring Mann forward with his friends.
The rapid warming from 1800, with a U-curve (exponential-like acceleration) to 1920, a flattening of the curve (roughly linear) between 1920 and 1980, and a temperature jump from 1980 to 2000. This points to a great natural warming trend up to 1920 (about 2/3 of the warming from 1750 to 1980). The late jump in temperatures seems unrealistic, but is found in other studies: “the increasing rate of ground surface temperature is greater than that in atmospheric temperature during the last 140 years at Osaka Meteorological Observatory, Japan Meteorological Agency. The high increasing rate of the ground surface temperature suggests that the change in atmospheric temperature is influenced by the change in long wave radiation from the ground surface.” (Climate change for the last 1,000 years inferred from borehole temperatures, Kitaoka, K. et al., 2013.)
I think these findings point to a greater natural variation and a lower climate sensitivity than is often implied.

Thank you, DeWitt Payne. You made me understand that borehole data are difficult to use in climate reconstruction. It is still interesting that they come up with some historical variations. Perhaps other proxies are better to use. The Moberg reconstruction is middle of the road among the temperature reconstructions, and is perhaps more reliable.