Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Three

In Part Two, we looked at the beginnings of a very simple 1-d model for examining how the atmosphere interacts with radiation from the surface.

Simplification aids understanding, so the model has some fictitious gases which absorb radiation – pCO2 (pretend CO2) and pH2O (pretend water vapor). We saw that as the concentration of pCO2 was increased, we eventually stopped seeing a change in “top of atmosphere” (TOA) flux.

That is, the “greenhouse” effect became “saturated” as the pCO2 concentration was increased.
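This saturation behavior falls straight out of the Beer-Lambert law: transmitted flux decays exponentially with absorber amount, so once the optical depth is large, doubling the concentration barely changes what gets through. A minimal sketch (the absorption coefficient and path length are arbitrary illustrative values, not taken from the model):

```python
import math

def transmitted_fraction(concentration, k=0.01, path=1000.0):
    """Beer-Lambert transmission: exp(-optical depth)."""
    return math.exp(-k * concentration * path)

# Doubling from a low concentration changes the transmitted flux noticeably...
change_low = transmitted_fraction(0.01) - transmitted_fraction(0.02)
# ...doubling from an optically thick concentration changes almost nothing:
change_high = transmitted_fraction(1.0) - transmitted_fraction(2.0)
print(change_low, change_high)
```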

Of course, the model was flawed. It only included absorption of radiation by the atmosphere, with no emission. There are other over-simplifications, which we will try to address progressively.

Emission

Once the atmosphere can emit as well as absorb radiation the results change.

The model has been updated, and for these results is now at v3.1 (see note 5).

Here is a comparison of “no emission” vs “emission”. In each case, 10 runs were carried out at different pCO2 concentrations. Each graph shows:

TOA spectral results for runs 1, 5 and 10

Surface spectral downward radiation (DLR or “back radiation”) for the last run

Temperature profile for the last run

Summary graph of flux vs concentration changes for all 10 runs

No emission:

Figure 1 – Click for a larger view

Note that the surface downward radiation is zero. This is because the atmosphere doesn’t emit radiation in this model.

With emission:

Figure 2 – Click for a larger image

If you compare the spectral results you can see that in the “no emission” case the “pretend water vapor” (pH2O) band of wavenumbers 1250-1500 cm-1 is in “saturation”, whereas in the “emission” case, it isn’t.

In essence, radiative forcing is the change in TOA flux. When less flux escapes, this is considered a positive radiative forcing. The reason is this: less flux radiated from the climate system means that less energy is leaving, which means the climate will warm (all other things being equal).
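The sign convention just described can be captured in a one-line sketch (the flux values here are made up purely for illustration):

```python
def radiative_forcing(toa_flux_before, toa_flux_after):
    """Change in TOA flux; positive when less flux escapes after the change."""
    return toa_flux_before - toa_flux_after

forcing = radiative_forcing(240.0, 236.3)  # outgoing flux drops: warming influence
print(forcing)
```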

Here is the same graph expressed as radiative forcing:

Figure 4

If you compare it with the IPCC graph in Part Two (or Part Seven of the CO2 series) you will see it has some similarities:

Figure 5

Note that the actual values of radiative forcing in this model are much higher – in part because we are using a gas with made-up properties, and also because we are comparing the effect from ZERO concentration of the gas vs a very high concentration.

However, one point should be clear. Using the very simple Beer-Lambert law of absorption, and the very well-known (but not so simple) Planck law of emission, we find that the “radiative forcing” for changing concentrations of one gas doesn’t look like the Beer-Lambert law of absorption.

Real World Complexity

The very simple model here – with emission – will eventually reach “saturation”.

The much more complex real-world “line by line” models eventually reach “saturation” but at much higher concentrations of CO2 than we expect to see in the climate.

This model is still a very simple model designed for education of the basics – and to allow inspection of the code.

The model has no overlaps between absorbing molecules and has no effects from the weaker lines at the edges of a band.

Reducing Emission and “The Greenhouse Effect”

As emission of radiation is reduced due to increases in absorbing gases, with all other things being equal, the planet must warm up. It must warm up until the emission of radiation again balances the absorbed radiation (note 1).

Another way to consider the effect is to think about where the radiation to space comes from in the atmosphere. As the opacity of the atmosphere increases the radiation to space must be from a higher altitude. See also The Earth’s Energy Budget – Part Three.

Higher altitudes are colder, and so the radiation to space is lower. Less radiation from the climate means the climate warms.

As the climate warms, if the lapse rate (note 3) stays the same, eventually the radiation to space – from this higher altitude – will match the absorbed solar radiation. This is how increases in radiatively active gases (aka “greenhouse” gases) affect the surface temperature (see note 4).
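The balance condition being described can be illustrated with the standard zero-dimensional calculation – solving for the emission temperature at which outgoing radiation matches absorbed solar radiation. Earth-like values are used purely for illustration:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m2 K4)
S0 = 1361.0       # solar constant, W/m2 (illustrative)
ALBEDO = 0.3      # planetary albedo (illustrative)

absorbed = S0 * (1 - ALBEDO) / 4.0       # averaged over the sphere
t_emission = (absorbed / SIGMA) ** 0.25  # temperature that balances it, ~255 K
print(round(t_emission, 1))
```

If increased opacity forces emission to occur higher up, where the temperature is below this value, the deficit persists and the atmosphere below must warm until the emitting level again radiates at roughly this temperature.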

The Model

The equations will be covered more thoroughly in a later article in this series.

The essence of the model is captured in this diagram for one “layer”:

Figure 6 – One layer from the model

The atmosphere is broken up into a number of layers. In these model runs this was set to 30 layers with a minimum pressure of 10,000 Pa (about 17km). The “boundary condition” for radiation from the surface was Planck law radiation from a surface of 300K (27°C) with an emissivity of 0.98. And the “boundary condition” for radiation from TOA was zero.

This is because we are considering “longwave radiation”. A further complication would be to consider an absorber (like CO2) which absorbs solar radiation, but this has not been done in this model. Absorbers of “shortwave” radiation trap energy in the atmosphere rather than allowing it to be absorbed at the surface. Therefore, they affect where energy is absorbed within the climate system, but don’t significantly affect the total energy absorbed.
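For readers who want to reproduce the surface boundary condition, here is a sketch of the Planck law in wavenumber units. The function name and structure are mine for illustration, not the model’s v3.1 code:

```python
import math

H = 6.62607e-34   # Planck constant, J s
C = 2.9979e8      # speed of light, m/s
KB = 1.38065e-23  # Boltzmann constant, J/K

def planck_radiance(nu_cm, temp_k, emissivity=1.0):
    """Spectral radiance in W/(m2 sr cm-1) at wavenumber nu_cm (in cm-1)."""
    nu = nu_cm * 100.0  # cm-1 -> m-1
    b = 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (KB * temp_k))
    return emissivity * b * 100.0  # per m-1 -> per cm-1

# The model's surface boundary condition: 300 K, emissivity 0.98
surface_radiance = planck_radiance(700.0, 300.0, emissivity=0.98)
print(surface_radiance)
```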

The transmissivity at each wavenumber for each layer was calculated. Absorptivity = 1 − transmissivity (note 2).

So for each layer – and each wavenumber interval – the transmitted radiation (incident radiation × transmissivity) was calculated. This was done separately for up and down radiation. The emitted radiation was calculated from the Planck formula and the emissivity (= absorptivity at that wavenumber).

The net energy absorbed as a result changed the temperature, and the model iterated through many time steps to find the final result.
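The bookkeeping just described can be sketched for a single layer in a single band. This is a highly simplified stand-in, not the actual v3.1 code: it uses a band-integrated Stefan-Boltzmann term where the model uses the spectral Planck function, and all names and values (including the heat capacity per unit area) are illustrative:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def layer_step(flux_up_in, flux_down_in, transmissivity, layer_temp,
               heat_capacity, dt):
    """One time step for one layer in one band (illustrative sketch).
    Emissivity = absorptivity = 1 - transmissivity (Kirchhoff's law)."""
    a = 1.0 - transmissivity
    # Band-integrated stand-in for the spectral Planck emission, up and down:
    emitted = a * SIGMA * layer_temp**4
    # Incident radiation is partly transmitted, partly absorbed:
    flux_up_out = flux_up_in * transmissivity + emitted
    flux_down_out = flux_down_in * transmissivity + emitted
    absorbed = a * (flux_up_in + flux_down_in)
    # Net absorbed energy changes the layer temperature:
    new_temp = layer_temp + (absorbed - 2.0 * emitted) * dt / heat_capacity
    return flux_up_out, flux_down_out, new_temp

# A layer colder than its radiative balance point warms over the step:
up, down, t_new = layer_step(390.0, 0.0, 0.5, 230.0, 1.0e5, 100.0)
```

Iterating this over all layers, wavenumber intervals, and time steps until the temperatures stop changing is, in essence, what the model does.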

With the very simple pCO2 and pH2O properties the number of timesteps made almost no difference to the TOA flux. The stratospheric temperature did vary, something for further investigation.

The wavenumber interval, dv = 5 cm-1, was changed to see the effect on the results. Very small changes were observed as dv was reduced from 5 cm-1 to 1 cm-1. This is not surprising, as the absorbing properties of these molecules are very simple.

Conclusion

The model is still very simple.

The changes in TOA fluxes are significantly different for the unrealistic “no emission” case vs the “emission” case. This is to be expected.

As the concentration of pCO2 increases, the TOA flux reduces, but by progressively smaller amounts.

When we review the results as “radiative forcing” we find that even with this simple model they resemble the IPCC “logarithmic forcing result”.

There’s more to think about with real-world gases and absorption.

Hopefully this article helps people who are trying to understand the basics a little better.

And surely there must be mistakes in the code. Anyone who sees anything questionable, please comment. You will probably be correct.

People trying to do these calculations in their head, or with a pocket calculator, will be wrong. Unless they are mathematical savants. Playing the odds here: when someone says (on this subject) “I can see that…” or “It’s clear that…” without a statement of the radiative transfer equations and boundary conditions, plus their solutions to the problem – I expect that they haven’t understood the problem. And mathematical savants still need to explain to the rest of us how they reached their results.

Notes

Note 1: Simplifications aid understanding. But in the current “climate” where many assume that climate science doesn’t understand complexity it is worth explaining a little further. Words from others neatly describe the subject of “equilibrium”:

The dominant factor determining the surface temperature of a planet is the balance between the net incoming solar radiation and outgoing thermal infra-red radiation.

That is, radiative equilibrium can be assumed for most practical purposes, although strictly speaking it is never achieved, as time-dependent and non-radiative processes are always present.

Note 2: Scattering of solar radiation is important, but scattering of longwave (terrestrial radiation) is very small and can be neglected. Therefore, Absorption = 1 – Transmission.

Note 3: Lapse rate is the temperature change with altitude – typically a reduction of 6.5K per km. This is primarily governed by “adiabatic expansion” and is affected by the amount of water vapor in the atmosphere. See Venusian Mysteries and the following articles for more.

Note 4: The explanations here do not include the effects of feedback. Feedback is very important, but separating different effects is important for understanding.

I’m trying to get the effects of radiation relative to other forms of heat transfer in the troposphere.
The dry adiabatic lapse rate is given by dT/dh = −g/Cp ≈ −9.8 K/km, where g is the gravitational field strength and Cp is the specific heat capacity of air at constant pressure.

In other words, the temperature acquired by air molecules after contact with the surface drops by almost 10 K per km of ascent.
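A quick arithmetic check of the formula quoted above (standard textbook values for g and Cp):

```python
g = 9.81      # gravitational field strength, m/s2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate_per_km = -g / cp * 1000.0  # dT/dh in K per km
print(round(lapse_rate_per_km, 2))    # -9.77
```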

Now in the case of the dry adiabatic troposphere, although water vapour may be absent, CO2, being well mixed, should be there as usual.

However it seems to play no part that I can see.

Even more alarming, in this NASA description of the atmosphere with various conditions specified, there is no mention of greenhouse gases!
Surely the radiative effects of CO2 must get at least a tiny mention, shouldn’t they?

The structure of the troposphere and the formation of clouds can be explained solely by the physics of adiabatic expansion and water phase changes, which is what is being discussed on the linked page. The radiative properties of ghg’s are irrelevant to the topic being discussed. The absolute value of the surface temperature, OTOH, which is not being discussed, is determined by radiation as well as convection.

Thanks!
Just a simple – and possibly even silly – question. Correct me if I am wrong in my thoughts here: When a CO2 molecule emits a photon, this photon is most probably absorbed by an H2O molecule, as the wavelength of the photon most probably falls within the very wide H2O absorption band at roughly 5 to 10 micrometers. Eventually this photon is “lost”, that is, converted to heat in the atmosphere. It will never reach the surface of the earth. (Water additionally has a very wide absorption band above 20 micrometers, so no photons in this range will get through.) Is this something accounted for in your model with emission?

Concerning the graphs with no emission – am I reading it right that the graph says that our model atmosphere is semi-transparent in the pretend CO2 absorption ranges at concentrations of 129 ppm (and even 880 ppm in the previous post)? Semi-transparent from the surface to space, that is. Why I ask is that I’ve been a bit under the impression that the atmosphere is actually quite opaque in the CO2 absorption band at current concentrations. Am I wrong – does some of the surface radiation actually reach space through the absorption bands – or is this just a property of the pretend CO2?

Furthermore I’ve kind of thought that “saturation” had something to do with 3D-2D geometry relations (some of the molecules can “hide” behind others if there’s enough of them), but as I understand it has more to do with the limited width of the absorption band in this case?

This is just pretend CO2.
The absorption characteristics were arbitrarily chosen, then concentration was increased from a very low value until “saturation” was reached – all to allow demonstration of basic properties of absorption and emission.

This version of pretend CO2 has a very simple absorption model – absorbs equally from 600-800 cm-1 with nothing outside.
And no variation within the band.

The energy from an absorbed photon is several orders of magnitude more likely to be re-distributed by collision than emitted again by the same molecule. But at about the same time, a molecule somewhere in the vicinity will have an inelastic collision that increases its energy enough that it will emit a photon of about the same energy.

Energy gained by EM absorption is lost by molecular collision; energy gained by MC is lost by EM emission. In an air parcel at any time EM absorption approximates EM emission, though by different molecules.

What might cause simultaneous MCs to give rise to less EM emission than absorption, causing the air parcel to warm?

I’ve read the October 24th post, already last October, but it doesn’t seem to address my point. Which is: what happens to the photon emitted by an excited CO2 molecule? Often you see it stated that it will be radiated in a random direction and that statistically half of all photons will radiate upwards, away from the surface of the earth, and the other half downwards towards the surface. But my question is: whatever direction an emitted photon initially takes, there is a very big chance that there is an H2O molecule in the vicinity to grab it. What THEN happens? Am I completely lost here?

But my question is: whatever direction an emitted photon initially takes, there is a very big chance that there is an H2O molecule in the vicinity to grab it. What THEN happens? Am I completely lost here?

If we talk about the fate of one photon..

If it is absorbed by an H2O molecule, this increases the energy of the H2O molecule. Before the H2O molecule can emit a photon of equal energy, a collision takes place which “thermalizes” this energy – that is, turns it into “translational energy” – or temperature.

So the H2O molecule has absorbed a photon and shared this energy with its local “dance club” – all the neighboring molecules that bounce into it.

Usually we talk about the statistics of this process rather than the fate of one molecule, which history does not usually record.

Of the many CO2 molecules emitting energy, the molecules that absorb this energy will be those in the vicinity which absorb at that wavelength. The main absorbers will depend on the altitude.

In the lower atmosphere, there is a much higher concentration of H2O than in the upper atmosphere.

Statistics determines the relative rates of absorption and emission.

The radiative transfer equations are used to determine the resulting equilibrium temperature of the atmosphere vs altitude.

scienceofdoom
Thanks for your reply. It confirms my thinking (which is based on my experience with fluorescence and the use of secondary fluorophores as wavelength shifters). Thus we should expect the atmosphere to heat up at first. Now to my original question: is this in any way taken into account in models in general and in your “with emission” model above?

We might try to analyze the problem from the description of the processes on the molecular level. This is how I imagine them.
Generally, a molecule in a gas at room temperature and 1 atm pressure undergoes on average 10^9 collisions per second. This means that the emission of photons by molecules in excited states has to compete with the process of thermalization. Simultaneously, it is also evident that the collisions might result in the excitation of molecules from the ground state to the excited resonance states, which might be followed by the emission of photons.
If the energy of the collision is not sufficient to excite the molecule to the (relatively long-lived) resonance energy levels, then the molecule might be excited to a (short-lived) intermediate state instead. This will most probably result in the emission of photons of energy equal to the difference between the intermediate states and the ground state. The emission of photons corresponding to the intermediate state is much more probable than emission from the resonance states, due both to the difference in lifetime of the resonance state and to the high rate of collisions. At a given temperature, an (average) equilibrium between the number of excitations and the radiation is reached. Here the Planck distribution function comes into view. The thermal radiation thus created will tend to escape from the given volume, the power losses being given by the Stefan–Boltzmann law. If the escaped radiation is not compensated, then the temperature of the sample will start to decrease.
The compensation of the radiative losses can be achieved through the inflow of radiation from outside. The incoming photons excite the molecules to the intermediate (or resonance) energy states, which will be followed either by the re-emission of photons or by thermalization. If the external source of radiation is at the same temperature as the gas sample, then the radiation flux from the external source into the gas volume will be equal to the radiation flux from the volume to the external source, and both the sample and the external source will retain their temperature unchanged (this is what is actually happening within a gas volume at a given temperature – divide the total gas volume into sub-volumes and they will exchange energy with each other at the same rate per unit area, thus keeping the temperature unchanged).
If the external source is at a higher temperature, then the inflow of photons to the given gas volume will be larger than the flow of photons in the opposite direction. The excess of photons will now tend to increase the excitation of the molecules in the sample under observation. If the excitations go through the intermediate states, then we will have the re-emission of photons, most probably in the direction of propagation of the absorbed photons, which is due to the conservation of the linear momentum. If the absorption of photons results in the transitions to the rotational (or vibrational) energy states, then the possible re-emission will be random since the conservation of the linear momentum is now governed by the rules for the emission process from the resonance state.
It is also evident that if the sample under observation loses more radiation energy than it gains (for example from the neighboring volume of the gas), then its temperature will start to decrease.
The flow of radiation in the direction of the decreasing temperature is described by the Schwarzschild equation.
The next step is to add more molecules of some given sort to the volume under observation (in the gravitational field these added molecules will force the lighter molecules out of the volume into the volume above). In the case of the heavier molecules, the process of absorption of photons from the neighboring volume with the higher temperature (respectively from the upward radiation from the surface of the Earth) will be shifted down to lower heights, and the scattering of photons with energies within the absorption band of the molecules will be accomplished at a shorter distance from the source of the external radiation. The addition of the extra absorbing molecules within the given volume will thus tend to increase the temperature of the volume due to thermalization. However, the extra molecules will also be subject to excitation through collisions. This will increase the probability of re-emission of photons out of the volume, which will steal energy from the volume and thus tend to decrease its temperature – this is the cooling effect of molecules characterized by sufficiently small transition energies. However, if the thermalization process is predominant, an increase of the temperature of the volume will be more probable than cooling.
In the former case, the increase of the temperature of the gas in the vicinity of the radiation source (here close to the surface of the Earth) will tend to slow down the cooling of the source at the nighttime (due to the decrease of the net flow of radiation between the source and the gas) and increase the heating of the source at the daytime. This will influence the temperature of the source (here that of the surface), the actual increase of its temperature being dependent on the heat capacity of the source and the gas.
This effect can be most clearly observed in the tropical regions as compared to the desert ones, due to the enrichment of the atmosphere by water vapor in the tropics. On the other hand, the effect of enriching the atmosphere with CO2 should be most detectable through observation of the changes in the temperature of the deserts.

Ernest: One doesn’t need to ask what fraction of the molecules have enough energy to be in an excited vibrational state that can emit a photon. The Planck function, B(lambda,T), in the Schwarzschild eqn automatically accounts for the Boltzmann distribution of energy in gases emitting radiation.

The Schwarzschild eqn predicts the intensity of the radiation flux in ANY direction (from hot to cold and cold to hot), but the net flux will always be from hot to cold. Increasing GHGs increases the flux in both directions, speeding up the net energy transfer from hot to cold.

Frank.
Thanks for your comments. The Schwarzschild equation is the general one describing the propagation of radiation in all directions, as you have correctly mentioned, and not only in the direction of the net flux, even if the latter case is usually of most interest.

The Planck function relates to the given temperature and, thus, to the actual energy distribution of the molecules (including the fractions of excitations). The mention of fractions is of interest when discussing what can be expected if the sample is subject to a disturbance caused by irradiation from an external source. This might give a somewhat clearer picture and, hopefully, increase understanding of how radiation is expected to affect matter.

In the atmosphere, reflection of EM radiation in the thermal range (> 5 μm) is pretty close to zilch. Ice crystals in cirrus clouds have some reflectivity for some wavelengths in the thermal range, but for everything else it’s just transmission and absorption. No reflection, no scattering.

SOD: Your model is suffering from “mission creep”. In Part II, you said: “The main purpose of this model is to demonstrate the effect of the basic absorption and emission processes on top of atmosphere fluxes. It is not designed to work out what troubles may lie ahead for the climate.” By the end of Part III, we get to: “The dominant factor determining the surface temperature of a planet is the balance between the net incoming solar radiation and outgoing thermal infra-red radiation.”** Your models – and similar studies done with Modtran and Hitran – explore how a PREDETERMINED climate and varying GHG’s influence radiation – not how radiation influences climate.

IMO, the short, vague section on “Reducing Radiation and the Greenhouse Effect” isn’t worthy of being included alongside the careful modeling you are doing. How about something like: The climate system must respond in some way that restores the balance between incoming and outgoing radiation – most likely by warming some or all of the surface or the atmosphere. This model shows that DLR reaching the surface will increase, so the response should begin there. However – as we have seen with the skin of the ocean – when convection is involved, increased DLR isn’t always used to warm the surface that absorbs it. An unstable lapse rate could limit surface warming, enabling convection to transfer most of the energy from increased DLR to the upper troposphere. Many climate scientists begin with the approximation that a fixed lapse rate will raise the temperature by the same amount everywhere in the troposphere.

** Isn’t this quote wrong (or out of context)? Venus has less net incoming radiation than the Earth (due to Venus’ higher albedo). Since neither planet is believed to have a significant internal source of heat, net incoming solar radiation on both planets is balanced by outgoing thermal infra-red radiation. The surface temperature of Venus is about 500 K hotter than it should be if this were the “dominant factor”.

SOD: Perhaps my comment wasn’t clear. I’m here because you try to properly illustrate the physics. I think the model you are constructing is valuable and I’m hoping to see more sophisticated versions, perhaps one breaking the pCO2 band up into a series of narrower bands with different absorption coefficients. When possible, changing one variable at a time is an excellent way to proceed.

Your original mission and model intended to demonstrate how outgoing radiation changes with GHGs. It was originally not intended to illustrate how climate would change. In Part III, you began talking about climate change, particularly temperature rising in the upper atmosphere to restore balance. IMO, this discussion had several problems: a) It seemed “radiation-centric”, rather than “radiative-convective”. In particular, the “dominant factor” quote. b) It wasn’t as rigorous as your usual work. I suggested an answer to the question: What can we say with certainty about the climate’s response to a radiative imbalance? It can’t persist indefinitely. It can be eliminated by warming somewhere. Increased DLR could provide the energy for warming. c) Your discussion raised the possibility that your model may have become capable of predicting temperature change. I had to re-read both posts carefully before I felt I understood what was going on. When looking at spectra of outgoing radiation, it is easy for readers to lose sight of the nature of the “model atmosphere” needed to calculate these spectra. Are they models that respond to a local radiative imbalance with a temperature change or not?

About 10% of the infra-red emissions from the surface lie in the CO2 frequency band. As will be seen from the table, the average height of absorption of 50% of the photons is around 300 m. The same is true for photons emitted by CO2 molecules in the atmosphere: 50% are from 300 m or below. The intensity of the photons reaching the surface is governed by the temperature when they were emitted. The average temperature is the temperature of air at 300 m, i.e. about 13°C.
The surface, at 15°C, and the 300 m CO2 radiating layer at 13°C are almost in radiative balance. The calculated difference between the surface emitted radiation and the atmospheric back-radiation in this band is 1 W/m2. In this area of the spectrum, the “greenhouse” is nearly perfect, and 1 W/m2 is the maximum direct increase in back radiation which would result if the CO2 frequencies were completely blocked.
A doubling of CO2 in the atmosphere would result in a lowering of the average CO2 radiating layer to about 150 m. So a doubling will result in a direct increase in back-radiation of 0.5 W/m2. [Calculated from the difference in blackbody emissions at 15°C and 13°C, multiplied by 10%, which is the approximate proportion of the power in the spectrum occupied by CO2 frequencies.]
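The bracketed calculation can be reproduced directly. A sketch, taking the band as exactly 10% of the spectrum as the comment does, and putting the doubled-CO2 radiating layer at roughly 14°C (halfway between the surface and the original 300 m layer – my assumption, which reproduces the 0.5 W/m2 figure):

```python
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/(m2 K4)
BAND_FRACTION = 0.10  # CO2 band's share of the spectrum, per the comment

def band_flux_difference(t_surface_k, t_layer_k):
    """Blackbody flux difference between surface and radiating layer,
    restricted to the CO2 band."""
    return BAND_FRACTION * SIGMA * (t_surface_k**4 - t_layer_k**4)

# Surface at 15 C (288 K) vs the 300 m layer at 13 C (286 K):
print(round(band_flux_difference(288.0, 286.0), 2))  # roughly 1 W/m2
# vs a 150 m layer at about 14 C (287 K) after doubling:
print(round(band_flux_difference(288.0, 287.0), 2))  # roughly 0.5 W/m2
```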

You’ve calculated the increase in back radiation before equilibration. In fact, it’s less than that because increasing CO2 increases absorption of incoming solar in the near IR. The actual greenhouse effect is determined by the altitude of emission to space. At 670 cm-1, that’s on the order of 10 km where the temperature is 220 K. Increasing CO2 doesn’t change the properties of the center of the band but it does expand the wings resulting in less emission to space. The atmosphere and the surface then have to warm to achieve balance. That’s when you see a significant increase in back radiation.

Here the emission temperature at wavenumber 670 is around 237 K, and for most of the rest of the CO2 band it is around 212 K.

CO2 is very active at wavenumber 670. For a wavenumber 670 photon to escape the planet it must evade all the strongly wavenumber 670 absorbing molecules in its path. Nicol has calculated (in my view correctly) that at sea level 50% of photons are absorbed in one metre (about 0.008% of the atmosphere). At the top of the atmosphere this number of CO2 molecules would be encountered by a photon emitted at 70km.

Wavenumber 670 photons escaping to space must be emitted from higher in the atmosphere than the other CO2 frequencies. (CO2 is less opaque at those frequencies).
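The scale of opacity being claimed is easy to sketch with Beer-Lambert attenuation. Taking the comment’s 50%-per-metre figure at face value (it is the commenter’s number, from Nicol, not verified here):

```python
import math

HALF_DISTANCE_M = 1.0  # 50% of 670 cm-1 photons absorbed per metre (per the comment)
k_ext = math.log(2.0) / HALF_DISTANCE_M  # extinction coefficient, per metre

def surviving_fraction(path_m):
    """Beer-Lambert fraction of photons surviving a path of path_m metres."""
    return math.exp(-k_ext * path_m)

print(surviving_fraction(10.0))  # about 0.001: one photon in ~1000 survives 10 m
```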

DeWitt Payne stated “You’ve calculated the increase in back radiation before equilibration. In fact, it’s less than that because increasing CO2 increases absorption of incoming solar in the near IR. ”

I thank him for his helpful (and as usual, polite) comment. I think he agrees with my rough estimate of about 0.5W/m^2 increase in back-radiation due to a doubling of CO2. But he has also pointed out a decrease in the amount of incoming solar radiation. I would be grateful if he could quantify that.

Insolation at the surface will be reduced by ~1 W/m2 on average so the net forcing at the surface is ~0.5 W/m2. But the lower emission causes the atmosphere to warm. Adjusting the surface temperature offset to make the emission the same for 560 ppmv CO2 as for 280 ppmv using the assumption of constant relative humidity gives:

Temperature offset 1.48 C (301.18 K), 560 ppmv CO2

100 km looking down

Iout = 289.163 W/m2

0 km looking up:

Iout = 360.472 W/m2 for a difference from 280 ppmv of 12.87 W/m2. Less the 1 W/m2 (and probably a little more from the increased water vapor) that’s ~12 W/m2 forcing at the surface.

That’s a first approximation, though. It doesn’t take into account changes in convection that would occur because the downward radiation increases faster than the upward radiation from the surface. That’s also clear sky. The forcing is a lot smaller for a cloud covered sky.

I thank DeWitt Payne for his response.
As I understand it, neglecting the effect of increased temperature for the moment, the effect on the surface of increasing CO2 concentration is:
a. A reduction of about 1W/m^2 in insolation
b. An increase of about 0.5W/m^2 in back radiation,
for a net DECREASE in surface forcing of 0.5W/m^2.

I don’t understand DeWitt’s “temperature offset” numbers. Must be a hot planet? Neither do I understand what Iout is. Is it the outgoing radiation?

An increase in downwelling radiation is a positive, not a negative forcing. For upwelling radiation it’s the opposite. A positive forcing means heat is added to the system. You can do that by increasing the incoming radiation or decreasing the outgoing radiation.

I thank DeWitt Payne for his patience, and the courtesy of his response.

I think that he agrees with the thrust of my post – that the direct effect of doubled CO2 (neglecting second order effects of increased water vapour and upper atmosphere heating for the present, until we quantify those) is a REDUCTION in forcing of 0.5W/m^2 (a reduction of 1W/m^2 in insolation with an increase of 0.5W/m^2 in back radiation).

I found a number of issues which highlight that the model is not well-defined and the code consequently has some flaws.

I’m not sure what consequences they bring to the results. I am defining boundaries more rigorously (which I should have done before writing the code!) and will report back when the code has been revised accordingly.


In the past it has been claimed that because of pressure drop, there is less attenuation in the wings, and it is these frequencies which get out to space.

Yes, sort of. In the wavenumber 670 case, absorption takes place very close to the source. The pressure reduction is small – at most 5 percent at 20km altitude – so there will not be much tightening of the lines between the emission point and the rough 50% absorption point 600m higher. So I cannot see any mechanism for wavenumber 670 emissions to space from below the tropopause – they must be from the stratosphere.

I turn to wavenumber 650 emissions to space. At sea level these emissions are 50% absorbed within 25m (0.2% of the atmosphere).

At 20km, 0.2% of the atmosphere is approximately 700m, ie a photon emitted at 20km would encounter the same number of potentially absorbing CO2 molecules in 700m as would a photon emitted at ground level in 25m.
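The scaling in these paragraphs can be sanity-checked with a toy isothermal atmosphere. This is a rough sketch of my own, not from the comment; the scale height H and the short-path approximation are assumptions, and real absorption paths also depend on the line shape:

```python
import math

# Sketch: in an isothermal exponential atmosphere with scale height H,
# the path length containing the same column of molecules as a path L0
# starting at sea level scales roughly as exp(z / H).
H = 7.6e3   # assumed scale height in metres (roughly 7-8 km for Earth)

def equivalent_path(L0, z):
    """Path length at altitude z containing the same number of molecules
    as a path L0 at sea level (short-path approximation)."""
    return L0 * math.exp(z / H)

L_sea = 25.0                       # 50% absorption path at sea level, m
L_20km = equivalent_path(L_sea, 20e3)
print(L_20km)                      # a few hundred metres, the same order
                                   # as the ~700 m quoted in the comment
```

The exact number depends on the assumed scale height, but the order of magnitude of the comment's figure comes out of the exponential thinning of the atmosphere.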

The reduction in air pressure between 20 km and 20.7 km is around 6 percent [Note that in my previous post on wavenumber 670, I got the numbers wrong. Its 50% absorption at 20km is only 30m, with negligible pressure difference]. This will slightly reduce the absorption in the wings but accentuate it on the peaks of the lines.

It appears that emissions to space at this wavenumber, and for wavenumber 700 (sea level 50% absorption about 50m), are also from the stratosphere – photons from low down in the atmosphere, say 11km, have too many absorbing molecules in their path (at least 30% of the atmosphere) to survive. Anything in the CO2 absorption band from roughly wavenumbers 650 to 700 will be extinguished from this low down.

Assuming the CO2 band is from wavenumbers 625 to 725 (10% of photons emitted by a blackbody at 212 K are in this band), the total energy emitted by CO2 to space is 11 W. Allowing a little extra for the wavenumber 670 spike and the edges of the band, CO2 accounts for only around 15-20 W of the LW emissions to space.

According to the excellent A First Course in Atmospheric radiation by Grant Petty:

The effect of the CO2 15 μm band on atmospheric transmission to space of surface emission can be crudely approximated by assuming total opacity between 13.5 and 17 μm, and total transparency outside these limits…

If we look at the energy between 13.5-17 μm from a 212 K Planck curve it is 18.7 W/m².
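Petty's approximation is easy to check numerically. A rough sketch of my own, integrating the Planck function over the band with a simple trapezoid rule (the constants and grid resolution are my choices, not from the text):

```python
import math

# Numerical check: hemispheric flux from a Planck curve at 212 K,
# integrated over a wavelength band.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_lambda(lam, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def band_flux(lam1, lam2, T, n=2000):
    """pi * integral of B over [lam1, lam2], trapezoid rule, in W/m^2."""
    dlam = (lam2 - lam1) / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * planck_lambda(lam1 + i * dlam, T)
    return math.pi * total * dlam

print(band_flux(13.5e-6, 17e-6, 212))      # close to the 18.7 W/m^2 quoted
print(band_flux(1 / 725e2, 1 / 625e2, 212))  # 625-725 cm^-1 band: near the
                                             # ~11 W figure in the comment above
```

The second integral converts the 625-725 cm⁻¹ band limits to wavelengths (roughly 13.8-16 μm) before integrating.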

[…] It is the proportion of radiation that is transmitted (not absorbed or scattered) through the atmosphere from the surface to the top of atmosphere. It doesn’t include any re-radiation by the atmosphere – an important element of atmospheric radiation (see, for example, Part Three). […]

The view of the greenhouse effect must be faulty if radiation alone is considered. Strong convection prevails in the atmosphere – and must be considered. It produces an (almost) constant lapse rate in the troposphere. Therefore, the radiative transfer equation can be applied only in the stratosphere. The boundary between the troposphere and stratosphere shifts.

I am wondering if a molecule emits a photon with exactly the same energy that it absorbs. For example, CO2 absorbs IR energy by bending between the wavelengths of 13 and 18 um. If a molecule absorbs a photon hv from a wavelength of exactly 15.000000 um, then would it have to emit a photon with the exact same energy, or could it emit a photon with the energy corresponding to a wavelength of 15.0002 um, or even 13.478 um? And, regardless of the answer, what controls what wavelength of energy is emitted?

I am wondering if a molecule emits a photon with exactly the same energy that it absorbs.

Almost never. That’s because the molecule that absorbs is rarely the molecule that emits. At least 9,999 times out of 10,000, the molecule that absorbs transfers its energy to another molecule, usually nitrogen or oxygen, before it can emit. The molecule that emits was, in the same proportion, excited by collision. Of course some molecules will be excited to exactly the same energy level and emit without changing their rotational state, so somewhere in the vicinity a molecule will emit at exactly the same wavelength.

The wavelength is controlled to first order by the rotational energy state of the emitting molecule. That’s also determined by the details of the collisional activation. The rotational energy state distribution of the molecules is controlled by the temperature and Maxwell-Boltzmann statistics. But emission lines also have a wavelength distribution. The line is either pressure or doppler broadened depending on the local pressure. Even at absolute zero, the line width would still not be zero because of the uncertainty principle. If the line width were, in fact, identically zero, there would be no absorption.
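As a rough illustration of the line widths involved, here is a sketch of my own (not from the comment); the ~0.07 cm⁻¹/atm pressure-broadening coefficient is an assumed typical textbook value:

```python
import math

# Compare the Doppler half-width of a CO2 line near 670 cm^-1 with a
# typical pressure-broadened half-width at 1 atm.
k = 1.381e-23            # Boltzmann constant, J/K
c = 2.998e8              # speed of light, m/s
m_co2 = 44 * 1.661e-27   # CO2 molecular mass, kg

def doppler_hwhm(nu0_cm, T):
    """Doppler half-width at half-maximum, in cm^-1."""
    v = math.sqrt(2 * math.log(2) * k * T / m_co2)  # thermal speed scale
    return nu0_cm * v / c

gamma_doppler = doppler_hwhm(670.0, 220.0)   # of order 5e-4 cm^-1
gamma_pressure = 0.07 * 1.0                  # assumed ~0.07 cm^-1 at 1 atm
print(gamma_doppler, gamma_pressure)
```

At surface pressure the assumed collisional width is much larger than the Doppler width, which is why pressure broadening dominates low in the atmosphere and Doppler broadening only takes over at low pressure.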

As a once-practicing spectroscopist, I have a rather fundamental problem with this post. When we measure an absorption spectrum using, say, an IR spectrometer, something I did hundreds of times in grad school, what we measured was the difference between the IR received at the photomultiplier tube, in a particular spectral band, with sample present, and without — without usually meaning dry air = N2, or pure solvent. We did not have a switch on the side of the box to magically turn off emission in the sample, so it was free to emit all it wanted to, in all directions. Thus the absorbance we measured already took into account forward emission, at lab temperature and pressure. That is, the detector does not distinguish between photons emitted by the source, and those emitted by the sample. And Beer’s law was generally quite adequate to explain our findings, although we were usually more concerned with the locations of strong lines than with concentration or temperature effects.

Your methodology seems to presuppose a means of obtaining “absolute” absorbances, with emission “turned off”. You then correct for an emission arising from thermal excitation, to get the “real” spectral behavior, which does not obey Beer’s law. But this hypothetical “real” behavior cannot be observed in the lab, whereas Beer’s law can be. As a scientist, I am unwilling to discard observations simply because they disagree with theoretical models.

I will grant that a column of mixed gases kilometers in length, with large variations in temperature, pressure, and composition is a more complicated situation than a sample in a quartz vial. But that does not justify ignoring what we have learned from the simpler case, and just making things up. Is there something I am overlooking, that justifies this procedure?

You can ignore sample emission in an IR absorption spectrophotometer for at least two reasons. First, the sample is optically thin, absorbance less than 2 or so. Beer’s Law tends to fail above that level unless you have a really good spectrophotometer that is both sensitive and has very low stray light. Second, the source temperature is much, much higher than the sample temperature and intensity goes as T^4 so the source emission will be orders of magnitude higher than the sample emission. Obviously, sample emission in the UV/Vis is negligible for room temperature samples.

Consider a sample and a source that are at the same temperature. If the emissivity of the source is nearly 1, you won’t see a spectrum at all no matter the path length or whether sample is present or not. Looking up from the surface, the source temperature is 2.7 K. Looking down from space the source temperature is the surface of the Earth, on the order of 300 K. The source temperature of an IR spectrophotometer is on the order of 1000 K.

To be more precise, the spectrum you will see when the sample and source are the same temperature is identical to the source spectrum. There will be no absorption or emission features. You will see emission features if the source is at a lower temperature than the sample and absorption features if the source is at a higher temperature than the sample.

You seem to imply that every light source “has” a temperature, and that temperature somehow stamps itself upon the light, and determines the nature of its interaction with other matter. This is simply untrue. Many light sources produce light that is not black-body distributed. Lasers. LEDs. Fluorescent tubes. Atomic line lamps.

What DeWitt is describing is the situation where the radiation from the source has the intensity of a black body at the temperature of the sample for the wavelength being considered.

That situation may arise from a source that really radiates nearly at the black body intensity corresponding to its temperature and that’s certainly one common case. You may produce the same source intensity in other ways as well, but the way it’s produced doesn’t matter for the argument.

So what? Measure the power spectral density and invert the Planck equation and you still get a temperature. You might want to call it an effective temperature for non-thermal sources, but it’s still the temperature of a black body source that would produce that power spectral density at the particular wavelength or frequency. Solar radiation, for example, has an effective temperature at 1 AU of ~395K, but its brightness temperature is ~6000K.
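DeWitt's two numbers are straightforward to reproduce. A hedged sketch of my own, assuming a solar constant of ~1361 W/m² and a ~5800 K solar brightness temperature for the round-trip check:

```python
import math

# Effective vs. brightness temperature of sunlight at 1 AU.
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

# Effective temperature: the black body whose total flux matches the
# solar constant at 1 AU.
T_eff = (1361.0 / sigma) ** 0.25   # close to the ~395 K in the comment

def planck(lam, T):
    """Spectral radiance B(lambda, T), W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def brightness_temperature(L, lam):
    """Invert the Planck equation for temperature at one wavelength."""
    return h * c / (lam * k * math.log(1 + 2 * h * c**2 / (lam**5 * L)))

# Round trip: the radiance of a ~5800 K black body at 500 nm inverts
# back to ~5800 K, the Sun's approximate brightness temperature.
T_b = brightness_temperature(planck(500e-9, 5800.0), 500e-9)
print(T_eff, T_b)
```

The same inversion works for any measured spectral radiance, thermal source or not, which is the point being made about "effective temperature".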

You say you have a “fundamental problem” with the article, but it seems more like a practical problem.

The point of this series is to separate out a number of different components in atmospheric radiation, using two artificial (that is, invented) gases.

The idea of separating out individual components of basic physics is an educational one.

If you have a fundamental problem then it will be with the equation in note 1 of Part Two or the derivation of the fundamental equations in Part Six – The Equations.

The background to this series of articles is many people writing blogs, comments and, occasionally, unpublished papers which demonstrate (at best) a basic knowledge of absorption spectroscopy but no understanding of emission.

The practical problem you note is simply that the real world observation includes emission. I agree. Of course, emission always takes place.

And it is easy to construct an experiment where Beer’s law of absorption matches the experimental results that include emission simply due to the wavelengths in question or the temperature of the gas.

I’m not advocating “discarding observations”. I’m just trying to explain that absorption and emission both affect the measured result.

And so I am confused by your claim.

Are you claiming that:

a) Beer’s law is sufficient to explain the emission of radiation from an absorbing body in all cases?

or

b) Beer’s law is sufficient to explain the emission of radiation from an absorbing body in some cases to be defined to an accuracy level you propose to define?

And of course the reason that atmospheric absorption and emission is more complex is for the reasons you note in your concluding paragraph. The complexity is not just “making things up”. The complexity is calculated using well-founded physics, derived from first principles and well established over more than 60 years of experimental work.

If you are confused ONLY by my concept of an atmosphere that does not emit then I can only apologize for not making my thought experiment – and the reasons that sparked off this thought experiment – clearer.

If you are confused by the inclusion of the calculation of emission in atmospheric radiation then this is simply real physics rather than (Beer’s law of absorption only) a handy approximation that works under many experimental spectroscopy conditions.

Having read some of your later postings, I realize that my problem is with the way you use terminology, in particular, the term “absorption”. Your later posts make your meaning clearer. In particular, it becomes apparent that you are discussing energy transfer, rather than inherent properties of molecular species. Thus, a statement like “the absorption of CO2 does not saturate” is actually a statement about a hypothetical column of gas, containing some, perhaps only a little, CO2.

To make such a claim, you must assume a particular temperature profile in the gas column. If you assumed a different profile, you would find different properties. For this reason, I think arguing from these conclusions to claims about thermal behavior is suspect. It may well be the case that the processes which determine air temperature are convective, and completely overwhelm any radiative transfers. One can show, by careful reasoning, that a match lit in a cold, dark room will heat the walls. But that fact does not mean that rooms with lit matches are warmer than rooms without. Other factors are dispositive. Perhaps you address those issues elsewhere.

I do believe I now understand the controversy a little better, and why so many well-educated people continue to argue past each other about it. Thanks, and I will take a look at some of your other posts.

The author’s claim seems to be that Beer’s law fails by predicting a saturation that does not occur. Your argument seems to apply to behavior after saturation has occurred. That is, it would seem that you are claiming Beer’s law overestimates transmission, while the author claims the error is in the opposite direction. Maybe I misunderstand?

As to “source temperature”, that is only relevant, or even meaningful, when the EM field has a black-body spectral distribution. That is not the case in a SPM, since the field is processed by a spectral grating, resulting in a narrow-spectrum source, which does not “have” a temperature. What is relevant is the intensity of the field at a particular frequency, not the temperature of its source. In any case, the claim here seems to be that what is relevant is the temperature of the sample, since that is the property that is modeled.

More important, the model here seems to be, that we are dealing with a column of gases, and its spectroscopic behavior depends strongly upon its temperature profile. That claim strikes me as bizarre, but even taking the notion as a given, it would not follow that we can predict bulk spectroscopic behavior by assuming the thermal profile is determined solely by IR absorption. We know that many other factors are involved. If the optical transmission of a column of gases is strongly dependent upon the thermal profile within that column, then it is simply wrong to try to assign bulk properties like specific absorbance to the molecules in the column. By hypothesis, each molecule finds itself in a separate milieu, and behaves accordingly. If we wish to describe the behavior of the column as a whole, we must first determine its thermal profile. And this model lacks almost all of the inputs needed to do that.

Your third paragraph is about issues that are explicitly considered in the model of radiative energy transfer in the atmosphere that SoD has made, put openly available, and discussed in a recent thirteen-part series on visualizing atmospheric radiation. All the relevant issues must be considered in such a model to get valid results, and they are discussed in this series and the related discussion.

No, Pekka, my third paragraph is about the fact that the model proposed in this posting is based upon a faulty understanding of basic principles. I don’t mean that elaborating it will improve it. Rather the opposite.

Beer’s Law doesn’t fail. It’s incomplete because it ignores sample emission. Ignoring emission is valid in an absorption spectrophotometer for the reasons I stated. Looking at the atmosphere from the ground up, there is only emission.

The effective temperature of EM radiation can be determined from the Planck equation even if it’s narrow band or even from a non-thermal source like a laser or microwave generator. The Planck equation energy units are energy per unit area per unit frequency or wavelength, W/(m² cm-1) in IR speak. Invert the equation for a known spectral power density and frequency and you get a temperature. The broad spectrum EM radiation that impinges on the grating in an IR spectrophotometer is from a thermal source, often a Globar at 1200-1500K. The effective temperature after wavelength selection using slits and a grating will be lower than that because of transmission losses in the spectrometer, but it will still be much higher than the sample temperature. Besides, who uses grating spectrophotometers in IR anyway? FT-IR is the way to go. This article is based on emission from a source temperature of 300K, i.e. the approximate temperature of the surface of the Earth, passing through ~100 km of atmosphere.

As to your last paragraph: It may strike you as bizarre, but that’s because you are ignorant of the literature on the physics of atmospheric radiative transfer. Quantum mechanics strikes most people, including a lot of physicists, as quite bizarre too, but it works. I suggest reading Chapter 5 of Rodrigo Caballero’s Lecture Notes in Physics of the Atmosphere for a start. Grant Petty’s A First Course in Atmospheric Radiation is also available for purchase at a reasonable price if you want more detail.

I availed myself of Pekka’s suggestion, and took a look at SoD’s more recent thirteen-part posting on Visualizing Atmospheric Radiation. In part one, he explicitly deals with the situation inside an SPM. He uses the concept of transmissivity, rather than absorbance. So, I guess this post is just a failed first attempt, left up for historical reasons.

..As to “source temperature”, that is only relevant, or even meaningful, when the EM field has a black-body spectral distribution. That is not the case in a SPM, since the field is processed by a spectral grating, resulting in a narrow-spectrum source, which does not “have” a temperature. What is relevant is the intensity of the field at a particular frequency, not the temperature of its source. In any case, the claim here seems to be that what is relevant is the temperature of the sample, since that is the property that is modeled..

The temperature of the gas is important.

Planck’s law is quite simple. It defines the “black body” spectral intensity, which is a function of wavelength and temperature.

Emissivity is a function of wavelength for a given gas. And a function of the number of molecules of that gas.

Therefore, the intensity of emission of radiation at a particular frequency is dependent on the temperature of the gas.

You state: “What is relevant is the intensity of the field at a particular frequency, not the temperature of its source.“. This is a confusing statement at best, and an incorrect statement at worst. The intensity of the field is the relevant parameter – it is the dependent parameter. But this is dependent upon the temperature, the number of molecules and the capture cross section.

Do you think these equations are wrong?
Or do you think these equations do not link emission intensity with temperature of the gas?

This is just standard radiative physics.

That’s why we can calculate the spectrum of radiation at the top of atmosphere from:
a) the temperature profile of the atmosphere
b) the concentration profile of each radiatively-active gas
c) the surface temperature

And why we can calculate the spectrum of downward longwave radiation at the surface from:
a) the temperature profile of the atmosphere
b) the concentration profile of each radiatively-active gas

Actually, Payne, what you are stating as a fact is that every light source has two temperatures; the one a thermometer measures, and the one you get from your calculation. For thermal sources, they are identical. For non-thermal sources, they are unrelated.

There are lots of temperatures. In an analytical argon plasma off the top of my head, there’s the gas kinetic temperature, the electron temperature, the ionization temperature and probably more. As you say, at LTE these are all equal. But for non-thermal sources, the apparent temperature derived from the radiance is the important number.

When I last studied statistical mechanics, 1/T = dS/dU. That is, the temperature of a system is defined as the inverse of the rate of increase of entropy with addition of energy. It is assumed that the system is in thermal equilibrium, although in practice this requirement can be relaxed a little. Temperature is not defined for systems far from equilibrium. In particular, the electromagnetic field at a single location is often simultaneously exchanging energy with multiple systems at different temperatures. Under those conditions, the apparent “temperature” of the field will depend upon direction.

I suppose other definitions of temperature are possible, and may have use in certain specialized circumstances. Plasma emission spectrometry, say, or music criticism. I expect there are blogs devoted to those topics.

In plasma emission spectrometry, btw, the emission intensity is linear with concentration over many orders of magnitude. That’s because the analytical zone is optically thin and absorption can be ignored. The governing equation for radiative transfer is Schwarzschild’s equation (see here or here). The Beer-Lambert equation is a special case of Schwarzschild’s equation where emission is set to zero. For ICP emission, absorption is effectively zero.
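The relationship between the two equations can be shown in a few lines. A minimal sketch of my own (the numbers are purely illustrative):

```python
import math

# One step of Schwarzschild's equation, dI/dtau = -I + B, across a
# uniform layer. Setting the source function B to zero recovers
# Beer-Lambert attenuation exp(-tau).
def schwarzschild_layer(I0, tau, B):
    """Exact solution across a uniform layer of optical depth tau."""
    t = math.exp(-tau)
    return I0 * t + B * (1 - t)

I0, tau = 10.0, 1.5
# B = 0: pure absorption, identical to Beer-Lambert.
assert abs(schwarzschild_layer(I0, tau, 0.0) - I0 * math.exp(-tau)) < 1e-12
# B > 0: the intensity relaxes toward the source function instead.
print(schwarzschild_layer(I0, tau, 4.0))
```

For a very thick layer the output approaches B regardless of the input intensity, which is the "radiation tends toward blackbody" behaviour discussed elsewhere in this thread.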

Maybe someone can help me here. I’ve decided it’s long past time for me to understand fundamentally how radiative transfer simulation works. I’m looking through these tutorials.

I’m pretty sure I understand that these atmospheric RT simulations account for changes in up/down emission through the whole atmosphere — and that the final calculated result is an upwelling flux change at the TOA (though I surmise an atmosphere-to-surface IR flux change is probably also calculated). However, in the case of increased absorption, i.e. increased atmospheric opacity, most of the increase in attenuated flux at the TOA originates from flux emitted by the atmosphere and not the surface, but it is still quantified as an increase in surface radiant power absorbed by the atmosphere even though it doesn’t all originate from the surface directly?

The ‘average transmittance’, quantifying the net transparency from the surface to the TOA, seems to be based on some integration of the separate layers from the surface to the TOA. Like a net ‘pass through’ of radiant energy from the surface to the TOA considering up/down emission/absorption changes from the multiple layers.

RW,
As far as I know the concepts are not all well defined. Thus different people give them different meanings, which leads to some confusion.

For some purposes, most obviously for imaging, the transmittance is uniquely defined by the fraction emitted at the surface that reaches the altitude where the picture is taken (TOA for satellite images). This kind of transmittance is low for the total IR emission from the surface. Recent estimates give a value of only 0.05 for that. This value is therefore not a large factor in the energy balance. Emission from the atmosphere is much more important as you correctly notice.

Another way of looking at the situation is to consider the atmosphere as a kind of coating on top of the surface, and to determine the intensity of outgoing radiation at TOA and compare that to the Planck’s law intensity for the temperature of the surface. That results in a wavelength-dependent emissivity that varies in the range 0.4 – 1.0. The weighted average that gives the right value in the Stefan-Boltzmann formula is about 0.60.

I appreciate the reply, and I think I understand, but then regarding what is referred to as ‘average transmittance’ — what specific measure of transparency is this? I understand — though I’m not entirely sure — it is a measure of the net transparency through the entire mass of the atmosphere after up/down emission/absorption changes are considered from the multiple layers the atmosphere is divided into for the simulation. Or it is effectively the fraction of the power radiated from the surface, weighted by the Planck Function of the surface temperature, that is transmitted through the whole mass of the atmosphere to space, albeit mostly not directly (i.e. it’s not the direct surface transmittance, which as you say, is only about 0.05-0.10 or maybe 20-40 W/m^2).
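Something like the Planck-weighted average described here can be written down explicitly. This is a toy sketch only – the band edges and transmittance values below are invented for illustration, not real atmospheric figures:

```python
import math

# Planck-weighted mean transmittance over a set of spectral bands.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance B(lambda, T), W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def avg_transmittance(bands, Ts, n=500):
    """Weighted mean of t(lambda) with weight B(lambda, Ts).
    bands: list of (lam1, lam2, t) tuples covering the spectrum."""
    num = den = 0.0
    for lam1, lam2, t in bands:
        dlam = (lam2 - lam1) / n
        for i in range(n):
            b = planck(lam1 + (i + 0.5) * dlam, Ts) * dlam
            num += t * b
            den += b
    return num / den

# Invented example: opaque vibration bands, a partly open "window".
bands = [(5e-6, 8e-6, 0.1), (8e-6, 13e-6, 0.7), (13e-6, 50e-6, 0.05)]
print(avg_transmittance(bands, 288.0))
```

The result lands between the transmittances of the individual bands, pulled toward whichever band carries most of the surface Planck emission.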

Figures for the global ‘average transmittance’ that I’ve seen tend to be roughly in the 0.23-0.25 range. Or at least that’s what I’ve seen from various Modtran simulation outputs.

RW,
The transmittance is much larger for clear sky situations than it is taking clouds into account. In addition the earlier estimates were much higher even when clouds were taken into account (the value of Trenberth et al 2009 was 0.10). Values like 0.23-0.25 may well correspond to clear sky estimates. Modtran calculations are usually for clear sky cases.

“The transmittance is much larger for clear sky situations than it is taking clouds into account. In addition the earlier estimates were much higher even when clouds were taken into account (the value of Trenberth et al 2009 was 0.10). Values like 0.23-0.25 may well correspond to clear sky estimates. Modtran calculations are usually for clear sky cases.”

As I understand it, the direct surface transmittance of 0.10 estimated by Trenberth virtually all comes from the clear sky, as the direct surface transmittance for the cloudy sky is virtually — if not literally — zero. I understand it to nonetheless be a global average (i.e. 10% of the 390 W/m^2 radiated from the surface), which should — at least in principle — include both the clear and cloudy skies. A big problem with his estimates in general is he does not seem to properly or clearly distinguish between the clear and cloudy skies.

All this being said, I’m fairly sure the global ‘average transmittance’, whether it’s weighted for the clear and cloudy skies or not, is not the global average direct surface transmittance of 0.05-0.10, but rather it’s somehow quantifying the net transparency of the whole mass of the atmosphere from the surface all the way through the TOA after up/down emission/absorption changes from the multiple layers have been accounted for.

Perhaps SoD can weigh in and clarify this? Again, I’m fairly sure the calculation of ‘average transmittance’ is not the direct surface transmittance, nor the amount of direct radiant power from the surface that passes straight through the atmosphere the same as if the atmosphere wasn’t even there.

RW: Transmittance is pretty worthless, since most photons that escape to space are emitted from the middle and upper troposphere, not the surface. Stick with the idea that the radiation traveling along any path is always being modified by both absorption and emission. It’s hard to have one without the other.

Looking at the derivative form of the Schwarzschild equation makes me feel like I understand how radiative transfer calculations are done, but the calculations are a job for Modtran or Hitran:

At any wavelength, the incremental change in intensity of the light (dI) as it passes an incremental distance (ds) along a path depends on the density of GHG molecules in the path (n, sometimes written as a product of the total density and the GHG mixing ratio), the absorption cross-section (o) for that wavelength, the Planck function (B(T)) for the wavelength at the local temperature, and the intensity (I_0) of the light entering the ds path increment. (The equation is usually written in proper differential form ready for integration by multiplying both sides by ds, but this form is easier to look at for me.) n and B(T) vary with altitude, so you need to supply them (often by choosing a “standard” atmosphere such as summer in a temperate zone). For OLR, you numerically integrate over all wavelengths and along the path from the surface to the TOA. I_0 is the blackbody radiation emitted by the surface and it enters the first layer (perhaps 0.3-1 km thick, with an average temperature and composition). Some radiation is absorbed by the layer and some radiation is emitted in all directions, but we are only interested in the upward component of the emission. The upward flux from the first layer enters the second, then the third…. Integrate from space (I_0 = 0) to the surface to obtain DLR, adding the downward component of the emission.

When radiation has passed through a homogenous gas for long enough, absorption and emission come into equilibrium and dI/ds = 0. That makes the radiation blackbody (I_0 = B(T)). Radiation traveling through a gas is always changing toward blackbody intensity at a rate that depends on the number of GHGs and their absorption coefficient. If B(T) is negligible compared with I_0 (for example, I_0 comes from a 4000 K filament in a spectrophotometer lamp), integration of the remaining term gives Beer’s law. If there is no I_0, the equation reduces to the emission of blackbody radiation. If the temperature of the layer makes B(T) less than I_0 (the usual case for OLR), the rate at which the upward flux drops depends on n and o. More GHG and a more strongly absorbing GHG produce a negative dI/ds aka radiative forcing.
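The layer-by-layer marching described above can be sketched in a few lines. This is a minimal single-wavelength, grey-layer toy with an invented temperature and optical-depth profile, not a real radiative transfer code (those integrate line-by-line over wavelength as well):

```python
import math

# March the upward flux from the surface to the TOA. Each layer
# attenuates the incoming flux by exp(-tau) and adds its own emission
# B(T) * (1 - exp(-tau)), with B taken as sigma*T^4 for a grey layer.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(T_surface, layer_temps, layer_taus):
    I = sigma * T_surface**4          # I_0: blackbody surface emission
    for T, tau in zip(layer_temps, layer_taus):
        t = math.exp(-tau)
        I = I * t + sigma * T**4 * (1 - t)
    return I

temps = [280, 260, 240, 220, 220]     # invented layer temperatures, K
taus = [0.4] * 5                      # invented grey optical depths
print(olr(288.0, temps, taus))        # less than sigma*288^4 ~ 390 W/m^2,
                                      # because the emitting layers are colder
```

Running the same march downward from space with I_0 = 0 gives the DLR at the surface, exactly as the comment describes.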

“RW: Transmittance is pretty worthless, since most photons that escape to space are emitted from the middle and upper troposphere, not the surface.”

Yes, I’m pretty sure I understand this, though admittedly at one time I didn’t fully understand it.

“Stick with the idea that the radiation traveling along any path is always being modified by both absorption and emission. It’s hard to have one without the other.”

Yes, I’m pretty sure I understand this as well.

As for the rest of your detailed message, I’m not sure I understand all the details. I’m trying to first make sure I understand the fundamentals involved (i.e. what’s being calculated and measured), and then understanding the finer details, such as what you’re describing, will hopefully come with time.

RW: Measurements of thermal emission of LWR are probably technically challenging since everything is emitting blackbody radiation at these wavelengths. Absorption is much easier, since we can use a lamp at about 4000 K to overwhelm the thermal background. Emission is simply the time reverse of absorption (with the same cross section), so careful absorption spectroscopy provides reliable info about emission (IMO).

The radiative transfer equation only ostensibly calculates the intensities as a function of the temperature. It can be used to calculate the stationary temperature profile, because the sum of all the changes in the intensities must yield zero – otherwise the profile would not be stationary.

For the calculation I prefer a different form of the radiative transfer equation:

pa * dI / dp = I – B

Here, pa is the pressure equivalent of the absorption length along the vertical column. The absorption length depends on pressure, because the density of the molecules depends on pressure. Written in pressure coordinates this dependence drops out, because equal pressure differences contain equal quantities of molecules.

It’s worth noting that I do indeed fully understand that it is the TOA flux change that is the focus of radiative transfer simulation, because this is ultimately what puts the system out of balance and pushes it toward a new equilibrium. However, I understand these simulations do calculate an average net transparency that I do believe is weighted by the Planck Function of the surface temperature, even though — in the case of increased absorption/opacity — much of the increased attenuated flux at the TOA originates from the atmosphere.

It seems like the average net ‘T’ or net transmittance must be some sort of integral that’s weighted by the Planck Function of the surface temperature. That is, it is the fraction of surface radiative power that is considered to be transmitted through to space and the difference (i.e. 1-T) is the fraction of surface radiative power that is considered to be absorbed or attenuated by the atmosphere.

It does make sense to me that the net ‘T’ should be weighted by Planck Function of the surface since the radiating surface is the plane from which the opacity of the whole mass of the atmosphere through to the TOA is being measured.

As far as I know, no radiative transfer model calculates the OLR flux using a weighting based on Planck’s law at the surface temperature and the transmittance through the whole atmosphere. Only some crude descriptions that have other than scientific purposes might do that. It’s possible that this error appears in some educational material for an audience outside climate science.

Every serious radiative transfer model uses for emission the temperature at the point of emission, i.e. the higher in the troposphere the emission takes place, the lower the temperature. This is also the approach in the model described in the series of posts on visualizing atmospheric radiation.

I’m referring to the model discussed in the whole series of posts I gave a link to above. The model code is given in the fifth part of that series, while the other parts discuss mainly properties of that model and results obtained by using it.

I could be wrong, but I’ve done some more thinking and investigating and the average net ‘T’ must be some sort of integral of all the contributing layers that’s weighted by the Planck function of the surface temperature. Or that the TOA flux changes for changes in GHG concentrations — even though most of the TOA flux changes originate from emission in the atmosphere — are quantified as a change that would occur from the radiating plane of the surface, from which the opacity through the whole mass of the atmosphere is being considered.

The definition of the effective radiative temperature tells exactly, what it is:

It’s the temperature of a blackbody that radiates as much as the Earth to the space.

That’s not exactly an average with any easily definable weights. Some weighted average may agree reasonably well with that, but not exactly. It’s better to stick with the definition. Trying to interpret it as a weighted average is probably more confusing than clarifying.