It’s an interesting paper and clearly Miskolczi has put a lot of time and effort into it. I recommend people read the paper for themselves, and the link above provides free access.

The essence of the claim is that the optical thickness of the earth’s atmosphere is a constant – at least over the last 60 years – where water vapor cancels out any change from CO2. So if more CO2 increases the optical thickness, then the optical thickness from water vapor will reduce.

In his paper he makes this statement:

Unfortunately no computational results of EU, ST, A, TA and τA can be found in the literature, and therefore our main purpose is to give realistic estimates of their global mean values, and investigate their dependence on the atmospheric CO2 concentration.

Among the terms noted in this quote, τA is the optical thickness of the atmosphere.

As we delve into the paper, hopefully the reasons why this value isn’t calculated in any papers will become clear. In fact, the first question people should be asking themselves is this:

If the result is of significant importance why has no one else calculated this parameter before?

There are thousands of papers about radiative transfer, CO2 and water vapor.

Why has no one (apparently) published their calculations of the globally averaged optical thickness of the atmosphere and how it has changed over time?

Because optical thickness isn’t an obvious parameter, let’s start with a simpler property called transmittance.

Transmittance is the proportion of radiation which is transmitted through a body (in this case, the atmosphere). We will use the letter “t” to refer to it.

t has a value between 0 and 1. Slightly more formally, we can write 0 ≤ t ≤ 1.

For t = 1, the body is totally transparent to incident radiation.

For t = 0, the body is totally opaque and absorbs all incident radiation.

For non-scattering atmospheres (note 1), absorptance a = 1 − t, which means that whatever is not absorbed is transmitted. This is simple enough, and is what everyone would expect from the First Law of Thermodynamics.

Now for optical thickness. We will use τ for this parameter. τ is the Greek letter “tau”.

The Beer-Lambert law relates the transmittance of a beam of radiation to the optical thickness:

t = exp(-τ)

The “exp” is a mathematical convention for “e to the power of”. So this can alternatively be written as t = e^(−τ).
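A minimal sketch in Python (my own illustration, not from the paper) of how transmittance and absorptance follow from τ:

```python
import math

def transmittance(tau: float) -> float:
    """Beer-Lambert law: fraction of incident radiation transmitted."""
    return math.exp(-tau)

def absorptance(tau: float) -> float:
    """Non-scattering layer: whatever is not transmitted is absorbed."""
    return 1.0 - transmittance(tau)

# A layer of optical thickness tau = 1 transmits ~37% and absorbs ~63%
print(transmittance(1.0), absorptance(1.0))
```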

Optical thickness is tedious to calculate because the properties of each gas vary strongly with wavelength.

In brief, at each wavelength the total optical thickness is equal to the total number of molecules of each gas in the path × the absorption coefficient of that gas (which is a function of wavelength), summed over all the absorbing gases.

So optical thickness is a very handy parameter. Calculating it does take some work and a pre-requisite is a database of all the spectroscopic values for each molecule – as well as knowing the total amount of each gas in the path we want to calculate.
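The additive calculation at a single wavelength can be sketched as follows — the coefficients and column amounts below are hypothetical placeholders for illustration only; real values come from a spectroscopic database such as HITRAN:

```python
# Hypothetical values, for illustration only; real absorption coefficients
# come from a spectroscopic database such as HITRAN.
absorption_coeff_cm2 = {"CO2": 2.0e-22, "H2O": 5.0e-23}   # per molecule, at one wavelength
column_molecules_cm2 = {"CO2": 8.0e21, "H2O": 5.0e22}     # molecules in the path

# At each wavelength: tau = sum over gases of
# (number of molecules in the path) x (absorption coefficient)
tau = sum(column_molecules_cm2[g] * absorption_coeff_cm2[g]
          for g in absorption_coeff_cm2)
print(tau)  # 1.6 (CO2) + 2.5 (H2O) = 4.1
```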

Absorption and Emission

The atmosphere absorbs and also emits.

Absorption, as we have just seen, is a function of the total amount of each gas (in a path) as well as the properties of each gas.

And, in case it is not obvious, the total radiation absorbed is also a function of the intensity of radiation travelling through the body in question. This is because absorption = incident radiation × absorptance.

What about emission?

Emission of radiation is a function of the temperature of the atmosphere, as well as its emissivity, ε. This parameter emissivity is equal to the absorptivity or absorptance, of a body at any given wavelength – or across a range of wavelengths. This is known as Kirchhoff’s law.

Emission = ε·σT⁴ in W/m², where T is the temperature of the atmosphere at that point.
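A one-line sketch of the emission formula (the layer temperature and emissivity below are illustrative values of my own):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission(emissivity: float, temperature_k: float) -> float:
    """Thermal emission of a layer: epsilon * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * temperature_k ** 4

# An atmospheric layer at 255 K with emissivity 0.6 (illustrative numbers)
print(round(emission(0.6, 255.0), 1), "W/m^2")
```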

If we want to calculate the radiative transfer through the atmosphere we need both terms.

Here is a simple example of why. Readers who followed the series Understanding Atmospheric Radiation and the “Greenhouse” Effect will remember that I introduced a simple atmosphere with two molecules, pCO2 and pH2O. These had a passing resemblance to the real molecules, but had properties that were much simpler, for the purposes of demonstrating some important aspects of how radiation interacts with the atmosphere.

This following example has three scenarios. Each scenario has the same total amount of water vapor through the atmosphere, but a different profile vs height. These are shown in the graph:

Figure 1

The bottom graph shows the top of atmosphere (TOA) flux from each of the three scenarios.

If we calculated the total transmittance through the atmosphere it would be the same in each scenario (update: correction – see Ken Gregory’s point below), because the optical thickness is the same. The optical thickness is the same because the total number of pH2O molecules in the path is the same.

Yet the TOA flux is very different.

This is because where the atmosphere emits from is very important in calculations of flux. For example, in the case of the 3rd scenario, the TOA flux is lower because more of the water vapor is at colder temperatures, and less is at hotter temperatures.
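A toy single-layer grey model (my own sketch, not the article's pCO2/pH2O simulation) reproduces the point: identical optical thickness, but a colder emitting layer gives a lower TOA flux:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def toa_flux(surface_t: float, layer_t: float, tau: float) -> float:
    """TOA flux from a surface under one grey, non-scattering layer:
    transmitted surface emission plus the layer's own emission."""
    t = math.exp(-tau)   # transmittance of the layer
    eps = 1.0 - t        # Kirchhoff: emissivity = absorptance
    return t * SIGMA * surface_t**4 + eps * SIGMA * layer_t**4

# Same tau = 1 above a 288 K surface; absorber low and warm (280 K)
# versus high and cold (230 K):
print(round(toa_flux(288.0, 280.0, 1.0), 1))  # higher TOA flux
print(round(toa_flux(288.0, 230.0, 1.0), 1))  # lower TOA flux, same tau
```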

dIλ/dτλ = Bλ(T) − Iλ – where Iλ is the intensity of radiation and τλ the optical thickness along the path – which is also known as Schwarzschild’s equation, and is the fundamental description of changes in radiation as it passes through an absorbing (and non-scattering) atmosphere. Bλ(T) is the Planck function, which is a function of temperature, and the subscript λ in each term identifies the wavelength dependence of this equation.

For the mathematically minded, it will be clear reviewing the above equation that total optical thickness tells you less than you need. As the location of optical thickness varies, if temperature varies (which it does in the atmosphere) then you can get different results for the same optical thickness.

That is, the simulations above demonstrate what is clear, and easily provable, from the form of the fundamental equation.

This is why papers on total optical thickness of the atmosphere over time are hard to come by. It is of curiosity value only.

What About Methane, Nitrous Oxide and Halocarbons?

The total optical thickness of the atmosphere is not just determined by water vapor and CO2. If the atmosphere has an invariant optical thickness then surely all molecules should be included?

According to WM Collins and his co-authors (2006):

The increased concentrations of CO2, CH4, and N2O between 1750 and 1998 have produced forcings of +1.48, +0.48, and +0.15 W m⁻², respectively [IPCC, 2001]. The introduction of halocarbons in the mid-20th century has contributed an additional +0.34 W m⁻² for a total forcing by WMGHGs of +2.45 W m⁻² with a 15% margin of uncertainty.

I’m sure someone with enough determination can find some results for the changes in radiative forcing from CH4 and N2O between 1950 and 2010. But this at least demonstrates that other molecules have significant absorption characteristics. After all, in just half a century halocarbons have added about a quarter of the radiative forcing that CO2 added over the much longer period from 1750 to the present day.

So if total optical thickness from CO2 and water vapor has stayed constant over 60 years then surely total optical thickness must have increased?

This is not mentioned in the paper and seems to be a major blow to the not-particularly-useful result calculated.

Update, 31st May: Ken Gregory, a Miskolczi supporter armed with the spreadsheet of calculations, says that minor gases were kept constant. So Part Six demonstrates my basic calculations of optical thickness changes due to CO2 and some minor gases.

Cloudy Thinking

Miskolczi says:

In all calculations of A, TA, τA, and of the radiative flux components, the presence or absence of clouds was ignored; the calculations refer only to the greenhouse gas components of the atmosphere registered in the radiosonde data; we call this the quasi-all-sky protocol. It is assumed, however, that the atmospheric vertical thermal and water vapor structures are implicitly affected by the actual cloud cover, and that the atmosphere is at a stable steady state of cloud cover.

Clouds reflect 48 W/m² of solar radiation but reduce the outgoing longwave radiation (OLR) by 30 W/m², therefore the average net effect of clouds – over this period at least – is to cool the climate by 18 W/m².

Are they constant?

Here is a snapshot from Vardavas & Taylor (2007):

From Vardavas & Taylor (2007)

Figure 2

Another important point – given the non-linearity of the equations of radiative transfer, even if the cloud cover stayed at a constant global percentage but the geographical distribution changed, the optical thickness of the atmosphere cannot be assumed constant.

Here are some values of cloud emissivity from Hartmann (1994):

From Hartmann (1994)

Figure 3

Just for some perspective, as emissivity reaches 0.8, τ = 1.6; with emissivity = 0.9, τ = 2.3. And Miskolczi calculates the global average optical thickness of the atmosphere – without clouds – at 1.87.
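These τ values follow from inverting a = 1 − e^(−τ) (assuming a grey, non-scattering layer, so that emissivity equals absorptance):

```python
import math

def tau_from_emissivity(eps: float) -> float:
    """Invert eps = 1 - exp(-tau) for a grey, non-scattering layer."""
    return -math.log(1.0 - eps)

print(round(tau_from_emissivity(0.8), 2))  # 1.61
print(round(tau_from_emissivity(0.9), 2))  # 2.3
```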

At the end of his paper, Miskolczi concludes:

Apparently, the global average cloud cover must not have a dramatic effect on the global average clear-sky optical thickness.

I can’t understand, from the paper, where this confidence comes from.

Conclusion

There is more in the paper, including some very suspect assumptions about radiative exchange. However, six out of the 19 references in the paper are to Miskolczi himself and the fundamental equations brought up for energy balance (where radiative exchange is referenced) rely on his more lengthy 2007 paper, Greenhouse effect in semi-transparent planetary atmospheres.

I will try to read this paper before commenting on these energy balance equations.

However, the key points are:

optical thickness of the total atmosphere is not a very useful number

the useful headline number has to be changes in TOA flux, or radiative forcing, or some value which expresses the overall radiative balance of the climate system (update: see this comment for the correct measure)

optical thickness calculated as constant over 60 years for CO2 and water vapor appears to prove that total optical thickness is not constant due to increases in other well-mixed “greenhouse” gases

clouds are not included in the calculation, but surely overwhelm the optical thickness calculations and cannot be assumed to be constant

Other Articles in the Series:

Part Two – Kirchhoff – why Kirchhoff’s law is wrongly invoked, as the author himself later acknowledged, from his 2007 paper

Part Three – Kinetic Energy – why kinetic energy cannot be equated with flux (radiation in W/m²), and how equation 7 is invented out of thin air (with interesting author comment)

Part Four – a minor digression into another error that seems to have crept into the Aa=Ed relationship


60 Responses

SoD,
Thank you for the post and your interest in Miskolczi’s work. I hope we will be able to clarify all the points you have raised here (and also those you haven’t). But first let’s allow some time to the readers to look at the papers and think themselves.

You say:
1. optical thickness of the total atmosphere is not a very useful number

My reply:
– Global average atmospheric greenhouse-gas optical thickness is the measure of the infrared absorptive capacity of the atmosphere – that is, of the greenhouse effect. If the global average infrared absorptive capacity of the atmosphere is not a useful measure of the greenhouse effect, then what is?

You say:
2. the useful headline number has to be changes in TOA flux, or radiative forcing, or some value which expresses the overall radiative balance of the climate system

My reply:
– The infrared greenhouse-gas absorptive capacity of the atmosphere need not have to be changes in TOA flux, neither radiative forcing. It is the quantity itself that expresses the overall radiative balance of the system.

You say:
3. optical thickness calculated as constant over 60 years for CO2 and water vapor appears to prove that total optical thickness is not constant due to increases in other well-mixed “greenhouse” gases

My reply:
– Optical thickness was calculated over 60 years for CO2 and water vapor and other 9 IR-active molecular species (O3, N2O, CH4, NO, SO2, NO2, CCl4, F11 and F12), and turned out to be strictly fluctuating around a theoretically predicted equilibrium value, not showing the awaited strong change with CO2-increase.

You say:
4. clouds are not included in the calculation, but surely overwhelm the optical thickness calculations and cannot be assumed to be constant

My reply:
– Clouds are not included in the clear-sky (cloudless) IR greenhouse gas absorption calculation (giving 80% of the total, ~28°K), and at no rate are assumed to be constant. The all-sky incoming SW available energy is heavily dependent on the average cloud cover (albedo), and is an input parameter of the system, not influencing the IR absorptive property.

Optical thickness was calculated over 60 years for CO2 and water vapor and other 9 IR-active molecular species (O3, N2O, CH4, NO, SO2, NO2, CCl4, F11 and F12), and turned out to be strictly fluctuating around a theoretically predicted equilibrium value, not showing the awaited strong change with CO2-increase.

What strong change? As I pointed out on another thread, you can get a large change in surface temperature with only a small change in τ. Where is the calculation of τ from the standard model that shows that τ varies more than the measured data?

Optical thickness was calculated over 60 years for CO2 and water vapor and other 9 IR-active molecular species (O3, N2O, CH4, NO, SO2, NO2, CCl4, F11 and F12), and turned out to be strictly fluctuating around a theoretically predicted equilibrium value, not showing the awaited strong change with CO2-increase.

Which page is that on? I just can’t see it.

I did find this buried in the maths explanation (which I should have picked up):

Clouds are not included in the clear-sky (cloudless) IR greenhouse gas absorption calculation (giving 80% of the total, ~28°K), and at no rate are assumed to be constant. The all-sky incoming SW available energy is heavily dependent on the average cloud cover (albedo), and is an input parameter of the system, not influencing the IR absorptive property.

I’m not clear what you are saying, because it seems like you are saying clouds don’t affect IR. But you wouldn’t be saying that because clouds do influence the longwave absorption very considerably. The ERBE results from 1985-89 show a reduction in OLR of 30 W/m2 due to clouds.

Is the claim that the clear sky optical thickness is invariant over time, but the cloudy sky optical thickness isn’t?

Please look at page 19 of the 2007 paper. Clouds play a central role in the story, both by their SW and LW characteristics. The claim is that the ALL-SKY (global average clear+cloudy) optical thickness is invariant over time.

G = σT⁴ – F, where G is the “greenhouse” effect, F is outgoing longwave radiation (OLR) at top of atmosphere (TOA), and T is surface temperature.

F will match the absorbed solar radiation over the long term. Therefore, if absorbed solar radiation is a constant, F will also tend towards a constant and so G will increase if T increases due to increased opacity in the atmosphere.

(Note: this relies on the earth coming back into radiative balance via changes in surface temperature).
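A quick sketch of this reasoning, using an illustrative F of ~240 W/m² (my number for absorbed solar radiation, not from the comment):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def greenhouse_effect(surface_t: float, olr: float) -> float:
    """Ramanathan's G = sigma*T^4 - F: surface emission minus OLR."""
    return SIGMA * surface_t**4 - olr

# If F stays pinned near absorbed solar radiation (illustrative ~240 W/m^2),
# a warmer surface means a larger greenhouse effect G:
print(round(greenhouse_effect(288.0, 240.0), 1))
print(round(greenhouse_effect(289.0, 240.0), 1))
```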

You say:
1. optical thickness of the total atmosphere is not a very useful number

My reply:
– Global average atmospheric greenhouse-gas optical thickness is the measure of the infrared absorptive capacity of the atmosphere – that is, of the greenhouse effect. If the global average infrared absorptive capacity of the atmosphere is not a useful measure of the greenhouse effect, then what is?

Well, that’s what the article explained.

The infrared absorptive capacity of the atmosphere can be constant, yet with varying emission of thermal radiation.

This is one of the reasons why there has been such discussion about the height of water vapor changes over the last 20 years.

For example: How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory,
Spencer & Braswell, Bulletin of the American Meteorological Society (1997)

Sensitivity of the Earth’s Climate to height-dependent changes in the water vapor mixing ratio, Shine & Sinha, Nature (1991)

Some Coolness concerning Global Warming, Lindzen, Bulletin of the American Meteorological Society (1990)

These are just a few of the tens (hundreds?) of papers on the subject of the non-linearity of the OLR of water vapor.

I clearly remember reading an article that explained how the climate shifts between the glaciated and unglaciated homeostatic regimes, and that shifting back and forth is unavoidable. If I can find it, I will post it to you. It gave some solid math which elaborated upon g = 1/3.

Maybe someone can tell me why a simple explanation does not lead to a constant optical thickness.

Water vapor is a condensing gas. It condenses when it cools. If you add additional GHGs to the atmosphere then the water vapor in the atmosphere will be blocked from absorbing as much radiation as it could without those GHGs. Hence, it would cool ever so slightly. This would lead to increased condensation and a net lowering of the total water vapor in the atmosphere.

Is anything else required or is this effect too small to be important?

I agree with you; a simple explanation of constant “tau” should be possible. It would probably not give precisely the same value that Miskolczi gets, but it should be fairly close and have the virtue of being much more understandable in terms of the main physical processes already studied in climate science. I have tried out a couple of ideas, but, so far, they give “tau” values that are too far off.

Those are just lines on a graph. Where’s the statistical analysis that shows that the theoretical trend is, in fact outside the 95% confidence limits of the data? Eyeballing the data, I seriously doubt that it is.

The statistical analysis in M2010 is a joke:

The linear regression coefficient of the actual values of τ against time has a Student t value of 0.499. This is not even nearly statistically significantly different from zero. The Student t value that would correspond to the theoretically calculated virtual effect of actual CO2 is 1.940.

But it isn’t St that changes in the standard model, it’s Su. A 1 W/m² change in Su while St remains constant produces a much smaller change in τ. For Su = 379.64 W/m² and St = 58.54 W/m², τ = 1.86951. If Su increases by 1 W/m² then τ = 1.87214, an increase of 0.14%. Even if we assume that the average surface temperature increased by 0.8 C over the 60-year period, causing an increase of 4.3 W/m² in Su, τ changes by 0.6% rather than the 0.9% in your example.
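The figures in this comment are consistent with τ = ln(Su/St); a quick check (assuming that relation, which the quoted numbers imply):

```python
import math

def tau(su: float, st: float) -> float:
    """tau = ln(Su/St); Su is surface upward flux, St the transmitted part."""
    return math.log(su / st)

base = tau(379.64, 58.54)
print(round(base, 5))  # 1.86951, as quoted
# Su up by 1 W/m^2, St fixed:
print(round(100.0 * (tau(380.64, 58.54) - base) / base, 2))  # ~0.14 %
# Su up by 4.3 W/m^2 (a ~0.8 C surface warming):
print(round(100.0 * (tau(383.94, 58.54) - base) / base, 2))  # ~0.6 %
```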

I am sorry guys, but you all seem to be missing a biggie. Optical thickness is not the only factor in vertical heat transfer; convection is also a big player. SOD is correct that the location of outgoing radiation is more important than just total optical thickness, but it is the combination of radiative and convective heat transfer that carries the heat upward. The lack of understanding of the trade-off between convection and radiation, the lack of reasonable understanding of cloud dynamics, and the two facts that temperature has not risen significantly in the last decade and that total water vapor seems not to have increased either, make all of your arguments of little point.

The most important law in the background of Miskolczi’s system is probably a thermodynamical form of the principle of energy minimum: the most effective cooling. This keeps the K term (convection, heat conduction, meridional advection, turbulence, sensible and latent heat etc.) at its maximum, optimizing the global average Eu , and therefore powerfully constraining St and G.

This fact is a direct arithmetic consequence of Miskolczi’s first empirical rule, Aa = Ed, resulting in the new form of Ramanathan’s G = Su − OLR as G = Ed − Eu, meaning that the greenhouse effect equals the downwelling radiative heating minus the upwelling convective cooling – as it must be in our radiative-convective atmosphere.

Miklos Zagoni said (in response to my question of April 22, 2011 at 10:41 pm):

Please look at page 19 of the 2007 paper. Clouds play a central role in the story, both by their SW and LW characteristics. The claim is that the ALL-SKY (global average clear+cloudy) optical thickness is invariant over time.

We can turn back to this important point later.

I agree they play a central role in “the story”. That’s why their absence in the 2010 paper is confusing – p245 (3rd page of the paper) says:

In all calculations of A, TA, τA, and of the radiative flux components, the presence or absence of clouds was ignored; the calculations refer only to the greenhouse gas components of the atmosphere registered in the radiosonde data; we call this the quasi-all-sky protocol.

Emphasis added.

I look forward to a clear statement on clouds in the computation of tau.

The 2010 paper talks only about the optical thickness of CO2 and water vapor.

A mere hint is given via this obscure comment within the maths statements:

N = 11 is the total number of major absorbing molecular species

– that perhaps other gases have been considered. And this hint doesn’t reference any other papers.

The 2010 paper cites 17 papers, including the 2004 paper.

The 2004 paper that you refer to says:

The atmosphere was stratified using 32 exponentially placed layers with about 100 m and 10 km layer thickness at the bottom and the top, respectively. The full altitude range was set to 61 km and the slant path was determined by spherical refractive geometry. The upward and downward slant path were identical, which assured that the directional spectral transmittances for the reverse trajectories were equal. Altogether eleven absorbing molecular species were involved: H2O, CO2, O3, N2O, CH4, NO, SO2, NO2, CCl4, F11, and F12.

The reader of the 2010 paper is supposed to understand that the 2004 paper is listing the reference molecules in the 2010 paper?

What concentration change is assumed about each molecule over the time period?

For reference, this is the kind of minimum standard for papers on this subject (from WM Collins, 2006, see References):

Collins lists the concentration of each gas in each scenario and references that for the various model runs of radiative forcing.

And this paper is claiming nothing novel or revolutionary, just business as usual inter-model comparisons.

For papers claiming revolutions in climate science this minimum standard is well below what would be expected.

Maybe someone can tell me why a simple explanation does not lead to a constant optical thickness.

Water vapor is a condensing gas. It condenses when it cools. If you add additional GHGs to the atmosphere then the water vapor in the atmosphere will be blocked from absorbing as much radiation as it could without those GHGs. Hence, it would cool ever so slightly. This would lead to increased condensation and a net lowering of the total water vapor in the atmosphere.

The processes are very complex. Each gas has different absorption characteristics. Gases absorb and emit radiation. Emission depends on temperature of the gas. Absorption does not depend on the temperature of the gas.

So if you add GHGs, water vapor will not be blocked from absorbing as much radiation as it could without these GHGs. Well, strictly speaking, as water vapor absorbs throughout the longwave spectrum there will be a very slight effect. But a non-linear one. Totally different for halocarbons in the 8-12 μm range compared with CO2 at 15 μm – for example.

And why will it cool? The atmosphere where the GHGs have been added will warm slightly in the first instance.

SOD: Having personally made hundreds of measurements of amounts of material in terms of absorbance (optical density units), I find that tau is a much more tangible measure of the amount of GHG in the atmosphere than TOA flux. It is easy to confuse calculated TOA imbalance (or radiative forcing), caused by increasing GHGs or other perturbations, with actual TOA flux, which must remain equal to F0 in the long term. However, I agree that TOA imbalance is far more relevant to GW. To make his work more accessible to other scientists, Miskolczi should have reported his results in terms of both TOA imbalance and tau. However, I doubt that seeing Miskolczi’s results presented in terms of TOA would make them any more palatable to believers in the IPCC consensus. So your discussion about tau vs TOA appears to be a diversion from the real issue.

In my limited understanding, the fundamental problem arises from the fact that old radiosonde measurements show that the relative humidity of the upper troposphere has dropped as the earth has warmed. Miskolczi has reported this water vapor data – combined with other GHG changes – in units of tau, which hasn’t increased as expected with time. He or someone else could use the same data to report changes in terms of a calculated TOA imbalance/radiative forcing. The units are irrelevant; the bottom line is that the IPCC’s conclusions about water vapor feedback would be grossly incorrect if the old radiosonde data were correct. However, even Roy Spencer admits that this old data can’t be trusted.

The situation leads to a number of questions: a) Why hasn’t anyone attempted to re-analyze the old data? If that problem could be solved, it would provide the longest record of water vapor feedback driven by the largest change in temperature. b) How can scientists like Susan Solomon report on changes in the more difficult problem of stratospheric water vapor (in terms of TOA forcing) when water vapor in the upper troposphere is such a problem?

Both Ramanathan and Miskolczi failed to accurately differentiate between global and “clear-sky” analyses. Neither wants readers to worry about how to interpret conclusions that apply only to the ever-changing fraction of the sky where clouds are thin enough to be considered clear.

The OLR were computed for the NOAA R1 by NOAA; you can download their 61-year annual mean OLR from the same website. And, obviously, the OLR increased, as it should according to Su = OLR/f. I also computed the OLR for the 61 years and it also increased. But regarding the AGW – CO2 greenhouse hypothesis, the OLR has nothing to do with it; it is irrelevant, and the key parameter is the tau. If tau is constant then there is no increased backradiation, no matter how much the CO2 or the OLR or Fo is.

The reason the OLR is a useless parameter in this context is twofold. First, it has the St and Eu partition, which cannot be measured separately. Second, the OLR can change due to changes in Fo (or system albedo) and the IR emissivities, and there is no way to tell which changes are related to the CO2 greenhouse effect only.

What SoD is proposing is total nonsense. If you want to study the greenhouse effect, study the absorption properties of the atmosphere directly, based on observations. Where we are now is that the CO2 is increasing (yes), and the tau is constant (yes). Conclusion: the dynamics of the system takes care of the increased CO2, and prefers the mystery tau = 1.87. This means that the stochastic atmospheric environment (unlike GCMs) does not take the Beer-Lambert law at face value, and there are higher-order principles of physics that govern the atmospheric absorption.

Ferenc, thank you for taking the time to reply to my comment. Reading your reply and some of SOD’s, it seems to me that we are discussing OLR/TOA flux from two different points of view. First, there is the “true” OLR, which can be measured from space or calculated for observed or specified atmospheric conditions (temperature profile, well-mixed GHG mixing ratios, humidity profile). The “true” OLR is equal to St plus Eu. When the earth is not warming or cooling, OLR must be equal to F0 in the long run. Second, there is radiative forcing, the TOA flux imbalance or change in OLR, which can be calculated when the composition of the atmosphere is abruptly changed. In the real world, this calculated imbalance is transitory and only lasts until the climate warms or cools enough to restore balance.

The 61-year record of OLR Ferenc mentions above presumably refers to “true” OLR calculated from historical weather data. “True” OLR is not directly related to tau; if tau is increased by additional CO2 and “true” OLR decreases temporarily, the earth may warm until equilibrium is restored and OLR again equals F0. Assuming I understand correctly, Ferenc believes that the earth responds, not by warming, but by returning tau to its original value by decreasing water vapor.

SOD wants to know how OLR transiently changes in response to changes in the composition of the atmosphere. Tau doesn’t tell him what he wants to know; he has written at least one post on papers which show that adding the same quantity of water vapor to the atmosphere at different altitudes creates different amounts of radiative forcing: https://scienceofdoom.com/2010/09/18/clouds-and-water-vapor-part-three/ So tau can be the same and the temperature profile of the atmosphere can be the same, but warming will be different. The warming is calculated from transient OLR imbalance, not OLR itself.

Thinking simplistically, I’d like to be able to take “today’s atmosphere” with tau 1.87 and the “atmosphere 60 years ago” also with tau 1.87 and calculate the difference in OLR coming out of the top of each. The problem is that the result won’t tell me anything useful about the greenhouse effect because the temperature at each altitude has changed in 60 years. We could take one temperature profile and look at the TOA imbalance that is created by switching from the 60-year old composition to today’s composition. If less OLR leaves the top of today’s atmosphere in this calculation (even though tau hasn’t changed), we might conclude that the changing composition of the atmosphere was responsible for a radiative imbalance that was corrected by some warming.

A simpler way to address the problem is to use Ramanathan’s definition of the greenhouse effect G: OLR = σTs⁴ − G. Apparently we have 60 years of calculated OLR to match up with surface temperatures.

Minor (and late note). This is a common mistake of spectroscopists who are used to light sources which are much hotter than the gas they are studying.

Folks who do combustion, solar physics and atmospheric measurements understand that in such systems, where the light source(s) are at the same temperature as the absorbers, you cannot naively use the absorption coefficient.

To make his work more accessible to other scientists, Miskolczi should have reported his results in terms of both TOA imbalance and tau. However, I doubt that seeing Miskolczi’s results presented in terms of TOA would make them any more palatable to believers in the IPCC consensus. So your discussion about tau vs TOA appears to be a diversion from the real issue.

Who cares about palatability to believers?

Being interested in science I ask – How do you know it’s a diversion unless the TOA imbalance is calculated?

The high level of interest in scientific circles in upper tropospheric (and stratospheric) water vapor is because it is easy to demonstrate by theory and measurement that small amounts of water vapor at high altitudes have disproportionate effects.

This might have an insignificant effect on total global tau yet a significant effect on OLR.

This is also why I asked for a table of values of tau by year or month, so that I can see how it relates to the real “greenhouse” effect. (Not that I have that data to hand, but armed with measurements of tau I will put some effort into finding it).

I have no idea whether global calculations of tau correlate well with (upward surface flux – OLR) or not. But it seems like a fundamental question.

So fundamental that it is surprising it isn’t addressed in any of Miskolczi’s papers.

And saying it’s probably not important is the kind of thinking that creates the ED = AA confusion (see Part Two), where Miskolczi’s paper indicates a thermodynamic identity, the two top supporters claim it is a thermodynamic identity, and Miskolczi arrives to say, “no, it’s an approximate equality” – but still with no quantification of this to be seen.

“Figure 5. Violation of the radiative exchange equilibrium law. In each plot the negative spectral Ed is plotted for clarity. The gray shaded area indicates AA − ED. The global average bias is about 3 % of SU.”

And the related case studies are pretty good quantitative representations of the different Aa-Ed situations in different atmospheric structures. If this is not sufficiently clear, then I can not help.

“Being interested in science I ask – How do you know it’s a diversion unless the TOA imbalance is calculated?

The high level of interest in scientific circles in upper tropospheric (and stratospheric) water vapor is because it is easy to demonstrate by theory and measurement that small amounts of water vapor at high altitudes have disproportionate effects.

This might have an insignificant effect on total global tau yet a significant effect on OLR.”

You are, as usual, fundamentally correct in your comments. However, some sources suggest that there have been large decreases in water vapor in the upper troposphere in the last half-century:

When upper-atmosphere drying trends this large (see Figures 3, 4, 10) are seen in older radiosonde data, I can understand how Miskolczi could have calculated a relatively constant tau from similar data. One can express this drying trend in terms of relative humidity, absolute humidity, tau (CO2 up, H2O down, little net change), radiative forcing, or temperature change for that forcing (Figure 10). From my perspective, knowing IF these humidity trends are reliable is more important than how they are described. When one calculates a radiative forcing, one holds the temperature profile of the troposphere constant and changes the composition of the atmosphere. With radiosonde data, both the composition and the temperature profile of the atmosphere have changed, so the change in OLR is not the same as radiative forcing.

b) How can scientists like Susan Solomon report on changes in the more difficult problem of stratospheric water vapor (in terms of TOA forcing) when water vapor in the upper troposphere is such a problem?

What is the relationship between these two problems that you are getting at?

Are you comparing the problems of 60-year old radiosonde data of the upper troposphere with HALogen Occultation Experiment (HALOE) measurements of the stratosphere over the last decade or so?

“Figure 5. Violation of the radiative exchange equilibrium law. In each plot the negative spectral Ed is plotted for clarity. The gray shaded area indicates AA − ED. The global average bias is about 3 % of SU.”

The reason it is more important than a passing comment is that it is introduced as a foundation of your theory in your 2007 paper. This equality – or now “approximation” – is an equation upon which the rest of your theory is based.

Your 2007 paper introduced it as an equality not as an approximation and there is no discussion in any case of what the factors are which affect the value of this approximation, or the size of the approximation.

Your 2010 paper “papers” over this by apparently endorsing the 2007 paper theory, by citing “Miskolczi 2007” for equation 5 rather than bringing this subject into focus and explaining the “real theory”.

Nevertheless, you have a passing comment about the approximation in a later paper.

I will accept that this is your claim about the relationship.

Perhaps when you have covered some of the more important matters already raised in these articles we can go back and discuss what impact ED = 0.97AA instead of ED = AA has on your theory.

Questions outstanding that I have raised in these articles:

1. Rationale for calculating tau for clear skies only (this part)
2. Values of GHG concentrations used in the time-series of tau in the 2010 paper (this part)
3. Proof of equation 7 in the 2007 paper (part three)
4. Proof that kinetic energy can be equated with flux (part three)
5. How ED/AA can increase as εG is given a realistic 0.96 value (part four)

I was referring to her 2010 Science paper and hoping someone knowledgeable would respond by saying that current technology is or isn’t much more reliable. Almost half of the time period covered by Figure A was from one radiosonde site before satellite data became available. There may be perfectly good reasons why the radiosonde data used in this paper is reliable while older radiosondes making (easier?) measurements lower in the atmosphere aren’t reliable. Or it may be that marginal data gets through peer review in some cases while being completely rejected in others.

The size of the one-standard-error bars in Figure 1A and the lack of error bars on Figure 3C were distressing. How could any peer reviewer not request a more complete statistical analysis?

What SoD is proposing is total nonsense. If you want to study the greenhouse effect, study the absorption properties of the atmosphere directly, based on observations. Where we are now is that CO2 is increasing (yes), and tau is constant (yes). Conclusion: the dynamics of the system takes care of the increased CO2, and prefers the mysterious tau = 1.87. This means that the stochastic atmospheric environment (unlike GCMs) does not take the Beer-Lambert law at face value, and there are higher-order principles of physics that govern the atmospheric absorption.

I think if we said that Ferenc Miskolczi’s theory about why tau must be immutable was correct – then, and only then, could we say that my proposal is nonsense.

If your theory is not correct then even with observational evidence of constant tau (neither accepted nor denied by me) there is the necessity of proving that constant tau also produces a constant “greenhouse” effect.

Of course, as I stated earlier, OLR is not the whole story. The point is that it can change even with a constant tau (until such time as the Miskolczi theory is proven).

I find it helpful to break apart different elements of a theory. If we do accept this part what are the consequences, if we don’t accept this part what are the consequences, and so on.

It would be useful to know if you are stating that my proposal is nonsense with the pre-requisite that your theory is correct OR you believe my proposal is nonsense even if your theory (perish the crazy thought) is not correct.

But what would be even more useful is for you to address the various points made, especially about your theory, which – so far, pending your explanation – fails the basic test of putting together simple equations.

– Equation 7 in the 2007 paper?
– Kinetic energy = flux?

With either of these equations falsified the theory appears to have no foundation.

The article shows a graph of three water vapour profiles, all with the same total amount of water vapour.

SoD said:
“If we calculated the total transmittance through the atmosphere it would be the same in each scenario. Because the optical thickness is the same. The optical thickness is the same because the total number of H2O molecules in the path is the same.”

This is not correct. The transmittance doesn’t just depend on the number of molecules of greenhouse gases, it depends on the temperature and pressure at each layer. An increase in temperature, keeping the profiles constant, increases the transmittance. This is clearly demonstrated by Figure 11 of M2010, the dashed curve. An increase in pressure greatly reduces transmittance due to line broadening.

Some time ago I requested F. Miskolczi to perform a test using HARTCODE to investigate the effect on optical depth (OD) of adding water vapour to an atmospheric layer at the surface and in the upper atmosphere. The result of this test is here: http://members.shaw.ca/sch25/Ken/h2o.pdf

This is a PDF file with five graphs. The first graph shows the effect on optical depth of adding and removing certain volumes of water vapour from layers surface to 848 mbar and from 423 mbar to 307 mbar.

Note that adding 0.06323 prcm water vapour to the upper layer, a 90% increase for that layer, causes only a 0.0029 increase in OD, but adding the same water volume to the surface layer, a 4.35% increase, causes a 0.0285 increase in OD. This is 9.7 times as much, so SoD’s guess that “The optical thickness is the same because the total number of H2O molecules in the path is the same.” is very mistaken.

The reason a change of water vapour near the surface has a larger effect on OD than in the upper atmosphere is line broadening from the higher pressure.

The second graph of the PDF shows a change of water vapour in the surface layer has a much larger effect on St (the part of the surface flux transmitted directly to space) than the same change in an upper layer.

The third graph shows opposite effects in the two layers on Eu (the part of OLR emitted from the atmosphere). Adding water vapour to the surface layer increases Eu, but adding the same amount of water vapour to the upper layer decreases Eu.

The fourth graph is the most important. It shows the effect of water vapour on outgoing longwave radiation (OLR), which is the sum of St and Eu. The graph shows that adding water vapour to the upper layer reduces OLR by 41 times more than the same change in the surface layer. A greenhouse forcing is caused by changing the OLR so that there is an imbalance between incoming shortwave radiation and OLR. But water vapour is declining in the upper atmosphere, the opposite of model predictions, just where it has a very large effect on OLR. Consequently, the reduction in water vapour in the upper atmosphere offsets both the increase in CO2 and the increase in water vapour near the surface. Both radiosonde measurements and satellite measurements confirm the reduction in water vapour in the upper atmosphere as shown here: http://www.friendsofscience.org/assets/documents/FOS%20Essay/Climate_Change_Science.html#Water_vapour

SoD said:
“If we calculated the total transmittance through the atmosphere it would be the same in each scenario. Because the optical thickness is the same. The optical thickness is the same because the total number of H2O molecules in the path is the same.”

This is not correct. The transmittance doesn’t just depend on the number of molecules of greenhouse gases, it depends on the temperature and pressure at each layer. An increase in temperature, keeping the profiles constant, increases the transmittance. This is clearly demonstrated by Figure 11 of M2010, the dashed curve. An increase in pressure greatly reduces transmittance due to line broadening.

Despite my incorrect statement about why the transmittance would remain the same, my main point stands: it is possible for transmittance to remain the same while OLR changes, because emission height (and therefore temperature) is a key factor in OLR.

Optical thickness does not give the complete picture.

Your graph 4 just confirms this point.

And so the transmittance of the atmosphere is only one component, and reporting on the change in the “greenhouse” effect – Surface flux less top of atmosphere flux – is more important.

If it turns out that water vapor has declined that will be a very interesting topic for discussion. But here in this article I am trying to cover foundations.

If yourself and Ferenc Miskolczi would like to revise your claims to simply that water vapor has declined against model predictions that will be a very interesting discussion.

Or that the “real greenhouse effect” (Surface flux less TOA flux) has stayed constant.

But if yourself and Ferenc Miskolczi want to claim that optical thickness is the correct measure of the inappropriately-named “greenhouse” effect then you need to defend this point.

Right now you have pointed out a minor error in my claim (I agree) YET apparently agreed with my main point that optical thickness is not the best measure.

The graph at cell AP50 shows these two curves from 1948 to 2008, and the graph at cell AQ70 shows from 1960 to 2008. The first graph shows the Gn decline trend of 0.27% over 61 years. I think the NOAA data does not have good coverage in the 1950s, and this data may be less reliable, so the second graph starts in 1960. We expect the relative humidity near the surface to stay constant with global warming because 70% of the surface is covered with water. Air immediately above the oceans is in equilibrium with the water, so its RH should stay constant with warming.

The graph here shows constant RH from 1960.

At least near the surface, there is no declining water vapour bias in the NOAA data.

The second graph in the excel spreadsheet shows that the best fit trend of Gn increased by 0.18% in 49 years. Meanwhile, the OD increased by 0.45%; see the calculation immediately below the graph, the % change is the change/average value.

Interested in the idea of what impact the minor “greenhouse” gases have had on optical thickness over the last 60 years, I have been developing my line by line model further and introducing other gases from the HITRAN database.

And in doing so realized that – of side interest only – the optical thickness is not really the (alleged) constant.

It is transmittance that is really the (alleged) constant. Planck-weighted optical thickness can increase dramatically with no noticeable impact on transmittance.

And of course the calculation in M2010 (and M2007) is really Planck-weighted transmittance converted back to optical thickness.

For those few who might be interested..

Suppose you have only 2 parts of the spectrum, A & B. And suppose A is 80% of the weighted spectrum and B is 20%:
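A minimal numerical sketch of this two-band case (the band optical depths below are invented for illustration):

```python
import math

def effective_tau(weights, taus):
    """Convert per-band optical depths to a band-averaged transmittance,
    then back to a single effective optical thickness: tau_eff = -ln(t)."""
    t_total = sum(w * math.exp(-tau) for w, tau in zip(weights, taus))
    return -math.log(t_total)

weights = [0.8, 0.2]  # Planck-weighted shares of bands A and B (assumed)

tau_before = effective_tau(weights, [0.5, 5.0])   # band B moderately opaque
tau_after = effective_tau(weights, [0.5, 50.0])   # band B ten times thicker

# A naive weighted mean of the band optical depths jumps from 1.4 to 10.4...
naive_before = 0.8 * 0.5 + 0.2 * 5.0
naive_after = 0.8 * 0.5 + 0.2 * 50.0

# ...while the transmittance-derived tau barely moves (~0.72 either way),
# because band B was already nearly opaque and transmits almost nothing.
```

This is the sense in which the transmittance-derived (M2010-style) optical thickness can stay almost constant even when one part of the spectrum becomes far more opaque.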

Where’s the statistical analysis that shows that the theoretical trend is, in fact outside the 95% confidence limits of the data? Eyeballing the data, I seriously doubt that it is.

Ken Gregory’s spreadsheet had the data to reproduce the global average IR absorption anomaly graph in slide 16 of the AGU presentation. I did a regression analysis in Excel and added the 95% confidence limits for the regression to the graph. The 95% confidence limits on the slope of the regression are -0.0016 to 0.0027. That means a total change in absorption over the 61-year time period of 0.16 would still be inside the 95% confidence limits. Unfortunately, the CO2-only data isn’t in the spreadsheet, but you can overlay the plots and see that the CO2 line is inside the 95% limits. So the method is not, in fact, sensitive enough to show the signal, as I predicted.
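A sketch of this kind of slope confidence-limit check, with an invented noise series standing in for the anomaly data (none of the numbers below come from the spreadsheet):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1948, 2009).astype(float)  # 61-year period
# Invented stand-in for the absorption anomaly series: noise, no imposed trend
anomaly = rng.normal(0.0, 0.05, size=years.size)

# Ordinary least squares slope and its standard error
n = years.size
x = years - years.mean()
slope = (x @ anomaly) / (x @ x)
resid = anomaly - (anomaly.mean() + slope * x)
stderr = np.sqrt((resid @ resid) / (n - 2) / (x @ x))

t_crit = 2.0  # ~t(0.975, df = 59); the exact value is 2.001
lo, hi = slope - t_crit * stderr, slope + t_crit * stderr
# A hypothesized trend is distinguishable from this data only if it
# falls outside the interval [lo, hi]
```

The test is then simply whether the CO2-only trend line lies outside [lo, hi]; if not, the method cannot resolve the signal.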

Question: How is Su determined when calculating absorption and emission from radiosonde profiles? If you’re using the 2m air temperature as being equal to the surface temperature, you’re assuming your conclusion that ta = tg. On a clear calm day, there can be very large surface temperature gradients when the sun is high above the horizon. On a calm night, the surface temperature can be much lower than the 2m air temperature. An IR thermometer would probably be the best measure of effective surface temperature because it’s effectively measuring Su.

For example, the SURFRAD data for Sioux Falls, SD on 5/17/2011 shows that the difference between the effective surface temperature (ε = 1) and the air temperature at 10m ranged from -7.6 to +8.1 C. Any error in Su will also generate an error in calculated OLR. The 24 hour average air temperature at 10m was 13.8C compared to Teff of 12.8C. The average wind velocity was ~4 m/s. I would expect to see larger differences on a day with lower wind velocity.