I only ask because I want to know

I propose to raise a question about the Earth’s energy budget that has perplexed me for some years. Since further evidence in relation to my long-standing question is to hand, it is worth asking for answers from the expert community at WUWT.

A.E. Housman, in his immortal parody of the elegiac bromides often perpetrated by the choruses in the stage-plays of classical Greece, gives this line as an example:

I only ask because I want to know.

This sentiment is not as fatuous as it seems at first blush. Another chorus might say:

I ask because I want to make a point.

I begin by saying:

You say I aim to score a point. Not so:

I only ask because I want to know.

Last time I raised the question, on another blog, more heat than light was generated because the proprietrix had erroneously assumed that T / 4F, a derivative essential to my argument, was too simple to be a correct form of the first derivative ΔT / ΔF of the fundamental equation (1) of radiative transfer:

F = εσT4, | Stefan-Boltzmann equation (1)

where F is radiative flux density in W m–2, ε is emissivity, held at unity, the Stefan-Boltzmann constant σ is 5.67 × 10–8 W m–2 K–4, and T is temperature in kelvin. To avert similar misunderstandings (which I have found to be widespread), here is a demonstration that T / 4F, simple though it be, is indeed the first derivative ΔT / ΔF of Eq. (1):

ΔT / ΔF = d[F / (εσ)]1/4 / dF = (1/4) [F / (εσ)]1/4 F–1 = T / 4F. (2)
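For readers who prefer a numerical check to calculus, a short Python sketch confirms that a central-difference derivative of Eq. (1) matches T / 4F; the 390 W m–2 flux density is an illustrative surface value.

```python
# Numerical check of Eq. (2): the derivative dT/dF of the Stefan-Boltzmann
# law F = eps*sigma*T^4 should equal T/(4F). The 390 W m-2 flux density is
# an illustrative surface value.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
EPS = 1.0         # emissivity, held at unity as in the text

def T_of_F(F):
    """Invert F = eps*sigma*T^4 to obtain temperature from flux density."""
    return (F / (EPS * SIGMA)) ** 0.25

F = 390.0                 # W m-2
T = T_of_F(F)             # roughly 288 K
h = 1e-4                  # step for the central difference
numeric = (T_of_F(F + h) - T_of_F(F - h)) / (2 * h)
analytic = T / (4 * F)
print(numeric, analytic)  # the two agree to many decimal places
```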

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

My question relates to one of many curious features of the following energy-budget diagrams for the Earth:

The surface flux density cannot be reliably measured. So did the “consensus” use Eq. (1) to reach the flux densities shown in the five diagrams? Yes. Kiehl & Trenberth (1997) wrote: “Emission from the surface is assumed to follow Planck’s function, assuming a surface emissivity of 1.” Planck’s function gives flux density at a particular wavelength. Eq. (1) integrates that function across all wavelengths.

Here (at last) is my question. Does not the use of Eq. (1) to determine the relationship between TS and FS at the surface necessarily imply that the Planck climate-sensitivity parameter λ0,S applicable to the surface (where the coefficient 7/6 is a ballpark allowance for the Hölder inequality) is given by

λ0,S = (7/6) TS / (4FS) = (7/6) × 288 / (4 × 390) ≈ 0.215 K W–1 m2 ? (3)

The implications for climate sensitivity are profound. For the official method of determining λ0 is to apply Eq. (1) to the characteristic-emission altitude (~300 mb), where incoming and outgoing radiative fluxes are by definition equal, so that Eq. (4) gives incoming and hence outgoing radiative flux FE:

FE = (S / 4) (1 – α) = 239.4 W m–2, (4)

where FE is the product of the ratio πr2 / 4πr2 (that of the area of the disk the Earth presents to the Sun to the surface area of the rotating sphere), total solar irradiance S = 1366 W m–2, and (1 – α), where α = 0.3 is the Earth’s albedo. Then, from (1), mean effective temperature TE at the characteristic emission altitude is given by Eq. (5):

TE = [FE / (εσ)]1/4 ≈ 254.9 K. (5)

Equilibrium climate sensitivity ΔT to a doubling of CO2 concentration then follows from Eq. (6):

ΔT = 5.35 ln 2 / (λ0,E–1 – f) ≈ 2.2 K, (6)

where the numerator of the fraction is the CO2 radiative forcing, and f = 1.5 W m–2 K–1 is the IPCC’s current best estimate of the temperature-feedback sum to equilibrium.

Where λ0,E = 0.313, equilibrium climate sensitivity is 2.2 K, down from the 3.3 K in IPCC (2007) because IPCC (2013) cut the feedback sum f from 2 to 1.5 W m–2 K–1 (though it did not reveal that climate sensitivity must then fall by a third).

However, if Eq. (1) is applied at the surface, the value λ0,S of the Planck sensitivity parameter is 0.215 (Eq. 3), and equilibrium climate sensitivity falls to only 1.2 K.

If f is net-negative, sensitivity falls still further. Monckton of Brenchley (2015: click “Most Read Articles” at www.scibull.com) suggests that the thermostasis of the climate over the past 810,000 years and the incompatibility of high net-positive feedback with the Bode system-gain relation indicate a net-negative feedback sum on the interval –0.64 [–1.60, +0.32] W m–2 K–1. In that event, applying Eq. (1) at the surface gives climate sensitivity on the interval 0.7 [0.6, 0.9] K.
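The sensitivity arithmetic above can be sketched in a few lines of Python, writing the method as ΔT = 5.35 ln 2 / (λ0–1 – f); the 5.35 ln 2 ≈ 3.7 W m–2 numerator is the standard simplified expression for the CO2 doubling forcing, taken here as given.

```python
import math

# Sensitivity arithmetic: dT = 5.35*ln(2) / (1/lambda0 - f), where the
# numerator ~3.7 W m-2 is the CO2 doubling forcing, lambda0 the Planck
# parameter in K W-1 m2 and f the feedback sum in W m-2 K-1.
def sensitivity(lambda0, f):
    forcing = 5.35 * math.log(2)      # ~3.71 W m-2
    return forcing / (1.0 / lambda0 - f)

print(sensitivity(0.313, 1.5))    # emission-altitude lambda0: ~2.2 K
print(sensitivity(0.215, 1.5))    # surface lambda0 of Eq. (3): ~1.2 K
print(sensitivity(0.215, -0.64))  # net-negative feedback sum: ~0.7 K
```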

Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude, in which event the value for surface flux density FS may well be incorrect, no one has any idea what the Earth’s energy budget is, and still less whether there is any surface “radiative imbalance” at all; or the flux density at the Earth’s surface is correctly determined from observed global mean surface temperature by Eq. (1), as all five sources cited above determined it, in which event sensitivity is harmlessly low even under the IPCC’s current assumption of strongly net-positive temperature feedbacks.

It is worth noting that, even before taking any account of the “consensus’” use of Eq. (1) to govern the relationship between TS and FS, the reduction in the feedback sum f between the IPCC’s 2007 and 2013 assessment reports mandates a corresponding reduction in its central estimate of climate sensitivity from 3.3 to 2.2 K, of which only half, or about 1 K, would be expected to occur within a century of a CO2 doubling. The remainder would make itself slowly and harmlessly manifest over the next 1000-3000 years (Solomon et al., 2009).

Given that the Great Pause has endured for 18 years 6 months, the probability that there is no global warming in the pipeline as a result of our past sins of emission is increasing (Monckton of Brenchley et al., 2013). All warming that was likely to occur from emissions to date has already made itself manifest. Therefore, perhaps we start with a clean slate. Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.

Once allowance has been made not only for the IPCC’s reduction of the feedback sum f from 2.05 to 1.5 W m–2 K–1 and the application of Eq. (1) to the relationship between TS and FS but also for the probability that f is not strongly positive, for the possibility that a 50% increase in CO2 concentration is all that can occur before fossil-fuel exhaustion, for the IPCC’s estimate that only half of equilibrium sensitivity will occur within the century after the CO2 increase, and for the fact that the CO2 increase will not be complete until the end of this century, it is difficult, and arguably impossible, to maintain that Man can cause a dangerous warming of the planet by 2100.

Indeed, even if one ignores all of the considerations in the above paragraph except the first, the IPCC’s implicit central estimate of global warming this century would amount to only 1.1 K, just within the arbitrary 2-K-since-1750 limit, and any remaining warming would come through so slowly as to be harmless. It is no longer legitimate – if ever it was – to maintain that there is any need to fear runaway warming.


485 thoughts on “I only ask because I want to know”

I am not a scientist, but a little common sense would seem to indicate that 400 or even 540 parts per million of CO2 in the atmosphere could not have the major temperature driving force that the “science is settled” crowd would have us believe.
That is putting a lot of power in a trace gas that is just a tiny percentage even of the greenhouse gases in the atmosphere.

I really do think that is a pretty weak argument.
Common sense might tell you something if you have a strong grasp of the underlying facts; if not, it is worth nothing. When you have some knowledge of this, we are not talking about ‘common’ sense any more.

That is an invalid response to a simple point. OK, you disagree, but you fail to provide any rationale or argument to support your disagreement.
Not one of the supposed physics processes has addressed the ‘magic’ of how one lone molecule causes 10,000 other molecules to warm by 0.5, 1.0, 2.0, 3.0, 4.0 and so on kelvin.
Instead the individual molecular interactions are ignored and buried with gross assumptions.

Hugh,
I’m with Dave on this one. Common sense questions often reveal good insights.
Here is another one …
In the IPCC energy budget for the Earth, it shows 342 W/sqm as the down-welling energy from ‘greenhouse gases’. Can you please tell me where on that chart the down-welling energy is shown from the 99.96% of the atmosphere that is not CO2?
You won’t find it because it is a fudge. All gases emit photons downwards to the earth, not just ‘greenhouse gases’. The percentage that comes from CO2 is tiny.

Perhaps we should use the special man-made CO2 to insulate our houses in winter.
It seems it’s the most insulating material in the universe, trapping all that heat.
Just think of all the polar bears we can save by not having to burn that evil fossil fuel…
The possibilities are endless… you can have pockets of man-made CO2 in our jackets, earmuffs, gloves, etc.

morLogicThanU June 27, 2015 at 3:40 pm
Perhaps we should use the special man-made CO2 to insulate our houses in winter.
Not too wild an idea. We currently use argon gas in the insulating space between panes of multi-pane windows, but CO2 has a thermal conductivity ~5% lower than argon. Argon is around 23 times more abundant than CO2, however, and is the main residual gas left from the industrial production of oxygen and nitrogen, so it’s handier to use. And that leaves more CO2 in the air to help grow broccoli and keep us warm.

From wikipedia (atmosphere of earth):
“If the entire mass of the atmosphere had a uniform density from sea level, it would terminate abruptly at an altitude of 8.50 km (27,900 ft).”
Pre-industrial levels of CO2 at 280 ppm would equal a layer of air of 2.38 m at sea-level pressure (this ballpark figure assumes for simplicity that each molecule in the atmosphere takes up equal space: 8500*280/1000000). At present there is around 400 ppm, suggesting that mankind added around 1 m of infra-red absorbing CO2 (at sea-level pressure). Are you sure that this could not possibly have an effect because it is a trace gas?
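The commenter’s ballpark is easy to reproduce; a minimal sketch, using the 8,500 m uniform-density column height from the Wikipedia quote above:

```python
# The ballpark above: a well-mixed trace gas at c ppm of an atmosphere that
# would be 8,500 m deep at uniform sea-level density corresponds to a pure
# layer 8500*c/1e6 metres thick at sea-level pressure.
def equivalent_layer_m(ppm, column_height_m=8500.0):
    return column_height_m * ppm / 1e6

pre_industrial = equivalent_layer_m(280)   # ~2.38 m
present = equivalent_layer_m(400)          # ~3.40 m
print(pre_industrial, present - pre_industrial)  # added layer ~1.02 m
```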

Take a 0.3mm thick sheet of aluminum foil and measure how much light passes through it. Now add another 0.3mm thick sheet and measure again.
Are you sure doubling the total thickness couldn’t possibly have an effect?

–At present there is around 400 ppm suggesting that that mankind added around 1 m of infra-red absorbing CO2 (at sea level pressure). Are you sure that this could not possibly have an effect because it is a trace gas?–
Suppose one had flat piece of land which 1 km square. And one measured the air temperature in the middle of this field in white box 5 feet above the ground and it had average temperature of 12 C
And then put a glass box 1 square km by 5 meter high. And it it elevated above the ground by 5 meters. So top of box is 10 meters above the ground.
Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. So it’s sealed and designed to withstand any higher pressure were air to become warmer.
Then measure the average temperature for another year with white box in the middle. And it should be warmer temperature than compared to not having
the big glass box. Then you replace the air with pure CO2 and so thereby added 5 meters of CO2.
And question would be how warmer would the average temperature as measured in white box be?
And does increase the highest daytime temperature by how much. And/or does it increase the night time temperatures by the most?

Gbaikie,
Can you please re-write? As an example:
“Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. ”
I can try to interpret what you mean, but it will only lead to misunderstandings.

Gbaikie,
Can you please re-write? As an example:
“Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. ”
I can try to interpret what you mean, but it will only lead to misunderstandings.
Have it slightly over pressurized- say 1/2 psi.

Whether CO2 is a trace gas is actually irrelevant; the key issue is whether an absorption-band photon leaving the surface would make it through to space. Surprisingly, in spite of the low concentration, the answer is that such a photon will encounter a CO2 molecule every few tens of metres. We call this the mean free path. Thus, even at 400 ppm, an outbound photon will encounter many CO2 molecules on its route upwards. This should explain why increasing the concentration beyond a certain level doesn’t have all that much of an effect on the odds of a photon escaping, ending up back at the surface, or being converted into bulk atmospheric heat.
Here is the mathematics behind it: http://www.biocab.org/Mean_Free_Path_Length_Photons.html
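An order-of-magnitude sketch of that mean free path, l = 1/(nσ), can be done in a few lines; note that the band-averaged absorption cross-section used here is an assumed illustrative value chosen to land in the “few tens of metres” range, not a measured spectroscopic figure:

```python
# Order-of-magnitude sketch of the mean free path l = 1/(n*sigma) for a
# 15-micron photon against CO2. The air number density is ~2.5e25 m-3 at
# sea level; the band-averaged absorption cross-section SIGMA_ABS is an
# assumed illustrative value, not a HITRAN figure.
N_AIR = 2.5e25        # molecules per m^3 of air at sea level
CO2_PPM = 400.0
SIGMA_ABS = 3e-24     # m^2, assumed band-averaged cross-section

n_co2 = N_AIR * CO2_PPM / 1e6     # ~1e22 CO2 molecules per m^3
mfp = 1.0 / (n_co2 * SIGMA_ABS)   # metres; a few tens of metres here
print(mfp)
```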

In my view, the minutiae of arguments about atmospheric warming rates and responses within the frame of fluctuations of natural variability on decadal time scales are nonsense. The absolutely only thing that matters is that the earth is warming with MASSIVE amounts of excess heat. This is happening when the sun is at its longest and deepest minimum in over 100 years, and with nearly 40% of the total warming effects of CO2 being shielded by aerosols (dust and pollution) in the atmosphere.
Please observe the following graphic: https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000m.png
The amount of heat represented in this graph is about 93% of the total global heat accumulation that has occurred over the last 10 years (I am only considering the very accurate ARGO buoy analysis annotated in red). This warming represents so much heat energy that, if it were to be put into the earth’s atmosphere, the atmosphere would warm by nearly 17C (and we would all be dead).
The reason that this matters is that the only way the earth reaches equilibrium is through blackbody radiation to space. This heat energy represents a continual investment of energy into the earth that will NEVER GO AWAY, until the earth’s surface warms enough to equalize with the incoming energy.
So my question to you all is:
With this definitive evidence of heat accumulation in the earth, why do you still believe that humans are not the primary contributor to this effect?
and
If you believe that humans are the primary contributor to this effect, why would you want to do anything but maximize the implementation of alternative energy and distributed generation, using clean fuel sources that give people the independence and freedom to charge their own cars from the solar panels on their own roofs and never send a single more dime to Saudi Arabia?
???

Ian,
What about the molecules of N2 and O2 or other atmospheric gases? They also absorb and emit photons – how do you isolate the CO2 effect? The outbound photon is much more likely to hit an N2 or O2 molecule, which will block its path to space.
The chances are 99.96% that the photon will hit an air molecule other than CO2. This other molecule will either emit a new photon or simply vibrate a bit more.
The link you provided actually gives the answer in its first conclusion:
“The results obtained by experimentation coincide with the results obtained by applying astrophysics formulas. Therefore, both methodologies are reliable to calculate the total emissivity/absorptivity of **any** gas of any planetary atmosphere.”
Any gas acts this way – not just CO2. That is why CO2 only being a trace gas is relevant.

Wagen,
Thanks for that link. I have read it before and its definition of radiative gases is helpful. My first point is that all matter radiates photons – provided it is above absolute zero. This includes N2 and O2. Larger gas molecules tend to be more radiative than smaller molecules as your link describes – but they all do radiate. My second point is that the chances of an emitted photon hitting a non CO2 molecule are 99.96% which means that most of what happens next is driven by the properties of those molecules, not CO2. Hence CO2 only being a trace gas is relevant.

And given that these photons are travelling at 299792.458 km/sec, what does it matter if they interact with some CO2 molecules on their way out to space?
So the photon takes a zig-zag course on its way through the 50 km of atmosphere whilst travelling at 299792.458 km/sec; it merely delays the photon by seconds at most (possibly only fractions of a second).
Given that on average there are 12 hours of darkness, this gives plenty of time for photons to escape to space before the next surge of incoming photons from the sun is received and the entire process repeats itself, such that there is no build-up in temps. All the energy received during the day has plenty of opportunity to find its way out to space during the nighttime hours.

Sorry, wiki again (on N2):
“Molecular nitrogen is largely transparent to infrared and visible radiation because it is a homonuclear molecule and, thus, has no dipole moment to couple to electromagnetic radiation at these wavelengths. Significant absorption occurs at extreme ultraviolet wavelengths, beginning around 100 nanometers.”
If your point is that all molecules in the atmosphere absorb and re-radiate, then you have difficulties explaining the direct effects of sunshine. I may misunderstand your point, though.

Wagen,
Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.
If you do agree then you will agree that N2 and O2 will therefore emit radiation according to their temperature. Approximately half of that will be back down towards the Earth’s surface.
Question is – where did they get their thermal energy from? You say that N2 and O2 are largely transparent to visible and IR radiation. Logically that means they must get their energy from other frequencies? Or maybe they simply get it from convection and conduction in the atmosphere?
Whatever the source of their energy, N2 and O2 are emitting thermal radiation to the ground just like CO2. The wavelengths may vary but even if they are not in the LWIR range, they will still be absorbed by the earth and re-emitted at longer wavelengths, just like when sunlight or UV hits the surface of the earth. Bottom line is that CO2 and the other ‘greenhouse gases’ are not the only sources of down-welling radiation.

Jai: When you look at that “very accurate” ARGO data, please note that the heat content of the top 100 m has fallen (through 2013 at least) and that of the next 200 m has remained almost unchanged. Most of the heat has accumulated from 700-2000 m. That heat accumulation is large, but so is the mass being warmed, so the temperature change over a decade is ridiculously small – averaging about 0.01 degK. IMO, the ARGO data is interesting, but doesn’t agree with what was logically expected and climate models projected. Perhaps we will know more in a decade or two, but I see little reason to believe such tiny changes in temperature constitute conclusive evidence of global warming, much less the amount of warming expected from anthropogenic CO2.

Ian Macdonald, may I suggest that there is more to it than the CO2 absorption band for photons leaving the Earth’s surface. What is missing from the IPCC is any mention of the absorption bands across the entire spectrum, in particular, for the incoming Sunshine.
The absorption spectrum for CO2 has its primary maximum at about 4.3 microns, within the spectral range of our incoming Sunshine. The secondary maximum is at about 15 microns, within the spectral range of the average Earth’s surface emission. All of the other spectral lines are at shorter wavelengths, that is, not within the range of the Earth’s emissions.
The primary peak for CO2 absorbs about three times the energy from the incoming Sunshine as does the secondary peak in absorbing energy out-going from the Earth’s surface. Hence any increase in CO2 atmospheric concentration will cause cooling of the Earth as less energy reaches its surface in the first place, that is, before any surface emission that might be “trapped” by the atmosphere.
As the Earth’s temperature has been stable throughout this century, with neither warming nor cooling, it is apparent that the four molecules of CO2 to the 10,000 molecules of other atmospheric gases is insufficient to cause any measurable effect on the Earth’s temperature regardless of what path the photons may take. This is especially important for the back-radiation of the incoming Sun’s energy as those photons escape directly into space never to return.

Bernard, Wagen and others: Your comments reflect some significant misunderstanding about the behavior of GHGs and radiation.
1). The troposphere and most of the stratosphere are in local thermodynamic equilibrium. This means that an excited vibrational state of CO2 is relaxed by collisions much faster than the excited state can emit a photon. Therefore essentially all thermal infrared photons arise from GHG molecules that were excited by collisions, not by absorption of a photon. Therefore emission of thermal infrared photons by GHGs in the atmosphere depends only on the local temperature – local thermodynamic equilibrium. Re-emission of absorbed photons is negligible. Essentially all absorbed photons become kinetic energy (heat) when they are absorbed. “Thermalized” is a name for this process.
2). Essentially all the radiative cooling of the atmosphere is done by CO2 and water vapor. “All matter above absolute zero may radiate” at some low rate, but the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs.
3). The atmosphere is warmed by absorption of thermal infrared by GHGs and cooled by emission of thermal infrared by GHGs. These energy gains or losses are transmitted to and from N2 and O2 by collisions. In general, more energy is lost than gained by radiation in the troposphere and the difference is supplied by latent heat from the condensation of water vapor into liquid water and ice (clouds). About 1/3 of sunlight that isn’t reflected back into space is absorbed by the atmosphere before it reaches the ground. Convection of latent heat stops at the tropopause, so all heat transfer above occurs by radiation. Since DLR and OLR are similar near the surface, convection is more important than net radiation in removing heat from the surface.
4). The strongest absorption lines for CO2 are saturated by all of the CO2 in the atmosphere, but weaker lines are not. And all lines have width, so that, even if the center is saturated, the sides may not be. The 3.7 W/m2 forcing for 2XCO2 is a small change in the 100+ W/m2 contribution of CO2 to OLR and comes mostly from the weaker lines and sides of stronger lines. So saturation is a real phenomenon, but increasing CO2 still decreases OLR.
Hope this helps.
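The logarithmic character of point 4) can be illustrated with the common simplified forcing expression ΔF = 5.35 ln(C/C0); this shortcut is used here only for illustration, since Frank’s figures come from line-by-line calculations rather than this formula:

```python
import math

# Logarithmic forcing: each doubling of CO2 adds roughly the same increment,
# dF = 5.35*ln(C/C0), so the effect per added ppm shrinks as the
# concentration rises (the "saturation" described above).
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(400))  # ~1.9 W m-2 since pre-industrial
print(co2_forcing(560))  # ~3.7 W m-2 for a full doubling
```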

— Bernard Lodge
June 27, 2015 at 9:56 pm
Wagen,
Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.-*-
Well if matter emits thermal radiation, one can assume such emission would involve energy leaving this matter.
Matter does not inherently have an infinite amount of energy to emit, so one can say it has some finite amount of energy it could emit. And this finite energy would be what is meant by being at a temperature greater than zero.
Gas temperature is kinetic energy. A gas molecule can stop moving [and *could* do this hundreds of times within one second] and then regain its velocity via collision with other gas molecules. And were all gas molecules in an area to stop moving, then that gas would be at absolute zero temperature.
So gas molecule is like fast moving bullet, the difference is a bullet could also be warm and radiating while racing towards something before it hits anything- a molecule does not likewise radiate between hitting something [another molecule or say a photon].
All gas can interact with other matter and can gain energy by such interaction, which can then cause a gas molecule to emit a photon. But the molecule itself does not have some finite amount of energy which it emits.
Or: the temperature of a gas is the average velocity of millions or billions of gas molecules.
And a gas molecule absorbs and emits electromagnetic energy by having its electrons go to a higher energy level and then return to a more stable energy level.
Since matter is comprised of protons and electrons, and all electrons could go to a higher energy level, all matter can absorb and emit thermal radiation.
Now, according to the ideal gas law, ideal gases don’t lose their kinetic energy via collisions with other ideal gases. If they radiated energy via collision, that would mean they would lose kinetic energy [for that’s all the energy they have to lose].
So CO2, N2 and O2 are considered ideal gases. H2O is not an ideal gas because at Earth temperatures and pressures it can condense [form into water]. So H2O collisions with other H2O molecules can lose energy in the collision [they stop being a gas and become liquid; of course the liquid gains the kinetic energy, but the gas loses a member. Also, briefly sticking or clumping together adds heat which can be radiated].
So what distinguishes CO2 and the other greenhouse gases is that they much more easily [than non-greenhouse gases] have their electrons energized by infrared light. They gain electromagnetic energy and then emit the same electromagnetic energy, in a random direction [all matter emits in random directions].
So the greenhouse gases aren’t transparent to some wavelengths of IR, whereas non-greenhouse gases are mostly transparent to IR [though, being transparent, they can reflect and scatter some of this light, they don’t absorb and re-radiate very much of it].

Jai Mitchell:
“…warming with MASSIVE amounts of excess heat. This is happening when the sun is at it’s longest and deepest minimum in over 100 years,”
It sounds like you’re saying the sun has been in some kind of minimum during the whole 20th century? If anything could be said about a max or min, some kind of minimum started in the 21st century, mostly during the so-called “hiatus”. However, climatic responses would normally be deduced on multi-decadal timescales, often 30-year periods, with significant trends around half of that, around 15 years. That’s why the 18+ year hiatus in isolation is still significant and at the same time not yet definitive in terms of climate science.
” and with nearly 40% of the total warming effects of CO2 being shielded by aerosols (dust and pollution) in the atmosphere.”
It sounds like you’re saying adding dust and pollution would be a good protection from global warming? If we’re already blocking 40% of the looming disaster now, perhaps doubling the aerosols might be a solution? :-p

CO2 by its nature accelerates the evaporative heat transport. So the individual photon’s incidental contact with a CO2 molecule gives the photon a boost on its way out of the system.
Same reason why the evaporation of liquid CO2 produces −109.3-degree dry ice. It sucks all of the heat out of the local area and keeps on running. http://www.thegreenhead.com/2007/10/portable-dry-ice-maker.php

Bernard,
“Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.”
No. Some molecules only move faster, i.e. the atmosphere warms (which may lead other molecules to radiate more, though).

Hugh.
“It sounds like you’re saying adding dust and pollution would be a good protection from global warming?”
Yes it would (lowering of ocean pH ignored here). You would have to live with the smog though.

Frank,
Thanks for your reply.
You say …. ‘the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs’.
Do you have a source for that?
‘Negligible’ is an opinion – can you quantify it?

Jai Mitchell
My question to you is, why do you ignore the geologic and glacial history showing that natural processes dominate the climate system? We’ve seen large fluctuations in climate and temperatures in the past, with no CO2 “cause” for the fluctuation. From Rasmussen et al. (2014): “About 25 abrupt transitions from stadial to interstadial conditions took place during the Last Glacial period and these vary in amplitude from 5 °C to 16 °C, each completed within a few decades”, and no, the world did not “die”…. Please leave your bubble and join the real world….. http://www.sciencedirect.com/science/article/pii/S0277379114003485

Bernard Lodge wrote: “You say …. ‘the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs’. Do you have a source for that? ‘Negligible’ is an opinion – can you quantify it?”
Rats, you noticed the modestly vague words I used when I was sure of these facts but didn’t have proof at my fingertips. Time to do better.
You can go to the online MODTRAN calculator at the link below and remove carbon dioxide, water vapor and methane from the spectrum of outgoing radiation by putting zeros in the top five boxes. The software calculates the spectrum of OLR at the top of the atmosphere (70 km). The spectrum of the radiation reaching space looks like the blackbody spectrum emitted by the surface of the earth at a single temperature. At no wavelength is there any evidence that the N2 and O2 in the atmosphere have absorbed and emitted any OLR. When the traditional GHGs absorb and emit, less radiation is emitted at characteristic wavelengths. For example, with the default setting of 400 ppm of CO2, the intensity emitted around 15 um is appropriate for a temperature of 220 degK because all of the 15 um radiation emitted by the earth’s surface has been absorbed. The average photon that escapes to space at this wavelength was emitted by CO2 molecules near the tropopause where the temperature is 220 degK (about 13 km). 15 um photons emitted upward from lower in the atmosphere have little chance of escaping to space. If you play with the model, perhaps you can convince yourself that traditional GHGs have a much bigger impact on OLR and therefore N2 and O2 are negligible in comparison. http://climatemodels.uchicago.edu/modtran/modtran.html
You can find many similar figures with the absorption spectra of GHGs on the web (search “infrared spectrum gases”), such as the one below. Such figures never include N2, because “everyone” knows it is “negligible”. They show O2 plus O3, and the strongest bands in this spectrum are due to O3. (O2 does absorb UV.) http://www.meteor.iastate.edu/gccourse/forcing/spectrum.html
At http://www.spectracalc.com, you can plot the spectra of N2 and CO2. The strongest lines for CO2 are 10 orders of magnitude stronger than for N2. For O2 and CO2, the difference is about 6 orders of magnitude. I don’t have much experience with this website.
Hope this helps.

Bernard,
There’s an easier way to think of this. Sure, the N2/O2 is matter with the potential to emit EM energy, but it has an emissivity of near zero and near-unit transmittance. Would you assert that if the Earth’s atmosphere contained only its present amounts of N2 and O2 and nothing else, significant emissions from the O2/N2 would be seen from space and that the average temperature and sensitivity would be any different than the case of no atmosphere at all? The only photons we can see escaping the planet either originate directly from the surface or clouds and pass through the N2/O2 and GHGs without being absorbed, or are emissions by GHG molecules. If you examine the spectral properties of the planet’s emissions, those in the GHG absorption bands are about 3 dB lower than they would be (about 1/2) without GHG absorption. While the N2/O2 do have emission/absorption bands, they are not in the visible or relevant LWIR spectrum.
George

daveandrews723 June 27, 2015 at 7:12 am
Not only are you not a scientist but you aren’t a philosopher or logician either. What you have written comes into that famous “not even wrong” category.
Argument from personal incredulity is no argument at all. It irks me that Mr Watts permits such dreck to be published here, as it makes us climate realists look stupid. https://yourlogicalfallacyis.com/personal-incredulity

Unexpected effect of CO2
The weight of air pressure is equivalent to the pressure of 10m of water. On a simple pro rata calculation 280 parts per million of this is equivalent to a 3mm thick layer of solid CO2.
The radiation emitted to space treating earth as a black body would imply a black body temperature of 255K, which is -18 degrees C, yet the surface temperature is effectively 288K or +15 degrees C, giving a 33 degree C (or K) difference with and without atmosphere.
The standard texts say that water vapour is responsible for most of this greenhouse gas-driven rise, and that CO2 is responsible for only around 23% of it. So you can work out that the pre-industrial level of 280 ppm CO2 is responsible for an 8 degree C higher surface temperature than would be the case without it. Not bad for a 3mm thick solid layer – and contrary to what your common sense is telling you.
Common sense might then tell you that doubling the thickness from 3mm to 6mm (concentration rising from 280 ppm to 560 ppm) would double the temperature rise. But again common sense would be wrong, because it is somewhat less than that. All the discussion centres around how much more water vapour would be in the atmosphere if CO2 caused an initial temperature rise, and what the additional rise from the extra water vapour greenhouse effect would be – and there is a lot more water vapour in the atmosphere than CO2.
So common sense is really not a good guide when a calculation based on known physics can get you the right answer.
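For what it is worth, the two headline numbers here – the 255 K blackbody temperature and the roughly 3 mm pro-rata CO2 layer – can be reproduced in a few lines of Python. This is only a sketch: the 1362 W/m2 solar constant is a conventional value not stated in the comment, and the pro-rata layer uses the comment’s own 10 m water-column equivalence.

```python
# Check of the 255 K / 288 K figures and the "3 mm of solid CO2" estimate.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1362.0       # solar constant, W m^-2 (conventional value)
ALBEDO = 0.3

# Absorbed flux averaged over the sphere (the conventional divide-by-4)
absorbed = S0 * (1 - ALBEDO) / 4            # ~238 W m^-2
T_eff = (absorbed / SIGMA) ** 0.25          # effective blackbody temperature

greenhouse_K = 288 - round(T_eff)           # the familiar 33 K difference

# Pro-rata CO2 "layer": 10 m water-equivalent air column times 280 ppm
layer_mm = 10_000 * 280e-6                  # ~2.8 mm, i.e. roughly "3 mm"
```

Running this gives a T_eff of about 254.6 K, which rounds to the 255 K (−18 °C) quoted above.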

Climate Pete
You and the general population have been grossly misled. The fact is that the Greenhouse Effect of 33 degrees Celsius is completely fictitious.
A primary claim for the cause of global warming has always been that the Earth suffers a “greenhouse effect”, being 33 degrees Celsius warmer than it would be if there were no greenhouse gases in the atmosphere. This is calculated from the difference between the estimated global surface temperature of 15 deg.C and the estimated temperature of the Earth without greenhouse gases. How many of the public are aware of the method used to calculate the latter?
The calculated temperature is derived from the surface temperature of the Sun, the radius of the Sun, the distance of the Earth’s orbit from the centre of the Sun and the radius of the Earth using the Stefan Boltzmann equation with an albedo of 0.3 to give an effective blackbody temperature of -18 deg.C.
Behind this seemingly innocuous calculation are hidden, rarely stated assumptions. For example, the Stefan-Boltzmann equation applies to radiation from a cavity in thermal equilibrium emitting directly into a vacuum. That means that the model Earth is a perfectly smooth sphere with a uniform surface – no oceans, forests, deserts, mountains or ice sheets and, amazingly, no atmosphere – with the same surface temperature everywhere, that is, no day or night. Furthermore, the model assumes that the Sun’s insolation is spread uniformly across the whole Earth surface, so no tropical equatorial zone and no ice-covered poles: just a uniform solid surface with a uniform albedo of 0.3 everywhere, receiving a uniform insolation of one quarter of the Sun’s known insolation across the spherical surface of the Earth. Does anyone, other than the IPCC and its followers, think that this is a reasonable model of the Earth from which to estimate the “greenhouse effect” – that is, the temperature difference between an Earth with and without “greenhouse gases” in the atmosphere – when the model does not even include an atmosphere? I certainly do not.
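The calculation being criticized is at least easy to reproduce from the quantities the comment lists (the Sun’s temperature and radius, the orbital distance, and an albedo of 0.3). A sketch, using round published values that the comment itself does not quantify:

```python
# Effective blackbody temperature of Earth from solar geometry alone.
T_SUN = 5772.0      # K, solar effective temperature (assumed round value)
R_SUN = 6.957e8     # m, solar radius
D_ORBIT = 1.496e11  # m, mean Earth-Sun distance
ALBEDO = 0.3

# Balance absorbed (pi*Re^2) against emitted (4*pi*Re^2) flux:
#   T_eff = T_sun * sqrt(R_sun / (2 d)) * (1 - albedo)^(1/4)
T_eff = T_SUN * (R_SUN / (2 * D_ORBIT)) ** 0.5 * (1 - ALBEDO) ** 0.25
```

The result is about 254.6 K, i.e. the −18 deg.C figure the comment describes – whatever one thinks of the model behind it.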

Well Dave, you seem to have diverted the whole thread away from MofB’s question regarding the appropriateness of the 390 Wm^-2 surface total radiant emittance value.
Where I think Christopher might have had some gear slippage is that this question seems to be based on an expectation that TSI minus albedo attenuation losses should match the simple BB radiation emittance of the surface, using of course the 1/4 sphere/circle surface area factor; a factor which personally I find totally bogus.
342 Wm^-2 over the entire earth surface, less the 30% or so albedo losses, can’t possibly raise the surface Temperature to 288 K, corresponding to the 390 value.
So, your Lordship, have you considered the additional energy, over and above the Planckian radiative assumption, that is conveyed from the surface to the atmosphere and subsequently lost to space?
You have direct thermal conductivity of HEAT energy (noun) from the total surface to the atmosphere, followed of course by convective transfer of this energy (as heat) to the upper atmosphere, where at some point it must be converted to some spectrum of LWIR radiation (also BB-like) for transfer out to space (at least half of it).
Then of course, for the oceanic parts of the earth’s condensed surface, you will have a considerable latent heat of evaporation that gets added to the atmospheric “heat energy”, and also convected to higher altitudes, where it will get extracted in the condensation and possibly freezing phase changes.
Remember that this “heat energy” must get extracted by transfer to other colder atmospheric gases, or of course by THERMAL (BB like) emission from the H2O molecules, so when the water droplets or ice crystals fall to earth as rain or snow or other precipitation, that latent heat does not return to the surface with the water, since the water had to lose it already before the phase change can occur.
So the total energy in the upper atmosphere that must eventually be lost as thermal radiation to space, is somewhat greater than just the original surface total radiant emittance number.
And frankly, I’m not convinced that we have good data on just what all those processes contribute to the mix.
I also do not like the feedback model that has increased CO2 simply creating increased downward GHG LWIR radiation back to the surface.
In my view, the controlling feedback connection, is the direct Surface Temperature change resulting in an evaporation change of 7% per deg. C, as found by Wentz et al (SCIENCE for July 13 2007, “How much more rain will global warming bring ?” ) or words close to that.
That atmospheric H2O change implies somewhere, a commensurate change in cloud cover that directly attenuates the incoming solar spectrum radiation that is able to reach the surface.
And that effect is a huge and negative feedback factor, as it directly affects the amount of atmospheric (CAVU) attenuation that reduces the exo TSI from 1362 Wm^-2 down to a surface value closer to 1,000 Wm^-2.
The outgoing CO2 LWIR capture is of course real. But I also believe it is quite irrelevant to the outcome, as the water feedback to the solar input, is where the stabilizing control is.
And Dave you say that you are NOT a scientist.
Evidently not a solid state Physicist either.
Your computer, or perhaps some finger toy, or whatever you logged in here on, contains silicon chips made of silicon atoms in a diamond lattice at a density of about 5.6E+22 Si atoms per cc.
Surface layers in CMOS structures that make the thing work contain tiny amounts of dopant atoms at densities of maybe 10^16 to 10^18 impurity atoms per cc, and without that impurity content of one part in five million, to one in 50,000, which is way less than the atmospheric CO2 abundance, you simply couldn’t be here conversing with us all.
So don’t hang your hat on the “too little to do anything” mantra.
Denying the CO2 or other GHG capture process, is not a good hill to choose to die on.
Just my opinion of course.
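The dopant arithmetic above is straightforward to verify; a sketch using the densities quoted in the comment (the 400 ppm CO2 comparison is mine):

```python
# Dopant-to-silicon ratios from the figures quoted in the comment.
SI_PER_CC = 5.6e22        # silicon atoms per cc in the diamond lattice
light_doping = 1e16       # dopant atoms per cc (light doping)
heavy_doping = 1e18       # dopant atoms per cc (heavy doping)

light_ratio = SI_PER_CC / light_doping   # one dopant per ~5.6 million Si
heavy_ratio = SI_PER_CC / heavy_doping   # one dopant per ~56,000 Si

# Atmospheric CO2 for comparison: 400 ppm is one molecule in 2,500
co2_ratio = 1 / 400e-6
```

So even the heavily doped layers are more dilute than atmospheric CO2, which is the comparison being drawn.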

I forgot to add that the sun beating down on the surface (below it) at about 1,000 Wm^-2, rather than the 240 Wm^-2 that Trenberth asserts, will easily raise the surface temperature during the day to the temperatures which we all experience every day.

I’m lost on one point: *when* did they decide it was 2C since 1750? I never heard that until the last year or so. And I thought I understood that the Avg Temp had already risen a fair chunk of that 2C between 1750 and 1850…?

IMO it’s because the Industrial Age began shortly after 1750, not 1850, and CO2 levels were similar in both centuries.
Much of the warming since the depths of the Little Ice Age during the Maunder Minimum occurred in the early 18th century. After 1750 there was both cooling during the Dalton Minimum and warming after it, until the Modern Warming Period began in the late 19th century.

Well done CM of B. I guess Trenberth and his cohorts didn’t expect to ever be questioned,
I mean, such a pretty set of illustrations!
But interestingly, the little ‘O’ knows 97% of “real scientists” are “in agreement”?
This reminds me of Barry Humphries’ Dame Edna stage show, when singling out any foreigners in the audience;
“They don’t understand a word I’m saying, but they love colour and movement”.

I am chuckling to myself after reading this… So, in summary: if we assume that the “Scientific Consensus” actually had a valid point in understanding the “heat budget”, M of B can demonstrate, using the same process, that it leads to only a tiny amount of warming… Precious. LOL

Given that surface temperatures are changing at different rates (higher rates, that is) than the troposphere temperature trends, we should, indeed, move the sensitivity estimates to be based on the surface only rather than on the troposphere, as it is in climate theory.
And, the change in temperature (K) per additional forcing (1.0 W/m2) is, indeed, smaller at the surface (0.184 K/W/m2) than at the tropopause (0.265 K/W/m2).
And, the surface needs to increase its forcing (including the feedbacks) by +16.5 W/m2 in order to increase its temperature by 3.0K while the tropopause only needs +11.5 W/m2.
In the theory, the tropopause goes up in temperature by 3.0K per doubling because there is +4.2 W/m2 of direct GHG forcing per doubling (including other GHGs besides CO2) and the water vapor and cloud and albedo and lapse rate feedbacks add another 7.3 W/m2 of forcing.
And the lapse rate feedback is supposed to be negative. The lapse rate of 6.5 K/km is supposed to decrease in climate theory (the troposphere hotspot), while it is clearly increasing in the current environment, given that the troposphere is warming by much less than the surface.
Climate science uses a bunch of shortcuts in order to avoid doing all this simple math properly. Lately, they have been avoiding it because the math does not work to get to 3.0C per doubling.
If we move the calculations to the surface and include the proper feedbacks for water vapor and clouds that are actually showing up, we get only 1.1C of warming per doubling and a total forcing change from the current 390 W/m2 at the surface to 395.9 W/m2.
And that only occurs because we are also ignoring the Planck feedback whereby outgoing longwave radiation should increase as the surface temperature rises. You never ever hear anyone in the climate science community talking about this.
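The specific figures in this comment follow from the T/(4F) derivative in the head post, and can be checked directly (a sketch using the comment’s own flux and temperature pairs):

```python
# dT/dF = T/(4F), evaluated at the surface and at the tropopause
# with the values quoted in the comment.
def sensitivity(T, F):
    """First derivative of T with respect to F for F = sigma*T^4."""
    return T / (4 * F)

surface = sensitivity(288, 390)        # ~0.185 K per W m^-2
tropopause = sensitivity(255, 240)     # ~0.266 K per W m^-2

# Forcing needed for a 3.0 K rise at each level: dF = dT / (dT/dF)
surface_forcing = 3.0 / surface        # ~16.3 W m^-2 (comment: +16.5)
tropopause_forcing = 3.0 / tropopause  # ~11.3 W m^-2 (comment: +11.5)
```

The small differences from the comment’s +16.5 and +11.5 W/m2 presumably reflect rounding in the starting values.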

Bill Illis:
The answer is that it doesn’t matter. Whatever the source of warming — assuming that there is indeed some such source of warming — the hotspot is still required in theory.
The reason is that hot gases become less dense and rise, to the point at which they shed their heat via radiation to space. So no matter what was causing it to get hot, there would still be a hotspot. Different gases might have to rise to somewhat different altitudes before they shed that heat, but remember too that atmospheric gases tend to be rather well-mixed.
So the absence of a hotspot is not just evidence against CO2-based warming, it is evidence against significant warming, period.

Without quibbling over the use of S-B at all, the consensus method seems to describe a stable system. Which is what we have.
Are the tales of runaway warming in the Summary of Policymakers or described in the actual AR5 report?
(or did I completely misunderstand your math this early morning).

jeanparisot,
You wrote “the consensus method seems to describe a stable system. Which is what we have.”
You are correct.
The tales of runaway warming exist only in the fevered imaginations of a certain type of “skeptic”. And, I suppose, in history since in the early days (perhaps 40 years ago) there was some speculation that it *might* be a possibility.

I love it when Venus is mentioned. 96.5% concentration of CO2 and 96 times the atmospheric mass. So approximately 230,000 times as many CO2 molecules as in Earth’s atmosphere, but only about 2.5 times the absolute temperature at the surface. If you still believe in “dangerous climate change” after those stats, well, there is no helping you!
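The 230,000 figure is simple mole-fraction arithmetic; a sketch (the ~735 K Venus surface temperature is a standard value, not given in the comment):

```python
# CO2 inventory of Venus relative to Earth, per the comment's figures.
venus_co2 = 96 * 0.965    # CO2 "Earth-atmospheres" on Venus
earth_co2 = 400e-6        # CO2 fraction of one Earth atmosphere

ratio = venus_co2 / earth_co2   # ~230,000 times as much CO2

# Absolute surface temperatures: Venus ~735 K vs Earth ~288 K
temp_ratio = 735 / 288          # ~2.55, the "about 2.5 times" quoted
```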

Sturgis Hooper wrote: “Hansen still warns of the Venus Express”.
I was going to answer “not so far as I am aware” but then I thought, “well this is Hansen we’re talking about” so I decided I better do a search.
So I correct myself: “only in the fevered imaginations of a certain type of “skeptic” and certain kooks on the alarmist fringe”.

Mike M.
June 27, 2015 at 9:13 am
You appear to have joined the debate late. Your observation is, however, evidence that as little as 4 to 5 years ago the consensus was pushing thermogeddon, and that since the reality of the pause became reluctantly accepted by the consensus (despite the recent NOAA attempt to “hide the pause”), the number of death-by-fire CAGW types has shrunk considerably (your certain type of ‘sceptic’, I guess – a funny word for the remaining end-of-the-world zealots that used to make up the mainstream).
I can see that your late arrival at a time when real sceptics had already chopped the crisis in half would make it look like sceptics were quibbling over a degree or so.

Just in the past week we have been treated to actual “end of the world and no human can survive” warnings.
And all manner of other crazy s**t.
And it is not coming from skeptics.
This is a bizarre contention from Mike M.

“The tales of runaway warming exist only in the fevered imaginations of a certain type of “skeptic”.”

So Gavin Schmidt is a “certain type of skeptic” by even acknowledging it nine years ago? http://www.realclimate.org/index.php/archives/2006/07/runaway-tipping-points-of-no-return/
And oh my lying eyes, it must be my imagination allowing me to read this as well – http://arxiv.org/abs/1201.1593
I think most informed people know that the ESSENTIAL ingredient for CAGW theory has always been that more CO2 will beget warming that will then beget more water vapor, which begets even MORE warming, thus begetting even more water vapor (plus methane from tundra under the north pole, heh heh), ad nauseam, until there is so much begetting going on that the planet is ….
Gee, I wonder who it was who introduced the idea that we would reach a point in man made global warming where it would be impossible to do anything about it? In 2006 someone said –

“Many scientists are now warning that we are moving closer to several “tipping points” that could — within as little as 10 years — make it impossible for us to avoid irretrievable damage to the planet’s habitability for human civilization. “

Mike M (the one who struggles with reading comprehension):
I was responding to a comment by jeanparisot who wrote: “Are the tales of runaway warming in the Summary of Policymakers or described in the actual AR5 report?”
Mainstream climate science, even IPCC, does not warn about runaway warming. Here is a link that talks about why, and also about the often inapt use of the phrase “tipping point”: http://www.realclimate.org/index.php/archives/2006/07/runaway-tipping-points-of-no-return/
Maybe if you read it a second time, you will understand it.
And your other link says that “almost all lines of evidence lead us to believe that it is unlikely to be possible, even in principle, to trigger a full runaway greenhouse by addition of non-condensible greenhouse gases such as carbon dioxide to the atmosphere”.
In my original reply to jeanparisot, I overstated by implying that none of the alarmists make such ridiculous claims. I have since corrected that.
Yeah, “(the one who struggles with reading comprehension)” was needlessly nasty. Just wanted to make the point that two can play that game. Maybe you were just trying to be funny, but calling someone an imposter is not funny.

I applaud your efforts, Lord Monckton; the further you dig into the data and theory, the less valid it becomes. Observations: http://drtimball.com/wp-content/uploads/2011/05/average-global-temperature.jpg
1) CO2 traps heat most efficiently at 15 micrometers, which is consistent with -80 degrees C. The absorption band does spread out to include 255 degrees K (-18 degrees C). At 5 km into the atmosphere both H2O and CO2 exist, so 255 degree K IR should be absorbed. If CO2 were the cause, the atmospheric warming would be in the upper atmosphere, not at the surface. CO2 doesn’t absorb 10 micrometer IR very well. The problem is, there isn’t a hot spot 5 km into the atmosphere. http://wattsupwiththat.com/2014/08/04/what-stratospheric-hotspot/
2) We have 800 million years of geologic history in which CO2 reached levels as high as 7,000 ppm. Never in the past 800 million years – when 99.999% of the record had higher CO2 levels than today – did the earth experience catastrophic warming… never. Clearly, using the data from the past 600 million years, the sensitivity of temperature to CO2 is non-existent. Are climate scientists not aware of the data they have already published, which totally refutes their theory? http://drtimball.com/wp-content/uploads/2011/05/average-global-temperature.jpg
3) The “evil sister” of Global Warming is Ocean Acidification. For your next article you may want to do the math exploring how much CO2 would be required to alter the pH of the Oceans, and put that in terms of CO2 existing in the atmosphere and fossil fuel consumption.
Once again, keep up the great work.
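The −80 °C figure in point 1) follows from Wien’s displacement law; a quick sketch (the 2898 μm·K displacement constant is standard and not stated in the comment):

```python
# Wien's displacement law: lambda_peak * T = b, with b ~ 2898 micron*K.
WIEN_B = 2898.0               # micron * K

T_15um = WIEN_B / 15.0        # temperature whose emission peaks at 15 um
celsius = T_15um - 273.15     # ~ -80 degrees C, matching point 1)
```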

The graph at the link provided by Nick Boyce is very similar to the one posted by co2islife, but they are not the same. The one posted by co2islife does not include the error limits (more than a factor of 2). But I suspect that the ultimate source for the “data” in the two graphs is the same. The link to the paper is provided on the web page linked to by Nick. It’s a model!

Whenever I see these charts it appears that everyone seems to think that gravity has been constant; however, there is no way that the giant dinosaurs could have survived with our existing gravity. Therefore, how might a gravity of about 50% of the present have affected our atmosphere millions of years ago?

You are assuming that the fossil interpretation of giant land animals is correct. An easier explanation is that all large dinosaurs were water-based. Perhaps the T. rex, for example, was half submerged in water, like a carnivorous hippo.

“there is no way that the giant dinosaurs could have survived with our existing gravity.”
If one is to assume that gravity is variable, then the implications of that render just about all of scientific inquiry a waste of time, as one must allow that all the other physical constants and forces also varied.
And if they vary, why just in time? Why not in space?
Separately, such logic as is attempted by this statement also concludes that bumblebees cannot fly, and various other conclusions that are at variance with observation.
Anyway, this train of thought makes as much sense as the guys who say there is no such thing as water vapor (calling it the myth of cold steam) or convection.

Earl,
There is every way that the giant dinosaurs could have and did survive under gravity the same as at present.
The only suborder of dinosaurs outside the size range of Cenozoic land animals were the sauropods, and far from all of them. Estimates of their weight are coming down, as more is learned of their anatomy, but they’re still bigger than anything before or since.
However mechanical studies of their hollow-boned structure show that they violate no laws of mass or motion.
It may be that some spent part of their lives in water, but that former hypothesis is no longer required to explain their unusual size. At least some of them did however apparently float, as shown by footprints. Sauropods apparently returned to North America in the Late Cretaceous across water gaps from South America, after a long absence.
By what mechanism do you suppose gravity to have been the same as now in the Paleozoic, when land animals were smaller than now, then suddenly much greater for part of the Mesozoic, then back down again?

Biomechanical study of Argentinosaurus, the largest sauropod known from good material, shows no need to invoke variable gravity or mass of earth: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3864407/
Conclusion:
Forward dynamic simulations shows that an 83 tonne sauropod is mechanically competent at slow speed locomotion. However it is clear that this is approaching a functional limit and that restricting the joint ranges of motion is necessary for a model without hypothetical passive support structures. Much larger terrestrial vertebrates may be possible but would probably require significant remodelling of the body shape, or significant behavioural change, to prevent joint collapse due to insufficient muscle.

PS: One reason herbivores eating such low quality vegetation as conifer needles could reach such great mass was the abundance of trees even in their often arid environments, thanks to the healthy levels of ambient CO2 in those lush, luxurious periods.
Mid-Cretaceous CO2 was probably around 2000 ppm, but might have been higher. Arboreal vegetation probably maxes out its potential at a little lower level than that, but the extra plant food in the air didn’t hurt.

Samuel,
Unless gravity still hadn’t reached present strength 25 to six million years ago,

I fail to comprehend the inference that the force of gravitational attraction on the body mass of land animals, …… as measured at sea level, ….. was increasing prior to 6 million years ago, let alone 25 million years ago. But on the contrary, it would be logical to assume that the force of gravitational attraction on the body mass of land animals, …… as measured at sea level, ….. was decreasing post-22,000 years BP simply because of the Post Glacial Sea Level Rise of 450+ feet.

how then to explain these flying giant birds:

Now flying giant birds is one thing, but flying giant reptiles is, per se, a horse of a different color. And the last statement in your cited reference pretty much supports my claim about the inability of giant pterodactyls, and possibly all pterodactyls, to fly. To wit, quoting from your cited source:

“This is pushing the boundary of what we know about avian size, and I’m very confident that the wingspan is the largest we’ve seen in a bird capable of flight.”

The ability of “self powered” flight surely evolved on or near sea shores by the smaller species of dinosaurs that were in the process of evolving a “feathered” covering of their body …. simply because of the steady and consistent “on-shore” and “off-shore” winds which made for an easy “lift-off” from the ground …. as well as an “easy task” for said evolving dinosaurs to remain “off-ground” for extended periods of time. There are several Sea Bird species of today that are dependent on said “sea breezes” for getting airborne.

No reason to suppose that the largest pterosaurs didn’t fly, as all evidence suggests that they did.

Phooey, …. there are several good reasons why. To wit:
1. The length of their wing-span and calculated/estimated body weight.
2. Their lack of sufficient wing muscles for self-powered flight.
3. Their inability to achieve “lift-off” without the aid of an extremely strong breeze or wind.
4. Their extremely large “bill-snipper” compared to body size would be of no use whatsoever to a “flying” predator ….. and would probably prove quite dangerous in trying to control their flight path …. due to a weight “imbalance”, … a very short tail …… and/or the applied forces of unanticipated “wind shear”.

Their remains have been found far out to sea. They didn’t paddle there. They soared, like big sea birds, such as the one reported in the above link.

“Found far out to sea” means nothing …. because that locale may have been a swampy area, a tidal zone or a very shallow Inland Sea ….. many, many eons ago.
Me thinks, and actually believe, that all pterosaurs were shallow-water feeding reptiles that employed the same “feeding technique” as does the present day Black Heron, to wit: http://ibc.lynxeds.com/files/pictures/IMG_0954_0.jpg

The hot spot only mattered when the believers thought they found it. Once it was gone it was irrelevant.
The entire theory of CAGW is based on a positive feedback loop… as your chart shows; when does this feedback occur?
My belief is that man-made CO2 has special properties that regular CO2 doesn’t have… You simply don’t know it yet… /sarc

Lord Monckton: Your math is correct only in the limit that the changes in T and F approach zero, the standard assumption one makes in elementary derivatives. Your equation can be re-written in a form I find more useful:
dF/F = 4*(dT/T)
A 1% change in F (2.4 W/m2 post albedo) produces a 0.25% change in T (about 3/4 of a degK). This equation is independent of any assumptions about emissivity.
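Dr. Spencer’s logarithmic form can be checked numerically against the full Stefan-Boltzmann law; a sketch using the post-albedo pair of roughly 240 W/m2 and 255 K:

```python
# Numerical check that dF/F = 4*(dT/T) for F = sigma * T^4.
SIGMA = 5.67e-8

def flux(T):
    return SIGMA * T ** 4

T0 = 255.0
F0 = flux(T0)                 # ~240 W m^-2

dF = 0.01 * F0                # a 1% perturbation in flux
dT_pred = T0 * 0.01 / 4       # differential form: 0.25% of T0, ~0.64 K

# Exact change from inverting Stefan-Boltzmann:
T1 = ((F0 + dF) / SIGMA) ** 0.25
dT_exact = T1 - T0
```

The exact and differential answers agree to better than 0.003 K at this size of perturbation, which is the point of the T/(4F) form.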

If the entire system warmed by 1 deg., then (as you say) the increase in surface emission is much greater than the increase in emission of the whole system to outer space. But it is the latter that matters in climate sensitivity, since it is what is actually lost by the system. Does that answer your basic question?

Correct.
The F involved here is forcing. That’s the hypothetical initial radiation reduction that an instantaneous change in (say) atmospheric opacity would cause because of the higher effective radiation altitude. To restore equilibrium, what needs to happen is for that new radiation altitude to warm up enough to get outgoing radiation back up to the incoming level, and to a first approximation it warms about the same as the surface.
So it’s the temperature at the effective radiation altitude, not at the surface, that goes into the Stefan-Boltzmann equation.

I seem to remember some previous difficulty in explaining forcing to Lord Monckton, so I’ll elaborate.
When we talk about sensitivity, we’re talking about response to a given forcing. So you have to know what the forcing is. If you start at pre-industrial CO2 concentration (and thus atmospheric optical density), the effective radiation altitude is h1, where the temperature is Teq, the temperature at which the Stefan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space. By convention, we can consider that concentration’s forcing to be zero.
Now, if optical density were to increase instantaneously from that zero-forcing level to a level at which the new radiation altitude is h2, then the new radiation altitude’s temperature would initially be Teq − Γ·Δh, where Δh = h2 − h1 and Γ is the lapse rate. This would cause an initial radiation imbalance of σ·Teq^4 − σ·(Teq − Γ·Δh)^4. The value of that hypothetical initial imbalance is what is referred to as the “forcing” associated with the concentration increase.
Because of that imbalance the surface warms. How much would the surface warm without feedback? If we ignore things like lapse-rate feedback, etc., it warms by Γ·Δh (the lapse rate times the altitude change), i.e., by the change in radiation-altitude temperature that would be needed to redress the initial radiation imbalance. As Dr. Spencer observed, the change in surface, as opposed to radiation-altitude, radiation would be more than the imbalance to be eliminated. But it isn’t the surface radiation we’re interested in when we talk about climate sensitivity; it’s the outgoing radiation.
So the change in surface temperature would need to be more than would be required for the surface radiation to increase by the forcing; it would instead be the temperature change required for the radiation-altitude radiation to increase by that quantity: a larger number. Although we are indeed talking about a temperature change at the surface, therefore, it’s the absolute temperature at the effective radiation altitude, not at the surface, that needs to be used in Stefan-Boltzmann to determine the “Planck parameter.”
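To make the distinction concrete: compare the no-feedback warming per CO2 doubling using the Planck parameter T/(4F) evaluated at the effective radiation altitude versus at the surface. A sketch; the 3.7 W/m2 doubling forcing and the 255 K / 240 W/m2 pair are conventional values rather than numbers from the comment:

```python
# No-feedback warming per CO2 doubling, two ways.
DOUBLING_FORCING = 3.7            # W m^-2, conventional value

def planck_parameter(T, F):
    """dT/dF = T/(4F) from Stefan-Boltzmann."""
    return T / (4 * F)

# Evaluated at the effective radiation altitude (the argument above):
at_altitude = planck_parameter(255, 240) * DOUBLING_FORCING   # ~0.98 K

# Evaluated at the surface (as in the head post):
at_surface = planck_parameter(288, 390) * DOUBLING_FORCING    # ~0.68 K
```

The choice of level changes the answer by about 30 percent, which is the disagreement being argued over here.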

John, perhaps you can explain a couple of confusing issues to a simpleton like me. I am fully aware of the fact that energy, radiation and temperature are not the same things, so I don’t think my confusion is there.
My first confusion has to do with the gas laws. I understand that there can be an energy gain due to GHGs, but I do not understand how that can translate to higher atmospheric temperatures at surface altitudes unless the atmospheric pressure increases as a result of this increase in energy. Common sense tells me that the atmospheric volume would increase a little and the temps would stay the same, rather than the pressure go up. To me you seem to be somewhat implying this, but you still translate that into a surface temperature increase. What am I missing between the universal gas laws and the atmosphere? The second confusion that I have is with the radiation budget diagrams. I see no inclusion of long-term storage of energy within the biomass, and the rate of storage in the biomass increases with increasing GHG. Would this have something to do with my confusion on the first question? Maybe I am doomed to stay confused.

Perre DM:
Not sure I understand your question, but maybe this will help:
To a first approximation, don’t worry about the gas-law issues. Just think about the radiators.
Radiation that reaches space comes from molecules at the surface and at various altitudes in the atmosphere, but the overall effect is as though they were all at an effective radiating altitude somewhere in the troposphere, which is higher if there is more greenhouse gas and there are thus more radiators to make a thicker “blanket.” Due to the lapse rate, a higher effective altitude means a larger temperature difference between that altitude and the surface. But that altitude’s temperature must over time equal the level that results in the same amount of radiation as comes from space (and isn’t reflected), so it can be thought of as fixed, independently of what the effective radiation altitude is. Since the effective radiating altitude is therefore always at the same temperature no matter how high it is, but the difference between its temperature and the surface’s depends on that altitude, the surface temperature has to be higher for a greater effective radiating altitude.
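The blanket picture above reduces to simple arithmetic: the radiating altitude’s temperature is pinned near 255 K by the absorbed sunlight, and the surface sits lapse-rate-times-altitude above it. A sketch, with a conventional 6.5 K/km lapse rate and an illustrative ~5 km effective altitude (neither number appears in the reply):

```python
# Surface temperature as the fixed radiating temperature plus
# lapse rate times effective radiating altitude.
T_RADIATING = 255.0    # K, fixed by the absorbed solar flux
LAPSE_RATE = 6.5       # K per km, typical tropospheric value

def surface_temp(h_eff_km):
    return T_RADIATING + LAPSE_RATE * h_eff_km

T_now = surface_temp(5.0)       # ~287.5 K, close to the observed 288 K
T_higher = surface_temp(5.2)    # a higher blanket means a warmer surface
```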

Joe Born
I would question you about the assumption you make in the first paragraph — an assumption it most certainly is — as you yourself admit.
“If you start at pre-industrial CO2 concentration (thus atmospheric optical density) the effective radiation altitude is h1, where the temperature is Teq, the temperature at which the Stephan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space. By convention, we can consider that concentration’s forcing to be zero.”
“By convention” means that the basis of your whole argument begins with an assumption.
Do you have any proof for it? Isn’t it just another warmist meme that is “cherry picked”? To put what you are saying another way, using a well known cultural image —
O Noes, we were living in the garden of carbon dioxide paradise and we screwed it up!
Really, isn’t that what you are assuming? That we were just a couple of hundred years ago living in the garden of carbon dioxide paradise?
What if I say that carbon dioxide levels during pre-industrial times were too low and then blame the Little Ice Age on the lack of CO2 in the atmosphere?
I then could say that if in fact there is CO2-caused global warming we are merely returning to a time when CO2 concentration was truly optimal — and soon we will actually reach the concentration at which “the Stefan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space”.
Because you “assume” at the beginning, all your subsequent arguments are worthless. You are employing a well-known Sophist trick. Let me set the initial assumption and I will have you admitting that pigs can fly.
Eugene WR Gallun

Joe Born: So the change in surface temperature would need to be more than would be required for the surface radiation to increase by the forcing; it would instead be the temperature change required for the radiation-altitude radiation to increase by that quantity: a larger number. Although we are indeed talking about a temperature change at the surface, therefore, it’s the absolute temperature at the effective radiation altitude, not at the surface, that needs to be used in Stefan-Boltzmann to determine the “Planck parameter.”
You have completely missed, or decided to ignore, the point that the calculated change at the effective radiation altitude is not necessarily accurate for the calculated change at the surface. Indeed, the change at the surface depends on the changes in the non-radiative fluxes from the surface to the upper troposphere.

Eugene WR Gallun:
I am doing what Lord Monckton is doing: accepting the conventional assumptions for the sake of argument and determining what they imply. But I’m showing that they don’t imply what he imagines they do.
It’s true that I’ve additionally called the forcing at pre-industrial concentration levels zero, but nothing depends on that, since the sensitivity calculations depend on forcing and temperature changes, not on those quantities’ absolute values.
In other words, if I called the forcing at pre-industrial concentration levels, say, -300 W/m^2 instead of zero, the conclusions would have been exactly the same.
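Joe Born’s point that a constant offset on the forcing scale cancels can be checked in a couple of lines. The forcing and temperature numbers here are made up for illustration only.

```python
# Sensitivity depends only on *differences* in forcing and temperature,
# so a constant added to the forcing scale cancels out.
# All numbers below are illustrative, not measured.
forcing_before, forcing_after = 0.0, 3.7    # W m^-2
temp_before, temp_after = 287.0, 288.1      # K

def sensitivity(f0, f1, t0, t1):
    return (t1 - t0) / (f1 - f0)            # K per (W m^-2)

s_zero_basis = sensitivity(forcing_before, forcing_after,
                           temp_before, temp_after)
s_shifted = sensitivity(forcing_before - 300.0, forcing_after - 300.0,
                        temp_before, temp_after)
print(abs(s_zero_basis - s_shifted) < 1e-9)   # True: the -300 W m^-2 offset cancels
```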

matthewrmarler: “You have completely missed, or decided to ignore, the point that the calculated change at the effective radiation altitude is not necessarily accurate for the calculated change at the surface.”
I have no idea what you’re trying to say; I explicitly did say that a given temperature change implies a smaller radiation change at altitude than at the surface, where the temperature is higher.
As for the rest of your comment, it would perhaps help matters if you were to specify what quantities you refer to when you use “change.”

Joe Born: Any simple calculation of the no-feedbacks climate sensitivity has at its HEART the assumption that the earth can be treated as a blackbody. This is the simplest reason why such a calculation must be based on the blackbody equivalent temperature (255 degK). An object that emits 240 W/m2 and is not 255 degK is not a blackbody. Such calculations are meaningful if all of the non-blackbody behavior of the planet can later be introduced as feedbacks.

Joe Born: As for the rest of your comment, it would perhaps help matters if you were to specify what quantities you refer to when you use “change.”
Basically, I refer to changes in energy transfers from the surface to the troposphere that are “forced” by the hypothesized change in Earth surface temperatures. Specifics are in the post I wrote calculating “a sensitivity” of the Earth surface temperature to a 4 W/m^2 increase in downwelling LWIR “forcing”. It is possible, as I wrote later, that a 1C increase in Earth surface temperature would produce a 2C increase in the temperature of the “effective radiating surface”. The estimates are not to be considered terribly accurate, but can be improved by research.

Roy is of course right. In the expression for sensitivity, you need to use the total forcing. It’s true that a small rise in surface T will lead to a large surface upflux. But the air also radiates more. The downflux increases by a fairly similar (and linked) amount. It is the nett change that counts for sensitivity.

PierreDM, if the global atmosphere warms, then surface pressure remains the same because the total mass above the surface remains the same and the volume is not constant, that is, the atmosphere expands upward. Actually, the lower atmosphere would expand upward…since the upper atmosphere cools with global warming, it contracts. I don’t know what the net effect at top-of-atmosphere would be.
In other words, PV=nRT with increasing T only implies increasing P if V is held constant…which it isn’t.
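John’s point can be sketched numerically. Surface pressure is set hydrostatically by the mass of air overhead, which warming does not change, so in PV = nRT it is V, not P, that responds to T. The numbers below are illustrative.

```python
# Sketch: surface pressure is fixed by the weight of the column (P = m*g/A),
# so at fixed P a rise in T raises V: the column expands upward.
R = 8.314          # gas constant, J mol^-1 K^-1
P = 101_325.0      # Pa, fixed by the mass of air overhead
n = 1.0            # mol of air considered (illustrative)

def volume(T_kelvin):
    return n * R * T_kelvin / P   # m^3 at fixed surface pressure

v_cool = volume(288.0)
v_warm = volume(289.0)
print(v_warm > v_cool)   # True: warming expands the column upward
```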

I suspected that I might be looking at that equation backwards: I was assuming that the increase in volume keeps T constant because it’s a gas, instead of seeing the volume increase as a response to T. That would make what John said make sense. With a bit of further thinking along those lines I think the upper-troposphere hot spot makes more sense.
I am glad that I asked. I have a habit of looking at things backwards all the time because that is how I make my living. The golden rule for me is “a strong correlation between parameters (especially in nature) does not necessarily equate to causation”.
I work with inventing new adhesive formulations. I often look in the rear view mirror at material I have already covered for answers. Many times the answer I have discarded as incorrect because of biases or knowledge at the time of experimentation turns out to be the right answer. That was the case for relativity with Einstein.
I always remember a lesson I learned from my father about hunting deer. His words: “Don’t worry about finding the deer, because as soon as you enter the woods making noise the deer find you and follow you to keep track of you. To find them, turn around and freeze still; they will walk up on you.” He was right, but I still couldn’t shoot ’em.

Have you looked into how mussels and barnacles attach themselves to rocks so tightly that they can barely be blasted off with great effort?
That they do so while immersed in salt water and while being pounded by waves makes it all the more incredible.
I had heard some years back that the military was looking into this.
Any insight?
(Part of the work I do involves repairing electrical machinery, and sometimes this equipment is located in conditions that allow these organisms to become a real problem.)
I have seen these things stuck with amazing tenacity on surfaces that are usually thought of as being impossible to paint or glue because nothing will stick to them.

Roy W. Spencer: If the entire system warmed by 1 deg., then (as you say) the increase in surface emission is much greater than the increase in emission of the whole system to outer space. But it is the latter that matters in climate sensitivity, since it is what is actually lost by the system.
What matters to human and other life is the climate change on and near the surface. What happens at the “effective radiating temperature” and “effective radiating altitude” matters only as it affects the near surface changes. As it happens, a reasonable estimate of surface sensitivity can be calculated from published data about surface flows, as I posted below (I have posted it earlier at RealClimate and ClimateEtc.) An earlier version has been downloaded a few dozen times from my ResearchGate Page.
It is at the surface where the changes produce losses and gains to human and other life. The deep oceans and upper atmosphere complicate the calculations and require some sort of “equilibrium” to get an approximation relevant to the surface.

Dr Trenberth admitted to Dr Noor Van Andel that the atmospheric window was actually 66 W/m^2 and not 40. If that is correct then there is no need for the right-hand side of the heat balance cartoon. Evaporation 80 + convection 19 + radiation 66 then equals the incoming at the surface of 165 W/m^2.
The cartoons are cartoons and do not reflect reality.

To claim the net mean IR energy flux leaving the surface is given by the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink at absolute zero; a radiant exitance.
The problem is that since Goody and Yung (1989) Atmospheric Science has believed in a bowdlerisation of the ‘two-stream approximation’ as a bidirectional photon gas diffusion argument. This breaches Maxwell’s Equations (you must use two S-B equations to get the vector sum of all the Poynting Vectors) and the 2nd Law of Thermodynamics (it claims a vast increase of radiation entropy production rate for the Open Thermodynamic System).
This bad mistake is about the same as adding Electromotive Force (Volts), another EM potential term, as a scalar. They assume that the Pyrgeometer instrument, which outputs mostly the S-B exitance, proves the point, but it’s an exitance, not real.
The real net mean surface IR is determined experimentally AND SHOWN in Trenberth’s cartoon as 63 W/m^2. 40 W/m^2 is from image analysis of satellites in the Atmospheric Window; 23 W/m^2 is the difference of 160 W/m^2 SW heating and (17 W/m^2 convection + 80 W/m^2 evapo-transpiration). It is also the vector sum of (396 W/m^2 surface exitance and 333 W/m^2 atmospheric exitance), which is 63 W/m^2 integral of net Poynting Vectors.
So they exaggerate surface IR 6.3 times. They then put in an imaginary Down |OLR|, 238.5 W/m^2, making the lower atmosphere’s energy balance = 1.4 times real SW thermalisation. They add the imaginary 94.5 W/m^2 to the real 63 W/m^2, which cannot heat the lower atmosphere, to give 157.5 W/m^2 ‘Clear Sky Atmospheric Greenhouse factor’.
The experimental proof that this does not exist as a heating energy is that it implies a mean ~20 m atmospheric temperature of ~0.5 deg C, lower than at any time in the past 444 million years. The Last Glacial Maximum was ~10 deg C. A decade ago, Hansen admitted NASA had set out to measure Surface Air temperature (he uses ~50 ft), but couldn’t do it, so they model it instead. In my view since that time they have been bluffing it out, hoping that no-one realises they were always wrong in copying Sagan and Pollack’s original mistakes, three of them. Sagan later messed up aerosol optical physics as well – the sign of the AIE is reversed.
Also in 1977, Houghton in ‘The Physics of Atmospheres’ Fig. 2.5 shows that lapse-rate convection, needed for the gravitational temperature gradient from a virtual work argument, keeps that temperature difference at zero. Later, when he co-founded the IPCC and accepted Hansen’s false science, Houghton gave up science to become a religiously-driven politician preaching false science.
This is a mess and you can always tell who was taught this incorrect radiative physics; they transpose ’emittance’, an old term for the SI Unit ‘exitance’, for ’emissivity’, so they cannot communicate with the rest of science. This course module at MIT and the next one, show how easy it is to mislead students: http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node134.html

AlecM
Your link does not support your argument. I rather think it is you who misunderstands.
A body at temperature (gray bodies, not black bodies, are usually used for a more realistic approximation) emits in a range of wavelengths. As your link points out, these wavelengths change with temperature.
Real materials (not black or gray bodies) also have preferred absorbance and emittance bands. These vary by material and also by temperature.
However, the relation ɛσT⁴ is not some kind of “theoretical emittance” as you appear to think; it is the integral of the total radiant output across all frequencies. As such, it is not just some theoretical mathematical construct, but rather a real effect, expressed in W/m²; multiply by area and you get total actual radiative power output in watts. (I hope those symbols make it through WordPress.)
This total radiant output is dependent only on emissivity and temperature. Again for real materials you have to account for their actual radiation bands, but for most purposes a gray body is going to give you a decent real-world approximation at a given temperature. Also in the real world, emissivity can vary somewhat with actual temperature so nobody is claiming that it’s exact, but it’s a very good rule of thumb.
You appear to be saying that isn’t “real” output because the atmosphere interacts with that radiation. But that’s a misunderstanding. The output is the output. What it may interact with later might change the effective output at a distance, but that is the real output from the surface.
Heat transfer is another matter and depends on other factors. But saying the Stephan-Boltzmann equation is only some kind of theoretical beast divorced from reality is simply false. It’s a good approximation of the REAL radiative output. What you do above the surface can affect what that output does, but at a given emissivity the ACTUAL output at the surface is dependent only on temperature, according to that equation.
But to say “the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink” indicates that it is you who does not understand how it actually works. There need be no “sink”… at a given temperature, the output is the output. It can be in a styrofoam box or in deep space; as long as the temperature is constant, so will be the output. Nor is it measured at absolute zero; in fact other than emissivity, temperature is the sole variable on which everything depends.
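For what it’s worth, the ɛσT⁴ figure described above is easy to evaluate; at a typical mean surface temperature it lands close to the 396 W/m² shown in the diagrams. The 288 K input here is an assumed round number, not a measured global mean.

```python
# Radiant exitance eps * sigma * T^4: a flux density in W m^-2.
# Multiplying by emitting area gives power in watts.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def exitance(T_kelvin, emissivity=1.0):
    return emissivity * SIGMA * T_kelvin ** 4

print(round(exitance(288.0)))   # about 390 W m^-2 at a typical surface temperature
```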

AlecM: To claim the net mean IR energy flux leaving the surface is given by the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink at absolute zero; a radiant exitance.
So far, so good, but how inaccurate in practice is the S-B equation when used this way? 10%? (a figure in Pierrehumbert’s text); 20%?
The rest of your post tries, I think, to cover too many topics. If at the surface the S-B equation is accurate to within 20% (as with barely adequate electrical components in days long past), then the calculations using it are accurate enough to guide research toward more accurate approximations, and maybe enough for practical considerations like planning for future climate change. A 20% error from using S-B to calculate radiant energy change at the surface is smaller than the error (prevalent in most of these discussions) from ignoring the changes in the non-radiative energy transfers. As for “This course module at MIT and the next one, show how easy it is to mislead students”:
That link does not show anything about misleading students.

An interesting use of their own game rules against the Global Warming Theorists. Well done.
But they will just change their rubber rules…
BTW, the more basic problem with these energy budget graphs is that they are static scored. Daily we have day night cycles, so all those formulas need a sine wave cycle (and at a fourth power function this is a dramatic deal… what with sun in and IR out on different shifts). On a longer term, does anyone really think clouds and precipitation are a global average constant? Precipitation represents giant heat flow, via convection and phase change, to TOA, and has big decadal scale changes. (Clearly visible in flood reports vs drought cycles). So what happens to a 2 W radiative flux calculation when there is a 20 W precipitation change?
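The fourth-power point above can be made concrete with a toy two-temperature example: because T⁴ is convex, the flux computed from an average temperature understates the average of the fluxes. The day and night temperatures below are invented purely for illustration.

```python
# Toy illustration of why a static, averaged budget misses the day/night cycle:
# sigma * mean(T)^4 is less than mean(sigma * T^4) for any spread of temperatures.
SIGMA = 5.67e-8
t_day, t_night = 300.0, 280.0    # K, illustrative

flux_of_mean_temp = SIGMA * ((t_day + t_night) / 2.0) ** 4
mean_of_fluxes = (SIGMA * t_day ** 4 + SIGMA * t_night ** 4) / 2.0
print(flux_of_mean_temp < mean_of_fluxes)   # True, by convexity of T^4
```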
In short, you play their model game well, but it is about as real as a game of bridge.

Thunderstorms are likely to be the very largest movers of thermal energy on the planet, outside of the ocean currents.
And the hotter it is, the stronger the storm and the faster they transport energy to altitude.

Indeed, I often ask, but have no answers. Do all energy inputs have to manifest as heat?
“How much energy does it take to accelerate the hydrological cycle, and to grow 35 percent more bio-life? I only ask because I want to know.”

@Menicholas I’m not a scientist, but as I recall it has only been since the long-term orbiting of the space shuttle and international space station that we discovered that the electrical energy involved in thunderstorms had been underestimated by orders of magnitude. I seem to recall there is also evidence that interactions of the solar wind with the earth’s magnetic field induce at least some of the current in thunderstorms, which is an interesting addition to the atmosphere’s energy budget: energy that is induced instead of radiated. I don’t know, but it seems to me there have been a number of astonishing discoveries in the last half-century (such as that the biomass of the “other” kingdom of life that resides near deep-sea vents outweighs ALL of the photosynthesis-dependent Animal Kingdom) that haven’t yet been fully worked into the various “budgets” and “cycles”. In fact I find it curious that we live beneath a cascade of newly created energy and everyone is wanting to find equilibrium! It is really interesting to me that no one wants to assign the “perfect” equilibrium for the planet and how we plan to engineer staying at that point. Talk about interfering with nature!

“In short, you play their model game well, but it is about as real as a game of bridge.”
I don’t think their model game is even as real as a nice game of bridge. One can look at Kiehl & Trenberth (1997) and see the stupid. It is amazing to me that the scientific community thinks that horror is science.

I may be missing something here, but I don’t understand equation 2.
If F = σT^4, then T = (F/σ)^(1/4).
dT/dF = (1/σ)^(1/4) · (F^(-3/4))/4 = (1/σ)^(1/4) · 1/(4·F^(3/4))
I think the mistake is inversion of the derivative. In general dx/dy does not equal 1/(dy/dx).
For example:
y = x^2, dy/dx = 2x
then
x = y^(1/2), dx/dy = 1/(y^(1/2)), which does not equal 1/(2x)

A final note. The second of the two derivatives shown above, dT/dF, can also be written as
dT/dF = (1/(4·σ^(1/4))) · F^(-3/4),
which in turn is
T/(4F), since T/F = (F/σ)^(1/4)/F = (1/σ^(1/4)) · F^(-3/4).
Interesting … looks like I might be wrong and his Lordship was indeed correct … per Mathematica
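The same check can be done without Mathematica. A quick finite-difference sketch, using the σ from Eq. (1) in the head post (the 390 W/m² input is an assumed round figure), confirms numerically that dT/dF for T = (F/σ)^(1/4) matches T/(4F):

```python
# Numerical check that dT/dF equals T/(4F) for the Stefan-Boltzmann relation.
SIGMA = 5.67e-8   # W m^-2 K^-4

def temp(F):
    return (F / SIGMA) ** 0.25    # invert F = sigma * T^4

F = 390.0                          # W m^-2, illustrative surface flux
analytic = temp(F) / (4.0 * F)     # the T/(4F) form from the head post
h = 1e-4
numeric = (temp(F + h) - temp(F - h)) / (2.0 * h)   # central difference
print(abs(analytic - numeric) < 1e-6)   # True: the two agree
```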

BTW, the Oceans are the most significant factor driving the climate, not CO2.
“Ocean circulation is responsible for the immense global heat transport that affects every facet of world climate. And yet, the scientists remain perplexed as to the mechanics and intricacies of the entire system.
But what do the climate scientists know for sure? Well…that human CO2 emissions have absolutely nothing to do with this natural ocean circulation heat transport and weather system. That part is settled.” http://www.c3headlines.com/2015/06/settled-climate-sciencereally-research-finding-ocean-currents-more-powerful-than-co2.html
Just look at the temperature graph and how El Ninos totally distort the charts. http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_Oct_10.gif
BTW, how does atmospheric CO2 warm the oceans? It doesn’t. So we either have three forces in action: 1) one force to warm the atmosphere (i.e., man-made CO2), 2) one force to warm the oceans (i.e., visible light reaching the oceans), and 3) one force that is increasing all non-man-made GHGs other than CO2. Or we can have one natural force doing all three: the sun’s radiation reaching the earth warms the oceans and surface, and the warming causes outgassing of the oceans and biosphere, all of it natural. Or is a trace gas made by man the cause? I go with the single natural force, which explains all observations with a common-sense explanation.

We have a sun, our only heater, its cycles and moods controlled by the various perambulations of the giant planets. These planets are the final arbiters of our heating and cooling; not only is it all predictable, the forces involved also create our vulcanism, which can easily be tied to the various moods of our sun and planetary system. Solar science, warmanist science and indeed standard model science have been barking up the wrong trees for my entire life. Lord Monckton is trying valiantly to disprove their nonsense by playing in their backyard and using their faulty maths against them. Thank you sir.

The energy budget diagrams are a load of nonsense. They aren’t in units of energy and they assume the earth is flat and doesn’t have a day and a night. How they ever got through a peer review process beggars belief – but this is climate “science”.

Phillip Bratby
You wrote: “The energy budget diagrams are a load of nonsense. They aren’t in units of energy and they assume the earth is flat and doesn’t have a day and a night.”
Your comment is nonsense. The top of atmosphere flux is given as 340 W/m^2; that is plainly for a sphere, not a flat surface. The numbers are global averages. You are aware that is always daytime over about half of the Earth’s surface?

Have to agree with you Mike. It’s unfortunate that Phillip makes statements that make no sense. Incoming insolation is around 1366 W/m^2 and I’ll let him figure out why the diagram shows 1/4 of this amount. Also, what units would Phillip use? Maybe he doesn’t know that the first energy balance diagram was made in 1911 and the numbers are only being refined with more modern equipment.

Of course I am aware that it is daytime over half the surface. Since when has energy flux been in units of W/m^2? Are you not aware that global averages are totally meaningless (unless you think it makes sense to average day and night and the polar regions with the equatorial region)?

Phillip Bratby wrote: “Since when has energy flux been in units of W/m^2? ”
Since always. https://en.wikipedia.org/wiki/Flux#Flux_as_flow_rate_per_unit_area
“Are you not aware that global averages are totally meaningless”
So you think that the total amount of energy entering and leaving the planet is meaningless? Really?
I suppose you will respond that the issue is not “total” but “average”. If you are so innumerate that you don’t see that the difference is trivial, I don’t know what to say.

Phillip you are totally correct. These “energy budget” cartoons are a bad joke. Start with the wrong units, add error bars that are larger than the supposed “imbalance” and end up “proving” that CO2 “traps heat”.
Any further analysis based on these cartoons is also flawed.
It is a large macro system that is never ever in equilibrium anywhere. You cannot assign a global “energy flow” and do any useful analysis.
The whole climate science “energy budget” exercise is pseudo science.
Cheers, KevinK

Lord Monckton,
You wrote: “Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude … or … sensitivity is harmlessly low”.
It is neither.
There is another possibility, as already indicated by Roy Spencer: You have misunderstood something fundamental. Specifically, delta_F is the change in flux; it is equal to the change in forcing only at the emission altitude. So no, you have not discovered some silly error in computing the radiation budget.
I must say that I find your claim that “you only ask because you want to know” to be somewhat disingenuous. If your claim was honest, you’d have stopped with the question, rather than following it with extensive unfounded speculations based only on your misunderstanding.

It is entirely logical to present the information/understanding one currently has and ask a question that would help clarify what, if anything, is wrong with that information. LM stated that he had asked the question before and it resulted in more futility than productivity, so is it not likely that he presented his course of thinking here along with the question to make responses to him as efficient as possible?

Dear Lord Monckton,
My knowledge of radiation is too long ago, so no comment on that part. Only a comment on: “Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.”
As we have used some 370 GtC up till now and the recoverable fossil fuels seem to be at least 10-fold that quantity (including coal), I am not sure that it will end at 500 ppmv. With “business as usual”, human emissions simply go up roughly quadratically over time, and as the increase in the atmosphere follows more or less in ratio, we may end at over 800 ppmv in 2100. Except for a massive switch to nuclear…

Sturgis,
Probably not in Western countries, but China is assumed to triple its CO2 emissions around 2030 and India may double in the same time frame… I suppose that it is also a matter of cost of fossil vs. cost of the alternatives where hydro and nuclear are the main large scale competitors.

Ferdinand,
Against China, I would set the US and other countries which have lowered their CO2 contribution by relying on natural gas instead of coal, and those benighted states which are trying to control their emissions at the cost of lives and treasure.
IMO it’s improbable that, even with China and India burning more coal and oil, the average annual increase in CO2 levels will climb from ~1.2 ppm in the 1958-2015 period (57 years) to 4.7 ppm during 2016-2100 (85 years), a near quadrupling. IMO the total should be less than for the most rapidly rising region, ie the forecast tripling in China.
IMO, 600 ppm by 2100 should be more like it. IIRC, Gosselin also computed that concentration as likely “peak CO2”. But of course no one knows the sink rate nor the future emission rate with any robustness, to put it mildly. IMO however, future CO2 growth rate is more likely to average 2-3 ppm per annum than the 4-5 ppm implied by a doubling by end of the century. The lower rate range yields CO2 levels for AD 2100 of 570 to 655 ppm rather than 800 ppm.
The 570 ppm level would of course be a doubling in 250 years from the 280-285 estimated for AD 1850.

The point is that with increasing amounts of manmade CO2 emissions, the carbon sinks are expanding and that is why the annual increase in CO2 is considerably less than the quantity of CO2 emitted by man on an annual basis.
The capacity of the carbon sinks in 1960 is less than their capacity today.
If that trend continues (i.e., the capacity of the carbon sinks today is likely to be less than it will be in 2050, and so on), and it has continued for the past 50 years so there is no reason to presume that it will suddenly stop, then even if China and India ramp up CO2 emissions by far more than the developed West cuts them, the increasing capacity of the sinks will mean that only a fraction ends up in the atmosphere.
It is difficult to see how a figure of more than 600 ppm can be reached by the end of this century, and it is likely to be less than that. It is probable that there will be about one full doubling from the 1800s level of CO2, ie., an increase from about 280ppm to about 560ppm.
Given that present observation suggests a sensitivity to CO2 so small that it cannot be measured using our best measuring devices, this increase in CO2 may add nothing to temperatures over and above that brought about by natural variation.

Sturgis,
Indeed as usual, it is impossible to predict the future… Maybe you are right and fossil fuel use per capita in China and India will not increase as high as in the Western world, either by better efficiency and/or more alternatives…
The “airborne fraction”, the part of human emissions (in mass, not origin) remaining in the atmosphere, varies between 10-90% year by year and 40-60% decade by decade. Sink rates were highest in 1975-1995 and from 2000 to the present.
Richard,
CO2 sinks do expand with the increased CO2 pressure in the atmosphere. That behaves as a remarkably linear process: since 1960 (Mauna Loa), the increase in the atmosphere quadrupled and so did the net sink rate, of course with the usual temperature caused year-by-year and decadal variability.
The IPCC with their Bern model expects an increase in airborne fraction, as they assume that the deep oceans buffer gets saturated, but until now there is no sign of that and the uptake by vegetation doesn’t have a restriction at all.
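The linear-sink behaviour described above can be sketched as a one-line model. The equilibrium level and rate constant below are assumptions chosen for illustration, not fitted values.

```python
# Toy linear sink: net uptake proportional to the CO2 excess over an assumed
# equilibrium. EQUILIBRIUM and K are illustrative assumptions only.
EQUILIBRIUM = 290.0   # ppmv, assumed balance point
K = 0.02              # per year, assumed sink coefficient

def net_sink(co2_ppmv):
    return K * (co2_ppmv - EQUILIBRIUM)   # ppmv removed per year

# Quadruple the excess and the sink rate quadruples too (the linearity claim):
low = net_sink(315.0)    # excess of 25 ppmv
high = net_sink(390.0)   # excess of 100 ppmv
print(abs(high - 4.0 * low) < 1e-9)   # True
```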

Ferdinand,
What exactly is assumed by “business as usual”? Does it account for the fact that developed country emissions have leveled off? And that developing country emissions should eventually level off? And that population should level off, and perhaps even begin falling after about mid-century?

The IPCC has different “scenarios”, which are based on growth and leveling off of population but an increase in energy use per capita in developing countries, where energy use is diverted – or not – from fossil fuels.
“Business as usual” is the scenario where there is practically no shift away from fossil fuel use.
The remarkable point is that until now the CO2 emissions – and the increase in the atmosphere – follow the IPCC scenario of maximum emissions, but temperatures follow the lowest scenario, where emissions leveled off after 2000.

Ferdinand,
“Business as usual is the scenario where there is practically no shift away from fossil fuel use.”
I’ve just done a bit of checking and I think I must disagree with this. IPCC does not seem to use the term “business as usual”. And whereas that term implies some sort of best estimate in the absence of concerted action, it seems that RCP8.5 is really a worst-case scenario, with a high population of 12 billion in 2100 and most additional energy coming from coal. See figures 12 and 14 at: http://www.skepticalscience.com/rcp.php?t=3#popgdp

According to the K-T energy budget, only about a third of the energy absorbed by the earth’s surface comes from the sun. I have not found anybody who can explain what the source is of the other two-thirds of the energy absorbed by the earth’s surface. I sure can feel my skin absorbing energy from the sun when it is shining, but I have never noticed any other source of energy warming my skin.

Philip:
All the energy comes from the Sun – just that some of it is back-radiated after being absorbed and re-emitted as IR by the ground. Like sitting next to a south-facing wall in the sun: LWIR comes back at you. So you have to factor in the fraction of that which is re-radiated back to the ground, and integrate as it does the same again, and again, etc. There is also re-emission from direct absorption of insolation by the atmosphere.

What complete nonsense. I am heated directly by the sun. The wall is also heated directly by the sun. So the wall is not being additionally heated by something else again and again. It is heated once by the sun just as I am.
What a marvellous scheme it would be otherwise! Next time I light my woodburning stove I’ll see if I can get three times as much heat into the room by surrounding it with some magic compound. What would you suggest I should use?

Ian:
The heat flow from the earth through the surface is about 0.05 W/m^2. It varies, higher at hotspots like Hawaii and Iceland, lower in mid-continent. But that’s the ballpark. Quite a lot less than solar energy flux.

I agree with you, Phillip.
Not being a scientist I tend to listen and (hopefully) learn but there seem to be some very odd figures in here.
For a start the incoming radiation is 340 (watts, I assume) of which 180 – round figures – is either reflected at TOA or absorbed by the atmosphere. Yet 185 reaches the surface so there is a discrepancy to start with.
Then we’re told that 65 are “lost” to thermals and evaporation yet surface radiation accounts for 400.
It looks very much like pluck a number out of the air to support your hypothesis.
Like you I’m not aware of anything other than the sun that actually warms anything.

newminster,
The diagrams are not always as self-explanatory as they might be, but they do balance. For example, using the IPCC diagram (which I picked as the most legible), 340 in at TOA (units are W/m^2), 76 reflected by atmosphere, 79 absorbed by atmosphere, 185 reaches surface, 161 absorbed by surface, 24 reflected by surface making 100 total reflected. No discrepancies. For the surface, 503 total absorbed (161 from the sun, 342 from the atmosphere), 502 total up (398 emitted IR, 104 convection of sensible and latent heat). An imbalance of 0.6 (rounds to 1) due to net heating of the oceans. No discrepancies.

I find all the energy budget cartoons rather amusing, as they should be.
Their basic mistake is averaging a square meter at the equator with a square meter at the pole. The square meter at the pole absorbs zero solar radiation and is continuously losing energy, whereas a square meter at the equator gains more energy than it loses, via radiation and evaporation.
The cartoons would be a lot more accurate and physically representative if they showed the net energy flux by latitude.

Well that might be more informative, maybe you could make it your project.
Then, when you have done that, you could average over all the square metres and summarise the results in a neat, easy to understand at a glance, cartoon.

jinghis,
Why stop there? You also have to distinguish by longitude, since emission, reflection, and cloud cover all vary with longitude. And of course, time-of-day and time-of-year. Oh, and absorption and emission in the atmosphere vary with height, so that should be included. All those details are in the models, so it is not like they don’t have the numbers.
I don’t see why they can’t put all that into a nice, easy to understand cartoon. Ridiculous.
Well, at least something is ridiculous here.

Mike M
“340 in at TOA (units are W/m^2), 76 reflected by atmosphere, 79 absorbed by atmosphere, 185 reaches surface, 161 absorbed by surface,” — Yep. With you so far.
“For the surface, 503 total absorbed (161 from the sun, 342 from the atmosphere),” — You just lost me.
161 absorbed by the surface from the sun. Since when was the atmosphere a heat generator? That’s my problem. How does the surface reflect (or radiate) more energy than it receives? Indeed more energy than the entire system receives.

Mike M. – “Why stop there? You also have to distinguish by longitude, since emission, reflection, and cloud cover all vary with longitude.”
No, they don’t. One square meter on the equator is pretty much equal to any other square meter on the equator.
Perhaps you are confused by the difference between latitude and longitude?

The proper distribution of spatial error in these estimates for our ovoid at varying inclination (not a sphere), with gridding mechanisms that date back to when detailed DEM/surface tomography was not available, is a significant problem. As is the mechanism for distributing and calculating clouds. None of which is going to be well represented in a cartoon.
I wish the various cartoons would simply represent the measured data differently from the calculated/hypothesized/modeled/guessed-at values.

Lord Monckton wrote: “Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude, in which event the value for surface flux density FS may well be incorrect and no one has any idea of what the Earth’s energy budget is …”
One applies equation 1 to the temperature at the characteristic emission altitude, not the surface temperature. When applying equation 1, you are assuming that the earth behaves like a blackbody, so you need to use its blackbody-equivalent temperature of 255 degK, not its surface temperature. In doing so, we lump all of the non-blackbody behavior into “feedbacks”. Lapse-rate feedback controls whether warming at the surface will be greater or less than warming at the characteristic emission altitude. When we talk about the no-feedback climate sensitivity of 1 degK for a 3.7 W/m2 forcing from 2XCO2 (without your factor of 7/6), we then make the assumption that warming at both locations is the same. Studies with climate models give a no-feedbacks climate sensitivity of 1.15 degK, because the earth doesn’t have a uniform temperature and the average of T^4 is greater than (average T)^4.
When you do calculations that treat the earth as a blackbody, the most you SHOULD say is that after an instantaneous doubling of CO2, the planet will warm SOMEWHERE until the 3.7 W/m2 imbalance is eliminated. Nothing in a blackbody analysis says that all of the warming couldn’t occur only in the upper atmosphere or just in polar regions. The surface emits only about 10% of the photons that escape to space, so it certainly isn’t required to warm. If you ASSUME equal warming everywhere, however, the no-feedbacks climate sensitivity would be 1.0 degK at the surface. But by eliminating the possibility of lapse-rate feedback, a disguised REQUIREMENT for equal warming everywhere has been imposed. This requirement doesn’t directly follow from the physics of blackbody radiation – it’s a function of what we chose to call a feedback.
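The arithmetic behind the 1.0 degK figure is worth making explicit. A sketch, assuming (as the comment does) the blackbody-equivalent temperature of 255 K and the canonical 3.7 W/m^2 forcing:

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

T_eff = 255.0            # blackbody-equivalent emission temperature, K
F = SIGMA * T_eff ** 4   # ~240 W/m^2 emitted flux

# First derivative of T with respect to F for F = sigma*T^4:
# dT/dF = T / (4F), the "Planck parameter" of the head post.
planck_param = T_eff / (4 * F)   # ~0.27 K per W/m^2

dF_2xCO2 = 3.7                   # canonical forcing for doubled CO2, W/m^2
dT = planck_param * dF_2xCO2
print(round(F, 1), round(planck_param, 3), round(dT, 2))   # 239.7 0.266 0.98
```

Using 288 K and the ~398 W/m^2 surface flux instead gives a noticeably smaller sensitivity, which is exactly the choice of temperature the comment is arguing about.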

+1
The TOA is cold, and warming there does not dictate warming where we live. Warmistas talk about adding energy to “the system” but they do not mention that it is the system’s five-miles-up region. I like your SOMEWHERE.

Michael: I’d like the “somewhere” that warming occurs to be mostly 5 miles above the surface, but I can’t convince myself that it will be. The drop off in temperature with altitude – the lapse rate – is controlled by the rate heat is convected upwards. A high lapse rate (rapid temperature drop with altitude) promotes more convection, but the convected heat increases the temperature of the upper atmosphere, reducing convection. So it is unlikely that all warming can occur high in the atmosphere, because that would reduce convection and leave the surface warmer. No simple calculation yields the average observed lapse rate, so we can’t predict from simple physics how the lapse rate will change.
Fortunately, increasing absolute humidity – a feedback – is certain to decrease lapse rate, so the GHE from increase water vapor is expected to be partially offset at the surface by a falling lapse rate. In other words, there will be more warming higher in the atmosphere than lower.
Saying that warming must occur SOMEWHERE shouldn’t be taken to imply it won’t happen at the surface of the earth. It is meant to remind us that calculation of a no-feedbacks climate sensitivity at the SURFACE from blackbody considerations requires ASSUMING that warming will be equal everywhere.
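Frank's point that a moister atmosphere follows a shallower lapse rate can be illustrated with the standard saturated-adiabat formula. A rough sketch, assuming the Magnus approximation for saturation vapour pressure and two illustrative surface temperatures (neither taken from the thread):

```python
import math

G = 9.81      # gravity, m/s^2
CP = 1004.0   # specific heat of dry air, J/(kg K)
RD = 287.0    # gas constant for dry air, J/(kg K)
LV = 2.5e6    # latent heat of vaporisation, J/kg
EPS = 0.622   # molar-mass ratio, water vapour / dry air

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation, in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def moist_lapse_rate(t_kelvin, p_hpa=1000.0):
    """Saturated adiabatic lapse rate, K per km."""
    es = saturation_vapour_pressure(t_kelvin - 273.15)
    qs = EPS * es / p_hpa                              # saturation mixing ratio
    num = G * (1 + LV * qs / (RD * t_kelvin))
    den = CP + LV ** 2 * qs * EPS / (RD * t_kelvin ** 2)
    return 1000.0 * num / den

print(round(moist_lapse_rate(288.0), 1))   # roughly 4.7-4.8 K/km at 15 C
print(round(moist_lapse_rate(303.0), 1))   # roughly 3.5 K/km at 30 C
```

The warmer (moister) case has the smaller lapse rate, consistent with more warming aloft than at the surface as water vapour increases.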

Gino, the atmosphere is not well mixed in absolute terms of water vapor and CO2 content, the main gases which absorb and emit IR. Lower atmosphere contains a larger absolute amount of these gases than the upper atmosphere. THEN…the temperatures are vastly different, and the IR emission is a strong function of temperature (but the IR absorption much less so).

Dr. Spencer,
Water vapor is widely variable in the atmosphere and rapidly reduces with pressure, but CO2 is quite well mixed: +/- 2% of full scale from the North Pole to the South Pole, everywhere over the oceans and above a few hundred meters over land up to over 30 km height, except for a lag from NH to SH and from ground level to high altitudes, as human emissions are mainly in the NH at ground level. See: http://www.nature.com/nature/journal/v288/n5789/abs/288347a0.html
A difference of 7 ppmv between tropopause and 33 km height on a level of 400 ppmv…
The highest variability is in the first few hundred meters over land, where there are lots of near-ground sources and sinks at work. But even if the first 1,000 meters were at 1,000 ppmv, that would hardly have a measurable influence on the radiation balance (per Modtran)…

It’s great that we can do all these diagrams and math to represent the Earth’s energy budget. They are definitely useful in understanding the overall climate system/radiation balance. Probably most of the guesstimates are very close; many are right on the money. However, the certainty in some of them, and in the final estimate, represented as the climate sensitivity for instance, is greatly exaggerated versus the reality of the uncertainty.
Wouldn’t it be great if we did have all the correct equations to accurately represent all processes and only had to plug in all the accurately measured values to come out with the precise solutions to tell us what we need to know………out to the year 2100.
I lost some of my math skills from 35 years ago, when learning atmospheric science. However, I gained much more in observing the global atmosphere since then. What is clear is the illusion, held by some (more educated and better at math than me), that representing the atmosphere with mathematical equations has provided them with the power to project beyond what is realistic……..because you can’t have absolute certainty when several elements that contribute to your product have (not clearly defined) uncertainty.

I am with you here Mike. I got excellent grades in several semesters of engineering calculus, but have little ability to remember much of it now.
My take is a little different on why these equations seem unlikely, to me anyway, to settle much of anything…at least not at the present state of understanding.
Every place I have ever witnessed discussions of radiation physics and the math involved in all of these calculations, I have seen various individuals, or factions of individuals, from various disciplines – many of them apparently highly knowledgeable, on both sides – arguing vociferously and with great acrimony over disagreements about details large and small of nearly every single aspect of every part of the process.
I find it unlikely, under these circumstances, that there is even one single person who has everything figured out correctly.
Maybe there is, but I have yet to see evidence of such a shining beacon of calculative and physical genius in our midst.
Murray Salby sure seems convincing, but so do many others.

Christopher
I can’t answer your question. However, I’d like to mention something which I believe has a bearing on it. Using average TSI throws off the scientists’ inputs for the radiative imbalance. It relates to your equation (4).
Let me begin by stating something that I’m sure you and many others know: using the assumed average TSI value of 1366Wm^-2 is a short cut which is supposed to average out the NH and SH summer variations in TSI of 1323Wm^-2 and 1413Wm^-2 respectively. This difference is due to the elliptical nature of the Earth’s orbit. Everyone accepts this averaging as being a reasonable expedient thus saving the labour of doing 365 daily calculations of equation 4 with different TSI inputs.
However, by conflating the average TSI figure with an equation derived from the T^4 element in the SB law, it ignores the greater emission from the Earth’s surface during the SH summer months due to its experiencing a higher equilibrium temperature. This won’t be offset by the lower emission of the NH summer months, because the equilibrium temperature at both points in the Earth’s orbit is calculated using a T^4 input. Therefore, the surplus emission in the SH summer (over and above the average using the average TSI value in equation (4)) more than cancels out the deficit in emission during the NH summer. This in turn means that there is a seepage of heat flux during the SH summer that isn’t being accounted for in equation (4). Indeed, it is also the case for that entire half of the orbital ellipse which contains the SH summer, though it’s much less marked towards the equinoxes.
The daily TSI readings are adjusted to 1 AU, so that at the height of the SH summer the higher figure of around 1413 Wm^-2 (depending on what the sun is doing that day) is scaled down to 1366 Wm^-2 using the 1/r^2 orbital-radius ratio, and any daily fluctuations of a few tenths of a Watt are carried through and mirrored in the final daily figure. That then gets averaged with the other daily values to 1366 Wm^-2. If I recall, the measurements are actually 6-hourly. This means that the raw data is not being used in equation (4), and consequently the surface emission, averaged over the year, comes out too low, the disproportionately higher SH summer emission not being accounted for because of the T^4 element. The result is that the calculated average surface temperature is too high and should be commensurately less.
There is a confounding issue here, which is that the Earth travels faster round the sun during the SH summer and so it experiences the higher TSI values for a shorter time. However, by conservation of angular momentum the orbital speed at perihelion and aphelion varies as 1/r, giving rise to a circa 3.3% variation in orbital speed over the year. TSI varies by circa 6.4% due to the 1/r^2 element. Seeing as the total energy received by the Earth’s disc is the integration of TSI over time, and the time spent near perihelion is inversely proportional to speed, it can be seen that the 3.3% speed-up during the SH summer isn’t enough to offset the 6.4% increase in TSI at that time. So, although the speeding up of the Earth dampens the extent of the surplus emission that’s unaccounted for, it by no means cancels it out.
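scute's T^4 point can be tested numerically. A crude sketch, assuming the quoted perihelion/aphelion TSI values, a constant albedo of 0.3, and a two-point average standing in for the full orbital integration:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.30     # assumed constant planetary albedo

def equilibrium_temp(tsi):
    """Equilibrium temperature (K) for TSI spread over the sphere."""
    absorbed = tsi * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

t_perihelion = equilibrium_temp(1413.0)   # SH summer
t_aphelion = equilibrium_temp(1323.0)     # NH summer
t_from_mean_tsi = equilibrium_temp((1413.0 + 1323.0) / 2)

# Because T goes as F^(1/4), a concave function, the mean of the two
# equilibrium temperatures is lower than the temperature computed from
# the mean TSI -- the direction scute argues, though only by ~0.03 K
# in this crude two-point average.
t_mean = (t_perihelion + t_aphelion) / 2
print(round(t_from_mean_tsi - t_mean, 3))
```

So the nonlinearity is real but, at Earth's eccentricity, amounts to only a few hundredths of a degree in this toy calculation.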

scute says…”However, by conflating the average TSI figure with an equation that is derived from the T^4 element in the SB laws, it ignores the greater emissivity of the Earth’s surface during the SH summer months due to experiencing a higher equilibrium temperature”
==================================================================
This is not in fact evident as far as I know. The atmosphere cools in the SH summer, despite this greatly increased input. And yes, the albedo increases in the NH. But the solar input into the world’s oceans massively increases, and that energy is lost to the atmosphere for a time as well; unlike the NH albedo loss, though, it is still within the earth. In my view there is much to be learned from the annual energy pulse.

Even if, for the sake of argument, one allows the IPCC warming forecast of 4 degC by 2100 due to manmade CO2, then thanks to Count Rumford (Benjamin Thompson) and his famous experiments helping to establish the equivalence Energy = Work = Quantity of Heat (all measured in joules, J), it is evident that, in comparison to solar power reaching the Earth, the CO2 effect is about as trivial as the effect a flea on the equator jumping off in an easterly direction would have in retarding the Earth’s rotation, according to Newton, when compared to solar and lunar gravity effects. Unless, of course, anyone can show me where I may have gone wrong at http://tinyurl.com/ot2hlp4

Matt asks:
“Should it be ‘I ask only because I want to know’ and not ‘I only ask because I want to know’?”
Yes: it should be “I ask only because I want to know.” Whoever penned the other, like 99% of those writing today, has no idea where to put the word ‘only’ – like most submissions to WUWT. They will say something like “The earth will only heat up when such and such conditions obtain”, when what they really mean to say is “The earth will heat up only when such and such conditions obtain.” Etc., ad nauseam.

To quibble over mathematics, one first has to accept the central premise of the diagrams, which is that the majority of the atmospheric surface temperature comes from heating of the surface itself. I reject this premise, so all the mathematics that follows is meaningless. It should be pointed out that the mathematics have been calculated on the assumption that the central premise is correct and variables adjusted until the outcome matches observation. But there is no experimental evidence that can verify this premise to begin with. In short the diagrams are nothing more than illogical assumptions with no basis in reality. There is no Greenhouse Effect.
It is the mass of the atmosphere itself that causes temperatures higher than a planetary body with no atmosphere at equal distance from the sun. A conclusion that should be self evident when one observes temperatures on other planetary bodies with gaseous atmospheres in our solar system.
Still, if you persist with the paradigm then a simple graph with CO2 levels on the x axis and global mean temperatures on the y axis, plotted once in 1850 and once today, should give you three different types of mathematical trajectories to connect the two dots. All three of which, if you extend the line along each trajectory in both directions, will disprove some aspect of the Global Warming hypothesis. Try it for yourself and then reread both the claimed CO2 contribution for the greenhouse effect as a whole and the future temperature predictions for the 560ppm mark.
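The two-dot exercise suggested above can be sketched for the logarithmic case. The numbers here are illustrative assumptions (285 ppmv and a zero-anomaly baseline for 1850; 400 ppmv and +0.8 C today), not values taken from the thread, and the fit ignores lags, aerosols, and natural variability:

```python
import math

# Assumed illustrative data points: (CO2 ppmv, temperature anomaly in C).
c1850, t1850 = 285.0, 0.0
cnow, tnow = 400.0, 0.8

# Fit T = a + b*ln(C) through the two points.
b = (tnow - t1850) / math.log(cnow / c1850)
a = t1850 - b * math.log(c1850)

# Implied warming per CO2 doubling under this bare curve fit.
per_doubling = b * math.log(2)

# Extrapolate to 560 ppmv (a doubling of the pre-industrial 280 ppmv).
t560 = a + b * math.log(560.0)

print(round(per_doubling, 2), round(t560, 2))   # ~1.64 per doubling, ~1.59 at 560 ppmv
```

Whatever one makes of the commenter's thesis, a two-point fit like this makes the extrapolation assumptions explicit, which is the point of the exercise.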

wickedwenchfan, you (and some others) continue to show your lack of understanding of the difference between lapse rate (due to gravity, specific heat of gas in atmosphere, and possible phase change of vapor to liquid/solid), which produces a temperature GRADIENT, and the cause of the actual temperature levels in that gradient. The actual temperature level is set by the energy balance at some average effective altitude of radiation to space. The change in effective altitude of radiation to space due to the change in quantity of optically absorbing gases (or particles) is the basis for the change in the atmospheric greenhouse effect.

I’ll tell you why answering your question (or trying to answer it) would give me a headache. Count the assumptions and approximations that go into each and every line of the algebra. Here is a partial list:
* Unit emissivity — emissivity of the Earth’s surface a) is not constant either in space or in time; b) varies with wavelength (one really has to do an integral to compute emissions/absorptions even for very simple systems); c) does not have an average value of 1 (averaged over space, time, and wavelength).
* Assuming some “average” value for top of atmosphere insolation. TOA insolation varies by 91 watts per square meter over the course of the year, peak to peak, as the Earth orbits with its current eccentricity. It spends longer out where it is farther away, so TOA insolation isn’t a nice, symmetric sine function with a mean in the middle of the peaks.
* Surface homogeneity of other parameters like albedo (which, like emissivity, is not constant in space, time or wavelength). Again, this almost certainly matters, as it is necessary to assume a peculiar inversion in order to explain why the Earth is coldest at perihelion (southern hemisphere summer) and warmest at aphelion (northern hemisphere summer). In southern hemisphere summer TOA insolation is well over 1400 W/m^2 — a “forcing” of roughly 45 W/m^2 compared to your assumed mean, with NH summer insolation down in the ballpark of 1325 W/m^2 (being very generous with the subtraction). Note that albedo comes off of the top in a single-layer model — if you want to look for a parameter that has a direct, immediate effect on the climate, albedo has to be close to number 1, as the total radiation that one has to balance is the TOA insolation times (1 – albedo). Increase the albedo (which again is not constant in space, time or wavelength) by just a tiny amount and, because TOA insolation is LARGE, you drop the balance by rather a lot.
* Then there is the dazzling detail in these figures. They have energy going up, going down, and the numbers are always for the entire planet. Have we a plausible way of measuring latent heat transfer as a function of height, integrated over the entire planetary surface, over a long enough period of time, over a spanning set of the other hidden variables in the system (things like the decadal oscillations, where to average over them would properly take centuries of detailed observations in depth), to be able to pin it down to within a percent? Two percent? Ten percent? And what are the assumptions that permit one to make ANY choice between these? (Bayes’ theorem requires us to weaken any conclusions we draw according to the certainty of these assumptions.)
One cannot plausibly solve the Navier-Stokes equations for the Earth at a resolution of 100x100x1 km^3 and expect to get a good answer for the climate decades into the future. One cannot solve the NS equations better by turning the entire planet into a single cell with an assumed cross-section to the sun of πR^2 and an assumed homogeneous radiating surface area of 4πR^2 and an assumed uniform constant average TOA insolation and an assumed unit emissivity and an assumed constant albedo, when the real albedo looks more like these graphs: http://www.climateforcing.info/Forcing/Forcing/albedo.html
Even if you google it up, you see estimates that range from 0.31 to 0.39. Let’s see:
P_1 = 1366(1 – 0.31) = 942.5 W/m^2 (TOA)
P_2 = 1366(1 – 0.39) = 833.3 W/m^2 (TOA)
P_1 – P_2 = 109.3 W/m^2
This gives one a small idea of the silliness of the entire enterprise. A change of 0.01 in albedo — roughly 3% — is equivalent to a change of roughly 14 W/m^2 in “average” TOA insolation. CO_2, in contrast, is estimated to be on the order of 1-2 W/m^2. Albedo changes are local, not global, and would have an amplified effect in the tropics.
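The albedo arithmetic above checks out mechanically; a minimal sketch using only the figures already quoted:

```python
TSI = 1366.0   # assumed mean TOA insolation, W/m^2

p1 = TSI * (1 - 0.31)   # absorbed power with the low-end albedo estimate
p2 = TSI * (1 - 0.39)   # absorbed power with the high-end albedo estimate
spread = p1 - p2        # spread between published albedo estimates

# A 0.01 change in albedo (about 3% of ~0.33) moves the absorbed power
# by TSI * 0.01 -- roughly 14 W/m^2 at TOA, an order of magnitude larger
# than the ~1-2 W/m^2 attributed to CO2 in the comment.
per_point_01 = TSI * 0.01

print(round(p1, 1), round(p2, 1), round(spread, 1), round(per_point_01, 1))
# 942.5 833.3 109.3 13.7
```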
Some very good questions are then — how accurately can we measure “global average albedo”? How variable is it? Does it have long term variability on top of short term variability? Is it part of a general nonlinear feedback process? Does it vary, nonlinearly, the same way in response to changes in the climate system at different points in space and time? And the big one:
Can we predict the albedo one, ten, a hundred, a thousand months into the future? Can we even predict the variation in the average albedo, whatever that means?
If we don’t know the albedo within 1%, and if we cannot predict the future evolution of the albedo within 3%, then using any single value of it in a computation that also assumes a constant TOA insolation and a planet with unit emissivity etc is not going to make much sense, is it?
If your only purpose is to show that there is sufficient uncertainty in climate science that the total climate log sensitivity to CO_2 could be anywhere from barely positive to 2-3 C — mission accomplished. But I don’t think your question has an answer outside of that.
rgb

Thanks, rgb.
I’ve been really struggling with Lord Monckton’s equations (never really mastered maths, keep going back to remind myself of the terms, etc) and trying to understand the consequences, and suddenly you put it into a proper perspective. A few earlier commenters – such as Joe Born and ‘Frank’ – certainly have relevant things to say, but I’m really relieved you came in! Proof or disproof of CAGW by circuitous appeal to S-B is actually getting pretty tedious, and I don’t think we are going to learn anything by that route.
Please keep those posts coming – saves my brain hurting too much

“does not have an average value of 1”
Trenberth says: “The surface emissivity is not unity except perhaps in snow and ice regions, and it tends to be lowest in sand and desert regions, thereby slightly offsetting effects of the high temperatures on longwave (LW) upwelling radiation. It also varies with spectral band (see Chédin et al. 2004 for discussion). Wilber et al. (1999) estimate the broadband water emissivity as 0.9907 and compute emissions for their best estimated surface emissivity versus unity. Differences are up to 6 W m-2 in deserts, and can exceed 1.5 W m-2 in barren areas and shrublands.”
“TOA insolation varies by 91 watts per square meter over the course of the year, peak to peak”
They are calculating a “global annual mean energy budget”. They calculate the annual total. It adds up.
“Have we a plausible way of measuring latent heat transfer…”
Yes, and a very simple one. What goes up must come down. Total rainfall is quite well known.
“One cannot solve the NS equations better by turning the entire planet into a single cell”
They are not solving the Navier-Stokes equations.
‘how accurately can we measure “global average albedo”’
Global albedo is primarily measured using CERES and ERBE. This caused them to correct their 1997 figure from .313 to .298.

Nick Stokes continues to defend the indefensible. Let’s talk albedo: rocks, sand, dirt, vegetation in all its wondrous and beautiful variety, pavement, roofs, fresh water, salt water, ice, snow, waves, clouds, each with dozens to thousands of varieties. The satellites give us a value, which was wrong originally and is still wrong. The albedo of the Earth varies minute by minute and acre by acre. Sure, tell me you know what it is…

RGB: IMO, You exaggerate the difficulties. dF/F = 4*(dT/T) allows one to calculate small changes in F and T without worrying about emissivity. A 1% error in albedo and therefore F is a 1.5% error in dT. The error caused by assuming a uniform temperature is real, but modest (15%). The elliptical orbit is worth 91 W/m2 in terms of irradiance, but only 23 W/m2 in terms of the whole surface of the earth and 16 W/m2 post albedo. So post albedo radiation is about 240 +/- 8 W/m2 during the year. Using an annual average introduces negligible error. Temperature dependent changes in albedo are cloud and ice-albedo feedbacks and not relevant to no-feedback climate sensitivity. N-S is only needed if you want to know where warming will occur, not average warming without feedbacks.
Most of the error in calculating a no-feedbacks CS of 1.0 degK comes from the non-uniform temperature being raised to the fourth power. A reasonable estimate of uncertainty might be +25% to -10%. All climate models agree with 1.15 degK to within +/-1%, though that doesn’t include systematic errors. The biggest problem with treating the earth like a blackbody and calculating a no-feedbacks climate sensitivity does not come from uncertainty – it comes from a lack of understanding of the assumptions being made in the calculation process. For example, Lord Monckton doesn’t understand why T should be 255 K instead of 288 K.
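Frank's claim that the average of T^4 exceeds (average T)^4 is a direct consequence of Jensen's inequality and is easy to demonstrate. A toy sketch with an assumed pole-to-equator temperature spread (illustrative values, not a real climatology):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Crude assumed surface temperatures (K), pole to equator.
temps = [230.0, 250.0, 270.0, 285.0, 295.0, 300.0]

mean_t = sum(temps) / len(temps)
emission_of_mean = SIGMA * mean_t ** 4
mean_emission = sum(SIGMA * t ** 4 for t in temps) / len(temps)

# Jensen's inequality: for the convex function T^4, the mean of the
# emissions exceeds the emission of the mean temperature.
print(mean_emission > emission_of_mean)   # True
print(round(mean_emission / emission_of_mean, 3))
```

For this particular spread the ratio is around 5%, the same order as the model-derived correction from 1.0 to 1.15 degK mentioned above.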

Hi Bob. It’s been almost a year since I had time to work on a little project on the subject, but a good example of what is going on with our system is that presently the SH gets an average of around 3-5 W/m^2 more power in a year than the NH, yet the NH is almost a degree C warmer on average (this is mostly from satellite database info). It should be obvious that all those essentially unknowable little details you mentioned give substantially different values that are of the order of this CAGW effect that is being attempted to be measured. If one simply used the NH vs SH values from this, the conclusion would be that an increase of 4 W/m^2 would cause a decrease in T of nearly 1 deg C. LOL.
One thing I cringe at is the attempt at using the feedback eqn. It might be somewhat useful, but I think the system is way too messy to yield anything useful. Breaking it down as a direct delta T effect by CO2, and then seeing what delta F can occur additionally due to that delta T, can be instructive. Absolute humidity increase is about the only positive feedback capable of causing a radical increase in T. It’s easily shown CO2 by itself can increase T less than 1 deg C for a doubling. With constant relative humidity, the absolute humidity increase is far from a doubling for this (not even for a 5 deg C increase), and that means the CAGW crowd’s major helper isn’t going to be nearly enough. Basically, their case actually hangs on the notion that an increase in water vapor and evaporation will somehow result in less cloud formation and lower albedo.
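The claim that constant relative humidity gives far less than a doubling of absolute humidity for a 5 degC rise can be checked against the Clausius-Clapeyron relation. A sketch using the Magnus approximation; the 15 C baseline is an assumption, not a thread value:

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation to saturation vapour pressure, hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# At constant relative humidity, absolute humidity scales with
# saturation vapour pressure (roughly 7% per degC near surface temps).
baseline = saturation_vapour_pressure(15.0)
warmed = saturation_vapour_pressure(20.0)

ratio = warmed / baseline
print(round(ratio, 2))   # ~1.37: 5 degC gives ~37% more vapour, not 2x
```

So the commenter's arithmetic holds up: even a 5 degC rise increases saturation vapour pressure by well under a factor of two.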

Very interesting point.
But, as far as I can tell, in principle it does not really matter what formula, equation or method is used for the calculation or mathematical estimation of what is called climate sensitivity: for as long as the “scientific” definition of it is plainly wrong, and does not even accord with any equation or method of such a calculation, any number assigned to the CS is wrong, in principle.
According to the CS definition, whether CS ~ 3C as per the IPCC or CS ~ 0.7K as per Lord Monckton, there is an unavoidable accumulation of heat in the system.
So in both cases, which actually represent two different systems, there will be an ever-increasing amount of energy in the system. If, for example, with CS ~ 3C there will be ~1.6C of continuing heat accumulation for every CO2 “doubling”, whenever that happens, in the case of Lord Monckton that will be ~0.4C. And even in the Monckton case the heat accumulation poses a problem of about the same magnitude for the “energy balance” book-keeping of such a system as it does in the IPCC case.
An ~0.4C heat-accumulation anomaly for a system with CS ~ 0.7K is as problematic as an ~1.6C anomaly for a system with CS ~ 3C, or an ~0.8C anomaly for a system with CS ~ 1.5C.
So in principle either they are all wrong or they are all right, regardless of the actual CS number.
Or, put another way: in principle either all AGW scenarios are wrong or all are right, regardless of which CS scenario it is – catastrophic, benign or mild – in accordance with the one principle hammered into the CS definition, in a very AGW manner.
I don’t know how that may help Lord Monckton with his question, but at least he may consider the above when deciding which of the AGW scenarios he goes for and favors.
I have a hard time understanding how someone, whoever that may be, can have the courage to approach and try to understand or explain the intricacy of the Earth’s energy budget (in climatic terms) through the CS angle, when the very definition of the CS violates the very principles and essentials of the methods, formulas or equations used to calculate it. But maybe that is just me, probably missing something here! (hopefully not my mind 🙂 )
Lord Monckton, no disrespect meant, honestly, and I still do like very much your CS estimate…:-)
Apart from all this, I do really appreciate and value your courage shown through years now, in this particular issue.
Cheers

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

Ah, so you’ve finally accepted what I said when discussing your ‘scibull’ paper, that the Planck feedback is a feedback and should be treated like one. Since it is THE feedback that keeps the planet stable it must be viewed that way.
You have come up with a simple but effective article. The only defect I can see is that it relies heavily on gross assumptions, ballpark figures about albedo. For example, I doubt that saying spectral emissivity is 1 across the spectrum is realistic.
How would a somewhat lower figure affect the results?
Also, since I’m sure you’d want to get the terminology correct you should correct the following:
“where F is radiative flux density in W m–2”
That is the total flux leaving the body, not the flux density. There is no surface area, shape or dimension implicit in that formula.
Good article.

“No, the Planck “feedback” is not treated in the climate-sensitivity equation in the same way as the true feedbacks.”
There are two ways to do it, either treating all the responses the same (in which case they are all called feedbacks, as in IPCC AR5) or by splitting out the Planck (Stefan-Boltzmann) response and calling the rest feedbacks. Some authors use one, and some the other. Mathematically identical, but semantically confusing.
Doing it the first way, the total feedback must be (and is) net negative for a stable system. Doing it the second way, the total feedback must be less than unity for a stable system. In that case positive or negative feedback means a temperature change greater than or less than the Planck response.
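The two bookkeeping conventions Nick describes can be shown to be mathematically identical. A sketch with assumed illustrative values (in W m^-2 K^-1; the Planck response of -3.2 is the commonly quoted figure, the other feedback terms are placeholders):

```python
forcing = 3.7                  # W/m^2, doubled CO2
planck = -3.2                  # Planck (Stefan-Boltzmann) response, W m^-2 K^-1
feedbacks = [1.6, -0.6, 0.3]   # placeholder water-vapour/lapse/albedo terms

# Convention 1 (IPCC AR5 style): lump the Planck response in with the
# feedbacks; the total must be net negative for stability.
total_response = planck + sum(feedbacks)
dT_convention1 = -forcing / total_response

# Convention 2: take the Planck-only response first, then amplify by
# 1/(1 - f), where f is the feedback sum scaled by the Planck magnitude;
# f must be less than unity for stability.
dT_planck_only = -forcing / planck
f = sum(feedbacks) / (-planck)
dT_convention2 = dT_planck_only / (1 - f)

assert abs(dT_convention1 - dT_convention2) < 1e-12
print(round(dT_convention1, 2))
```

Same number either way; only the semantics of the word "feedback" differ, which is exactly the confusion Nick identifies.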

In the climate-sensitivity equation, the initial forcing and, separately, the sum of the true feedbacks are multiplied by the Planck parameter, by that parameter alone, and not by any of the true feedbacks. For a discussion, see Roe (2009).

While the math may be interesting, what is often left out is the assumptions. The primary assumption is the well-mixed assumption. The primary gases (nitrogen and oxygen) are probably well mixed. The others, my guess is, are not well mixed at all. This casts considerable doubt on the use of the log function for some of the "greenhouse" gases, when we do not in fact know their actual concentrations in each instance.

Whiten, you say
“as far as I can tell, in principle, it does not really matter what formula equation or method used for the calculation or a mathematical estimation of what is called climate sensitivity, for as long as the “scientific” definition of it is plainly wrong, and actually does not even play right and in accordance with any equation or method of such a calculation, any number assigned to the CS is wrong, in principle.”
I agree – but surprisingly so does the IPCC which has itself now given up on estimating CS – the AR5 SPM says ( hidden away in a footnote)
“No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies”
but paradoxically they still claim that we can dial up a desired temperature by controlling CO2 levels. This is cognitive dissonance so extreme as to be crazy.
For a complete discussion of the inutility of the GCMs in forecasting anything or estimating ECS, see Section 1 at http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
The same post also provides estimates of the timing and amplitude of the coming cooling, based on the 60-year and especially the millennial quasi-periodicity so obvious in the temperature data, and using the neutron count and 10Be data as the most useful proxies for solar "activity".

Thank you Dr. Page, RGB at Duke, Whiten (and many others above) for these comments.
I too very much appreciate and support the courage and efforts of Lord Monckton, but the points you all bring up contain many ideas which float around in my head in some form, but I am usually not concise or eloquent enough to verbalize the ideas and jot them down in a way which is coherent.
Reading this article and the comments makes me feel much better about what I think I know, and what I am sure that no one really "knows".
Everything I read from the point of view of CAGW, or from those who believe those who preach it, makes me feel very bad (at turns angry, dismayed, incredulous, fatalistic, mirthful, and sometimes physically ill), every time, and these feelings seem to be getting worse.
Where is it going to end? So many are actually doubling down on the lunacy, and so many of these have great influence and power.

RE: WARMING IN THE PIPE
If you do the math, you’ll find that it would take over 500 years at the current “energy imbalance” to raise the temperature of the oceans just 1C. And even with that, the difference between the ocean temperature and the average surface temperature would still be about 15C (making little difference in heat absorbed). Because of this, with respect to anthropogenic emissions…the ocean is effectively a bottomless pit of thermal storage.
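The arithmetic behind the 500-year figure can be checked with round numbers. A minimal sketch, assuming ballpark values for the total ocean mass (~1.4e21 kg), seawater specific heat (~4,000 J/kg/K), Earth's surface area (~5.1e14 m²) and a commonly cited top-of-atmosphere imbalance of ~0.6 W/m² (all four are my assumptions, not figures from the comment):

```python
# Rough check: years of a planetary energy imbalance needed to warm
# the whole ocean by 1 K. All inputs are ballpark assumptions.
OCEAN_MASS = 1.4e21       # kg, total ocean mass
SPECIFIC_HEAT = 4.0e3     # J/(kg K), seawater
EARTH_AREA = 5.1e14       # m^2, total Earth surface
IMBALANCE = 0.6           # W/m^2, assumed TOA energy imbalance

energy_needed = OCEAN_MASS * SPECIFIC_HEAT * 1.0   # J for a 1 K rise
power_in = IMBALANCE * EARTH_AREA                  # W intercepted
years = energy_needed / power_in / 3.15576e7       # seconds per year

print(f"{years:.0f} years")   # ≈ 580 years with these round numbers
```

With these inputs the answer lands near 580 years, consistent with the "over 500 years" claim; a smaller assumed imbalance lengthens it proportionally.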

In the words of Darth Vader, "Nothing can stop that now." The heat exchange between the warm surface and the rest of the ocean is far more effective than that between surface and air. To claim otherwise is to claim that the 2nd Law is violated (see here).
Even if hot surface water temperatures did raise atmospheric temperatures, it would be temporary. As the surface energy spreads throughout the ocean, temperatures would drop. There is no getting around the fact that a steady state increase is extremely difficult to pull off without many orders of magnitude more energy.

VikingExplorer
June 28, 2015 at 10:06 am
Unless I misunderstand your point above, the only violation of the 2nd Law you mention is in the interpretation of the given event, as in your case.
There is actually no violation of the 2nd Law, unless that would satisfy and please one's beliefs and predetermined world view.
When energy (heat) moves from the oceans to the atmosphere, both the surface and the atmosphere are expected to show a warming signature, and in the meantime the CO2 emissions go up.
A significant discrepancy here, like the one in question lately, where the warming signature of the surface is considerably higher than that of the atmosphere, does not necessarily mean any violation of the 2nd Law, because by relying on that very law you may just have the answer to it.
If the atmosphere is losing energy (heat) to outer space, then in this particular case the surface will show warming for a while, while at the same time the atmosphere may show no warming at all.
That actually explains why in the first place the CO2 emissions keep going up with no observed atmospheric warming, and why the heat is and must be moving from the oceans into the atmosphere (and out to the deep-freezing space).
One's beliefs and world view cannot actually violate or break such a law.
cheers

“If there is an imbalance, a change in mean temperature will restore equilibrium.” There is no equilibrium. Earth is a system that is not at equilibrium (a rotating body with the Sun close by and cold space all around cannot be). It is a dynamical, complex, nonlinear system that evolves far from equilibrium. There is not even a dynamical equilibrium. And mean temperature has nothing to do with equilibrium anyway; it is an unphysical quantity. For a system that evolves towards equilibrium, other mechanisms drive it; the mean temperature is not among them.

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

No comment on the overall point, but the above is incorrect or misleading. The statement is missing two things: (1) any hint that this is a transient phenomenon of unspecified time scale; (2) any reference to the size of the reservoir.
An imbalance simply implies that the energy level of the Earth would change, and that would be reflected in some internal temperature change, phase change, or work performed. Mercury is still warming up, and Jupiter is still cooling down.
Consider Lake Erie, where the water level is determined by water coming in (Lake Huron, rain, rivers) minus water going out (Niagara Falls, evaporation). Most of these processes are not directly dependent on the water level.
The inflow from Huron depends on the level of Huron minus the height of the land blocking the flow. The outflow depends on the level of Lake Erie minus the height of the land blocking the flow. An increase in water also increases surface area, increasing evaporation.
Lake Erie may be slowly evolving from a lake to a river that flows from Lake Huron to Lake Ontario. However, there is no physical law of equilibrium that is striving to restore balance.
In this analogy, claiming that increasing CO2 would raise the temperature of Earth would be equivalent to claiming that blocking off a small part of the American Falls would raise the level of Lake Erie.
Even IF man were raising the radiative resistance slightly, there are several possible consequences:
a) The incoming radiative resistance is also raised (20 – 24% of TSI and a majority of near infrared radiation is absorbed by the troposphere).
b) The extra energy could cause additional phase change or work to be performed (additional emergent phenomena as Willis describes it).
c) It could simply result in a warmer troposphere. Any down-welling radiation from this warmer troposphere is simply reducing how much the troposphere is warmed. The TOA is thus warmer than it otherwise would be, and radiates more to space.

Good work L V M of B.
I see you have been reading my email exchanges on S-B Law and role of emissivity since April 12, 2015. You are closing in on the vanishingly small effect of CO2 on global T. Thanks again for editing http://www.principia-scientific.org/professor-singer-finds-co2-has-little-affect-global-temperature-v2.html
The forcing of interest is not F but [CO2], which affects emissivity ε. We want
dT/dCO2 = dT/dε * dε/dCO2
Rearranging S-B Law for
T = (F/εσ)^0.25
dT/dε = 0.25(F/εσ)^-0.75 * (-(F/σ) ε^-2) = -0.25(F/σ)^0.25 * ε^-1.25 < 0
Therefore,
dT/dCO2 < 0.
This means cooling, so long as assumption F independent of [CO2] is valid.
To estimate how much, all you have to do is integrate atmosphere ε(T, P, C) through altitude to find bulk atmosphere effective εa and dεa/d[CO2] and use laws of radiant energy transfer to quantify the change in atmosphere Fa and surface Fs = 239 – Fa to find global ε and corresponding T’s (T, Ta , Ts).
Better to think of the S-B Law as giving F or I, intensity, irradiance, rather than flux. It is only flux when the surroundings are at T = 0 K. The driving force for radiant energy transfer from 2 to 1 is I2 – I1.
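Whatever one makes of the physics (the replies below dispute it), the calculus in the comment above can be checked numerically: the analytic dT/dε should match a finite difference and be negative. A sketch, where the values of F and ε are illustrative assumptions only:

```python
# Numerical check of the calculus above: dT/d(eps) for
# T = (F/(eps*sigma))**0.25 should match the analytic form
# -0.25*(F/sigma)**0.25 * eps**-1.25 and be negative.
# F and EPS values are illustrative assumptions only.
SIGMA = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
F = 390.0         # W m^-2, illustrative flux density
EPS = 0.95        # illustrative emissivity

def temperature(eps):
    return (F / (eps * SIGMA)) ** 0.25

analytic = -0.25 * (F / SIGMA) ** 0.25 * EPS ** -1.25
h = 1e-6
numeric = (temperature(EPS + h) - temperature(EPS - h)) / (2 * h)

assert analytic < 0                                   # cooling sign
assert abs(analytic - numeric) < 1e-4 * abs(analytic) # forms agree
print(round(analytic, 2), round(numeric, 2))
```

This only confirms the differentiation; it says nothing about whether F really is independent of CO2, which is the disputed assumption.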

Mr Latour, in the dishonest fashion of the Slayers, suggests that I am "closing in" on their position. He also suggests I "edited" his nonsense, dishonestly stating that Professor Singer is also "closing in" on the "there is no greenhouse effect" rubbish. Professor Singer accepts, as does everyone who accepts the results of oft-repeated experiments, that there is a greenhouse effect. Accordingly, my alleged "edit" was confined to removing references to Professor Singer's name.
For some years, the corrupt organisation that promotes the nonsensical notion that there is no greenhouse effect has fraudulently used the names of many eminent scientists to attract donations by falsely asserting that they support its bizarre belief system. That matter is now before the investigating authorities, who are reviewing the evidence and will decide in due course whether and whom to prosecute for obtaining a pecuniary advantage by deception.
It would be wiser, therefore, if Mr Latour were to stop using Dr Singer's name and, for that matter, mine in any context that might in any way be interpreted as implying we do not think there is a greenhouse effect.
Mr Latour is additionally and characteristically dishonest in his false implication that I am shifting to a position I have not held from the outset. In my first ever public statement on the greenhouse effect I said our enhancement of it could be expected to cause some warming, but on balance not very much. That remains my position and, so far at any rate, the last nine years have indicated that I am correct.
There is no incompatibility between recognising that there is a greenhouse effect and expecting an enhancement of that effect under modern conditions to be small.
The authorities have also been asked to investigate whether the corrupt organisation that thus makes free with the names of eminent researchers who do not in fact endorse its lunatic notions exists precisely for the purpose of discrediting not only them but all skeptics, by creating confusion through its repeated false suggestions that various prominent skeptics do not believe there is a greenhouse effect.

Your article you included with your link has a logical error. In your article, you claim that a body that has a higher emissivity is cooler than a body with lower emissivity under the same constant radiant flux. One could envision the inside of a large sphere that is a perfect blackbody emitter (but not necessarily). We then place another, smaller sphere in its center. Independent of the emissivity of the inner body, it will reach the same temperature as the outer body. It reaches its equilibrium only faster, the higher its emissivity is! Otherwise, you could run a heat engine between these two bodies, and that violates the 2nd law of thermodynamics.

“Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.”
A 50% increase in CO2 is unlikely, as CO2 would partition into the oceans at 50 to 1. Because of this effect, if we burned everything we have, everything, our homes included, we might be able to raise atmospheric CO2 by 20%. The oceans work against us: for every CO2 molecule added to the atmosphere, 50 are added to the ocean as it tries to go to equilibrium.

higley7,
You forget the time factor: until now, about half the human emissions per year (as mass) are absorbed by the oceans and the biosphere. The whole carbon cycle seems to behave as a linear process in disequilibrium, where the sink rate is directly proportional to the extra quantity (= pressure) in the atmosphere above the steady state for the current (ocean) temperature, which is around 290 ppmv.
The past (1960) and current (2012) sink rates show a similar e-fold decay rate of slightly over 50 years, or a half-life for the excess CO2 of ~40 years. That is not fast enough to remove all human CO2 in the same year as it is emitted. If the emissions go on as until now, that is, slightly quadratically over time, we can easily reach far higher levels (~800 ppmv for "business as usual"). Only if human emissions level off or drop will the levels in the atmosphere level off too, until emissions and net sink rate are equal, or drop to steady state if all emissions ceased.
Finally, the human emissions will be redistributed over atmosphere, biosphere and (deep) oceans, but that needs a lot of time, as the uptake speed of oceans and vegetation is limited.
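The e-fold claim above implies a simple first-order relaxation model. A sketch of that model, taking the ~290 ppmv steady state and an e-folding time of "slightly over 50 years" directly from the comment (both are the commenter's figures, not independently endorsed here):

```python
import math

# First-order relaxation of excess CO2 toward a steady state, as the
# comment describes: sink rate proportional to the excess above 290 ppmv.
STEADY = 290.0   # ppmv, assumed steady-state level
TAU = 52.0       # years, e-folding time ("slightly over 50")

def ppm_after(start_ppm, years):
    """CO2 level after `years` with all emissions stopped."""
    return STEADY + (start_ppm - STEADY) * math.exp(-years / TAU)

half_life = TAU * math.log(2)
print(f"half-life ≈ {half_life:.0f} years")              # ≈ 36 years
print(f"400 ppmv after 100 yr: {ppm_after(400, 100):.0f}")  # ≈ 306 ppmv
```

Note a 52-year e-fold gives a half-life nearer 36 than 40 years; the comment is rounding.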

Geran,
Take the same greenhouse and add a lot of manure and/or plant debris: the CO2 levels will go up to 1,000 ppmv, only somewhat lower during the day for weeks to come…
The big greenhouse called Earth has several carbon cycles, where the exchange between atmosphere and biosphere is about 60 GtC in and 61 GtC out over the seasons: an uptake of some 1 GtC extra into the biosphere, caused by the 30% extra CO2 pressure in the atmosphere. See: http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Thus removing the current 230 GtC extra CO2 in the atmosphere above the steady state equilibrium for the current temperature (per Henry’s law: ~290 ppmv) will take a lot of time…

“If the emissions go on as was the case until now, that is slightly quadratic over time, we can easily reach far higher levels (~800 ppmv for “business as usual”).
No we cannot. There is no credible scenario where we get to 800ppm this century.
Here’s why:
1. Carbon uptake has consistently increased, and the ocean uptake will increase further, on a path that rises with the concentration of CO2. It's about 5 GT of carbon, over 50% of the 9 GT of carbon emissions. As ppm of CO2 goes up, so does the rate of carbon uptake; when we get to 550 ppm, the carbon uptake will be 10 GT, equal to current emissions. So if all we do is simply keep emissions at current levels, we will never go above 550 ppm. The ocean sink is so massive (unlimited, actually, due to calcium carbonate sedimentation) that we could emit 10 GT of carbon for 280 years and the oceans would soak up 7/8ths of it.
2. It's not credible to suppose emissions will increase quadratically in coming decades; in 2014 they did not increase AT ALL over 2013. If trends continue, my 8-year-old son will be 50 feet tall by age 35. OECD countries have flatlined and now even reduced emissions, and the rise of emissions due to China has now ended. The US has doubled GDP since 1970, yet uses less oil per capita. With or without carbon taxes or limitations, we will not emit that much more carbon, because we can grow without increasing energy use.
3. There’s not enough economic growth to sustain much higher emissions. Even a doubling of emissions globally would require 4x – 5x increases in developing nations’ emissions, but that implies more growth than is actually happening *AND* a reversal of the trends towards renewables and energy efficiency.
4. We are increasing CO2 by 2 ppm per year. Given #1, even if we increase emissions, the carbon sinks will grow as well, and given #2 and #3 it's very unlikely that emissions will grow that much. A 2% increase per year in emissions will lead to about 550 ppm by 2100.
5. Technology is moving forward at a pace that implies renewables will be cost competitive and displace fossil fuels by 2040.
6. Since the alarmists are hyping up this threat, enough to force commitments to emit WELL BELOW the 2% increase per year, it's likely we will not even hit 550 ppm.

“A 2% increase/ year in emissions will lead to about 550 ppm by 2100.”
I should correct/clarify myself. I meant a 2% increase until 2040, not 2100. I did an analysis wherein we estimate carbon uptake as increasing as ppm rises, and emissions increase 2% per year until 2040 and then flatline at around 16 GT (almost double current emissions) – that leads to 585 ppm. If you project a 1% decline in emissions after 2050, bringing emissions from 16 GT to 10 GT, it goes to 550 ppm.
The uptake assumption is that it increases linearly with increasing ppm.
In short, realistic scenarios in which global energy use in the 21st century tracks what has happened in OECD nations over the past 40 years would lead to 550-580 ppm by 2100.
Given both technology trends and uptake trends, we could make 600 ppm the upper limit. And given the work showing TCR is around 1.3-1.4 C, the upper limit of temperature change from now until 2090 is about 0.7 C.
A 2% increase ad infinitum (which does lead to 40 GT of output per year and higher CO2 levels), as I noted, is unrealistic on many levels (resource constraints, defies economic models, contradicts historical consumption patterns/trends).
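The bookkeeping behind these scenario numbers can be sketched as a toy model: emissions grow 2%/yr to 2040 and then hold flat, while the sink is proportional to the excess concentration above pre-industrial. The sink coefficient, GtC-to-ppm conversion and starting values below are my own illustrative assumptions, so the output is a rough scenario of the same kind, not a reproduction of the 585 ppm figure above:

```python
# Toy projection: carbon with a linear sink under a 2%/yr-to-2040,
# then-flat emissions path. All parameters are illustrative assumptions.
PPM_PER_GTC = 1 / 2.12   # ppm rise per GtC left in the air
K_SINK = 2.4 / 120       # per year: ~2.4 ppm/yr sink at 120 ppm excess
BASELINE = 280.0         # ppm, assumed pre-industrial level

c = 400.0                # ppm in 2015
e = 9.8                  # GtC/yr emissions in 2015
for year in range(2015, 2100):
    if year < 2040:
        e *= 1.02        # 2%/yr growth until 2040, flat afterwards
    sink = K_SINK * (c - BASELINE)          # ppm/yr removed
    c += e * PPM_PER_GTC - sink             # ppm/yr net change

print(f"2100: {c:.0f} ppm")   # lands around 600 ppm under these assumptions
```

Varying the sink coefficient or emissions path moves the endpoint substantially, which is exactly the point of the disagreement in this sub-thread.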

patmcguinness,
If and only IF the emissions don't grow as "business as usual", then you may be right, but until now the CO2 levels have grown with the worst scenario. One year of lower emissions isn't a trend, the more so as a lot of Western countries (and even China) didn't grow in economic activity, or grew less than expected.
As the response of the oceans and biosphere to the increased pressure in the atmosphere seems to be quite linear, steady emissions indeed would lead to a new steady state where emissions and sinks are equal but “business as usual” can lead to 800 ppmv and more in the atmosphere…
Thus everything depends on the future emissions…

Seawater dissolved inorganic carbon (DIC) is known to be about 2.2 millimolar at the surface. The air in contact with that surface is 0.4 millimolar CO2. So, 2.2/0.4 equals 5.5.
Basically, for 65 molecules of CO2 added to the atmosphere, about 55 will end up in the ocean and 10 will remain in the atmosphere.
That 50 to 1 number comes from using equal volume ratios.
For the change in CO2 from 280 to 400 ppm, the difference is 120 ppm. Basically, the same ratio applies, with about 100 ppm entering (and staying in) the ocean and the remainder, about 20 ppm, remaining in the atmosphere. This is consistent with many other approaches to estimating how much of the human CO2 is actually in the atmosphere today: about 20 ppm.
CO2 never “accumulates” in the atmosphere, it is part of a flowing biogeochemical river with huge abiotic (ocean) and biotic exchanges.

Menicholas,
All three observations are slightly quadratic: human emissions, the increase in the atmosphere, and the increase in net sink rate. There are of course year-by-year (10-90%) and decadal (40-60%) variations in sink rate, mainly caused by temperature, but the average sink rate has been 45-50% of the emissions over the past 115 years, so that the increase in the atmosphere is 50-55% of the emissions: http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg

bw,
You forget a few points… Solubility of CO2 in fresh water is very low. In seawater about a factor 10 higher (the buffer/Revelle factor), but even so, including all buffering, a 100% change in the atmosphere results in a 100% change in free CO2 in seawater, per Henry’s law, but only a 10% change in DIC, as free CO2 is only 1% of all forms of carbon. The rest is 90% bicarbonate and 9% carbonate.
That means that the 30% CO2 increase in the atmosphere is good for a 3% increase of DIC in the ocean surface or from ~1000 GtC to ~1030 GtC. Far from your 50:1 ratio.
The 50:1 ratio may apply to the deep oceans, but the exchange rate of the atmosphere with the deep oceans is much more restricted to polar sinking and equatorial upwelling, each only 5% of the ocean surface.
Thus while the ultimate distribution may be 1:50, that needs a lot of time and as human emissions still are increasing year by year, that accumulates in the atmosphere, because the response of the sinks is not fast enough…
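Engelbeen's surface-layer arithmetic above can be written out in a few lines. A sketch, taking the Revelle factor of 10 and the ~1,000 GtC mixed-layer inventory from his comment (his figures, used here only to show the arithmetic):

```python
# Surface mixed-layer DIC response to an atmospheric CO2 increase,
# damped by the Revelle (buffer) factor. Figures are from the comment.
REVELLE = 10.0         # seawater buffer factor
SURFACE_DIC = 1000.0   # GtC, assumed mixed-layer carbon inventory

def dic_increase(atm_increase_frac):
    """GtC added to the mixed layer for a fractional atmospheric rise."""
    return SURFACE_DIC * atm_increase_frac / REVELLE

# 30% atmospheric rise -> ~3% of 1000 GtC, i.e. ~30 GtC
print(round(dic_increase(0.30), 1))
```

That 30 GtC (3%) result is the "far from 50:1" point: the mixed layer alone cannot absorb emissions at anything like the equal-volume ratio.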

FE, what about dissolving of CO2 into fresh water as raindrops fall? Is it plausible that the pressure, friction, turbulence and mixing within a drop allow a significant amount of CO2 to be absorbed while the drop falls?

There's another way of getting the ocean uptake amounts. The keys are to know the amount of DIC (dissolved inorganic carbon) and the Revelle factor, which determines how a change in atmospheric ppm changes the total DIC percentage. Basically, most of the DIC is in the form of carbonates, and adding CO2 changes the chemistry ratios of CO2, HCO3 and CO3 that make up the DIC. The amount of DIC in the whole ocean is MASSIVE – 37,000 gigatonnes of carbon as DIC. This is 70x the amount of CO2 in the atmosphere (at about 560 GT or so). The Revelle factor is about 10.
What this means is that if atmospheric CO2 goes up by 30%, you divide by 10 and DIC in the ocean goes up by 3%. You multiply that by the ocean DIC. So what if we double CO2?
Then ocean DIC can go up by about 10% of 37,000 GT, or 3,700 GT. Since doubling CO2 means going from 560 GT to twice that, the oceans will take up 3,700 GT vs the atmosphere's 560 GT.
Hence, over time (and this is a slow process, moving about 1.6 GT of CO2 from the surface ocean to the deep ocean per year), 7/8ths of emissions will end up in the ocean. It also means that there is some level of emissions at which we would not increase CO2 ppm, and that zero emissions would actually cause CO2 ppm to go down.
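The 7/8ths figure follows directly from that arithmetic. A sketch using the comment's own round numbers (37,000 GtC whole-ocean DIC, ~560 GtC atmospheric carbon, Revelle factor 10), which are the commenter's figures rather than vetted values:

```python
# Long-run partitioning of added carbon between ocean and atmosphere,
# using the whole-ocean DIC pool and Revelle factor from the comment.
OCEAN_DIC = 37000.0   # GtC, whole-ocean dissolved inorganic carbon
ATM_CARBON = 560.0    # GtC, the comment's round atmospheric figure
REVELLE = 10.0        # buffer factor

# For a doubling of atmospheric CO2 (a +100% fractional rise):
ocean_uptake = OCEAN_DIC * 1.00 / REVELLE   # 3700 GtC into the ocean
atm_gain = ATM_CARBON * 1.00                # 560 GtC left in the air
ocean_share = ocean_uptake / (ocean_uptake + atm_gain)

print(f"ocean share ≈ {ocean_share:.2f}")   # ≈ 0.87, roughly 7/8
```

As Engelbeen notes above, this is the equilibrium split; the deep-ocean exchange that delivers it operates on century timescales.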

Mr. Engelbeen,
I appreciate the response, but I am not sure that you have explained anything.
BTW I dispute the temperature part of the graph you posted.
I do not think it represents objective reality.

Curious: has the rate of diffusion of dissolved gases and ions into the water column ever been measured?
In other words, how long might it take for CO2/bicarbonate/carbonate in surface water to diffuse to the deep ocean?
Do we have a number for that? I understand it must be rather low, but I am not sure how low.

Aaron,
I once calculated the amounts of CO2 in rainwater: while it dissolves carbonate rocks (though even that needs millions of years to carve the beautiful caves…), what is dissolved where the raindrops form is at most 1.32 mg/l at 0°C. 1 l of rainwater is formed out of 400 m3 of air and takes time to form, thus the water there is probably completely saturated with CO2. Even so, that hardly affects the CO2 levels at height. While the drops fall, temperature in general increases, which means that even less CO2 is retained…
1 l/m2 gives 1 mm of rain where the drops fall. If all that water evaporates, setting all CO2 free, that gives less than 1 ppmv extra in the first meter of air without wind mixing…
Thus while the total water cycle is enormous and a lot of CO2 is moved along with it, that is hardly measurable in the concentrations, both in the high atmosphere and near the ground.
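The ~1.32 mg/l figure is roughly what Henry's law gives for cold water under today's CO2 partial pressure. A sketch, where the solubility constant near 0°C (~0.077 mol/(L·atm)) is my assumption from tabulated values, not a number from the comment:

```python
# Henry's-law estimate of dissolved CO2 in cold rainwater.
# The solubility constant is an assumed tabulated value near 0 deg C.
KH_0C = 0.077    # mol/(L atm), assumed CO2 solubility near 0 deg C
P_CO2 = 400e-6   # atm, partial pressure of CO2 in today's air
MW_CO2 = 44.01   # g/mol

mg_per_litre = KH_0C * P_CO2 * MW_CO2 * 1000.0
print(f"{mg_per_litre:.2f} mg/l")   # ≈ 1.36 mg/l, close to the 1.32 quoted
```

Solubility falls quickly as the drop warms on the way down, which is the comment's point about CO2 being re-released.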

Menicholas,
The temperature trend is from HadCRU ocean temperature. No matter which other trend you take, the trends are similar, though one can doubt the slope. What is sure is that the impact of temperature (variability and trend) on the CO2 levels is restricted: 4-5 ppmv/K for short-term variations, up to 8 ppmv/K for (very) long-term influence.
That is also what Henry’s law says for the solubility of CO2 in seawater (4-17 ppmv/K in the literature).
In the above graph it is clear that the huge variability in temperature has little influence on the CO2 increase in the atmosphere. Neither has the trend, as the period 1945-1975 shows a cooling and 2000-current is flat while CO2 levels simply follow human emissions.
Diffusion of CO2 in water is very slow. Only by wind and waves is the mixing between the ocean surface layer (the "mixed layer") and the atmosphere fast. Besides some carbon particles (organic and inorganic – shells) of dead plankton and fish excrements, there are only limited exchanges between the ocean surface / atmosphere and the deep oceans.
There are several works which have followed human CO2 into the deep oceans; one of them is here: http://www.pmel.noaa.gov/pubs/outstand/sabi2683/sabi2683.shtml
As there is a slight difference in 13C/12C ratio between fossil fuels carbon and oceanic carbon, the changes can be traced back.
Other general exchanges are known from tracers like the 1950-1960 peak in 14C caused by the open-air nuclear bomb tests, and from millions of measurements over the years by research-ship cruises from the surface to depth…

Here is an example of the fatal flaw in the “energy budget” posited by the climate science community. And you can do this in your front yard (but not so easily in the winter).
Take two pieces of pipe, same diameter/length: one made of steel (or copper) and the other made of PVC (or another plastic). Paint them both gray to make the albedos identical (not that it matters). Place them both on your front lawn exposed to bright summer sunlight. Wait a while.
Both pipes are receiving the same incoming energy flux (units of Watts per surface area), and both will heat up to about the same temperature as the surrounding grass. Convective cooling will keep them both at about the same temperature.
Now pick one up in each hand, in “sunny” areas of the Earth you are likely to wince in pain and drop the steel pipe while the plastic pipe will feel comfortable to hold in your hand.
Why ? Thermal capacity and thermal diffusivity.
The metal pipe has absorbed and stored more thermal energy internally than the plastic pipe. The metal pipe has more thermal capacity, so it can hold, contain or "trap" more thermal energy than the plastic pipe.
The second part of the explanation has to do with thermal diffusivity; from a systems perspective, thermal diffusivity is essentially a measure of the velocity of heat flowing through a material. The velocity of heat flow through metals is faster than the velocity of heat flow through human skin/flesh.
When you pick up the metal pipe the heat flows quickly from the interior of the pipe into your hand at the pipe/skin interface, this causes the pain you feel.
When you pick up the plastic pipe there is less thermal energy in the pipe and it travels more slowly from the interior of the pipe to your hand. Thus it feels comfortable to hold.
You cannot perform a correct energy budget analysis of the thermal energy “trapped” in a material after being absorbed from light radiation WITHOUT properly considering the thermal capacities of the materials involved in the system.
Yes, you can make a “budget” and put measured values into said “budget” but it is a complete “made up” version of what is happening. And it will not provide any useful or predictive information about what the temperatures will be at locations in the system.
The thermal capacity of the Earth's oceans is huge; the thermal capacity of the gases in the atmosphere is much smaller. The thermal capacity of the "GHGs" in the atmosphere is minuscule in comparison. The GHGs are simply "along for the ride" when it comes to controlling the "average" temperature of the Earth and have NO EFFECT (i.e. climate sensitivity = 0.00000000).
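The pipe experiment described above is usually quantified via thermal effusivity, e = √(kρc), which sets the skin-contact temperature between two bodies. A sketch with handbook-style property values for steel, PVC and skin (all of them ballpark assumptions, as are the 60°C pipe and 33°C skin temperatures):

```python
import math

# Why the steel pipe burns and the PVC pipe doesn't: the first-contact
# temperature is weighted by thermal effusivity e = sqrt(k * rho * c).
# All property values below are ballpark handbook-style assumptions.
def effusivity(k, rho, c):
    """k: W/(m K), rho: kg/m^3, c: J/(kg K) -> W s^0.5 / (m^2 K)."""
    return math.sqrt(k * rho * c)

STEEL = effusivity(50.0, 7850.0, 490.0)
PVC = effusivity(0.19, 1400.0, 900.0)
SKIN = effusivity(0.37, 1100.0, 3500.0)

def contact_temp(e1, t1, e2, t2):
    """First-contact interface temperature of two semi-infinite bodies."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

T_PIPE, T_SKIN = 60.0, 33.0   # deg C, assumed pipe and skin temperatures
steel_contact = contact_temp(STEEL, T_PIPE, SKIN, T_SKIN)
pvc_contact = contact_temp(PVC, T_PIPE, SKIN, T_SKIN)
print(f"steel: {steel_contact:.0f} C, pvc: {pvc_contact:.0f} C")  # ~58 C vs ~41 C
```

The high-effusivity steel pins the interface near the pipe's temperature; the low-effusivity PVC lets skin dominate, matching the "wince and drop" observation.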
Cheers, KevinK

I love this, because it makes sense and yet nearly everyone in the “climate science” community seems to disagree.
I do not. I think until someone can explain exactly how or why this is not correct, it is an unaccounted for piece of information.
And there are a very many such unaccounted for pieces of information.

I would like to add the following perspective to this comment. IMHO there are three different groups of gases that participate in thermal "storage" in our atmosphere. The first group is the transparent gases that are in the greatest abundance; the second is the "passive" GHGs, which have absorbance bands in the IR spectrum (of which CO2 has received the most attention); and the final gas is by far the most significant – the "active" GHG, H2O. In the atmosphere, water is an even more significant energy-storage/energy-transfer molecule than these simplistic "energy budget" illustrations account for, or than your example of a metal pipe versus a plastic pipe suggests. Why? Because not only does water have a significantly higher heat capacity in all its phases, but it is the only molecule in any of these discussions that ACTIVELY changes phase to store energy, then MOVES vertically and horizontally in a phase-changed state to transfer energy, then changes state again to ACTIVELY release the stored energy. IMHO, this process sucks up radiative energy near the surface, vertically transfers it to the upper atmosphere, and then, in two separate phase changes, transfers the radiative energy back to space, effectively super-cooling the planet right where cooling is needed and transferring heat to other regions that need heat. Again IMHO, given that there is SO MUCH water present in the equation, the Earth's atmosphere is effectively buffered so as rarely ever to exceed temperature extremes. In an ACTIVE system like Earth's atmosphere, a passive 2D attempt to model an energy budget based on mythical temperature and incoming and outgoing radiation AVERAGES is a fool's errand.

Don,
The radiative budget includes the energy absorbed in evaporation of water vapor and released in condensation of water vapor. It’s the 80 W/m^2 labeled “evapotranspiration”.
The value is known reasonably accurately since it is an easy exercise to derive the amount of energy transported given the heat of vaporization of water (which does depend on temperature at which the evaporation / vaporization occurs, but fairly weakly) and the average amount of precipitation.
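joeldshore's "easy exercise" fits in a few lines. A sketch, assuming ~1 m of global-mean annual precipitation and a latent heat of ~2.45 MJ/kg (both round assumed figures, not values from the comment):

```python
# Back-of-envelope check of the ~80 W/m^2 evapotranspiration term:
# the latent heat carried by the global-mean precipitation rate.
PRECIP_M_PER_YR = 1.0    # m/yr, assumed global-mean precipitation
L_VAP = 2.45e6           # J/kg, latent heat of vaporization
RHO_WATER = 1000.0       # kg/m^3
SECONDS_PER_YEAR = 3.15576e7

flux = PRECIP_M_PER_YR * RHO_WATER * L_VAP / SECONDS_PER_YEAR
print(f"{flux:.0f} W/m^2")   # ≈ 78 W/m^2, consistent with the 80 quoted
```

Since evaporation must balance precipitation globally, the precipitation rate alone pins this term down, which is why it is "known reasonably accurately".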

DonV, yes, in my example I was referring to the “GHGs” other than water.
We are very lucky to have water in all its states on this planet; without it we would see the wild temperature extremes observed on the Moon. As a resident of Upstate NY, I do hate to think about scraping the ice off my car windshield on a January morning if it were minus 180 F outside. Heck, minus 10 F seems plenty uncomfortable enough for me.
Usually the fools on an errand are the very last ones to recognize the folly of their actions.
Cheers, KevinK

joeldshore says..
June 27, 2015 at 4:17 pm
Don,
The radiative budget includes the energy absorbed in evaporation of water vapor and released in condensation of water vapor. It’s the 80 W/m^2 labeled “evapotranspiration”.
The value is known reasonably accurately since it is an easy exercise to derive the amount of energy transported given the heat of vaporization of water (which does depend on temperature at which the evaporation / vaporization occurs, but fairly weakly) and the average amount of precipitation.
==========================================================
How much energy (on a watts per sq. meter basis) does it take to accelerate the hydrological cycle, and to grow 35 percent more bio-life? I only ask because I want to know.

Kevin,
Your argument is complete nonsense on a few levels.
(1) The GHGs rapidly thermalize with the other molecules in the atmosphere, so the heat capacity of those gases alone is irrelevant. It is the heat capacity of the entire atmosphere that is involved.
(2) Your whole picture is incorrect. The thermal energy is not trapped in the GHGs or the atmosphere; it is trapped BY the GHGs but mostly stored in the ocean. Your argument is akin to saying that one of those space blankets (https://en.wikipedia.org/wiki/Space_blanket) can’t possibly be useful because its mass, and hence thermal heat capacity, is much smaller than that of your body.
(3) All actual serious calculations of the climate system, especially dynamics (i.e., how long it takes for temperatures to change) are done using the relevant heat capacities. So, you are implying that climate scientists are ignoring something that they aren’t.

1) All of the energy must flow through the “GHGs”; thus the “GHGs” must have enough thermal capacity to store (even temporarily) all of the energy. You are incorrect.
2) The human body is a “source of heat” (or a heat supply): your body “burns” fuel and makes heat. This enables a “space blanket” to make you more comfortable. The surface of the Earth is a “reservoir of heat”; it is not a source of heat. A “space blanket” on a rock in your front yard will not make the rock warmer.
3) The “climate science” “energy budget” cartoon (the original topic of this post) does not appear to have any dynamics involved in the calculations anywhere. Did I perhaps miss the units of time somewhere?

Kevin,
1) So, can you show us your explicit calculation that there are not enough GHGs in the atmosphere to store the thermal energy for the very short time until the excited CO2 molecule thermalizes with the rest of the atmosphere? Since this claim contradicts a lot of known science, it would certainly be quite a discovery if this were true!
2) The sun is a “source of heat”. So, your argument amounts to the claim that the greenhouse effect would not exist without the sun. I don’t think you are likely to get much argument on that point.
3) That’s my point…Since it does not involve dynamics, the heat capacities do not directly enter into it. However, once you start to try to calculate what happens over time if you change the energy balance, then the heat capacities are certainly important.

I think it will make the rock warmer if the rock is in the sun. Or, it will take longer for the rock to reach ambient temperature when the ambient temperature changes.
But the earth is not a simple rock, and the atmosphere not a simple blanket.

Kevin, you make a fundamental mistake by confusing, on the one hand, the transient transfer of heat from one body to another (which, as you correctly say, depends on the source material’s thermal capacity and surface diffusivity) and, on the other hand, the long-term steady-state transfer of energy from one body to another. There may be other flaws in the energy budgets under discussion here, but they are certainly not due to the effect you describe.
If there are indeed real flaws in long-term steady-state energy flow budgets, it is interesting that nobody ever suggests better numbers. Instead, people tend to criticise them on various spurious ‘transient effect’ grounds. The figures in the Kiehl-Trenberth diagram do balance and should not be dismissed just because Trenberth himself happens to be a CAGW alarmist. Likewise, Monckton’s calculations relate to long-term steady-state averages and so should not be criticised using arguments about irrelevant short-term ‘transient’ effects.
We skeptics do ourselves a profound disservice if we fight the wrong targets.

One problem is that the energy budget cartoon is 2-dimensional whereas we live in a 3-dimensional world, and insufficient thought has been given to where the incoming energy ends up in 3 dimensions. Not all watts are created equal, and where they are in the system is important.
The ocean is a very different medium and surface from the land, and these differences are not sufficiently recognised.
On land, incoming energy is absorbed at the surface and conducted/convected/radiated from the surface. Therefore one might expect energy in to be balanced by energy out.
However, the ocean is different, since it is a selective surface and one that is free to evaporate, and with that process there is a change in latent energy.
DWLWIR is absorbed in the top few microns. Given its omnidirectional nature, virtually none makes it past about 8 microns, and more than 50% is absorbed in just 3 microns. There does not appear to be any process that can sequester the energy absorbed within the first few microns to depth (thereby diluting and dissipating it by volume) at a rate faster than the energy in the top few microns would drive evaporation.
Solar, of course, is very different, since it is mainly absorbed in the top few metres of the oceans (i.e., in a volume a million times greater than that of DWLWIR) and is thereby diluted by volume and gently warms the oceans, rather than driving evaporation. Some of it penetrates as far down as about 100 metres.
The important point is that solar energy is absorbed in the top few metres, and not all of this energy finds its way back to the surface. Due to mixing, the action of wind, waves, swell etc. and ocean overturning, some of the energy finds its way down to depth, where it goes to warm the deep ocean, i.e., warming the melt water from the Arctic/Antarctic in the thermohaline circulation.
The energy budget cartoon assumes that the surface balances/should balance because it assumes that all incoming energy (solar and back-radiation) is absorbed at the surface and radiated/convected/conducted from the surface, but that is not the case.

Lord Monckton,
I think that Roy Spencer (http://wattsupwiththat.com/2015/06/27/i-only-ask-because-i-want-to-know/#comment-1973709) has given you the best response.
However, another answer is that you are free to define Lambda_0 any way you like (although some definitions might be more useful than others), but how you define it will depend on what the feedbacks end up being. Hence, you can’t redefine Lambda_0 to be in terms of the surface response and then use the feedbacks that were computed for the other definition.
As another example of this, Isaac Held has argued that it makes more sense to define Lambda_0 in a way such that the RELATIVE humidity, rather than the ABSOLUTE humidity, of the atmosphere is held constant. This leads to a “bare” climate sensitivity to CO2 doubling in the absence of feedbacks that is more like 2 C, rather than 1.2 C.
Using your logic, we would then conclude that if Held is “right”, then the climate sensitivity is more than 1.5 times as large as has been previously estimated. However, this is not correct (and is not what Held claims!) because the feedbacks are different if you define the “bare” sensitivity in this way. In particular, there is no longer a large water vapor feedback if your bare sensitivity is defined for the case that the RELATIVE humidity remains constant.

joeldshore: Isaac Held has argued that it makes more sense to define Lambda_0 in a way such that the RELATIVE humidity, rather than the ABSOLUTE humidity, of the atmosphere is held constant.
Since neither is constant in the actual climate system, any derivation based on either assumption is inherently inaccurate by unknown amounts.

Back to basics: how would atmospheric CO2 trap enough heat to warm the oceans? The warming oceans are the 800 lb gorilla in the living room. The energy required to warm the oceans is enormous, 2000 to 4000 times the heat available in the atmosphere. How can CO2, by trapping IR radiation that doesn’t penetrate the oceans, warm the oceans, and especially the deep oceans? Whatever is heating the oceans is most likely also warming the atmosphere above them, and that is the visible wavelengths, which have nothing to do with IR and the GHG effect. The Sun warms the oceans, the oceans warm the atmosphere; it is that simple. BTW, all GHGs are increasing, not just CO2, so clearly there is a natural cycle people seem willing to ignore.

I only ask because I want to know: why is the Earth even supposed to have an energy budget in which, at equilibrium, the amount of energy coming in equals the amount of energy going out? Surely not. The earth absorbs solar energy through processes like photosynthesis, and that energy is not returned to space. Therefore the energy balance would be that the energy going out equals the energy coming in minus the energy absorbed. As I believe it would be impossible to measure the amount of energy being absorbed in any meaningful way, it would be impossible to quantify the energy balance.

Great question: what law of physics requires that an open system like the earth have energy input equal to energy output? Maybe such a balance has occurred in the past, but is it physically mandated by physics, thermodynamics, etc.? If it is, then runaway global warming is impossible … and lest you think of me otherwise, I’m a skeptic!
Dan

The comment is made early on that 400 ppm is too small an amount to make much difference to light transmission through the atmosphere. There is a substance, potassium permanganate, also known as Condy’s crystals, that was used as an antiseptic (it may well still be available from chemists). A piece less than a match head in size in a litre of water turns the water deep purple and almost completely opaque to light transmission. That’s maybe 10 mm^3, say 0.1 g, in 1000 grams, or 100 ppm! 400 ppm does indeed profoundly change the opacity of the atmosphere at 15 microns.
RCS (June 27th, 8:30 am) claimed that Lord Monckton’s inversion of the derivative was wrong. It was not: if y = x^2, then x = y^0.5 and dx/dy = 0.5*y^-0.5 = 0.5/y^0.5; but y^0.5 = x, hence dx/dy = 0.5/x = 1/(2x), and dy/dx = 2x. Christopher is completely correct.
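A quick numerical sketch confirms the inversion for the Stefan-Boltzmann case itself (the 288 K test temperature is just an illustrative choice):

```python
# Numerical sanity check of the inverted derivative: for
# F = sigma * T^4 (emissivity 1), dT/dF should equal T / (4F),
# just as dx/dy = 1/(2x) when y = x^2.
SIGMA = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def sb_flux(temp_k):
    """Stefan-Boltzmann flux density in W/m^2."""
    return SIGMA * temp_k ** 4

T = 288.0
F = sb_flux(T)
h = 1e-3  # small temperature step for a central difference
dT_dF_numeric = (2 * h) / (sb_flux(T + h) - sb_flux(T - h))
dT_dF_analytic = T / (4 * F)
print(dT_dF_numeric, dT_dF_analytic)  # both ~0.1846 K per W/m^2
```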
However, the comment that the effective emission altitude is at 255 K is not right. There is no single effective emission altitude; this is a fallacy that has been perpetuated for years and leads to utterly wrong conclusions. The truth is that the emission altitude is highly variable and depends on wavelength. At the greenhouse-gas wavelengths, the emission is from the top of the greenhouse-gas column (which is generally well above the cloud layer). At 15 microns that is approximately the tropopause to lower stratosphere, where the temperature is more like 220 K, not 255 K. In the atmospheric window, the emission altitude is indeed the surface under clear-sky conditions and the top of the cloud layer in cloudy conditions. In fact there is no significant emission from anywhere at 255 K (maybe a little from some water vapour wavelengths). In a system as nonlinear as a T^4 relationship, one cannot simply use some mythical average when T at different wavelengths varies between 290 K and 220 K. The surface of Earth is close to a black body in the thermal IR, and indeed it does emit about 390 watts/sqM at an effective emission temperature of about 288 K.
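The T^4 nonlinearity point is easy to illustrate numerically (the 220 K / 290 K split below is purely illustrative, not a measured distribution):

```python
# Flux from the mean temperature vs. the mean of the fluxes, for a
# body emitting half from 220 K and half from 290 K (illustrative only).
SIGMA = 5.67e-8  # W m^-2 K^-4

def flux(temp_k):
    return SIGMA * temp_k ** 4

T_cold, T_warm = 220.0, 290.0
flux_of_mean = flux((T_cold + T_warm) / 2.0)        # uses a single 255 K "average"
mean_of_flux = 0.5 * (flux(T_cold) + flux(T_warm))  # averages the actual emission
print(round(flux_of_mean, 1), round(mean_of_flux, 1))  # ~239.7 vs ~266.9 W/m^2
```

The two answers differ by over 10%, which is the sense in which a single “effective” temperature can mislead in a T^4 system.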
However, I don’t want to give the impression I completely disagree with Lord Monckton, because I don’t. I also believe that the entire theory of AGW is utterly wrong. There is a much simpler and even more irrefutable argument. The theory of AGW states that rising CO2 acts as a blanket reducing Earth’s energy loss to space. Solar input is constant, so that creates an imbalance, which means Earth must warm. Now, energy loss to space is measured as outgoing longwave radiation (OLR). So if the AGW theory is right, OLR must have been falling since 1975; otherwise there would be no energy imbalance to cause warming. Trouble is, NOAA has been measuring OLR since 1978, and it has been rising, not falling!
The theory of AGW absolutely requires OLR to have been falling at least since 1975. It has been doing the opposite. It only takes one irrefutably contradictory fact to destroy a theory, and it seems to me this is it.

[quote]
The truth is that the emission altitude is highly variable and depends on wavelength.[/quote]
Hence the use of the word “effective”.
[quote]
The theory of AGW states that rising CO2 acts as a blanket reducing Earth’s energy loss to space. Solar input is constant so that creates an imbalance which means Earth must warm.
[/quote]
And, as the Earth warms, the OLR will increase again. So, at the end of the day, it is not so simple, although I agree that over the last 40 years of rapidly rising greenhouse gases, the Earth should have become somewhat more out of balance and hence OLR should have decreased a bit.
[quote]
Now energy loss to space is measured as outgoing long wave radiation (OLR). So if the AGW theory is right OLR must have been falling since 1975 otherwise there would be no energy imbalance to cause warming. Trouble is NOAA has been measuring OLR since 1978 and it has been rising not falling!!!!![/quote]
You claim that they have been measuring OLR with sufficient accuracy to determine this?! If they can measure it so well, why do they use the rise in global ocean temperatures to compute the radiative energy imbalance rather than just measuring it directly?
Also, see this article that looks at the actual spectral changes to deduce a conclusion opposite to yours: http://www.nature.com/nature/journal/v410/n6826/full/410355a0.html

We all know that there are issues with all the data sets, and probably with the exception of the CO2 data set, none are fit for purpose.
Even with CO2, we do not yet know how well mixed it is, and whether the distribution of CO2 fits in with the claim that the problem is man-made emissions, or whether it is more consistent with the bulk of the CO2 being natural; and we have little grasp of the CO2 sinks. Why have we not seen any OCO data/plots apart from the initial release plot? What is being hidden/adjusted?
All in all, a very poor state of affairs upon which to base a science, still less to claim any degree of confidence, still less certainty in what is being proclaimed.

Joeldshore;
You seem to be claiming that adding the word “effective” makes everything OK and a composite emission altitude can be used. I disagree totally: when one is dealing with temperature ranges of 220 K to 290 K in a T^4 system, the nonlinearity is profound. Also, as CO2 concentrations rise, what happens is that a very slightly greater range of wavelengths emits from a 220 K source and a slightly narrower range emits from the 290 K surface, i.e., the line broadens slightly.
You also comment that as the earth warms, OLR rises again. Sure, but then that decreases the imbalance, so the Earth re-establishes equilibrium. If OLR returns to its original value, the Earth stops warming. However, the claim is that the earth will continue warming at an increasing pace, and for that the OLR has to remain reduced. As to me claiming they can measure OLR that accurately: no, THEY claim they can measure OLR that accurately, since they publish the data; and I should point out NOAA is considered one of the most reputable sites for this sort of data, and they are hardly sceptical.
As to your question of why they do not compute the radiation imbalance from OLR directly, I don’t know (something to ask them), but I can conjecture it’s because they know it would give them an answer they don’t like (i.e., CAGW theory is wrong!). Also, to compute the energy imbalance from OLR one also needs to know the energy input, and while they claim the emission intensity of the sun has not changed, the energy absorbed depends on albedo, which depends on cloud cover, and the data I have seen suggest this has decreased by 4% since 1978, which would make a VERY large difference.

I had a look at the paper you cite (or as much as I could, since everything but the summary is behind a paywall). No, they do not appear to come to a different conclusion. They are looking at emission to space by wavelength and, I am guessing, finding that at the CO2 line edges the OLR is falling, which is exactly what I would expect from rising CO2 (CO2 is a greenhouse gas, and greenhouse gases do retain energy; no argument, but the massive question is how much). My point is that for AGW theory to be correct the total OLR has to be falling, and according to probably the best recording site in the world (NOAA) the satellite data say it’s rising, not falling. To me that’s totally definitive. If you want to argue that rising CO2 is reducing OLR but other “natural” factors outweigh that and cause a net rise, then I agree it’s possible, but in that case AGW is by far not the dominant effect on our climate, and whatever warming there has been must be the result of an even larger increase in absorbed energy which has nothing whatever to do with CO2. So just maybe AGW could be technically correct, but with the effect so massively exaggerated that in any practical sense it is wrong. Either way, the call to urgently reduce fossil fuel use is misguided and misplaced.

Michael,
So, your standard for data that you like is “They publish the data and therefore it must be accurate to whatever precision I want it to be”? That sounds like a very unskeptical point-of-view and one you surely don’t apply to data that goes against your prejudices!
In particular, there are some data sets in climate science where the short-term variations are captured accurately but the long-term trend is subject to artifacts due to various effects such as different satellites, changes in instrumentation, and so forth.
Where is this data that you are talking so much about anyway?

Joeldshore;
You are putting words into my mouth that I did not say. Your quote “They publish the data and therefore it must be accurate to whatever precision I want it to be”. I did not say that. What I said was that NOAA published the data, they are one of the premier sites for such information and they are making the claims as to the data accuracy. As to whether the data accuracy is reasonable or not. They claim a rise of 2 watts/sqM in about 240 watts/sqM or just under 1%. Compare that with the global temperature where the claims are accuracies of 0.1K or even better in 288K or about 0.04% (in the case of the oceans they are claiming accuracies down to 0.01K!!!!). Do I think an accuracy of 1% to measure OLR is reasonable – yes I think that’s plausible, far more plausible than measuring the average temperature of earth to 0.04%. However even this is not the end of the story. If NOAA were claiming a fall of say 5 watts/sqM and the theory of AGW required a fall of 6 watts/sqM I agree you would have a point. But the theory of AGW requires a sizable fall and NOAA is reporting a sizable rise so NOAA’s error would have to be 2-3% or more – enough to change a measured rising trend into a significant falling trend.
The original NOAA data came from http://www.esrl.noaa.gov/psd/cgi-bin/db_search/DBSearch.pl?Dataset=NOAA+Interpolated+OLR&Variable=Outgoing+Longwave+Radiation
This was replotted by http://www.climate4you.com. I have downloaded the original NOAA data and checked that the climate4you plot does correctly reflect the NOAA data and it does.
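For what it’s worth, the two relative precisions being compared here work out as follows (using the comment’s own round numbers, not measured data):

```python
# Relative sizes of the claimed measurement precisions, using the
# round numbers quoted in the comment above.
olr_rise, olr_total = 2.0, 240.0    # W/m^2: claimed OLR rise vs total OLR
temp_prec, temp_total = 0.1, 288.0  # K: claimed precision vs mean temperature
olr_pct = 100.0 * olr_rise / olr_total
temp_pct = 100.0 * temp_prec / temp_total
print(round(olr_pct, 2), round(temp_pct, 3))  # ~0.83 % vs ~0.035 %
```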

What renders the equations academic in my mind is that while 390 or thereabouts is nominally escaping, 340 or so is coming right back. At the speed of light. Just as if it never left. How can we know that it did leave?
Both legs of this “cycle” are greater than TSI. Sensitivity may well have more to do with this quantum whatever it is that no one seems to want to think about than all the differential equations we can muster from our zeroth level understanding.

David, I totally believe in the greenhouse effect. It would be impossible to have a sub-cycle of energy greater than input without it. Whether one believes in waves or particles, I’m not sure “potential energy” in this context is any more meaningful than the aether.

My point is that there seems far too little wonder at the amazing exchange of energy between the surface and the first hundred meters of atmosphere. I believe the most intense CO2 bands around 15 microns are completely captured within 60 meters. I do not believe adding a new wild-card term, “potential radiation”, is helpful.

Michael Hammer
June 27, 2015 at 3:35 pm
“The theory of AGW absolutely requires OLR to have been falling at least since 1975. It has been doing the opposite. It only takes one irrefutably contradictory fact to destroy a theory and it seems to me this is it.”
Mike Hammer, wouldn’t OLR decrease during the warming period (warming ‘accumulation’) and then level off during the ‘pause’ in temperature rise? Physically, how does this work?

Hi Gary;
The theory of AGW can be summarized as: atmospheric CO2 reduces energy loss to space, and this is progressive, with higher atmospheric CO2 further reducing energy loss. According to Mauna Loa, CO2 has been rising progressively and is still rising, so according to the CAGW theory OLR should have fallen and still be falling. Of course, as the temperature rises, OLR rises with it; after all, that’s how the earth re-establishes thermal equilibrium. But the CAGW claim is that the Earth is still warming. If we postulate that the pause is because Earth has re-established equilibrium, then indeed OLR would not be falling any further, but then there would be no latent warming in the pipeline, and the total warming from doubling CO2 would only be about 1 C, which is not catastrophic. You can see the plot of OLR replotted at http://www.climate4you.
My point is simply that CAGW absolutely requires that OLR has been falling, and if Earth is continuing to warm then OLR must still be depressed, even if not falling further; yet according to NOAA it has been rising. To me that’s an absolute, fundamental refutation of the entire CAGW hypothesis.

To David Cosserat at 4:45 am;
I think you are saying that as CO2 increases, temperature rises, and since OLR rises with temperature, if the CO2 increase is slow enough we could have a gradually increasing temperature which shows up as a slowly rising OLR. Unfortunately this is not possible. For the Earth to warm, its total internal energy must be increasing: you put energy into a system to warm it up, and that means for the Earth to warm it has to be absorbing more energy than it is losing. Now, the theory of AGW claims that absorbed solar energy is constant and that Earth was in thermal equilibrium before man’s widespread use of fossil fuels (i.e., Earth’s temperature was stable). Then Earth only warms as long as energy loss to space is reduced below the equilibrium level, i.e., only as long as OLR is reduced. If a slow rise in CO2 warms the Earth and that increases OLR, then the energy loss to space would be higher than required for equilibrium and it would start to cool again. Using the claims of the AGW theory, Earth only continues to warm while OLR is depressed, and if the rate of warming were accelerating it would mean that OLR was continuing to fall.
The flaw in your argument is assuming that OLR is simply tracking warming without looking at the implication of rising OLR on further warming.

This may be a new record in pseudoscience, even for wuwt. I commented three times, and all three have not shown up for over 3 hours now, aka, been deleted.
Fortunately there are “screen shots”.
(I love documenting phony sites.)
[document all you want, but all your comments are there. I’m sure you’ll be able to prove something now. -mod]

Here is something I wrote on the topic, focusing on the surface fluxes. The flow diagram from Stephens et al did not copy, but it isn’t needed.
Earth Surface Sensitivity to a Doubling of the Atmospheric CO2 Concentration
by
Matthew R. Marler, PhD
The energy flow diagram of Stephens et al(1), along with some recently published results,
permits a computation of the climate sensitivity of the Earth surface to a doubling of the
atmospheric concentration of CO2, that is a change in the global mean Earth surface
temperature. According to the theory, a doubling of the CO2 concentration will result in an
increase in the power carried by the downwelling long wave infrared radiation (DWLWIR), up
from approximately 346 W/m^2 (for simplicity I am rounding to the unit place and suppressing
the uncertainty) by 4 W/m^2 (2), and the Earth surface will warm until the sum of the upwelling
long wave infrared radiation (UWLWIR), the latent heating of the troposphere (LH), and the
sensible heating of the troposphere (SH) has increased by 4 W/m^2. How much surface
warming might that be? I illustrate by calculating the increase due to a 0.5C increase in
surface temperature.
1. UWLWIR is proportional to T^4, (2) with emissivity constant, so the increase in
UWLWIR, assuming that the global mean surface temperature is equal to 288 K, works
out to delta U = (288.5/288)^4 × 398 − 398 = 2.8 W/m^2.
2. LH results from the hydrologic cycle, cloud formation and precipitation. The review by
O’Gorman et al(3) reports that a 1C increase in global mean temperature will result in
a 2% ­ 7% increase in the precipitation rate; the lower values are results of GCM
output, and the upper values are results from regressing estimated annual rainfalls on
annual mean temperatures. Using the value 4%, a 0.5C increase in global mean
temperature will produce an increase of 2% of 88 W/m^2 = 1.8 W/m^2.
3. The increase in SH can be estimated from a result reported by Romps et al(4). Their
main result was an increase in the cloud-to-ground lightning strike rate by 12% per 1C
increase in mean temperature over the US east of the Rocky Mountains. The most
important result for this presentation was the estimate of a 12% increase in the power
of the process that generated lightning, and that estimate was not confined to the US
east of the Rockies. Up to a constant of proportionality, the power of the process
generating the lightning was calculated as CAPExPR, where CAPE is “convective
available potential energy” (5) and PR was precipitation rate. Precipitation rate was
used in the calculation not because of the latent energy in the water vapor, but
because the precipitation rate was treated as proportional to the rate of transfer of air
(with water vapor mixed in) from the surface to the upper cloud level; and the fraction
of each kilogram of air that was water vapor was treated as constant. That result
depended on the modeled lapse rate and difference between the interior and exterior
of the cumulus column. Assuming that their result is widely accurate wherever those
can be modeled, and PR rate is proportional to the rate of ascension of air, the
increase of SH due to a 0.5C increase of surface mean temperature should be
approximately 6% of 24 W/m^2 = 1.4 W/m^2.
The changes in SH, LH, and UWLWIR sum to approximately 6 W/m^2 (with considerable
uncertainty), so the sensitivity of the Earth surface temperature is approximately (4/6)x0.5C =
0.33C (again with considerable uncertainty). This result is lower than most other estimates,
and it is approximate and conjectural besides, but the computation is straightforward and
based on published research.
Omitted from the foregoing is a potential increase in the DWLWIR from a warmer atmosphere
(the feedback from the feedback). One approach is to compute the ratio of the energy
radiated from the atmosphere to the surface ( 319 W/m^2) to the total energy transferred from
the surface ( 24 + 88 + 398 = 510 W/m^2), and assume that ratio (0.63) applies recursively to
each increase in energy transfer from the Earth surface. Doing that, the effect of a 4 W/m^2
increase in DWLWIR is equivalent to 4(1/0.37) ~ 4(2.7) ~ 11 W/m^2, so the surface sensitivity
is 0.9C per doubling of CO2 concentration, but with substantial uncertainty. If the radiant
energy absorbed directly from the sun by the atmosphere (75 W/m^2) is added to the
denominator to get 585 W/m^2, the ratio is 0.55, and the surface sensitivity is approximately
0.7C.
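The arithmetic above can be re-run in a few lines. As a sketch: every input below is a figure quoted in this comment (the 4%/C precipitation and 12%/C lightning-power scalings included); nothing is independent data.

```python
# Re-running the surface-sensitivity arithmetic from the comment above.
T0 = 288.0   # K, assumed global mean surface temperature
U0 = 398.0   # W/m^2, UWLWIR
dT = 0.5     # C, trial surface warming

# 1. Stefan-Boltzmann scaling of UWLWIR
dU = ((T0 + dT) / T0) ** 4 * U0 - U0
# 2. Latent heat: 2% of 88 W/m^2 for a 0.5 C warming (4%/C assumed)
dLH = 0.02 * 88.0
# 3. Sensible heat: 6% of 24 W/m^2 for a 0.5 C warming (12%/C assumed)
dSH = 0.06 * 24.0

total = dU + dLH + dSH            # ~6 W/m^2 for 0.5 C
sensitivity = 4.0 / total * dT    # ~0.33 C per 4 W/m^2 of forcing
print(round(dU, 1), round(total, 1), round(sensitivity, 2))

# Recursive back-radiation variant: ratio of DWLWIR to total surface
# energy transfer, applied as a geometric series to the 4 W/m^2 forcing.
ratio = 319.0 / (24.0 + 88.0 + 398.0)  # ~0.63
effective_forcing = 4.0 / (1.0 - ratio)  # ~11 W/m^2
print(round(effective_forcing * dT / total, 1))  # ~0.9 C per doubling
```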
References
1. Stephens, G. L., J. Li, M. Wild, C. A. Clayson, N. Loeb, S. Kato, T. L’Ecuyer, P. W. Stackhouse Jr., and T. Andrews (2012), An update on Earth’s energy balance in light of the latest global observations, Nat. Geosci. 5: 691–696.
2. Pierrehumbert, R. T. (2010), Principles of Planetary Climate, Cambridge: Cambridge University Press.
3. O’Gorman, P., R. P. Allen, M. P. Byrne, and P. Previdi (2011), “Energetic Constraints on Precipitation Under Climate Change”, Springer: Surveys in Geophysics, DOI 10.1007/s10712-011-9159-6.
4. Romps, D. M., J. T. Seeley, D. Vollaro, and J. Molinari (2014), Science, 346: 851–854.
5. Salby, M. (2012), Physics of the Atmosphere and Climate, Cambridge: Cambridge University Press.

“Solar input is constant so that creates an imbalance which means Earth must warm.”
Solar input isn’t constant. The Sun’s output may be relatively constant, but what actually makes it to the earth’s surface is highly variable. Warmists always claim that it can’t be the sun because the sun’s output is constant (which sunspots prove is wrong anyway). What is important is not what the sun puts out; it is what makes it to the earth’s surface and oceans that counts, and that depends on clouds, particulate matter and a whole host of other factors.

co2islife;
I could not agree more, and just maybe the small amount of warming we have been seeing is due to an increase in solar energy absorbed rather than a drop in OLR. Probably due to a reduction in cloud cover, since reducing that will increase the solar energy absorbed and will also INCREASE OLR, since clouds also impede energy loss to space. However, the increase in absorbed solar energy is greater than the increase in OLR, so it leads to net warming.

Not so simple.
Clouds impede energy loss to space, but also increase albedo.
At night clouds keep the surface warmer, but during the day, they keep it cooler.
And different clouds behave differently.
It is possible to have an increase in cloudiness, but to have the increase be mostly during the day…or mostly at night.
Note that there is a strong indication that much of the recent (prior to the pause) increase in temps is due to higher minimums, and not as much due to higher maximums.
And besides for all of that, the climate has never been stable, over any time scale. So the question that should be determined first is whether or not anything unusual is actually occurring.

Even if solar output were constant, they would still need to assume that the Earth’s orbit is circular (with respect to the Sun, not the barycentre of the solar system) and that the Earth’s axis is not tilted.

Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude (in which event the value for surface flux density FS may well be incorrect, no one has any idea what the Earth’s energy budget is, and still less whether there is any surface “radiative imbalance” at all), or the flux density at the Earth’s surface is correctly determined from observed global mean surface temperature by Eq. (1), as all five sources cited above determined it (in which event sensitivity is harmlessly low even under the IPCC’s current assumption of strongly net-positive temperature feedbacks).
Very good.
This was a good essay. Thank you.
My alternate calculation of a surface sensitivity is not an implied criticism, merely a focus on what can be said now (with some hope of not being too inaccurate) about changes of energy flows at the surface. I think that the changes in the non-radiative heat fluxes (wet and dry thermals and rainfall) need more consideration in the published literature than I have seen to date. And I think there should be a greater focus on the surface, instead of trying to lump together the surface and all the layers of the ocean and atmosphere.

Regarding “However, if Eq. (1) is applied at the surface, the value λ0,S of the Planck sensitivity parameter is 0.215 (Eq. 3)”:
The energy-budget diagrams show radiation from the surface being absorbed by greenhouse gases. This happens initially mostly at altitudes well below the effective altitude for greenhouse-gas emission of radiation that escapes to outer space. Much of this absorbed radiation from the ground is ultimately reradiated back to the ground, after one or more absorptions and re-emissions. This is a radiative positive feedback that causes an increase from the 0.215 K/(W m^-2) figure.
In fact, the figure for the surface would be the same as that for the effective altitude for radiating outward to space, assuming the lapse rate does not change as a result of the forcing. If there is a radiative forcing, convection within the troposphere will probably change so as to keep the overall lapse rate near the wet adiabatic lapse rate.
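The comment's fixed-lapse-rate argument can be put as back-of-envelope arithmetic: with an unchanged lapse rate, the surface temperature is pinned to the emission-altitude temperature plus the lapse. The lapse rate (6.5 K/km) and emission altitude (~5 km) below are illustrative round numbers I am assuming, not values from the comment.

```python
# Sketch of the argument: with a fixed lapse rate, surface temperature
# tracks the emission-altitude temperature one-for-one.
# All three inputs are illustrative round numbers (assumptions).
T_emission = 255.0   # K, effective radiating temperature
lapse_rate = 6.5     # K/km, rough mean tropospheric lapse rate
h_emission = 5.0     # km, assumed characteristic emission altitude

T_surface = T_emission + lapse_rate * h_emission
print(f"Implied surface temperature: {T_surface:.1f} K")  # close to the observed ~288 K

# The comment's point: any warming dT at the emission level maps
# one-for-one to the surface when the lapse rate is unchanged.
dT = 1.0
assert abs(((T_emission + dT) + lapse_rate * h_emission) - T_surface - dT) < 1e-9
```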

Figure 1 of this article implies to me that most radiation that reaches space exits via the “Radiation Clear Window” as continuum radiation. So my naive take is that it is most appropriate to use the 0.215 K/(W m^-2) figure as a canonical value, rather than discrete radiation from water bands at higher, cooler altitudes. What do you think? http://climatephys.org/2012/06/12/building-a-planet-part-2-greenhouse-effects/

OK, I’ll ask (because I truly want to know).
Of what value is finding the “right” way to calculate a linear constant to describe sensitivity to CO2 doubling when the relationship between energy flux and temperature (the Stefan-Boltzmann law) is known NOT to be linear?
It is possible (though admittedly unlikely) to have a +ve change in average temperature of earth coupled with a -ve change in energy balance. So it seems to me that arriving at a linear constant to describe a system that has wide temperature swings in which P varies with T^4 is just trying to find the right way to calculate a constant with no practical value in the first place.

” Jai Mitchell posted at June 27, 2015 at 11:10 am
So my question to you all is:
With this definitive evidence of heat accumulation in the earth, why do you still believe that humans are not the primary contributor to this effect? ”
He posted this chart as evidence: https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000m.png
The chart clearly shows the oceans are warming. The temperature of the atmosphere above the oceans is also warming at about the same trend. Clearly there is a relationship between the two. The problem is that it is common sense that a warming ocean will warm the atmosphere above it; it requires new laws of physics to explain how the atmosphere will warm the oceans. Will someone please explain how CO2 absorbing IR radiation consistent with -80 degrees C (15 micrometers) can ever warm the oceans? Problems I see:
1) The oceans are dense heat sinks, and the atmosphere isn’t. There is 2000 to 4000x the amount of heat in the oceans as the atmosphere. Even if the oceans absorbed all the heat in the atmosphere its temperature wouldn’t change.
2) The 15 micrometer wavelength that CO2 emits doesn’t penetrate the oceans, in fact it has a cooling effect of causing surface evaporation. How can IR that doesn’t penetrate the oceans cause it to warm?
3) What wavelengths do warm the oceans, especially the deeper oceans? Isn’t it reasonable that what is warming the oceans is also warming the atmosphere? CO2 is transparent to the wavelengths that warm the oceans.
4) All GHGs are increasing, not just those created by man. Is it logical to think that a natural cause is increasing the non-anthropogenic GHGs, and that only man is increasing CO2? Isn’t it more logical to conclude that some natural cause is increasing all GHGs? Singling out CO2 for preferential treatment over all other gases makes no sense.
5) The only mechanism by which CO2 can cause climate change is through trapping heat. First problem is that it traps -80 degree C heat, and the second problem is that we haven’t had warming in over 18 years. By what mechanism can CO2 affect climate change if it isn’t through trapping heat?
6) If in fact temperatures were increasing at an increasing rate (2nd Derivative), would the sea level not be increasing at an increasing rate? Why isn’t it if we are warming at an abnormal rate?
7) Statistical analysis of the Holocene ice-core data demonstrates that we aren’t near the peak temperatures of the past 15k years, and that the variation of the temperature over the past 50 and 150 years (the era in which man created all the CO2) is statistically identical to the previous 15k years. Why hasn’t the record CO2 caused the earth to warm to at least the peak of the past 15k years if CO2 were truly the cause of the warming?
Climate science isn’t science unless you categorize it as political science. If not, it is an unbelievable case study in propaganda and sophistry. As stated above, the only science that is settled is that climate scientists lie.
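The “-80 degrees C corresponds to 15 micrometers” pairing used above can be checked with Wien's displacement law (my addition, not cited in the comment; the 2898 μm·K constant is the standard value). Note that this gives the temperature at which a blackbody's emission *peaks* at 15 μm; much warmer bodies, such as Earth's surface, still emit substantially at 15 μm, which is a common objection to the argument.

```python
# Wien's displacement law: lambda_max * T = b, with b ~ 2898 micrometre-kelvin.
# Solving for the temperature whose blackbody emission peaks at 15 um.
WIEN_B = 2898.0          # micrometre * K (standard displacement constant)
wavelength_um = 15.0     # the CO2 absorption band cited in the comment

T_peak = WIEN_B / wavelength_um
print(f"Peak-emission temperature for 15 um: {T_peak:.1f} K "
      f"= {T_peak - 273.15:.1f} C")   # roughly -80 C, as the comment says
```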

Jai, If you are to understand me correctly, you must read what I say and not insert your own thoughts and transpose them for mine.
I said none of those things.
I said what I said.
150 years ago we were in the little ice age.
Sea levels are rising in most places, but there is no acceleration in the trend anywhere. And the actual amount is tiny.
You might want to read up on a little scandal called climate gate, but besides for that, it takes no conspiracy for people to be very wrong.
Smart people have been completely wrong about a great many things.
In fact, the history of medical and physical sciences is a veritable compendium of how wrong people can be, and how stubborn they are in the face of evidence that they are mistaken.
And besides for all of that, the US federal government ALONE larded out $29 billion, just in the PAST YEAR(!) on a giant gravy train of climate related funding. Guess how many grants skeptics get out of that total?
Now, if warmistas get into such a tizzy regarding the possibility that someone on the skeptical side may have gotten paid for their work, why do they also get their panties in a knot when it is suggested that warmistas get the results they are paid to get?
To paraphrase as one recent commenter put it, at any hypothetical meeting of the warmista high council, in which the subject of modifying their stance is raised, whether in light of the pause, or the possibility of renewed cooling in the coming years, the question would certainly hinge on another question…how many of them want to go find new jobs?
The unscientifically unequivocal language that has long been used has produced the net result of these folks painting themselves into the mother of all corners.
When you grow weary of living in a state of cognitive dissonance, come back around with a more open mind.
Everyone here, me especially, will be happy to clue you in and disabuse you of all the misinformation you have been force fed all these years.
The sky is not falling, the world is not ending, the oceans cannot and will not boil, scaring children is a rotten thing to do, and being afraid of things that do not exist is an unnecessary burden to bear.

Co2
thanks for your response, apologies for the delayed response.
The chart shows that the oceans are accumulating heat energy; it does not show much warming. The point is that the amount of heat accumulation shows a massive energy imbalance. Surface air warming is not happening at the same rate; it has more variability. But since the atmosphere accounts for only about 3% of total heat accumulation, small fluctuations in the energy balance between air and oceans can lead to big swings in atmospheric warming rates.
Warming of the oceans is primarily through incident solar shortwave radiation (the ocean’s low albedo means it absorbs about 70% of this energy), not through the atmosphere. However, the increase in IR re-radiation is producing an energy imbalance. This imbalance has been directly measured over a decade using surface detectors.
The reason that re-radiated IR can warm things is that IR radiation carries energy, and the absorption of energy leads to warming. So any increase in incoming energy, if it is absorbed, leads to heating; this is true for high-energy particles and gamma rays as well as low-energy infrared and everything in between.
The oceans are indeed deep heat sinks. They are also variable in their effects on atmospheric temperatures (see 1998 El Nino).
IR does indeed warm water. Place a bowl of water under a heat lamp and see what happens.
The main GHGs are water vapor, methane and CO2. Other incidental GHGs are NOx, CFCs and other trace gases. Water vapor is a feedback mechanism associated with warming, causing more atmospheric humidity. All other GHGs are attributed to human activity, including CO2, which has been isotopically identified as produced from the consumption of fossil fuels.
Of all of the anthropogenic GHGs, CO2 makes the largest contribution to increased IR. Water vapor is a feedback produced by this warming, leading to some additional warming.
CO2 does indeed trap (re-radiate) heat energy downward. Any re-radiation of incident radiation that is absorbed will raise the temperature of the material absorbing it. Energy doesn’t have a temperature; it raises temperature by vibrating the atoms that interact with it.
Sea level has many components, thermal expansion is a major part of it.
This image has the components of recent ocean sea-level rise. The red and gold curves in the top half represent the thermal components. Red is near-surface and gold is deep-ocean warming. Combining these two curves produces an expansion that nearly identically follows the Argo buoy warming data I posted above. http://www.nature.com/nature/journal/v453/n7198/images/nature07080-f3.2.jpg
see: http://earthobservatory.nasa.gov/Features/OceanCooling/page5.php
Finally, the Holocene ice-core records do indeed show that we are at or above the global Holocene peak temperatures. Please be aware that the GISP2 ice-core data records end at 1855 and that there has been significant warming since then. http://www.skepticalscience.com/pics/c4u-chart7.png
I am a proud supporter of American independence, especially food and energy. I really want us to be able to charge our cars with rooftop solar. I hope that you agree!

Jai,
Two one-hundredths of a degree of ocean warming, with the resolution and error bars much in debate, seems less than worrisome, as it may not be real.
Especially since the Argo buoys, rather than being the excellent coverage that you seem to want us to believe, are spotty, and random, and miss entire sectors of the ocean, and do not descend to the bottom, and have one buoy for every 300,000 cubic kilometers of water.
Imagine if we measured air temperature using the same methodology?
And I have yet to see anyone come up with an explanation for how the heat got down there while bypassing the surface, or even how it got into the ocean at all.
Tide gauges the world over show no acceleration of sea-level rise. Photographic evidence shows the ocean was right where it currently is a hundred years ago, as does the observation that many areas have very old buildings near the sea.
Fake satellite data showing sea levels doing something completely different than what is shown by actual poles anchored in the sea all over the world is evidence of chicanery, not cause for alarm.
As for warming over the past 150 years…every reader here knows all about the adjustments that obscure any real accounting for surface temps over that period.
Lies and propaganda do not fly among people who know the actual facts.
Charge your car however you like.
Stop supporting policies and politicians who want to make my choices for me, and who are dismantling our economy and energy infrastructure based on a big fat series of lying BS.

Menicholas,
so if I am to understand you correctly, you do not believe that the earth has warmed over the last 150 years, you believe that the ocean sea level is not rising, and you believe that the satellite and temperature records are falsified to allow politicians to dismantle our economy and make decisions for you?
So you believe in a grand conspiracy with tens of thousands of scientists working all over the earth in every major country in the world, teaching in universities and they are all working together in a group plan to enrich themselves, destroy America and remove your personal freedom by using a thinly veiled facade that can be easily disproven on the internet?


Once again, getting back to the basics. All these charts, formulas and theories are great…but they totally miss the point and distract from the key issue. Climate Science is a “science.” All these arguments distract from the scientific method. Warmists will always be able to produce countless nonsensical claims and graphs to get real scientists to chase their tails trying to disprove them. This whole article is about charts produced by the IPCC. That is why the scientific method is so very, very important, and you never hear Warmists referring to it. They have never been able to reject the null hypothesis that man is not causing climate change/global warming. It only takes one experiment to prove them wrong, as Einstein said. That is how science is done. The most fundamental proof that the Warmists must have is a data set that rejects the null hypothesis that man is not causing climate change. They simply don’t have it; that is why they rely on theories, models and propaganda. Simply download any ice-core data set and test whether the past 50 and 150 years are statistically different from the previous 15k years of the Holocene. The data doesn’t exist. If climate science were a real science, that would be game over. Only in a pseudo-science do you call yourself a science and then reject the scientific method. This is pure climate sophistry and fascist-style propaganda; it has nothing to do with science.

How much more evidence do you need than to have all the models that rely on CO2 as a major factor in warming fail? The only evidence climate science has is computer models, and their computer models reject their very own theory. Once again, this isn’t science, it is sophistry. The following graph would be game over in any real science. https://wattsupwiththat.files.wordpress.com/2015/06/spencer-73-cmip5-model-fail1.png

Reality trumps experiment every time, Shawn, except to trough-feeding “climate scientists” and politicians who see carbon taxes as a great way to expand government and control.
The graph immediately above compares reality to experiment, and the idea that man has a substantial impact on climate is untenable. Further efforts to foist this falsehood on us by a fear mongering faction must be stopped.

Let us get the terminology right. The models’ outputs in the graph represent neither experiments nor reality. They represent a hypothesis. Reality is determined by experiment. In the graph, the experiment is the measurement of global temperatures, which determines the reality that the world has warmed, but at a rate increasingly far below prediction, falsifying the rapid-warming hypothesis – to date, at any rate.

“Could not many of these endless arguments about CO2 be put to rest once and for all with a rather simple experiment?”
Yes. They have been, but many commenters here are either unaware or unable to understand; so the nonsense continues.

What is wrong with this experiment, which could be done by undergrads or high-school students?
1) Construct 2 identical 1 m^2 earth boxes, ~6 in. deep
2) Construct a darkroom inside an unconditioned building to contain the 2 earth boxes and to shut out all light and other radiation; a wall is also needed between the boxes to block possible IR from CO2 interfering with the other box
3) Construct 2 helical coils of flexible water tubing fixed at ~3 in. from the bottom of the boxes
4) Connect the input ends of the tubes to a hot-water tank with identical lengths of tubing
5) Connect the output ends of the tubing to a set of valves, one to return hot water to the source, the other to fill identical catch basins to measure the volume of water
6) Install 2 “clouds” constructed of non-infrared-blocking glass, 1 m^2 x ~6 in.
7) Locate the clouds at a height of ~2 meters above the sand boxes to avoid restriction of convection; the darkroom should be well ventilated at the top of the walls by large apertures covered with shrouds
8) Saturate the sand boxes with identical quantities of distilled water, ~20 gal
9) Place thermocouples in the center of each box
10) Circulate hot water through each box until a steady temperature of 90 F is achieved
11) Fill one cloud with CO2, the other with a natural air mix of oxygen and nitrogen
12) Stop the circulating water and open the valves to the catch basins
13) Maintain the temperature in each box at 90 F by flowing hot water whenever the temperature declines by 0.5 F, all the while charting the temperatures for both boxes
14) Maintain the experiment for x hours
15) Premise: if the CO2 can warm the box beneath it by back radiation, then that box will require less hot water to maintain temperature
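As a rough sizing aid for the proposed experiment, one can use the head post's Eq. (1) to bound the upward radiation the glass "clouds" could intercept from a box held at the 90 F setpoint. This sketch is mine, not the commenter's; it assumes emissivity 1 for the wet sand, and says nothing about how much radiation would actually be returned, nor about the convective and evaporative losses (steps 7-8) that would dominate in practice.

```python
# Upward blackbody flux from a surface at the experiment's 90 F setpoint,
# via the Stefan-Boltzmann law, Eq. (1). Emissivity assumed = 1 for wet sand.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_kelvin(t_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

T = f_to_kelvin(90.0)    # ~305.4 K
F = SIGMA * T**4         # upward flux per square metre of box surface
print(f"Setpoint {T:.1f} K -> upward flux ~{F:.0f} W/m^2")
```

Roughly 500 W per box, an upper bound on what each 1 m^2 "cloud" has available to re-radiate downward.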

“Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.”
Murry Salby also makes two other incontrovertible points:
1) The rate of increase of CO2 in the atmosphere has remained constant at 2 ppm per year over the last 30 years.
2) During this time, the rate of production of fossil-fuel CO2 emissions has tripled.
This would indicate that the relationship between fossil-fuel CO2 emissions and total CO2 increases is not a simple additive one. So even if all fossil CO2 emissions stopped altogether, it is not at all clear that total CO2 would not still keep increasing at 2 ppm per year.
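The arithmetic behind this point can be sketched: a constant atmospheric rise against growing emissions implies a growing sink. The emission figures below are purely illustrative stand-ins for the claimed tripling, not data from the comment; only the ~2.13 GtC-per-ppm conversion is the standard one.

```python
# If the airborne rise stays fixed while emissions grow, the implied
# sink uptake must grow. Emission values are illustrative assumptions.
GTC_PER_PPM = 2.13        # ~2.13 GtC raises atmospheric CO2 by 1 ppm (standard)
rise_ppm_per_yr = 2.0     # the constant rise cited in the comment

for emissions_gtc in (5.0, 10.0, 15.0):        # illustrative tripling, GtC/yr
    airborne = rise_ppm_per_yr * GTC_PER_PPM   # GtC/yr remaining in the air
    sink = emissions_gtc - airborne            # implied net uptake by sinks
    frac = airborne / emissions_gtc            # airborne fraction
    print(f"emissions {emissions_gtc:.0f} GtC/yr: sinks take {sink:.2f} GtC/yr "
          f"(airborne fraction {frac:.0%})")
```

The falling airborne fraction in the output is the "not a simple additive relationship" the comment describes; whether the rise would persist at zero emissions is a separate question the sketch cannot answer.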

The rate of increase of CO2 has not always been constant, only recently. The simple answer is that carbon uptake has increased commensurately with the increase in CO2 emissions, to balance that increase. If we lower emissions, the rate of ppm increase will go down and could go to zero, even with 5 Gt of CO2 emissions.

Walt D, that is exactly the approach real scientists should take. Your common-sense observation is exactly what is missing from the climate-science debate, and highlights where we are losing the debate. We, skeptics, need to get back to the basics. Science isn’t about proving things; science disproves things. That is why we reject nulls, we never accept them. Just remember Einstein’s famous quote: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” (Albert Einstein)
We skeptics are fighting this fight in the most unscientific way. This very posting highlights just how effective the warmists are at distracting the argument away from science. They have the ability to produce an endless amount of nonsense, so there is no way we will ever be able to expose every flaw in their models, charts and conclusions. They aren’t stupid, they are expert propagandists, and they know the scientific method is their greatest enemy. IMHO skeptics should start demanding that the warmists answer the questions, and not vice versa. They keep the skeptics chasing their tails, and that is their goal.
We simply need to start answering the right questions and turn the table on the warmists:
1) Why, if man’s production of CO2 has increased exponentially over the past 50 years, has the trend in atmospheric CO2 remained constant?
2) What is causing the oceans to warm? How could it be due to atmospheric CO2?
3) Why, if we have seen an acceleration in warming, has there been no acceleration in sea-level rise?
4) How does CO2 affect climate change if it isn’t through warming? We’ve had stable temps for 18+ years; if it isn’t warming, how does CO2 affect climate change?
5) CO2 traps outgoing radiation; how does CO2 lead to record daytime highs when daytime temps are due to incoming radiation? CO2 is transparent to incoming radiation.
6) 15-micron IR is consistent with -80 degrees C; the warmer it gets, the less IR CO2 traps. Given that fact, CO2 should result in cooler climates showing greater warming than warm climates. The data show otherwise.
7) The colder, higher altitudes would also trap more heat, and show warming relative to the areas transparent to CO2. The data don’t show that.
8) Statistical analysis of the Holocene ice cores demonstrates that there is absolutely nothing statistically abnormal about the past 50 and 150 years. That is the bare minimum for their theory to have any validity at all.
9) Demand that they produce a model that has CO2 as a significant variable that accurately models the entire Holocene, or even the past 18 years.
10) Why would NASA ever use ground measurements when they have more accurate satellite data? That alone doesn’t pass the stink test.
11) What kind of science relies on models that don’t do a good job of modelling what they intend to?
If we want to effectively battle and expose this science as the fraud that it is, we need to battle it scientifically. We should demand that Congress require that all data sets and conclusions be verified using double-blind testing, like the FDA requires. The problem with climate science is that politicians rely on the opinions and research of the very people who support the causes and depend on the government funding. This is the same problem we had with tobacco, but on steroids and with no effective government or press watchdog.
Conclusions and research used to support political objectives should require independent verification, JUST LIKE the FDA demands of drug companies and the EPA demands of chemical companies. The data should be turned over to an independent lab along with placebo data, and testing should be performed without anyone knowing what is actually being tested. If Congress were simply to demand that this common-sense approach be taken, an approach already demanded by the FDA and EPA, climate science would vanish overnight. The only reason climate science continues is that it is able to avoid the rigors of the scientific method demanded by the FDA and EPA. No one is holding them accountable. Congress needs to do that, and that is where the efforts of skeptics should be directed, not at exposing every flaw in their models and theories; we all know they are junk, and they can create an endless stream of junk. Fighting it that way will result in the propagandists winning. We need to use the tactics used by the warmists on them. We need to turn the tables.

Menicholas, are you referring to my post?
“Sir, your post contains enough contradictory statements to make my head spin.
Just read 3 and 4 again.
You lost me on this comment…completely.”
Here are #3 and #4:
3) Why, if we have seen an acceleration in warming, has there been no acceleration in sea-level rise?
4) How does CO2 affect climate change if it isn’t through warming? We’ve had stable temps for 18+ years; if it isn’t warming, how does CO2 affect climate change?
If increasing CO2 at an increasing rate has resulted in global warming at an increasing rate would you not expect sea levels to be increasing at an increasing rate? Or don’t glaciers melt faster the warmer it gets?
We haven’t had warming in over 18 years according to NASA’s most accurate measurements, and yet President Obama is telling me that all these extreme weather events are caused by CO2. How could anything over the past 18 years be due to CO2 if there has been no warming? By what mechanism does CO2 affect climate change if it isn’t through trapping heat? I’ll ignore that CO2 most efficiently traps -80 degree C heat.

Never mind.
I think I see what you are saying.
You are repeating warmista talking points to point up the contradictions?
I see no evidence of warming, let alone accelerating warming.
Your posts are very interesting. Hard to follow sometimes, but that is probably just me.

A great deal of interesting information has emerged from this discussion. First, Roy Spencer has characteristically given the most succinct answer to my question. It follows that the SB equation applies not only at the characteristic-emission altitude, where the climate-sensitivity parameter is determined, but also at the surface.
It is also clear that, even on the IPCC’s high-sensitivity assumption, the amount of global warming we can expect this century is small, since it expects only half of equilibrium climate sensitivity to occur within 100 years of the forcing, and particularly small if CO2 concentration does not rise above 0.6 millimoles per mole, compared with 0.4 today, as Professor Salby and the apparently increasing capacity of the CO2 sinks suggest. If, as a growing body of papers suggests, temperature feedbacks are not strongly net-positive, then we shall scarcely see 1 Celsius of global warming this century, and perhaps less.
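The "scarcely 1 Celsius" arithmetic can be sketched as follows. The 5.35 coefficient is the standard simplified CO2-forcing expression (an assumption of mine, not stated in this thread), and 0.31 K/(W m^-2) is a commonly quoted no-feedback Planck parameter; both are illustrative inputs, not the post's own figures.

```python
import math

# Sketch of the century-warming estimate for CO2 rising from 0.4 to
# 0.6 mmol/mol, using the standard simplified forcing expression
# dF = 5.35 * ln(C/C0) and an assumed Planck parameter of 0.31 K/(W m^-2).
dF = 5.35 * math.log(0.6 / 0.4)       # W/m^2 of forcing for a 50% CO2 rise
lambda_planck = 0.31                   # K/(W m^-2), no-feedback sensitivity

dT_equilibrium = lambda_planck * dF    # no-feedback equilibrium warming
dT_century = 0.5 * dT_equilibrium      # post's assumption: half realized in 100 yr
print(f"forcing {dF:.2f} W/m^2, equilibrium ~{dT_equilibrium:.2f} K, "
      f"century ~{dT_century:.2f} K (before feedbacks)")
```

With these inputs the no-feedback century figure is a few tenths of a kelvin, so even a substantial net-positive feedback multiplier leaves the total near or below 1 C, which is the paragraph's point.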
To the few (such as Mr Born) who sneer because I ask questions, I say that the scientific method works by people asking questions rather than assuming religiously that what has been handed down to them from on high is written on tablets of stone. The propensity of know-it-all climate extremists to declare that no questions should be asked is a blot on the face of science. Nearly everyone else has avoided the high-handed, arrogant tone adopted by Mr Born. He would do well to learn how to be civil, for in these threads he has made plenty of mistakes. To be a know-it-all, one must know it all. And even then, being civil helps.

“He would do well to learn how to be civil, for in these threads he has made plenty of mistakes. To be a know-it-all, one must know it all. And even then, being civil helps.” ~ M of B
We could all learn to be civil in these threads and in the posts as well. This is very good advice.
One has to remember that science has been wrong on almost everything throughout its history. Science is a process of getting ever closer to the reality (hopefully) as one theory replaces the other. I say that someday the “CO2 warms the surface” paradigm will die, and I have good reasons to think that — but even if I am wrong there is no reason for unpleasantness by me or by others. M of B makes an important point on disagreeing without being disagreeable.
~ Mark … who always thinks he is right 🙂

“It follows that the SB equation applies not only at the characteristic-emission altitude, where the climate-sensitivity parameter is determined, but also at the surface.”
Hmm – I wonder if you are relying too much on a hypothetical construct here.
There is plenty of discussion on the www about incorrect application of SB (see Jennifer Marohasy for example).
This is a concept of an emitting surface behaving according to SB, embedded some 5 km up within a gaseous mass. It could have some uses for theoretical discussion, perhaps for visualisation. But no such surface exists in the real world. We cannot observe it or interact with it in any meaningful way. In other words, it is not scientific.
In the above thread, there is discussion about change in the characteristic emission altitude, as though something physical actually happens.
This is unlikely to be a productive approach and may never help to resolve differences one way or the other. If so, it would be better not to rely on it at all.
Feel free to shoot me down if you disagree.

Monckton of Brenchley: “He would do well to learn how to be civil, for in these threads he has made plenty of mistakes.”
My, my, I appear to have struck a nerve.
Lord Monckton might have saved his keystrokes. I have no interest in what is considered civil or uncivil by someone who routinely refers to those who disagree with him as “trolls” and “bedwetters.”
I sometimes point out errors. Since a recent paper of Lord Monckton’s was replete with them, I did so in that case, but I did so without calling the authors “trolls” or “bedwetters.” Moreover, I was holding back; I had been hoping that at least one of the authors would recognize their errors and make changes on their own. Anyway, pointing out errors hardly makes comments uncivil; it’s how science advances. If Lord Monckton cannot take a candid discussion of his theories, he should withhold them.
I don’t deny making mistakes myself at times. When I do, I acknowledge them. However, Lord Monckton was unable to make an even remotely credible case that I did so in the recent colloquies regarding his paper; all he could do was bluster.
Perhaps that explains his gratuitous attack on me in this thread.

Joe Born,
You are right. I note that Monckton has not acknowledged the error pointed out by you, Roy, me, and others. I suspect that his gratuitous attack on you is due to the fact that you have provided the most clear and careful explanation.

Mr Born is perhaps so habitually nasty that he does not realize when he is sneering. He said upthread that he had previously had some difficulty in explaining forcing to me, the implication being that it was I, not he, that did not understand it. However, it is he, not I, who does not understand it, nor its relationship to the Planck parameter, nor indeed even the value of that parameter. Indeed, as my reply to his initial attempt at an attack on our Science Bulletin paper made quite clear (an attempt full of errors compounded by inconsequentialities), Mr Born had made the elementary mistake, common among those who have little familiarity with the relevant mathematics or science, of assuming that the Earth’s emission temperature increases in the presence of a greenhouse-gas forcing.
As I had previously had to explain to Mr Born, since a greenhouse-gas forcing affects neither the total solar irradiance in the upper atmosphere nor the Stefan-Boltzmann constant nor (to any appreciable extent) the emissivity at that altitude, it cannot affect the Earth’s emission temperature.
In any event, Mr Born has no need to presume to instruct me in what a forcing is: the IPCC, particularly in ch. 6.1 of its Third Assessment Report, defines it quite clearly. It is also entirely plain from our Science Bulletin paper, which Mr Born says he has read, that we understand perfectly well what a forcing is and how it is treated mathematically in the determination of climate sensitivity.
It is time for him to realize that, ever since he twice falsely accused me of having refused to supply data for which he had not even sent me a request, and which was in any event supplied in some detail in our Science Bulletin paper, many of us here have little patience with his generally ill-informed and arrogantly-expressed lectures. He has not proven intellectually honest.
Roy Spencer had succinctly answered my question raised in the head posting: there was not the slightest need for Mr Born to add his characteristic malevolence to this thread.
Mr Born continues to complain that our Science Bulletin paper was defective. He is entitled to his opinion, but his opinion is ill informed, based in any event on an inconsequential corner of our paper, and wrong.

In reply to “Mike M.”, I had indeed acknowledged Dr Spencer’s characteristically concise and to-the-point answer at (if I remember rightly) 9.38 am. “Mike M.” is rather free with his allegations, but should take a little more care to check his facts first.

Actually, my comment acknowledging Roy Spencer’s response was at 9.35 am, not 9.38. But it is there, and was there long before Mike M’s false allegation that I had not acknowledged it. I have also acknowledged it again in a subsequent head posting. So it is, as usual, Mike M who is being dishonest.

Monckton of Brenchley
It follows that the SB equation applies not only at the characteristic-emission altitude, where the climate-sensitivity parameter is determined, but also at the surface.
I lost track of the number of times I brought that point up. I gave up a long time ago trying to get anyone to pay attention. grumble, grumble, grumble. But it is an important point, hope it gets some traction.

Possibly my very first hint that something was amiss with climate science was reading IPCC reports (III or IV, I can’t remember anymore) that said the direct effect of CO2 doubling = 3.7 W/m2 = +1 degree. I knew that the average temperature of earth (as if there is such a thing) was accepted to be 288 K. So I ran the SB Law calculation, which showed that 3.7 W/m2 at 288 K would yield only 0.68 degrees of warming. So, where did the 1 degree come from?
As it turned out, the 1 degree came from calculating 3.7 w/m2 against the effective black body temperature of earth, which is 255 K, and indeed running that math produces about a 1 degree temperature increase. Never mind that the effective black body temperature of earth (at equilibrium) doesn’t actually change from a doubling of CO2 (only “where” the MRL is), never mind that we don’t live a few kilometers up in the troposphere (we live on the surface) and never mind that you can’t find a clear explanation of this anywhere in their convoluted reports with vague references that when checked out almost always dead ended at a paywall.
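For anyone who wants to check that arithmetic, here is a minimal sketch (plain Python, no assumptions beyond the Stefan-Boltzmann constant); it applies the first derivative ΔT/ΔF = T/(4F) from the head posting at both temperatures:

```python
# Check the comment's arithmetic: warming from a 3.7 W/m^2 forcing
# via the Stefan-Boltzmann first derivative dT/dF = T / (4F).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_warming(T, dF):
    """Temperature change for a small forcing dF at temperature T (emissivity 1)."""
    F = SIGMA * T**4          # flux density at temperature T
    return T / (4 * F) * dF   # first derivative of T = (F/sigma)^(1/4)

print(round(sb_warming(288.0, 3.7), 2))  # at the 288 K surface: ~0.68 K
print(round(sb_warming(255.0, 3.7), 2))  # at the 255 K emission temperature: ~0.98 K
```

Run at 288 K the derivative gives the 0.68 degrees mentioned above; run at 255 K it gives roughly the 1 degree the IPCC quotes.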
It took me months of ferreting out info to conclude that they had deliberately obfuscated this issue specifically in order to present a larger number than they otherwise could have justified. I wasn’t reading a science document, I was reading a marketing document.

I would recommend that everyone interested in this issue, and in the integrity of science and our educational system, start promoting the formation of a Scientific Validation Agency, or SVA, to join/counter the EPA and FDA. The purpose of this agency would be to remove all bias from the conclusions used to form public policy. Data would be handed over to the SVA in both its actual form and a distorted form, creating a double-blind test. The data would then be tested and the conclusions published. The SVA conclusions would then be matched against the conclusions reached in the “peer reviewed” journal. If the SVA double-blind conclusion doesn’t match the peer-reviewed conclusion, the authors and reviewers would have to defend their conclusion, and if they fail, they would be required to return any funding or face fraud charges. If we do that, if we apply the same standard to our public researchers as we do to our drug companies, I’m 100% certain there will be no more Climate Science research published claiming CO2 is the cause of global warming climate change.
We simply need to send a message to the academic community that someone is watching them. Eisenhower warned us against exactly what is happening today in his farewell speech.
“Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.
In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.
Today, the solitary inventor, tinkering in his shop, has been over shadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system, ever aiming toward the supreme goals of our free society.” http://www.ourdocuments.gov/doc.php?flash=true&doc=90&page=transcript

Oh good lord!
1) This is a clever example of hanging fools by their own petards. Congratulations!
2) Since “Quid vobis videtur” is often mistranslated as “what do you think”, let me point out that it really means something more like “What insights do you have [about the issue at hand]?” (and “insight” originally meant “answers from God”), and therefore offer one: Yes.
3) More seriously, what you illustrate here is part of the problem with these global models: over-abstraction and over-simplification, reflecting an overgrown thumbnail sketch of how climate works. These kinds of things are perfectly reasonable if your computer is an IBM 7030 and you’re converting lunch-napkin notes to a first-run card deck – but adding fifty years of grad-student tinkering does not turn this into science.
So, yes, the system is now internally inconsistent and I rather think you have a good point here, but the bigger point is that the math is now influenced by models that look very sciency but are actually somewhat simple-minded.
To improve on this, imagine dividing the entire system inwards from about 300 miles up down to the core into small blocks (say 1E6 cubic meters or less); finding a way to estimate the energy content of each block, and then tracking the energy movement into, and out of, each one. That would work – and is almost possible. The stuff we do today? doesn’t.
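A toy sketch of that block-accounting idea, purely illustrative (the column size and the diffusion-like exchange rule are invented for the example, not taken from any real model):

```python
# Toy version of the suggested scheme: partition the system into blocks,
# store each block's energy, and update by tracking what moves between
# neighbours. The exchange rule here is illustrative only.
def step(energy, k=0.1):
    """One explicit update of a 1-D column of block energies (arbitrary units).
    Each interior block exchanges a fraction k of the energy difference
    with its neighbours; the two end blocks act as fixed reservoirs."""
    new = energy[:]
    for i in range(1, len(energy) - 1):
        new[i] = energy[i] + k * (energy[i-1] - 2*energy[i] + energy[i+1])
    return new

column = [400.0] + [250.0] * 8 + [100.0]   # warm bottom block, cold top block
for _ in range(200):
    column = step(column)
# The profile relaxes toward a smooth, monotone gradient between the reservoirs.
```

Real geometry would of course be 3-D, with radiative, convective and latent-heat fluxes instead of a single exchange coefficient; the point is only that per-block bookkeeping of energy in and out is a well-posed computation.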

The world’s population will continue to increase.
They will need to be fed.
The only natural way to produce food is a single chemical reaction called PHOTOSYNTHESIS.
Photosynthesis requires sunlight, H2O and most importantly CO2.
We MUST make sure there is sufficient atmospheric CO2 to feed the world, but at the moment it is still perilously low.

We have all assumed that the sunlight “not” reaching the Earth’s surface is reflected into space. What if it is “not” reflected, but instead warms the upper atmosphere (the ozone layer, etc.)?
We need to reanalyze all variables!

jlurtz,
“We have all assumed that the Sun light “not” reaching the Earth’s surface is reflected into space. ”
Well, perhaps you assume that but the climate scientists don’t. Look at the diagrams: 79 W/m^2 “solar absorbed by atmosphere” (from the IPCC diagram). Most of that is due to near IR water vapor absorption in the troposphere. A bit (about 5 W/m^2, I think) is absorption of UV in the stratosphere.

Monckton of Brenchley: “the scientific method works by people asking questions rather than assuming religiously that what has been handed down to them from on high is written on tablets of stone.”
Lord Monckton should take his own advice to heart and avoid dogmatism such as he exhibited upthread; I and many others (including, albeit imperfectly, Gerard Roe) have explained that the feedback quantity is not set in stone; it depends on the level of abstraction. Lord Monckton’s choice is largely harmless in the contexts in which I have seen him use it, but, as others have observed, it can lead to unphysical results at some timescales.

Monckton of Brenchley: “To the few (such as Mr Born) who sneer because I ask questions”
I do not “sneer” at asking questions, particularly that one; it took me the better part of a quarter hour to figure out the answer myself.
But, since Lord Monckton had previously disputed my explanation of the forcing measure on which Dr. Spencer’s response was implicitly based, I thought it might help to elaborate; Lord Monckton’s difficulty may have been rooted in the same misunderstanding that he betrayed in disputing my forcing explanation.
Trying thus to help another to understand something is not “sneering” or “adopting a high-handed, arrogant tone.”

Joe Born,
“Trying thus to help another to understand something is not “sneering” or “adopting a high-handed, arrogant tone.””
Exactly. Especially when that person claims he “only asks because he wants to know”.

Bernard
The most abundant gases in our atmosphere do not significantly absorb or emit in the mid-infrared region, although Nitrogen, the most abundant atmospheric gas, does absorb weakly between 4 and 5 microns, and Oxygen has a weak absorption band between 6 and 7 microns (and a relatively strong absorption line around 0.7 microns). Apart from these exceptions, the major constituents of our atmosphere, Nitrogen and Oxygen, are transparent to infrared radiation and do not contribute to the ‘Greenhouse Effect’.
CO2isLife’s diagram is an adaptation of a Wikipedia entry (which wasn’t ideal in the first place). By grouping O2 and O3 together, it has misled CO2isLife into thinking that the absorption at 10 microns is due to oxygen; it isn’t, it is due entirely to ozone.
Also, labelling the infrared region above 10 microns as ‘microwave’ is not supportable.
The separation between incoming solar and outgoing infrared is quite distinct, and I think this diagram shows it better: http://s11.postimg.org/qt4vzvq2b/Sun_Earth_Comparison.png
The main greenhouse gas absorbers are shown here, somewhat clearer because ozone has been isolated: http://s14.postimg.org/eq1gbemht/Absorption_by_Atmospheric_Gases.png
The major greenhouse gas not shown above is water vapour which, if present, forms an absorption continuum spanning many wavelengths.

tty
The atmosphere typically contains 0 to 4 % water vapour. It is effectively zero in arid parts of the world, such as deserts, and in cold parts of the world (where it is frozen out).
You can see this in spectral measurements of downward long wave radiation. Here is one taken in Canada, in February. By matching the wavelengths coming from the atmosphere with the sort of absorption plots shown above, you can identify which gases are radiating: https://scienceofdoom.files.wordpress.com/2010/04/longwave-downward-radiation-surface-evans.png
The first thing to notice is that there is no water vapour contribution, otherwise it would obscure the signature of the other gases.
Notice also the amount of radiation in the 15 micron band, a clear fingerprint of CO2 ‘back-radiation’.
Did you want an experiment to show it was CO2? How about the diagram above?

co2islife wrote: “You won’t believe this one, the source of that chart is NASA. Clearly their client scientists don’t understand the very charts that they publish to support their Global Warming nonsense.”
Wow. Careless of them. But the graph is technically correct (the O2 absorption is the part below about 240 nm) so I doubt they don’t understand it. More likely, they recycled a graph created for a different purpose (one that included a discussion of the absorption of incoming radiation) and did not think about how easily it could be misinterpreted, since, after all, it is perfectly clear to them.

N2 and O2 are transparent to infra-red radiation.
There’s a good reason for that. Infra-red absorption is possible only for a molecule’s vibration modes which are low energy. N2 and O2 have only a single vibration mode which is the two atoms getting further apart and closer together, as if connected by a spring. But this mode gives a stiff spring which gives energies in the ultra-violet – higher energy than visible light.
By contrast molecules with three atoms such as H2O, O3 and CO2 have a mode whereby the angle between the three atoms can vary, and the “spring” restoring force on this is much weaker, meaning the minimum vibrational energy is much lower. This allows absorption and emission in the infra-red – a lower energy than incoming sunlight.
The O2 component of “O2 and O3” in the diagram above will thus not be absorbing and re-emitting at the infra-red frequencies, so has no greenhouse effect. Similarly N2. The greenhouse effect from “O2 and O3” above is solely from O3 – ozone.

Climate Pete wrote: “N2 and O2 have only a single vibration mode which is the two atoms getting further apart and closer together, as if connected by a spring. But this mode gives a stiff spring which gives energies in the ultra-violet – higher energy than visible light.”
Uh, no. The vibrational frequencies of N2 and O2 are smack dab in the middle of other vibrational frequencies. They can be observed by means of Raman spectroscopy. They do not absorb or emit at those frequencies because the change in length of the bond produces no change in dipole moment, so they can not interact with an electromagnetic field by absorbing or emitting a photon.

R Stevenson wrote:
“96% of the total LWIR absorbed by CO2 is in the range 12.5microns to 16.5microns.”
Yep, but most importantly, when you translate that to black body heat, that range is largely in the sub-zero range. The CO2 peak of 15 microns represents a black body heat of –80 degrees C.
Here is a calculator to work out the temperature for the other values: http://www.spectralcalc.com/blackbody_calculator/blackbody.php

Interesting link you gave earlier re your absorption graph. They do have a NASA logo on their page but appear to be an educational group consisting mostly of research students. I have sent them an email.
Do not be confused by the ‘peak wavelength’. (By the way, you can calculate that yourself from Wien’s Law: just divide the temperature into 3000 for an approximate peak wavelength in microns.)
All blackbodies emit over an infinite range of wavelengths. Their spectrum of emission is given by Planck’s Law. A blackbody at 300 K (typical Earth temperature) will emit in a spectrum like this: http://s15.postimg.org/srv2tbq6z/plank_300_K.png
Notice that there is a considerable amount of energy in the CO2 absorption band (assuming 12.5 to 16.5 microns – coloured red) even though the ‘peak’ is at 9.7 microns. About 20% of the total energy is emitted in this band.
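Both figures in this comment (the 9.7 micron Wien peak and the roughly 20% of 300 K emission falling between 12.5 and 16.5 microns) can be checked with a short numerical integration of Planck’s law; a sketch in plain Python, using standard physical constants:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance of a blackbody at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * T)) - 1)

def band_fraction(T, lo, hi, n=20000):
    """Fraction of total blackbody emission between wavelengths lo and hi (m),
    by midpoint-rule integration; the total comes from the Stefan-Boltzmann law."""
    dlam = (hi - lo) / n
    band = sum(planck(lo + (i + 0.5) * dlam, T) for i in range(n)) * dlam
    total = 5.67e-8 * T**4 / math.pi   # sigma*T^4 / pi = radiance over all wavelengths
    return band / total

print(round(2898.0 / 300, 1))                          # Wien peak: ~9.7 microns
print(round(band_fraction(300, 12.5e-6, 16.5e-6), 2))  # band fraction: ~0.19
```

The integration confirms that roughly a fifth of a 300 K blackbody’s emission falls in the 12.5-16.5 micron band even though the peak lies near 9.7 microns.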

In dry air (deserts, polar regions) LWIR in the range 12.5 to 16.5 microns is absorbed to extinction after 3600 m of traverse at present 380 ppm CO2 levels. Increasing CO2 would shorten the traverse but would not absorb further energy. In regions where water vapour is present (typically 2000 ppm) the distance to extinction is much shorter, LWIR in the range 12 to 25 microns being absorbed in 200 m. Rising and falling CO2 and water vapour levels lead neither to corresponding changes in radiant energy absorption nor to any correlation with global warming.

co2islife,
You wrote: “Yep, but most importantly when you translate that to black body heat, that range is largely in the sub-zero range. The CO2 peak of 15 microns represents a black body heat of –80 degrees C.”
I just went to the link you provided and found that at 300 K the peak emission is at a wave number of 588/cm. That corresponds to 17.0 micrometers. But I chose units of wave number, not wavelength, and the location of the peak depends on the choice of units. Although you can compute T from the location of the peak, the location of the peak does not “represent” temperature.
The physically meaningful way to relate temperature and wavelength comes from statistical mechanics. For vibrations, the relation is
h*c/lambda = k*T
where h is Planck’s constant, c is speed of light, lambda is wavelength, and k is Boltzmann’s constant. For 15 microns, T = 961 K.
p.s.: If I could make this private I would do so since my intent is only to be helpful. If you want to sound like you know what you are talking about say “temperature” not “heat”.
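For anyone checking these figures, a short sketch (standard physical constants only; the 1.961 factor is Wien’s displacement law expressed per unit wavenumber):

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

# Peak of the Planck curve expressed per unit wavenumber:
# nu_max (cm^-1) ~ 1.961 * T, so at 300 K the peak sits near 588/cm,
# which is 10^4 / 588 ~ 17.0 micrometers.
nu_max = 1.961 * 300
print(round(nu_max))            # ~588 cm^-1

# "Characteristic temperature" of a 15 micron photon from h*c/lambda = k*T.
lam = 15e-6
T = H * C / (lam * K)
print(round(T))                 # ~959 K (the comment quotes 961 K)
```

The small difference from the quoted 961 K comes only from rounding the constants; the point stands that the statistical-mechanics relation gives a temperature far above –80 C.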

“These photons have to work their way out from the core of the Sun to the surface” – citation: Bad Astronomy.
Ian, you are not insinuating that adding CO2 makes the Earth’s atmosphere equivalent to the core of the Sun, are you?
Just asking, Hans

Black-body radiation is observed only from the surface of an isothermal object whose thickness is much greater than the optical absorption depth at the frequencies of interest. There is no such surface in the atmosphere! Atmospheric emission spectra essentially mirror absorption spectra, as shown by the atmospheric limb REFIR-PAD balloon measurements of Palchetti et al., Atmos. Chem. Phys. 6, 5025 (2006). The derivation for radiation from an isothermal layer of variable thickness is a basic exercise using the Einstein coefficients. http://quondam.hostoi.com/Black_Body_Radiation.pdf

R Stevenson (June 29, 2015 at 1:44 am): “96% of the total LWIR absorbed by CO2 is in the range 12.5 microns to 16.5 microns.”
Reply: This is for black-body radiation at 20 C (293 K), with peak intensity at 9.8 microns. Also, 86% of the total LWIR absorbed by water vapour is in the range 12 microns to 25 microns.

Thanks Mike B, Mike A and CO2is life for your replies.
It seems that everyone agrees that O2 and N2 do have a propensity to absorb/emit IR radiation at some particular ‘peak’ wavelengths. Lots of comments also use words like ‘weakly’ or ‘mostly transparent’ or other words implying that it is ‘not much’.
I think everyone also agrees that all matter absorbs/emits IR radiation according to its temperature. This means that O2 and N2 are capable of absorbing/emitting IR radiation at *all* IR wavelengths. What wavelengths they actually emit/absorb at depends on the temperature of the gases.
Does anyone know how much IR radiation comes from atmospheric O2 and N2 because of their temperatures?
The IPCC energy budget diagram only has a down-welling IR category for ‘greenhouse gasses’. This seems to omit the contributions from O2 and N2 due to their ‘peak’ wavelengths above and due to their ‘temperature driven’ emissions. Given that N2 and O2 are 99% of the atmosphere, these might add up to be a big number. I have not seen any quantification of the N2 and O2 contribution to IR atmospheric back-radiation – does it exist?
By the way, I would include O3 with O2 because it is not part of the fossil fuel discussion which makes it part of the ‘natural’ effect.

I know this is a bit out of context, but this is a question I have wanted to ask our WUWT community for a while: we know that CO2 is plant food, and the planet is greening. Let’s assume we burn all the coal, oil and gas there is; CO2 would rise some more, probably to below 1000 ppm, which is about optimum for plant growth. The greening also means that the carbon sinks are increasing. What will happen when human CO2 emissions stop? How fast would the now-larger carbon sinks consume the CO2, and could it drop below 200 ppm, or will it slowly fall toward pre-industrial levels? We now have the benefit of increased CO2 levels to grow more food, but what happens once CO2 is declining? We can’t put the needed CO2 into the atmosphere forever. Or we would have to extract CO2, massively, from sources such as calcium carbonate, to replenish the atmospheric CO2.

The problem is that CO2 is not the only thing plants need to grow. They need to absorb water and minerals from their roots. If CO2 causes warming then the water may not be present (drought), and some plants cannot absorb minerals as well with high CO2 levels.
Some of these processes also weaken the plant, making it more susceptible to disease and pests. Further, in temperate climes the pests are generally kept in check because they are killed by winter cold. If temperatures rise then they survive winter to cause a nuisance the rest of the year. The dengue mosquito in the USA is one case in point – its range is expanding – although this threatens humans rather than plants. Another pest affecting humans is a brain-eating amoeba which is feeling distinctly more at home in USA ponds than it used to.

In answer to Stephan’s interesting question, once we’ve burned all the fossil fuels we can continue to liberate CO2 to atmosphere for the benefit of plants by burning limestone, which is the principal ultimate CO2 sink.

I was going to say the same, Sir.
And we do burn plenty of it (limestone) already, when we make, among other things, concrete.
Of course, if there is indeed a supply of abiotic methane under the crust of the Earth, it may be a very long time before we need to heat rocks to keep plants from starving.

If the mean temp of the surface of the Earth is about 288K; and the mean temp of the effective radiating layer is about 255K; and if the density is greater (water, air, soil, etc) at the surface than at the effective radiating layer, and if some of the energy at the surface is carried to the effective radiating layer by dry and wet (with latent heat) thermals, then it is quite possible that the surface will warm (in response to increased CO2) less than 1C, the effective radiating layer will warm about 2C, and the unweighted average through the atmosphere will be about 1.5C.

A most interesting suggestion from Matthew Marler, which is not inconsistent with the notion that one might apply a separate SB-derived climate-sensitivity parameter for each successive layer of the atmosphere, determined from that layer’s temperature. In that event, one would indeed use the SB equation at the surface, which would give a surface equilibrium sensitivity of 1.2 K per CO2 doubling, assuming a feedback sum 1.55. In effect, the temperature obtaining at each pressure altitude would regulate the amount of heat retained at that pressure altitude, regardless of whether it arrived by radiative or non-radiative transports, from above or below.
The overall system sensitivity would still be determined at the characteristic-emission altitude, but the possibility does exist that the change in temperature at the surface could be smaller than that which the use of the CEA Planck parameter conventionally mandates. One would need to do some math to see whether one would preserve a near-linear temperature lapse-rate under such a regime, and, if so, how the lapse rate would change with temperature.

http://www.theozonehole.com/images/atmprofile.jpg
Matthew,
What you are suggesting doesn’t work. At present the radiative layer is, on average, 5 km above the surface, where the temperature is approximately 255 K. For convection, the rate of decline of temperature with height in the troposphere is roughly constant (because it depends primarily on pressure and height), so the temperature drop from the surface is the same irrespective of surface temperature. More CO2 would mean a slower infrared energy flow upwards, so it would increase the temperature drop from the surface rather than decrease it.
So to increase the temperature of the effective radiating layer by 2C the surface has to warm by at least 2C. 1C just doesn’t work for either of these two mechanisms. The only possible mechanism would be hydrological cycle effects. Needless to say, if this was the case then it would have to cause more drought over certain latitudes of land to provide the increased latent heat released in the effective radiative layer.

By “warming” I mean the net heat imbalance at the top of the atmosphere, which does not depend on weather.
Where this heat imbalance goes – whether into surface warming or warming of the deep oceans – is entirely dependent on random weather events such as El Nino / La Nina.
You can use RSS temperatures as evidence that “CO2 has increased, and temps are flat.”, although the RSS figures include only a small element of surface temperatures.
However, what is also true (and not inconsistent with your statement either) is that “CO2 has increased and the net imbalance of heat flows at the top of the atmosphere has increased with it”.
The physics says the net heat imbalance only stops when the surface and all the atmosphere warms up to reach a new equilibrium.

ClimatePete: The only possible mechanism would be hydrological cycle effects. Needless to say, if this was the case then it would have to cause more drought over certain latitudes of land to provide the increased latent heat released in the effective radiative layer.
I think that you need to consider the changes to both dry and wet thermals (hydrological cycles). Romps et al estimated a change in the mechanical energy (CAPE) rate of transfer that would accompany a surface temperature increase, and I used that in my “surface sensitivity calculation”. Estimates of the change in the rate of the hydrological cycle (changes in rainfall) vary from 3%/C to 7%/C (O’Gorman, cited in my post), which implies a change in the rate of transfer of latent heat from the surface to the cloud condensation layer. Thus, an increase in the rate of latent heat transfer (through the increase in the rate of the hydrological cycle) does not imply an increase in drought frequency, extent, or duration.
So energy goes into the effective radiating layer by 3 mechanisms, but is radiated to space by a single mechanism. Add to that the difference in mass density between the surface and the effective radiating layer, and it is quite possible for the increased energy transport from a 1 C rise in surface temperature to cause a larger temperature increase at the effective radiating level.
In calculating the sensitivity at the surface, the changes in all 3 transport mechanisms must be considered. If, in addition, the increase in ocean surface temperature increases the size of the “iris effect”, then the changes in the radiative energy transfer rate do not simply add, and the increase in the net transfer rate from surface to upper atmosphere will probably be underestimated.

Here is a graph to make the point I was trying to make above. Sorry, I used heat instead of temperature…ooops. Temperature is the measure of how hot something is (average kinetic energy); heat is the cumulative measure of kinetic energy.
Here is the SB chart demonstrating that a black body at about –80 degrees C has a peak wavelength of 15 microns. By the time the chart gets to 10 microns, the radiation/absorption is down to near 50% of the peak. That wavelength, however, is also absorbed by H2O, which is far more prevalent. http://www.spectralcalc.com/blackbody_calculator/plots/guest1311087559.png
Here is the SB graph for the earth’s average temperature, about 15 degrees C (288 K). As you can see, the earth’s radiation largely avoids the bands absorbed by CO2. Also, to warm the earth you have to trap radiation characteristic of a temperature greater than the earth’s 15 degrees C; simply absorbing radiation characteristic of 15 degrees C will merely maintain that temperature. http://www.spectralcalc.com/blackbody_calculator/plots/guest1870814792.png
Anyway, all these discussions are moot; the oceans are warming. Can someone please explain how atmospheric CO2 can possibly warm the oceans? If we can’t explain how CO2 can warm the oceans, all this talk of CO2 is pure nonsense. What is warming the oceans is clearly what is warming the atmosphere…unless heat no longer rises in the atmosphere and travels from hot to cold.

co2islife : “Can someone please explain how atmospheric CO2 can possibly warm the oceans.”
It is called the Greenhouse Effect. Not a great name, but widely used. It is not hard to find descriptions of how it works. You might start with Wikipedia.
I suppose your real problem is a failure to understand something fundamental, but I can not guess what that might be based on what you wrote.

Stephan,
OK, that is a question that can be answered.
Yes, the IR only penetrates into the top mm or so of the water. Where does the energy go then? Does the surface of the water get extremely hot? No, so most of the energy goes somewhere other than emission. Evaporation? Some, but the water molecules can not diffuse away from the surface fast enough for that to provide more than a small fraction of the heat flux. So what is left is that the heat gets transferred deeper into the water by convection (the surface of the ocean is highly turbulent) and conduction. Both processes are much faster in liquid than in air. So the water below the surface is warmed.
I put water in a pot, put the pot on my stove, turn on the stove. Much less than 1 mm thickness of the water is in contact with the pot, but the water gets hot.

co2islife: “Can someone please explain how atmospheric CO2 can possibly warm the oceans.”
Sunshine warms the oceans. They cool by at least 3 mechanisms, one of which is radiation to the upper atmosphere and to space. Increased CO2 slows the radiation rate by absorbing radiation in the wavelengths characteristic of ocean surface temperature, resulting in a higher temperature. There is no completely adequate analogy for this effect: “greenhouse”, “blanket”, etc. all fail.

If you did the sums assuming the entire 2W/m^2 went into warming the atmosphere and surface then you would get a very nasty shock indeed. “Pitiful” is not the adjective you would then use to describe 2W/m^2.
As an example, less than 0.5 W/m^2 warms the oceans by 0.02 degrees C per decade. Also a “pitiful” figure. But the oceans take around 1700 times the heat the atmosphere does to warm by the same amount. If all that heat (unlikely) were to warm the atmosphere instead, it would go up by 34 degrees C per decade. 10% (still unlikely) would be 3.4 degrees per decade; 1% (plausible) is 0.34 degrees per decade, or 3.4 degrees per century, and is certainly not “pitiful” warming.
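The scaling in that comment is simple enough to lay out explicitly; a sketch using the commenter’s own figures (the 1700 heat-capacity ratio is his stated value, not one computed here):

```python
# Scale the quoted ocean-warming rate up to the atmosphere, using the
# commenter's stated ratio of ~1700 for ocean vs atmosphere heat capacity.
ocean_rate = 0.02     # deg C per decade, from ~0.5 W/m^2 going into the oceans
ratio = 1700          # commenter's ocean/atmosphere heat-capacity ratio

# Share of that heat hypothetically diverted into the atmosphere instead:
for fraction in (1.0, 0.10, 0.01):
    print(fraction, round(ocean_rate * ratio * fraction, 2))
# 1.0 -> 34.0, 0.1 -> 3.4, 0.01 -> 0.34 deg C per decade
```

This reproduces the 34 / 3.4 / 0.34 degrees-per-decade figures in the comment.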
The exact percentage varies with the random weather.
So 2W/m^2 is far from moot and pitiful.

“Can someone please explain how atmospheric CO2 can possibly warm the oceans. ”
That is a serious question. Can anyone explain how atmospheric CO2 can warm the oceans? Can anyone explain how the atmosphere, with 1/2000 to 1/4000 of the heat content of the oceans, can warm them? How in the world can the oceans be warming, and we blame CO2 for the warming atmosphere, when we can’t explain how CO2 can warm the radiator of the earth, the oceans?

If the oceans are warming it’s from solar radiation, geothermal heat flux through the ocean floor, and volcanic activity. The oceans cover 70% of the earth, average 4,000 meters deep, and IPCC has no idea what’s happening below 2,000 meters.

“If the oceans are warming it’s from solar radiation, geothermal heat flux through the ocean floor, and volcanic activity. ”
Yep, but what does that have to do with atmospheric CO2? How does CO2 cause volcanic activity? How does CO2 warm the oceans? IR radiation does not penetrate the oceans, especially IR at 15 microns. http://www.seagrant.umn.edu/newsletter/2012/07/images/graphic_lightpenetration.jpg
Once again, the whole problem faced by the Warmists is that the oceans are warming. Common sense would lead one to believe that what is warming the oceans is also warming the atmosphere, just like the air above a boiling pot is warmed by the boiling water.
Basically the entire AGW argument boils down to the nonsensical belief that you can effectively warm a bath tub of water using a candle placed above the water…and that candle emits light that can’t penetrate the water.
Here are the numbers:
Heat capacity of ocean water: 3993 J/kg/K
Heat capacity of air: 1005 J/kg/K
The atmosphere has a mass of about 5×10^18 kg.
The total mass of the hydrosphere is about 1.4×10^21 kg (1,400,000,000,000,000,000 metric tons, or about 1.5×10^18 short tons).
Basically it takes 4x more heat to warm a kg of water than air, and there are a whole lot more kg of water than air.
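Those two comparisons can be checked with a few lines of Python, using the figures quoted above (a back-of-the-envelope sketch, not an authoritative calculation):

```python
# Back-of-the-envelope check of the heat-capacity comparison above.
# All figures are the ones quoted in the comment; treat them as rough.
cp_water = 3993.0   # J/kg/K, heat capacity of ocean water
cp_air   = 1005.0   # J/kg/K, heat capacity of air
m_atm    = 5e18     # kg, mass of the atmosphere
m_ocean  = 1.4e21   # kg, mass of the hydrosphere

# Total heat needed to warm each reservoir by 1 K
C_atm   = cp_air * m_atm      # ~5.0e21 J/K
C_ocean = cp_water * m_ocean  # ~5.6e24 J/K

print(f"per-kg ratio (water/air): {cp_water / cp_air:.1f}")
print(f"whole-reservoir ratio (ocean/atmosphere): {C_ocean / C_atm:.0f}")
```

With these particular mass figures the whole-reservoir ratio comes out nearer 1,100 than the 1,700 quoted earlier in the thread; either way the order of magnitude is the point.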
Putting those numbers into graphical form you get this game-over chart for the Warmists: https://noconsensus.files.wordpress.com/2011/04/image2.png
This image pretty much destroys any logical explanation of atmospheric CO2 causing the oceans to warm: http://wattsupwiththat.files.wordpress.com/2011/04/atmosphere-vs-ocean-heat-capacity.jpg?w=720
A more detailed discussion can be found here: http://wattsupwiththat.com/2011/04/06/energy-content-the-heat-is-on-atmosphere-vs-ocean/
Here is another article on the topic: http://scholarsandrogues.com/2013/05/09/csfe-heat-capacity-air-ocean/
Digging up the data for this post, I also discovered that NOAA shows that ocean heat has been increasing: https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content700m2000myr.png
Unfortunately I couldn’t find a NOAA chart for atmospheric heat content, though there are plenty about the oceans. If you can find one, please respond to this post with a link to the image.
Lastly, I stumbled upon this jewel. While the Warmists have been claiming ocean acidification due to CO2, NOAA shows that although we have greatly increased our CO2 production over the past 20+ years, the air-sea CO2 flux has basically remained unchanged. Imagine that: more CO2 doesn’t result in more CO2 going into the oceans, according to NOAA. http://www.oco.noaa.gov/_images/Graphics/Air-SeaCO2FluxTrend_500x367.png
This jewel demonstrates that cooler water absorbs CO2 and warmer water out-gasses CO2. Imagine that: Henry’s Law actually explains reality. So once again, the sun warms the oceans, and the warmer oceans release CO2, just like a warm can of Coke releases more CO2 bubbles than a cold can. http://cwcgom.aoml.noaa.gov/erddap/griddap/aomlcarbonfluxes.graph
The more you look behind the curtain of Global Warming the less there is to find. The “science” of global warming is a paper tiger, an emperor with no clothes, and it is no wonder they want to cut off debate, just like the Wizard of Oz used the curtain to hide his fraud.
Once again, anyone interested in protecting and restoring the integrity of science and academia should demand that Congress establish an impartial agency to verify the validity of the conclusions reached by research used to form public policy. The same scientific rigor the FDA applies to verify drug company research needs to be applied to publicly funded researchers as well. We need to hold these “researchers” accountable. This global warming hoax is the Tobacco hoax on steroids. In this case the press, academia and government are not the watchdogs; they are the perpetrators of the fraud.

How can the deep oceans possibly absorb heat? https://wattsupwiththat.files.wordpress.com/2015/06/clip_image010_thumb2.jpg
The above diagram shows a combination of 161 W/m^2 of sunlight and 398 W/m^2 of infrared (IR) radiation hitting the surface of land and sea. How can this possibly result in deep ocean warming to depths of 2000 m? Sunlight can penetrate a little way, but mostly gives up by about 100 m, and infra-red barely penetrates at all.
However, the ocean surface layer clearly can be heated by this total of 559 W/m^2 of sunlight + IR.
The surface layer gets hotter, and some of it evaporates, cooling it a little. It gets hotter again and again there is evaporation.
After a while there has been quite a bit of evaporation and the surface layer is also warmer than the water below.
Surely warmer water is less dense than cooler water? So how could it descend?
Ah, we’ve forgotten the evaporation bit. The evaporation is eliminating water, but the ocean is salty, so the evaporation is also concentrating the salt.
At the same temperature, more salty water is more dense than less salty water. In fact, if the difference in salinity is high enough, warm more-salty water can be more dense than cooler less-salty water.
So the heat can descend into the deep oceans because of the additional salinity of the warm water, caused by surface evaporation concentrating the salt in the water. http://aquarius.umaine.edu/images/ed_mixing_sinking.png
As a specific example, water at 15 degrees C and salinity 35.6 PSU has a density of 1.0265 g/cc. Water at 10 degrees C and salinity of 33.7 PSU has a density of 1.0260 g/cc – less dense. So the hotter water will descend because it has higher salinity and is more dense.
And that is how heat from the ocean surface can find itself transferred to the deep ocean, displacing cooler but less salty water which is then forced to rise.
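The density comparison above can be reproduced with a simplified linear equation of state for seawater. The expansion coefficients and reference density below are assumed typical textbook values, not figures from the comment, so this is only a sketch:

```python
# Simplified linear equation of state for seawater (a sketch, not the full
# UNESCO/TEOS-10 formulation):
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# alpha, beta and rho0 are assumed typical near-surface values.
ALPHA = 2.0e-4   # 1/K, thermal expansion coefficient (assumed)
BETA  = 7.6e-4   # 1/PSU, haline contraction coefficient (assumed)
RHO0  = 1026.0   # kg/m^3, reference density at T0, S0 (assumed)
T0, S0 = 10.0, 33.7

def density(T, S):
    """Approximate seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

warm_salty = density(15.0, 35.6)  # the warm, saltier parcel from the comment
cool_fresh = density(10.0, 33.7)  # the cooler, fresher parcel

print(warm_salty, cool_fresh, warm_salty > cool_fresh)
```

The roughly 0.5 kg/m^3 margin by which the warm salty parcel wins is consistent with the 1.0265 vs 1.0260 g/cc figures quoted above.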

Three Legged Stool of CAGW: 1) Anthropogenic 2) Radiative Forcing 3) GCMs
Leg the 2nd
Radiative forcing of CO2 warming the atmosphere, oceans, etc.
If the solar constant is 1,366 +/- 0.5 W/m^2, why is ToA 340 (+10.7/-11.2)^1 W/m^2 as shown on the plethora of popular heat balances/budgets? Collect an assortment of these global energy budget/balance graphics. The variations between some of them are unsettling. Some use W/m^2, some use calories/m^2, some show simple percentages, some a combination. So much for consensus. What they all seem to have in common is some kind of perpetual-motion heat loop with back radiation ranging from 333 to 340.3 W/m^2 without a defined source. BTW, the additional RF due to CO2 from 1750 to 2011 is about 2 W/m^2 spherical, or 0.6%.
Consider the earth/atmosphere as a disc.
Radius of earth is 6,371 km, effective height of atmosphere 15.8 km, total radius 6,387 km.
Area of 6,387 km disc: PI()*r^2 = 1.28E14 m^2
Solar Constant……………1,366 W/m^2
Total power delivered: 1,366 W/m^2 * 1.28E14 m^2 = 1.74E17 W
Consider the earth/atmosphere as a sphere.
Surface area of 6,387 km sphere: 4*PI()*r^2 = 5.13E14 m^2
Total power above spread over spherical surface: 1.74E17/5.13E14 = 339.8 W/m^2
One fourth. How about that! What a coincidence! However, the total power remains the same.
1,366 * 1.28E14 = 339.8 * 5.13E14 = 1.74E17 W
Big power flow times small area = lesser power flow over bigger area. Same same.
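The disc-versus-sphere arithmetic above can be sketched in a few lines of Python; the small differences from the 339.8 figure come from rounding the areas to three significant figures, since the exact ratio of sphere area to disc area is 4:

```python
import math

# Disc vs sphere bookkeeping, using the radii quoted in the comment.
SOLAR_CONSTANT = 1366.0           # W/m^2
r = (6371 + 15.8) * 1000          # m, earth radius plus effective atmosphere height

disc_area   = math.pi * r**2      # intercepting disc,  ~1.28e14 m^2
sphere_area = 4 * math.pi * r**2  # radiating sphere,   ~5.13e14 m^2

total_power = SOLAR_CONSTANT * disc_area   # ~1.75e17 W (the comment's 1.74e17 after rounding)
spread      = total_power / sphere_area    # ~341.5 W/m^2

# The sphere's area is exactly 4x the disc's, so spread is exactly 1366/4.
print(spread, SOLAR_CONSTANT / 4)
```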
(Watt is a power unit, i.e. energy over time. I’m switching to English units now.)
In 24 hours the entire globe rotates through the ToA W/m^2 flux. Disc, sphere, same total result. Total power flow over 24 hours at 3.41 Btu/h per W delivers heat load of:
1.74E17 W * 3.41 Btu/h /W * 24 h = 1.43E19 Btu/day
Suppose this heat load were absorbed entirely by the air.
Mass of atmosphere: 1.13E+19 lb
Sensible heat capacity of air: 0.24 Btu/lb-°F
Daily temperature rise: 1.43E19 Btu/day/ (0.24*1.13E19) = 5.25 °F / day
Additional temperature due to RF of CO2: 0.03 °F, 0.6%.
Obviously the atmospheric temperature is not increasing 5.25 °F per day (1,916 °F per year). There are absorptions, reflections, upwellers, downwellers, LWIR, SWIR, losses during the night, clouds, clear skies, yadda, yadda.
Suppose this heat load were absorbed entirely by the oceans.
Mass of ocean: 3.09E21 lb
Sensible heat capacity: 1.0 Btu/lb °F
Daily temperature rise: 1.43E19 Btu/day / (1.0 * 3.09E21 lb) = 0.00462 °F / day (1.69 °F per year)
How would anybody notice?
Suppose this heat load were absorbed entirely by evaporation from the surface of the ocean w/ no temperature change. How much of the ocean’s water would have to evaporate?
Latent heat capacity: 970 Btu/lb
Amount of water required: 1.43E19 Btu/day / 970 Btu/lb = 1.47E+16 lb/day
Portion of ocean evaporated: 1.47E16 lb/day / 3.09E21 lb = 4.76 ppm/day (1,737 ppm, 0.174%, per year)
More clouds, rain, snow, etc.
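The three "suppose the whole heat load went into X" scenarios above can be checked with a few lines of Python, using the same figures and English units as the comment. This is arithmetic only, not a claim about where the heat actually goes:

```python
# Re-run the three heat-load scenarios from the comment (English units as stated).
BTU_PER_WH = 3.412                       # Btu per watt-hour

total_power_W = 1.74e17                  # W, intercepted solar power from above
heat_per_day  = total_power_W * BTU_PER_WH * 24   # ~1.43e19 Btu/day

# 1) All into the air
m_air, cp_air = 1.13e19, 0.24            # lb, Btu/lb-F
dT_air = heat_per_day / (cp_air * m_air)           # ~5.25 F/day

# 2) All into the oceans
m_ocean, cp_water = 3.09e21, 1.0         # lb, Btu/lb-F
dT_ocean = heat_per_day / (cp_water * m_ocean)     # ~0.0046 F/day

# 3) All into evaporation, with no temperature change
latent = 970.0                           # Btu/lb, latent heat of vaporization
evap_frac = (heat_per_day / latent) / m_ocean      # ~4.8e-6 of the ocean per day

print(dT_air, dT_ocean, evap_frac * 1e6)
```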
The point of this exercise is to illustrate and compare the enormous difference in heat handling capabilities between the atmosphere and the water vapor cycle. Oceans, clouds and water vapor soak up heat several orders of magnitude greater than GHGs put it out. CO2’s RF of 2 W/m^2 is inconsequential in comparison, completely lost in the natural ebb and flow of atmospheric heat. More clouds, rain, snow, no temperature rise.
Second leg disrupted.
Footnote 1: Journal of Geophysical Research, Vol 83, No C4, 4/20/78, Ellis, Harr, Levitus, Oort

Radiative forcing and energy movement between oceans, clouds and water vapour are not comparable because over an extended period of time radiative forcing is always in the same direction, while the energy flows into and out of the others average to zero.
Hence the longer things go on the more marked the impact of radiative forcing will be. As it stands, one second of incoming sunlight is the same energy as three minutes of RF (at 2 W/m^2)
This is in response to what Nicholas Schroeder said:

CO2’s RF of 2 W/m^2 is inconsequential in comparison, completely lost in the natural ebb and flow of atmospheric heat. More clouds, rain, snow, no temperature rise.

Oceans, clouds and water vapor soak up heat several orders of magnitude greater than GHGs put it out.

On the contrary, in the diagram above the downward infrared emissions due to GHGs are 342 W/m^2, which is the same order of magnitude as all the other major flows on the diagram. Incoming sunlight is very similar; outgoing solar plus infrared is very similar; thermal and other emissions from the surface total 50% more. But 342 W/m^2 is a huge thermal flow, don’t you think, given that it would be zero if there were no greenhouse gases in the atmosphere?

That diagram doesn’t include any of the energy that is changed in form. Sunlight that hits plants and algae is converted to chemical energy, and some of that is sequestered into deposits that later become oil. Much of that energy is used to oxidize things and trigger other chemical reactions; some is used to split water molecules and create sugars. Some is even converted to electrical/mechanical energy through solar panels. The only way the incoming energy equals the outgoing energy is if there is no photosynthesis.

co2islife,
That’s very true. However, there are a few factors to consider.
Firstly photosynthesis works on sunlight only, so that excludes the 342 W/m^2 of downwelling IR.
Secondly, the efficiency of photosynthesis is poor: 0.1 to 2% generally, but 7-8% for sugar cane (which accounts for a very small proportion of total photosynthesis), according to https://en.wikipedia.org/wiki/Photosynthetic_efficiency#Typical_efficiencies
Thirdly, land use changes are roughly neutral. That means energy is absorbed in summer by plants, then approximately the same energy is released in winter as leaves drop and rot.
And lastly, the energy used by photosynthesis is minute. It’s only about 10x the human use of energy, which is around 2.3 TW, making photosynthesis 23 TW = 2.3 x 10^13 W. Divide by the area of the earth, which is 510 million square kilometres or 5.1 x 10^14 square metres, making it 0.045 W/m^2.
So photosynthesis is only about 1/40th the size of the radiative forcing of 2 W/m^2, and only about 1/13th the size of the smallest flow on the diagram, the 0.6 W/m^2 imbalance on the left-hand side.
So it’s very questionable that photosynthesis energy flows belong on the diagram as we are talking noise level here.
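The scale estimate above is easy to reproduce; this sketch just repeats the comment's own figures (the 10x-human-use multiplier for photosynthesis is the comment's assumption):

```python
# Scale of the global photosynthesis energy flow, using the figures quoted above.
human_use_W      = 2.3e12             # W, ~2.3 TW human energy use (as quoted)
photosynthesis_W = 10 * human_use_W   # ~23 TW (the 10x multiplier is as quoted)
earth_area_m2    = 5.1e14             # m^2, surface area of the earth

flux = photosynthesis_W / earth_area_m2   # ~0.045 W/m^2

# Compare against the 2 W/m^2 CO2 forcing and the 0.6 W/m^2 imbalance.
print(flux, 2.0 / flux, 0.6 / flux)
```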

398-239=159 not 342. Are these some kind of breeder GHGs that emit more than they absorb? Maybe that’s why there are some sectors that object to this on 2nd law considerations.
IPCC has clouds at -20 W/m^2, a cooling ten times the size of the CO2 heating.
400 ppm is pretty darn close to zero. The GHE is due to water vapor, not CO2. See Miatello et al.

Nicholas,
You are right to question GHG downward IR of 342 W/m^2, because I was too hasty: it isn’t the downwelling GHG IR but the total downwelling IR. But the analysis above of what the right answer is omits some flows.
The easiest way to look at it is that the non-IR flows up are 84 + 20 = 104 W and there is 79 W of sunlight deposited in the atmosphere for a total of 183W of non-IR energy deposited in the atmosphere. The IR flows from this are therefore going to be approx 92W up and 92W down.
So the thermal outgoing TOA IR from the ground alone is 239 - 92 = 147, and the thermal downward surface IR due to the GHG effect alone is 342 - 92 = 250 W/m^2.
The numbers now balance, because the 398 W/m^2 of IR from the surface ends up as 250 returned back down by the GHG effect and 147 leaving at the TOA.
So the downwelling GHG IR radiation is only 250 W/m^2 and not 342 W/m^2. My apologies for being too quick first time.
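The bookkeeping above can be sketched as follows. These are the diagram's round numbers, and the 50/50 up/down split of the non-IR energy deposited in the atmosphere is the same simplifying assumption made in the comment:

```python
# Bookkeeping check of the flow split sketched above (all in W/m^2).
convection_latent = 84 + 20      # non-IR flows up from the surface
solar_in_atmos    = 79           # sunlight absorbed directly by the atmosphere
non_ir_deposit    = convection_latent + solar_in_atmos   # 183 W/m^2

half = non_ir_deposit / 2        # ~92 up and ~92 down (assumed 50/50 split)

toa_ir_total  = 239              # outgoing IR at top of atmosphere
down_ir_total = 342              # total downwelling IR at the surface
surface_up_ir = 398              # IR emitted by the surface

toa_from_ground = toa_ir_total - half    # ~147, surface IR escaping at the TOA
down_from_ghg   = down_ir_total - half   # ~250, IR returned by the GHG effect

# Surface emission should equal (returned GHG IR) + (surface IR escaping at TOA).
print(toa_from_ground, down_from_ghg, down_from_ghg + toa_from_ground, surface_up_ir)
```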

According to “The greenhouse effect and carbon dioxide” Zhong and Haigh, 2013, the outgoing IR flux at the TOA would be 70.6 W/m^2 bigger with no water vapour in the atmosphere, and 25.5 W/m^2 bigger with no CO2 in the atmosphere, with all other constituents unchanged except the one mentioned.
See table 1 of http://onlinelibrary.wiley.com/doi/10.1002/wea.2072/full .
This means that the GH effect of CO2 is approximately one third that of water vapour. The difference is that CO2 levels and their GH effect are determined by humans, whereas that of water vapour is approximately constant, except for the $64 million question of whether temperature increases due to CO2 are amplified (and by how much) by slightly more water vapour in the atmosphere, with its corresponding GH effect.

Way too much confusion over facts. I had to drop off during the discussion of whether O2 and N2 participate in photon absorption or emission. They don’t. It is not a fact that everything emits thermal radiation, and if it is not a fact, it cannot be a principle. O2 and N2 do not have any resonance bands for absorption/emission in the wavelengths we are talking about. (Nearly everything will transport heat by conduction, however.)
Another misconception is that there is some mysterious reservoir of heat in the atmosphere that is either being pumped or drained by absorption/emission. The absorption/emission process, when in equilibrium, is a net-neutral process. There is neither gain nor loss of normal heat (molecular kinetic energy). For every photon absorption that increases the energy of a “greenhouse gas” molecule, to share with the rest of the atmosphere, there is a corresponding spontaneous atmospheric heating of a dumb, unsuspecting “greenhouse gas” molecule that will result in a photon emission. (All thermal processes are stochastic at the molecular level.) The re-radiating molecules are just “pass-throughs.”
The best way to think of the re-radiation process is that it amounts to a scattering of photons in the affected IR band, with effectively half the scatter going up (and out) and half the scatter going down again. (Yes, there are some scattered horizontally, but they wind up being scattered up or down eventually.) The clear atmosphere is just a scattering filter. It gets heated or cooled by contact with the surface of the Earth, and its heat content is miniscule compared to terrestrial surfaces. (Clouds are different creatures, but they are driven by the thermodynamics of gaseous and liquid water.)

So Michael, you do understand that if a gas has a temperature higher than zero K, the molecules of that gas are in constant collision with each other. That is what “heat” is: the energy of those purely mechanical kinetic-energy collisions.
Since they can’t all be marshalled to collide only in certain directions, all of that KE can never be extracted efficiently to do mechanical work.
That is why “heat” (noun) is the garbage of the energy kingdom.
And it is during the interaction of two atoms or molecules in a thermal collision, that the electric charge distribution of the atom or molecule becomes distorted, so that a normally charge symmetric gas molecule like N2 or O2 or even an atomic gas like Argon, that in free flight has a zero electric dipole moment, will during a collision develop a non zero dipole moment, and also quadrupolar and other higher order electric moments, which according to Hertz / Maxwell are perfectly capable of radiating EM radiation during the collision.
If you don’t understand why this is so, just consider that the nucleus and the electron “cloud” of an atom both have the same but opposite electric charge, so both can generate the same Coulomb forces.
But virtually all of the KE and momentum of an atom or molecule in motion, is contained in the nucleus, which is typically about 3750 times more massive than the electrons.
The proton mass is 1836 times the electron mass, while the neutron mass is 1837 times the electron mass, so most light atoms have a 1:1 n-p ratio.
And accelerating electric charges DO radiate electromagnetic radiation continuously. Of course a charge acceleration is the same as a varying electric current. And you don’t actually have to have a copper wire antenna to have a varying electric current which can cause radiation.
So diatomic molecules can and do radiate EM thermal radiation when at a Temperature above zero K.
Now MODTRAN as far as I know calculates the molecular absorption spectra based on the known electronic structure of different molecules.
I’m not aware that it even calculates thermal radiation spectra which do not depend on specific molecular or atomic electron structure (quantum mechanics).
Now you can’t call the atmospheric gases O2 or N2 or Ar Black Body radiators, because a black body radiator is a body that completely absorbs ALL EM radiant energy from down to, but not including zero frequency, and up to but not including zero wavelength.
No such object or material exists or can exist, and in fact there is NO material that can absorb 100% of even a single frequency or wavelength, let alone all possible frequencies.
So atmospheric gases are far too low in molecular density to be optically opaque, compared to liquids or solids, but they still can and do radiate EM radiation down at the single atom or molecule level, when they collide with each other.
The time during which two molecules approaching a collision can begin to “see” each other electrically (Coulomb forces), until they crush into each other and then part again, is an eternity compared to the time it takes to emit EM waves from the decelerating and accelerating electric charges in their electrons and nuclei.
So the atmospheric gases can radiate thermally, and the spectrum depends on the temperature, not on the species of molecule.
So I don’t think you can use MODTRAN for that.
In any case, the extra-terrestrial LWIR spectral signature of the earth seems to have a thermal spectrum that is characteristic of the surface and near surface Temperature, rather than any stratospheric Temperature. And the atmospheric gas densities are highest near the surface, so that’s where I would expect the bulk of the atmospheric thermal radiation to come from.
I don’t buy that a planet needs a GHG containing atmosphere or else it will fry because it can’t radiatively cool the atmosphere.
g

The Stanford two mile long electron linear accelerator, exists precisely because accelerating electric charges DO radiate EM radiant energy continuously.
So if you try to accelerate electrons in a circular orbit, the continuous radiation losses eventually eat up all the energy you add per lap and you reach an energy limit.
Protons are 1836 times more massive than electrons so you can get a factor of 1836 times as much KE with protons.

Thanks for the condescension, George. I have three degrees in Aeronautics and Astronautics, specializing in gas physics. My day job used to be designing multi-megawatt infrared lasers that propagated through the atmosphere. O2 and N2 were perfectly transparent for our purposes. They are not dipole molecules and cannot execute a quantum transition allowing for the absorption or emission of a photon. At least not at energies lower than the ionization energy.
The sort of thing you are talking about is an “induced dipole,” resulting from the temporary distortion of electronic fields during a molecular collision. In the 1970s, Abraham Hertzberg of the University of Washington came up with a concept for a laser utilizing this concept, but subsequent experimental work showed that the effect was nil.

There is no known state transition where the rotational/vibrational energy associated with a GHG molecule absorbing a photon can be converted into the linear momentum associated with its kinetic temperature. A collision may induce the re-emission of a photon from an energized GHG molecule and while from a macroscopic point of view this is equivalent to an increase in linear momentum, they are quite different and only photons can participate in the radiant balance of the planet. Trenberth conflates these because otherwise, there is no wiggle room to support CAGW.

co2isnotevil
[Too much] CO2 is evil (based on your definition above) and here is why.
Every time a CO2 molecule collides with another molecule the available energy is redistributed. The thermodynamics of this are well known, and say that all the available vibrational or translational modes get an energy of kT/2 on average. k is Boltzmann’s constant and T is the temperature in K.
Modes include x, y and z direction velocities which are not quantised so always get their fair share. =kT/2.
Then all molecules (but not single atoms like Argon) can have rotational energy in at least two directions, but rotation about the axis joining the two atoms of a two-atom molecule does not get any energy at all.
And lastly we have internal molecular vibrational modes. For a two-atom molecule the only available mode is a stretching and shrinking of the bond, and the energies are generally too high to allow measurable activation of the vibration at normal surface temperatures. For molecules with three or more atoms (H2O, CO2, O3, CH4) there are other vibrational modes available with lower energies, which cause the IR spectrum given in figure 4a of http://onlinelibrary.wiley.com/doi/10.1002/wea.2072/full (Zhong and Haigh 2013, “The greenhouse effect and carbon dioxide”). However, because the energies of these other vibrational modes are not kT/2 (at room temperature), they are not 100% populated. But they do have a significant population, unlike the 2-atom molecule modes.
For CO2 the vibrational energy can be converted to linear or rotational energy of that or another molecule. The time to spread energy evenly between the various modes which take kT/2 each depends on the collision rate, which increases with temperature and density. Close to the ground IR radiation absorbed into vibrational modes gets thermalised quickly (a microsecond), but high up it takes longer, since the collision rate falls with temperature and pressure. Nevertheless, it does happen everywhere on timescales significant for talking about CO2 as a GHG.
See http://rabett.blogspot.co.uk/2010/11/its-turnstile.html
So CO2 vibrations caused by absorption of IR pass energy on to other modes faster than you can say boo to a goose, and are also populated by energy from them. Statistically 6% of CO2 molecules are activated at room temperature at any one point in time.
This stuff is just standard thermodynamics, by the way. The only complicated thing about it is the calculation or measurement of the energy required to activate the various vibrational modes – and generally the theory and measurement agree well.
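The "6% of CO2 molecules are activated" figure can be roughly reproduced with a single Boltzmann factor. The 667 cm^-1 bending-mode wavenumber and its double degeneracy are standard spectroscopy values rather than figures from the comment, and ignoring the full vibrational partition function makes this only an estimate:

```python
import math

# Rough Boltzmann estimate of the fraction of CO2 molecules with the 667 cm^-1
# bending mode excited at room temperature. Wavenumber and degeneracy are
# standard spectroscopy values (assumed here); this is a sketch, not a full
# partition-function calculation.
K_B = 1.380649e-23      # J/K, Boltzmann constant
H   = 6.62607015e-34    # J*s, Planck constant
C   = 2.99792458e10     # cm/s, speed of light in cm/s (to pair with cm^-1)

T       = 296.0         # K, room temperature
nu_bend = 667.0         # cm^-1, CO2 bending mode
g_bend  = 2             # the bending mode is doubly degenerate

kT_in_wavenumbers = K_B * T / (H * C)     # ~206 cm^-1 at room temperature
fraction = g_bend * math.exp(-nu_bend / kT_in_wavenumbers)

print(fraction)   # roughly 0.06-0.08, consistent with the ~6% figure quoted
```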

Climate Pete,
You are also confused by the conflation of energy transported by photons and energy transported by matter. Redistributing energy from an energized GHG molecule can only happen upon the collision with a ground state GHG molecule, in which case the state of the colliding molecules can flip, however, this can not happen upon the collision with a non GHG molecule which has no resonant mode to exchange the required quantum of energy with. Ultimately, this energy of a state change will end up emitted by a GHG molecule as a photon, either spontaneously or consequential to a collision with a non GHG molecule.
If what you say is true, then the spectral characteristics of the energy emitted by the planet would have almost no energy in the various GHG absorption bands as all of the energy in those bands would be absorbed and converted into the kinetic energy of translational motion. Instead, we measure about half the energy there would have been had there been no GHG absorption which is clear evidence that the energy of absorbed photons stays with photons, half of which ultimately escape the planet and half of which are returned to the surface, warming it beyond what the post albedo solar input can do on its own. How can you explain the failure of this simple test of your hypothesis?

“Redistributing energy from an energized GHG molecule can only happen upon the collision with a ground state GHG molecule”
..
That is not true. If two energized GHG molecules collide, and one has a large amount of kinetic energy, it can impart a fraction of that energy on to the molecule it collides with.

Yes, but I’m considering that most GHG molecules in the atmosphere are quickly returned to the ground state by the collision with an N2/O2 molecule and that few, if any, are in high enough states of excitation where complex state exchanges are possible with other highly energized GHG molecules.

This is what I love about the field of “climate science.” They rush to a conclusion, the only conclusion, that being CO2 is the cause without ever even really trying to understand what is going on.
“IR does indeed warm water. Place a bowl of water under a heat lamp and see what happens.”
I don’t believe that is the case if you use an IR lamp with peak radiation at 15 microns emitting at the same energy level as the earth, about 25 W/m^2/micron. http://s11.postimg.org/qt4vzvq2b/Sun_Earth_Comparison.png
Here is a blog posting by one of the greatest climate scientists out there asking for answers.
Can Infrared Radiation Warm a Water Body?
April 21st, 2014, by Roy W. Spencer, Ph.D. http://www.drroyspencer.com/2014/04/can-infrared-radiation-warm-a-water-body/
He starts his blog posting highlighting what I’ve been saying:
“I frequently see the assertion made that infrared (IR) radiation cannot warm a water body because IR only affects the skin (microns of water depth), whereas solar radiation is absorbed over water depths of meters to tens of meters.”
He concludes where I would have thought a real science would have started: designing a simple experiment.
“I would like to hear what others know about this issue. I suspect it is something that would have to be investigated with a controlled experiment of some sort.”
I would like to see the results of this experiment. How difficult can it be to shine an IR light emitting 15 microns at the proper intensity directly onto a fish tank of water? My bet is that the temperature of the water won’t change one iota. More importantly, even if it did warm, it wouldn’t be due to CO2. CO2 and H2O basically absorb the same wavelengths, and H2O is at a much higher concentration in the atmosphere, especially directly over water. In reality, what is most likely happening, if in fact GHGs are the cause of the warming, is that the sun warms the oceans, increasing evaporation, and it is the H2O that causes the warming, not CO2.
Figure 10 illustrates that a film of water thicker than 0.05 mm will absorb most of the infrared radiation longer than two microns. https://www.watlow.com/downloads/en/training/STL-RADM-89.pdf
It looks like 10 microns at high enough intensity may warm water, but we already know that. It most likely causes evaporation.
If far infrared rays of about 10 µm, equivalent to the oscillatory wavelength range of the water molecule, are irradiated, resonance absorption occurs, leading to a decrease of clusters and faster movement of water molecules. In other words, the water is activated. http://www.supergreenuk.com/whyinfrared.html

An ordinary bottle of water at about 15 deg C radiates LWIR EM radiation at about 390 W m^-2, just like the mean-temperature earth surface.
So that is what you need to use as a LWIR source to demonstrate the green house gas warming by CO2 or other GHG molecules.
A “heat” lamp or light bulb can be at up to half the temperature of the solar surface, or ten times the earth surface temperature. So it will have a total BB emittance of up to 10,000 times that of the earth surface, and its peak spectral radiant emittance can be up to 100,000 times that of the earth surface. And that emission is at wavelengths totally absorbed by microns of water.
So using any kind of heat lamp is a fraud. Use a bottle of water; even a Coke or Pepsi would do (chilled to 15 deg C).
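The 10,000x and 100,000x figures above follow directly from the Stefan-Boltzmann T^4 law and the T^5 scaling of the Planck-curve peak. The 2,880 K lamp temperature below is just the "half the solar surface temperature" assumption from the comment:

```python
# Why a heat lamp is the wrong LWIR source: Stefan-Boltzmann scaling.
SIGMA = 5.67e-8          # W/m^2/K^4, Stefan-Boltzmann constant

T_surface = 288.0        # K, mean earth-surface temperature (~15 C)
T_lamp    = 2880.0       # K, assumed filament at ~half the solar surface temp

emit_surface = SIGMA * T_surface**4    # ~390 W/m^2, matching the bottle of water
emit_lamp    = SIGMA * T_lamp**4

total_ratio = emit_lamp / emit_surface   # T^4 scaling of total emittance -> 10^4
peak_ratio  = (T_lamp / T_surface)**5    # Planck peak intensity scales as T^5 -> 10^5

print(emit_surface, total_ratio, peak_ratio)
```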

You guys have missed the point. If the ocean water was not salty then there would be no mechanism to transmit heat to the depths of up to 2km at which the ARGO probes have measured an increase in temperature.
The water is heated at the surface. Evaporation (of which there is plenty) makes the surface water more salty, because the water evaporates but the salt does not, and more salty water is more dense. If it is salty enough, the warmer water can descend because it is denser than colder water which also happens to be less salty. See here for a full description and chart of densities by temperature and salinity.

The other fly in this IR warming the oceans ointment is that the IR the CO2 would be “trapping” would have been emitted from the ocean itself. Considering that the CO2 molecule emits IR back in a sphere, only a small % of the radiation it absorbs would be re-radiated back to be re-absorbed by the ocean. I don’t see how heat emitted by the oceans can be re-radiated back to the oceans causing them to warm. Even the IR lamp experiment wouldn’t prove much given that the IR lamp would be new energy being introduced to the system, whereas the CO2 theory recycles emitted energy.

OK, here’s another experiment.
Place a closed gallon jug of water under the heat lamp and a gallon in an open pan. Which one will get the hottest?
Place a gallon jug of water in the shade in 115 F Phoenix and a gallon in an open pan. Which one will get the hottest?
The bottled up water gets hot, but the pan that’s free to evaporate stays relatively cool.
That’s why if you want to keep your pool at 85 F you still have to heat it, even though it’s 105 F air temp.
That’s how those canvas water bags work, a wet bandana, an evaporative cooler.

Here is another simple experiment:
1) Have an insulated bath in a closed glass container filled with pure CO2, or at least 7,000 ppm
2) Have an insulated bath in a closed glass container filled with N2/O2
Heat the baths to 99 degrees C, turn out the lights to replicate night, and measure the decay in temperature of the two baths. If the bath in the container filled with CO2 cools more slowly than the other one, then CO2 may actually lead to warming of the oceans. My bet is that it won’t. Of course 100% pure CO2 isn’t representative of anything, and a chamber filled with 7,000 ppm CO2 might be better, considering that is the highest level reached in the past 600 million years.
I would love to see the results of this experiment if anyone out there can use their lab facilities to run it. The results should then be presented to congress. The simpler the experiment the better.

What I find so fascinating is that there are so many unanswered questions regarding this “settled” science. The very fact that “scientists” claim that something as complex as the climate can ever be settled pretty much proves real scientists don’t occupy our climate science departments, especially since just 30 years ago they were screaming that an ice age was coming. The arrogance is of epic proportions. Given that a simple question like whether ocean-radiated IR can be re-radiated to warm the oceans can’t be answered definitively, the climate scientists really aren’t interested in truly understanding the system. The most basic of experiments seem to have been overlooked.
Even their models fail miserably. Anyone with an ounce of knowledge about building multivariate models would know that they have systematically over-weighted an insignificant coefficient. Climate science is like a dietician who, trying to sell Under Armor shoes, creates a model about weight loss. Their model would be weight loss = f(exercise, caloric intake and a dummy variable for whether you own Under Armor shoes), weighted as: weight loss = 0.2 x caloric intake + 0.2 x exercise + 0.6 x Under Armor shoes. This makes a great marketing piece, but even an idiotic model like the one outlined would have better results than the IPCC’s CO2-focused climate models. The only way you get a systematic overestimation of temperature is if you have a model biased toward a faulty variable. We see it in market models all the time. The climate scientists are learning lessons Wall Street has known for decades: if we can’t model the S&P 500, we certainly can’t model something infinitely more complex like the climate 100 years into the future.
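The Under Armor analogy can be sketched numerically. Every input and coefficient below is invented purely for illustration of how hard-wiring most of the weight onto a spurious variable produces systematic prediction error:

```python
# Sketch of the shoe-model analogy: compare a "marketing" model that puts
# 60% of its weight on an irrelevant shoe dummy against outcomes that, in
# this toy world, depend only on diet and exercise. All numbers are made up.

# (caloric_deficit, exercise, owns_shoes) for hypothetical people
people = [(0.8, 0.6, 1), (0.8, 0.6, 0), (0.3, 0.2, 1), (0.3, 0.2, 0)]

def true_loss(cal, ex, shoes):
    # "Reality" in this toy example: loss depends only on diet and exercise
    return 0.5 * cal + 0.5 * ex

def marketing_loss(cal, ex, shoes):
    # Biased model: over-weights the irrelevant shoe variable
    return 0.2 * cal + 0.2 * ex + 0.6 * shoes

for cal, ex, shoes in people:
    print(shoes, round(true_loss(cal, ex, shoes), 2),
          round(marketing_loss(cal, ex, shoes), 2))
# Shoe owners are systematically over-predicted, non-owners under-predicted.
```

The point of the sketch is only the direction of the bias: wherever the dummy is "on," the biased model overshoots.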

This recent chart by Dr Box pretty much proves just how nonsensical the conclusions reached by climate “scientists” are. They clearly can’t see the forest for the trees. Just look at this chart.
Volcanic activity is blamed for the cooling of the period 1850 to 1900, as demonstrated by the chart labels.
Point #1) Dr Box doesn’t seem to grasp the simple concept that if the volcanoes were causing cooling, the temperatures were artificially depressed, meaning the actual warming from 1850 to 2000 would have been LESS if it were not for the volcanoes. That means that, according to the graph, there would have been LESS than 1 degree C of warming since 1850.
Point #2) It cooled from 1940 to 1990 without any real volcanoes, during a time in which CO2 dramatically increased. Why not blame CO2 for the cooling if we blamed the volcanoes for the cooling? What caused the cooling between 1940 and 1990?
Point #3) We aren’t even above the 1930 peak.
Would someone point to a period where the case can be made that CO2 is causing warming? How does this chart condemn CO2?
http://notrickszone.com/wp-content/uploads/2015/06/Box-Fig-11.jpg
More: http://notrickszone.com/2015/06/29/greenland-temperatures-weaken-theory-co2-drives-climate/#sthash.uXe6WS2L.oItaVklW.dpbs
More: http://agwunveiled.blogspot.com/

In addition, the sensitivity is higher at lower temperatures, and we expect more variability. At 1K, the sensitivity is about 64 C per W/m^2, while at 287K it’s only about 0.2 C per W/m^2. A second factor with an additional effect on the sensitivity is that when ice covers the surface, the effective negative feedback from the difference in reflectivity between the surface and clouds disappears. Of course, the very definition of the IPCC metric of forcing excludes the negative feedback effect from cloud reflection, which is why this is often overlooked.
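The 287K figure quoted above can be checked from the derivative of the Stefan-Boltzmann law given in the head post, dT/dF = T/(4F) = 1/(4σT³):

```python
# Check of the quoted zero-feedback sensitivity at 287 K, using the first
# derivative of the Stefan-Boltzmann relation F = sigma * T^4.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sensitivity(temp_k):
    """Zero-feedback sensitivity dT/dF in K per W/m^2 at temperature T."""
    return 1.0 / (4.0 * SIGMA * temp_k ** 3)

print(round(sensitivity(287.0), 3))  # 0.187 K per W/m^2, i.e. "about 0.2"
```

This is the same T/(4F) differential demonstrated in the head post, evaluated at the current mean surface temperature.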

“I suggest you look at global temperature anomalies instead of one geographical location. When did the top of the Greenland ice sheet become a proxy for the entire globe?”
I would, but that data is so manipulated and inconsistent that it is hard to put any faith in it. Given that CO2 evenly blankets the globe, I find it hard to understand how the impact of CO2 can vary much; in any model its impact should be a constant, unless for some reason the physics of CO2 absorption varies per geographical region.
Anyway, here is another question to ponder. How does CO2 impact daytime temperatures? How can CO2 be causing the record high temperatures?
This chart clearly demonstrates that CO2 is transparent to visible light, and visible light is what warms the earth. How can record high daytime temperatures be caused by CO2? Once again, this chart should be shown to Congress, and a Warmist should be forced to answer that question. The very foundation of this climate change theory is pure nonsense. The real evidence of the fraud is that climate scientists know this chart well, and yet they remain silent when the press and the president promote this lie. All evil needs to prevail in society is for good men to do nothing. The silence of climate “scientists” in the face of such outright lies speaks volumes about the integrity of climate “science.”
http://www.ces.fau.edu/ces/nasa/images/Energy/GHGAbsoprtionSpectrum-690×776.jpg

Joel Jackson wrote:
“I see your problem.
You were listening to the media instead of looking at the scientific literature.
..
Got anything better than blog links?”
I read your post claiming there was no consensus on global cooling in the 1970s. That research has about as much credibility as the claim that there is a consensus today for global warming… err, sorry, climate change. BTW, just look at the ice core data: can anyone show me a period where climate change wasn’t the norm? It would be abnormal if climate weren’t changing. The very fact that they claim climate change is abnormal proves this “science” is nonsense, and the claim that man can change a pattern that has existed since the beginning of time is even more nonsense. To make matters worse, global cooling threatens mankind while global warming doesn’t, we are almost certain to have another ice age, and never in 600 million years has CO2 resulted in runaway warming, not even when it was at 7,000 PPM.

While we are trying to apply common sense to an extremely difficult-to-define atmospheric physics problem, let us see what nature has done to solve the problem for us. All received solar energy must be balanced by radiation leaving Earth. Water vapor is the mechanism that does, conservatively, 90% of this job, apart from the IR radiation window, which sends 33-40 watts/m^2 directly to space. We ‘know’ that water vapor does the balance (by some estimates ~104 watts/m^2) since there is no other competing mechanism to do the job. Mountains of studies of the complex atmospheric physics addressing convection, radiation, condensation, rain, etc. have not reached a consensus on exactly how, but the fact remains that water vapor does it. Water vapor’s NET effect is cooling in response to solar warming; as such, it is a negative feedback.
How could any small increment (even a few watts) added to water vapor production, including positive-feedback vaporization at the surface, ever be construed to suddenly become a warming effect on climate? Such an illogical conjecture boggles the logical mind.

Ronald,
If you want to know why “only a few watts” (normally held to be up to 2 W/m^2) can have such a warming effect long-term, then do some simple physics for yourself. Assume the atmosphere has the heat capacity of a 10 m depth of water (it almost does) and that the heat capacity of water is 4.2 J/g/degree C (not such a good match). Then try percentages of 100%, 10% and 1% of the heat going into this 10 m depth of water. You will soon see why the climate scientists are worried.
Better to do the sums for yourself, because then you are more likely to believe the answer. You could always put the working up here and hopefully receive gentle corrections.
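Working those suggested sums under the stated assumptions (atmosphere equivalent to 10 m of water, heat capacity 4.2 J/g/°C), for various fractions of the 2 W/m^2 actually retained:

```python
# Back-of-envelope sums suggested above: 2 W/m^2 into a layer with the heat
# capacity of 10 m of water, at different retained fractions. The 10 m
# equivalence and the forcing value are the assumptions stated in the comment.

SECONDS_PER_YEAR = 3.15e7
FORCING = 2.0                  # W/m^2
DEPTH_M = 10.0                 # assumed equivalent water depth
C_WATER = 4.2e3                # J/kg/K (i.e. 4.2 J/g/K)
MASS_PER_M2 = DEPTH_M * 1000.0 # kg of water under each m^2

heat_capacity = MASS_PER_M2 * C_WATER  # J/K per m^2, = 4.2e7

for fraction in (1.0, 0.1, 0.01):
    joules_per_year = FORCING * SECONDS_PER_YEAR * fraction
    print(fraction, round(joules_per_year / heat_capacity, 3), "K/year")
```

At 100% retention the sum gives about 1.5 K per year of warming for this layer, 0.15 K/year at 10%, and 0.015 K/year at 1%.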

Here’s a calculation for you.
At 100% positive feedback, 100% of surface emissions would be captured by the atmosphere; half would escape to space and half would be returned to the surface, since the atmosphere emits over twice the area it’s absorbing from. Of the half returned to the surface, all is re-emitted and subsequently absorbed by the atmosphere, half of which escapes and half of which returns, and so on and so forth. If Pi is the post-albedo input power, the power entering (and in LTE leaving) the surface is Pi*(1 + 1/2 + 1/4 + 1/8 + …) = 2*Pi W/m^2. One W/m^2 of Pi to the surface results in 2 W/m^2 of emissions by it at 100% positive feedback, corresponding to a sensitivity less than even the lower limit claimed by the IPCC of about 2.2 W/m^2 of incremental surface emissions (0.4C) per W/m^2 of incremental input.
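The halving series described above can be summed directly to confirm the 2*Pi limit:

```python
# Sum the 100%-feedback series: each W/m^2 returned to the surface is
# re-emitted, half escapes and half returns, so the surface power is
# Pi * (1 + 1/2 + 1/4 + ...), which converges to 2 * Pi.

def surface_power(pi_input, terms=60):
    """Partial sum of the halving series for a given post-albedo input."""
    return sum(pi_input * 0.5 ** n for n in range(terms))

print(surface_power(1.0))    # converges to 2.0 W/m^2 per W/m^2 of input
print(surface_power(239.0))  # ~478 W/m^2 at the full post-albedo input
```

So even the extreme 100%-feedback case caps the surface power at twice the input, which is the bound used in the argument that follows.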
It would be interesting to hear you actually answer the fundamental question raised by this article, which is: what is the origin of the roughly 16 W/m^2 of incremental surface emissions that must be emitted for a 3C average surface increase from only 3.7 W/m^2 of forcing? This is a gain in excess of 4, while the theoretical upper limit at 100% feedback is only 2.
While you’re at it, explain why each of the 239 W/m^2 of post-albedo input doesn’t result in the 4.3 W/m^2 of surface emissions predicted by the consensus sensitivity, yielding a surface temperature close to the boiling point of water. How does the incremental gain get this high when the T^4 relationship between Planck emissions by the surface and temperature dictates that the incremental sensitivity must be less than the average for all previous W/m^2 of forcing, and the average for all 239 W/m^2 of input is only 1.6 W/m^2 of surface emissions per W/m^2 of forcing, resulting in a temperature of about 287K?

Christopher,
Consensus climate science conflates the EM energy transported by photons (Planck emissions) with the non EM energy transported by matter (latent heat, thermals, etc.). They must do this in order to provide the wiggle room to support a higher sensitivity and claim that the effects of latent heat and clouds provide significant positive feedback to amplify the sensitivity above that dictated by the slope of the Stefan-Boltzmann relationship, which most should agree is the zero feedback sensitivity.
The magnitude of the feedback required to amplify 1 W/m^2 into the 4.3 W/m^2 necessary to sustain a 3C rise from only 3.7 W/m^2 of forcing is staggering. They support this by incorrectly mapping Bode’s control theory to the climate. The error is that Bode’s analysis assumes an amplifier element with an external source of power that measures the input+feedback to decide how much to deliver to the output. The climate amplifier is mostly the atmosphere, which consumes input and feedback power to produce its output power, and this COE constraint has never been accommodated by consensus climate science. This actually sets the absolute upper limit of what 100% positive feedback can do at 2 W/m^2 of incremental surface emissions per W/m^2 of forcing, which is well below the lower limit set by the IPCC of 1.5 C per doubling, or about 2.2 W/m^2 of surface emissions per W/m^2 of forcing.
A hurricane also contradicts the claim of massive positive feedback from latent heat and clouds, for if this was true, a Hurricane would leave a trail of warmer water in its wake, rather than the trail of colder water we observe and that indicates negative feedback. A Hurricane is just a self contained version of the global heat engine that drives the weather and of course the second law tells us that a heat engine can not warm its source of heat.
Another problem is that the metric of forcing is ambiguous. The IPCC would call 1 W/m^2 of instantaneous incremental post-albedo input power 1 W/m^2 of forcing, but would also call 1 W/m^2 of instantaneous decrease in the transparent window from increased GHG’s 1 W/m^2 of forcing. The difference is that 1 W/m^2 of incremental post-albedo solar power passes right through the atmosphere, while in the steady state only about half of the surface power absorbed by GHG’s reaches space; the remainder goes back to the surface to make up the difference between emissions at its higher temperature and the available post-albedo input power.
George

George,
Believe it or not, climate scientists have had a pretty good handle on how the sunlight, IR and mass transfer of the atmosphere work since 1964, when Manabe and Strickler showed you can only reproduce the atmospheric temperatures by height from a simple model if you include both greenhouse-gas radiative transfer and convection in the model. And the results are pretty good, given it’s an “average latitude” calculation not taking latitude into account. See http://www.gfdl.noaa.gov/bibliography/related_files/sm6401.pdf
And they understand most of the processes down to a fine level of detail, which generally is not understood here. The training is available in various free online courses, but few here choose to avail themselves of it.
Sure there are some things which need to be understood better by climate scientists, and progress is being made as time goes on. But you do not need to understand absolutely everything in order to know what is going to happen when CO2 levels go up – 1964 Manabe and Strickler level of understanding is adequate for a first stab, which would not include the amplifying effect of increased water vapour.
As far as absorption of sunlight and infrared radiation through the atmosphere is concerned, there is a very important finding from very simple physics. If, through the atmosphere, the sunlight absorption rate were higher than the infrared absorption rate by atmosphere of the same density, then the earth’s surface would be colder than you would expect from the outgoing IR radiation at the top of the atmosphere.
But because the atmosphere is pretty transparent to sunlight but pretty opaque to IR, then the surface is warmer than you would expect (called the greenhouse effect).
It probably wouldn’t come as a surprise now to you to learn that if the rate of absorption of sunlight and IR were equal, then the surface temperature would be exactly equal to what you would predict – no warming or cooling. This isn’t because there’s no greenhouse effect, but because it would be exactly countered by a “fridge” effect in stopping sunlight getting to the surface.

Climate Pete,
The temperature profile in the atmosphere is largely irrelevant and is mostly a function of a gravity induced lapse rate. All that really matters, relative to the LTE response of the system to change, is what physical laws are relevant at the 2 boundaries of the atmosphere. One is the boundary with space and the other is the boundary with the surface. Without an atmosphere both boundaries are the same and I expect you to agree that in LTE, Pi=Po and the Stefan-Boltzmann Law precisely predicts what the average temperature would be and the slope of this relationship at some temperature is the sensitivity at that temperature.
At the top boundary, only COE matters and that in LTE, the average energy flux entering the planet must be equal to the average energy flux leaving the planet. The same holds true for the boundary at the surface, although there is also non EM energy entering from the surface that must be returned to the surface in some form (believe it or not, most is returned as warmed liquid water and not as radiation). The surface, whose intrinsic emissivity is near unity must also obey the Stefan-Boltzmann Law and the processes driving the change from one LTE state to another must obey the second law of thermodynamics. All three of these laws must be violated in order to support the high sensitivity claimed by the IPCC. How do you reconcile this failed test of the CAGW hypothesis?

To clarify, the three tests of CAGW where it fails to conform to physical laws are:
1) COE -> the required positive feedback is > 100% implying that new energy must be coming from somewhere.
2) Stefan-Boltzmann -> The incremental sensitivity must be less than the average for all W/m^2 that preceded.
3) Second Law -> If water vapor and cloud feedback exhibited net positive feedback, hurricanes would leave a trail of warmer water and not the trail of colder water observed.

The temperature profile in the atmosphere is largely irrelevant and is mostly a function of a gravity induced lapse rate.

This is just not true. The gravity-induced lapse rate causes a reduction of temperature with height, but the temperature profile shows as many regions where temperature increases with height as regions where it decreases. So surely, with such a profile, there must be some significantly complex processes going on which you must understand before looking at the two interfaces.
All that really matters, relative to the LTE response of the system to change, is what physical laws are relevant at the 2 boundaries of the atmosphere.
The same physical laws apply everywhere in the atmosphere (and everywhere else for that matter).
If the greenhouse effect becomes larger due to more CO2 and/or water vapour, then this changes the temperature distribution. This has to change the energy flows at the two interfaces, because the temperatures are not the same.

To clarify, the three tests of CAGW where it fails to conform to physical laws are:
1) COE [conservation of energy] -> the required positive feedback is > 100% implying that new energy must be coming from somewhere.

The GHG effect never breaks any physical laws. It does not conjure up any new energy. All it does is stop a proportion of the energy coming in as sunlight at the top of the atmosphere (TOA) from escaping as infrared from the TOA by reradiating it down towards the surface.
Think of it as a transistor. A transistor does not create energy, but it allows a small control signal (CO2) at the base / gate to have a much larger effect allowing or restricting an energy flow which comes from the power supply – in the case of AGW by the IR coming originally from the surface.
Since there’s plenty of energy leaving the TOA there’s plenty of scope for CO2 warming causing increased water vapour in the atmosphere, and the increased water vapour then causing an increased GHG effect and a further temperature rise. All without creating energy anywhere because plenty is already present.
But the feedback from increased water vapour is less than 100%, because otherwise a small increase in the CO2 GHG effect would soon cause the atmosphere to lock up into a high temperature state. That doesn’t stop you having a multiplication factor greater than one i.e. water vapour causes a doubling of the temperature rise for a given rise in temperature due to the CO2 GH effect. But that isn’t the same as a feedback factor.

2) Stefan-Boltzmann -> The incremental sensitivity must be less than the average for all W/m^2 that preceded.

Sorry, I don’t get your point reading this in conjunction with your other post above. Certainly there aren’t any surfaces with unity emissivity – that would be a black body surface.
Could you rephrase the point in some other way please.

3) Second Law -> If water vapor and cloud feedback exhibited net positive feedback, hurricanes would leave a trail of warmer water and not the trail of colder water observed.

This doesn’t sound right.
Hurricanes get the energy to whip up huge, fast atmospheric vortices directly from a very warm sea surface. If they remove energy from the sea surface, it is surely going to cool. Further, once formed, the fast winds will also cause increased evaporation, which you would expect to reduce the temperature.
Before the hurricane forms you would expect a warmer sea surface temperature than normal, and the GH effect from increased CO2 may or may not have been a cause of the increased sea surface temperatures. But during the actual hurricane lasting some days but not months, greenhouse warming is far too slow a process to have any effect on what happens.

Climate Pete,
You misunderstood some key points. First, I never said the GHG effect violates COE, but that the consensus sensitivity, which presumes massive positive feedback, violates COE by requiring positive feedback in excess of 100%, where anything above 100% requires an external source of power (i.e. powered gain). For a surface at 287K, increasing this by 0.8C increases surface emissions by 4.3 W/m^2, which is 430% of the initial forcing. Can you say where the extra 3.3 W/m^2 is coming from without invoking positive feedback in excess of 100%? Of course, you miss this because you implicitly ignore the relevance of SB to the surface temperature and its Planck emissions, and will likely deny that increasing the surface temperature by 0.8C increases Planck emissions by 4.3 W/m^2 per the SB relationship. BTW, you do understand that the EQUIVALENT average surface temperature extracted from weather satellite data assumes an ideal black-body surface, and that this remotely sensed equivalent temperature is very close to the average temperature measured by surface thermometers, implying that assuming unit emissivity is a reasonable approximation.
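A quick check of the 0.8C and 4.3 W/m^2 figures, assuming unit emissivity as the comment does:

```python
# How much do Planck emissions from a ~287 K surface rise for a 0.8 C
# increase? Unit emissivity is assumed, per the comment.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def planck_flux(temp_k):
    """Stefan-Boltzmann flux for emissivity 1."""
    return SIGMA * temp_k ** 4

delta = planck_flux(287.8) - planck_flux(287.0)
print(round(delta, 2))  # ~4.3 W/m^2, as quoted
```

The arithmetic reproduces the 4.3 W/m^2 of incremental surface emissions per 0.8C of warming.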
Referring to your transistor analogy, a transistor amplifier requires an external power supply and effectively measures the input to decide how much power to deliver to its output from its external supply. The climate has no such external supply, and the input power (and feedback power) are consumed to supply the output power. This mistake was originally made by Hansen in a 1984 paper applying control theory to the climate, and when Schlesinger ‘fixed’ some other mistakes in the Hansen paper he actually made it worse by breaking the only two things Hansen had right: quantifying sensitivity as the dimensionless ratio of output power to input power, and conceptualizing the effects of GHG’s as gain rather than as feedback. The COE violation supporting consensus feedback ‘theory’ has never been corrected, as Hansen’s and Schlesinger’s erroneous papers have been cited as foundational science as far back as AR1.
While the planet, relative to the surface, has an emissivity close to 0.6, the surface itself has an emissivity very close to 1. This reduction in emissivity as seen from space is a property of GHG’s and clouds in the atmosphere and is not intrinsic to the surface itself. This plot shows how the average of many tens of billions of remote-sensed measurements from weather satellites confirms that, from space, the planet appears to be very close to an ideal gray body whose temperature is the average surface temperature and whose emissivity is about 0.6:
http://www.palisad.com/co2/fb/Figure1.png
Again, I stand by my assertion that all that matters for the LTE RESPONSE TO FORCING is the behavior at the boundaries of the atmosphere. This is classic black box modelling or reverse engineering where the unknown atmosphere is replaced with the simplest structure that has the correct behavior at its boundaries. As I have continued to say, consensus climate science adds all kinds of complexity to work around the problem that first principles physical laws can not support a high sensitivity.
If you still believe that clouds and water vapor/latent heat exhibit the massive positive feedback required to support the consensus sensitivity, how can you explain the surface cooling that results from a hurricane and which is a clear signature of net negative feedback?

Addressing the Second Law issue: under what circumstances will feedback from clouds, water vapor and latent heat become massively positive enough to support the consensus sensitivity, when a Hurricane is just a spatially condensed version of the heat engine that drives all weather and exhibits the clear signature of net negative feedback on the surface temperature? Of course, all heat engines are subject to the Second Law, so what physics do you propose allows violating this law outside the confines of a Hurricane?
TO BE CLEAR, I’m not saying that the GHG effect violates any of these laws but only that the high sensitivity claimed by consensus climate science violates these laws.

Now, I will explain the graph. The Y axis is power emitted by the planet and the X axis is the surface temperature. Each of the roughly 23K small red dots is the average emissions vs. average surface temperature for one month of data across one 2.5-degree slice of latitude. Roughly 3 decades of satellite imagery across 72 2.5-degree slices of latitude are represented. The larger dots are the averages for each 2.5-degree slice across the entire data set. Blue dots are the N hemisphere slices and green dots are the S hemisphere slices, although they align closely with each other and mostly appear as black dots. The slope of the best fit to the data at 287K is about 0.3C per W/m^2 and represents an upper limit on the sensitivity. The radiative transfer model used for determining the surface temperature is the relatively simple one used by Rossow to produce the ISCCP data set. My tests using a more accurate HITRAN-driven line-by-line model don’t make much difference and, if anything, the results get even closer to that of an ideal gray body.
If we superimpose the post albedo input power vs. surface temperature across the same slices of latitude, the results get even more interesting:
http://www.palisad.com/co2/fb/Figure3.png
The slope of this relationship is only about 0.2 C per W/m^2 and is aligned with the theoretical behavior of an ideal gray body, shown by the magenta line, which is the unit-emissivity SB relationship at the average surface temperature shifted up and to the left by the gain of 1.6, the reciprocal of the measured emissivity of about 0.62.
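The quoted emissivity and gain follow directly from the numbers used throughout this thread (about 239 W/m^2 of LTE output and a 287K mean surface):

```python
# Check of the gray-body numbers quoted above: with ~239 W/m^2 of planet
# emissions balanced at the top of the atmosphere and a ~287 K mean surface,
# the effective emissivity and its reciprocal ("gain") fall out directly.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
P_OUT = 239.0    # W/m^2 emitted by the planet in LTE
T_SURF = 287.0   # K, mean surface temperature

surface_emissions = SIGMA * T_SURF ** 4   # ~385 W/m^2
emissivity = P_OUT / surface_emissions    # ~0.62, as quoted
gain = 1.0 / emissivity                   # ~1.6
print(round(surface_emissions, 1), round(emissivity, 2), round(gain, 2))
```

This is just the arithmetic behind the quoted 0.62 emissivity and 1.6 gain, not an independent measurement.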
Keep in mind that this is not the highly processed, homogenized, sparse data whose speculative interpretations are all that can support CAGW, but real data with nearly 3 decades of continuous and complete coverage of the entire planet, where planet emissions are directly measured by satellite sensors and the post-albedo input is trivially calculated from visual imagery. Moreover, all of the processing of the data in these graphs was done by GISS. My plots only present their data in a more revealing form.
It’s illogical to deny that the data precludes a high sensitivity, as it unambiguously supports a low one, yet somehow this is consistently denied by consensus climate science.

https://wattsupwiththat.files.wordpress.com/2015/06/clip_image010_thumb2.jpg
co2isnotevil,
Let us just concentrate on the one key point, which is the source of the energy for increased surface IR emissions after water vapour GH feedback has kicked in. That is where the misunderstanding is.
You are correct that if the surface warms it will emit more IR. The Stefan-Boltzmann law will give a reasonable indication of the scale of the extra IR emissions, though ocean (non-ice) emissivities / absorptivities (which must be identical) are below 90% up to 20um, and there is a lot of ocean. But the best way to approach the problem is from cause and effect.
Firstly, the energy available to the surface to emit as IR (and other things) comes from two major sources:
– incoming sunlight, not changed much by increasing CO2 (maybe some by cloud changes)
– downwelling IR, of which most comes from the GH effect
When additional CO2 is added to the atmosphere, you hopefully agree it causes more downwelling IR, which eventually increases the surface temperature. Increasing surface temperature (from SB) increases the upwelling IR plus convection and sensible upwards heating. The increase in downwelling IR is at the expense of the IR which exits at the top of the atmosphere and is reduced.
This means that the increase in surface temperature due to CO2 is not the result of more energy being created. It is the same energy going around the surface emission / upwelling IR / GH effect reflection / downwelling IR / surface absorption route, except it is doing slightly more cycles around this route, which means the power emitted from the surface will be increased.
As the surface and atmospheric temperatures increase due to the CO2 GH effect, this enables more water vapour to persist in the atmosphere, because the saturated water vapour pressure is higher with the higher temperatures. Since water vapour is also a GHG, then the downwelling IR once more increases. You get exactly the same effects as for initial CO2 increase above.
The fraction of upwelling IR reflected increases due to the increased GHG atmospheric content, becoming downwelling IR, again at the expense of IR leaving the top of the atmosphere. So there is a further increase in both the upwelling IR power and the downwelling GH IR power. But because the outgoing TOA IR is being further reduced by the water vapour GH effect, there is less energy leaving the system. Therefore the increased surface temperatures and surface emission powers are due solely to retaining a higher fraction of the incoming sunlight energy in the system, and not due to any breaking of the first law.
So perhaps a better analogy than a transistor would be a microwave cavity. I apologise if you are not into microwave cavities and will think of another analogy in due course.
Assume a constant generator power, but initially the generated frequency does not match the cavity resonant frequency. The losses will be high and the microwave energy density in the cavity will be low. But if you change the frequency to bring it in line with the cavity resonant frequency, then the microwaves are reflected more times before being absorbed, and the microwave energy density increases. This increase has nothing to do with an increase in the power of the microwave generator, purely that the losses due to detuning are reduced.
So it is with the greenhouse effect. The incoming sunlight gets converted to IR at the surface, and can then go round the surface emission / upwelling IR / GH effect reflection / downwelling IR / surface absorption route more times with an increased GH effect.
Note that the above process does not depend on whether the water vapour amplification factor is less than one or more than one. It is purely a matter of how many times the IR can go round the loop, which is dictated by the GH effect reflection fraction of IR back downwards. And in turn the number of times the IR goes round the loop determines the energy density and therefore the surface temperature and emission rates.
So in response to

Can you say where the 3.3 W/m^2 are coming from without invoking positive feedback in excess of 100%?

the answer is :

Whatever the size of the feedback, the extra power at the surface comes from the number of times the IR goes around the surface / upwelling / GH effect reflection / downwelling / surface loop before it escapes. However, this does not result in any significant additional energy because it is contained, and no extra power gets radiated up through the TOA. In fact the radiation of power up through the TOA is reduced.

The number of times IR goes around the loop is not high. In fact, from a previous post of mine, it is (342 – 92)/398 = 250/398, which is around 0.63. The point is that any increase in this figure will inevitably cause an increased surface temperature and surface emissions without requiring any magic energy addition.

The only source of input power is the Sun. Yes, some fraction of the surface emissions absorbed by the atmosphere are returned to the surface, and this makes up the difference between the 239 W/m^2 of post-albedo input from the Sun and the 386 W/m^2 emitted by the surface. The remainder exits into space to make up the difference between the surface and cloud emissions passing through the transparent window of the atmosphere and the required 239 W/m^2 of LTE output emissions. For the combined EM energy of photons emitted by the surface and absorbed by GHG’s and the water in clouds, about half must be returned to the surface and the remaining half exits to space in order for LTE to be achieved. This is consistent with energy entering the atmosphere across half the area over which energy is leaving the atmosphere. This also indicates that non-EM power entering the atmosphere (latent heat, thermals, etc.) has no effect on LTE and must be exactly offset by additional power returned to the surface, mostly in the form of rain and weather. If it did have an effect on LTE, offsets would be required to make the system balance, and yet no offsets are required to fit the measured data when only EM energy is considered relative to the EM balance of the planet.
Yes, the extra power comes from the return of power entering the atmosphere from the surface, but as you point out, it only goes around 0.62 times, resulting in a sensitivity of 1.62 W/m^2 of surface emissions per W/m^2 of input forcing and not the 4.3 W/m^2 of incremental surface emissions per W/m^2 of forcing claimed by the IPCC. The same W/m^2 going around and around assumes that all the power absorbed by GHG’s is returned to the surface, when the data tells us that only about half of what is absorbed is returned. Again, this is a consequence of the errors made when applying control theory to the climate and failing to account for the COE constraint arising from the lack of the powered gain that the Bode model otherwise assumes.
Certainly, incremental CO2 has an effect on the amount of energy captured by the atmosphere, and according to line-by-line analysis, doubling CO2 decreases the power passing through the transparent window by about 3.7 W/m^2, nowhere near the more than 16 W/m^2 that needs to be supplied to support a 3C increase in the average surface temperature. Moreover, only half of this is available to be returned to the surface, as the remainder eventually escapes out into space.
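The "more than 16 W/m^2" and "4.3 W/m^2" figures can both be reproduced from the slope of the Stefan-Boltzmann law; this is a sketch assuming a 287 K mean surface temperature and unit emissivity:

```python
# Slope of F = sigma*T^4 at the assumed mean surface temperature.
sigma = 5.67e-8              # Stefan-Boltzmann constant, W/m^2/K^4
T = 287.0                    # assumed mean surface temperature, K
dF_dT = 4 * sigma * T**3     # W/m^2 of extra surface emission per K: ~5.4

extra_for_3C = 3 * dF_dT     # emission increase needed for a 3 C rise: ~16 W/m^2
ipcc_gain = 0.8 * dF_dT      # 0.8 C per W/m^2 implies ~4.3 W/m^2 of surface
                             # emission per W/m^2 of forcing
print(round(dF_dT, 2), round(extra_for_3C, 1), round(ipcc_gain, 1))
```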

Reading these comments makes me realize that I’m in a den of physics aficionados. To head off the need for endless mea culpa responses, I may need to state the obvious: I was taking the tack of LM of B in the opening conjecture, that of accepting the IPCC claims of CO2 forcing a positive water vapor feedback. And even if it did result in additional water vapor, it would ultimately result in negative climate feedback through the normal ongoing hydrologic cycle processes.
First, I do not agree that CO2 is a forcing parameter in this surface boundary layer, since it has no additional source of energy to add to the system. It picks up its additional share of the 15 µ spectrum, which is immediately thermalized within tens of meters of the surface due to the short mean free path of IR/CO2 at this density. This results in IR capture and boundary warming at a slightly lower height than before CO2 doubling. To the extent that this warming might be transmitted to the water surface, it could contribute to additional water vaporization, but the flow of available energy through the surface mass convection/conduction resistance will limit the energy rate to the surface and prevent any additional total vaporization from taking place. I still believe in the law of conservation of energy no matter how confusing the understanding of the physics may be.
After convection of the water vapor to altitudes where condensation, freezing and heat release can take place, there is the possibility of ‘greenhouse’ warming due to the much-described raising of the 15 µ spectrum radiation to higher altitudes along the negative lapse-rate temperature line. However, given the 15 µ spectrum radiation temperature measurement of 217 K, i.e. the tropopause temperature, one must ask just how much greenhouse effect is taking place. In any event, because of the aforementioned energy limits there will be no additional water vapor to enhance the effect.

This is another place where the conflation problem I mentioned adds confusion. The kinetic temperatures of atmospheric CO2, N2 and O2 are all the same, and this is a property of matter and its collisions. A thermometer will respond both to collisions owing to kinetics and to absorbed photons. In principle, the power behind the two manifestations of temperature can perform equivalent amounts of work, but other than that, they are independent. Think of shining a laser through the atmosphere at a LWIR wavelength to which the atmosphere is completely transparent. While the laser is on, a thermometer in its path will read a higher temperature, but will immediately return to the kinetic temperature once the laser is turned off. If not for this property, laser weapons would be useless.

I’m sorry, has anyone been able to explain how CO2 can cause record high daytime temperatures? That to me seems to be a very very very basic question, but unless I’ve missed it, I don’t think anyone has an answer…yet.

CO2 doesn’t directly cause record high daytime temperatures. In fact CO2 actually warms the nighttime temperatures more than the daytime, and the winter temperatures more than the summer, because sunlight obviously builds up the temperatures faster than IR emissions lower them, so CO2’s most marked effect is to stop the heat from leaving the surface once the sun goes down or is less powerful in winter.
Record high maximum (daytime) temperatures are thus less common than record high minimum (nighttime) temperatures.
But obviously if you start the day with a higher minimum temperature, then you are more likely to go on to develop a record high maximum temperature, which, for some funny reason seems to be the only thing anyone cares about.
Incidentally we had one yesterday in London – 37 degrees C at Heathrow is a record for London in July, and I have to say it was unbearably hot walking to the tube (underground, subway) and I had to wear a hat.

People simply have to stop saying dumb things like “…CO2’s most marked effect is to stop heat from leaving the surface…”. Build your arguments on actual physics. And don’t say “You know what I meant.” If you meant to say “…added CO2 slows the cooling rate…”, then actually say that. The whole “humans are destroying the Earth” meme is built on inaccuracies like this and I’m tired of it.

So? Peer reviewed or pal reviewed like all the CAGW work? On blogs everybody takes a shot, not just co-authors and good buddies. Goddard is reviewing NOAA & NCDC data which has obviously been “adjusted.”

The point about peer review is to do with the scientific consensus on climate science. A consensus consists of three things.
1. A correspondence of published results identifying that AGW is happening, across a wide variety of methods used.
2. An agreement across climate scientists in different countries and different climate science disciplines as to what constitutes AGW and how it should be measured, e.g. radiative forcing, ocean heat content, surface and stratospheric temperatures etc.
3. A set of people working on it who agree about it and are from different politics, countries, races.
And that is why the national science academy of every country which has one (and this includes the Vatican which has the Pontifical Academy of Science) has put out a statement saying that AGW is for real and we should do something about it.
And here is a very telling point. Going back 10 or 15 years the weather forecasters were about 50:50 on whether AGW was real. Because weather forecasts were only good for limited times they had little formal training on certain long-term weather effects. However the advent of much faster supercomputers enabling ensemble weather forecasts has increased the time period over which forecasts are accurate. This required the weather forecasters to mug up on those longer-term weather effects. As part of the learning process they have generally looked at AGW too.
And guess what! Nowadays the vast majority of weather forecasters accept AGW, and some even work it into their forecasts to better inform the public. That includes those who did not accept AGW before.
In other words, the more a bunch of scientists understands the detail of climate and longer-term weather, the more likely they are to accept AGW.
In order to reject AGW you have to believe that a) the vast majority of the experts (those who have been working in the field for a long time and have a number of climate science publications to their name) are lying or stupid. Not likely. b) All the science academies of the world are wrong. Not likely. c) The more you know about climate science the more wrong you get. Not at all likely.

Could you rephrase the point in some other way please.
“3) Second Law -> If water vapor and cloud feedback exhibited net positive feedback, hurricanes would leave a trail of warmer water and not the trail of colder water observed.”
This doesn’t sound right.
IPCC gives clouds a -20 W/m^2 RF. That’s cool & kewl.

I should point out that cloud feedback isn’t always negative. Below 0C, when the surface is covered by ice and snow, increasing clouds warm the surface. Above 0C, increasing clouds cool the surface because incremental reflection is a larger effect than incremental trapping of surface heat by clouds. As a result, the sensitivity is higher at lower temperatures (notwithstanding the additional effect of the T^4 relationship between temperature and input/output power). There being far more surface area > 0C than < 0C, the average equivalent feedback from clouds is net negative. http://www.palisad.com/co2/sens/st_ca.png
The cloud model I’ve found that most accurately predicts the measured cloud properties is one where cloud volume increases monotonically with the surface temperature and water vapor, while the ratio between cloud height and cloud area dynamically adjusts to achieve the steady-state equilibrium that results in the warmest possible surface temperature given the input power constraints and average atmospheric properties. CO2 concentrations affect the average atmospheric properties, so doubling it is easily quantifiable and results in an equivalent forcing of less than 3.7 W/m^2, producing an even smaller LTE effect on the surface than most skeptics will claim.
Many of the speculative estimates of a high sensitivity apply the same broken homogenization techniques to extrapolate polar sensitivities to the tropics. Keep in mind that the Stefan-Boltzmann sensitivity alone grows without bound as temperature falls: at 255K it’s about 0.3C per W/m^2 and at 287K it’s about 0.2C per W/m^2. Adding the effects of clouds at temperatures below 273K significantly increases the apparent local sensitivity to incremental solar input. At 0C, the sensitivity is very high owing to melting/forming ice, but this also represents a tiny fraction of the total surface and cannot be extrapolated to represent the whole planet, but often is.
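The temperature dependence claimed here can be checked directly from the derivative derived in the head of the post, dT/dF = T/(4F) = 1/(4σT³); this sketch evaluates it at the two temperatures mentioned:

```python
# Local Stefan-Boltzmann sensitivity dT/dF = 1/(4*sigma*T^3),
# i.e. the slope of T = (F/sigma)**0.25 with respect to F.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_slope(T):
    """Local sensitivity at temperature T, in K per W/m^2."""
    return 1.0 / (4 * sigma * T**3)

print(round(sb_slope(255.0), 2))  # ~0.27, the '~0.3' figure at 255 K
print(round(sb_slope(287.0), 2))  # ~0.19, the '~0.2' figure at 287 K
```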

Cloud feedback is probably very mildly positive
See http://www.skepticalscience.com/clouds-negative-feedback.htm
High level clouds provide positive feedback. Low level clouds provide negative feedback. There is some evidence that increasing temperatures means less low-level cloud, but the evidence is not conclusive at present.
Either way, the evidence is that the effect is not large.

Climate Pete,
Yes, the net effect from water vapor, clouds and latent heat is very small and the net feedback acting on the climate system is close to zero, although it is both quantifiably and demonstrably slightly negative (sorry, but SS carries little credibility with me). At zero feedback, the climate sensitivity is the slope of the Stefan-Boltzmann relationship at the current average surface temperature, or about 0.2C per W/m^2, which is why the measured sensitivity is about 0.2C per W/m^2.
I expect you to agree that for an Earth-like planet with no atmosphere, or even an atmosphere with no GHG’s or clouds, the Stefan-Boltzmann LAW would exactly quantify the average temperature of the surface relative to total input forcing, and the slope of this relationship would be the sensitivity of the surface temperature to forcing. Thus the zero-feedback sensitivity is the slope of the SB relationship at the current average temperature.

co2isnotevil,
I said clouds. I did not say water vapour.
It’s pretty clear from physics that the warmer the atmosphere is the more (gaseous, invisible) water vapour can reside in it. This is hardly new science.
And since you and I both understand that all the water vapour in the atmosphere causes around 3x the GH effect that all the CO2 does, we both understand that more water vapour has to mean more GH effect.
Since this represents a very distinct mechanism, for amplification of the AGW effect from CO2, the onus is on those who feel it can be ignored to justify why the mechanism just does not apply.
And it is never a matter of credibility. It is about physics and physical mechanisms.
While I agree with you on planets with no atmosphere, ours most certainly does have an atmosphere with a dynamic and continuous IR radiative energy flux both up and down which provably does change the temperature profile with height.

You accept that the relationship between forcing and the surface temperature for an Earth with no GHG’s or clouds in its atmosphere is completely specified by the SB Law and that this results in a sensitivity of less than 0.2 C per W/m^2. You seem to agree that the net feedback from clouds/weather is low, so the net feedback from water vapor, clouds and latent heat (all the things that manifest the heat engine driving weather) must also be low. You can’t separate the effect of water vapor without considering all of the other offsetting effects, for example, reflection by incremental clouds. Examining the heat engine that drives weather shows us the net of all of these effects combined.
You seem to be overwhelmed by the apparent complexity of multi-path fluxes traversing through the atmosphere. I surely understand as this confusion is purposeful based on how the consensus has framed the science. If you stick to the electromagnetic behaviors at the boundaries and extract a transfer function for how surface emissions/temperature change in response to solar input, this confusion quickly fades away and the reality of a low sensitivity is unavoidable.
You still haven’t explained why the high sensitivity claimed by the IPCC can violate so many first principles laws, other than by blindly claiming that they do not, even in the face of overwhelming evidence to the contrary.

co2isnotevil,
I’ve put together a very simple spreadsheet model of the flows, assuming every flow into the atmosphere itself is split by the GHG reflection factor IR downwards and 1 – that upwards. It shows that for a 6W initial RF at the TOA, the rise in temperature at the surface (calculated from the SB surface output power required to restore the equilibrium) is 2.6 degrees C. This is 0.45 K per W/m^2 – twice your figure. If you allow for an emissivity of 90% (based on mostly ocean) then the figure becomes 0.5 K per W/m^2. This seems to be in line with a temperature rise of 2 degrees C.
The model input is the GHG reflection coefficient, which changes both RF and temperature rise.
I’ll change it to a more accurate spreadsheet model handling the surface emissivity and absorption coefficient properly, then release the model.
However, your claim that sensitivities above 0.2 K per W/m^2 have to break COE is not looking good at this point.
Without thinking about it too much, it does seem as if your results are out by a factor of 1 / (1 – 0.6) = 2.5, which must have to do with the fraction of upwelling IR which is reflected back to the surface by the GH effect.
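The "factor of 2.5" arithmetic can be sketched as a series loop gain 1/(1 − r) applied to the zero-feedback Stefan-Boltzmann slope, with r the GHG "reflection" (return) fraction from the spreadsheet discussion. Whether that gain is physical is exactly the point in dispute; this only reproduces the numbers:

```python
# Zero-feedback SB slope amplified by a geometric-series loop gain.
sigma = 5.67e-8
T = 287.0
slope = 1.0 / (4 * sigma * T**3)   # ~0.19 K per W/m^2 at 287 K
r = 0.6                            # assumed return fraction (spreadsheet value)
gain = 1.0 / (1.0 - r)             # 2.5 if ALL absorbed power recirculates
print(round(gain, 2), round(slope * gain, 2))  # 2.5 and ~0.47
```

The product lands near the spreadsheet's 0.44-0.45 K per W/m^2, consistent with the "out by 2.5" observation above.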

Are you among those who deny the conflict of interest at the IPCC, or are you among those who more dangerously consider it a necessary means to an end? Do you deny that if the physics is correct and mankind’s CO2 emissions have an effect somewhere between benign and beneficial, the IPCC has no reason to exist, or do you simply deny the immutable physical laws that conclusively support a sensitivity far less than needed to result in catastrophic consequences?
You seem to think that because the IPCC exists, CAGW must be important when the only reason consensus climate science foolishly believes that CAGW is important is because the IPCC exists. If it never existed, the fear of CAGW would be a distant memory and the science would have been settled long ago. It’s an abomination of science that we allowed a political body to interfere and in fact direct scientific progress. Once partisan politics chose sides, any sense of objectivity completely disappeared. Maybe you’re not a scientist and this doesn’t bother you. I am and it bothers me to no end and my sole motivation here is to free science from the corruption of politics.
I suggest you review the charter of the IPCC which by design has driven the climate science consensus since its inception. They only support and summarise science consistent with their reason to exist and have systematically excluded or denigrated science that is not. The decades of bias has turned climate science into a joke, tainted peer review and publishing (who wants to do climate science that the IPCC doesn’t recognise) and used lies and misstatements of fact to foment political divisiveness all in an effort to justify redistributive economics through climate reparations. The IPCC is engaged in the crime of fraud against humanity and those behind this fraud should go to jail.
Regarding GCM’s, I’ve developed, debugged and validated more models of more kinds of systems than you can imagine, including feedback control systems that make the climate system look trivial by comparison. When a model doesn’t converge to the same answer from different initial conditions, the model is surely broken. The climate model I work from has no assumptions, no arbitrary coefficients, strictly follows physical laws and converges to the same final state regardless of initial conditions. Until you can point to a GCM with these properties, they are of no interest to me. The typical GCM has so many dials and arbitrary coefficients that you can get any answer you want to see. GCM’s are barely good at predicting the weather a week out, and that’s what they were developed for. Do you really think the divergence problem doesn’t affect multi-decade simulations of the weather?

It’s not the fourth root of 0.9 that’s important, but the non-linearity of the T^4 relationship that is being ignored. Consensus climate science supports expressing a sensitivity as degrees per W/m^2 by claiming that the relationship around the current temperature and emissions/forcing is approximately linear. This is certainly true, but they made a novice mistake and asserted a sensitivity based on the current operating point and the origin, rather than a sensitivity based on the slope of the relationship at the current operating point.
287 K / 239 W/m^2 = 1.2 C per W/m^2 (operating point relative to the origin)
slope of SB at 287K = 0.2 C per W/m^2
This is the kind of mistake I would expect from a high school physics student and not from ostensibly intelligent scientists. Few believe that scientists could make this kind of silly mistake, which is why many have trouble believing how incredibly wrong consensus climate science really is.
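As a cross-check of the two ways of quoting a sensitivity at the current operating point (a sketch; the variable names are mine):

```python
sigma = 5.67e-8
F = 239.0   # post-albedo solar input, W/m^2
T = 287.0   # average surface temperature, K

print(round(F / T, 2))                   # 0.83 W/m^2 per K, the operating-point ratio
print(round(T / F, 2))                   # 1.2 C per W/m^2: ratio to the origin
print(round(1 / (4 * sigma * T**3), 2))  # 0.19 C per W/m^2: local slope of SB
```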
The spreadsheet you sent is nonsense. Where does 450 W/m^2 of upwelling emissions come from when the surface temperature is nowhere near the 298K required to emit this much? Are you assuming that latent heat and other non-radiant forms of energy are subject to GHG absorption? Also, where does the 6 W/m^2 come from? Line-by-line simulations of the standard atmosphere tell us that doubling CO2 reduces the transparent window by about 3.7 W/m^2, resulting in only 1.85 W/m^2 of equivalent forcing from the Sun.
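The 298 K figure quoted above follows from inverting the Stefan-Boltzmann law, T = (F/sigma)**0.25, assuming unit emissivity:

```python
# Temperature required to emit the questioned upwelling flux.
sigma = 5.67e-8
F = 450.0                    # W/m^2, the flux in the spreadsheet
T = (F / sigma) ** 0.25
print(round(T, 1))           # ~298.5 K, well above the mean surface temperature
```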

It really seems to me that you can’t get past the obfuscation perpetrated by Trenberth where he conflates the energy transported by photons (EM energy) with the energy transported by matter (non EM energy) relative to establishing the radiant (EM) balance of the planet.
If you consider only the photons,
The surface emits 385 W/m^2 of LWIR photons
The incident power is 239 W/m^2 of visible photons
The deficit entering the surface is 146 W/m^2
Accounting for emissions from clouds and the fraction of surface emissions that pass through clouds, the transparent window of the atmosphere is about 93 W/m^2 of LWIR photons.
The deficit exiting the planet is 239 – 93 = 146 W/m^2
For a transparent window of 93 W/m^2, 385 – 93 = 292 W/m^2 of surface emissions are being absorbed by GHG’s and clouds.
The 146 W/m^2 deficit to the surface plus the 146 W/m^2 deficit into space is 292 W/m^2 and exactly equal to the amount of surface emissions absorbed by the atmosphere where half escapes to space and half returns to the surface.
If the non EM flux between the surface and atmosphere made any difference at all, an offset would be required in order for balance to be achieved. No such offsets are required.
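The bookkeeping in this comment can be checked mechanically, taking the flux figures as given:

```python
# Photon-only (EM) balance check using the fluxes quoted above, W/m^2.
surface_emit = 385.0   # LWIR emitted by the surface
solar_in = 239.0       # post-albedo solar input
window = 93.0          # transparent-window flux escaping directly

deficit_surface = surface_emit - solar_in   # 146: surface emits more than it receives from the Sun
deficit_space = solar_in - window           # 146: TOA output not covered by the window
absorbed = surface_emit - window            # 292: surface emissions absorbed by the atmosphere

# half returned to the surface, half emitted to space
assert deficit_surface + deficit_space == absorbed
print(absorbed / 2)    # 146.0
```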

Minor correction. The 93 W/m^2 transparent window is from surface emissions from cloudless skies and the fraction of surface emissions that pass through clouds. Cloud emissions that pass through the transparent window are accounted as part of the absorbed surface emissions that are eventually emitted into space. The calculation of the net transparent window is as follows:
fraction of surface covered by clouds = 0.67 (from ISCCP)
fractional width of the transparent window for the clear sky = 0.47 (from HITRAN based simulations)
average fraction of surface emissions that passes through clouds = 0.275 (from ISCCP)
net transparent window fraction
((1 – .67) + .67*.275) * .47 = .2417
385 * .2417 = 93 W/m^2
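The window calculation above, as a sketch with the quoted ISCCP/HITRAN figures treated as inputs:

```python
# Net transparent-window fraction from the figures quoted above.
cloud_frac = 0.67     # fraction of surface covered by clouds (ISCCP, as quoted)
clear_window = 0.47   # clear-sky window fraction (HITRAN-based, as quoted)
cloud_pass = 0.275    # fraction of surface emissions passing through clouds (ISCCP)

net_window = ((1 - cloud_frac) + cloud_frac * cloud_pass) * clear_window
print(round(net_window, 4))      # 0.2417
print(round(385 * net_window))   # ~93 W/m^2
```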

co2isnotevil,
OK the Excel version of the spreadsheet did not start off in the right state, and did not contain the simple instructions which were in the Google Drive version, but that was not updateable.
Here is a correct version http://api.ning.com/files/wElhaJF9fXKRHZ-FY15ZBHqq78OgIptC0hihYW2WoaVr1BBF7svOsx11Os4Oc5lhAesu*imSVp5ugY3wOqExz3YydJdgk2Sl/SurfaceEmissionsByGHGReflectionCoefficientV3.xlsx
You have to use it by changing values to work out sensitivity. You can’t just read it off a single version.
The spreadsheet’s assumption is that the energy entering the atmosphere gets redistributed down with a GHG reflection coefficient, and up with one minus the GHG reflection coefficient. The calculation for more CO2 or water vapour is straightforward – you change the GHG reflection coefficient upwards to a value of your choosing. It doesn’t particularly matter what you choose as long as it is a small change. My choice was to increase GHG reflection from 0.6 to 0.61.
The first thing that then happens is that you can record the RF which that causes. This figure started off as zero with GHG reflect = 0.6, but on increasing it to 0.61 the RF becomes 6 W/m^2.
Now you have to change the spreadsheet to reach the equilibrium ECS state. You do this by changing the value in cell E9 until either it matches C16, or the RF has been reduced to zero.
Then the equilibrium IR upwelling is used as input to the inverse of the Stefan-Boltzmann law, which you know all about, to predict what the new surface temperature has to be to give the new upwelling IR energy. In the example the new surface temperature is 295.5. Helpfully, the original temperature is in the cell below and is automatically subtracted from the new temperature to give the value in the red box. In this case the answer is 2.65 degrees. So the climate sensitivity is 2.65 / 6 = 0.44 (K per W/m^2).
This use of Stefan Boltzmann is exactly what you were requiring, so this is what happens here.
Since the spreadsheet implements both conservation of energy and the Stefan-Boltzmann calculation of surface temperature, it proves that a climate sensitivity of 0.44 K per W/m^2 can be achieved without breaking any laws of physics.
This conclusion is directly contradictory to your claim that the sensitivity has to be 0.2. It is a very simple spreadsheet which just implements the flows given in the pretty flow diagram posted many times by me above.
Ensure you have a play with the spreadsheet, check that it works, and take the time to understand how it works because it is the basis for my claim that 0.2 is not the relevant climate sensitivity given by applying SB properly to the radiation flows in the climate system.
Note too, that the climate sensitivity does not depend much on the change in the GHG reflection coefficient, provided the changes are small. Using 0.63 instead of 0.62 gives a sensitivity of 0.45 instead of 0.44, for instance.
Any approach to calculation of climate sensitivity that ignores the impact of the internal climate GH effect flows is not correct. The spreadsheet takes these flows into account.

Climate Pete,
I’ve had a chance to look at your spread sheet and have identified several errors.
When you apply the nonsense parameter ‘GHG REFLECTION’ to the total input to the atmosphere, you are inferring GHG effects on things like latent heat, thermals and other forms of energy that are not in the form of photons and not subject to GHG absorption or GHG reflection, whatever that is. The actual amount of surface emissions from GHG absorption that is returned to the surface is 1/2 of what was absorbed, and the remaining half exits to space. This is dictated by geometric properties and I verified it conclusively with real data in a previous posting. Why do you have such a problem understanding that only radiant energy matters for the radiant balance of the planet? Trenberth conflates these because if he didn’t, he would have no wiggle room to make a case for an otherwise impossibly high sensitivity.
You keep repeating the same errors over and over and I keep having to point them out.
Another error you made is to lump together power passing through the transparent window of the atmosphere with GHG captured power that eventually escapes to space. These are completely different from each other relative to GHG effects.
George

Another error in your spreadsheet is one commonly made by those pushing the CAGW POV. You are increasing the deficit at the TOA and decreasing the effective emissivity independently. This is counting the effect twice. Remember that the IPCC quantification of the effect of CO2 as equivalent forcing means that only the forcing changes and all else remains constant, i.e. forcing is an equivalent change in post-albedo input power keeping all else constant. You can either change the absorption (part of what you called GHG reflection) or increase the deficit at the TOA keeping the absorption constant by increasing solar input. You can’t do both.

Climate Pete,
I’ve updated the link at http://www.palisad.com/co2/corrected.xlsx
to include reporting differences from nominal and added some comments specifying values to use for emulating 3.7 W/m^2 of solar forcing and a 3.7 W/m^2 decrease in the width of the transparent window (GHG forcing). Note that the IPCC considers both to be equivalent to forcing, yet they have significantly different effects!
I took the time to understand your spreadsheet well enough to find the errors, please see if you can identify errors in mine. You may need to consult with your friends at SS or RC, but be forewarned, once you understand my spreadsheet well enough to look for errors, you will have no choice but to believe that the IPCC’s consensus is wrong and the skeptics are right.
George

“These questions have been settled by science.” Surgeon General
IPCC AR5 TS.6 Key Uncertainties. IPCC doesn’t think the science is settled. There is a huge amount of known and unknown unknowns.
According to IPCC AR5 industrialized mankind’s share of the increase in atmospheric CO2 between 1750 and 2011 is somewhere between 4% and 196%, i.e. IPCC hasn’t got a clue. IPCC “adjusted” the assumptions, estimates and wags until they got the desired mean.
At 2 W/m^2 CO2’s contribution to the global heat balance is insignificant compared to the heat handling power of the oceans and clouds. CO2’s nothing but a bee fart in a hurricane.
The hiatus/pause/lull/stasis (which IPCC acknowledges as fact) makes it pretty clear that IPCC’s GCM’s are not credible.
The APS workshop of Jan 2014 concluded the science is not settled. (Yes, I read it all.)
Getting through the 1/2014 APS workshop minutes is a 570 page tough slog. During this workshop some of the top climate change experts candidly spoke about IPCC AR5. Basically they expressed some rather serious doubts about the quality of the models, observational data, the hiatus/pause/lull/stasis, the breadth and depth of uncertainties, and the waning scientific credibility of the entire political and social CAGW hysteria. Both IPCC AR5 & the APS minutes are easy to find and download.

IPCC most certainly does not say mankind’s share of atmospheric CO2 is between 4% and 196%. It says mankind is responsible for virtually all the rise from 280 ppm to 400 ppm.
That does not mean there is not a rapid dynamic exchange of CO2 between atmosphere, biosphere and oceans. But generally it is a net zero exchange.
2W/m^2 sounds small until you start to work out how much surface and atmospheric temperatures would rise if only a few % of this went into heating them. Do the sums for yourself.
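For anyone who does want to do the sums, here is one illustrative back-of-envelope version; the 5% share is an arbitrary assumption for illustration, and the column mass and heat capacity are standard round numbers, not figures from this thread:

```python
# Illustrative: warming rate of the atmospheric column if a small
# share of a 2 W/m^2 forcing went into heating it, nothing else changing.
forcing = 2.0                 # W/m^2, the RF under discussion
share = 0.05                  # ASSUMED fraction heating the atmosphere
col_mass = 1.0e4              # kg of air per m^2 (~10^5 Pa / 9.8 m/s^2)
cp = 1004.0                   # specific heat of air, J/(kg K)
seconds_per_year = 3.156e7

dT_per_year = forcing * share * seconds_per_year / (col_mass * cp)
print(round(dT_per_year, 2))  # ~0.3 K per year under these assumptions
```

In reality other fluxes respond, so this only shows why a "small" forcing is not negligible over time.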
And sure, the heat handling of oceans is much bigger than this – most of the RF ends up heating oceans. But it is the tiny bit that doesn’t which matters.
And weather effects totally swamp 2 W/m^2. The warming predicted is much much smaller than the daily range of temperatures. but in the long term a tiny movement of average temperatures is important, so you have to look at impacts carefully, not dismiss them on the basis something is very small so must be unimportant.
UAH 6.0 lower tropospheric data set has removed most of its dependence on surface temperatures, like RSS. UAH 5.6 showed significant warming and did include a significant weighting of surface temperatures.
So all you can actually say from RSS and UAH 6.0 is that it looks like a particular range of heights (lower troposphere) with varying contributions (but including far less of the surface than it once did) show static temperatures for a period. It doesn’t actually say much any more about the surface temperatures which cause drought and evaporation and other problems. There’s a good case now for saying only the surface temperature data sets, no longer the satellite data sets, give any meaningful data about surface temperatures.
However, it’s not the surface temperatures which prove AGW. It is the radiative forcing, which you seem to accept as 2W / m^2.
The thing the GCM’s have to get right to be correct is not surface or lower tropospheric temperatures (which are at the mercy of weather), but the radiative forcing. And this they appear to do. If this is correct then over the long term the random weather fluctuations will settle down to an average state and the temperature projections of the GCM’s will be validated.
You refer to the IPCC AR5 etc. as expressing doubt. This is not the case. It expresses uncertainty which is what all good science is expected to do – you need to know the potential error bounds of your findings. But this is not the same thing at all as saying that the conclusion is in doubt that AGW is real and caused by humans. The IPCC is pretty clear – humans cause AGW.
The uncertainty and scope for improvement is key to getting a more certain (less uncertain) range into the next IPCC report, which is pretty important for determining how much faster or slower efforts to reduce CO2 emissions should go.

But until those at the IPCC who drive the so-called consensus can wrap their collective heads around a sensitivity closer to the 0.2C per W/m^2 +/- 20% dictated by first-principles physics rather than the 0.8C per W/m^2 +/- 50% dictated by political necessity, climate science will remain controversial.
BTW, you should also agree that science does not operate by consensus, but by the scientific method. A consensus is only required to ‘settle’ subjective controversies which still doesn’t imply universal acceptance. Science is, or should be, completely objective and driven only by the scientific method.

Science does indeed operate by the scientific method. And over a period of time the scientific method leads to a consensus. This will be after a body of knowledge has been probed by a variety of different techniques and by a diverse range of scientists. And that is where climate science is at. Not everything is known, but there is a large shared body of knowledge confirmed by multiple lines of evidence.
The IPCC do not drive any such consensus. All they do is to summarise all the research results available in different areas of climate science. Where the evidence all points in one direction that is easy. Where different analyses and different modelling points to a large possible range of values it is the IPCC’s job to summarise that into a range and give an expert view as to the strengths and weaknesses of the different lines of research leading to a diverse outcome.
As far as the sensitivity goes, this article argues that the lower end of the IPCC AR5 1.5 to 4.5 K warming should now be increased – http://www.skepticalscience.com/challenges-constraining-climate-sensitivity.html .
Within the article one heading which is very relevant is “Nature As An Ensemble Member, Not An Ensemble Mean” which sums up the correct way to think about it very well for me. The section goes on to explain that the limited number of GCM runs, started with random slightly differing initial conditions, which approximately follow the actual temperatures since 1998 do not produce any lower sensitivity than those which do not follow actual temperatures.

Re: Christopher Monckton of Brenchley on Kiehl & Trenberth’s Budget
Kiehl & Trenberth’s Energy Budget (1997), above, is the cornerstone of IPCC’s climate modeling, featured prominently in both the TAR and the AR4. Kiehl’s name appears 68 times in the TAR; Trenberth’s 76 times. Both are contributing authors to the TAR, and Trenberth to AR4. The first time their names appear in the main body of the TAR is as a reference to their Budget. The Budget is the second figure in TAR, preceded only by a cartoon of the climate system.
The Budget is a static, radiative-forcing model of the long-term equilibrium state of the climate system, which requires the net inflow of energy at both the top of the atmosphere and at the surface to be zero. K&T (1997) p. 198. IPCC’s paradigm for anything but minor climate models, also called Radiative Forcing (though defined differently), implicitly adds a few parameters to the K&T budget, animating it computationally into a scientific climate model, that is, a model that can predict outputs (e.g., temperatures) in response to inputs (called forcings).
A point of clarification on equilibrium: IPCC uses the word equilibrium 418 times in its third and fourth assessment reports, including reference titles. The usage explicitly refers to thermodynamic equilibrium twice in reference titles, and again in a few applications to ocean chemistry which refer to the work of Zeebe & Wolf-Gladrow. There are 17 citations to thermodynamics, though never the Second Law, which would imply thermodynamic equilibrium. In one instance, IPCC refers to equilibrium thermodynamics, and to be fair, two of those 17 citations note that the work was without regard to thermodynamics. Thermodynamic equilibrium requires simultaneous mechanical, chemical, and thermal equilibrium. The latter requires a uniform temperature throughout the system, ruling out both thermodynamic equilibrium and thermal equilibrium from the climate system, all as those terms are used in thermodynamics.
IPCC routinely runs its climate models until they are fully adjusted to any change in radiative forcing, a process it calls running an equilibrium climate experiment. However, this definition does not specify which of the many state parameters undergo adjustment, though any such list certainly ought to include temperature. The problem is that the Budget schematic includes 15 explicit parameters (thermodynamics refers to all such parameters as coordinates), and a few other implicit parameters, such as greenhouse gas concentration (though not temperature). Consequently, IPCC’s definition of equilibrium is at best ostensible, that is, defined by its instant usage. Thus equilibrium inherited from the K&T energy budget (“K&T equilibrium”) is simply net zero energy at the surface and at the top of the atmosphere. Note that the energy balance is more than radiation, including thermals and evapotranspiration. Additionally, K&T balanced the energy in the atmosphere, the middle node of its three-node model.
The K&T model does not explicitly contain temperature, nor does it rely on emissivity, an essential parameter in equations for radiation and absorption spectra and in the Stefan-Boltzmann equation. The authors’ 1997 paper mentions temperature just once, and that is the 288K (14.8ºC) used to support its choice of 390 Wm-2 for longwave surface emission via the S-B equation. The budget schematic will balance not just for 390 Wm-2, but for the longwave surface emission at any temperature between 0K and at least 292K (18.9ºC). That is, the equilibrium specific to the energy budget and the radiative forcing models holds not just at 288K (14.8ºC), implied by 390 Wm-2, but also at 14.7 or 14.9ºC, at the nominal Equilibrium Climate Sensitivity (ECS) of 3K, i.e., 17.8ºC, and at any other temperature up to at least 18.9ºC (proof on request). Of the Budget’s 15 parameters, one is solar radiation, a constant, and another is the surface longwave emission, an independent parameter (equivalent to surface temperature). The Budget implements two constraint equations for balancing, leaving 11 degrees of freedom for an uncountable number of possible configurations.
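For readers wanting to check the 390 Wm-2 / 288K / 292K correspondence, here is a minimal round-trip through the S-B equation, assuming unit emissivity as K&T do:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T, eps=1.0):
    """Longwave emission (W m^-2) implied by surface temperature T (K)."""
    return eps * SIGMA * T ** 4

def temperature(F, eps=1.0):
    """Inverse: surface temperature (K) implied by longwave emission F (W m^-2)."""
    return (F / (eps * SIGMA)) ** 0.25

print(round(flux(288.0), 1))         # ~390.1 -> K&T's 390 Wm-2 choice
print(round(flux(292.0), 1))         # ~412.2 -> emission at the 292K upper case
print(round(temperature(390.0), 1))  # ~288.0
```

Note that the conversion only relabels a flux as a temperature; as argued above, the budget's balance is indifferent to which label is attached.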
The fact that other papers use surface radiation values other than 390 Wm-2, i.e., 396, 398, 398±5, and 398.2, is a matter of the free choice of the respective authors. Any of these values can be the basis for K&T equilibrium. Even less important than this seemingly bothersome variance is the conversion from longwave surface emission to temperature, and whether the emissivity is correct, or whether an average surface and an average surface temperature have any factual basis (that is, whether those parameters are measurable). If the emissivity, for example, is inappropriate, the assigned temperature label changes, not the energy balance.
When IPCC introduced the K&T Budget as TAR Figure 1.2, it made a great, indelible, unwarranted presumption: “For a stable climate, a balance is required between incoming solar radiation and the outgoing radiation emitted by the climate system. Therefore the climate system itself must radiate on average 235 Wm−2 back into space. Details of this energy balance can be seen in Figure 1.2, … .” TAR, ¶1.2.1, Natural Forcing of the Climate System, p. 89.
This assumption, that the budget represents a stable state, is not found in the K&T 1997 paper. When AR4 reintroduced the budget as FAQ 1.1 Figure 1, that presumption quietly vanished. Nevertheless, the notion persists in climatology and its literature. It underlies IPCC’s climate modeling paradigm that lets its models seek new higher temperatures in equilibrium climate experiments. The notion is repeated in the article above: “Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.”
Nothing in the K&T paper, nor in its Energy Budget, comprises even an implicit restorative force, in any direction, as a result of a climate forcing. Nothing present in the budget is analogous to a state of least potential energy in mechanics, which would prefer one state of K&T equilibrium over other states. The only stability in thermodynamics is thermodynamic equilibrium, the state of maximum entropy, which does not exist in IPCC claims for its modeling, much less in the thermodynamics of the real world.
When IPCC animated the K&T budget, it found that the atmospheric CO2 it could arguably attribute to man was insufficient to show the presumed greenhouse catastrophe. So, among other things, IPCC modeled CO2 as a forcing to initiate warming, triggering additional water vapor as a positive feedback, a sound reliance on the Clausius-Clapeyron relationship. However, IPCC failed to report that a change in surface temperature would likely affect about a half dozen of the baseline budget parameters. For example, IPCC ignored Henry’s Law and the positive feedback of CO2 from a warmer ocean. Nor did IPCC model the effects of increased water vapor on cloud cover, the most powerful feedback in all of climate, positive with respect to the Sun and negative with respect to warming from any source.
In 1938, Guy Callendar published his pioneering calculations, including many of the features of today’s AGW model, but most significantly an ECS of 2ºC. AR4, ¶1.4.1, p. 105. That was 77 years ago, decades before most professional journals became house-organs for doctrines du jour instead of advocates for science. He was published notwithstanding the opinion of his most senior reviewer, a meteorologist and Director of the Met Office from 1920 to 1938: “Sir George Simpson expressed his admiration of the amount of work which Mr. Callendar had put into this paper. It was excellent work. It was difficult to criticise it, but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere, and he felt that the actual numerical results which Mr. Callendar had obtained could not be used to give a definite indication of the order of magnitude of the effect.” Bold added; Callendar (1938), Discussion, p. 237.
Undeterred, IPCC, relying on radiative forcing and the K&T model, today provides several values for ECS, each with a confidence level. Its ECS numbers are 4.5ºC (83%), 3ºC (50%), 2ºC (17%), and 1.5ºC (10%). Logarithmically, these lie on a straight line (R² = 0.98; p ≈ 0.045·ECS²); extrapolating, the fitted confidence falls to 5% at an ECS of 1.05ºC. Today’s measurements by Lindzen and others are all below 1ºC. A scientist can be at least 95% confident that the AGW model is invalid, depriving the K&T Budget of its utility.
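That straight-line fit can be reproduced in a few lines of pure Python. The log-log form (ln confidence against ln ECS) is my reading of “logarithmically,” but it recovers the quoted coefficients:

```python
import math

# ECS values (deg C) and the IPCC confidence levels quoted above.
ecs  = [4.5, 3.0, 2.0, 1.5]
conf = [0.83, 0.50, 0.17, 0.10]

# Ordinary least squares of ln(confidence) against ln(ECS).
x = [math.log(v) for v in ecs]
y = [math.log(p) for p in conf]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

# Goodness of fit.
pred = [slope * a + intercept for a in x]
ss_res = sum((b - p) ** 2 for b, p in zip(y, pred))
ss_tot = sum((b - my) ** 2 for b in y)
r2 = 1 - ss_res / ss_tot

# ECS at which the fitted confidence has fallen to 5%.
ecs_at_5pct = math.exp((math.log(0.05) - intercept) / slope)

print(round(slope, 2), round(math.exp(intercept), 3),
      round(r2, 2), round(ecs_at_5pct, 2))  # ~2.0 0.045 0.98 1.05
```

The slope of almost exactly 2 and prefactor of 0.045 give the p ≈ 0.045·ECS² form, with R² = 0.98 and the 5% crossing at 1.05ºC.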
Except for the politics and emoluments, Kiehl, Trenberth, and friends at IPCC have jointly managed to retrace Callendar’s footsteps.

Jeff,
I wish to add that thermodynamic equilibrium consists of two parts: a kinetic equilibrium from matter obeying the kinetic theory of gases, and a radiative equilibrium from fluxes of photons obeying the laws of quantum mechanics. While joules are joules, a mistake often made is to assume that these two distinct manifestations of temperature must be in LTE with each other for the system itself to be in LTE. The counter example is a laser passing through an inert gas: a thermometer placed in the beam will register a higher temperature only while the beam is active, and the beam has no effect on the kinetic temperature of the inert gas molecules.
The only possibility for conversion between the two is by liquid water absorbing photons or by liquid water re-emitting photons as BB radiation. In LTE, these two processes must be equal and opposite, thus no net conversion between forms actually occurs.
George

Re: George @ co2isnotevil, 7/4/15 @ 12:13 pm
Very interesting. In researching your note, I found this: “Real atmospheres are not in local thermodynamic equilibrium [LTE] since their effective infrared, ultraviolet, and visible brightness temperatures are different. Scattering is another non-local thermodynamic equilibrium effect.” http://scienceworld.wolfram.com/physics/LocalThermodynamicEquilibrium.html
The climate problem is to estimate the long term (three decades plus) statistics of weather, and first with regard to an unmeasurable, hypothetical, global, surface temperature. Candidate models must yield a prediction experimentally shown to be better than a chance prediction. Radiative Forcing and GCMs in their various forms have failed, and for a multitude of good causes.
However, I hold great hope for heat modeling (known vernacularly and redundantly as a heat-flow model) with only a few nodes in addition to the K&T model. These are the Sun, deep space, and the deep ocean with something akin to the K&T model in the middle. These models will also add heat capacity, heat resistance, and long term ocean circulation, so they will be dynamic and transient rather than imaginary and static equilibrium models. This means the trend should be toward the macro, not the micro; somewhat like GTE (Global Thermodynamic Equilibrium) but without the equilibrium, and not LTE, even before discovering the Wolfram Research warning sign.
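A toy single-node version of the kind of heat model described above, with one heat capacity and a radiative loss, might look like the sketch below. The 50 m mixed-layer depth, the 0.61 effective emissivity, and the 240 W m^-2 absorbed-solar figure are illustrative assumptions on my part, not values from the K&T paper:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EPS_EFF = 0.61         # assumed effective emissivity of the surface node
F_SOLAR = 240.0        # assumed absorbed solar flux, W m^-2
C = 4.2e6 * 50.0       # heat capacity of a 50 m ocean mixed layer, J m^-2 K^-1
DT = 30 * 86400.0      # time step of about one month, s

T = 270.0              # arbitrary cold start, K
for _ in range(2000):  # ~160 years of simulated time
    imbalance = F_SOLAR - EPS_EFF * SIGMA * T ** 4  # net flux into the node
    T += imbalance * DT / C                         # explicit Euler step

# Analytic steady state for comparison: F_solar = eps * sigma * T^4.
T_eq = (F_SOLAR / (EPS_EFF * SIGMA)) ** 0.25
print(round(T, 2), round(T_eq, 2))  # both ~288.6 K
```

Unlike the static budget, this model is dynamic and transient: the heat capacity sets a relaxation time of roughly two years, and the trajectory toward steady state is part of the output, not an assumption.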

CM of B
Any diagram of a complex process, such as the energy budgets presented here, always contains implicit assumptions and consequently has limitations of use. In my opinion the most obvious limitation in the diagrams is the assumption of a common base level for the land and ocean surfaces. The diagrams clearly fail to take any account of the significant variation in elevation that occurs at the Earth’s surface. The surface elevation of each of the major continents is obviously not sea level; Africa, for example, has an average elevation of 600m (~2,000ft).
In addition we know that mountains generally experience a different climate, with higher levels of rainfall and lower temperatures, than the surrounding plains, and that this rainfall is a consequence of moisture lifted from the ocean surface to altitude by convective and/or orographic atmospheric processes. Water lifted up onto a high-altitude land surface and deposited as rain or snow possesses potential energy. This potential energy is released during stream flow, allowing for both the natural erosion of the land and also the transport of sediment, as the water moves down slope to lower levels. The water’s potential energy can also be harnessed for power generation, and this available energy is a component of the overall energy budget that is missing from the diagrams.
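The size of this missing potential-energy term can be gauged with a back-of-envelope sketch. The 1 m/yr precipitation and 600 m mean elevation used here are illustrative assumptions (the latter echoing the African figure above):

```python
RHO_WATER = 1000.0         # density of water, kg m^-3
G = 9.81                   # gravitational acceleration, m s^-2
RAIN = 1.0                 # assumed annual precipitation onto high ground, m/yr
ELEV = 600.0               # assumed mean elevation of that ground, m
SECONDS_PER_YEAR = 3.156e7

# Potential energy deposited per square metre of elevated land per year, J m^-2.
pe_per_year = RHO_WATER * RAIN * G * ELEV

# The same quantity expressed as a continuous flux, for comparison with budget terms.
pe_flux = pe_per_year / SECONDS_PER_YEAR
print(round(pe_flux, 2))  # ~0.19 W m^-2
```

At roughly 0.2 W m^-2 the term is small next to the ~390 W m^-2 surface emission, which may explain, though not excuse, its omission from the diagrams.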
In addition to the budgetary issues of potential energy, there is also the issue of latent heat. The energy of latent heat is stored in matter not as kinetic energy of particle motion (heat) but as bonding energy of molecular separation (phase change). Consequently, because the storage of latent heat is not a thermal process, the transport of latent heat is not a radiative process. Latent heat transport can only occur as a component of the mass transport of a condensing fluid.
In addition to the vertical mass movement of water, which is implicit in the diagrams as latent heat transfer, we also have the mass movement of the non-condensing gases associated with the water cycle. The circulation of moist air aloft and its separation into dry air and condensed precipitation that falls back to the surface demands a vertical return, not just of the precipitated water, but also of the dry air. In both cases the descent of mass (water and dry air) in a gravitational field produces thermal energy, as the potential energy of both the descending water and the descending dry air is converted into energy of motion. The descent of atmospheric fluids and gases returns heat energy to the ground, warming the surface air, as the Foehn wind, for example, clearly demonstrates.
As a footnote, the history of radiation budget diagrams goes back further than is generally thought. The earliest example I have seen is Figure 4, “Schematic representation of heat balance of earth and atmosphere,” in the 1954 paper by Henry G. Houghton, “On the Annual Heat Balance of the Northern Hemisphere,” Journal of Meteorology, Vol. 11, No. 1, pp. 1–9.

Re: Philip Mulholland, 7/5/15 @ 4:29 am:
Science imposes no requirement that a model have fidelity to the real world. Even observing that a model must link to the real world through facts (observations reduced by measurements and compared to standards), the facts can themselves be reductions via models. The Global Average Surface Temperature is just such a fact. It matters not that the calculation of GAST includes adjustments for altitude or local weather phenomena, or even for the fact that the average surface is some kind of an imaginary dilute mud of earth and water. The value of a scientific model is solely in its predictive power, not its fidelity.
Newton made models of the highest quality — laws, as it turns out — using parameters (previously) unknown to human senses and which some contemporaries even doubted existed, e.g., forces (especially at a distance), mass, momentum, and inertia. Models of physics routinely bear no resemblance to the physical world they predict. In addition to Newton’s Laws, consider the whole of thermodynamics and Henry’s Law of Solubility, a couple of compound laws that directly bear on climate. There is no premium on fidelity of emulation, on copying the real world.
Of course, such principles of science do not apply to climatology, the poster child for Post Modern Science. In PMS, the models don’t even have to work, so long as they are (1) peer-reviewed, (2) published in certified journals, and (3) lay claim to support by consensus of some narrow body of certified practitioners. The problem arises when the public and Modern Scientists get wind of the fact that these (post modern) scientific models, fully validated through the three intersubjective tests, are powerless to predict anything significant.
As to the history of radiation budgets going back to 1954, Kiehl and Trenberth (1997) note: “The first such budget was provided by Dines (1917)” (p. 197). Their reference (p. 207): Dines, 1917: The heat balance of the atmosphere. Quart. J. Roy. Meteor. Soc., 43, 151–158.

Jeff Glassman: July 5, 2015 at 7:28 am
Jeff, You say “The value of a scientific model is solely in its predictive power” – I agree
You also say “not its fidelity” – I totally disagree.
The fidelity of a model is its ability to determine the present from prior data. We establish the fidelity of a model by conducting a blind test: the model is created without using all the available priors. We then run the model and “predict” the missing data. So you see, Jeff, fidelity is prediction after all. If the model cannot predict the present from the past, then it clearly cannot predict the future from the present, and so the model has no value.
BTW thanks for the additional reference to Dines 1917.

Re: Philip Mulholland 7/5/2015 @ 1:38 pm
You’ve defined fidelity differently, I think, to incorporate predictive power. I meant fidelity in the sense of emulating the real world by copying the real world, feature by feature, in whole or in any part. The distinction I intended is that between a simulation and an emulation. To help Monckton, I was taking issue with passages such as this from your post at 4:29 am: “The diagrams clearly fail to take any account of the significant variation in elevation that occurs at the Earth’s surface.”
You seemed to be asking that the budget diagrams explicitly show surface elevation variations. That would be the feature of an emulation, which science neither requires nor rewards.
When a model has demonstrated predictive power, it has a degree of validity. Everything in the real world is taken into account, as you say. Future generations of the model might strengthen that taking into account, one way or another, as its designer sees fit. Improvement in predictive power might include a higher degree of emulation, but need not. About the only other kind of improvement science rewards for the same predictive power is the principle of Occam’s Razor: simplification.

Re: Philip Mulholland, 7/5/15 @ 3:58 pm:
Keep looking. To be relevant, you need a definition applicable to the discussion comparing simulation and emulation, one applicable to climate modeling. That’s not easy to find, and I regret not having a recommendation for everyone. A short form is that emulation means to model objects in the real world by copying them observable-by-observable, fact-by-fact.
By contrast, simulations, especially on a large scale, deal explicitly or implicitly with statistical objects, which are not directly observable. Temperature, for example, while spatially observable and measurable (microparameters), is not so globally — not for the lumped parameters representing the atmosphere or Earth’s surface. Nor are the temporal average temperatures directly observable. Still, local measurements do yield quite usable global estimates, both spatially and temporally (macroparameters). And so long as the estimating rules remain unchanged, science looks for models to predict those estimates. Distinguishing simulation and emulation on the basis of scale can be helpful.
Climatologists are now engaged in changing the estimating rules for temperature to make their estimates fit their model predictions. That may not seem right to everyone, but it’s OK in the postnormal, academic world so long as the effort is (1) peer-reviewed, (2) published in certified journals, and (3) claimed to be supported by a certified consensus.

Climate Pete Said:
“CO2 doesn’t directly cause record high daytime temperatures. In fact CO2 actually warms the nighttime temperatures more than the daytime, and the winter temperatures more than the summer, because the sunlight obviously builds up the temperatures faster than IR emissions lower them, so CO2’s most marked effect is to stop the heat from leaving the surface once the sun goes down or is less powerful in winter.”
Bingo. If CO2 were the cause of any warming it would be at night, and it wouldn’t be warming, it would be slower cooling. To implicate CO2 one would need to demonstrate that the spread between the daytime peak and the following nighttime low was narrowing. I have not found any data showing that the spread between day and nighttime temperatures has been narrowing. I also haven’t seen any data showing that the winters have been getting warmer relative to the summers.
Bottom line: increases in daytime temperatures are due to more radiation reaching the surface and oceans. That has nothing to do with CO2. To implicate CO2 one would need to show that nighttime temperatures in areas devoid of humidity, i.e., nighttime temps in the desert, were narrowing the spread. I doubt anyone can produce a data set showing that desert nighttime temperatures have been narrowing the spread with daytime temperatures. Also, the best proxy for CO2’s impact would be the extreme South Pole, and that data station hasn’t shown any warming in 50 years.

It looks like nights have been warming, but it isn’t due to CO2. The very fact that there doesn’t seem to be any research on the spreads between daytime and nighttime temperatures in the dry deserts pretty much proves either that the climate “scientists” aren’t looking, or that they don’t know what questions they should be asking. BTW, every ice core data set has temperature and CO2, clearly demonstrating that they are looking only at CO2. If all you have is a hammer, everything looks like a nail.
“I spoke with Phil Duffy, Climate Central’s chief scientist, about why nighttime lows are warming faster than the daytime highs. He replied that the answer isn’t straightforward, and then he referred me to research that has shown that an increase in cloudiness (as well as a few other factors) has warmed nights more than days. During the day, clouds both warm and cool, as they act like a blanket to reflect heat back to the surface (warming), but they also reflect sunlight back to space (cooling). At night, they only warm temperatures, acting like an insulating blanket. Thus, nights warm more than the days, and this is exactly what climate models predict. In fact, this is a good example of climate models making a prediction (warmer nights), and then having the prediction born out by the data.” http://www.climatecentral.org/blogs/record-warm-nighttime-temperatures-a-closer-look

Other questions climate scientists don’t seem to answer:
1) How many ice core data sets show that current temperatures are at peaks for the Holocene?
2) How many ice core data sets show that the temperature variation over the past 50 and 150 years is statistically different (2 std dev) from the previous 15k years?
3) How many ice core data sets show similar ranges and variations? Basically, how good of a proxy is an ice core?
