The Logarithmic Effect of Carbon Dioxide

The greenhouse gases keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere, so instead of the average surface temperature being -15° C, it is 15° C. Carbon dioxide contributes 10% of the effect, so that is 3° C. The pre-industrial level of carbon dioxide in the atmosphere was 280 ppm. So, roughly, if the heating effect were a linear relationship, each 100 ppm would contribute 1° C. With the atmospheric concentration rising by 2 ppm annually, it would go up by 100 ppm every 50 years and we would all fry, as per the IPCC predictions.
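Under the linear assumption the paragraph sets up, the arithmetic is easy to check. A sketch in Python, using only the figures quoted above:

```python
# Figures taken directly from the paragraph above.
PREINDUSTRIAL_PPM = 280       # pre-industrial CO2 concentration
CO2_CONTRIBUTION_C = 3.0      # 10% of the 30 degC greenhouse effect
RISE_PPM_PER_YEAR = 2.0       # assumed annual rise in concentration

# If the heating effect were linear, every ppm would matter equally.
warming_per_100_ppm = CO2_CONTRIBUTION_C / PREINDUSTRIAL_PPM * 100
years_per_100_ppm = 100 / RISE_PPM_PER_YEAR

print(round(warming_per_100_ppm, 2))   # ~1.07 degC per 100 ppm
print(years_per_100_ppm)               # 50.0 years per 100 ppm
```

The rest of the post argues that the linear assumption is exactly what fails.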

But the relationship isn’t linear, it is logarithmic. In 2006, Willis Eschenbach posted this graph on Climate Audit showing the logarithmic heating effect of carbon dioxide relative to atmospheric concentration:

And this graphic of his shows carbon dioxide’s contribution to the whole greenhouse effect:

I recast Willis’ first graph as a bar chart to make the concept easier for the layman to understand:

Lo and behold, the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas. One thing to bear in mind is that the atmospheric concentration of CO2 got down to 180 ppm during the glacial periods of the ice age the Earth is currently in (the Holocene is an interglacial in the ice age that started three million years ago).

Plant growth shuts down at 150 ppm, so the Earth was within 30 ppm of disaster. Terrestrial life came close to being wiped out by a lack of CO2 in the atmosphere. If plants were doing climate science instead of us humans, they would have a different opinion about what is a dangerous carbon dioxide level.

Some of the IPCC climate models predict that temperature will rise by up to 6° C as a consequence of a doubling of the pre-industrial level of 280 ppm. So let’s add that to the graph above and see what it looks like:

The IPCC models water vapour-driven positive feedback as starting from the pre-industrial level. Somehow the carbon dioxide below the pre-industrial level does not cause this water vapour-driven positive feedback. If their water vapour feedback is a linear relationship with carbon dioxide, then we should have seen over 2° C of warming by now. We are told that the Earth warmed by 0.7° C over the 20th Century. Where I live – Perth, Western Australia – missed out on a lot of that warming.

Nothing happened up to the Great Pacific Climate Shift of 1976, which gave us a 0.4° warming, and it has been flat for the last four decades.

Let’s see what the IPCC model warming looks like when it is plotted as a cumulative bar graph:

The natural heating effect of carbon dioxide is the blue bars and the IPCC projected anthropogenic effect is the red bars. Each 20 ppm increment above 280 ppm provides about 0.03° C of naturally occurring warming and 0.43° C of anthropogenic warming. That is a multiplier effect of over thirteen times. This is the leap of faith required to believe in global warming.
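The “over thirteen times” multiplier follows directly from the two per-increment figures quoted above; a quick check, taking the 0.03° C and 0.43° C values from the text as given:

```python
natural_per_20ppm = 0.03   # degC per 20 ppm, the blue bars (figure from the text)
anthro_per_20ppm = 0.43    # degC per 20 ppm, the red bars (figure from the text)

multiplier = anthro_per_20ppm / natural_per_20ppm
print(round(multiplier, 1))   # ~14.3, i.e. "over thirteen times"
```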

The whole AGW belief system is based upon positive water vapour feedback starting from the pre-industrial level of 280 ppm and not before. To paraphrase George Orwell, anthropogenic carbon dioxide molecules are more equal than the naturally occurring ones. Much, much more equal.


436 thoughts on “The Logarithmic Effect of Carbon Dioxide”

This seems a perfectly clear exposition of what’s wrong with AGW theory.

So what in the above is beyond the capabilities of mainstream journalists? If I were a Harrabin or a Black, I would want to know from the Met Office experts exactly where the errors are in this, because clearly there MUST be some serious mistake, as we know from the Met Office just last week that AGW is real and MUST be due to man-made interference with nature.

So seriously, if any main-stream journalists get to read this, why are you not doing your jobs and investigating? Surely it is the low-level arguments such as this that need to be disproved by the AGW supporters who constantly tell us that the science has been clear for 150 years.

I don’t know if this post was supposed to be misleading and confusing, but it certainly is.
Forcing is logarithmic! Surprisingly, climate scientists were aware of that.
The final graph is complete junk. Comparing forcing due to CO2 alone with forcing due to CO2 + feedbacks and finding that the latter is larger is pretty trivial.

Excellent post for the layman. But it would be nice to have a few citations. What is the source of the claim that “the first 20 ppm accounts for over half of the heating effect”?

But the following is a true statement that gives some perspective (and puts a smile on my face): If plants were doing climate science instead of us humans, they would have a different opinion about what is a dangerous carbon dioxide level.

The IPCC projection looks decidedly odd even for a layman. Unbelievable in fact. Since the amount of warming is also in doubt due to poorly sited and deleted thermometers I’m beginning to wonder what the AGW hypothesis has left to support it. Manic rants from those about to lose their cash cow appears to be the last resort. Even chairman Rudd has gone quiet and Penny Wrong has gone off to buy some floodwater.

The temperature sensitivity of CO2 is clearly not logarithmic over the entire range. The logarithmic relationship appears to range from about 40ppm to about 200ppm. After that it looks more like a 1/x type relationship. Maybe the whole curve is closer to 1/x. Has anyone tried doing such a plot?
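The two shapes may in fact be two views of the same curve: if the cumulative forcing goes as ln(C), then the increment added by each extra step is its derivative, which is exactly a 1/x shape. A sketch, using the Myhre coefficient of 5.35 W/m² quoted later in the thread (an assumption here, not something this comment states):

```python
import math

def forcing(c_ppm, c0_ppm=280.0, k=5.35):
    """Logarithmic forcing in W/m^2 relative to c0 (Myhre-style form)."""
    return k * math.log(c_ppm / c0_ppm)

# The forcing added by each 20 ppm step shrinks like 1/C,
# because d/dC [k*ln(C)] = k/C.
for c in (40, 100, 200, 400):
    step = forcing(c + 20) - forcing(c)   # exact increment over the step
    approx = 5.35 / (c + 10) * 20         # 1/x approximation at the midpoint
    print(c, round(step, 3), round(approx, 3))
```

So a per-increment plot of a logarithmic curve will itself look like 1/x; the two descriptions are not in conflict.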

I’ve read David’s papers on this before (I’m a layman, I hasten to say). Apart from a derivative graph on the Junk Science site and a quote from Fred Hoyle, I’ve not been able to dig out any further papers that support this idea. I’m told the science is settled, but something as basic as this seems to cut right to the heart of the whole argument… how could something as simple as this have been overlooked?

You mean like the melting ice-caps decreasing albedo. How’s that particular forcing working out for you?

The idea that water vapour suddenly kicks in at a particular CO2 concentration is extremely odd. Water vapour levels don’t depend on CO2 levels, just temperature and wind. We already have parts of the world which remain at high temperatures year round – the tropics. They would already be exhibiting the vapour feedback, and have done so for centuries. We have parts of the world which are always cold – the poles. No substantial feedback effect there because they don’t get hot enough. So any water vapour effects will have to come at the margins, and they won’t generate the accelerating effect you need.

The idea that water vapour will cause accelerated warming seems to rest on the idea that the earth is a consistent temperature. Since it isn’t, there can be no magic kick-in point.

Nice one David,
I can’t stand it when an alarmist states (unchallenged) that the sceptics have not come up with one argument to upset the settled science. Well, how about we get this into the head of the next talking head (to challenge): hey dude, CO2’s contribution is logarithmic, not linear – heck, is that scientific enuff for ya? Plimer touches on it in his book – but it is well explained here, good job. I like the line about the point of view of the plants. You know, I reckon corals and foraminifera might like it for their skeletons too.

I’ve read in the literature somewhere that CO2 contributes about 25% of the greenhouse effect, i.e. 7-8° C. Can you cite where the 10% value comes from?
And there are other logarithmic curves, e.g. Lindzen’s, that are less flat after the first 300 ppm than the ones presented above. I’m wondering which are correct. Have Willis’s graphs been peer-reviewed? (Not that it makes a difference.)

It’s a curious thing that this point – well made by David here – still gets reiterated and, as far as I am aware, remains without response from the modellers. In the middle of last year I wrote a book aimed at my fellow environmentalists in which I outline this issue – especially the 300% ‘gain factor’ in the equations – which is not so easy to derive from IPCC documents, and for which I must thank Christopher Monckton and his article for the American Physical Society, where he tracks it down to James Hansen way back at the very beginning. It is a theoretical feedback, as Richard Lindzen pointed out at IPCC-1 in 1990! The modellers have taken the ‘warming’ (which, it now appears, was partly conjured and manipulated in the ‘gridded data set’ process) as evidence of the theoretical projection – and small wonder that when, after 2002, the warming ceased (no appreciable rise in upper oceanic heat content, which is where 80% of the ‘warmth’ is held), Kevin Trenberth at NCAR stated in exasperation that it is a ‘travesty’ that they can’t account for the ‘lack of warming’.

You would think that at least one environmentalist from the long list of IPCC supporters would have either written to me, or pointed out in numerous public talks and discussions, where the refutation can be found – or that the Met Office would have issued some guidance. Not one word!

So – given that this blogsite is visited by the orthodox – here is a challenge: please explain, in simple terms, what is wrong with David Archibald’s presentation. I, for one, am open to listening and being re-educated – I care about the future of humanity, biodiversity… the planet. But right now much of what I and others greatly value is threatened not by the projected consequences of carbon dioxide, but by the supposed remedy for climate change, which will seriously and immediately damage landscape, biodiversity and community throughout the world, not to mention draining the pockets of taxpayers in a feeding frenzy of ‘jobs-for-the-boys’ (Friends of Rajendra Pachauri rather than Friends of the Earth) – the technologies of turbines, barrages and biofuels.

So let’s have an intelligent dialogue around this central issue – please!

Richard Telford (01:00:29) :
“I don’t know if this post was supposed to be misleading and confusing, but it certainly is. Forcing is logarithmic! Surprisingly, climate scientists were aware of that.
The final graph is complete junk. Comparing forcing due to CO2 alone with forcing due to CO2 + feedbacks and finding that the latter is larger is pretty trivial.”

Looks like you have an excellent opportunity to demolish the sceptics with at most one follow-up submission, which I hope you’ll make. Please avoid abuse such as ‘complete junk’ (it makes you feel better but does not enlighten us) and tell us exactly where the author has gone wrong and misled us.

Why is the last paragraph complete junk? Forcing is logarithmic. Surprisingly, climate scientists were aware of that….. were they? So why do you get such alarming amounts of warming out of a trace gas?

Either you know a lot that needs to be explained to the rest of us, or you don’t know nuffin.

If these graphs display the reality, then why do the IPCC and the Met Office want – seriously want – to continue propagating the myth of AGW? What is the purpose? If someone could describe to me in simple detail the reason behind the corruption of data and evidence that in the end becomes simple propaganda, then, whilst not being happy with the current situation, I could at least understand why it exists.

You see, it is a really big issue, because if the BBC is correct according to their documentary series about the solar system (last night), we only have 5 billion years left to sort the problem out.

Within that time frame the sun will implode and turn planet Earth into Walkers crisps. Hopefully, if Al Gore is still around then, he will get fried first, so I am now going down to the gym and restricting my diet to the minimum amount of calories so that I live long enough to watch the episode on reality TV. I can’t wait!

I’m a little confused on this. The logarithmic nature of the Beer-Lambert law is well established but my understanding is that it applies even with an abundance of IR radiation that CO2 can absorb and dissipate as kinetic energy. Now, I’m under the impression that there exists a scarcity of IR radiation in the wavelengths which CO2 absorbs and the only effect of increasing the amount of CO2 in the atmosphere is that this finite amount of available energy gets absorbed closer to the ground.

So, is it correct that there are, in fact, two influences here, i.e. (a) the logarithmic impact on temperature according to the Beer-Lambert law, and (b) the finite amount of reflected IR energy already being absorbed to extinction by the CO2 already in the atmosphere? Can someone please clarify this for me?
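For what it’s worth, the saturation being asked about can be illustrated with the Beer-Lambert form itself: the absorbed fraction A = 1 − exp(−kC) climbs steeply at first and then flattens as the band fills up. The constant k below is purely illustrative, not a measured CO2 cross-section:

```python
import math

def absorbed_fraction(c_ppm, k=0.01):
    """Beer-Lambert absorbed fraction at concentration c.
    k is an illustrative constant, not a physical CO2 value."""
    return 1.0 - math.exp(-k * c_ppm)

# Each additional 100 ppm absorbs less of the remaining band radiation:
for c in (100, 200, 300, 400):
    print(c, round(absorbed_fraction(c), 3))
```

This is only the single-band saturation picture; the question of energy being absorbed lower in the atmosphere versus not at all is a separate radiative-transfer issue.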

MC – thanks for the link : brilliant debating, just superb. Monckton, with wit and humour aplenty, describes the reduced greenhouse effect of each additional atmospheric CO2 molecule over its predecessor, so this debate is not entirely off topic. It is well worth a watch if you have a spare 80 minutes or so.

If their water vapour feedback is a linear relationship with carbon dioxide, then we should have seen over 2° C of warming by now. We are told that the Earth warmed by 0.7° C over the 20th Century.

I’m not sure what claim the first sentence is based on. Can the writer elaborate?

My initial thoughts when I look at the post. The current radiative forcing at top of atmosphere from 380ppm of CO2 is around 1.7W/m^2 and from all increases in “greenhouse” gases = 2.4W/m^2. What I would call “non-controversial physics”, because it nicely ignores the rest of climate effects – the radiative-convective effect in isolation.

When you calculate the “rule of thumb” surface temperature increase from these 2 numbers above using the Stefan-Boltzmann equation, you get 0.5° C and 0.7° C increases in surface temperature respectively. This is without feedbacks. You can see this all laid out in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers

But these are just rules of thumb – a “ready reckoner” approach to save standing in the long queue for the GCM each time you want to know something.
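The rule-of-thumb numbers above can be reproduced from the Stefan-Boltzmann linearisation, assuming the usual effective emission temperature of about 255 K (my assumption; the comment does not state which temperature it used):

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0      # effective emission temperature, K (assumed)

# Linearising F = sigma*T^4 gives dF/dT = 4*sigma*T^3,
# so dT = dF / (4*sigma*T^3), with no feedbacks.
sensitivity = 1.0 / (4 * SIGMA * T_EFF**3)   # ~0.27 K per W/m^2

for forcing in (1.7, 2.4):   # W/m^2, the two figures quoted above
    print(forcing, round(forcing * sensitivity, 2))
```

This lands at roughly 0.45° C and 0.64° C, close to the quoted 0.5° C and 0.7° C.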

As far as I understand the point of this post, the climate modeling community is wrong because the current temperature increase isn’t 0.5° C × 2 or 0.7° C × 2?

I have my skepticisms about the climate models, but is this critique based on what climate modelers say? Is there a paper to reference?

The water vapor feedback is perhaps one critical aspect of climate models.
Do climate modelers presume it linear from pre-industrial times?
Do they calculate it to be linear from pre-industrial times?

I don’t know the answer. There seems a presumption in the article but no reference. It would be nice to check.

3. Where does the forcing calculation in the 2nd graph come from and what is it saying?
The IPCC, effectively quoting Myhre (1998), says radiative forcing at top of atmosphere = 5.35 × ln(C/Co), where Co is the pre-industrial level of CO2, 278 ppm. It is specified in boring detail (see it at the http://scienceofdoom.com reference above) and tells us the expected addition to radiative surface forcing.
This 2nd graph says “Net downwards forcing” – at surface? at TOA? And is this after feedback, before feedback?
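The quoted expression is easy to evaluate; plugging in today’s concentration reproduces the ~1.7 W/m² figure mentioned earlier in the thread:

```python
import math

def myhre_forcing(c_ppm, c0_ppm=278.0):
    """IPCC / Myhre (1998) simplified CO2 forcing expression, W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(myhre_forcing(380), 2))       # ~1.67 W/m^2 at 380 ppm
print(round(myhre_forcing(2 * 278), 2))   # ~3.71 W/m^2 for a doubling
```

Note that the expression on its own says nothing about feedbacks, or about surface versus TOA, which is exactly the commenter’s question about the graph.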

Contrast the comment by Antonia (02:14:59), which attempts to get more information from Richard Telford, with the following one from Ronaldo (02:17:36) which is guaranteed to kill any serious discussion.

“The whole AGW belief system is based upon positive water vapour feedback starting from the pre-industrial level of 280 ppm and not before.”
This is just assertion. No reference, no cite. It’s a fabrication.

Positive feedback, assumed by the AGW hypothesis, can’t just kick in today or at 280ppm: it would have to apply at all CO2 concentrations if such feedback exists. If every 20ppm CO2 causes 0.46 degC warming, then this would hold for concentrations below the level of today, and because of the logarithmic effect, each 20ppm reduction in CO2 concentration would give a monotonically increasing effect on cooling (lack of heating). So going down to the so-called pre-industrial level of 280ppm would appear to knock out about 2.5 degC of heating, if this positive feedback be true.

The fourth graph says it all, if your calculations are correct: the cumulative effect declines to zero at around 280ppm. No – that can’t be right: any cumulative effect must go through the origin: there can be no good reason why positive feedback would kick in at 280ppm. I would expect to see the cumulative effect starting at the X-Y axes origin and monotonically increasing with CO2 concentration, BUT with the increase in the cumulative temperature per 20ppm CO2 decreasing with increasing CO2 concentration.

which describe experiments which used spectral analysis of downward longwave radiation to determine the contribution of the various GHGs to the total signal, show that CO2 doesn’t contribute more than 35W/m2 at its most active range and would likely be below 10W/m2 throughout most of the Tropics and Subtropics.

A great post by David; of course, the effect of increasing CO2 on temperature has long been known by the IPCC to be minuscule – that is why they invented the enhanced greenhouse effect, which depends on the slight warming from increased CO2 releasing water into the atmosphere, with its much greater greenhouse effect.

This is wrong on many levels, despite Mr Telford’s typical warmist snark. First, there has not been an increased level of water going into the atmosphere; Paltridge’s excellent paper establishes that, with the ghost in the machine of Miskolczi present in the stability of optical depth over the enhanced greenhouse period.

Secondly, the role of clouds has been profoundly misunderstood by AGW proponents; the recent Pinker et al dispute between Monckton and Lambert in their Sydney debate shows this; the SW flux findings of Pinker, most likely caused by cloud variation, are sufficient to explain recent warming. In this respect Monckton, despite misunderstanding cloud forcing, was correct about climate sensitivity to increases of CO2; this tiny CS from increased CO2 must be based on the log effect described by Archibald, and this fact, coupled with the moderating role water plays against temperature movement in any direction, fundamentally contradicts AGW.

Senior scientists at the Wadia Institute of Himalayan Geology (WIHG) have rejected the global warming theory and said that the Himalayas are quite a safe zone on Earth, where global warming has no role in controlling the conditions. They also said that the conditions of the Himalayas are controlled by winter snowfall rather than by external factors like the much-hyped global warming.

God save me from scientists and pseudo-scientists, you are all as bad as each other. If you are so concerned about the environment, then for goodness’ sake do something practical, e.g. don’t buy biscuits or anything else that uses palm oil.

50,000 orang-utans have already been sacrificed for your own personal health and wellbeing, before you even consider the average American’s concern about the cash in his pocket, palm oil being the cheapest vegetable oil available. And if you moan about water shortage, then consider that it takes 14,000 litres of fresh water to make one litre of biofuel.

Who, for goodness’ sake, cares whether or not CO2 is logarithmic, or suffers from attention deficit disorder, or is maybe subject to rabies? What we do know is that not one single prediction made by the IPCC, Gore or Hansen has come true; therefore common sense would clearly indicate that it’s all hot air and that only someone severely retarded would want to continue this idiotic debate rather than actually do something about the absolute destruction of our environment.

Americans should eat fewer hamburgers, not because cows belch but because more rainforest is destroyed each year just to feed the average American’s desire for cheap, subsidised food.

Get your face out of the screen, go out and let some daylight into your challenged brains, and recognise that none of your bellyaching about pointless statistics will change anything. Or is your chosen sense of status more important than the biodiversity that you think will be saved by your craven indulgence?

I would urge WUWT readers to take anything written by David Archibald with a large pinch of salt.

The main (only) point of debate between responsible sceptics and AGWers concerns feedback. Sceptics think it’s likely to be small or even negative; AGWers think it will be large and positive.

David A says: “Carbon dioxide contributes 10% of the effect so that is 3° C”. This is rubbish. Jack Barrett, a leading expert in spectroscopy – and a sceptic (see http://www.barrettbellamyclimate.com/page3.htm) – reckons the effect of CO2 is more like 9 deg (see the link to his paper on Warwick Hughes’ site). In fact David Archibald’s own graphs suggest his numbers are wrong. The Modtran plots show the net downward forcing increasing by ~25 W/m2 (~235 W/m2 -> ~260 W/m2) due to the current concentration of CO2.

Is David saying that 25 w/m2 only equates to a 3 deg rise?

How does that square with his claims that weaker solar output will result in a temperature decline of ~2 deg over the next “few years”? TSI measurements show that the sun’s output varies by ~0.1%, or ~0.24 W/m2 at the earth’s surface.

In this WUWT post, Richard Lindzen estimates a 1 deg increase from a doubling of CO2.

There are many other reputable (non-AGW) scientists who say much the same. If the feedback effect is small (or negative) then we don’t have a problem, and it’s quite possible that natural variation will ‘hide’ most of the effect. CO2 will, though, have an effect. There are also very good reasons why continuing to add CO2 will not, as David says, result in the effect being ‘tuckered out’, but will result rather in indefinite warming. I’m not prepared to go into that now, though.

That CO2 forcing increases logarithmically with concentration has been known for over a century. Arrhenius (1896) did the necessary calculations:
“if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression”
The IPCC are well aware of this
“Note that for CO2, RF increases logarithmically with mixing ratio” (AR4 WG1 chapter 2.3.1)
and this knowledge is implicit in all the model projections. To imply otherwise is disingenuous.
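Arrhenius’s statement is just the defining property of the logarithm: equal ratios of concentration give equal additions of forcing. A quick illustration, using the Myhre coefficient quoted elsewhere in the thread:

```python
import math

K = 5.35   # W/m^2 per unit of ln(concentration ratio), Myhre's coefficient

# A geometric progression of CO2 concentrations...
concentrations = [280 * 2**n for n in range(4)]   # 280, 560, 1120, 2240 ppm
forcings = [K * math.log(c / 280) for c in concentrations]

# ...yields an arithmetic progression of forcings: every doubling
# adds the same K*ln(2) ~ 3.71 W/m^2.
diffs = [round(b - a, 2) for a, b in zip(forcings, forcings[1:])]
print(diffs)   # [3.71, 3.71, 3.71]
```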

Graph 4 is simply wrong. The bar chart is warming per 20ppm increase in CO2 concentration, the line is the cumulative effect. The units of these two parts are different – the first is deg C/ppm, the second is deg C. They are incomparable.

That problem is fixed in the last plot: at least the units are the same in this figure. But there is a second problem. The natural CO2 forcing is shown without any feedbacks, whereas the anthropogenic forcing is shown with feedbacks. This is misleading. Nobody would argue that the natural changes in CO2 are not magnified by feedbacks (try to explain the glaciations without feedbacks). One can argue about the magnitude of the feedbacks. Perhaps the IPCC has them too high. Perhaps too low.

Thank you, David. This one simple fact has not been repeated often enough. Dare I say it has been suppressed in the mainstream debate.

I know that most disinterested people I talk to are under the impression driven by media alarmists that as atmospheric CO2 levels increase the temperature follows in a linear, if stochastic, fashion. That is the single greatest myth behind AGW demagoguery.

Watts and others should repeat some version of this post once a month for the next decade. It can not be stated often enough!

So a doubling of CO2 relative to pre-industrial times leads to an increase of 2 W/m2. The difference between the solar minima and solar maxima IIRC is about that much.

The doubling of CO2 relative to pre-industrial times leads to an increase of ~3.7 W/m2. When looking at the difference between solar minima and maxima you need to look at insolation, i.e. what the earth receives, not TSI. If TSI increases by 2 W/m2, the earth’s surface, on average, only receives 25% of this (think day, night, winter, summer). Of that, ~30% gets reflected back to space due to the earth’s albedo. A 2 W/m2 increase in TSI equates to an increase of ~0.35 W/m2 averaged over the earth’s surface. You should take a bit more notice of Dr. Leif Svalgaard’s posts and a little less of Dr. David Archibald’s.
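The geometry-and-albedo bookkeeping in that correction can be written out step by step (a sketch; the 0.30 albedo is the standard round figure):

```python
# Converting a change in TSI into a globally averaged forcing.
# Numbers from the comment above: dTSI = 2 W/m^2, albedo ~0.30.
d_tsi = 2.0
albedo = 0.30

# A sphere intercepts sunlight over its cross-section (pi*r^2) but
# spreads it over its full surface (4*pi*r^2): hence the factor 1/4.
# The albedo fraction is reflected straight back to space.
d_forcing = d_tsi * 0.25 * (1 - albedo)
print(round(d_forcing, 2))   # ~0.35 W/m^2
```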

Let’s see, usually we are told by AGW skeptics that the atmosphere is too complex to understand. That the sophisticated mathematical models that run on supercomputers cannot possibly come close to the real climate. But, now we are to believe that some guy with a graphing calculator has got it all figured out! He didn’t even need calculus, just an ln x button. Think of all the tax dollars that could have been saved!

I have been on the receiving end of proponents’ reply to the temperature changes mentioned in the above post.

The counter argument is that we need to make a distinction between the “equilibrium” climate sensitivity and the “transient phase”. We are supposed to be in the transient phase at the moment, and it is argued that the climate will take centuries to settle to equilibrium. According to this argument, we would not expect to see the equilibrium conditions for a long time to come.

I have not found this to be convincing.

Even if we were to accept that the present condition is the transient en route to a much higher equilibrium, the transient should still be evident in measurement. IPCC AR3 Chapter 9 has a beautiful chart of the “big red spot”, which shows how atmospheric heating above the surface is necessary to observe heating at the surface (if the heating is due to radiative physics). We should be able to see the red spot forming by now (transient or not) – especially if some past warming has been attributed to CO2.

I know of no confirming measurements which do so. Some people claim that cooling in the ionosphere is evidence – but cooling in the ionosphere without warming further down does nothing to explain recent warming. In the absence of such evidence, I’d look upon the hypothesis as falsified.

Secondly, there is what I consider to be the unphysical argument of amplification of temperature change by positive feedback. The term “amplification” means a dimensionless constant, where a change of an input variable results in a greater change of an output variable with the same dimensions. In the case of climate sensitivity, we’re talking about units of temperature or units of radiative flux at the surface (take your pick), where a change of input results in some multiplied change when the system eventually settles to equilibrium.

Given that amplification is dimensionless, it’s clear that there is an increase of energy from the input to the output. We need to identify and account for this energy to make a convincing case for amplification in climate sensitivity.

And this applies equally to feedback systems – without an “auxiliary” source of energy, feedback cannot amplify a signal. Trying to say it does would be like arguing that I could jump into a basket and lift myself off the ground using the handle. You might get that to work in the cyberworld of computer models, but it doesn’t work in the real world.

I’m not trying to say that amplification is wrong. I just haven’t seen the full explanation of energy flows, so references would be welcomed. And until I see this, I would tend to view amplification of the climate sensitivity as another falsified hypothesis.
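For reference, the amplification under discussion is usually written as simple feedback algebra. A sketch, with the no-feedback response λ0 ≈ 0.27 K per W/m² and the 3.7 W/m² doubling forcing both taken as assumptions (they match figures quoted elsewhere in the thread, not anything this commenter asserts):

```python
def equilibrium_warming(forcing, f, lambda0=0.27):
    """Equilibrium dT = lambda0 * dF / (1 - f) for feedback factor f < 1.
    lambda0 is the assumed no-feedback response, K per W/m^2."""
    if f >= 1.0:
        raise ValueError("f >= 1 implies runaway; the linear algebra fails")
    return lambda0 * forcing / (1.0 - f)

D_F = 3.7   # W/m^2 for a CO2 doubling (Myhre-style value, assumed)
for f in (0.0, 0.5, 0.75):
    print(f, round(equilibrium_warming(D_F, f), 1))
```

Whether f is large, small, zero or negative is precisely what the sceptic/AGW dispute is about; the algebra itself is neutral on that.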

That was the seal of approval. Real Climate felt they could no longer ignore it, they had to try to counter it. Thanks guys. Without that sort of feedback, you don’t know how effective you are.

There are two sorts of IPCC scientists – the ones that fake the historical record and the modellers who generate warmings for a doubling of CO2. We hear a lot about the former, but the latter are required to give credence to the alarmist projections. Looking at a graphic that Roy Spencer produced in early 2008, there are 21 model results contributing to the IPCC consensus, with the lowest warming 2.5 degrees and the median 3.5 degrees.

But it had been bugging me for a while that the global warming belief system has its heating starting from the pre-industrial level, and not some other point. Never mind that Spencer has shown that the feedback is negative, not positive. What the AGW belief system requires is that the system is quiescent up to the pre-industrial level and then it just explodes. It requires everybody to believe in fairies at the bottom of the garden. It defies the laws of physics and nature. It is the big lie upon which the whole AGW edifice is founded.

Richard Telford (01:00:29) :
I don’t know if this post was supposed to be misleading and confusing, but it certainly is.
Forcing is logarithmic! Surprisingly, climate scientists were aware of that.

Then why do their models produce an algebraic [arithmetic] result?

I think some of the answer is that if CO2 levels are climbing exponentially, then the log() of that is a straight line. The catch is that current CO2 levels can better be modeled, I believe (i.e. no references, and I wouldn’t believe this if I were you), as a baseline plus an exponential. The log() of that is quite a bit flatter until the exponential overwhelms the baseline.

Get your face out of the screen, go out and let some daylight into your challenged brains, and recognise that none of your bellyaching about pointless statistics will change anything

My bellyaching changed the opposition leader here in Australia and stopped the proposed ETS in its tracks. Why is that important? Well, if the likes of Watts, McIntyre, Monckton et al. weren’t bellyaching, you and I and our nations would be that much poorer due to grab taxes like the ETS. Why is that important? Look at the environments in poor countries compared to well-off countries. The people in poor countries don’t give a chit about the environment; they rightfully care about where their next meal will come from.

I’m guessing you made your remarks after being well informed by reading these sorts of blogs for many months now. Why aren’t you outside getting some fresh air? Or were you naughty and just quipped after reading one or two threads?

More on my note above – my comment about log(exp()) being linear assumes a graph of the function over time, but Archibald’s graphs show temperature with respect to CO2 concentration! I think models and warmists talk about a linear increase over time; this may be a major disconnect between Archibald and them!

Where Archibald has a straight line for modeled warming, it should be a log() curve (or log(baseline + exp()), as I mentioned before).
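The “baseline plus exponential” idea can be sketched directly; the growth parameters below are arbitrary illustrations, not fitted values:

```python
import math

def co2(year, baseline=280.0, a=1.0, r=0.02, t0=1800):
    """Toy CO2 model: fixed baseline plus a growing exponential.
    All parameters are illustrative, not fitted to observations."""
    return baseline + a * math.exp(r * (year - t0))

# log() of this stays nearly flat while the baseline dominates, and
# only approaches a straight line in time once the exponential takes over:
for year in (1850, 1950, 2050, 2350, 2650):
    print(year, round(co2(year), 1), round(math.log(co2(year)), 3))
```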

Your post is interesting, but rests on an arbitrary choice that skews the results.

Your 10% of the greenhouse effect for CO2 is an arbitrary choice, although it is within the range supported by the literature.

Going back to Ramanathan and Coakley 1978, we see that they get a wide range for CO2’s contribution to the greenhouse effect: between 9 and 26 percent. The range is calculated by first removing all CO2 from the atmosphere and recording the surface temperature (9%), then removing EVERYTHING BUT CO2 from the atmosphere (26%). This was done in a basic radiation code and has been repeatedly verified, with variations of no more than a percent, in the 30 years since R&C.

But what exactly does the 9% tell us? Unfortunately, the 9% doesn’t tell us much. Since concentrations of WV are held steady in the radiation code to get the 9% number, there is an assumption of 0 feedback for CO2. (In other words, 9% assumes that none of the WV in the atmosphere is a result of the warming caused by CO2) Since your arbitrary choice is so close to this no-feedback number it is not surprising that you get odd results when compared to climate model predictions.

If we knew the exact contribution of CO2 to the GHE, we would know the strength of feedbacks and have a pretty good estimate of climate sensitivity to CO2 doubling. Unfortunately we do not know this with the certainty you imply in this post.

So essentially your analysis proves that if you select a small effect for CO2, you will get results that support a small effect. An equally arbitrary (and equally supported) choice of 20% for CO2’s contribution would get completely different results.

I am interested in this subject because it involves such a vast waste of taxpayer money which could be put to much better use. We are in a serious economic recession and our politicians want to handicap us further by imposing taxes on industries that emit CO2. I am horrified by the vast lie that is AGW.

I was brought up on David Archibald’s excellent ‘Solar Cycle 24’, and I am very grateful to have been given a copy of it by an alert friend.

The question of whether the CO2 warming effect is logarithmic seems to me to be a crucial, fundamental one. It either is, or it isn’t. Why isn’t this question discussed more often, and more emphasis put upon the answer? The Earth’s atmosphere has been in equilibrium for the past 500 million years, and life has flourished.

We are bumping along the bottom of the range of CO2 seen during the past 500 million years. I gather that the average amount of CO2 in the atmosphere over that period was 2,500 ppm. Why are we now so paranoid about a mere 388 ppm? And even more paranoid about the mere 15 ppm produced by our burning of fossil fuels?

It seems to me an absurd collective madness has overtaken most of the population. Get a life? Get a grip.

The Earth’s climate has been in equilibrium for millions of years. There must be a robust mechanism that keeps it in equilibrium. CO2 has been as high as 5000ppm during the past 500 million years. There was no runaway warming. Life flourished.

I was brought up on David Archibald’s book ‘Solar Cycle 24’. It was an excellent book

David, as a skeptic who is well versed in the logarithmic nature of CO2 forcing, I don’t have a clue which message you are trying to give here. The CO2 range of interest is 100 to 1500 ppm, and Arrhenius already concluded in 1906 that this leads to a 1.2 degree temperature rise for every doubling. The only debate now is about feedbacks: is Miskolczi right that tau is a constant (which means CO2 is compensated by less water vapour), or is the IPCC right that the feedback is positive? This posting is really not helping in this debate.

So … when is this information going to be published in a peer-reviewed journal? Or has it already been published in one?

It would seem to me that in order to make a valid claim that the IPCC scientists are cherry-picking their information, alternative information needs to be published in peer-reviewed journals. Granted, I’m aware of the conspiracy to block alternative information and to prevent it from getting to the mainstream. But still, you would stand a better chance of making a valid claim if it were published.

I’m a staunch skeptic and find much wrong with Climate Science from a scientific perspective. However, I find myself scratching my head wondering why “science” is not driving ahead according to the accepted protocol. … Publish!

There are so many things: station drop-out, solar forcings, this issue of CO2 forcing, bad models, bad thermometers, etc., etc. But it seems little of it gets published in journals – just on the internet.

Dear Dr. Archibald, stick to your brilliant approach of the Sun cycles. The greenhouse doesn’t exist; it’s dead. The so-called “greenhouse effect” is for closed systems – it really means “trapped heat”. Our Earth, HOLY GAIA to the world-government conspiracy believers, is not a closed system but an open system… and, as Lord Monckton has shown, based on satellite-observed energy balance, energy loss from our planet is greater than energy gain. Temperature is but an almost subjective reference.
The famous physicist Niels Bohr, at the beginning of the 20th century, described this “greenhouse effect” as non-existent:

“The final graph is complete junk. Comparing forcing due to CO2 alone with forcing due to CO2 + feedbacks and finding that the latter is larger is pretty trivial.”
————-

Feedbacks are infinitely complex, and the portion that deals with cloud cover and changes in sea ice and snow cover is poorly understood. Only the water vapor aspect is well understood, and it is widely agreed to be positive. Accurately assigning a quantity to that positive value for water vapor is another matter. There is wide disagreement (depending on which climatologist you ask) over whether cloud feedbacks due to increased CO2 forcing are positive or negative. What is known is that cloud feedbacks are highly chaotic. Feedbacks due to CO2 forcing don’t just “take off” at some imaginary threshold, but rather logically follow a smoother curve. The idea that a diminutive increase in forcing from an increase in atmospheric CO2 could be amplified several times by positive feedbacks (with the assumption that the positive feedbacks overwhelm the negative ones) runs parallel to the concept of an over-unity perpetual motion machine, thus my skepticism. Besides that, the Stefan-Boltzmann law for black-body radiation predicts that there is always a strong negative feedback for any type of forcing that changes the temperature of the body.
It would require a googolplex of data points to actually measure the feedbacks at work on the globe for a single day. Good luck with that. All we have is some limited data, fuzzy theoretical math equations, and GIGO computer models to predict feedbacks.
Here is a statement found at http://www.faqs.org/faqs/sci/climate-change/basics/ :
“These figures are compatible with the IPCC estimate of about 1.5 to 4.5 °C surface warming for a CO2 doubling.”
The range of that estimate spans 300% – the maximum is three times the minimum. Nothing to see here, move along.

Hottel et al. developed a method of calculating the impact of CO2 on radiant heat transfer in the atmosphere. Leckner greatly improved on the method. Both take into account all of the “fudge factors” that relate to absorbance: full spectra, concentration, distance, re-radiation within the gas, etc. This work, though ignored by climate scientists, is used by everyone else who needs to calculate radiant heat loss in the atmosphere (so many people do this that even engineers like me can learn it). Applying this work to the atmosphere shows that the logarithmic relation is an approximation; a log/log relation is more likely. Anyway, the graphs of forcing one gets using these methods are similar to F = 5.35 ln([CO2]/284). It just flattens more at higher concentrations (above 100 ppm). It does indeed show that most radiant heat effects begin at very low levels of gas. Using this method, the maximum absorbance by CO2 would be about 45 W/m². This is attained at about 200 ppm (100 ppm gives 40). 22.5 W/m² is attained at about 6 ppm. If Anthony is interested, I can send him a paper I’ve written that clearly shows the method for obtaining these figures, including sample calcs, etc., that make creating the graphs relatively trivial.
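For reference, the simplified expression quoted here can be evaluated directly. This sketch (using the 284 ppm baseline from the comment) just illustrates what “logarithmic” means in practice: every doubling adds the same fixed forcing, about 3.7 W/m²:

```python
import math

def forcing(c, c0=284.0):
    """Simplified expression F = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c / c0)

# Each doubling of concentration adds the same ~3.71 W/m^2,
# regardless of the starting point:
for c in (284, 568, 1136):
    print(c, round(forcing(c), 2))
```

The flattening described above (a log/log relation rather than a pure log) would fall off even faster than this at high concentrations.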

You make some valid points but start a shouting match with other posters quite unnecessarily.

The reason this and similar articles on this site have validity is because the foundation of the hypothesis that man made CO2 emissions will cause dangerous global warming is at the heart of the debate.

Finally does anyone know what the CO2 level was during the MWP and earlier warm periods?

Does anyone remember where that wonderful little article is which shows how much the CO2 levels have increased… by showing a graph of atmospheric gases scaled from 0 to 100%? It really puts 300 PPM in perspective.

I’ve seen elsewhere analysis of the two papers you cite as not only discussing the total intensity/amount of Long Wave Radiation, but also as documenting the lack of a measurable correlative difference in the re-radiated (outgoing) LWR given the increase of CO2 ppm in the atmosphere. In other words, as observational data that directly refutes the claims of CO2 greenhouse effects, as postulated by the AGW theory.

If something can be confirmed, via directly measurable observational data, e.g. Einstein’s prediction of the effects of gravity on light, confirmed by observations during a solar eclipse – then how is a departure from the postulation (absence of predicted behavior) not a credible refutation?

Following on from that, if the main premise is demonstrably inaccurate (in other words, wrong), how is anything based upon the hypothesis anything other than pure malarky?

CO2 – if the numbers don’t fit, you must acquit. And question why so many are so latched on to the one theoretical model that has even the slightest possibility of being influenced by the manipulation of human behavior, becomes extremely relevant, although this thread is not the appropriate venue to explore it.

Just to keep it in proportion, this is the average composition of the atmosphere up to an altitude of 25 km:
Nitrogen (N2) 78.08%
Oxygen (O2) 20.95%
Water (H2O) 0 to 4% (variable, affecting the total)
Argon (Ar) 0.93%
Carbon dioxide (CO2) 0.0360%

I just love someone popping in to discuss “responsible” scientists. Nonsense!

Scientists may be wrong in their theories, or correct, but if they are really scientists, the term responsible doesn’t belong in the discourse.

Referral to socially based grading of behavior is a warning flag that the discussion isn’t about science at all.

Attempts to hide data, algorithms, ad hominem attacks, referrals to discredited papers, and appeals to higher authority are all venal attempts to hide non-science, both from the lay public, and other, competent scientists.

This thread is a poster child for the Warmists method of doing business. It wouldn’t matter if the poster were wrong, it happens frequently that a scientist is wrong. The point is, scientists put everything out on the table to be viewed, right or wrong.

It’s the old trick of painting a window pane with successive coats of white paint and hoping to turn the room completely dark. After a few coats nothing much else happens. That’s the logarithmic effect: once you have about three coats of paint, that’s all the light-blocking the paint can do.
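The paint analogy maps onto a one-line calculation: if each coat blocks the same fraction of whatever light reaches it (the 90% figure below is purely illustrative), transmission decays geometrically and extra coats quickly stop mattering:

```python
# Each coat blocks the same *fraction* of the light reaching it,
# so transmission falls off geometrically and the absolute gain
# per additional coat shrinks fast.
block = 0.90   # assumed: each coat blocks 90% of incident light

transmitted = 1.0
for coat in range(1, 6):
    transmitted *= (1 - block)
    print(coat, f"{transmitted:.6f}")
# After 3 coats only 0.1% of the light gets through; further coats
# change almost nothing in absolute terms.
```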

If the positive feedbacks proposed by the IPCC and their merry band of modellers were true, the earth would be a fireball already. Once the climate ran away, or latched up, it would be stuck hot, or cold, for all time.
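The “latch up” intuition can be sketched as a simple feedback sum: a direct response amplified through a gain g converges to direct/(1 − g) only when |g| < 1; at g ≥ 1 the series runs away, which is the point being made here. All numbers below are illustrative only:

```python
def total_response(direct, gain, iterations=200):
    """Sum the feedback series direct * (1 + g + g^2 + ...)."""
    total, term = 0.0, direct
    for _ in range(iterations):
        total += term
        term *= gain
    return total

# Modest positive feedback converges (geometric series -> direct/(1-g)):
print(round(total_response(1.0, 0.5), 4))
# Gain >= 1 never settles -- the "latched" runaway described above:
print(total_response(1.0, 1.1, iterations=50) > 1000)
```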

In a Discovery Channel presentation last week, they went over the temperature ups and downs, the sea-level ups and downs, and the CO2 ups and downs. It’s been far hotter than today and far colder; sea levels have been 300 feet higher and 300 feet lower; atmospheric CO2 has been lower and much, much higher than today; ice has covered the whole planet and completely melted everywhere – and we are still here. Very enlightening. Here is the link … http://dsc.discovery.com/tv/prehistoric/prehistoric.html … watch if you dare.

The effect of plate tectonics was clearly shown in determining earth’s climate.

If only the media would tell the truth … scientists could get back to real science, and stop with the silly stuff. The alarmists have clearly gone past the expiration date on the public’s “Issue Attention Cycle” with their hoaxing. Time to move on.

Suggestion, next a writeup on the absorption of CO2, the lab experiments that prove it, and how it plays into this posting. May not be for the laymen though.

Wind Rider (06:12:02): “If something can be confirmed, via directly measurable observational data, e.g. Einstein’s prediction of the effects of gravity on light, confirmed by observations during a solar eclipse”
Are you sure? That was only diffraction.

The primary global warming potential action of increasing CO2 is said to be an increase in water vapor (not clouds, water vapor) due to the warmer air. Show me one graph that demonstrates a significant and increasing trend in water vapor that lies outside what is expected to occur during natural water vapor increasing events (IE El Nino, warm PDO, etc). Without increasing water vapor, the AGW theory and the models are proven false.

Just to remember some facts about CO2:
CO2 is not black, but transparent and invisible.
CO2 is the gas you exhale – about 900 grams a day.
The CO2 you exhale is what plants breathe in to give you back O2 (oxygen) to breathe.
CO2 is heavier than air; it doesn’t fly up, up and away. CO2 is a trace gas in the atmosphere – 0.038 percent of it, or 3.8 parts per ten thousand.
The atmosphere, the air you know, does not have the capacity to “hold” much heat; it only “stores” 0.001297 joules per cubic centimetre, while water – the sea you know – has 3227 times that capacity (4.186 joules).
Would you warm your feet with a bottle filled with air or one filled with hot water?
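The 3227× figure follows directly from the two volumetric heat capacities quoted; a quick arithmetic check:

```python
# Volumetric heat capacities as quoted in the comment, J/(cm^3 * K):
air_j_per_cm3 = 0.001297
water_j_per_cm3 = 4.186

# Ratio of water's capacity to air's:
ratio = water_j_per_cm3 / air_j_per_cm3
print(round(ratio))   # matches the 3227x figure in the comment
```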

It is not “greenhouse gases in the atmosphere” which raise the temperature of the planet above the -15° average calculated for a planet at our distance from the local star, but the ability of the atmosphere to absorb heat by conduction from the surface and distribute it around the globe in an attempt to establish homogeneity.

The presence of any atmosphere, with or without attendant greenhouse gases, would accomplish this to some degree.

It is the lack of a fluid atmosphere with this ability which causes the temperatures on the Moon to range from -233° to +123° over the diurnal cycle.

The greenhouse gases play a minor role in radiative blocking, but not nearly enough to account for the 30° difference between here and the Moon.

The primary global warming action of increasing CO2 is said to be an increase in water vapor (not clouds, water vapor) due to the warmer air. Show me one graph that demonstrates a diminution of OLR (a growing imbalance between incoming shortwave and outgoing longwave infrared radiation) that lies outside what is expected to occur during natural events. Without decreasing OLR, the AGW theory and the models are proven false.

The CO2 AGW theory, as it stands, can be tested with measurements beyond temperature.

I’m a bit confused about the pre-industrial CO2 levels as well – actually, about levels up to, say, 1979.

Were the measurements taken from the summer or the winter deposits of CO2? The winter levels are not as high.

And with what we supposedly know now about CO2 – that it is mostly concentrated in a couple of streaks around the Earth, riding the trade winds and jet streams – isn’t it then kind of loony to rely on measurements from the ever-so-central place called Antarctica? Well, central for birds that can’t fly, anyway.

Dear Dr. Archibald,
CO2 in ice cores in no way reliably represents the original atmospheric CO2 level, because of fractionation processes. The fossil leaf stomata indices, for example, show CO2 concentrations between 270 ppmv and 326 ppmv, while the Taylor Dome ice core showed concentrations of only 260 to 264 ppmv over the last 6,000 years. The chemical measurements of CO2 in the 19th century showed values up to 420 ppmv.

When plants receive more CO2 than the present-day concentration, they grow faster and fatter. In order to do that they must be absorbing solar radiation and converting it into potential chemical energy (fats and sugars, etc.) at a faster rate. Such conversion by photosynthesis could be represented as ‘lost heat’ in the Earth’s radiation budget, in that radiation that arrives at the surface then ‘disappears’ from the equation. Any satellite image you look at shows the vegetated/rural areas as much darker than the cities, even though they are always cooler than the cities. (Transpiration can only go so far to explain the difference, IMO.)

1) 33K is not correct, because it is calculated with the present albedo. 70% of the present albedo is created by clouds, which should not be there, since the hypothetical Earth has no “greenhouse gases”. With a cloudless Earth, the difference should be some 15K. (I will not go further, considering that such an Earth could not have oceans and would be more Moon-like, etc.)

2) The presence of greenhouse gases presupposes the existence of oceans and clouds. Clouds cool the Earth. Condensed water vapor rains onto the surface and cools it by evaporation. Oceans absorb a lot of heat, effectively cooling the surface again. The net effect of “greenhouse gases”, mainly water in its various forms, is a cooling effect.

3) The Earth is warmer because it has an atmosphere consisting of 99% nitrogen and oxygen. The atmosphere absorbs and keeps heat; therefore our nights are not as cold as on the Moon. Also, our days are not as hot as on the Moon.

4) Mars has a thin atmosphere consisting of 95% CO2, and its temperature is equal to the theoretical value of 210K. Its effective CO2 concentration is roughly equal to that of water vapor + CO2 on Earth, but it has no visible effect. Venus has a dense atmosphere consisting of 95% CO2, and its temperature is very high. However, Venus is both closer to the Sun and has a much higher surface pressure. Its temperature at the 1 bar altitude, corrected for its proximity to the Sun, yields the average Earth temperature again. –> It does not matter much what the atmosphere is composed of, but how much of it is present.

5) I believe water vapor has a dampening effect, warming nights and cooling days, but that’s all. There is no increased “greenhouse effect” observed in the polar regions, where there is only a little humidity and where rising CO2 should increase temperatures by far the most. The “GH” effect causing +33K is a theoretical construction which wrongly attributes observed reality. How can you distinguish IR radiation coming from 300 ppm of CO2 from radiation emitted by 900,000 ppm of warm oxygen and nitrogen?

In case I am wrong, either there are negative effects swallowing all the CO2 addition or, per Miskolczi, the total “GH” effect is constant, modulated by changes in water vapor.

How does an individual assess competing claims on an issue of this complexity? You’ve got to have sympathy with the journalists who basically say “whoa, this biologist from Stanford must know what he’s talking about.” Of course with a little historical perspective we can remember that the entire academic world of Geology was wrong about plate tectonics a few decades ago… In the case of AGW even looking at temperatures doesn’t really help that much because “weather is not climate” and in my opinion even if it started decisively heating up again, that still wouldn’t indicate that greenhouse gases emitted by human activity had anything to do with it, necessarily. It’s a vexing question.

A missed ‘key’ here is the same concept for H2O vapor. Absolute humidity is again a log function. If you assume an increase – say 5K – and recompute absolute humidity based upon the standard climate assumption of constant relative humidity, you’ll find there is an increase of about 30% in absolute humidity. This is far less than a doubling. Note that H2O occurs at much higher concentrations than CO2 and is much more potent. I haven’t used Archer’s MODTRAN calculator to manipulate H2O content, but I’ve got my own 1-D model. H2O effects are around 8-10 W/m² increase for a doubling versus 3.7 W/m² for a CO2 doubling.

The net results indicate that a 5 °C increase in temperature results in less forcing than a CO2 doubling. That means we’re missing over 3 °C of warming to achieve that 5 °C rise after accounting for H2O vapor and CO2 – ignoring additional cloud formation, etc. Try it for a 2 °C rise and you’ve got the same problem: a CO2 doubling is good for less than 1 °C on its own, and a 2 °C rise supports much less water vapor forcing than a 5 °C rise (which itself was roughly comparable to a CO2 doubling, but a little less). For a 2 °C rise we’re still missing almost half of the necessary forcing, as the H2O is going to contribute only about a third of what a CO2 doubling does.
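That bookkeeping can be reproduced with the commenter’s own numbers (all of these are the comment’s assumptions, nothing measured here): ~9 W/m² per water-vapour doubling, 3.7 W/m² per CO2 doubling, and ~30% more absolute humidity for an assumed 5 K warming. If the H2O response is also logarithmic, a 30% rise is well short of a doubling:

```python
import math

# All numbers below are the commenter's assumptions, not measured values.
co2_doubling = 3.7    # W/m^2 per doubling of CO2
h2o_doubling = 9.0    # W/m^2 per doubling of water vapour (8-10 claimed)
humidity_gain = 1.30  # ~30% more absolute humidity for an assumed 5 K warming

# A 30% rise is only log2(1.3) ~ 0.38 of a doubling:
h2o_forcing = h2o_doubling * math.log(humidity_gain, 2)
print(round(h2o_forcing, 2))   # less than one CO2 doubling's 3.7 W/m^2
```

On these assumptions the water-vapour forcing from a 5 K warming comes out slightly below one CO2 doubling, which is the comparison the comment is making.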

Don’t look now, but they’re trying to move the goalposts. The extra will be the methane beast that they didn’t know about in any quantitative fashion (and still don’t) and didn’t program their models for – the ones that promise 5 °C rises. LOL

“The greenhouse gasses keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere…”

to say:

“Insulating gases keep the Earth 30° C warmer than it would otherwise be without an atmosphere…”

I like the analogy of the Earth as a hot water bottle over the Earth as a greenhouse… I think the hot water bottle is more accurate. The CO2 component contributes an interesting added warming effect, but it is small compared to how the sun heats the water and the water then heats the atmosphere.

In that respect global warming is like a religion, except that an ordinary church collects from ordinary people, while the global warming crowd wants to collect the cash of whole countries at once – a much more expensive school :)

I have a great deal of sympathy with the view taken by Mari Warcwm (05:41:01). As with so many things, money seems to hold sway over common sense. In the UK we have a government which is practically bankrupt proposing to spend millions we do not have on wind farms which will not work. They also plan to bury CO2 under the North Sea – another green dream which has already been shown to be impossible.
The AGW pseudo-science is so deeply entrenched in money that scientific argument will never shift it.
According to James Delingpole, the BBC Pension Fund has £8 billion invested in the Institutional Investors Group on Climate Change – others similarly involved include several County Councils and other pension funds.
This probably explains the BBC’s reluctance to see the light. It would be most interesting to know just how many more of our sources of information are tainted in this and similar ways.

First there is very little extra energy available to be absorbed by CO2 as seen in the first two graphs. That is what the logarithmic relationship is all about.

Second H2O is a much bigger player as seen by the total amount of energy absorbed by H2O vs that absorbed by CO2.

Then you must add in the amount of water vapor vs the amount of CO2 in the atmosphere and the tremendous variability in the amount of water in the atmosphere by location and time.

Finally, there is the percentage of CO2 generated by mankind (3.1%) compared to the total annual amount of CO2 produced (mankind: 23,100 million metric tons/yr vs. a total of 793,100 million metric tons/yr).

Catastrophic Mann-made Global Warming is laughable when you actually look at the facts. I do not care what type of multiplier “forcings” the IPCC and climate scientists try to conjure up to explain how a minuscule amount of man-emitted CO2 is going to make the sky fall.

“Let’s see, usually we are told by AGW skeptics that the atmosphere is too complex to understand. That the sophisticated mathematical models that run on supercomputers cannot possibly come close to the real climate.”

The basic physics of the effect of CO2 is well known and well documented.

All other things being equal a doubling of CO2 gives you somewhere in the neighborhood of a 1 degree C rise in temperature. There isn’t any scientific disagreement on this point.

Where the disagreements lie is in what happens to other things if one raises the temperature 1 degree C.

Does the amount of the water vapor substantially change? Does the water vapor end up as clouds? Does the water vapor end up as snow? If the earth warms by 1 degree ‘C’ do substantial amounts of methane get released from frozen bogs?

The AGW’ers believe that the earth is very sensitive and that a minor change in one variable will cause catastrophic changes in other variables.

They also believe that a major change in demand for fossil fuels will not cause a major change in the cost relationships between fossil fuel energy and non-fossil fuel energy.

To get a doubling of atmospheric CO2 we would need to be emitting at a 60 gigaton a year rate, but we are only emitting at a 30 gigaton a year rate.
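As a back-of-envelope check on those emission rates (the ~7.8 Gt CO2 per ppm conversion and the ~45% airborne fraction below are my assumptions for the sketch, not figures from the comment):

```python
# Back-of-envelope sketch; the conversion factor and airborne fraction
# are assumed round numbers, not from the comment.
GT_CO2_PER_PPM = 7.8      # ~7.8 Gt of CO2 per 1 ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.45  # assumed share of emissions that stays airborne

def years_to_reach(target_ppm, start_ppm, emissions_gt_per_yr):
    """Years to climb from start_ppm to target_ppm at a constant emission rate."""
    ppm_per_year = emissions_gt_per_yr * AIRBORNE_FRACTION / GT_CO2_PER_PPM
    return (target_ppm - start_ppm) / ppm_per_year

# At 30 Gt/yr, going from ~388 ppm to double the pre-industrial 280 ppm (560 ppm):
print(round(years_to_reach(560, 388, 30)))   # roughly a century
```

On these assumptions the current 30 Gt/yr rate takes on the order of a century to reach a doubling, which is the scale of the point being made.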

Somehow, in the magical world of an AGW’er, humanity doubles its demand for fossil fuels, but the price relationship between coal, natural gas, oil, wind, nuclear and solar stays the same, so government intervention is required.

The entire CO2 question is really a political and worldview struggle. The current “Global Warming” group are true religious believers and likely composed of a majority of people who long for a completely different social system. I suspect they desperately hate the petroleum industry and their real goal is to shift financial and political power to a paradigm that fits their definition of acceptability; the financial, economic and human induced harm be damned.

I enjoyed your post, it makes sense to me but even if you are completely wrong it makes little difference to the real struggle at hand. For the sake of real science and openness, the current gate-keepers of the global warming religion must be defeated. Their corruption of the spirit of science must be relegated to the dust bin of history along with the carpetbaggers that ride their coattails.

Has anyone taken a piece of chalk in his or her hands? Well, you know it comes from the trillions of tons of lime (calcium carbonate, chalk) deposits all over the Earth. Want to know about fossil CO2? Just look at those immense deposits: they TELL YOU how much CO2 there was in the past. See? Well, all the rest is but the expression of a dying subculture, of naive ideologies, a product of too much hemp in the 1960s. All those “philosophies” have taken you to the state of affairs that your MSM qualifies as an imminent – and yours alone to enjoy – “armageddon”.

The natural CO2 forcing is shown without any feedbacks, whereas the anthropogenic forcing is shown with feedbacks. This is misleading. Nobody would argue that the natural changes in CO2 are not magnified by feedbacks (try explaining the glaciations without feedbacks). One can argue about the magnitude of the feedbacks.
******

Ice-albedo feedback is the major feedback during glacial transitions. This feedback is having little effect now because the remaining glacial ice in Greenland and Antarctica is at such high latitudes that relatively little sunlight is reflected (hence our relatively stable temps during interglacials). Only when cold periods occur often enough, and/or snowfall increases so that snow cover survives much of the summer at lower latitudes, will the ice-albedo effect become significant. No changes in CO2 are required – the CO2 changes during glacial transitions are the result of ocean in/outgassing from temperature changes, not the other way around.

Some have asked why conventional media outlets don’t pick up on stories like this. Regardless of the merits of the article, the fact is that a good portion of journalists and reporters get befuddled and have their eyes glaze over at the word “feedback”, much less “logarithmic”. In other words, they don’t know how to write up their stories because they forgot all that “stuff” from high school, and they don’t think anyone will read it.

Excellent article, but I still have a problem with the base methodology of deriving, from a mixed gas, even the reduced level of GHG warming attributed to 0.002% increments of the composite mixture.

To call this an empirical derivation is, I think, misleading. There may not be supercomputers involved, but there are still a lot of assumptions of understanding… you could say derivations based on a ‘model’.

My preference is to simply set this term in the equation to 0 (or insignificant), given that we consider the ‘base case’ (0 on the y-axis) to be the suffocation of plant life, and not even consider what happens below ~200 ppm (0.02%).

True, CO2 increases should have a roughly logarithmic effect, but water vapor feedback should ALSO have a logarithmic effect. I think that’s one item missing from their logic when the Catastrophic Anthropogenic Global Warmers present their apocalyptic scenarios.
About 4.6 billion years ago, the sun was roughly 70% as luminous as it is now. Using a linear extrapolation of the luminosity increase, the sun must have been only 75% as luminous as it is now 3.8 billion years ago, yet by that time the world had oceans and life, and a mostly nitrogen atmosphere, as it does now. OBVIOUSLY the feedbacks must be mostly negative, probably due to clouds; otherwise the world’s oceans would have boiled away billions of years ago, or the Earth would have been a frozen, lifeless slab 3.8 billion years ago.
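The 75% figure is just linear interpolation between 70% of present luminosity at 4.6 Gyr ago and 100% today; a one-liner confirms it:

```python
# Linear interpolation of solar luminosity, as the comment assumes:
# 70% of present output at 4.6 Gyr ago, 100% now.
def luminosity_fraction(gyr_ago):
    return 0.70 + (1.00 - 0.70) * (4.6 - gyr_ago) / 4.6

print(round(luminosity_fraction(3.8), 3))   # ~0.752, i.e. about 75%
```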

Here is a short, animated PowerPoint Show with audio to illustrate why the “greenhouse” effect of CO2 is roughly logarithmic. The 15 micron band of radiation from the Earth reached 100% absorption when CO2 rose to nearly 300 ppmv prior to the industrial age. That is why the roughly 100 ppmv added by recent human activities has had a minimal effect. Even if CO2 goes up another 100 ppmv, to 500 ppmv over the next 50 years or so, the effect will be minimal. This is compatible with David Archibald’s charts.

A longer version of the above, showing why human-caused “global Warming” is not a crisis and why water vapor most likely has a net negative feedback, is available here.

Henry @ David Archibald
Sorry, but I am not sure where the information comes from that “carbon dioxide contributes 10% of the effect”.
First of all, how do we know for sure that CO2 is a greenhouse gas?
The trick they use (to convince us) is to put a light bulb on a vessel with 100% CO2.
But that is not the right kind of testing.
You must look at the spectral data. Then you will notice that CO2 has absorption in the 14-15 μm range, causing some warming (by re-radiating earthshine), but it also has a number of absorptions in the 0-5 μm range, causing cooling (by re-radiating sunshine). So how much cooling and how much warming is caused by the CO2? How was the experiment done to determine this, and where are the test results?

“David, as a skeptic who is well versed in the logarithmic nature of CO2 forcing, I dont have a clue which message you are trying to give here. The co2 range of interest is 100 to 1500 ppm, and already Arrhenius confirmed in 1906 that this leads to a 1.2 degree temperature rise for every doubling. The only debate is now about feedbacks: Is miskolczi right that tau is a constant (which means co2 is compensated by less water vapour) or is IPCC richt that that the feedback is positive. This posting is really not helping in this debate.”

Thanks for your contribution. I fear it will do little good, though. David’s argument is similar to many that were being put forward on ‘maverick’ blogs when I first started looking into AGW several years ago. Eventually you learn to filter out this sort of rubbish.

Of course we know that the CO2 effect is logarithmic. There is no-one on either side of the debate who doesn’t acknowledge this. I, like you, am not sure what David’s motivation for this post is (another book perhaps?)

Anyone who thinks that the effect of CO2 in the atmosphere is insignificant should take a look at emission spectra graphs. Steve McIntyre looked into the issue a couple of years ago. See

In this post, he included a graph which showed a comparison of theoretical and observed radiances for a clear atmosphere at 15N, 215W (see Fig 3). Steve makes the following comment:

The large notch or “funnel” in the spectrum is due to “high cold” emissions from tropopause CO2 in the main CO2 band. CO2 emissions (from the perspective of someone in space) are the coldest. (Sometimes you hear people say that there’s just a “little bit” of CO2 and therefore it can’t make any difference: but, obviously, there’s enough CO2 for it to be very prominent in these highly relevant spectra, so this particular argument is a total non-starter as far as I’m concerned. )

The temperature sensitivity of CO2 is clearly not logarithmic over the entire range. The logarithmic relationship appears to hold from about 40 ppm to about 200 ppm. After that it looks more like a 1/x-type relationship. Maybe the whole curve is closer to 1/x. Has anyone tried doing such a plot?
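One reconciliation worth noting: for a purely logarithmic forcing the *slope* is exactly proportional to 1/C, so a 1/x shape in the incremental (per-ppm) effect is consistent with a log shape in the cumulative effect. A quick numerical check, assuming the standard simplified 5.35 ln(C/C0) form:

```python
import math

def forcing(c, c0=280.0):
    """The usual simplified expression, F = 5.35 ln(C/C0), W/m^2."""
    return 5.35 * math.log(c / c0)

# Numerical slope dF/dC at several concentrations vs. the analytic 5.35/C:
h = 1e-4
for c in (100.0, 200.0, 400.0):
    slope = (forcing(c + h) - forcing(c - h)) / (2 * h)
    print(c, round(slope, 6), round(5.35 / c, 6))
# The two columns agree: the slope of a log curve is a 1/x curve.
```

So the 1/x look in an incremental plot may just be the derivative of the log relationship rather than a different functional form.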

Would it not be relatively simple to create practical experiments to show the effect of different levels of greenhouse gasses – or has this already been tried and, if so, what were the results?

The BBC made a simplistic experiment to “prove” that higher CO2 “caused warming” during Copenhagen by heating 2 plastic bottles of “atmosphere”, one with more CO2 than the other, with two electric light bulbs. There was no measurement of the amount of CO2 being used or the relative heat of the 2 bulbs, so the experiment imho was useless. The temperature of both bottles shot up, with the increase in the one with the higher CO2 being slightly ahead of the other. My conclusion was that the main reason for the increase in heat was the light bulb (“sun”?), not the CO2, but the audience appeared to be convinced.

If no experiments have been performed, would it be possible to create containers with exactly the same atmospheres in them, but then add extra ppm of CO2 and/or other trace gasses to certain containers, and then observe what happened naturally to the temperatures of each container over time?

The enhanced greenhouse effect appears to be a misnomer, because the positive feedbacks do not appear to be caused by the CO2, but by the rise in temperature caused by the CO2 (or am I missing something?). This would suggest that any temperature increase causes a further temperature increase (i.e. the sun warms up slightly, causing water vapour to increase and icebergs to recede, etc.).

Systems that exhibit this phenomenon in electronics are described as having hysteresis (they tend to lock at their maxima or minima, and are used to remove noise when moving from an analog to a digital world). It seems bizarre for scientists to characterize a natural system as exhibiting more positive feedback than negative – at least without defining when the negative feedback will kick in.

Any computer model with a little bit of excess positive feedback is going to predict whatever it is modelling is going to hell in a handcart over sufficient iterations. These modellers should be made to repeat before they go to work “any natural system that had unrestrained positive feedback would have destroyed itself before I got to model it”.
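Larry’s point can be made concrete with a three-line iteration: a loop whose feedback gain is below 1 settles to a finite value, while a gain of 1 or more diverges without bound. This is a toy recursion, not a climate model:

```python
def iterate(gain, steps=100, x0=1.0):
    """Apply a feedback of strength `gain` each step: x -> x0 + gain * x."""
    x = x0
    for _ in range(steps):
        x = x0 + gain * x
    return x

# gain < 1: settles toward x0 / (1 - gain); gain >= 1: grows without limit.
print(iterate(0.5))  # converges to ~2.0
print(iterate(1.1))  # enormous after only 100 iterations
```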

John Finn (08:00:45): The large notch or “funnel” in the spectrum is due to “high cold” emissions from tropopause CO2 in the main CO2 band. CO2 emissions (from the perspective of someone in space) are the coldest.

And how is this differentiated from a gas mixture with no CO2, in determining this effect? How do you remove all the other explanations for the observed behaviour of a mixed gas? I have yet to see a ‘controlled experiment’ that cleanly shows this exists, and on the surface the very idea of this massive energy focus appears implausible. Something else is going on, and only the ‘political focus’ on CO2 becomes self-evident the more you dig down into this subject. That is a very interesting (old) story in itself.

“As most plants receive more CO2 than the present day concentration they grow faster and fatter….
I wonder then, given CO2’s logarithmic, ‘diminishing return’ nature, could the above negative feedback of photosynthetic radiation loss, at some higher ppm concentration, ultimately overtake CO2’s GHG effect on an incremental basis?”

I love it. A provable negative feedback effect from increasing CO2. Mann-made CO2 causes GLOBAL COOLING. I guess we will have to keep this one under wraps or the political types will use it when the climate turns cooler for the next 30 yrs.

@ScientistForTruth:
“No – that can’t be right: any cumulative effect must go through the origin: there can be no good reason why positive feedback would kick in at 280ppm.”

Actually, there can. Around that level, most of the moisture in the atmosphere today would have precipitated out as snow due to the cold, i.e. what happens when you are in a death spiral headed toward an ice age. The poles, where most of the warming happens, would be bone dry in the atmosphere; any atmospheric moisture would come from polar ice evaporating due to low vapor pressure. What we need to see is a similar curve for the forcing of H2O (according to whatever law it obeys: linear, log, geometric, etc.), along with what temperature to expect at each given level of atmospheric H2O.

Some say H2O is negative forcing in its sensitivity, others say positive. If it were truly, solidly negative, then an H2O rise would always be the trigger of a new ice age, but ice cores don’t reflect that; instead you see slow CO2 drawdowns causing gradual cooling over thousands of years (excepting freshwater injection events like LD, but those only happen at the end of a glaciation, not the start). If it were truly positive, then an H2O rise like the one we’re seeing would always trigger a catastrophic flooding of the globe, but we don’t see that either; the only catastrophic floodings happen at the end of glaciations (draining Agassiz, filling the Med, the Red, and the Black seas…).

Because atmospheric water vapor both cools during the day and reduces cooling at night, it’s fairer to say that increased water vapor is going to increase volatility in climate in both directions, which I believe is what most middle-of-the-road climate folk believe anyway. The big question in dispute is how sensitive H2O is to CO2 levels.

Is anyone out there thinking?
CO2 is a gas found in the atmosphere. As is true of many molecules, CO2 responds to certain incoming radiation (photons) by jumping to an excited state. I am citing the obvious here. What then? Does the CO2 remain excited forever? No. The excited state decays back to an unexcited state, releasing the quantum of energy (this time without regard to direction).
The absorbed energy CANNOT remain in the atmosphere, in excited CO2 molecules. In the outgoing energy from earth, only a certain portion is found in energy levels to which CO2 responds. It is kind of like a food fight. If one CO2 molecule gets the photon, another one does not. So distributing the energy among more CO2 molecules means (inevitably) that fewer of them (proportionally) will be excited.
This non-equation view should suffice to explain why there cannot be a linear relation between CO2 concentration and infrared absorption and re-radiation.
Looked at from a statistical perspective, such a relation is tailor-made for a logarithmic relation.
And that is what the data shows.
That first 20 PPM of CO2 ALL get excited.
But saturation sets in. In fact I would suggest exploring the logistic curve as an even better model.
Oh, well. I am not being paid by any oil companies, so I have nothing to prove.
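The saturation idea sketched above (the first slices of CO2 soak up most of what the band can absorb) can be illustrated with a simple Beer-Lambert style curve, absorbed fraction = 1 − exp(−kC). The coefficient k below is invented purely to make the shape visible; it is not a measured value:

```python
import math

def absorbed_fraction(c_ppm, k=0.02):
    """Beer-Lambert style saturation: fraction of band photons absorbed.
    k is a made-up effective absorption coefficient per ppm, for illustration."""
    return 1.0 - math.exp(-k * c_ppm)

# The first slices absorb a large share; by a few hundred ppm the band
# in this toy picture is essentially saturated.
for c in (20, 100, 280, 560):
    print(c, round(absorbed_fraction(c), 3))
```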

IMHO it all comes down to feedbacks. A doubling of CO2, by itself, would cause warming of about 1C. The IPCC predicts a much greater increase of up to 6C based on climate models. But all of their climate models assume there are positive feedbacks which amplify the effect of the CO2. Professor Richard Lindzen in his paper “Deconstructing Global Warming” shows that the overall feedback is in fact negative. http://www.globalwarming.org/wp-content/uploads/2009/10/lindzen-talk-pdf.pdf
That paper is based on measurement of radiant energy leaving the earth, not on modeling assumptions.

Positive feedbacks imply an unstable climate that would be prone to going into a runaway effect. A negative feedback implies that the earth has a natural regulating system which tends to stabilize temperature.
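The standard algebra behind these feedback claims is worth writing down: if a feedback returns a fraction f of any warming as further warming, the geometric series sums to ΔT = ΔT0 / (1 − f), which is finite only for f < 1. A minimal sketch using the ~1.2° C no-feedback doubling figure quoted earlier in the thread:

```python
def amplified_warming(dt0, f):
    """Equilibrium warming with linear feedback factor f: dT = dT0 / (1 - f).
    f >= 1 means the series diverges (runaway), so no finite answer exists."""
    if f >= 1.0:
        raise ValueError("f >= 1 means runaway: no finite equilibrium")
    return dt0 / (1.0 - f)

print(amplified_warming(1.2, 0.0))   # no feedback: 1.2
print(amplified_warming(1.2, 0.8))   # strong positive feedback: 6.0
print(amplified_warming(1.2, -0.5))  # net negative feedback: 0.8
```

Note how sensitive the answer is near f = 1: getting from 1.2° C to 6° C requires f = 0.8, uncomfortably close to the runaway threshold.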

Hey everybody! Climate changers found another scary menace: http://news.yahoo.com/s/mcclatchy/20100307/sc_mcclatchy/3444187
They are looking for a way out; however, this too is a lie, as the PDO being negative (lower sea temperatures) increases oxygen solubility in sea water.
Another piece of Ban Ki-moon science and global-government marketing.

This post is misguided on several levels. First, it assumes that the effect of CO2 is saturated over the whole frequency spectrum, which it is not. Some parts of the IR spectrum are indeed saturated, and CO2 increase won’t have any effect. But in other regions of the spectrum, and particularly at the poles, the edges of the CO2 bands are anything but saturated.

If you think of the atmosphere as a blanket with holes in it, you don’t get much improvement in the blanket by stretching a thin film over the intact part, but you do improve by covering the holes.

Second, it’s simply silly to say that the earth came close to catastrophe by going below the threshold for plant growth. What pulls CO2 out of the atmosphere is largely plant growth. It’s self limiting; as CO2 levels drop, plants grow less, and that limits further decrease in CO2.

If no experiments have been performed, would it be possible to create containers with exactly the same atmospheres in them, but then add extra ppm of CO2 and/or other trace gasses to certain containers, and then observe what happened naturally to the temperatures of each container over time?>>

Simulated earth atmosphere with 2.6% water vapor and two different levels of CO2 (one double the other) and measured absorption of LW being transmitted through it. Concluded IPCC estimates were high by 80X. However, my assumption is that this was done at room temp. Other temperature ranges and other water vapor concentrations would give different results. Earth is inconveniently round, spinning, and has the atmosphere on the outside as opposed to a cylinder with atmosphere on the inside.

It is feebleminded groping in the dark, trying to guess at phantoms by empty discourse: whether temperatures will scorch us or not, and what causes what it is we call temperature.
J.C. Maxwell put it this way: “…when, however, there is a general transference of particles in one direction, they must pass from one molecule to another, and, in doing so, may experience resistance, so as to waste electrical energy and generate heat”
“On Physical Lines of Force”, by J.C. Maxwell.

Hi, could someone tell me where I can find the total infrared energy coming to the Earth from outer space which is of a frequency that can be absorbed by CO2, radiated back to earth, or reflected back to space?

These modellers should be made to repeat before they go to work “any natural system that had unrestrained positive feedback would have destroyed itself before I got to model it”.

Now that is funny…and sad. Difference between an engineer who has to make something work and an academic that merely has to satisfy his “customers” desire for results in a certain form. What if we added a little feedback into the funding cycle? Observable predictions result in more money, missed or unexplained observations result in less money? Would that have runaway negative feedback?

Larry – Interesting comment. I’ve always thought there was something strange about how things were framed in terms of forcings, feedbacks, positives, negatives, fasts and slows. Perhaps someone was trying to avoid the simpler but peculiar-sounding formulation… warming is going to cause warming. What I’d like to say is that warming causes more humidity, more humidity will cause more frequent rains, more rain presupposes more clouds, and even the IPCC admits that the effects of clouds on climate are poorly understood. Ergo, dire predictions are less than plausible and certainly uncertain.

You are correct. A system which had a never-ending positive feedback loop would essentially self-destruct. Most sane people realize that the climate of the earth is not such a system. It is known that the warming effect of CO2 is logarithmic and not terribly significant, and it is now postulated that water-vapor feedback is actually negative rather than positive, so the climate seems to have a thermostat, so to speak.

“As most plants receive more CO2 than the present day concentration they grow faster and fatter. In order to do that they must be absorbing solar radiation and converting it into potential chemical energy, (fats and sugars, etc.), at a faster rate. Such conversion by photosynthesis could be represented as ‘lost heat’ in the earth radiation budget in that radiation that arrives at the surface then ‘disappears’ from the equation.”

Yes, photosynthesis is strongly endothermic and increasing concentration of CO2 increases photosynthesis, so more cooling. Transpiration is important (latent heat of evaporation) but decreases somewhat with increasing CO2 as stomata close up. The plant canopy couples its cooling to the surrounding air by conduction, so to all air molecules, not just the tiny CO2 component.

Everyone knows that it’s cooler on a grass lawn than a concrete or asphalt yard. Some buildings deliberately grow grass on their roofs for their cooling effect.

Of course, all this endotherm becomes exotherm when the vegetation is burned and it reverts again to CO2 and water.

If people want carbon capture and storage, with cooling to boot, they could simply plant fast-growing trees, cut them down after 30 years and store the remains (including as the structure in permanent buildings). Much simpler than trying to do it on a coal-fired power plant.

David, to me an illuminating additional graph would show the theoretical cumulative radiative forcing of H2O and CO2 based on the assumption that incremental CO2 forces additional atmospheric H2O, i.e. a stacked version of your last graph above (but ignoring other H2O feedback effects such as cloud effects and latent heat of evaporation, etc.), ranging from 20 to 400 ppm CO2 and beyond. I assume the forcing effects of CO2 and H2O are individually logarithmic, with the H2O part being larger. Such a graph would show the effective baseline around which the other feedbacks are hypothesised to force reality. Of course my request assumes we know the pre-industrial level of H2O in the atmosphere and its relationship to temperature.

… Over the past 140 years the British weather observatory situated in the Himalayas revealed a temperature drop of .4 degrees.

Think about it. This is the data you can trust. When a meteorological station is located far from urban environment and volcanic activity, when people recording the measurements are not financially interested in altering them, and when they don’t pick the lowest temperature dip as a starting point for comparison, suddenly there is no “global warming” and never was.

For this reason, and for many other equally important reasons, the United Nations should be boycotted and abolished. It is a criminal organization that causes enormous harm to freedom, culture, and civilization.

It was established before AGW ideology that CO2 bandwidths are fixed at between 7 and 8% of outgoing atmospheric energy. Outgoing atmospheric energy is anything from 1-5% of the radiation budget (mainly from soils) that makes itself available to CO2. Measured as temperature, CO2 capture is 0.15° C, certainly not 3° C.

Only the Stefan-Boltzmann equation could increase this 8% budget artificially to an absurdly higher figure to give 3-5° C in the near future (the basis on which the calculation was made, due to anthropogenic CO2 emissions).

In fact there are many poorly applied equations used to contrive and exaggerate the greenhouse effect in order to increase the alarm and produce a hypothetical future 3-5° C increase based on anthropogenic emissions. True, deserts could give off more radiation – 10% – but matter at 15° C, which is quite cool, thermalises so as not to radiate beyond 1% of its energy, besides which at those temperatures IR radiation bypasses CO2 – especially in desert regions.

The temperature difference is explained by air pressure and the ideal gas law in a closed system. The atmosphere isn’t a closed system – the experiment would have to be 350 ppm in a vessel and 450 ppm in a larger vessel to create the same pressure. Then the experiment (as tried by Ångström) would reveal fairly objective results. The external heat source would have to correspond to earth temperatures between -40° C and 45° C.

“As I understand the process at the primary sampling station for CO2 at Mauna Loa, a sampling tube is purged with inert gas and then a sample of air is drawn.

This sample is measured spectrographically and the daily results are added; has anyone ever established that the purging is fool-proof?

Has any study ever investigated the accuracy and reliability of the sampling process?”

Mauna Loa is the world’s largest volcano, and a very active one. Ever heard of how many gigatons of CO2 volcanoes put out?

Mauna Loa used to have pineapple plantations on its slopes near the laboratory, but these have dwindled away since the 1950s. Anyone know that pineapple plants, using the unusual CAM photosynthesis pathway, are among the world’s most efficient sequesterers of atmospheric CO2? The increase in ambient CO2 when removing pineapple plantations is well established.

Mauna Loa must be one of the worst places on earth to establish a benchmark for CO2 measurements. Unless you had a particular agenda, of course.

The very premise of this article, that the 30° C increase in surface temperature is explained by the greenhouse effect, is wrong. This difference is due to convection. It is the difference between the surface temperature (+14° C) and the temperature at the lower boundary of the stratosphere (-18° C), since there is no convection above this altitude. But atmospheric convection is adiabatic: the same body of air ascending to, say, 10 km will expand due to the pressure drop and so get cooler without loss of its heat content. This adiabatic cooling has nothing to do with trapping of infrared radiation; it is simply the laws of gas expansion. What Wood’s experiment really showed is that the greenhouse effect does not operate in real greenhouses; the only heating is due to suppression of convection. Does it operate in the real atmosphere? Probably, yes, but trapping of heat will only enhance convection and so enhance convective cooling. No real experiment with the real atmosphere is possible, but some observations indicate that the main heat-trapping gas is water vapour: the day-to-night temperature difference is much higher in deserts, where air humidity is near zero, than in wet regions. During the night convection is weak or absent and most cooling is radiative cooling, and it seems the only difference is water vapour content. CO2 does not enter into this effect, and its contribution is not known and may be immeasurably small.
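For what it is worth, the adiabatic cooling rate this comment leans on can be checked in a few lines: for a dry parcel it is Γ = g/c_p, about 9.8 K per km, using standard textbook constants (the observed environmental lapse rate is smaller, roughly 6.5 K/km, largely because of moisture):

```python
g = 9.81      # m/s^2, gravitational acceleration
c_p = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

# Dry adiabatic lapse rate in K per km
gamma = g / c_p * 1000.0

def parcel_temp(t_surface_c, z_km):
    """Temperature of a dry parcel lifted adiabatically to altitude z_km."""
    return t_surface_c - gamma * z_km

print(round(gamma, 2))                   # ~9.77 K/km
print(round(parcel_temp(15.0, 5.0), 1))  # about -34 C at 5 km
```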

It absorbs radiation at 13.7-16.3 microns with a peak at 15 microns, yet radiation on average leaves earth at 10 microns, which equates with 15° C, or 288 K. 15 microns equates with subzero temperatures that can be found at the poles, so heat capture by CO2 in the atmosphere is a rather rare event, and is fixed at around 6-8% of atmospheric thermal energy, achieved by the first 100 ppm, where its absorption window closes, well outside of normal temperatures. It is true that a CO2 molecule’s stretching mode would allow it to transfer energy to other atmospheric molecules, such as the GHG water vapour, but this requires so much energy that it doesn’t occur even at 300 K within the CO2 absorption bands, and there are some 3,000 other molecules apart from CO2 in a given volume of air, making collisions between thermally excited CO2 molecules very unlikely. Molecules of like kind are more efficient at transferring energy to one another. In the absence of such, thermal degradation takes place very quickly (a billionth of a second), so vibrationally excited CO2 thermalises very quickly with oxygen and nitrogen.
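The 10-micron and 15-micron figures in this comment can be sanity-checked against Wien’s displacement law, λ_peak = b/T with b ≈ 2898 μm·K:

```python
WIEN_B = 2898.0  # Wien's displacement constant, in micron-kelvin

def peak_wavelength_um(t_kelvin):
    """Wavelength (microns) at which a blackbody at T kelvin emits most."""
    return WIEN_B / t_kelvin

def temp_for_peak(wavelength_um):
    """Blackbody temperature (kelvin) whose emission peaks at this wavelength."""
    return WIEN_B / wavelength_um

print(round(peak_wavelength_um(288), 1))          # ~10.1 um for a 15 C surface
print(round(temp_for_peak(15.0) - 273.15, 1))     # a 15 um peak implies about -80 C
```

So the 288 K / 10 micron pairing checks out, and a body whose emission peaks at 15 microns would indeed be far below freezing.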

“As a non-scientist, can anyone please explain why there is no convective heating
of the atmosphere? Are thermals due entirely to CO2 in the deserts?”

Richard, I fly small airplanes in the desert, and CO2 has nothing to do with thermals. You get thermals that make you tighten your seat belts because there are few clouds. The sun differentially heats the ground depending on local albedo, e.g. black highways are hotter than vegetation. This sets up convective cells. Air rises over the hot spots on the ground and settles over the cooler ground spots. Makes bumpy air.

In places like Florida, where there is a lot of humidity, clouds form where the updrafts take the water up to the altitude at which the water vapor turns to liquid droplets.

Shade (08:15:35) :
Would it not be relatively simple to create practical experiments to show the effect of different levels of greenhouse gasses – or has this already been tried and, if so, what were the results?

One problem with these simple experiments is the climate system is not simple at all.

Richard Telford (01:00:29): “I don’t know if this post was supposed to be misleading and confusing, but it certainly is.”

Obviously, at least one person has been misled and confused. This is a good thread, and perhaps, based on reader comment, the post can be supplemented with additional material that will lessen Mr. Telford’s confusion. Dismissing his comments out of hand is neither productive nor polite. Clarification seems warranted.

A better link than above and a nice short description of the origins of this discussion

Not a good experiment, IMO. There are escape paths for internal heat other than back through the window, and the Earth isn’t accurately modelable as a small, fully-enclosed greenhouse lined with black cardboard.

Oh, and he’s using sunlight already filtered by the atmosphere as input. But that’s probably just a quibble.

“Any computer model with a little bit of excess positive feedback is going to predict whatever it is modelling is going to hell in a handcart over sufficient iterations. These modellers should be made to repeat before they go to work “any natural system that had unrestrained positive feedback would have destroyed itself before I got to model it”.”

Generally, positive feedback is a very bad thing as it causes instability and oscillation. Biological systems have strong negative feedbacks otherwise life would have died out long ago.

A small amount of positive feedback can sometimes be accommodated but there is then need for ‘intelligent’ control, monitoring and possible intervention to ensure that a runaway problem doesn’t develop. We run incandescent lightbulbs from voltage sources, relying on a NEGATIVE feedback mechanism – as the filament heats, its resistance increases, reducing the current and so throttling back on the I^2R heating losses. Equilibrium is quickly established, and the system is intrinsically stable even if one varies the voltage. Not so with driving an incandescent lightbulb with a current source where the system then has POSITIVE feedback: as the filament heats its resistance increases so I^2R heating losses increase, so it gets hotter, so its resistance increases more, so I^2R losses increase even more. Even with this regeneration, there will be a low value of current at which (a not-very-stable) equilibrium will be attained. However, there is a critical current above which the system becomes unstable and thermal runaway ensues until the filament blows. Depending on thermal inertias, a small glitch that takes the current momentarily over the critical current can prove fatal to the system by pushing it into a region of instability from which there is no recovery (without external intervention). So, as a designer, you would never knowingly run a filament lamp from a current source. As a designer, you wouldn’t design a biosphere with positive feedbacks either if you wanted it to be stable.
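The lightbulb argument above can be turned into a crude numerical sketch. Everything here is a lumped toy model with made-up units (resistance rising linearly with temperature, linear heat loss to the surroundings); it only illustrates why a voltage drive self-stabilises while a large enough current drive runs away:

```python
def simulate(power_fn, steps=20000, dt=0.01, t_amb=300.0, loss_k=0.05):
    """Crude lumped thermal model of a filament. power_fn(R) gives the
    electrical heating for the current resistance R; heat loss is linear
    in (T - T_amb). All units are invented for illustration."""
    t = t_amb
    for _ in range(steps):
        r = 1.0 + 0.004 * (t - 300.0)  # resistance grows with temperature
        t += dt * (power_fn(r) - loss_k * (t - t_amb))
        if t > 1e6:
            return float('inf')        # thermal runaway
    return t

volt = simulate(lambda r: 5.0 ** 2 / r)  # voltage source: P = V^2/R falls as R rises
curr = simulate(lambda r: 1.0 ** 2 * r)  # small current source: below critical, stable
hot = simulate(lambda r: 6.0 ** 2 * r)   # current well above critical: runaway

print(round(volt, 1), round(curr, 1), hot)
```

In this toy model the critical current is where I² times dR/dT equals the loss coefficient, about 3.5 in these units; 1.0 sits safely below it and 6.0 well above, mirroring the voltage-source/current-source contrast described above.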

As Richard Lindzen has remarked, any significant positive feedback in the climate system is a problem for the theist because it would be evidence of ‘UnIntelligent Design’. That’s not compelling for the atheist, but even there one could consider the argument that of all possible earths that could exist, the only ones that can persist are those that don’t exhibit positive feedbacks. Since this earth exists with a flourishing biosphere and has without doubt persisted for a very long time, then this evidence would tend to militate against the presence of positive feedbacks as well. Of course, these are metaphysical arguments, but when someone propounds a crazy idea like AGW it’s worth doing a reality check by taking a look outside the box.

One feedback which has not been discussed here (unless I have missed a comment) is the presumed increase of outgoing CO2 from the soil when the climate is getting warmer. During this winter there was a study whose authors claimed to have quantified this effect: for each degree Celsius there would be 7% extra warming because of the extra release of CO2. According to climate models, a standard value for this feedback has been fixed at 40% extra warming. The difference would be 1.07° C versus 1.40° C.
Thus we have in fact two uncertain factors in the IPCC models: this outgoing CO2, and the clouds.
I wonder if Hans Erren could comment on this.
Larry Huldén
Finnish Museum of Natural History

Richard Telford (04:39:45) :
The natural CO2 forcing is shown without any feedbacks, whereas the anthropogenic forcing is shown with feedbacks. This is misleading. Nobody would argue that the natural changes in CO2 are not magnified by feedbacks (try to explain the glaciations without feedbacks). One can argue about the magnitude of the feedbacks. Perhaps the IPCC has them too high. Perhaps too low.

Well… actually, we would. We dispute not only the absolute value of the “magnification” of the feedbacks, but the direction. As for your glaciation analogy, look up orbital cycles. See if that helps your understanding.

Hi JonesII
Thanks for the ‘Magnetic drain’ link.
Here is a graph showing the huge drop in the intensity of the Earth’s geomagnetic field (GMF, vertical component) at a latitude of 36 degrees south: http://www.vukcevic.talktalk.net/LFC13.htm
At a location east of Concepcion (in the Andes near the Argentinean border) the GMF has one of the largest drops anywhere on Earth (in 1600 it was 54 microtesla; in 2010 it is 14.6 microtesla), i.e. in 1600 the GMF was 370% stronger than it is today.
You can find the South Atlantic GMF sweep on: http://www.vukcevic.talktalk.net/GandF.htm

Congrats David. That is an excellent, clear and easy to understand exposition, and it illustrates beautifully the nonsense of water vapor positive feedback. I did a check of all the NH high latitude temp stations a couple of years ago, and the 1976 PDO shift shows up very clearly, especially for Alaska and Siberia. In Greenland it gets obscured to some degree by NAO shifts. For Alaska, if you factor out other impacts, like paved runways in the ’90s and UHI increases, you also have the “no further warming”.
Clearly, the process is:
– hypothesize warming
– build a model to prove warming
– tune the model so it gives the desired warming
– invent a mechanism that explains the fudge factor used to tune the model
WV positive feedback is the mechanism for post 2000 warming.
The fudge factors used to make the models backcast are also interesting. Very slow mixing between the surface and deeper layers of the ocean is necessary to give the needed CO2 pulse lifetime, and then the cooling from ca. 1945 to 1975 is aerosols. No one explains the source of the aerosols, nor why they ceased to be effective after 1975. With 2 undemonstrated mechanisms you can make the models backcast pretty well, and then with a third one you can get a “catastrophic” (also undemonstrated) forecast.
Ain’t AGW wonderful??

These modellers should be made to repeat before they go to work “any natural system that had unrestrained positive feedback would have destroyed itself before I got to model it”.
————-
Reply:
You’re absolutely right, Larry. And without going into a whole lot of theoretical or graphical persuasion, what you say is obvious. Why? From a geologist’s standpoint, if there were truly a “tipping point”, the earth would have tipped a long time ago, when CO2 levels were many times what they are today. Yet we don’t see the oceans boiled up into the sky in a soup so thick you could cut it with a knife. And those that propose “global warming” as the cause of earth’s 5 major extinction episodes, rather than catastrophic impacts, are simply ignoring the obvious.

Before those proposing a “tipping point” gain any credibility whatsoever, they need to propose an “untipping point”. Please identify the mechanism. Barring that, it’s all pretty much fantasy. I simply grab a photo of the earth taken from space as Exhibit A. And I’m pretty certain that’s how the earth appeared before the industrialized era began 150 years ago.

“Is anyone out there thinking?
CO2 is a gas found in the atmosphere. As is true of many molecules, CO2 responds to certain incoming radiation (photons) by jumping to an excited state. I am citing the obvious here. What then? Does the CO2 remain excited forever? No. The excited state decays back to an unexcited state, releasing the quantum of energy (this time without regard to direction).”

Hey, hang on: I’m getting uncomfortable with some of the comments here. CO2 is a MOLECULE, like water is a molecule. Molecules can be excited into resonance in a way that atoms can’t. Niels Bohr was right about atoms, for example sodium absorption and emission spectra, but molecules are different: they can have resonances between the atoms that atoms themselves can’t have. When you put your food in the microwave oven, the frequency of the magnetron is tuned to excite a molecular resonance of water, not an atomic one: the non-ionizing radiation is not exciting electrons into different states in the individual atoms. Air is made up of O2, N2, H2O and CO2 molecules, and Ar atoms.

In the IR spectrum of emission from the earth’s surface, the only atmospheric absorption worth considering is via molecular absorption mechanisms, as the atomic absorption/emission lines are way out of the spectrum. If that’s the case, then what Niels Bohr said is not relevant here, as we are talking about a different mechanism – molecular resonance, not atomic transitions.

“Let’s see, usually we are told by AGW skeptics that the atmosphere is too complex to understand.”

This relates to the chaotic nature of the climate. Within Newtonian mechanics the famous three-body problem is chaotic, and the paths of the individual bodies cannot be calculated exactly because of the uncertainty in the exact initial conditions. But it is known that the maximum possible distance between any 2 bodies will be determined by the balance between kinetic and gravitational potential energy.

Similarly, the climate is governed overall by an energy balance, but the detailed configuration of the atmosphere is unpredictable. Hence we know the earth can’t get warmer than the sun, as an extreme example, but we can’t predict how winds and ocean patterns would change.

John Finn… your application of “appeal to authority” is humorous. Entirely fallacious from a logical point of view, but definitely humorous.

Whereas you appear prepared to believe any old rubbish as long as it supports your fervent wish that CO2 should have no effect. Well, suit yourself, but when you find that AGWers are able to ridicule sceptic arguments don’t start whining.

Any computer model with a little bit of excess positive feedback is going to predict whatever it is modelling is going to hell in a handcart over sufficient iterations. These modellers should be made to repeat before they go to work “any natural system that had unrestrained positive feedback would have destroyed itself before I got to model it”.

Which is why I believe they do not show their results for farther than 100 years out. Not that the results would be useful, but they would show if the models are stable.

Funny thing is that you don’t hear much about tipping points from the climate “scientists” these days, although it was all the rage for a while. Perhaps they were afraid someone would tumble to the fact that the models are fundamentally broken.
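The runaway-iteration point can be sketched in a few lines; the gains, step counts and starting anomaly below are illustrative assumptions, not values from any real model:

```python
# A toy iterated model: each step multiplies the current anomaly by a loop
# gain. With gain > 1 (excess positive feedback) the anomaly explodes over
# enough iterations no matter how small the initial perturbation; with
# gain < 1 (net negative feedback) it decays back toward zero.

def iterate(gain, anomaly=0.01, steps=200):
    for _ in range(steps):
        anomaly *= gain
    return anomaly

runaway = iterate(1.05)   # 5% excess positive feedback per step
damped  = iterate(0.95)   # 5% net negative feedback per step

assert runaway > 100      # 0.01 * 1.05**200 ≈ 172: hell in a handcart
assert damped < 0.001     # decays to nothing
```

A system that had run with gain > 1 for geological time would have destroyed itself long ago, which is the commenter's point.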

Murray (10:27:29) :
Very slow mixing between the surface and deeper layers of the ocean is necessary to give the needed CO2 pulse lifetime, and then the cooling from ca. 1945 to 1975 is aerosols. No one explains the source of the aerosols, nor why they ceased to be effective after 1975.

Couldn’t have anything to do with a world war and over 500 atmospheric nuclear bomb tests, could it?

Can Mr. Archibald please include some citations? I have no trouble believing that the IPCC models assume that water vapor feedback effects start at 280ppm. They do much worse. For instance, they parameterize total solar effects as having 1/14th the forcing effect of CO2, when numerous studies show a .6-.8 degree of correlation between solar activity and past temperature change.

The only solar variable included is TSI, which is accorded a forcing of .12 W/M2, compared to 1.66 for CO2.

Providing the citation allows other people to USE the information because they can back it up. Similarly, we need the citation in order to use Mr. Archibald’s information. He is making a pretty important claim. It would be nice to be able to verify and cite it.

Well, I for one have a problem with the entire premise of this essay. For a start, “climate sensitivity” is defined as the permanent increase in the mean global surface temperature of the earth for a doubling of the CO2 abundance in the atmosphere. I googled dozens of papers, in fact dozens of pages of papers, which cite this definition or the equivalent in slightly different words. The IPCC evidently even gives a value for it, namely 3.0 deg C per doubling. Well, actually, to be more accurate they say 3.0 +/- 1.5 deg C, a 3:1 spread in value. I don’t know whether that is a GCM-modelled value or an actual planet-earth observed value.
In any case, they have a temperature versus log CO2 relationship; that’s not the same thing as a “forcing” in watts per metre squared versus log CO2.

Now your curves look very pretty, especially that first one, the MODTRAN logarithmic plot.

Now toss in a 3:1 spread about that nice curve, and then try to convince me that the relationship is still logarithmic; well, more likely to be logarithmic than, say, linear.

Well, the relationship between mean global surface temperature and “forcing” in watts per m^2 is not even linear. The simplest assumption, that the connection exactly follows black body radiation laws, would make the “forcing” go as the 4th power of the absolute temperature. Luckily, if the relationship were logarithmic, that 4th power would merely change the scale.

Unfortunately the thermal processes that remove heat from the earth are a lot more complex than simply black body radiation; and the rate of heat loss globally is not simply related to the mean surface temperature.
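The 4th-power point can be put in numbers. For a pure blackbody, F = sigma*T^4, so the no-feedback sensitivity near the mean surface temperature is dT/dF = T/(4F). These are textbook blackbody values only, not a claim about the real (non-blackbody) earth:

```python
# Blackbody check of the T^4 argument: flux and its inverse slope at the
# nominal mean surface temperature.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0             # nominal mean surface temperature, K

F = SIGMA * T**4                  # emitted flux, ≈ 390 W/m^2
dT_per_Wm2 = T / (4.0 * F)        # no-feedback sensitivity, ≈ 0.18 K per W/m^2

assert 385 < F < 395
assert 0.17 < dT_per_Wm2 < 0.20
```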

But the proof of the pudding is in the climate data over a longer period of time. From your pre-industrial level of CO2 to today is not even one half of one octave of doubling, and from pre-industrial to twice today’s level is less than 1 1/2 doublings.

How about five doublings (well, halvings anyway) that have taken place over the last 600 million years?

Now there you have CO2 dropping from 7000 ppm (25 times the pre-industrial value) down to your 180 ppm low. Yet over most of that 600 million years, the temperature stayed near 22 deg C.

So currently, the earth is in an anomalous cold phase, the likes of which we haven’t seen in over 300 million years; well maybe 260.

It would be nice if somebody was able to show us some believable observational data, either measured or believable proxy data, covering at least one octave of CO2 doubling, which confirms a logarithmic relationship as more likely than a simple straight line linear relation.

It would be even nicer if someone would offer even a simple physics model for why the earth’s mean surface temperature should be expected to vary as the logarithm of the atmospheric CO2. There’s no such physical connection that I am aware of, so I’d like someone to point to such a theory.

Well a lot of people like to point to Beer’s Law, which governs the transmission of light through absorptive media; optical glasses for example.

The net transmission decays exponentially with thickness of the (assumed homogeneous) absorbing medium. The concept is simple: the probability of absorption of any single photon is a constant for a particular wavelength and thickness of the sample, and of course depends on the nature of the material. The probability of the photon passing through many such layers is simply the product of the (transmission) probabilities of all those layers.
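That product-of-layers picture can be verified numerically; the alpha and thickness below are made-up values for illustration:

```python
# The transmission of a stack is the product of per-layer transmissions,
# which is exactly the exponential Beer-Lambert decay exp(-alpha * x).
import math

alpha = 2.0        # absorption coefficient, cm^-1 (illustrative)
x = 3.0            # total thickness, cm
n_layers = 3000    # slice the sample into many thin layers

t_layer = math.exp(-alpha * x / n_layers)   # transmission of one thin layer
t_stack = t_layer ** n_layers               # product over all layers
t_beer = math.exp(-alpha * x)               # Beer-Lambert in one step

assert abs(t_stack - t_beer) < 1e-12        # identical, as the derivation says
assert 0.002 < t_beer < 0.003               # exp(-6) ≈ 0.25% transmitted
```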

Well, Beer’s law works quite well if you measure the transmitted energy with a monochromator. Some “fast cut” long-pass colour filter glasses can easily reduce the transmission to 0.001% in, say, 3 mm of glass, just a few hundred nanometres longer in wavelength than the wavelength which passes 50% (the cut-off wavelength for that sample).

But don’t expect to get only 0.001% of the energy transmitted through that same sample. Replace the monochromator with a wide-bandwidth detector, and you will find orders of magnitude more energy than the absorption curves claim. The problem is that such materials fluoresce, and the energy absorbed by the glass at one wavelength is then re-emitted at a longer wavelength, and emerges out the other side simply shifted to a longer wavelength. You can stack up a whole series of such glasses with increasing cut-off wavelengths, and each will sequentially shift the wavelength so that it passes through the next layer too. So Beer’s law is NOT always followed, in the case where the energy can be re-emitted at longer wavelengths.

The best one can hope for in say visible light absorption filtering, is that the visible wavelength energy that is absorbed by the filter results in heating, and the final emission is in the LWIR spectrum; hopefully a long way away from an area that can influence whatever system the filter is part of.

Well, guess what happens in the atmosphere when GHG molecules, e.g. CO2, absorb specific photon energies in the 13.5 to 16.5 micron range. That energy becomes thermalized and transmitted to the ordinary atmospheric gas molecules, which eventually radiate a thermal continuum LWIR spectrum based on the temperature of the atmosphere, not the temperature of the original emitting surface.

There’s no evidence that such processes conform to Beer’s Law, or in any other way exhibit a logarithmic response as to the resulting mean earth surface temperature rise.

Al Gore’s famous graphs in his book, of ice core temperatures and CO2, both have the same general shape, as he makes clear in his book, and apparently did so in his movie as he waved them in front of the audience and suggested that they were the same. If one were the logarithm of the other they certainly wouldn’t look the same.

The 600 million years of proxy data cited above certainly do not support a logarithmic connection between either mean global surface temperature, or “Forcings” and CO2 abundance in the atmosphere.

Earth’s comfort temperature range, is being controlled by something a whole lot more influential than atmospheric CO2.

For one thing, over the earth’s extreme total temperature range from -90 C, to over +60 C (all of which could be co-existing simultaneously); the maximum possible surface emittance bounded by black body radiation laws, covers a 11 to one range of Watts per square meter (“forcings” if you like). That sets the maximum energy available to be captured by CO2, depending on location on earth. There isn’t any global network that is sampling the value of “Climate sensitivity” all over the earth to arrive at the IPCC’s 3.0 +/-1.5 deg C per doubling of CO2.
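The “11 to one” figure is easy to check from the stated temperature extremes, assuming pure blackbody (T^4) emittance:

```python
# Ratio of blackbody emittance between the warmest (+60 C) and coldest
# (-90 C) surface patches: since F scales as T^4, the ratio is
# ((60 + 273.15) / (-90 + 273.15))**4.
ratio = ((60 + 273.15) / (-90 + 273.15)) ** 4

assert 10.5 < ratio < 11.5   # ≈ 11, matching the "11 to one" claim
```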

And the definition says nothing about CO2 getting any assistance or support from any other GHG; that is the result of doubling CO2; period. Other GHGs each have their own effect and they are unrelated to how much CO2 is present.

To me, the whole idea of “climate sensitivity” and a logarithmic relationship between mean global surface temperature and CO2 abundance in the atmosphere simply doesn’t hold water, either experimentally or theoretically.

But that is just my opinion of course; I’d be happy to learn of either a theory or measured data showing otherwise.

Alan D McIntire (07:56:20) :… OBVIOUSLY the feedbacks must be mostly negative, probably due to clouds,…

Well, if the geologic record of earth’s temperature is correct then temperature definitely hits some sort of a severe negative feedback ‘wall’ around +22C. It’s as though we have a giant air conditioner out there with its thermostat set at +22C.

If not because of water vapor, then what else could it possibly be? We already know that warming happens the least in the tropics and most at the poles, so the key to the negative feedback from water vapor appears to be found in whatever is happening in the tropics: IMO, large quantities of water vapor’s latent heat being convected way up there above most of the GHGs. As the earth gets warmer, tropical conditions would expand to higher latitudes, thus enhancing the negative feedback over a larger area.

There’s a lot wrong with it. The measurements are clearly taken at locations which are contaminated by local effects. There are plenty of places where you can measure 500 ppm, but they do not provide a global representation of CO2 in the atmosphere, i.e. CO2 is not “well-mixed” at those sites. The Beck numbers make no sense whatsoever. According to Beck, there is an increase of around 150 ppm between 1930 and 1940. Where did that lot come from? Modern fossil fuel emissions are about 7.5 Gt carbon per year, which corresponds to ~3.5 ppm. Around half of that is absorbed by natural sinks in the ocean and the terrestrial biosphere (there was a post on WUWT about this a bit back). This gives us an average increase of just under 2 ppm per year. What do you imagine caused the 150 ppm increase?

It gets worse. According to Beck there is a similar rise in the 1820s. There are also annual increases (and decreases) of 50 ppm in a single year. How this paper is being treated seriously by anyone is beyond me.
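The emissions arithmetic above can be checked with the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2 (an assumed round figure):

```python
# Converting annual fossil fuel emissions (GtC) to an atmospheric increment
# in ppm, then halving for the fraction taken up by natural sinks.
GTC_PER_PPM = 2.13                           # ≈ GtC per ppm of CO2

emissions_gtc = 7.5                          # fossil fuel emissions, GtC/yr
gross_ppm = emissions_gtc / GTC_PER_PPM      # ≈ 3.5 ppm/yr if it all stayed
net_ppm = gross_ppm * 0.5                    # roughly half absorbed by sinks

assert 3.4 < gross_ppm < 3.6                 # matches the ~3.5 ppm in the text
assert 1.7 < net_ppm < 1.8                   # "just under 2 ppm per year"
```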

I don’t see how there can be, they are actual measured values, not guesses from Ice cores, Tree Rings, Leaves etc.
But the IPCC & Climate Scientists seem to want to ignore written History, because it is Inconvenient.
Just look at Australia’s history compared to current “Unprecedented” temperatures, droughts & Rainfall etc. They just love to apply that “Unprecedented” when it is obviously not so.

Some parts of the IR spectrum are indeed saturated, and CO2 increase won’t have any effect. But in other regions of the spectrum, and particularly at the poles, the edges of the CO2 bands are anything but saturated.

And the net result of predicted strongest increase of greenhouse effect is..
Antarctic – cooling?

Arctic – AMO oscillation?

Where is the strengthened greenhouse effect fingerprint, when the most sensitive areas on the Earth show nothing or just regular variations, well correlated with oceanic oscillations?

If adding CO2 means more water vapor and accordingly more greenhouse from the water vapor, then this will, in turn create more heating, water vapor and yet more heating. This appears to be a classic system with positive feedback which would limit at some value unrelated to the initial CO2 increase.

David,
As this paper is a pretty fundamental & concrete analysis of the potential effects of CO2, could you please provide references for the various facts at the front end of the paper – such as GHG’s providing 30 deg C or warming, CO2 being 10%, etc – It is important to have those for all to see for independent verification – as all analysis flows from those assumptions. Just setting high standards for work presented here to help improve the case being made & show that skeptics are scientists with the highest & most transparent of standards

John Finn…your application of “appeal to authority” is humorous, entirely fallacious from a logical pov, but definitely humorous.

Whereas you appear prepared to believe any old rubbish as long as it supports your fervent wish that CO2 should have no effect. Well, suit yourself, but when you find that AGWers are able to ridicule sceptic arguments don’t start whining.

I don’t believe I exposed any of my beliefs in pointing out your transparent attempt at “appeal to authority”.

The issues are much deeper than a silly jihad between “Warmers” and “Skeptics”. The travails of the “Steady State” vs. “Big Bang” crowds (fyi, the name Big Bang was coined as an insult, just as denier is used today) are an interesting study in scientific jihad. One thing I do know: models and simulation are only tools that can, at best, aid in understanding. The results of M&S are only truly useful if they have gone through IV&V and if they can predict new things. Ultimately, empiricism has to rule the day.

In ordinary optical absorption theory, a basic assumption is that an incident photon has a certain probability of being captured in passage through some small thickness of material. To particle physicists this is a simple concept: a given potentially absorbing atom/molecule is considered to lie at the center of a target area, its “capture cross-section”, and the assumption is that if the appropriate particle (including a photon) strikes that target area, then the contemplated reaction occurs. Of course it is a statistical probability, so there isn’t really a go/no-go decision made if the target is hit or missed. The units of cross-section are typically “barns”, yes, as in can you hit the broad side of a barn. One barn is 10^-24 square cm, so one might argue that it is one “pecan” square, which makes a pecan pie among the world’s smallest.
In a typical solid, the molecular/atomic density is so high that even a very small capture cross-section gives so many overlapping target areas in a thin section that a photon is bound to hit something eventually.

In the case of non-fluorescent solids, and say visible light spectrum wavelengths, cross-sections can be of sub-atomic dimensions. For nuclear reactions they are even smaller, since the incoming particle has to interact with the nucleus, rather than with the surrounding electron cloud as in atomic or molecular capture events.

The result of optical absorption in non-fluorescing materials is that the captured energy ultimately appears in the form of heat: thermal agitation of the atom or molecule that is communicated to surrounding molecules.
The quantum physicists might refer to such events as “phonon” interactions, a phonon being a quantum of acoustic energy, aka thermal vibration or “heat”.
Solids can have significant specific heats, so the temperature rise caused by a photon capture can be extremely small. As a result, the increase in BB-like thermal radiation from a solid optical medium absorbing photons might be too small to easily detect.

In any case, the result of the overlapping of cross-section targets is that the transmission is tau = exp(-alpha * x), where x is the distance travelled and alpha is the absorption coefficient.
This is essentially Beer’s Law, sometimes referred to as the Beer-Lambert Law.

In a liquid such as ocean (sea) water, alpha is at its lowest, about 10^-4 cm^-1, near 470 nm in the blue region. Over the near-UV (300 nm) to near-IR (800 nm) range, alpha is always less than 0.01 cm^-1, so the light decays to 1/e (37%) in about one metre.
Sea water is most absorptive at 3.0 microns, where alpha has a value of about 9,000 cm^-1, so you get 1/e (37%) transmission after only about 1.1 microns of distance. Over most of the IR range longer than 2.5 microns, alpha is about 1,000 cm^-1, so you get 37% transmission after about 10 microns for most of that range, except in the 3-micron abyss.
The temperature rise situation is similar to solids, but you now have the convective effects of heating to siphon off heat to a greater body of water.
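The penetration depths quoted above all follow from depth = 1/alpha, the distance over which intensity falls to 1/e (37%) under Beer-Lambert decay. A quick check using the alpha values given in the text:

```python
# 1/e penetration depth in microns for a given absorption coefficient in cm^-1.
CM_TO_MICRONS = 1.0e4

def penetration_depth_um(alpha_per_cm):
    return (1.0 / alpha_per_cm) * CM_TO_MICRONS

blue_light = penetration_depth_um(0.01)    # visible band in seawater
peak_3um = penetration_depth_um(9000.0)    # 3-micron absorption peak
typical_ir = penetration_depth_um(1000.0)  # typical far-IR value

assert 0.9e6 < blue_light < 1.1e6          # about one metre
assert 1.0 < peak_3um < 1.2                # about 1.1 microns
assert 9.99 < typical_ir < 10.01           # about 10 microns
```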

So now what about the effects in the atmosphere, where similar molecular absorptions can take place in IR active molecules such as CO2.

Well, once again you get the same absorbed photon energy being thermalized by conduction to the ordinary atmospheric gases of N2 and O2; except at very high altitudes, where the mean free path between collisions is long enough for spontaneous decay of the CO2 excited state to occur.

But now we have a somewhat different situation from the solid or liquid case.

The specific heats of gases are orders of magnitude lower than those of liquids and solids, so the result of that CO2 capture of an LWIR photon from the surface (or elsewhere) is a MUCH GREATER TEMPERATURE RISE compared to that seen in solids or liquids.

The result is that the intensity of the increased LWIR continuum thermal radiation from the ordinary atmospheric gases is much greater than occurs in solids, with their much higher specific heats.

But be careful here: although a given atmospheric region may have a greater temperature increase from LWIR absorption, that very same low specific heat means that the radiation of the thermal emission from that gas region also results in a greater temperature drop for that piece of gas.

So the atmosphere is a very poor “heat” source, in terms of how much thermal radiation it can emit for a given resulting drop in gas temperature, compared to what happens when the solid ground or the ocean surface emits LWIR to the atmosphere. The atmosphere is not a very stiff source of thermal radiation, because of the low molecular density. And yes, when it does radiate, that emission is pretty much isotropic, so only about half of it returns towards the surface.

“That was the seal of approval. Real Climate felt they could no longer ignore it, they had to try to counter it. Thanks guys. Without that sort of feedback, you don’t know how effective you are.”

While I do try to contribute something meaningful to this blog from time to time, I defer for the most part, to the better informed participants on this site for more in depth analysis of the topic.

I do however, understand and appreciate the value of clustersourcing the topic in order to flesh out the fact from the fiction. I know how effective this process is because there is virtually no vested interest on the part of the participants. We have a more altruistic motive for participation here. The betterment of everybody.

The average LWR has not been decreasing, and there seems to be little correlation between outgoing LWR and the troposphere temperature: click

It’s clear from your graphs that neither the lower troposphere temperature (LTT) nor the outgoing longwave radiation (OLR) have been doing anything spectacular for the past 30 years. On the other hand, and this is just an observation from my Mark III eyeball, there does seem to be something of an “inverted, lagged, correlation” between them.

If you invert your temperature graph and move it to the left a bit (about 8 months), it tracks fairly well with the radiation graph. Not a perfect match, and there are surely other things happening, but on the whole, (and pardon this phrasing) it’s “consistent with” the following sequence:

I have no idea how long it takes for the troposphere to “catch up” to what is (mostly) happening on the ground and oceans, but the response certainly can’t be instantaneous. Nevertheless, the “correlation” might be just an artifact of a short data set, or “whatever it is” that the UAH and RSS people do when processing raw satellite data. (I have some misgivings about that.)

Larry, Jay, John Finn, RockyRoad, and others who have discussed positive and negative feedback mechanisms:

Is there anything logically wrong with the following AGW position?

1) There are natural negative feedback mechanisms to absorb CO2.
2) These negative feedbacks kept CO2 in check and/or reduced CO2 from previous extremes without causing a runaway hot earth (unchecked positive feedback).
3) The negative feedbacks are now being overwhelmed by man made emissions that are increasing CO2 dramatically faster than in the past.

The Carbon Change Agent Programme
SAVE MONEY. MAKE MONEY. BECOME A CARBON CHAMPION. Specialist staff from the UEA’s Low Carbon Innovation Centre (LCIC) will be delivering a training programme to develop Carbon Change Agents in businesses and organisations across Norfolk. http://www.uea.ac.uk/nbs/evolve/carbon

7 March: UK Tele: Richard Gray: Row over leaked climate emails may undermine reputation of science
The Royal Society of Chemistry (RSC) and the Royal Statistical Society (RSS) have both issued statements declaring that it is essential that scientific data and evidence compiled by researchers be made publicly available for scrutiny.
Their comments come after the Institute of Physics said that emails sent by Professor Phil Jones, head of the CRU, had broken “honourable scientific traditions” about disclosing raw data and methods…
Dr Don Keiller, deputy head of life sciences at Anglia Ruskin University, however, claims that Professor Jones and his colleagues conspired to withhold information in case it was used to criticise them.
He said: “What these emails reveal is a detailed and systematic conspiracy to prevent other scientists gaining access to CRU data sets. Such obstruction strikes at the very heart of the scientific method, that is the scrutiny and verification of data and results by one’s peers.”
Professor Darrel Ince, from the department of computer science at the Open University, added: “A number of climate scientists have refused to publish their computer programs; what I want to suggest is that this is both unscientific behaviour and, equally importantly, ignores a major problem: that scientific software has got a poor reputation for error.” http://www.telegraph.co.uk/earth/environment/climatechange/7385584/Row-over-leaked-climate-emails-may-undermine-reputation-of-science.html

when will ALL the data, raw and adjusted, plus methods, etc be released? surely it’s time to force complete disclosure.

We get everybody on earth to converge on one spot. The earth gets heavier at that spot and the tilt of the earth’s axis changes (we “tip” it the other way). All we need to do is figure out if we want more tilt or less tilt.

I was reading here ( http://www.aip.org/history/climate/simple.htm ) that viewing the atmosphere as a series of distinct layers as opposed to as a single “slab” increases the likelihood that adding more CO2 will cause more radiative absorption. If CO2 doesn’t “capture” the outgoing radiation in one layer, a higher layer might do so, and still “trap the heat”.

That seems logical on the surface, like having 50 blankets instead of one, but I’m not sure how many layers they would need to propose to make the curve fit the theory. And I can’t imagine a large increase in effect even if it were true. Since greenhouse theory depends on the troposphere being well-mixed, that means the troposphere layer number always has to be 1.

The layer suggestion might hold up better in the stratosphere, but I know next to nothing about the stratosphere. Can anyone enlighten me in this area?
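For what it’s worth, the “50 blankets” intuition has a textbook toy form: in an idealized N-layer grey atmosphere, with each layer fully absorbing in the IR, the surface temperature scales as (N+1)^(1/4) times the effective emission temperature. A sketch of that toy model, not a claim about the real (far from fully absorbing) atmosphere:

```python
# Idealized N-layer grey-atmosphere model: T_surface = T_e * (N+1)**0.25.
T_E = 255.0   # effective emission temperature of the earth, K

def surface_temp(n_layers):
    return T_E * (n_layers + 1) ** 0.25

one_layer = surface_temp(1)     # ≈ 303 K: already warmer than the observed 288 K
assert 300 < one_layer < 306

# Each added layer warms less than the one before: strongly diminishing returns.
gains = [surface_temp(n + 1) - surface_temp(n) for n in range(5)]
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
```

The diminishing returns per layer are the toy-model analogue of the logarithmic flattening discussed in the article.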

8 March: USA Today: Al Gore’s climate groups unite as he sees ‘massive’ opposition
“There has been a very large, organized campaign to try to convince people that it (global warming) is not real, to try to convince people that they shouldn’t worry about it,” Gore said during an interview on the Norwegian talk show Skavlan to promote his newest book Our Choice: A Plan to Solve the Climate Crisis. Gore said:
In my country, the oil and coal companies spent $500 million last year just on television advertising just on these questions. There are now five anti-climate lobbyists on Capitol Hill in Washington for every member of the House and Senate. So it’s been a very massive, organized campaign.
To bolster their muscle, two groups that Gore founded in 2006 announced Friday that they are merging.
The union of the Washington-based Alliance for Climate Protection and the Nashville-based Climate Project will create, they said, “one of the largest non-profit educational and advocacy organizations in the world.”
The unified group, which will carry the Alliance’s name, will have branches in eight countries, more than 200 staffers in 30 U.S. offices and 3,000 volunteers in 55 countries.
Some of its funding comes from Gore, who won the 2007 Nobel Peace Prize for his warning about climate change. It gets 100% of the proceeds of both his new book — which uses recycled paper — as well as his 2006 best seller, An Inconvenient Truth. http://content.usatoday.com/communities/greenhouse/post/2010/03/al-gores-climate-groups-unite-as-he-sees-massive-opposition/1

@ P Gosselin (02:19:05) : “I’ve read in literature somewhere that CO2 contributes to about 25% of the greenhouse effect, i.e. 7-8°C. Can you cite where the 10% value comes from?”

You ask a good question. A quick search came up with the following, although it gives only 3.6% rather than 10%. Hope it helps.

TABLE 3. Role of Atmospheric Greenhouse Gases (man-made and natural) as a % of Relative Contribution to the “Greenhouse Effect”, based on concentrations (ppb) adjusted for heat retention characteristics.

Completely O/T but I couldn’t help noticing that the “rogue” Himalayan British weather observatory which has recorded global cooling is in Almora. “Return to Almora” is the name of R K Pachauri’s allegedly smutty first novel!

[Beck’s] measurements are clearly taken at locations which are contaminated by local effects. There are plenty of places where you can measure 500 ppm but they do not provide a global representation of CO2 in the atmosphere, i.e. CO2 is not “well-mixed” in the atmosphere. The Beck numbers make no sense whatsoever.

You are providing incorrect information.

You say that “the Beck numbers make no sense whatsoever.” Beck only collated and reported on the data provided by many internationally esteemed scientists, including Nobel laureates, who performed tens of thousands of CO2 measurements.

There were numerous scientists doing the work that Beck reported [all amateurs in those days, although a few did one-off contract work]. Being a scientist meant that your reputation was everything, unlike today, where corrupt degree holders scheme to finagle the system for grant money. If any scientist in the 1800’s was caught fudging data, he was finished.

The CO2 samples taken were typically accurate to ≈1-3%, which equates to about 4-12 ppm when measuring CO2 at 400 ppm. A one percent tolerance is very accurate, even by today’s automated standards, when measuring atmospheric CO2.

Beck reports on six of the locations where CO2 measurements were taken. The only populated location was Liège, a relatively small town in the 1800’s. The other samples were taken in very sparsely populated locations: an island in the Baltic sea, the Giessen weather station, the Baltic sea coast, a high mountain outside of Helsinki, the desolate Ayrshire coast in Scotland, and on fourteen extended ocean crossings on scientific expeditions, from Europe across the Atlantic, to the tropics, Australia, North and South America, the North and South Pacific ocean, Greenland, the Arctic, Spitzbergen, and Antarctica. No samples were taken in large cities or industrial areas.

Thus, the CO2 samples [along with other samples such as ocean pH – which turns out to be the same as today’s ocean pH] were the average readings taken from many unpopulated and very sparsely populated areas in both hemispheres, on mountains, on seashores and on mid-ocean crossings. Contrast those numerous, isolated locations with today’s main CO2 reporting source, located on the Mauna Loa volcano in Hawaii.

And the samples taken were not just a handful. Dr Kreutz took 64,000 separate CO2 readings at the Giessen weather station over two years. Wattenburg used 310 separate sampling stations; other scientists provided similar amounts of CO2 data.

Since measurements began in 1812, Nobel laureates such as Krogh and Warburg, and their colleagues Haldane, de Saussure, Bunsen, Callendar [who selected the lowest CO2 values and deleted all data outside a ± 10% bandwidth], and other well known scientists collaborated in the project. All took copious notes and made detailed drawings of their test apparatus – something Phil Jones and the rest of the alarmist scientists either consistently neglected to do, or they lost the original data.

If Mann, Briffa, the CRU crew and the rest of today’s grant seeking scientists had the rigor of the 19th century amateurs, they would not be despised for their self-serving gaming of today’s climate industry, while claiming their findings are “robust.” But neither would there be a runaway global warming scare.

For more information on Dr Beck’s paper: click [the site is very interactive; click around to find information].

I have a question about the graph showing heating effect per 20 ppm of CO2: a doubling of CO2, by itself with no forcings or feedbacks, is supposed to raise temps by 1°C by the time 550 ppm is reached, yet the graph only shows about 0.4°C of increase over the same period. Why the discrepancy?
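One way to frame the numbers, using the commonly cited simplified forcing expression dF = 5.35 * ln(C/C0) W/m^2 and a no-feedback sensitivity of roughly 0.3 K per W/m^2. Both are standard approximations, neither taken from the graph itself, so this only reproduces the ~1°C side of the question:

```python
# No-feedback warming per CO2 doubling from the simplified log expression.
import math

dF_doubling = 5.35 * math.log(2.0)     # ≈ 3.7 W/m^2 per doubling
dT_no_feedback = 0.3 * dF_doubling     # ≈ 1.1 K per doubling

assert 3.6 < dF_doubling < 3.8
assert 1.0 < dT_no_feedback < 1.2
```

If the graph shows only ~0.4°C over the same interval, it was presumably built with a lower per-doubling sensitivity; without the graph's source data one can't say which.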

@Larry Huldén (10:15:33) :
To my knowledge CO2 levels in the Eemian did not exceed current values, which means that 10 ppm/K is a good number for global CO2-outgassing as response to temperature increase. In other words, nothing to worry about.

Corollary: the dominant cause of CO2 rise is the burning of fossil fuel.

Re: Smokey (Mar 8 14:22): “Dr Kreutz took 64,000 separate CO2 readings at the Giessen weather station over two years.”
Yes, he did. And they are extremely erratic (Fig 5). They vary up and down between about 310 ppm and 550 ppm. They don’t correlate with Beck’s global figure at all.

He even (Fig 8) shows one of his sites with a 100 ppm variation overnight.

The chemical analysis may have been accurate. But they are not measuring global CO2.

The earth’s temperature has stayed between 12 C and 22 C for hundreds of millions of years. Is that a true assumption? If true, wouldn’t a strongly negative feedback system having a dead-band model it? Such a feedback system would servo to 290 K with a +/-5 K dead-band (non-linear region).
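A minimal sketch of that dead-band servo idea, with arbitrary illustrative numbers: no restoring force inside the band, strong negative feedback outside it.

```python
# Dead-band controller: temperature drifts freely between lo and hi, but any
# excursion outside the band is pulled back toward it each step.

def step(temp_k, forcing_k, lo=285.0, hi=295.0, gain=0.5):
    temp_k += forcing_k                  # apply this step's forcing
    if temp_k > hi:                      # above the band: strong cooling
        temp_k -= gain * (temp_k - hi)
    elif temp_k < lo:                    # below the band: strong warming
        temp_k += gain * (lo - temp_k)
    return temp_k

t = 290.0
for _ in range(100):
    t = step(t, forcing_k=2.0)           # persistent warming push every step

assert 295.0 < t < 300.0                 # held just above the band edge
```

Despite a relentless +2 K push per step, the temperature parks near the top of the band instead of running away, which is the "air conditioner with its thermostat set at +22C" behaviour described above.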

Larry, Jay, John Finn, RockyRoad, and others who have discussed positive and negative feedback mechanisms:

Is there anything logically wrong with the following AGW position?

1) There are natural negative feedback mechanisms to absorb CO2.
2) These negative feedbacks kept CO2 in check and/or reduced CO2 from previous extremes without causing a runaway hot earth (unchecked positive feedback).
3) The negative feedbacks are now being overwhelmed by man made emissions that are increasing CO2 dramatically faster than in the past.

These feedbacks refer specifically to the growth of CO2. It could actually be argued that there are both positive and negative feedbacks here. For example, the amount of CO2 produced by fossil fuel burning is equivalent to ~3.5 ppm, but only ~2 ppm is being added each year. It seems as though the system is responding and at least partly offsetting the increase.

However, the CO2 feedbacks (+ve or -ve) simply determine whether we’ll double the pre-industrial level in 2050, say, or 2070, or even 2100. The feedbacks I referred to earlier relate to the forcing feedbacks at a given concentration of CO2. It is over these feedbacks that there is disagreement. To explain:

If CO2 doubles from 300 ppm to 600 ppm then radiative transfer calculations suggest that this will cause the earth to warm by a bit more than 1 deg. That’s the basic temperature rise due to CO2 alone. Now then, because the surface and atmosphere are warmer, it’s possible that increased evaporation will occur. Also, a warmer atmosphere can hold more moisture (water vapour), which might mean that the greenhouse effect is amplified further (water vapour is a ghg) and so we get further warming in addition to that from CO2 alone. That is a simplified explanation of why climate models get warming of 3 deg per CO2 doubling. However, the assumption seems to be that all the extra water vapour goes into warming the planet. I’m not convinced by this.
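The usual way to express that amplification is the linear-feedback relation dT = dT0 / (1 - f), where dT0 is the no-feedback response and f the feedback fraction. A sketch with illustrative numbers:

```python
# Linear feedback amplification of a no-feedback warming per doubling.
dT0 = 1.1                 # assumed no-feedback warming per doubling, K

def amplified(f):
    """Equilibrium response with feedback fraction f (f < 1)."""
    return dT0 / (1.0 - f)

assert 3.2 < amplified(2.0 / 3.0) < 3.4     # f ≈ 2/3 triples the response
assert abs(amplified(0.0) - dT0) < 1e-9     # f = 0: no change
assert amplified(-1.0) == dT0 / 2.0         # net negative feedback halves it
```

This is why the disagreement over f matters so much: the same 1.1 K base response becomes anything from ~0.5 K to 3+ K depending on the assumed feedback.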

I first became convinced that AGW is greatly exaggerated when I found out it relies on positive feedback to magnify any forcing by at least 3. That should surely have meant the earth would have shot off to an extreme, never to return, ages ago. The earth would be either permanently frozen or boiling hot, unless the forcings were miraculously minute. A volcano would have led to an ice age. The heat from a major asteroid hitting earth would have left the earth boiling hot for ages.

I started to say in a previous comment that assuming a magnification of 3 times due to feedback is incompatible with a relatively stable system like the earth’s climate, and started to analyse the effect of feedback combined with a logarithmic forcing due to CO2. Someone pointed out that the effect of CO2 can’t be logarithmic since ln(0) is negative infinity. I would say that it’s approximately logarithmic over a wide range where we are, but goes to a very small linear effect at low concentrations. If there was one molecule of CO2 in the atmosphere, and you added a second, the effect would be almost exactly linear since the second order effects would be tiny.
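The linear-at-low-concentration point can be illustrated with a toy single-line Beer-Lambert absorber. This is only a sketch, not the real radiative-transfer calculation (the roughly logarithmic behaviour of the full CO2 band comes from line wings and band overlap), and the constant `k` below is arbitrary, chosen purely for illustration:

```python
import math

def absorbed_fraction(c_ppm, k=0.01):
    """Toy Beer-Lambert absorber: fraction of IR absorbed at concentration c.
    k is an arbitrary illustrative constant, not a fitted physical value."""
    return 1.0 - math.exp(-k * c_ppm)

# Effect of adding 1 ppm at low vs high concentration
low = absorbed_fraction(2) - absorbed_fraction(1)       # nearly linear regime
high = absorbed_fraction(281) - absorbed_fraction(280)  # saturated regime

print(f"extra absorption from 1 -> 2 ppm:     {low:.6f}")
print(f"extra absorption from 280 -> 281 ppm: {high:.6f}")
# The increment at low concentration is close to k (linear response);
# near 280 ppm it is more than an order of magnitude smaller.
```

The first molecules do almost all the work; later additions compete for photons already being absorbed, which is the saturation effect the comment describes.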

It now seems that the feedback reduces the change to maybe 1/3 of the forcing. Even if there were no feedback opposing changes, and ignoring the relative stability of the earth’s climate, even if we could double the CO2 in the air before it dissolved in the oceans or we went to nuclear power etc, that would mean a rise of about 1.2 degrees Celsius. I think that would make the world a better place, more like the prosperous medieval warming than the cold times of the Great Famine and the Little Ice Age.

When we have accurate temperature measurements over the last century, without “artificial adjustments” and “normalisation”, this would be an interesting calculation:
1) Calculate the forcing due to CO2 over the last 100 years.
2) For various assumed feedback magnifications of the forcing, calculate the effect of CO2.
3) Subtract the effect of CO2 from the measured temperatures to get the temperature as determined by natural variation.
4) Compare the calculated natural variation for the various assumed magnifications.
The AGW theory is that natural variation is small and CO2 effects are large and dangerous.
Given that it cooled from about 1940 to 1970 when industry got going, and then cooled from 1998 to 2010 when CO2 production was at its maximum, I’m sure that large positive magnifications would need more natural variation to fit, and that negative feedback with say 1/3 magnification would need much smaller natural variation. In other words, scary scenarios for the future need a large positive feedback, but that’s incompatible with the actual temperatures and CO2 of the 20th century.
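A rough back-of-envelope sketch of that calculation is possible. Everything here is an assumption for illustration: the approximate century-endpoint CO2 concentrations, the simplified Myhre et al. forcing expression F = 5.35 ln(C/C0), a ~0.3 K per W/m² no-feedback Planck response, and the 0.7° C observed 20th-century warming quoted in the head post:

```python
import math

F_2XCO2_COEFF = 5.35           # W/m^2, simplified Myhre et al. expression
PLANCK_RESPONSE = 0.3          # K per W/m^2, approximate no-feedback sensitivity
C_1900, C_2000 = 295.0, 370.0  # ppm, approximate century endpoints (assumed)
OBSERVED_WARMING = 0.7         # K over the 20th century, as quoted above

forcing = F_2XCO2_COEFF * math.log(C_2000 / C_1900)   # roughly 1.2 W/m^2

for label, magnification in [("negative feedback (x1/3)", 1 / 3),
                             ("no feedback (x1)", 1.0),
                             ("strong positive feedback (x3)", 3.0)]:
    co2_effect = PLANCK_RESPONSE * magnification * forcing
    natural = OBSERVED_WARMING - co2_effect
    print(f"{label:30s} CO2 effect {co2_effect:+.2f} K, "
          f"implied natural variation {natural:+.2f} K")
```

On these round numbers the x3 case attributes more warming to CO2 than was observed, forcing the implied natural variation to go negative, which is the incompatibility the comment is pointing at.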

Thank you for elucidating so clearly the fact that the CO2 hypothesis is fallible. Considering light’s electromagnetic wave properties, an account of CO2 absorption and radiation in the ‘light’ of the spectrum’s wavelengths and the atom’s energy levels may be just as instructive.
Thanks again.

You miss the point I was making. I wasn’t suggesting that the ‘Beck’ measurements were wrong or inaccurate; I was suggesting they were taken from different locations and so were inconsistent, and from locations that were inappropriate.

They are useless in providing any comparison to the current well-mixed levels. Here’s an example from an Excel file of Beck’s data.

In 1843 CO2 was 308.6 ppm
In 1844 CO2 was 400 ppm

Now it’s quite possible these were both highly accurate readings but I doubt if they were taken from the same location. If we took a measurement from the centre of London or Paris or New York I’m sure it would be well in excess of 400 ppm but it would not be representative of global CO2 concentrations.

Oh, great. Another pseudoscientific article. If someone shows me that net downwards forcing (backradiation) is an observed property of the atmosphere, I’ll show a fraudulent representation of physics.
Also CO2 is NOT a strong absorber of IR. CO2 is much poorer at absorbing heat than air. The rate of emission of CO2 is inversely proportional to its rate of absorption. CO2 temperature always LAGS dry air when equal volumes are heated (with CO2 at 100% concentration). CO2 is a poor absorber of heat, period. Compared to water vapor, CO2 is too insignificant to do anything but provide life to the biosphere.

“Larry, Jay, John Finn, RockyRoad, and others who have discussed positive and negative feedback mechanisms:

Is there anything logically wrong with the following AGW position?

1) There are natural negative feedback mechanisms to absorb CO2.”

This is not the way a negative feedback works. A negative feedback simply produces an input to the system with the opposite polarity of the output, and depending on the strength of the negative feedback this might reduce amplification or null it altogether. An example would be: a rising CO2 level leads to a fall in humidity, according to F. Miskolczi’s theory. But the physical mechanism is not that important: what is important is that negative feedback can be independent of absorption of CO2.

“3) The negative feedbacks are now being overwhelmed by man made emissions that are increasing CO2 dramatically faster than in the past.”

A simple linear negative feedback is proportional to the output of the system, so the stronger the output rises, the stronger the negative value fed back. So, no, it can’t be overwhelmed.
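A minimal numerical sketch of that point: in a first-order system with proportional feedback, the feedback term grows with the output itself, so no constant input, however large, "overwhelms" it; the output just settles at input/k. Units here are arbitrary, and the model is only an illustration of the structure, not of the climate:

```python
# Toy first-order system with linear negative feedback:
#   x[t+1] = x[t] + input - k * x[t]
# The feedback term k*x scales with the output, so the system always
# settles at x = input / k rather than running away.

def settle(inp, k=0.1, steps=2000):
    x = 0.0
    for _ in range(steps):
        x = x + inp - k * x
    return x

for inp in (1.0, 10.0, 1000.0):
    print(f"input {inp:7.1f} -> steady state {settle(inp):10.1f} (= input/k)")
```

Larger inputs produce proportionally larger outputs, but the feedback is never "overwhelmed" in the sense of losing control; that would require the feedback mechanism itself to be nonlinear or to saturate.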

Of course the feedback might be nonlinear, might have a time lag associated with it (in a physical system the size of the earth, probably a noticeable one on the order of at least days if not months, years or decades), might be a logarithmic response etc…

It’s difficult to say without a model of a physical mechanism. The AGW scientists have never talked much about negative feedbacks – or I didn’t listen; I always hear “positive feedback” from them… Miskolczi, Lindzen and Eschenbach have described negative feedback mechanisms.

For a stable system, the negative feedback must have an amplification factor between 0 and -1, I would say; a greater negative value would lead to rapid and amplifying oscillations (ever greater extremes, which we don’t observe).

A consequence would be that the input perturbation – the warming of the surface through increased CO2 – would not be entirely compensated: The output of the system must be perturbed slightly to be able to feed back a compensation value. So a negative feedback close to -1 would lead to a near-compensation of the “warming due to increased CO2 ‘forcing'”, but not compensate it completely.

(My usual model is a negatively fed-back operational amplifier – I don’t know whether any “credible climatologist” (of the Hansen school of thought; I hope that is not taken as an insult) ever thought about it this way, or whether they even know about negative feedbacks.)

I keep reading that greenhouse gases are the reason the average temperature of the Earth is approximately 33 degrees K (or C) higher than the average temperature would be in the absence of greenhouse gases. Just like Juraj V. (06:54:22), I find this hard to believe. My reasons for disbelief are as follows.

(1) I’ll assume the average temperature of the Earth is 15 degrees C or 288 degrees K. Subtracting 33 degrees C gives a temperature of -18 degrees C or 255 degrees K. Thus, for greenhouse gases to warm the Earth 33 degrees C, the average temperature of the Earth in the absence of greenhouse gases must be 255 degrees K.

I believe the argument for a “greenhouse-gasless” average Earth temperature of 255 degrees Kelvin is based on five assumptions: (a) the Earth’s surface acts like a “grey body” absorber/radiator, (b) the temperature of the surface of the Earth is everywhere the same (both day/night and at all latitudes/longitudes), (c) the average albedo of the Earth is approximately 0.3–which implies an average absorptivity of approximately 0.7, (d) the emissivity of the Earth is unity, and (e) the Earth exists in a directional electromagnetic radiation field having a power density of approximately 1,367 Watts per square meter. The computation of the power absorbed by the Earth is the product of (i) the incident power density, (ii) the average Earth absorptivity, and (iii) the cross-sectional area of the Earth (pi times the radius of the Earth squared). The computation of the power radiated by the Earth is the product of (i) the temperature of the Earth in degrees Kelvin to the fourth power, (ii) the surface area of the Earth (4 times pi times the radius of the Earth squared), (iii) the Stefan-Boltzmann constant, and (iv) the average Earth emissivity. Setting these two powers equal and solving for temperature one gets approximately 255 degrees K for the temperature of the Earth.

A critical flaw with this approach is that for thermal radiation from “grey body” surfaces, the principle of detailed balance (which is a qualitative form of Kirchhoff’s law) requires that the emissivity and the absorptivity of a surface be equal. Thus, it is inappropriate to simultaneously use an absorptivity of 0.7 and an emissivity of 1. When the preceding model is used with the single change that the average absorptivity equals the average emissivity, the Earth’s temperature is approximately 278 degrees K, not 255 degrees K; and provided the absorptivity is not zero, the temperature is independent of the absorptivity.
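Both numbers are easy to reproduce from the energy balance described above. This is a sketch of the radiative-only calculation only, using the 1,367 W/m² solar constant and the spherical geometry from the comment:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1367.0         # solar constant, W/m^2

def equilibrium_temp(absorptivity, emissivity):
    """Balance absorbed power (S * a * pi R^2) against emitted power
    (e * sigma * T^4 * 4 pi R^2); the Earth's radius R cancels."""
    return (S * absorptivity / (4.0 * SIGMA * emissivity)) ** 0.25

print(f"a=0.7, e=1.0 : {equilibrium_temp(0.7, 1.0):.1f} K")  # the usual ~255 K
print(f"a=e          : {equilibrium_temp(0.7, 0.7):.1f} K")  # ~278.6 K
# When a == e, the ratio a/e cancels, so the result is the same for
# any non-zero absorptivity -- the independence claimed in the comment.
```

Whether setting a = e is physically appropriate is exactly what the replies below dispute; the code only confirms the arithmetic of both positions.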

This still leaves 10 degrees C to be accounted for; but not 33 degrees C as is often claimed.

(2) Then like Ron E Seal (06:28:48) and Ken Coffman (07:05:02) have pointed out, the presence of an atmosphere (greenhouse or otherwise) will have an effect on the Earth’s average temperature. In the first place, the surface of the Earth can no longer be treated as a “grey body” radiator. Electromagnetic radiation through and by gases voids the use of grey body radiation laws–conduction and convection must be taken into account. I believe such computations are extremely complex–especially for a non-inertial (rotating) Earth. As such, it seems eminently reasonable that an atmosphere like the Earth’s but devoid of all greenhouse gases might raise the average surface temperature of the Earth 10 degrees C above what it would be in the absence of an atmosphere.

Bottom line, I see little or no justification in Mr. Archibald’s statement: “The greenhouse gasses keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere…“

Hmmm … I sometimes wonder if these posts (head post + ensuing discussion) aren’t some sort of over-all general competency test … doing a word-search on the volume of text and posts above, I found zero mention of the following terms, which underpin the physics of the ‘radiational forcing’ in which CO2 (and H2O vapor) are intimately involved insofar as the surface-to-space energy budget is concerned.

“Atmospheric window” – a region at/about 10 um that enjoys a clear view from surface to space (save for moderate to heavy overcast/clouds). Coincidentally, the ‘warm earth’ produces a spectral peak in the Planck curve in this atmospheric window, especially at warmer earth surface temperatures. The spectral regions on either side of this window are occupied by (variable amounts of) water vapor and CO2 absorption (which also serve to determine how wide this ‘window’ ultimately is).

“Planck’s curve” – a curve denoting spectral emission strength versus wavelength. Notably, the total energy under this curve is proportional to temperature to the 4th power.

I note that MODTRAN is used in the first graph to depict ‘net downward forcing’ energy; MODTRAN can also be used for calculating ‘upwelling’ LWIR radiation from the earth’s surface, and should internally be using Planck’s law and the ‘spectral opening’ at 10 um to determine the energy budget/energy flow into space.

An interesting experiment for the student: using MODTRAN, plot David Archibald’s graph axes for CO2 (ppm) versus a fixed earth surface temperature. (Is the resulting function logarithmic? What does a delta change in temperature vs total W/m2 upwelling LWIR look like?)

Because of overlaps in the absorbing spectra, calculating the effect of any particular gas is not straightforward. The best way to illustrate this is to look at a hypothetical case or thought experiment.

If CO2 were removed from the atmosphere while leaving all other ghgs at the same concentrations then we would be left with 91% of the current greenhouse effect. However, if all other ghgs were removed leaving only CO2 then 26% of the current greenhouse effect would remain. The CO2 contribution, therefore, is somewhere between 9% and 26% of the total. But (and it’s a big but) if we removed CO2, thus cooling the atmosphere, it’s unlikely that the water vapour concentration would remain constant.

So the warmer it gets, the FASTER the rate at which the earth sends heat to space goes up. Now it is of course much more complex than that, but it is easy to see that the +ve inputs can only raise temperature for so long before the -ve inputs overwhelm them.
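The "faster" here is just the derivative of the Stefan-Boltzmann law: emission goes as T^4, so each extra kelvin buys more outgoing power at higher temperatures. A radiative-only sketch with emissivity 1 (an idealisation, not a full atmosphere model):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def outgoing(t_kelvin, emissivity=1.0):
    """Emitted flux from a surface at temperature t_kelvin."""
    return emissivity * SIGMA * t_kelvin ** 4

# Marginal increase in emission per extra kelvin: d(sigma T^4)/dT = 4 sigma T^3
for t in (255.0, 288.0, 300.0):
    slope = 4 * SIGMA * t ** 3
    print(f"T = {t:5.1f} K: emits {outgoing(t):6.1f} W/m^2, "
          f"+{slope:.2f} W/m^2 per extra K")
```

The marginal damping grows as T^3, which is why a warming input meets progressively stiffer radiative resistance.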

Bravo, Reed Coray! It’s just amazing how many lapses in physical reasoning are encapsulated in the usual, simplistic, radiation-only view of Earth’s thermodynamics. You’d think that those who call themselves climate scientists had never heard of enthalpy, or don’t understand that evaporation from the oceans necessarily cools the surface. All this without even touching the misguided notion of “positive feedback.”

One of the things I’m finding interesting about this thread is that Dr. Svalgaard has not yet turned up to give a thump to Dr. Archibald. One of these days those two Renaissance Men will get together on something they agree on and set us all back on our heels. I’m waiting for the day. In the mean time, I sit and learn.

“Also a warmer atmosphere can hold more moisture (water vapour) which might mean that the greenhouse effect is amplified further (water vapour is a ghg) and so we get further warming in addition to that from CO2 alone. That is a simplified explanation of why climate models get warming of 3 deg per CO2 doubling. However, the assumption seems to be that all the extra water vapour goes into warming the planet. I’m not convinced by this.”

See my comment at 12:41:13. If there is a logarithmic relationship between temperature increase and concentration of water vapor, then how could any small addition of water vapor caused by the additional heat from CO2 be significant?

“I was reading here ( http://www.aip.org/history/climate/simple.htm ) that viewing the atmosphere as a series of distinct layers as opposed to as a single “slab” increases the likelihood that adding more CO2 will cause more radiative absorption. If CO2 doesn’t “capture” the outgoing radiation in one layer, a higher layer might do so, and still “trap the heat”.

The layer suggestion might hold up better in the stratosphere, but I know next to nothing about the stratosphere. Can anyone enlighten me in this area?
”
Bill, a multislab situation provides more accuracy because each slab has its own temperature, pressure, and gas concentrations. That doesn’t mean, though, that there’s more chance to capture – in fact, it’s less. As pressure drops, the lines become narrower so there’s less energy that can be collected. Also, each slab has far less material than the whole, so there are still only so many molecules of co2 between here and the top of the atmosphere.

Another factor is that for the troposphere and most all of the stratosphere and thermosphere, there is radiation away from the slab as well as absorbed by the slab. In fact, the lapse rate is related to this as a conservation of energy. Lower down, when surrounded by slabs above and below that are similar in temperatures, there is a bit of equilibrium going on – or so it would seem. Higher up, you’ve got all the radiation coming through the slab (in the LWR arena) coming from below. One can approximate with the stefan’s law grey body concept. Stefan’s law is for radiation in a hemisphere and there are two hemispheres – the outbound and the inbound. It’s got to radiate equally in both directions and the energy has to balance with what is absorbed, which is essentially only from below as you get higher – as there’s low lwr coming down so there’s got to be a T drop – unless there’s some additional energy coming in.

General comment – I am rereading Tommy Gold’s Deep Hot Biosphere and his chapter on the carbon cycle is now extremely relevant – put very simply, in order for life to exist on Earth, apart from having a massive atmosphere which ensures the presence of liquid water, CO2 is continually extracted from the atmosphere and precipitated into sediments, coral reefs etc. This requires a fresh source of CO2 that Gold proposes comes via the breakdown of hydrocarbons by a deep hot biosphere which then emits, mainly methane, and some CO2.

So the existence of life on the Earth’s surface that is dependent on photosynthesis for its existence, and consumes carbohydrates, actually relies on an even deeper biosphere that feeds on upwelling hydrocarbons whose metabolising products are methane and hence CO2 which feed the surface biosphere.

To paraphrase Obiwan Kenobi, we are in a symbiotic relationship with the deep hot biosphere, surely you must understand that! :-)

“A critical flaw with this approach is that for thermal radiation from “grey body” surfaces, the principle of detailed balance (which is a qualitative form of Kirchhoff’s law) requires that the emissivity and the absorptivity of a surface be equal. Thus, it is inappropriate to simultaneously use an absorptivity of 0.7 and an emissivity of 1. When the preceding model is used with the single change that the average absorptivity equals the average emissivity, the Earth’s temperature is approximately 278 degrees K, not 255 degrees K; and provided the absorptivity is not zero, the temperature is independent of the absorptivity.

This still leaves 10 degrees C to be accounted for; but not 33 degrees C as is often claimed.

(2) Then like Ron E Seal (06:28:48) and Ken Coffman (07:05:02) have pointed out, the presence of an atmosphere (greenhouse or otherwise) will have an effect on the Earth’s average temperature. In the first place, the surface of the Earth can no longer be treated as a “grey body” radiator. Electromagnetic radiation through and by gases voids the use of grey body radiation laws–conduction and convection must be taken into account. I believe such computations are extremely complex–especially for a non-inertial (rotating) Earth. As such, it seems eminently reasonable that an atmosphere like the Earth’s but devoid of all greenhouse gases might raise the average surface temperature of the Earth 10 degrees C above what it would be in the absence of an atmosphere.

Bottom line, I see little or no justification in Mr. Archibald’s statement: “The greenhouse gasses keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere…“
”
Emissivity and the grey body assumption are an engineering approximation. In reality, one should expect the body’s emissivity and absorptivity to be consistent at each wavelength. Note, incoming solar is for a Planck curve of about 6000 K and peaks around 500 nm. The albedo is 0.3. The Earth is radiating outward at a T of near 288 K and it is radiating totally in the longwave IR. The effective emissivity is going to be approximately 1 in the LWIR. Note too that the albedo of 0.3 is a combination of surface and cloud albedo – the actual average surface contribution is around 0.08 while the clouds contribute around 0.22 – so in no way do you have a problem here.

One does have the problem of such things being radiative only – but then the idea is to conceptually explore the conditions associated with such. Around 100 w/m^2 is the convection and water vapor cycle; any increase in T results in increases of convection and evaporation. It’s not that big a deal to assume an average or typical value, or to look at radiative-only as that is a best case (actually worst case) scenario for problems in transfer of heat. Since the surface and most Ts in the atmosphere are triple digit – 200-400 K – one also can deal more with perturbations than having to worry about individual calculations for every variation above or below the mean with T^4 rather than T.

The figure of 1DegC warming for a 3.5W/m^2 increase in Radiative Forcing (which the IPCC makes clear is a forcing at the Tropopause) is for the Tropopause area only.

At the surface, the sensitivity is between 0.095 and 0.15 DegC/W/m^2, depending on the assumption you make for evaporation (there is a wide range of views – the mainstream seems to lie between 2% and 6.5% increase per DegC).

So the same forcing translated to the surface would produce an extremely worrying 0.3 to 0.5 DegC temperature rise, implying a 1% to 3% increase in water vapour.

To maintain the median IPCC forecast of 3DegC at the SURFACE, the implied conditions at the surface are:
1. Evaporation increased between 6% and 20%.
2. Surface Forcing increased by between 21W/m^2 and 32W/m^2

So we have the situation, according to the Proponents, that a doubling of CO2 causes a radiative imbalance of 3.5W/m2 at the Tropopause. This Radiative Forcing translates to an increase in Surface Forcing (at equilibrium – this is not a transient) of between 21-32W/m^2.

In detail, how is this increased forcing maintained? Where is the accounting of it, Watt by Watt?
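The arithmetic behind those figures is just the quoted surface sensitivities and evaporation responses run in reverse. A sketch using the ranges stated in the comment, landing on the ~20–32 W/m² and 6–20% figures within rounding:

```python
TARGET_SURFACE_WARMING = 3.0     # degC, the median IPCC forecast cited above
SENS_LO, SENS_HI = 0.095, 0.15   # degC per W/m^2, the quoted surface range
EVAP_LO, EVAP_HI = 2.0, 6.5      # % increase in evaporation per degC

# Forcing needed at the surface to deliver 3 degC, at each sensitivity
forcing_lo = TARGET_SURFACE_WARMING / SENS_HI   # least forcing needed
forcing_hi = TARGET_SURFACE_WARMING / SENS_LO   # most forcing needed
print(f"implied surface forcing: {forcing_lo:.0f} to {forcing_hi:.0f} W/m^2")

# Evaporation increase implied by the same 3 degC
evap_lo = TARGET_SURFACE_WARMING * EVAP_LO
evap_hi = TARGET_SURFACE_WARMING * EVAP_HI
print(f"implied evaporation increase: {evap_lo:.0f}% to {evap_hi:.0f}%")
```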

The global warming theory does provide a number of testable hypotheses. Generally each component of the following assumptions can be tested. This may be a new explanation of global warming theory for some of you.

1) If CO2/GHGs double, there should be an increased forcing of 4.0 watts/metre2 at the tropopause emission layer which is now 255K or 240 watts/m2 (on average 5 kms up, not the surface);

2) If there is an increase of 4.0 watts/m2 at the tropopause emission layer, temperatures at that layer will increase by 1.2C (according to the Stefan Boltzmann equations).

3) If temperatures increase by 1.2C at the (former level of the) tropopause emission layer, water vapour will increase providing an additional indirect 4.0 watts/m2 of forcing.

4) If there is 8.0 additional watts/m2 of forcing at the tropopause emission layer, another 1.75 watts/m2 of indirect and surface albedo effects will appear in the long run (and another 1.75 watts/m2 of humidity forcing will result from those indirect effects) and, in total, there will now be 11.5 extra watts/m2 of forcing at the emission layer.

5) If there is an extra 11.5 watts/m2 at the layer which was 255K or 240 watts/m2, temperatures at this layer will increase by 3.0C according to the Stefan Boltzmann equation.

6) If temperatures increase by 3.0C at the former level of the tropopause emission layer, the layer itself will increase in height by 461 metres. This newer higher emission layer will still be in equilibrium (over the long-term) with the solar forcing of 240 watts/m2.

7) If the tropopause emission layer is now 461 metres higher, and if the adiabatic lapse rate of 6.5C/km stays constant, the surface temperature will now increase by the same 3.0C.

8) And thus we have 3.0C per GHG doubling of 4 watts/m2 (or an impact of 0.75C/watt/m2).

There is your global warming theory in a nutshell which is not really explained anywhere else like this that I have seen.
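Steps 5–7 can be checked with a few lines of Stefan-Boltzmann arithmetic. This is a sketch using the comment's own numbers (the 240 W/m², 11.5 W/m², and 6.5 C/km are taken from steps 1, 4 and 7 above), and it lands within a metre or two of the quoted 461 m:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
F0 = 240.0          # W/m^2 at the tropopause emission layer (step 1)
EXTRA = 11.5        # W/m^2, total direct + feedback forcing (step 4)
LAPSE = 6.5         # degC per km, assumed-constant lapse rate (step 7)

def layer_temp(flux):
    """Temperature of a layer emitting the given flux (emissivity 1)."""
    return (flux / SIGMA) ** 0.25

t0 = layer_temp(F0)                 # ~255 K baseline (step 1)
dt = layer_temp(F0 + EXTRA) - t0    # ~3.0 C at the emission layer (step 5)
rise_m = dt / LAPSE * 1000.0        # height increase of the layer (step 6)

print(f"baseline layer temperature: {t0:.1f} K")
print(f"warming for +{EXTRA} W/m^2: {dt:.2f} C")
print(f"emission layer rises by:    {rise_m:.0f} m")
```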

We can test all of these assumptions:

1) humidity levels are not increasing at the tropopause emission layer.

2) the adiabatic lapse rate should not be considered as a never-changing entity. The Stefan Boltzmann equations predict that it should increase slightly as the temperatures increase so that the surface should only warm by a little more than half of the tropopause.

3) I have never seen a proof of Myhre’s GHG doubling forcing estimates.

John Finn (15:43:17)
‘The CO2 contribution, therefore, is somewhere between 9% and 26% of the total. But (and it’s a big but) if we removed CO2 thus cooling the atmosphere it’s unlikely that the water vapour concentration would remain constant.’

John Finn (04:44:49) : Thanks for your clarification. I was actually pointing out that, while people here seem quite happy to attribute fluctuations in the climate to solar cycles (and also the LIA), they don’t seem aware just how small the change in energy input needs to be to cause that change. As you point out, the increased energy retention brought on by CO2 doubling is actually bigger than the much touted “it’s the sun, stupid”.

Sergey (09:37:55) :
The very premise of this article – that the 30C increase in surface temperature is explained by the greenhouse effect – is wrong. This difference is due to convection.

Thank you. Convection trumps radiative effects in every complex system. Why this effect is overlooked in the backwards derivation of the magical properties of CO2, I don’t know.

Considering the history and persistence of this line of reasoning, I don’t blame David for putting forward this intellectual argument.

There is only one comment I haven’t made yet, and that is: given the logarithmic effect of CO2, I suppose if we remove it from the atmosphere its effect goes to minus infinity :D … So, like many have noted here, until I see some empirical experiment that shows something interesting – and not some assumption-laden, model-based derivation disguised as an experiment – the CO2 effect is ZERO in my mind and simply does not exist. It is only a small part of the layers of mixed gases of certain densities, the layered fluids that blanket the surface of the Earth, and the phenomenon we are observing should be renamed the ‘Blanket Effect’.

Ben W (05:57:46), HelmutU (06:44:57) and a number of other comments seem to have hit on the biggest problem with the CO2 graphs. David Archibald’s explanation of the diminishing logarithmic effect of CO2 is reasonable; however, accepting claims of a pre-industrial CO2 level of 280 ppm seems to have little foundation.

Splicing under sampled and inappropriate tree ring data to cherry picked UHI contaminated surface station readings produced temperature graphs showing “unprecedented warming”. These were offered as valid reconstructions by advocate scientists who intentionally hid the divergence between the two sets of rubbish data where they overlapped.

Splicing ice core readings from CO2 poor areas of the planet in which CO2 is not evenly mixed in the atmosphere with modern readings taken from the side of an active volcano produced graphs showing rapid increases in CO2. These were offered as valid reconstructions by advocate scientists who have intentionally ignored the large number of direct chemical measurements from the last 200 years that diverged from their ice core proxy data.

That is another of the Climate-gate lies. Dr. Callendar, an early CAGW proponent, apparently successfully sold the idea that we could not trust the laboratory measurements of CO2 in the 1800s. Why? Because they measured atmospheric CO2 not at 280 parts per million but varying all over the place, from 330-440 ppm, as Georg Beck showed.

The AGW Cassandras would rather rely on ice core proxies or chicken entrails instead of lab measurements. Scientists of two centuries ago conducted atmospheric composition studies, and 93,000 measurements from many scientific teams publishing in the scientific journals of the day revealed this variability, and also showed the rises due to the massive Tambora and Krakatoa volcanic eruptions.

We would be puzzled when comparing the Vostok ice core samples with their publications, were it not for Dr. Zbigniew Jaworowski, the IPCC’s past ice core chairman, recognized for his world-leading ice core expertise. He said CO2 readily forms hydrates at modest pressure, as in buried ice. It forms such hydrates and then stops at 280 ppm in the air bubbles in the ice.

He insisted that ice core readings had to be corrected for this effect to get a correct reading. But this is not done, and they have succeeded in convincing you that 280 ppm is the pre-industrial level, when it is not. The Mauna Loa labs’ ‘Loa-gate’ created the image that the Vostok readings are accurate because they matched modern CO2 readings. But in reality, they merged data separated by 83 years to make them match and show a continuously rising curve. It is a pure scandal and bunk.

Reed Coray (15:25:22) :
A critical flaw with this approach is that for thermal radiation from “grey body” surfaces, the principle of detailed balance (which is a qualitative form of Kirchhoff’s law) requires that the emissivity and the absorptivity of a surface be equal. Thus, it is inappropriate to simultaneously use an absorptivity of 0.7 and an emissivity of 1. When the preceding model is used with the single change that the average absorptivity equals the average emissivity, the Earth’s temperature is approximately 278 degrees K, not 255 degrees K; and provided the absorptivity is not zero, the temperature is independent of the absorptivity.

No, the critical flaw with your approach is that the frequency range of the insolation is not the same as the frequency range of the emission. Therefore it is quite acceptable to use an absorptivity of 0.7 and an emissivity of 1 (the correct values).

It seems that about everyone knows that the “greenhouse gas hypothesis” should not be equated to what really happens in an actual greenhouse, because it ignores convection. Well, folks, the “atmospheric greenhouse gas hypothesis” suffers the same problem, IMHO.

“The figure of 1DegC warming for a 3.5W/m^2 increase in Radiative Forcing (which the IPCC makes clear is a forcing at the Tropopause) is for the Tropopause area only…”

At the tropopause (assuming std 1976 atm values), one sees a 3.7w/m^2 decrease in transmitted IR from the surface. For the atmosphere, it seems that the actual average sensitivity is around 0.22 K rise per w/m^2 increase. For 3K that’s about 14 w/m^2 increase. The number comes from the avg values associated with Earth – 33 K rise due to the atmosphere, 288.2K avg surface T, 235 w/m^2 average emission to balance the 235 avg incoming solar, with an absorption of around 150 w/m^2 (which includes cloudy skies, not just clear sky conditions). This also accommodates the avg contribution of convection.

For the co2 doubling, this is 3.7 w/m^2. Assuming a 5 K rise in column T, the absolute humidity should increase by 30%, which corresponds to less than another 3.7 W/m^2 contribution to the needed forcing. At 0.22 sensitivity, we’ve got less than a 1.7 K rise in T, which means the h2o vapor contribution is way too small to generate anything close to what is needed. Consequently, we’re missing almost 3 1/2 deg of the original 5 K presumed temperature increase (for the h2o increase calculation).
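The shortfall claimed here is simple arithmetic on the stated numbers. A sketch: the 0.22 K per W/m² is the comment's own estimated sensitivity, and 3.7 W/m² is taken as its upper bound on the water-vapour contribution:

```python
SENSITIVITY = 0.22   # K per W/m^2, the comment's estimated atmospheric value
CO2_DOUBLING = 3.7   # W/m^2, direct forcing from doubled CO2
H2O_FEEDBACK = 3.7   # W/m^2, the comment's upper bound on extra water vapour

dt = SENSITIVITY * (CO2_DOUBLING + H2O_FEEDBACK)
print(f"total warming: {dt:.2f} K")   # well short of the presumed 5 K

shortfall = 5.0 - dt
print(f"shortfall vs the assumed 5 K column warming: {shortfall:.1f} K")
```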

What happens above the tropopause (and below) is also interesting. Increasing the absorption means increasing the emissivity. The atmosphere radiates more at the same temperature. By 70 to 100 km altitude, note also that the co2 doubling difference is back down to just over 2W/m^2 (or so I seem to recall at the moment). The lapse rate depends now upon the conservation of energy. Increases in ghgs mean increases in emissivity and hence a drop in T due to the need for energy balance.

“??? How? At the surface, warming leads to more water because of evaporation from the surface. But what is the source up there?”

gee nick – better be careful or you’ll become a skeptic.

h2o vapor gets there by the usual method: it’s a lighter-weight molecule so it tends to rise, and it absorbs solar energy so it tends to form a hot air bubble (a skinless hot air balloon). Ultimately, despite carrying copious amounts of energy aloft, it cools off and drops the h2o vapor out of the air parcel to bring it into line with the humidity for the upper temperatures, dropping solid or liquid h2o and permitting the cycle to continue carting up the heat of evaporation.

Bill Illis (17:11:20) :
The global warming theory does provide a number of testable hypotheses. Generally each component of the following assumptions can be tested. This may be a new explanation of global warming theory for some of you.

1) If CO2/GHGs double, there should be an increased forcing of 4.0 watts/metre2 at the tropopause emission layer which is now 255K or 240 watts/m2 (on average 5 kms up, not the surface);

No, the forcing should stay the same at that point but the layer altitude will change.

Great comment, Bill. The thread has turned itself into a pretzel of algorithms to determine exactly when the baby will poop, when it could be just as simple as checking the baby’s diaper using the good ol’ fashioned smell test.

JonesII (06:25:55) :
Wind Rider (06:12:02) :
If something can be confirmed, via directly measurable observational data, e.g. Einstein’s prediction of the effects of gravity on light, confirmed by observations during a solar eclipse
Are you sure? That was only diffraction.

The confirmation of general relativity had nothing to do with diffraction. Diffraction is an optical effect due to light’s wavelike behaviour when passing close to obstacles or through small gaps. The confirmation Wind Rider is discussing was a difference between the effects of gravitational attraction considered as a classical force on a moving projectile, namely the photon travelling at c (Newtonian physics) and that predicted if space were curved (from memory, the latter is something like twice the former, but don’t hold me to that).

The temperature sensitivity of CO2 is clearly not logarithmic over the entire range. The logarithmic relationship appears to range from about 40ppm to about 200ppm. After that it looks more like a 1/x type relationship. Maybe the whole curve is closer to 1/x. Has anyone tried doing such a plot?

A logarithmic curve is a 1/x curve (roughly) over narrow ranges.

Do they still teach algebra in high school?

No need to be insulting, especially as he is right and you are wrong. The OP talked about a range and speculated about “the whole curve”, meaning outside the range. The fact is, a 1/x curve is bounded and a log curve isn’t, making them fundamentally dissimilar, and the OP was talking about the entire range, not a narrow range in which the approximation is valid.
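Both points in this exchange can be seen at once with a small sketch: a 1/x-type curve fitted to match ln(x) in value and slope at one point agrees closely nearby but flattens toward a bound, while the log keeps climbing. The fitting point of 120 ppm is an arbitrary choice for illustration:

```python
# Compare ln(x) with a 1/x-type curve y = a - b/x, matched to ln(x)
# in value and slope at x0 (slope match: d/dx ln(x) = 1/x vs b/x^2).
import math

x0 = 120.0
b = x0                   # slope match at x0
a = math.log(x0) + 1.0   # value match at x0

def recip(x):
    return a - b / x

for x in (100, 120, 150, 1000, 1e6):
    print(f"x={x:>9}: ln={math.log(x):6.2f}  1/x-fit={recip(x):6.2f}")
# Near x0 the two agree closely; far away, the 1/x fit flattens toward
# its bound a (about 5.79 here) while ln(x) grows without limit.
```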

len (17:33:31) :
Sergey (09:37:55) :
“The very premise of this article, that the 30C increase in surface temperature is explained by the greenhouse effect, is wrong. This difference is due to convection.”

Thank you. Convection trumps radiative effects in every complex system. Why the effect seen in the backwards derivation of the magical properties of CO2 I don’t know.

Do the math, radiation is the dominant heat loss route from the surface and the only heat loss route to space.

Considering the history and persistence of this line of reasoning, I don’t blame David for putting forward this intellectual argument.

That’s something of an overstatement, the original posting is largely rubbish!

The forcing equation (apparently due to Willis E) is physical nonsense, so anything derived from it is meaningless.
The graph with the red line going on to 6ºC is a deception, superimposing a graph of ºC/doubling on a graph with an axis of ºC/20 ppm; in reality the red line should increase to ~0.16ºC. Of course that ignores the fact that 6ºC is the top of the possible range (and isn’t all due to CO2 anyway).
And so on with more similar rubbish.

There is only one comment I haven’t made yet and that is given the logarithmic effect of CO2, I suppose if we remove it from the atmosphere its effect goes to infinity :D … so like many have noted here, until I see some empirical experiment that shows something interesting and not some assumption laden model based derivation disguised as an experiment … the CO2 effect is ZERO in my mind and simply does not exist. It is only a small part of layers of a mixed gases of certain densities of the layered fluids that blanket the surface of the Earth and the phenomena we are observing should be renamed the ‘Blanket Effect’.

CO2 represents the large majority of the permanently radiatively active gases in the atmosphere: CO2, 385ppm; CH4, 1.8ppm; N2O, 0.3ppm, i.e. about 99%.

Phil, in what sense are you using the word “permanent”? Do you mean that if you could name each of your lil’ CO2 molecules, you would find the same ones 10 years from now? Or do you mean that CO2 is a permanent gas, always present, even though individual molecules are re-absorbed into the Earth’s recycling system and then reappear sometime later? Isn’t that the case then with water vapor as well? Always present but always being recycled.

CO2 represents the large majority of the permanently radiatively active gases in the atmosphere: CO2, 385ppm; CH4, 1.8ppm; N2O, 0.3ppm, i.e. about 99%.

False. H2O’s concentration is 30 times+ that of CO2, it is approximately equally radiatively active and uniquely able to condense, evaporate, convect, and even take solid form. Sorry, seems H2O is the 800 pound gorilla driving this GHG bus.

Everything about this material reeks of authoritative fraud. Greenhouse gasses do not add an iota of heat to the atmosphere, because the atmosphere is cooled by radiation which goes around them rather than through them, as demonstrated by Lindzen and Choi. Does a gate half open keep half of the sheep in? Blocking half of the wavelengths with greenhouse gasses does not keep half of the heat in the atmosphere.

Furthermore, the temperature of the atmosphere equilibrates with the rate of heat entering the planet from the sun and the rate of heat leaving the planet. The equilibration temperature is totally independent of how heat enters the atmosphere. The heat enters the atmosphere about a hundred times faster through conduction and convection than through radiation. This is why cooling fans are used in electronics rather than relying upon radiation.

“CO2 represents the large majority of the permanently radiatively active gases in the atmosphere: CO2, 385ppm; CH4, 1.8ppm; N2O, 0.3ppm, i.e. about 99%.”

You seem to have left one rather significant component out of your calculation, namely H2O. The two papers I referenced in my comment from earlier today, which constitute most of the scientific effort to actually quantify the contributions of the various “greenhouse gases”, seem to indicate that, if CO2 is significant at all, its impact is likely limited to high latitudes in winter, polar environs, and possibly large desert areas. The common denominator is severely reduced H2O in the overlying atmospheres.

The Evans and Puckrin paper did claim to find an increase of 3.5 W/m2 in their measured values versus preindustrial numbers they arrived at via a computer model, but their data tables indicate that the increase was almost entirely due to differences in readings from the Canadian winter. The values they derived for the summer season were in fact an exact match for what their model showed for preindustrial times. For the summer season the total downwelling longwave radiation was about 270 W/m2, of which only 10.5 W/m2 was attributable to CO2. Even the decidedly warmist authors were forced to comment on how elevated levels of H2O dramatically suppressed the CO2 response. At tropical and subtropical latitudes, where at least theoretically most of the extra evaporation needed to fuel the enhancement of the CO2 signal would occur, the predicted total DWL is well above the level measured in the Canadian summer, which would indicate that in those environments CO2 would contribute only 2-3% of the “greenhouse effect”, and that is for the total CO2 in the atmosphere. Any contribution from marginal increases in CO2 would be reduced proportionately.

Even if we stipulate to the observed 3.5 W/m2 increase the paper claims, the fact that it was present for at most half the year, outside of the polar regions themselves, suggests that the effect hardly represents a global phenomenon. I can’t see many of the citizens of Canada, Siberia, or other northern climes jumping to embrace draconian measures to suppress CO2 emissions based on the notion that CO2 will make their winters less cold.

Re: cba (Mar 8 19:34),“h2o vapor gets there by the usual method”
I know how it gets there. The question is, why does tropopausal warming produce more of it? The air at that level is mostly pretty unsaturated – H2O is just another gas. Why would it move in response to a temperature differential?

red432 (06:56:16) :
How does an individual assess competing claims on an issue of this complexity? You’ve got to have sympathy with the journalists who basically say “whoa, this biologist from Stanford must know what he’s talking about.” Of course with a little historical perspective we can remember that the entire academic world of Geology was wrong about plate tectonics a few decades ago… In the case of AGW even looking at temperatures doesn’t really help that much because “weather is not climate” and in my opinion even if it started decisively heating up again, that still wouldn’t indicate that greenhouse gases emitted by human activity had anything to do with it, necessarily. It’s a vexing question.
============
A few decades ago doctors were wrong about what caused ulcers. Does that mean I shouldn’t trust my doctor?

Phil. (20:24:16) :
Do the math, radiation is the dominant heat loss route from the surface and the only heat loss route to space.
… CO2 represents the large majority of the permanently radiatively active gases in the atmosphere: CO2, 385ppm; CH4, 1.8ppm; N2O, 0.3ppm, i.e. about 99%.

More statements like ‘believe me because I said so’. Why is it that Einstein required ‘gravitational lensing’ around the sun to be observed and documented, while IPCC ‘climate science’ is funded by billions of dollars with gobbledygook justification that can’t even be backed up with rhetorical certainty? It would be nice to be kissed before being … Arrhenius was the first great ‘Aesthetic Luddite Scientist’ and all I see here is more of the same.

I am going to search through the thread for a link to some real empirical data I know isn’t there. I’m sure I would have run into something in the past year but I just keep running into PUD … and it seems to be going septic.

No, the critical flaw with your approach is that the frequency range of the insolation is not the same as the frequency range of the emission. Therefore it is quite acceptable to use an absorptivity of 0.7 and an emissivity of 1 (the correct values).

Phil, I believe you are wrong. The spectral shape of radiation emitted from a black body surface at a temperature “T” degrees Kelvin obeys Planck’s law. By integrating Planck’s law over frequencies from zero to infinity, one obtains a total emitted power that is proportional to “T^4”. In general, if either (a) the emitted spectral shape does not obey Planck’s law, or (b) the integration is not over the interval zero to infinity, the total emitted power is no longer proportional to “T^4”. Thus, to use the equation for total emitted power, Total Power = “a” * “sigma” * “area” * “T^4”, where “a” is the emissivity and “sigma” is the Stefan-Boltzmann constant, the spectral shape of the emitted power must obey Planck’s law.

The emissivity of a surface is the ratio of the power radiated by that surface to the power radiated by a black body at the same temperature and same surface area. In general, emissivity can be a function of frequency. However, the common meaning of a “grey body” is that the emissivity is NOT a function of frequency. Thus, the spectral shape of the power radiated from a grey body also obeys Planck’s law, but with a constant scaling factor between 0 and 1 in the open interval sense. This implies that for a black body and a grey body at the same temperature and of equal surface area, the ratio of grey body emitted power to black body emitted power (the emissivity) is a constant for all frequency intervals. Since Kirchhoff’s law requires that the emissivity and absorptivity be equal, a grey body that absorbs a fraction “a” of the power incident on the body will radiate a fraction “a” of the power radiated by a black body at the same temperature. This statement is true independent of the frequency of the insolation power and independent of the temperature of the grey body.

Since the argument for the 255 degree K Earth surface temperature employs the T^4 law, you are caught between a rock and a hard place. You can use the T^4 law, but then the absorptivity and emissivity must be the same. Or you make the emissivity frequency dependent, but then you can’t use the T^4 law.

John Finn (10:59:36) :
Whereas you appear prepared to believe any old rubbish as long as it supports your fervent wish that CO2 should have no effect. Well, suit yourself, but when you find that AGWers are able to ridicule sceptic arguments, don’t start whining.

Problem is that the AGWers are providing plenty of ridicule, but most of the *refutation* is coming from the sceptical side.

Anything and everything is subject to being ridiculed, but ridicule isn’t refutation. If I said, “Gouache sticks to sharks,” you can ridicule it all day long, but unless you refute it by showing that it *can’t* – because water-based paint dissolves in water – I’ll continue to support my statement.

In a static environment, the answer would probably be “Yes, a little” — but we live in a dynamic environment. CO2 levels were bobbling up and down before humans appeared, and they’ve continued to do so, pretty much independently of whatever contributions we’ve made.

No, the critical flaw with your approach is that the frequency range of the insolation is not the same as the frequency range of the emission. Therefore it is quite acceptable to use an absorptivity of 0.7 and an emissivity of 1 (the correct values).

Phil, I believe you are wrong. The spectral shape of radiation emitted from a black body surface at a temperature “T” degrees Kelvin obeys Planck’s law. By integrating Planck’s law over frequencies from zero to infinity, one obtains a total emitted power that is proportional to “T^4”. In general, if either (a) the emitted spectral shape does not obey Planck’s law, or (b) the integration is not over the interval zero to infinity, the total emitter power is no longer proportional to “T^4″…….

Reed is correct about this. Using the frequency form of Planck’s Law to find the radiated power involves the integral ∫ f³ df / (e^(hf/kT) − 1). Changing to the variable x = hf/kT brings the well-known T⁴ factor outside of the integral, and gives a new integral: ∫ x³ dx / (e^x − 1). If the limits are not (0, ∞), however, each limit will depend on T, as in x₁ = hf₁/kT and x₂ = hf₂/kT, and thus the new integral will be a function of T, and not just a simple number. For example, if you consider only the range of longwave frequencies where hf is significantly less than kT, the power emitted in that range is proportional only to T, not T⁴.
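dr.bill’s last sentence is easy to verify numerically. This sketch integrates Planck’s law over a low-frequency band where hf << kT (the band limits here are illustrative choices, not anything from the thread) and shows that doubling T roughly doubles the band power, rather than multiplying it by 16 as T^4 would:

```python
# Band-limited integration of Planck's law, B(f,T) = (2hf^3/c^2)/(e^{hf/kT}-1),
# using simple midpoint summation (no external libraries).
import math

h, k, c = 6.626e-34, 1.381e-23, 3.0e8  # Planck, Boltzmann, speed of light (SI)

def band_power(T, f1, f2, n=20000):
    df = (f2 - f1) / n
    total = 0.0
    for i in range(n):
        f = f1 + (i + 0.5) * df
        total += (2 * h * f**3 / c**2) / (math.exp(h * f / (k * T)) - 1.0) * df
    return total

# Low-frequency band: hf << kT for these temperatures (kT/h ~ 5e12 Hz)
p1 = band_power(250.0, 1e9, 1e10)
p2 = band_power(500.0, 1e9, 1e10)
print(f"doubling T multiplies band power by {p2/p1:.2f}")  # ~2 (T^1), not 16 (T^4)
```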

Re: dr.bill (Mar 9 02:40),Reed is correct about this.
No, I think Phil is correct. cba explained it here. Kirchhoff’s Law applies over frequency intervals, and an albedo of 0.7 for SW coupled with IR emittance of 1 is quite possible, and indeed true. KL requires only that the IR absorbance also be 1.

Reed’s and your calculation only show that the Stefan-Boltzmann T^4 dependence would not then (exactly) apply. Phil didn’t say it would.

If I understand the above comments, it seems that there is broad agreement that the CO2 GHG effect is logarithmic and therefore gives fairly minor warming. The serious warming depends on a positive feedback loop involving increased atmospheric water vapour, its GHG effect, more warming, more water vapour, etc.

If this were true, initial warming for any reason, such as changes in the earth’s orbit, could trigger catastrophic warming due to water vapour. What about localised runaway warming where massive heat and water vapour is produced by a volcano? The earth would have been roasted to a crisp millions of years ago.

I don’t buy any of this. Water provides negative feedback. It is the climate’s thermostat. It is a GHG, but it also reduces incoming energy (as clouds) and through its phase changes and transportation throughout the atmosphere it gives sophisticated and complex control of heat. The climate scientists choose to ignore most of this because they don’t know how to model it. I don’t blame them for that, but I do blame them for the simplistic nonsense about runaway warming.

I find the constant reference to the 30 degree greenhouse effect very puzzling. Based on the Stefan-Boltzmann law, the Earth as a black body should be about 278 degrees and not the often quoted 258 degrees. Most climate scientists ignore the fact that the Earth is a system involving oceans and the atmosphere, and both are contributing to the so-called greenhouse effect. You cannot just say the earth without the atmosphere would be a certain temperature, because you have no scientifically valid basis for such an assessment. The albedo factor often used refers only to the visible light albedo. More than half of the sun’s radiation is outside the visible range and exhibits a much lower albedo. Further, the visible albedo which has been measured is a variable and seems to be controlled somehow by solar activity. Throw in the fact that the solar constant is not in fact constant, and you can see that any analysis based on the 30 degree warming factor is spurious. There are far too many variables and far too little understanding of the processes involved.

It would be interesting to see an analysis of the entire system, including the fact that solar heating is a three-dimensional phenomenon, with ocean heating occurring up to several hundred meters down. In the tropics, that heat cannot move to the surface as the surface waters are always warmer. So the heat then moves toward the poles where the cooler waters are (in accordance with the second law of thermodynamics, the most often violated principle of physics by climate scientists). In the tropics you will find the so-called greenhouse effect is only about 5 or 6 degrees. If the effect were atmospheric, wouldn’t it be nearly constant around the globe? I would like to see some discussion of these points if there is anyone out there with scientific answers.
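For what it’s worth, the ~278 K and the often-quoted ~255 K figures both fall out of the same Stefan-Boltzmann estimate, T = (S(1 − albedo)/4σ)^(1/4); the only difference is whether the albedo term is included. A quick sketch, assuming a nominal solar constant of 1366 W/m^2:

```python
# Effective black-body temperature of the Earth from the Stefan-Boltzmann
# balance S(1 - albedo)/4 = sigma * T^4, for two albedo choices.
S = 1366.0        # nominal solar constant, W/m^2
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def effective_T(albedo):
    return (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

print(f"albedo 0.3 (measured Bond albedo): {effective_T(0.3):.1f} K")  # ~255
print(f"albedo 0.0 (no reflection):        {effective_T(0.0):.1f} K")  # ~278.6
```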

Nick Stokes (14:44:09) :
Re: Smokey (Mar 8 14:22),
Dr Kreutz took 64,000 separate CO2 readings at the Geissen weather station over two years.
Yes, he did. And they are extremely erratic (Fig 5). They vary up and down between about 310 ppm and 550 ppm. They don’t correlate with Beck’s global figure at all.

He even (Fig 8) shows one of his sites with a 100 ppm variation overnight.

The chemical analysis may have been accurate. But they are not measuring global CO2.

I greatly respect your opinion. So let’s throw out the Giessen data. In fact, let’s throw out all of Beck’s data reconstruction. What are we left with?

We are left with the presumption that the atmospheric CO2 concentration has been rock steady, at the same pre-industrial level for thousands of years. What does this mean?

It means that the LIA, and the MWP, the RWP, and other very significant natural climate changes were not influenced at all by CO2 levels, once again falsifying the repeatedly falsified CO2=CAGW conjecture.

Oh, and how in 1999 it was shown beyond dispute by Richard S Courtney that the after-the-fact, invented, fudged “cooling factors” used in GCMs were admitted by the modellers to be false.
i.e. FRAUD.
All peer reviewed and published in 1999.
– where were you all…
Looking at a Hockey Stick. Dooooh…

Why were the false cooling factors needed in the models? Because the assumed warming mechanism was making too much heat…

For the layman reading this blog. Let me put my spin on the subject .
Lets say that I have a pool water. 50 meters by 50 meters. Over the center of the pool, I have a bunch of rocks weighing between 20 and 50 lbs. I’m going to drop the rocks in no particular order creating different highs and lows of waves reaching the edge of the pool. I also place a rubber ducky about 5 meters from the edge of the pool. As the wave the rocks the rubber ducky. The rubber ducky makes its own little wave. On paper or computer model that the little ducky wave is adding to the source wave in the center of the pool. It looks good on “paper” but they can’t physically measure the ducky wave at the center of the pool. So what happens when I stop dropping rocks. The water becomes calm and at the very end you can look at the rubber ducky and see the little tiny wave it created. The AGW people say “see, if we add more rubber ducks, we will create a tidal wave and the pool will explode.” But the fact remains, the water at the center of the pool is calm and the energy from the rubber ducky didn‘t add any energy to the source point of the wave created by the rock.

I find the constant reference to the 30 degree greenhouse effect very puzzling. Based on the Stefan-Boltzmann law, the Earth as a black body should be about 278 Degrees and not the often quoted 258 degrees.

I really enjoy Phil. Everything he says is usually right. It’s like a puzzle that you have to look at really closely to figure out what the missing piece is.

Like radiative heat loss being the dominant heat loss route and the only heat loss route to outer space – do the math, way bigger than convection. He’s right of course, the point being that convection causes HUGE changes to how much is radiated, and from where, and in terms of how much escapes and how much doesn’t.

Then there’s that whole thing about CO2 being the majority of the “permanently radiatively active” gases in the atmosphere. No need to get into what the definition means, because it is completely misleading; it has nothing to do with the issue. The issue is what gases are in the atmosphere that are radiatively active at any given time, and what are their relative concentrations? Totally different answer.

It is tactics like these, aimed at people to whom the answer is calculated to be slightly over their heads, raising doubt as a consequence, that lead me to discount his answers even on topics where the discussion is well over my head. It is a tactic that I see used repeatedly by strong promoters of AGW, and it raises the same thought in my mind each time I see it.

Just one little question: we are talking about CO2. I am wondering, since the human race is growing fast (nearly 7 billion of us) and we all breathe out CO2, how much of that 280 ppm is human waste?

Would fewer humans mean less CO2?

Keep in mind that humans have replaced other species in vast numbers. That is part of being at the top of the food chain. So, while there are more humans, there are fewer bears, lions, tigers, gators, wolves, etc.

Even so, I suspect large mammal contributions to overall CO2 to be minuscule.

Re: dr.bill (Mar 9 02:40),Reed is correct about this.
No, I think Phil is correct. cba explained it here. Kirchhoff’s Law applies over frequency intervals, and an albedo of 0.7 for SW coupled with IR emittance of 1 is quite possible, and indeed true. KL requires only that the IR absorbance also be 1.

Reed’s and your calculation only show that the Stefan-Boltzmann T^4 dependence would not then (exactly) apply. Phil didn’t say it would.

My note was about the temperature dependence, which can be substantially different than the T^4 behaviour, depending on which frequency slice you’re looking at. There’s no dispute about that, but people tend to forget it, just as they forget that not everything is a blackbody.

Emissivity, albedo, and all those things, are a can of worms. If you really need to know them for a particular problem, you have to measure them in situ and use them under those same conditions. This is what makes them an “engineering approximation”, because there is no way to find them from first principles. They vary not only with the properties of the object itself, but also with a host of other time-and-location-dependent factors. Trying to assign “single-values-for-all-time” to such things for something as temporally and spatially diverse as the Earth is simple nonsense.

I find the constant reference to the 30 degree greenhouse effect very puzzling. Based on the Stefan-Boltzmann law, the Earth as a black body should be about 278 Degrees and not the often quoted 258 degrees. Most climate scientists ingore the fact that the Earth is a system involving oceans and the atmosphere and both are contributing to the socalled greenhouse effect. You cannot just say the earth without the atmosphere would be a certain temperature because you have no scientifically valid basis for such an assessment. The albedo factor often used refers only to the visible light albedo. More than half ….
Usually, it’s the Bond albedo, which is the whole enchilada – UV to IR – not that it is something well measured over the long run. It’s about 0.3, with that breaking down to 0.22 for clouds & atmosphere and 0.08 for the surface (oceans and liquid h2o having very low albedo for light incident at high angles relative to the horizon).

What is being done is the averaged power per surface area of the Earth is being corrected for loss due to albedo reflection, which amounts to about 1/3 of the incoming power. Certainly, with no atmosphere or ghgs in an atmosphere, there’d be no clouds and no liquid water so the albedo would probably be around 0.15 (like Mars and Moon). One can either go with the current albedo for a direct comparison between things or one can inject more uncertainty and try comparing apples and oranges with many varying parameters and no way to grasp the significance of anything.

Considering the effect of the clouds on albedo (and also upon the ‘blocking’ or limiting of the outbound LWR), they are a substantial determiner of our temperature – and undoubtedly primary in a negative feedback control system regulating our planet’s temperature (somewhat along the line of Lindzen’s Iris effect). However, in order to understand and explore what happens, one must isolate the various factors to see how they function. Toss them together in a model and you’ve got GCM garbage, incapable of being understood or verified or falsified.

Nick,

I guess I don’t understand the depth of your question about how h2o gets somewhere, or why you think it’s related to a lapse rate or temperature difference. H2O is a lighter molecule which tends to provide a lower density – which has buoyancy. It also absorbs IR, so it’s going to heat up from the IR – which isn’t in the physical meteorology 101 text. Hot gas expands, reducing density and increasing buoyancy. The lapse rate doesn’t matter; it’s the T differential between moist and dry gas and the overall density difference. That’s undoubtedly driving most of the vertical mixing, and no one objects (that I’ve seen) to the notion that the atmosphere is well mixed. All I can see is that either you’re not understanding something mentioned above, or you are hung up on something else – perhaps more complex, but also perhaps less relevant – that maybe you should introduce.

Toyotawhizguy: “Only the water vapor aspect is well understood, and is widely agreed upon as being positive.”

But is this true? I thought that part of the water vapor aspect was more water vapor available to develop cloud cover which would tend to be negative.

——————-
Yes, clouds are formed from water vapor, but the feedbacks from clouds are very different than for water vapor, thus Climatologists treat clouds as a completely separate entity from water vapor as regards feedbacks. I agree with you about the negative effect, and think it’s most likely that the net effect of clouds is negative, but what do I know?
Water vapor is mostly transparent to sunlight in the visible range, and does absorb a bit of sunlight in the shorter IR bands. But water vapor is the prevalent greenhouse gas, as it absorbs large amounts of long wave IR radiation radiated by the earth. If CO2 forcing produces more water vapor, that aspect (ignoring clouds) is a positive feedback (But how much?).
See this graph for the Solar Radiation Spectrum that shows water vapor’s effect on incoming sunlight.

This graph shows how greenhouse gases absorb earth’s longwave radiation (Notice the extent that the absorption bands for water vapor overlaps with the absorption bands for CO2! Redundancy in the greenhouse effect!)

“A picture is worth a thousand words and a graph is worth a thousand pictures.”

Louis Hissink (04:31:44) :
Nick Stokes (14:44:09) :
Re: Smokey (Mar 8 14:22),
Dr Kreutz took 64,000 separate CO2 readings at the Geissen weather station over two years.
Yes, he did. And they are extremely erratic (Fig 5). They vary up and down between about 310 ppm and 550 ppm. They don’t correlate with Beck’s global figure at all>>

Yes of course, why didn’t I see it before? Tracking diurnal changes in CO2 to understand daily cycle effect of vegetation completely invalidates the guy’s ability to derive an annual trend from the same data.

Hey CRU, GISS, all you guys. Have any of you got daily temperature swings of, say, 10 degrees in your data? Really? You do? Well, I am sad to tell you that this means you are incompetent, your data is invalid, we’re measuring annual trends in tenths of degrees here and you guys have botched it. Delete your data, pick up your last paycheck, and turn out the lights on your way out.

Well, I am also with Larry here, and I must add that up to now nobody has been able to actually show me from a decent experiment (which btw must involve actual sunshine and earthshine, not just a simple heat retention experiment) that CO2 is a greenhouse gas. By definition, for a greenhouse gas the net effect must be warming. The warming must come from the re-radiation of earthshine in the 14-15 um range. But CO2 has a number of absorptions in the 0-5 um range. So CO2 also re-radiates sunshine, which causes cooling. They only recently discovered that CO2 also has absorptions in the UV range, so I do not think that we have ever done any decent testing on the whole spectrum that would involve what I am thinking of. I suspect that what we will find, if we could construct such an experiment, is that it is pretty much even between the cooling and warming of CO2 – in other words, the net effect in W/m3 CO2 [0.04%] per 24 hours is close to zero.

This is based on the fallacious saturation argument, isn’t it? Downward forcing becomes maximized because all the earth’s IR is absorbed by GHGs. But that ignores the fact that the GHGs themselves re-emit the IR, which causes the whole atmosphere to warm. The saturation argument is how you transposed from your first figure showing an increase in forcing to the second one showing almost no change in forcing.

The natural CO2 forcing is shown without any feedbacks, whereas the anthropogenic forcing is shown with feedbacks. This is misleading. Nobody would argue that the natural changes in CO2 are not magnified by feedbacks (try to explain the glaciations without feedbacks). One can argue about the magnitude of the feedbacks. Perhaps the IPCC has them too high. Perhaps too low.

If there was no atmospheric circulation, the so-called greenhouse effect would make the Earth’s atmosphere ~140C warmer than it currently is.

Since, we know that the net “greenhouse” effect is only ~30C of warming… The net atmospheric feedbacks are strongly negative.

We have a fairly good idea of how much the Earth has warmed over the last 150 years (~0.7C)… We have fairly good reason to say that CO2 levels have climbed by ~100ppmv over the last 150 years. If we ascribe all of the warming to increased CO2, we can calculate that the maximum possible greenhouse warming that could result from a doubling of pre-industrial CO2 (275ppmv to 550ppmv) is a bit less than 2C.

Since we know that at least half of the warming over the last 150 years is due to solar variation and/or natural climate oscillations and we know that most of the other half is due to uncorrected UHI, human error and fraud… We can conclude that almost none of the warming over the last 150 years is the result of the 100ppmv increase in atmospheric CO2. Therefore, even less than almost none of the warming that may occur along with the next 100ppmv increment of CO2 will have been caused by the CO2.
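The “a bit less than 2C” figure in the comment above follows from assuming a logarithmic response and scaling the observed warming up to a full doubling. A one-line check using the quoted numbers (0.7 C observed, CO2 rising from ~275 to ~375 ppmv):

```python
# If all observed warming is attributed to CO2 and the response is
# logarithmic, the implied warming per doubling is dT * ln(2) / ln(C1/C0).
import math

observed_dT = 0.7          # C, warming over the last 150 years (quoted above)
C0, C1 = 275.0, 375.0      # ppmv, pre-industrial and current (quoted above)

dT_per_doubling = observed_dT * math.log(2) / math.log(C1 / C0)
print(f"implied warming per CO2 doubling: {dT_per_doubling:.2f} C")  # ~1.56
```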

Hi James. Just like too high a sugar intake, CO2 can get dangerous if it reaches about 8-10% in your direct surroundings, due to dilution of oxygen intake (MAK value = 9000 mg/m3). However, the books say that it only becomes lethal at 20-30%. Your body is used to absorbing CO2 (think of cool drinks).

Some (work) places have set a level of ca. 1% (1000 ppm) as a safe working level.
We know that during the Carboniferous period CO2 ran into the thousands of ppm, causing tremendous growth. So let us stick to 1-2% as safe. I do believe that more (than 400) rather than less is better, i.e. if it can be proved that CO2 is not, or not much of, a greenhouse gas. See my comment at 7:29. CO2 and water vapor are sort of like your father and mother. More CO2 will stimulate growth: better crops and more forests.

“No, the critical flaw with your approach is that the frequency range of the insolation is not the same as the frequency range of the emission. Therefore it is quite acceptable to use an absorptivity of 0.7 and an emissivity of 1 (the correct values).”

Phil, I believe you are wrong. The spectral shape of radiation emitted from a black body surface at a temperature “T” degrees Kelvin obeys Planck’s law. By integrating Planck’s law over frequencies from zero to infinity, one obtains a total emitted power that is proportional to T^4. In general, if either (a) the emitted spectral shape does not obey Planck’s law, or (b) the integration is not over the interval zero to infinity, the total emitted power is no longer proportional to T^4. Thus, to use the equation for total emitted power, Total Power = “a” * “sigma” * “area” * “T^4”, where “a” is the emissivity and “sigma” is the Stefan-Boltzmann constant, the spectral shape of the emitted power must obey Planck’s Law.
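
The claim that integrating Planck's law over all frequencies reproduces the T^4 total is easy to verify numerically; a minimal sketch in pure Python with CODATA constants (the 288 K temperature is illustrative):

```python
import math

# Check that pi * integral of the Planck spectral radiance over all
# frequencies equals the Stefan-Boltzmann sigma * T^4.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k = 1.380649e-23        # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_nu(nu, T):
    """Spectral radiance B(nu, T), W m^-2 sr^-1 Hz^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 288.0   # K, roughly Earth's mean surface temperature
d_nu = 1e10 # Hz step; the thermal peak near 1.7e13 Hz is well resolved
total = sum(planck_nu(i * d_nu, T) for i in range(1, 60000)) * d_nu * math.pi

print(f"Integrated Planck: {total:.1f} W/m^2")
print(f"sigma * T^4:       {sigma * T**4:.1f} W/m^2")
```

Both come out near 390 W/m^2, the figure quoted later in this thread for the surface radiation.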

Which is fine for the emission from the Earth, which occurs above 5μm with an emissivity of ~1; if it emitted any in the visible around 0.5μm then you’d have to take into account the lower value of ε(ν). The insolation spectrum of the Earth and its emission spectrum are over different wavelength ranges, which have different emissivities.

A C Osborn (03:35:46) :
It looks as if Phil is happy to ignore any and all Refutations of his beliefs and states his “Facts” with such conviction he must be a Climate Scientist.

I don’t talk about my ‘beliefs’ here, only facts, and they haven’t been refuted. Reed posted concerning emissivity/absorptivity (see above) while I was asleep (I am allowed to sleep, aren’t I?) and I’ve responded to it.

Sorry James. I made a simple 10x error. The safe working limit is 9000 mg/m3
= 9 g/m3. 1 m3 of air = ca. 1204 g, so 9/1204 × 100 = 0.75%.
1000 ppm = 1 g/kg = 0.1%.
So let us stick to 0.1%-0.2% CO2 as perfectly safe and promoting growth.
We are currently at close to 400 ppm = 0.04%.
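
For readers following the unit conversions: ppm by mass and ppm by volume differ for CO2 because it is heavier than air. A sketch of the same arithmetic (the 1204 g/m^3 air mass is the comment's figure; the molar masses of ~44.01 g/mol for CO2 and ~28.97 g/mol for dry air are standard values):

```python
# Convert the 9000 mg/m^3 working limit into a mass fraction and a
# mole (volume) fraction of air.
limit_mg_m3 = 9000.0   # working limit from the comment
air_g_m3 = 1204.0      # mass of 1 m^3 of air (comment's figure)
mw_co2, mw_air = 44.01, 28.97  # g/mol

mass_fraction = (limit_mg_m3 / 1000.0) / air_g_m3
mole_fraction = (limit_mg_m3 / 1000.0 / mw_co2) / (air_g_m3 / mw_air)

print(f"9000 mg/m^3 is {mass_fraction:.2%} by mass")
print(f"and {mole_fraction:.2%} by volume (~{mole_fraction * 1e6:.0f} ppmv)")
```

By volume, the same limit works out to roughly 4900 ppmv, a bit lower than the 0.75% mass fraction.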

This post is misguided on several levels. First, it assumes that the effect of CO2 is saturated over the whole frequency spectrum, which it is not. Some parts of the IR spectrum are indeed saturated, and CO2 increase won’t have any effect. But in other regions of the spectrum, and particularly at the poles, the edges of the CO2 bands are anything but saturated.

If you think of the atmosphere as a blanket with holes in it, you don’t get much improvement in the blanket by stretching a thin film over the intact part, but you do improve by covering the holes.

Second, it’s simply silly to say that the earth came close to catastrophe by going below the threshold for plant growth. What pulls CO2 out of the atmosphere is largely plant growth. It’s self limiting; as CO2 levels drop, plants grow less, and that limits further decrease in CO2.

Gail’s REPLY
You forgot the oceans. CO2 is more soluble in cold water than in hot.

Hi, could someone tell me where I can find the total infrared energy coming to the Earth from outer space at the frequencies that can be absorbed, radiated back to earth, or reflected back to space by CO2?

Lots of interesting stuff even for those who will get lost in the maths. He thinks the “greenhouse effect” is mythical. If you look at nothing else, take a gander at the physicist’s summary. Apologies if posted earlier, but it’s new to me and may be to others.

“No, the critical flaw with your approach is that the frequency range of the insolation is not the same as the frequency range of the emission. Therefore it is quite acceptable to use an absorptivity of 0.7 and an emissivity of 1 (the correct values)…
…
Since Kirchoff’s law requires that the emissivity and absorptivity be equal, a grey body that absorbs a fraction “a” of the power incident on the body will radiate a fraction “a” of the power radiated by a black body at the same temperature…”

Sorry to butt in, but Kirchoff’s law, which forces emissivity and absorptivity to be equal, is only true FOR A CLOSED SYSTEM THAT IS IN THERMAL EQUILIBRIUM, such as a cavity containing radiation. And in that case, it is a SPECTRAL equality as well; the two must be equal at each and every possible wavelength.

So Kirchoff’s law does not apply to EM radiation or absorption on planet earth; for one thing, nobody even imagines that the LWIR emissivity matches the absorptivity for radiation in the solar spectrum range.

The whole idea of “flat plate” thermal solar energy collectors is based on the fact that the surfaces are prepared so that the solar-spectrum spectral absorption coefficient is very high (visually black), but the LWIR thermal emissivity for the spectrum corresponding to the surface operating temperature is low; this can be achieved, for example, with broad-band optical interference filter layers on the surface.

So Phil is in fact correct; and the total emittance and total absorptance can be different.

You have to really go out of your way to set up a condition in which Kirchoff’s Law DOES APPLY. So I’m with Phil.

We know that slavishly reducing the CO2 amount from 388 parts per million is unnecessary and wrong, and that AGW is a hoax.

But a question:

What level of CO2 would be too high?

1500 parts per million?

500 parts per million?

2500 parts per million?


Seems like the extremely dangerous number is around 6% which corresponds to about 60000 ppm. That (I believe) is where it’s not possible for lungs to function anymore. That is quite a bit more than 6000 ppm which is around the upper limit that the Earth has apparently had in the distant past. Operating at 6000 ppm is probably around where there could be problems. It’s also probably beyond the point that could be attained by burning all fossil fuels over a period of time. I think some commercial greenhouses may operate in the range between 500 and 5000 ppm due to enhanced growing efficiency and where it is actually worth the cost of the equipment and co2 to do so.

According to “Fundamentals of Statistical and Thermal Physics”, Reif, McGraw-Hill, 1965, page 385

“A good emitter of radiation is also a good absorber of radiation, and vice versa. This is a qualitative statement of ‘Kirchoff’s law.’ Note that this statement refers only to properties of the body and is thus generally valid, even if the body is not in equilibrium; but we arrived at this conclusion by investigating the conditions which must be fulfilled to make the properties of the body consistent with a possible equilibrium situation.” (emphasis in the original).

I interpret the above statement to mean the emissivity and the absorptivity are equal even when a body is NOT IN THERMAL EQUILIBRIUM with its surroundings.
I believe I noted that the Earth’s emissivity (and hence absorptivity) might be a function of frequency. Assuming that is the case, the Earth’s emissivity/absorptivity at IR can and likely will be different from the Earth’s emissivity/absorptivity in the visible light portion of the electromagnetic spectrum. The problem I have is with the equation used to compute the power radiated by the Earth: emissivity times area times Stefan-Boltzmann constant times temperature in degrees Kelvin to the fourth power. I admit the equation is simplistic and doesn’t apply to the real world, but it is the formula used by people to justify the 255 degree temperature. For this equation to be used, the emissivity must be the same at all frequencies. The equation may be a reasonable approximation for Earth-like frequency-dependent emissivity and if someone can so demonstrate, then I’ll better understand the 255 degree number. However, I await that event.

Re: cba (Mar 9 06:52),
What I’m saying about tropopause warming is this. Surface warming is expected to lead to an increase in specific humidity. The reason is that there is a large reservoir of water at the site of warming which can evaporate.

At the tropopause there isn’t. Now it may happen that the surface warms also, and SH at the tropopause increases for that reason (water from the sea). But there’s no local source up there. And although WV is mixed, there’s no mechanism that would cause more of it to move to the tropopause in response to temperature there.

Re: davidmhoffer (Mar 9 07:07),…daily cycle effect of vegetation completely invalidates the guy’s ability…
Yes, it does. Because the fluctuations that he’s seeing have nothing to do with global CO2. And there’s no reason to expect that they’ll average out to mean zero. If there are local CO2 sources, they’ll bias the measurement high, to an unpredictable extent.

Modern CO2 measurements taken far from sources do not show these fluctuations. They show steady behaviour, with a gradual increase, and a very small seasonal cycle.

Beck’s pre-1960 curves are quite different. There’s no reason to believe the world changed radically in 1960. The logical explanation is that if you sum a random collection of measurements with all sorts of local effects, you measure something quite different to what eg the Scripps sites are measuring now for global CO2.

Well at a pinch, I can only cite two references that I have here at my desk. The first is from “The Infra-Red Handbook,” edited by William L. Wolfe (Professor of Optical Sciences, University of Arizona, consultant to the Infrared Information Analysis (IRIA) Center, Environmental Research Institute of Michigan) and George J. Zissis, Director Emeritus of IRIA.
The Handbook was prepared for the Office of Naval Research, Department of the Navy, Washington DC. I have the 1985 Revised Edition.
ISBN: 0-9603590-1-X
On page i-30 (I’ll have to spell out the math):
“1.3.3 Kirchoff’s Law. By considering two bodies IN THERMAL EQUILIBRIUM, one a black body and one arbitrary, one can show that:
alpha = integral from 0 to infinity of alpha(lambda) d(lambda) =
epsilon = integral from 0 to infinity of epsilon(lambda) d(lambda).
It can also be shown that alpha(lambda) = epsilon(lambda),
where the temperature of both bodies is the same and the spectral region of consideration is the same for both alpha and epsilon. Since these depend upon the total power law, they also depend on THERMAL EQUILIBRIUM.”

Emphasis added by me, alpha is absorptance, and epsilon is emittance.

My second reference is the Handbook of Optics, sponsored by the Optical Society of America (I’m a regular member), edited by Walter G. Driscoll and William Vaughan. Chapter 1, on Radiometry and Photometry, edited by Jay F. Snell, Tektronix, Beaverton OR.
Page 1-22, Kirchoff’s Law:
“Kirchoff’s Law is a consequence of the necessary existence of an energy balance between emission and absorption for a body IN AN ISOTHERMAL BLACK ENCLOSURE AND IN TEMPERATURE EQUILIBRIUM WITH THE ENCLOSURE. Since the radiation field in such an enclosure is isotropic, the directional spectral emissivity and the directional spectral absorptance of the body are equal.”

“Born and Wolf,” as in Max Born and Emil Wolf, simply mentions Kirchoff’s Law (among many of his laws) but does not deal with it, since their textbook is mostly concerned with the electromagnetic theory of optics rather than the thermodynamic theory. The theories of propagation, diffraction, interference etc., which are electromagnetic field phenomena, are unrelated to the thermodynamic principles of radiative energy.

Note that the second citation is even more restrictive than the first, in demanding that the enclosure be ISOTHERMAL in addition to demanding equilibrium.
Warren Smith has an excellent treatment of black body radiation, but once again from a different point of view than thermodynamics; so although he has the best treatment of the Planck Law and related laws, like Born and Wolf he does not deal with Kirchoff’s Law. Before his relatively recent death, Warren Smith was with Infrared Industries in Santa Barbara, California, and along with Rudolf Kingslake of Eastman Kodak was the Dean of Optics in America.

The proof of Kirchoff’s law follows the principle of showing a violation of the second law of thermodynamics: a higher temperature would be created in a receiving body than in the source body if the equality conditions weren’t met.

Like I said; you have to really go out of your way to construct a case where Kirchoff’s law applies (in practice).

Bearing in mind any of the above that one might consider relevant, I should simply mention that in the question of the earth’s climate system from a radiative-balance point of view, the major SOURCE of EM radiation energy is the sun, at a not-too-isothermal temperature of about 6000 K, while the “sink” body, the earth, is at about 288 K on average, but spans roughly 183 to 333 K in total temperature range; neither isothermal, nor in thermal equilibrium with the sun.

Besides the earth rotates every 24 hours, so there is no way it could be in thermal equilibrium.

I still support Phil’s assertion that the absorptance (alpha) and the emittance (epsilon) can be different; especially for totally different spectral ranges.

In my humble view, this is the rock that breaks the CO2-driven AGW hypothesis. It seems completely obvious that the effect of each added parcel of CO2 is almost completely masked by the near-100% IR absorption bands from the CO2 that is already in the atmosphere.

In my view, the AGW proponents would have to present *hard* scientific evidence showing that this logarithmically diminishing effect of CO2 was not true if they wanted to re-establish their case.

I think one might easily demonstrate this CO2 masking by using a couple of overlay transparencies – one showing the pre-industrial absorption bands and an overlay showing the effect of the CO2 added in the modern era.
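
A numerical version of the diminishing-effect argument, assuming the commonly cited logarithmic approximation dF = 5.35 * ln(C2/C1) W/m^2 (my assumption, not a figure from this thread; the approximation is not valid near zero concentration, so the slabs start at 20 ppmv):

```python
import math

# Incremental forcing of each successive 20 ppmv slab of CO2 up to the
# pre-industrial 280 ppmv, under the logarithmic approximation.
slabs = [(c, c + 20) for c in range(20, 280, 20)]
forcings = [5.35 * math.log(c2 / c1) for c1, c2 in slabs]

for (c1, c2), df in zip(slabs, forcings):
    print(f"{c1:3d} -> {c2:3d} ppmv: +{df:.2f} W/m^2")
```

Each 20 ppmv slab adds less than the one before; the last pre-industrial slab (260 to 280 ppmv) adds roughly a tenth of what the 20 to 40 ppmv slab did.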

So, having done the maths (in my country we do it in the plural), the dominant heat loss route from the surface is in fact water vapour.

Of the surface radiation, only 26 W/m^2 is absorbed by the atmosphere (the other 40 W/m^2 goes straight through out to space).

So as far as the atmosphere is concerned, it gets its energy from the surface in the following proportions:
Latent Heat: Three Fifths
Conduction: One Fifth
Radiation: One Fifth

When the surface warms, the Radiation proportion decreases and the Latent Heat proportion increases (this will be true if evaporation increases, which is uncontroversial).

The levels at which these fluxes enter the atmosphere are different:
Conduction (one fifth) at the base of the column.
Radiation (one fifth) into the lowest 500m or so
Latent Heat (three fifths) into the clouds.

So a heating of the surface changes the distribution of flux into the atmosphere from low down to higher up.
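
The fifths quoted above can be recovered from Trenberth-style surface fluxes of ~78 W/m^2 latent heat, ~24 W/m^2 thermals (conduction) and ~26 W/m^2 net surface radiation absorbed by the atmosphere; taking these particular values is my assumption about the commenter's source:

```python
# Proportions of the surface-to-atmosphere energy flux, from the
# assumed component fluxes in W/m^2.
latent, conduction, radiation = 78.0, 24.0, 26.0
total = latent + conduction + radiation

for name, flux in [("Latent Heat", latent),
                   ("Conduction", conduction),
                   ("Radiation", radiation)]:
    print(f"{name:11s}: {flux / total:.2f} of the surface-to-air flux")
```

The fractions come out near 0.61, 0.19 and 0.20, i.e. the three-fifths / one-fifth / one-fifth split in the comment.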

As I understand the process at the primary sampling station for CO2 at Mauna Loa, a sampling tube is purged with inert gas and then a sample of air is drawn.

This sample is measured spectrographically and the daily results are added; has anyone ever established that the purging is fool-proof?

Has any study ever investigated the accuracy and reliability of the sampling process?

This graph is the grail — the foundation itself.

I would hate to learn that it gets dirty like my rain gauge.

I wish people could get past this one. There are a number of CO2 measuring stations around the planet. All of them tell the same story.

Yes, Mauna Loa is an active volcano, I used to be able to see it from my house when I lived in Hawaii. However, it’s a great place to take CO2 measurements. They are taken at night, when the air is descending from high altitude as part of the day/night sea breeze/land breeze cycle common to islands. This is air which has been way up high crossing the Pacific, and passes over very little land before being sampled. And true, sometimes the samples are contaminated by volcanic CO2.

Wikipedia is wrong when they say that the measurements are adjusted when this happens. They are not. It is immediately obvious when the air is contaminated with volcanic CO2, because as you might imagine the CO2 levels spike off the charts. These samples are not used for baseline CO2 measurements (although they are used to estimate the amount of CO2 being outgassed by the volcano). Since they take several measurements per night, this generally does not leave them without data. The volcanic CO2 is also usually near the ground, so the measurements are taken both at the ground and from tall towers as well. If these two agree and there are no unusual readings, they know they have measured good air.

So although as you might surmise I am a suspicious SOB who doesn’t believe anything related to climate science without a very hard look, I am satisfied that the data coming from Mauna Loa are valid and can be relied on. See this site for more information.

He set out to determine if CO2 fluctuated due to local diurnal cycles of vegetation and the effects of wind speed on accumulation. He showed that they did. This does not mean he calculated annual fluctuations in the same way. The fact that he understood the fluctuations caused by these and other factors does not suggest that he neglected them when calculating annual trends; just the opposite.

Nick Stokes;
Beck’s pre-1960 curves are quite different. There’s no reason to believe the world changed radically in 1960. The logical explanation is that if you sum a random collection of measurements with all sorts of local effects, you measure something quite different to what eg the Scripps sites are measuring now for global CO2>>

Really? Are you certain? Because I’ve been reading an awful lot recently about TREE RING DIVERGENCE and how researchers had to perform the “trick” of substituting temperature for tree ring data because the tree rings “suddenly” stopped tracking temperature for no apparent reason right around 1960. What limits tree growth besides heat and moisture? CO2 perhaps? Is it possible that under a certain level of CO2 tree growth is stunted? So perhaps the tree rings and the CO2 are saying the same thing?

1) The oceans as a whole shed more thermal energy through evaporation than through all other processes combined. They cover >70% of the Earth’s surface.

2) The air-sea temperature differential is negative in the global annual-average sense. In other words, on a climatic basis, the sea heats the air and not the other way around!

3) GHGs produce no thermal energy whatsoever, but merely redistribute it through radiative transfer and molecular collisions with “inert” atmospheric constituents. This is a capacitance effect and not a “forcing.”

4) IR absorption by water vapor covers a much wider range of frequencies than the discrete lines of the trace gases. Its highly variable presence in the atmosphere makes the correspondingly wide “atmospheric window” highly variable also, allowing IR to escape to space more sporadically, but no less effectively.

5) As a consequence, the Earth need not heat up to shed any “excess heat” that may be attributed to increased CO2 concentrations and backradiation at certain LW frequencies. It’s the power density integrated over all thermal frequencies–and not just in certain bands–that determines the surface temperature.

In summary, the climatic regulator is found not in the trace gases, but in the rate of the chaotic hydrological cycle. You can argue all you want about various theoretical constructs, but those are the essential facts as evidenced by observations.

Nick Stokes;
Beck’s pre-1960 curves are quite different. There’s no reason to believe the world changed radically in 1960>>

My apologies, I looked at Beck’s graph again and realised that the big drop in CO2 it shows starts about 1950, not 1960. So then I looked up divergence and it turns out that I had THAT date wrong too. Turns out it started showing up about…1950.

I don’t know if this was posted already: during the Jurassic, CO2 was over 2,000 ppm (more than 5x current), average global temp was about 22C (currently 13C) and life was doing wonderfully. How does this compare to the IPCC models?

Other fact: when I was growing plant clones on agar, the incubator was set for maximal growth conditions for plants: lots of light, 30C, 80% humidity and 10% CO2 (that’s 100,000 ppm).

AGW is going to raise the T of the surface, facilitating more evaporation. Higher up, such as at the tropopause, which is at a conservation-of-energy temperature level, you effectively have a higher emissivity caused by increased GHG levels. That means the atmospheric parcel radiates more power at a given temperature. Assuming it was in a relative equilibrium condition prior to the addition of the GHGs, the temperature must drop, as it is radiating more energy out, pretty much at a 2x rate compared to the increased absorbed energy.

Exactly what happens is a guess. Not every surface point has additional H2O available to become vapor. If the upper atmosphere fails to increase H2O vapor, it merely means there is going to be less of a GH effect. The constant relative humidity assumption is made and is supposedly sort of right on average. Of course, relative humidity is essentially limited to 100%, as supersaturation results in a “cure” for too much H2O. One can assume that either the RH stays constant or that it decreases with T. More surface heat is going to invoke more of a water cycle, and absolute humidity will increase, and that is the only thing that counts relative to radiative transfer. The water cycle, though, is simply going to move more power through regions of lower radiative transfer and act as negative feedback. It is also going to invoke additional cloud formation, which results in a net increase in albedo and will cool the Earth down on average, as the water-vapor cycle is going to be a time-of-day-dependent activity.

According to “Fundamentals of Statistical and Thermal Physics”, Reif, McGraw-Hill, 1965, page 385

“A good emitter of radiation is also a good absorber of radiation, and vice versa. “””

And if the latter statement were correct, then spectrally selective solar thermal energy collection panels simply would not work, which provides experimental proof of the falsity of that assertion.

George, I first want to state, and I am NOT being sarcastic, that I appreciate the dialog.

However, I don’t think your above statement regarding the falsity of Reif’s assertion is incorrect. A good emitter is not a material that radiates massive amounts of power. A good emitter is a material that radiates a large fraction of the power that a black body radiator at the same temperature would radiate. A good absorber is a material that absorbs a large fraction of the power that a black body would absorb, which is all the power incident on it. In this sense, a black body is both a perfect absorber and a perfect emitter. This concept does not imply that a black body can’t absorb energy and, by various means, convert that energy into either work or energy in other forms. Thus, the concept that a good emitter is a good absorber does not imply solar panels can’t convert solar radiant energy into electrical energy.

If all materials were perfect reflectors (i.e., any energy incident on the surface is reflected by the surface), then your comment about solar panels would be correct.

The best model I am familiar with for a black body radiator is a small opening in a cavity whose interior walls are at a common temperature. Independent of that temperature, all energy incident on the hole from the outside will enter the cavity and eventually be absorbed by the walls of the cavity. Similarly, all energy incident on the hole from inside the cavity leaves the cavity. Thus the hole is perfectly efficient, both for absorption and emission. This concept does not require that the amount of energy per unit time entering the cavity equal the amount of energy per unit time leaving the cavity. The amount of energy incident on the hole from the outside is independent of the temperature of the walls of the cavity. The amount of energy incident on the hole from inside the cavity is a function of the temperature of the cavity walls. Thus, the statement that “A good emitter of radiation is also a good absorber of radiation, and vice versa” does not imply that net energy can’t be absorbed by the body, and hence does not imply solar panels “can’t work”.

George, sorry. Instead of writing: However, I don’t think your above statement regarding the falsity of Reif’s assertion is INcorrect., I should have written: However, I don’t think your above statement regarding the falsity of Reif’s assertion is correct.

Not only do they not use readings obviously contaminated by Volcanic CO2 emissions that spike their sensor, they also do not use ANY readings outside of their standard, whatever that is. In fact, their problem last year was they did not have data for 20 DAYS of a month due to this issue and posted a very low reading which they took back after being questioned about its validity. They told us they implemented infilling to deal with that type of issue.

So, just like temperature data, Mauna Loa CO2 readings are now partially made up.

That leads me to question what they were doing about loss of data before?? Is it just recently that their standard is causing them to not use a lot of data?? Why would that be??

So, having done the maths (in my country we do it in the plural), the dominant heat loss route from the surface is in fact water vapour. “””

I’ve not spent a whole lot of time trying to evaluate all the thermal processes exchanging energy between the surface and the atmosphere/space, so I am somewhat dependent on what others have claimed them to be. Phil has cited some of the numbers that Trenberth gives, which are enshrined in the official NOAA energy balance pictorial chart. And I have to say I am quite suspicious of some of Trenberth’s numbers. For a start, I disagree totally with the phony construct of dividing the solar insolation by 4, since the relation between incoming flux and surface temperature reached is non-linear, so that process undervalues the peak surface temperatures reached and thus underestimates the LWIR emission.
But they cite 390 W/m^2 for the Surface radiation (total). Well that pretty much corresponds to the BB Stefan-Boltzmann value for a 288K black body.
Next they claim that 350 W/m^2 out of that 390 is absorbed by the atmosphere, and a mere 40 W/m^2 escapes via the “atmospheric window”.
I find that difficult to believe that only about 10.25% of the surface LWIR directly escapes.

In addition to the 350 W/m^2 the atmosphere absorbs, they add another 67 from direct solar input, 24 from convection from the surface, and 78 from latent heat of evaporation, giving a total of 519 W/m^2 absorbed by the atmosphere from all sources. In my book, the atmosphere couldn’t care less what the sources of that energy are; the source makes no difference to the heating of the atmosphere.
Then Trenberth has the atmosphere emitting 324 W/m^2 earthwards and 195 skywards; and right now, there is no way I can rationalize that. Unless I am just plain stupid, that result has to be totally wrong.
At any level in the lower atmosphere where the air density is high enough that the mean free path is shorter than the mean lifetime of the excited states of the GHG molecules, the energy intercepted by atmospheric molecules is mostly transferred to the ordinary atmospheric gases by collisions, before spontaneous decay can occur. That is why I say that the source of the atmospheric absorbed radiation does not matter; in the end it heats N2, O2, and Ar.
So at any level in that region, where the above is true, the ordinary atmospheric gases emit LWIR thermal radiation in an isotropic angular distribution pattern. That means that half of that thermal emission is directed upwards, and half is directed downwards.

Above that emitting layer, the air density and GHG molecular density is lower, and the temperature is lower; while below that emitting layer, the air density is higher and the air temperature is higher (in general; not talking storm conditions here).

So for the air above the layer, and in particular for the GHG molecules, the lower collision frequency, and velocity, due to the lower density and temperature, results in the GHG molecules above, having a NARROWER absorption spectral width, than the GHG molecules in the emitting layer; and fewer GHG molecules. That means that the absorption coefficient for the thinner colder air above is less, due to Doppler and collision frequency being lower. So some of the upwards emission spectrum that previously got absorbed by the sample layer, will now proceed right on through the higher altitude layer, because of the narrower absorption band above.

Conversely, the atmospheric LWIR emission that is directed downwards, now sees a denser and hotter atmospheric layer and a higher density of GHG molecules, so the absorption coefficient for the LWIR is higher the further down the back radiation propagates; so the absorption band gets wider, and re-absorption is more probable for downward directed radiation, than it is for upward.

So at any layer, you start with an even split of upward and downward radiation, and the upward path is favored over the downward path, by a narrowing absorption band, and lower GHG density; compared to the downward path with its increasing Doppler and pressure broadening of the absorption bands.

So someone is going to have to do some heavy arm-twisting to convince me that the total atmospheric LWIR radiation is directed preferentially downwards rather than upwards; so I am in complete disagreement with Trenberth on that score.

One other little burr under the saddle is the idea of treating the downward (back radiation) as a “forcing”, as if it was a bonus to be applied to the incoming sunlight of 168 W/m^2 that Trenberth says the surface absorbs.

Well hold on a minute. The solar spectrum radiation from the sun, finds itself mostly impinging on deep oceanic waters; about 73% of the total earth surface; and an even greater percentage of the extra-polar surface area where most of the solar energy arrives; so that energy passes quite deeply into the ocean, with the highest spectral irradiance portions going deepest.

On the other hand, the 324 w/m^2 that Trenberth claims is back radiation that strikes the surface, is LWIR and virtually all of it, is absorbed in the top 10 microns of the ocean surface (mostly), where it contributes little to the total energy of the ocean, but is most likely to cause prompt evaporation from that selectively heated surface film.

So to treat that source as simply a “forcing”, as if it were just an adjustment to solar flux, is a bogus assumption as far as I can see.
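
Whatever one makes of the up/down split disputed above, the chart's totals do balance arithmetically; a quick tally of the numbers as quoted in this comment (the figures are the comment's, not mine):

```python
# Bookkeeping check on the Trenberth-style figures quoted above (W/m^2).
surface_radiation = 390.0   # ~ sigma * 288^4
window = 40.0               # escapes directly to space
absorbed_lw = 350.0         # surface LWIR absorbed by the atmosphere
solar_direct = 67.0         # sunlight absorbed directly by the atmosphere
thermals = 24.0             # convection from the surface
latent = 78.0               # latent heat of evaporation

into_atmosphere = absorbed_lw + solar_direct + thermals + latent
back_radiation = 324.0      # emitted earthwards
upward = 195.0              # emitted skywards

print(f"Absorbed by atmosphere: {into_atmosphere:.0f} W/m^2")
print(f"Emitted (down + up):    {back_radiation + upward:.0f} W/m^2")
print(f"Window share of surface LWIR: {window / surface_radiation:.1%}")
```

Both sides come to 519 W/m^2, and the window share is the "about 10.25%" figure mentioned earlier in the comment.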

George, sorry. Instead of writing “However, I don’t think your above statement regarding the falsity of Reif’s assertion is INcorrect.”, I should have written “However, I don’t think your above statement regarding the falsity of Reif’s assertion is correct.” “””

Well Reed, my command of the King’s English may not be all that great; but when I read… “”” “A good emitter of radiation is also a good absorber of radiation, and vice versa. “””

…. in my mind’s eye, that word (A) can be replaced by the word (Any); at least they are not incompatible claims.

But if I was to replace the word (A) with the word (Some); then the two statements are quite incompatible; but now I would say that the statement was correct; as amended.

But then I’m not so confident that the same result holds true for ‘Mercan, as it does for the King’s English.

But your typo did not throw me off balance.

But since we do have plenty of “lay folks” visiting here, I do try to be as pedantically accurate as I can, to avoid miscommunication. Perhaps the experts may find that tedious; I consider it necessary.

Well at a pinch, I can only cite two references that I have here at my desk. The first is from “The Infra-Red Handbook,” edited by William L. Wolfe (Professor of Optical Sciences, University of Arizona, Consultant to the Infrared Information Analysis (IRIA) Center, Environmental Research Institute of Michigan) and George J. Zissis, Director Emeritus of IRIA.
The Handbook was prepared for the Office of Naval Research, Department of the Navy, Washington DC. I have the 1985 Revised Ed. …


Well, I suppose Reif takes the cake for the worst textbook I ever had one or more classes from.

Stefan’s law is the integrated result of Planck’s law over all angles and wavelengths (or all frequencies). Emissivity as shown in Stefan’s law is an engineering number. However, when dealing with gases rather than solids or liquids that actually have a Planck continuum of emissions, one can use an emissivity as a function of wavelength with the Planck curve to generate an emission spectrum. Actually that emissivity is the spectral absorption, and the Planck curve actually becomes the distribution of energy states. Where a molecule absorbs in the spectrum, it emits in the spectrum – assuming that the Boltzmann (Planck BB curve) distribution for energy actually has molecules at that energy state for the given T. This is your Kirchhoff’s law consideration. An equilibrium is necessary, but it is LTE, local thermodynamic equilibrium, which is necessary to have locally uniform temperatures between the various types of molecules, such as IR absorbers and non-IR absorbers. Otherwise, there is no single temperature common to the different molecular types.

Another way to put Kirchhoff’s law in perspective is the Einstein coefficients. There’s one for absorption, one for spontaneous emission, and one for stimulated emission (like a laser). If you have a gas at temperature T and are illuminating it with a BB curve at temperature T, you will not have absorption lines and you will not have emission lines; rather, you will have the BB curve. If the T of the gas is higher than that of the BB curve, you will have emission lines. If the T of the gas is less, you will have absorption lines.

In this, Kirchhoff’s law really does apply, assuming the same temperature for gas & BB radiator (your thermal equilibrium). It also holds by wavelength (or frequency), at each and every wavelength.

Note, an interesting configuration is to take two concentric spheres with an atmosphere between them that is very thin, so that approximately the spheres have the same surface area and all downward or inward radiation impinges upon the inner sphere. Set the same temperature for both spheres and you will have radiation going between them, and absorption and emission by the ‘atmosphere’ shell. It will be in equilibrium and no net energy will be absorbed or emitted. Now lower the T of the outer shell – say to 3 kelvins or 0 kelvins. What you have here is a different equilibrium, such that the atmosphere at the outer shell is at the same T as the outer shell, and the atmosphere at the inner shell is at T, just like that shell. You have a gradient or lapse rate in temperature, where the T at any point above the inner shell is determined by the conservation of energy (or power flow in and out of a small parcel), and that includes all mechanisms of energy flow.

The Earth must be in a radiative equilibrium – what comes in must go out on the moderately short term. The Sun is in a hydrostatic equilibrium, so that gravitational attraction must be balanced by pressure preventing the Sun from collapsing. At the average orbital distance, the top-of-atmosphere radiation amounts to around 1365 W/m^2. That varies by about 90 W/m^2 during the year due to perihelion and aphelion distances. The fact that the Earth rotates basically provides a fairly uniform distribution of power (for the most part) over practically the entire surface on a 24-hour average. No rotation or slow rotation would have a significant effect, especially on approximations. That’s one of the many huge problems with Venus comparisons.

One can though get far more out of these simplifications and approximations than one might initially imagine.

“”” Thus, the concept that a good emitter is a good absorber does not imply solar panels can’t convert solar radiant energy into electrical energy. “””

Reed, if you read my statement again, I think you will conclude that my example of a case where Reif’s assertion was incorrect was a case of a solar thermal collector, not a PV solar panel.
I even went to some pains to explain the general strategy for designing such thermal solar collectors (which I feel is how most solar energy ought to be gathered, as thermal energy).

If in fact one were designing a PV solar cell panel instead, then it would be desirable to do the exact opposite surface preparation, to reject solar spectral wavelengths somewhat longer than the silicon bandgap wavelength (around 900 nm), in order to minimize heating of the solar cell panel, which lowers the efficiency of the solar cells (by dropping their open-circuit voltage and increasing their reverse leakage currents).

When articles start off with the assertion that “greenhouse gasses keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere”, I feel there is likely little to learn from them.

The -30 C null hypothesis is based on the non-physical assumption that the naked Earth would absorb with its observed absorptivity of about 0.7, yet emit as a black body with an emissivity of 1.0. This is a non-physical and mathematically intractable assumption. A far more realistic and mathematically useful comparison is that we are about 9 C warmer than a gray (flat spectrum) body in our orbit. Unlike what Marty Hertzberg calls the “cold earth” hypothesis, the flat spectrum case is orthogonal to the ratio in the correlations between the Earth’s spectrum and that of the half millionth of the sky the Sun subtends and that of the rest of the 3 K celestial sphere which defines the “greenhouse effect”. See my http://CoSy.com for the computations in several array programming languages.
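The gray-body figure above can be checked quickly. For a flat-spectrum body, absorptivity equals emissivity at every wavelength, so the two cancel in the energy balance and the equilibrium temperature equals the blackbody value for our orbit. A minimal sketch, using the 1365 W/m^2 solar constant cited elsewhere in this thread (my own numbers, not the CoSy computations):

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1365.0             # top-of-atmosphere solar flux, W/m^2 (thread's figure)

# For a flat-spectrum ("gray") body, absorptivity = emissivity, so they
# cancel: eps * S / 4 = eps * sigma * T^4  =>  T = (S / (4 * sigma)) ** 0.25
T_gray = (S / (4.0 * sigma)) ** 0.25
print(T_gray)            # ~278.5 K
print(288.0 - T_gray)    # observed ~288 K mean surface is ~9 K warmer
```

That reproduces the "about 9 C warmer than a gray body" comparison.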

Spector (16:05:02) :
In my humble view, this is the rock that breaks the CO2-driven AGW hypothesis. It seems completely obvious that the effect of each added parcel of CO2 is almost completely masked by the near-100% IR absorption bands from the CO2 that is already in the atmosphere.

In my view, the AGW proponents would have to present *hard* scientific evidence showing that this logarithmically diminishing effect of CO2 was not true if they wanted to re-establish their case.

I think one might easily demonstrate this CO2 masking by using a couple of overlay transparencies – one showing the pre-industrial absorption bands and an overlay showing the effect of the CO2 added in the modern era.
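The masking idea in the comment above can be illustrated with a toy Beer-Lambert calculation. The absorption coefficient below is an arbitrary illustrative value, not spectroscopic data; the point is only the shape of the saturation:

```python
import math

def absorbed_fraction(ppm, k=0.02):
    # Beer-Lambert: transmitted fraction decays exponentially with the
    # amount of absorber; k is an arbitrary illustrative coefficient.
    return 1.0 - math.exp(-k * ppm)

pre_industrial = absorbed_fraction(280.0)
modern = absorbed_fraction(388.0)
print(pre_industrial, modern)   # both ~0.996 or above: the band is nearly saturated
print(modern - pre_industrial)  # the extra ~108 ppm adds well under 1%
```

Once the band is near saturation, each additional parcel of CO2 changes the absorbed fraction by very little, which is the overlay-transparency picture in numeric form.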

George Smith. If you care to continue writing, I’d be happy to read what you have to say. However, I think I’ve exhausted my “understanding” of “grey body” absorption and “grey body” emission. So, unless a specific point comes up that I’m almost positive I know is incorrect, I’m going to let this subject go. I still believe it is incorrect to use a T^4 law for emission when the emissivity is a function of frequency. Furthermore, although it’s a matter of semantics, I believe a “grey body” is a surface whose absorptivity and emissivity are equal and independent of frequency. The absorptivity/emissivity of a grey body surface may change with time, but at any instant in time, the emissivity and absorptivity are equal. I may be wrong, but as the saying goes: “that’s my story and I’m sticking to it.”

My, My, My. What a confused mess this post is. Either David Archibald doesn’t have the slightest grasp of logical consistency, or he does and his post is deliberately deceptive. You decide.

He begins with the basic point that without any greenhouse gases in the atmosphere the temperature would be 30 DegC colder. This is not contentious, it’s basic Thermodynamics. And the logarithmic radiation behaviour of CO2 is also well known.

Then further down he has a dramatic-looking graph of temp change per 20 ppm of CO2, with a line leaping up from it of ‘cumulative’ temperature change for 6 DegC ‘IPCC’ models. Plotting cumulative temp and temp per ppm on the same graph? How bizarre! But gee, it looks dramatic.

(by the way, the IPCC DOESN’T HAVE any models. It simply reports on the modeling done by research groups around the world. The IPCC is a REPORTING agency).

Then further down we have the corker. A bar graph of cumulative temp due to CO2 (and CO2 alone!) vs CO2 level and then the ‘Anthropogenic’ component based on 6 DegC models from the earlier graph added on. Makes the Anthropogenic part look all rather outlandish doesn’t it?

Bingo. If no one notices, deception complete, the gullible gulled.

So what is wrong with what he has shown?

Notice how his final bar graph shows ‘Natural’ Warming due to CO2 varying from around 2-3 DegC across the graph. WHAT HAPPENED TO THE 30 DEGC RISE? If the Earth would be 30 DegC cooler without any GH gases, what happened to it on David’s graph?

Simple, he has shown the CO2 ONLY component of ‘Natural Warming’ against the TOTAL warming – CO2, Water Vapour, Methane, Ozone, CFC’s, Albedo Change, Aerosols etc for the ‘Anthropogenic Warming’ section. Not comparing apples with apples. And he has taken the most extreme end of the range of models reported by the IPCC of 6 DegC. The middle of their range is more like 3 DegC.

So, to make his last graph more realistic, it should show the impact of all the factors. At which point his ‘Natural’ warming would be around 30 DegC high. And the ‘Anthropogenic’ component would then rise another 3 Deg C.

The problem for David is that that would be far more accurate but it wouldn’t be dramatic. It might even show that the ‘IPCC’s’ models are actually consistent with the logarithmic nature of CO2. And he doesn’t want that, does he?

Then there is this gem: “The IPCC models water vapour-driven positive feedback as starting from the pre-industrial level. Somehow the carbon dioxide below the pre-industrial level does not cause this water vapour-driven positive feedback. If their water vapour feedback is a linear relationship with carbon dioxide, then we should have seen over 2° C of warming by now. We are told that the Earth warmed by 0.7° C over the 20th Century.” WRONG!!

The models (not the IPCC’s as I pointed out) model water vapour – period. Nothing to do with pre or post-industrial levels. Then he says “we should have seen over 2° C of warming BY NOW” (my highlight).

WRONG! He is trying to sell the line that the warming associated with a certain level of CO2 should all have happened RIGHT AWAY. The models do predict warming of around 2° C from current levels of CO2. But even if CO2 levels were frozen right here, it would still be several decades before we reached 2° C. The oceans are still absorbing vast amounts of heat and are lagging behind in the warming. And the 2° C figure is based on the warming impacts of the GH gases. It doesn’t include the countervailing cooling effects of aerosols (air pollution) – climate scientists believe around 1/2 the extra warming from GH gases is being masked by the cooling effect of aerosols. So, in a few decades’ time, when humanity has, hopefully, stopped producing air pollution, then 2° C seems like a dead certainty. That’s why the climate scientists are using the word ‘commitment’ for this rise. It’s already locked in and in play. David doesn’t mention any of this.

Finally, note David’s reference to weather in his home town of Perth. Don’t look at the worldwide, global picture. Instead have a look at what’s happening in your backyard! Again, wrong thinking.

And by the way David, as a fellow Aussie, over in Melbourne we have seen temperature rise, prolonged drought and terrible bushfires. Did you ever consider that the local climate in Perth may be affected by the fact that all your weather comes in off the Indian Ocean, and so may not change as much as regions whose weather is generated over land – local factors producing local outcomes? The Global in AGW means just that. Local can’t be used to judge anything.

So to summarise, David has presented data which as far as I can tell is factually correct. But he has presented it in distorting and ultimately deceptive ways, has not compared apples with apples, has attempted to make an argument by ignoring factors necessary to that argument, has appealed to you to use inappropriate short term thinking and localised ‘my backyard’ thinking.

And he is making the nonsense assertion that ‘The IPCC’ is claiming that ‘Natural Warming’ from natural CO2 is sooo different from ‘Anthropogenic Warming’ from anthropogenic CO2.

See, if the IPCC and the Warmists can be shown to make such ridiculous claims, surely we can dismiss it all.

But as I have attempted to show here, it is David who has made the ridiculous claim, by using lousy and distorting methods and faulty logic to create a false perception in the minds of the readers of his post.

And go back through all the comments on this post. Look at the praise for what David has said. He has achieved his goal.

Since you gave the rest of us “gullible gulled” the choice to decide whether David Archibald is either an Innocent Fool or a Smart Nasty Con Man, the response of this gullible reader would be neither. David Archibald has written an interesting article, and those two choices are not a sufficient critique. You could have asked him questions directly, and let the rest of us think about the different viewpoints.

My suggestion would be for you to write your own article for WUWT, explaining the situation as you see it.

Because you stated that “David has presented data which as far as I can tell is factually correct,” it could be interesting. But of course, you would be opening yourself to the same pot shots that Archibald and the rest of us gullible folks just got from you.

Also, since Archibald used Willis Eschenbach’s charts, you should consider yourself fortunate that you didn’t level your critique against Willis instead. Smart move.

George E. Smith (18:24:32) :
And I have to say I am quite suspicious of some of Trenberth’s numbers. Well for a start, I disagree totally with the phony construct of dividing the solar insolation by 4, since the relation between incoming flux and surface temperature reached is non-linear, so that process undervalues the peak surface temperatures reached and so under-estimates the LWIR emission.

The division by 4 is necessary to determine the average incoming flux at the surface; if you wanted to calculate the local intensity you could scale by cos(zenith angle), but if you integrate over the whole surface you’d still end up with a factor of 4.
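The factor of 4 can be verified directly: a sphere intercepts S·πR² of sunlight over a surface area of 4πR². A quick numerical average over the sphere, as a sketch using the 1365 W/m² figure from earlier in the thread:

```python
import math

S = 1365.0            # top-of-atmosphere flux, W/m^2 (thread's figure)
n = 100_000
num = den = 0.0
for i in range(n):
    theta = (i + 0.5) * math.pi / n          # zenith angle across the sphere
    w = math.sin(theta)                      # area weight of the band at theta
    flux = S * max(math.cos(theta), 0.0)     # night side receives nothing
    num += flux * w
    den += w
average_flux = num / den
print(average_flux)   # ~341.25 W/m^2, i.e. S/4
```

The area-weighted average of cos(zenith) over the sunlit hemisphere, spread over the whole sphere, is exactly 1/4, which is where the division by 4 comes from.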

But they cite 390 W/m^2 for the Surface radiation (total). Well that pretty much corresponds to the BB Stefan-Boltzmann value for a 288K black body.
Next they claim that 350 W/m^2 out of that 390 is absorbed by the atmosphere, and a mere 40 W/m^2 escapes via the “atmospheric window”.
I find it difficult to believe that only about 10.25% of the surface LWIR directly escapes.

It’s ‘atmosphere + clouds’, average coverage 62% iirc.

In addition to the 350 W/m^2 the atmosphere absorbs, they add another 67 from direct solar input, 24 from convection from the surface, and 78 from latent heat of evaporation, giving a total of 519 W/m^2 absorbed by the atmosphere from all sources. In my book, the atmosphere couldn’t care less what the sources of that energy are; the source makes no difference to the heating of the atmosphere.
Then Trenberth has the atmosphere emitting 324 W/m^2 earthwards, and 195 skywards; and right now, there is no way I can rationalize that. Unless I am just plain stupid, that result has to be totally wrong.

You’re not stupid but some ‘minor arm twisting’ is in order. ;-) Again it’s ‘atmosphere + cloud’, for your conundrum consider that the base of the clouds are appreciably cooler than the cloudtops.

Something that doesn’t make sense about the believers in dangerous manmade CO2 is why they come here. The science is settled. Take it easy. Put up your feet, boys. Forget about defending the ‘settled science’.

Let the science, the data, do the talking.

But wait—the data is doing the talking and showing your predictions from your computer climate models are wrong.

And also, you don’t include the negative feedback from clouds in those computer models. That factor alone shows how wrong you all are.

With that, I can see why you boys are here, and everywhere else in the media, trying to convince everyone to not believe the data that shows you are wrong.

What your reply suggests in a rough and preliminary way is that at 388 parts per million we still have a large & substantial amount of cushion to work with before reaching an upper limit of CO2.

And, if CO2 parts per million has gone up 100 parts or so in the last 150 years, then at this juncture there is no looming crisis.

So, what we need is a refinement and increased resolution of that upper limit of CO2.

This would seem a more clear-cut objective than splitting hairs about whether 395 parts per million of CO2 or 360 parts per million CO2 makes a difference.

And it puts the endeavor more in a pure science realm, as opposed to the current situation where political agendas have corrupted the science.

Henry@James
We agreed that the safe working limit of CO2 is 0.75% or 7500 ppm. This is what the books are saying.
We know that up to 1000–2000 ppm (0.1–0.2%) would be advantageous for life, promoting crop and forest growth, provided we can prove beyond doubt that CO2 has little or no real warming effect. We are now at 400 ppm or 0.04%. Which brings me back to the beginning:

@ David Archibald
Sorry, but I am not sure where this information comes from that
“Carbon dioxide contributes 10% of the effect ”
First of all, how do we know for sure that CO2 is a greenhouse gas?
The trick they used (to convince us) is to put a light bulb on a vessel with 100% CO2.
But that is not the right kind of testing.
You must look at the spectral data. Then you will notice that CO2 has absorption in the 14-15 um range causing some warming (by re-radiating earthshine) but it also has a number of absorptions in the 0-5 um range causing cooling (by re-radiating sunshine). So how much cooling and how much warming is caused by the CO2? How was the experiment done to determine this and where are the test results? If it has not been done, why don’t we just sue the oil companies to do this research? (I am afraid that simple heat retention testing will not work here, we have to use real sunshine and real earthshine to determine the effect in W/m3 [0.04]CO2/24hours)

How do they know which country emits what? Can they measure the emissions from each country, or do they guess based on what each country does every year?

Good question, Noelene. The Carbon Dioxide Information Analysis Center (CDIAC) has this as well as other information. Look here for carbon data, and here for emissions data. They describe the methods they use.

Not only do they not use readings obviously contaminated by Volcanic CO2 emissions that spike their sensor, they also do not use ANY readings outside of their standard, whatever that is. In fact, their problem last year was they did not have data for 20 DAYS of a month due to this issue and posted a very low reading which they took back after being questioned about its validity. They told us they implemented infilling to deal with that type of issue.

So, just like temperature data, Mauna Loa CO2 readings are now partially made up.

That leads me to question what they were doing about loss of data before?? Is it just recently that their standard is causing them to not use a lot of data?? Why would that be??

Mauna Loa is pretty active at the moment. I have no problem with the infilling. At this point, the monthly/annual cycle of carbon dioxide is well understood. CO2 is a “well-mixed” gas, so the day-to-day fluctuations are not that large.

…
Then Trenberth has the atmosphere emitting 324 W/m^2 earthwards, and 195 skywards; and right now, there is no way I can rationalize that. Unless I am just plain stupid, that result has to be totally wrong.

I agree. The only way a greenhouse can get enough energy to heat the world and provide for all the losses is if it has two physically separated radiative areas. Here is my redo of Trenberth that shows how this would work.

Also please see my post called The Steel Greenhouse, which lays out the mechanics of the greenhouse effect.
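As I read the shell ("steel greenhouse") argument referenced above, a minimal numeric sketch looks like this. The 235 W/m² input is the absorbed-solar figure used elsewhere in the thread, and the perfectly absorbing shell is the idealization of the argument, not a claim about the real atmosphere:

```python
F = 235.0            # external input absorbed by the surface, W/m^2 (thread's figure)
surface = F          # initial guess for surface emission

# The thin shell absorbs everything the surface emits and re-radiates
# half upward (to space) and half back down; iterate to equilibrium.
for _ in range(200):
    shell = surface / 2.0        # each face of the shell emits this much
    surface = F + shell          # surface absorbs input plus back radiation

print(surface, shell)            # ~470 and ~235: surface emits 2F, shell emits F to space
```

At equilibrium the surface emits twice the external input while the shell still radiates exactly F to space, which is the "two physically separated radiative areas" point in numeric form.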

OK. I’m going to break my rule of silence–sort of. I was reviewing Phil’s (10:26:06) comment, and I’m pretty sure he’s wrong. I get the feeling Phil thinks the “average emissivity” of the Earth is an average over frequency. Specifically, Phil wrote:

“In order to determine the average absorptivity of the Earth you’d integrate over all frequencies:
∫A(ν)Isun(ν)dν/∫Isun(ν)dν

Similarly for the Earth’s emissivity:
∫ε(ν)Iearth(ν)dν/∫Iearth(ν)dν

Where Isun is the emission spectrum of the sun and Iearth is the emission spectrum of the earth so while A(ν)=ε(ν) at any ν the averages are not the same because the spectra aren’t the same.”

I strongly believe that although Phil’s definition of average emissivity as an average over frequency is a valid operation, it does not apply to the problem at hand. Phil treats each square meter of the Earth as having the same property – that property being the emissivity (the ratio of the actual radiated power at frequency f to the radiated power at that frequency from a black body at the same temperature). Phil then computes the “average emissivity” by averaging this ratio over frequency. If this is the definition of average emissivity, then a square meter of the Earth’s surface does not emit total power that is proportional to T^4. This can be seen by inserting a frequency-dependent term in Planck’s law and then performing the integration over frequency. The resulting total emitted power from the Earth’s surface will NOT be the “average emissivity” times the Earth surface area times the Stefan-Boltzmann constant times the Earth temperature to the fourth power.

Rather, when using an “average emissivity” in conjunction with the T^4 law, the emissivity is averaged over space, not frequency. That is, for each unit area of Earth surface, the emissivity is frequency independent; but from unit area to unit area, the value of the emissivity varies. The “average emissivity” is the weighted average (weighted by area) of the differential area emissivities. This definition of “average emissivity” is consistent with the T^4 total power radiation rule because the T^4 law applies for each differential area; and if we assume the Earth temperature is uniform over its surface (an inherent assumption when the area used in the formula is the total area of the Earth), then the T^4 term can be treated as a constant with respect to integration over the surface area of the Earth.
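The distinction being argued here can be demonstrated numerically. The step-function emissivity below is my own illustration, not anyone's measured spectrum: the Planck-weighted average (what the σT⁴ shortcut actually needs) differs sharply from a plain average of ε(ν) over frequency.

```python
import math

h, c, k, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def planck(nu, T):
    """Spectral radiance B_nu, W m^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def eps(nu):
    """Toy step-function emissivity: 0.5 below 3e13 Hz, 1.0 above."""
    return 0.5 if nu < 3e13 else 1.0

T = 288.0
dnu = 1e11
nus = [i * dnu for i in range(1, 20000)]       # grid up to ~2000 THz
B = [math.pi * planck(nu, T) for nu in nus]    # hemispheric flux density

total_bb = sum(B) * dnu                        # should be ~ sigma * T^4 (~390 W/m^2)
emitted = sum(eps(nu) * b for nu, b in zip(nus, B)) * dnu

planck_weighted = emitted / total_bb                   # the physically meaningful average
plain_average = sum(eps(nu) for nu in nus) / len(nus)  # naive frequency average

print(total_bb, sigma * T**4)            # both ~390 W/m^2
print(planck_weighted, plain_average)    # clearly different numbers
```

The naive frequency average is near 1.0 (the 0.5 region is a tiny slice of the grid), while the Planck-weighted value is far lower because most of the 288 K emission falls below the step; only the weighted value reproduces the actual emitted power when multiplied by σT⁴.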

When articles start off with the assertion that “greenhouse gasses keep the Earth 30° C warmer than it would otherwise be without them in the atmosphere”, I feel there is likely little to learn from them.

The -30 C null hypothesis is based on the non-physical assumption that the naked Earth would absorb with its observed absorptivity of about 0.7, yet emit as a black body with an emissivity of 1.0. This is a non-physical and mathematically intractable assumption.

Bob, you misunderstand what is being done. The assumption is actually different: in the thought experiment of no greenhouse gases, it is assumed that the amount of sunlight currently not being reflected by the albedo would be warming the thought-experiment Earth.

The albedo reflects about 30% of the sunlight, so it is not currently warming the earth.

Since we want to compare like with like, in our thought experiment we maintain the same amount of energy absorbed by the Earth as is absorbed today. And as a result, in our thought experiment, the Earth gets about 70% of the 345 W/m2 of incident energy, or about 235 W/m2.

Assuming for the thought experiment that the earth is a blackbody, the temperature corresponding to 235W/m2 is about -20°C, on the order of thirty degrees or so cooler than we are with the same albedo plus a greenhouse effect. You will get a slightly different number if you use the IR emissivity of the planet (generally taken to be on the order of 0.95 or so), but the basic principle is sound.
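The roughly -20°C figure follows directly from inverting the Stefan-Boltzmann law for the 235 W/m² in the thought experiment:

```python
sigma = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
F = 235.0                       # absorbed flux from the thought experiment, W/m^2

T = (F / sigma) ** 0.25         # blackbody temperature emitting that flux
print(T, T - 273.15)            # ~253.7 K, i.e. about -19 C
```

That is on the order of thirty degrees below the observed ~288 K surface mean, which is the comparison being made.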

Willis Eschenbach wrote at 21:29:37, 9th March:
“The only way a greenhouse can get enough energy to heat the world and provide for all the losses is if it has two physically separated radiative areas.”

That’s pretty interesting, as there are two or three radiating-to-space areas, I reckon:
1. A water vapour area, somewhere around the cloud tops. This does most of the emitting to space, because it is hotter (around -10 DegC), and because water vapour can emit across a wide range of frequencies.
2. A CO2 area, the bottom of which I make to be around the Tropopause, and extending right up through the Stratosphere. So much of it is quite cold, around -55 DegC, but with a neutral or positive temperature gradient.
3. An Ozone radiation zone in the lower stratosphere, i.e. somewhere near the top of the CO2 layer.

I think one might easily demonstrate this CO2 masking by using a couple of overlay transparencies – one showing the pre-industrial absorption bands and an overlay showing the effect of the CO2 added in the modern era.

Spector, you can do this easily using the MODTRAN line-by-line online radiation calculator.

It is worth noting that MODTRAN gives a smaller number than the IPCC for a doubling of CO2. For a clear-sky doubling from 375 to 750, MODTRAN gives a surface warming of 1.3°C …

To get this figure, first run MODTRAN at 375 ppmv. Then write down the amount of radiation escaping the earth, “Iout”.

Then set the CO2 to 750 ppmv and run it again. You will notice that “Iout” is smaller (because more is absorbed).

But Iout must remain constant (the heat gained by the earth must be lost to stay in equilibrium.) So you need to input a “Ground T offset”, in effect warming the earth to increase Iout to its original number. For clear sky and relative humidity remaining constant, the offset is 1.3°C.
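For comparison with the MODTRAN procedure above, the often-quoted simplified logarithmic forcing expression gives a similar no-feedback number. Both the 5.35 W/m² coefficient (the widely cited simplified fit) and the ~0.3 K per W/m² no-feedback Planck response are standard round figures I am supplying here, not values from the thread:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Simplified logarithmic forcing fit, W/m^2 (coefficient assumed, see above)
    return 5.35 * math.log(c_ppm / c0_ppm)

dF = co2_forcing(560.0)   # doubling from the pre-industrial 280 ppm
dT = 0.3 * dF             # no-feedback warming estimate, K (assumed response)
print(dF, dT)             # ~3.7 W/m^2 and ~1.1 C, the same ballpark as MODTRAN's 1.3 C
```

The logarithm is why each successive increment of CO2 produces less additional forcing than the last, which is the central point of the original post.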

OK. I’m going to break my rule of silence–sort of. I was reviewing Phil’s (10:26:06) comment, and I’m pretty sure he’s wrong. I get the feeling Phil thinks the “average emissivity” of the Earth is an average over frequency. Specifically, Phil wrote:

“In order to determine the average absorptivity of the Earth you’d integrate over all frequencies:
∫A(ν)Isun(ν)dν/∫Isun(ν)dν

Similarly for the Earth’s emissivity:
∫ε(ν)Iearth(ν)dν/∫Iearth(ν)dν

Where Isun is the emission spectrum of the sun and Iearth is the emission spectrum of the earth so while A(ν)=ε(ν) at any ν the averages are not the same because the spectra aren’t the same.”

I strongly believe that although Phil’s definition of average emissivity as an average over frequency is a valid operation, it does not apply to the problem at hand.

It certainly does (and any other one).

Phil treats each square meter of the earth as having the same property–that property being the emissivity (ratio of actual radiated power at frequency f …to… the radiated power at that frequency from a black body at the same temperature). Phil then computes the “average emissivity” by averaging this ratio over frequency.

And if you have a heterogeneous surface you integrate over the whole surface, so you end up with a triple integral: over ν and two spatial dimensions.

Gee George, why not? Everyone else on the planet seems to believe in his figures.

Phil is right; the cloud cover is supposedly 62%. Trenberth combines three cloud layers (49%, 6%, & 20%) to get that figure. He calls it “random overlap,” but to me it looks like an application of the Inclusion-Exclusion principle. The application of the principle is rather cumbersome. I prefer the shorter version: taking the complement of the product of the complements or (1 – (1 – 0.49)*(1 – 0.06)*(1 – 0.20)) = 61.6%. His famous energy diagram should state: “62% cloud cover assumed,” but it doesn’t. After this calculation, the term “cloudy” is completely ambiguous throughout the rest of the paper. Every time you see “cloudy,” does Trenberth mean 100% cloudy, 62% cloudy, or something else? We really don’t know.
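The "complement of the product of the complements" shortcut described above, in code form:

```python
layers = [0.49, 0.06, 0.20]      # Trenberth's three cloud layer fractions

clear = 1.0
for fraction in layers:
    clear *= 1.0 - fraction      # chance a line of sight misses every layer
total_cover = 1.0 - clear
print(round(total_cover * 100, 1))   # -> 61.6
```

Under the random-overlap assumption, the layers are treated as independent, so multiplying the clear fractions and taking the complement reproduces the 61.6% figure.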

He computes latent heat flux from total global precipitation: 984 mm/yr. He gets 78 W/m^2, which implies a latent heat of vaporization of 2500.79 kJ/kg. That’s the value for vaporizing water at 0 ºC. I’m not sure that’s a correct global average value, but I’m not a climate scientist.
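That arithmetic checks out: converting 984 mm/yr of precipitation to an energy flux with L = 2500.79 kJ/kg does give about 78 W/m²:

```python
precip_mm_per_year = 984.0
latent_heat = 2500.79e3                  # J/kg, the value implied above
seconds_per_year = 365.25 * 24 * 3600.0

# 1 mm of water over 1 m^2 is 1 kg, so mm/yr doubles as kg/m^2/yr.
flux = precip_mm_per_year * latent_heat / seconds_per_year
print(round(flux, 1))                    # ~78.0 W/m^2
```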

He uses two methods to compute the sensible heat flux; however the bulk aerodynamic formula works for me. That number is 24 W/m^2.

My favorite computation is the value for the atmospheric window, and I quote:

“The estimate of the amount leaving via the atmospheric window is somewhat ad hoc. In the clear sky case, the radiation in the window amounts to 99 W/m^2, while in the cloudy case the amount decreases to 80 W/m^2, showing that there is considerable absorption and re-emission at wavelengths in the so-called window by clouds. The value assigned in Fig. 7 of 40 W/m^2 is simply 38% of the clear sky case, corresponding to the observed cloudiness of about 62%. This emphasizes that very little radiation is actually transmitted directly to space as though the atmosphere were transparent.“

This is really sloppy math. The term “cloudy” is again ambiguous. If Trenberth’s cloudy term means 62%, then the correct window value is 80 W/m^2. He can stop there (but he doesn’t). If he means 80 W/m^2 is the 100% cloudy value, then he should interpolate between 99 W/m^2 and 80 W/m^2 and get something like 87 W/m^2. Apparently the 80 W/m^2 cloudy value is thrown in as a distractor, because he interpolates between 99 W/m^2 and 0 W/m^2 and rounds up to 40 W/m^2. I guess the desired answer is 40 W/m^2. Apparently no one has ever read this paper and corrected the error. His 2009 update still uses the same 40 W/m^2 for the window – without further comment.
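The two readings of "cloudy" discussed above give the values quoted; only scaling the clear-sky value alone lands near 40 W/m²:

```python
clear_sky = 99.0      # window radiation, clear sky, W/m^2
cloudy = 80.0         # window radiation, "cloudy" case, W/m^2
cloud_cover = 0.62

# If 80 is the fully-overcast value, interpolate by cloud cover:
interpolated = clear_sky * (1 - cloud_cover) + cloudy * cloud_cover
# Trenberth's choice: 38% of the clear-sky value by itself:
trenberth = clear_sky * (1 - cloud_cover)

print(round(interpolated, 1), round(trenberth, 1))   # 87.2 and 37.6
```

The cloud-cover interpolation gives ~87 W/m², while taking 38% of the clear-sky value alone gives ~37.6, which rounds up to the 40 W/m² in the paper.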

“It is immediately obvious when the air is contaminated with volcanic CO2, because as you might imagine the CO2 levels spike off the charts. These samples are not used for baseline CO2 measurements…”

If it was totally “either or”, that would be ok. If however, the atmosphere sometimes contains a little volcanic CO2, you can get whatever answer you want by adjusting the cut-off point.


Hear, hear. And what about the underwater volcanoes and volcanic activity just off the shores of the island of Hawaii, where Mauna Loa sits? How are they dealt with?

Great frauds have been perpetrated by the omission of outliers. And to that paraphrased “old” quote (from a Dr. Glassman comment on the CO2 acquittal thread at the Rocket Scientist’s Journal blog) I’d add: the use of averages with no raw-data comparison or check able to be done.

The four corners of deceit: media, government, science, and academia. Telling lies is more important to liberals than telling the truth, because by telling lies, the liberals can influence the narrative. They protested about the new media and the Internet because they recognized that they would not have as much power and influence. In their minds, only they are qualified to determine what is newsworthy. Their arrogance is only exceeded by their willingness to practice deliberate ignorance about certain stories and facts that do not fit the template. Lies and lying are corrosive and pathological. The liberals are the most dangerous and intolerant of all. Man-made GW and man-made CC are unproven hypotheses. Most reporters do not know what a hypothesis is. Reporters are innumerate and lack basic science knowledge. Their eyes glaze over when data is represented on a graph. They willingly brag about their inability to perform simple arithmetic. Do not drink the Kool-Aid. Do not believe the drive-by reporters.

It appears that most of us have suffered through this book during our education. I’ve never had a problem with Reif’s actual content, but his way of packaging the explanations leaves a lot to be desired. My own TD prof, way back then, bitched about it constantly, but for some reason continued to require the book for his courses. We came to the conclusion that he liked to have something that “bugged him” so that he would be inspired to teach us the “right” way.

On many later occasions, whenever I have referred to the book for some purpose, all of that has popped right to the surface of my mind, so perhaps it wasn’t such a bad strategy after all. I do, however, try not to inflict this approach on my own students. The pain/gain ratio doesn’t seem to be worth it.

I agree with Jim Masterson (00:07:34) that you can’t take Trenberth’s diagram at face value.

For example, the reflectance of sunlight by clouds (76) versus the surface (29) are clearly off. It should be close to half each or the math on Albedo would never work.

The average Albedo of clouds themselves depends on their thickness, water content, height and type and these amounts have been over-estimated in Trenberth’s figures. About 15 to 20 W/m2 should be shifted between the two.

That’s the point about no more heat retention with further increases in CO2. Beyond a certain point it doesn’t matter how much more CO2 goes into the atmosphere. The net radiative result will be much the same at 300 ppm as at 600 ppm. Fine analogy re: sheep and gates.

I find the constant reference to the 30 degree greenhouse effect very puzzling. Based on the Stefan-Boltzmann law, the Earth as a black body should be about 278 degrees and not the often quoted 258 degrees. Most climate scientists ignore the fact that the Earth is a system involving oceans and the atmosphere, and both are contributing to the so-called greenhouse effect. You cannot just say the earth without the atmosphere would be a certain temperature, because you have no scientifically valid basis for such an assessment. The albedo factor often used refers only to the visible-light albedo. More than half of the sun’s radiation is outside the visible range and exhibits a much lower albedo. Further, the visible albedo which has been measured is a variable and seems to be controlled somehow by solar activity. Throw in the fact that the solar constant is not in fact constant, and you can see that any analysis based on the 30 degree warming factor is spurious. There are far too many variables and far too little understanding of the processes involved.
============
Well put. Somebody sometime should write an account of all the huge uncertainties that plague every single aspect of this supposedly settled science, starting with the carbon cycle itself and the bizarre assumption that the ocean-land-plant systems are incapable of dealing with such a small fraction of the total carbon transfers between them and the atmosphere.

The following letter by J. Marvin Herndon published in Current Science adds yet another important potential variable, seldom considered: Variations in heat reaching the surface from the Earth’s core.

[…]
Models of the earth, based upon the incorrect assumption that the earth in the main is like an ordinary chondrite meteorite, are widespread and have led to the assumption that the heat coming out of the earth is constant. The reason for assumed constancy is that such models are based upon the assumption that the heat exiting earth comes solely from the radioactive decay of long-lived radionuclides, which, on a human timescale, would be essentially constant. But that model of the earth is wrong.

From fundamental considerations, I have shown that the earth in the main is not like an ordinary chondrite, but is instead like an enstatite chondrite [7], which leads to the possibility of the earth having at its centre a nuclear fission reactor [8–10], called the georeactor, as the energy source and operant fluid for generating the geomagnetic field by dynamo action [11]. Unlike the natural decay of long-lived radionuclides, which change only gradually over time, the energy output of the georeactor can be variable [12]. I have also introduced the concept that the earth’s dynamics is powered by the energy of protoplanetary compression [13] and suggested a process whereby such energy may be deposited at the base of the crust [14]. There is no reason to assume that the release of stored protoplanetary compression energy would be constant. Such potentially variable energy exiting the earth may contribute not only to variability in the overall heat budget of the earth, but in exiting undersea may effect changes to sea-water circulation currents, which may potentially affect the global weather patterns. The degree and extent has not yet been measured [15].

I agree with Jim Masterson (00:07:34) that you can’t take Trenberth’s diagram at face value.

For example, the reflectance of sunlight by clouds (76) versus the surface (29) are clearly off. It should be close to half each or the math on Albedo would never work.

The average Albedo of clouds themselves depends on their thickness, water content, height and type and these amounts have been over-estimated in Trenberth’s figures. About 15 to 20 W/m2 should be shifted between the two.
“”
Bill I agree too that there are some problems with Trenberth’s early paper. I think though it is more honest than his later one.

I disagree about the surface vs the cloud / atmosphere albedo. If you back out the numbers, you’ll see he is close on that one. Most of the surface is ocean, and for where it matters, the ocean albedo is under about 0.04, and that is the vast majority of surface area. Something like Mars or the Moon indicates rock albedo at around 0.16, and typical measurements suggest vegetation around 0.1 to 0.2. Putting in a reasonable estimate based on these sorts of things yields a surface albedo estimate around 0.08. Measurements indicate we’ve got right around 0.3 total albedo and around 0.62 fractional cloud cover. That leaves about 0.22 for clouds (and atmosphere) albedo. Voila, his rough numbers for albedo reflection. I’m sure it’s not perfect and I’m sure it changes substantially also, but it’s a fair first hack at numbers. It is also not a very good set of numbers for agw proponents. Trenberth’s later paper seems to try to ‘help’ the numbers along a bit in obviously biased fashion.
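Backing out the numbers as described can be written out in a few lines (my reconstruction using the comment’s ballpark albedos; the 0.71/0.29 ocean/land area split is an assumed standard figure, not from the comment):

```python
# Area-weighted surface albedo from the comment's ballpark figures.
ocean_frac, ocean_albedo = 0.71, 0.04   # most of the surface, very dark
land_frac, land_albedo = 0.29, 0.15     # rock ~0.16, vegetation ~0.1-0.2
surface_albedo = ocean_frac * ocean_albedo + land_frac * land_albedo
print(round(surface_albedo, 3))   # ~0.072, i.e. "around 0.08"

# Whatever remains of the measured ~0.30 total albedo must come from
# clouds plus the atmosphere.
total_albedo = 0.30
cloud_and_atmos = total_albedo - surface_albedo
print(round(cloud_and_atmos, 2))  # ~0.23, close to the comment's 0.22
```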

As for numbers not properly adding up in the cartoon, remember clouds are particles of liquid and solid h2o, not vapor, so they have a continuum emission rather than merely h2o emission lines. Clouds contain localized internal thermal convection and lower & upper T values which radiate at different rates. Cloud tops are also above almost all of the h2o vapor and a bunch of the co2 gas.

I wish some of these scientific posts were “peer-reviewed” before being posted at wattsupwiththat so that the site can maintain its scientific reputation. This post might benefit from some checking and revision.

1) Estimates of the earth’s temperature without a greenhouse effect depend on whether you include the full sunlight (no albedo) or non-reflected sunlight (with albedo). Clouds and ice, made from the GHG water vapor, play an important role in the earth’s albedo.

2) The absorptions of carbon dioxide, water vapor and other greenhouse gases overlap, making it misleading to say that carbon dioxide contributes only 10% of the greenhouse effect. As one goes up in altitude (and temperature drops), the relative importance of water vapor and carbon dioxide to the greenhouse effect changes dramatically. Most heat is removed from the earth’s surface by evaporation (latent heat) and convection to the upper troposphere – NOT by direct radiative cooling to space. Only the upper troposphere cools mostly by radiation, and there CO2 is the most important greenhouse gas. (See Lindzen’s 2007 article.)

3) The IPCC’s analysis also uses a logarithmic relationship between CO2 and radiative forcing. The accepted coefficient is 3.7 W/m^2 for 2X CO2 rather than the 2.94 from Eschenbach’s post (a minor difference). It isn’t clear what factor this post uses to convert a forcing in W/m^2 into a temperature increase in degC. Monckton’s APS article discusses several possible values, all of which are similar. A similar value can be obtained by differentiating the Stefan-Boltzmann law (W = σT^4) to get dW/dT = 4σT^3, substituting W/(σT) for T^3 to get dW/dT = 4W/T, and rearranging terms to dT/T = (1/4)*dW/W. A 1% increase in radiation (dW/W; from a hotter sun or radiative forcing by GHGs) will cause a 0.25% increase in temperature (dT/T) in degK (not degC). Since the accepted radiative forcing from 2X CO2 is 3.7 W/m^2, one can calculate that the direct warming from 2X CO2 (without feedbacks) will be only 1 degC. This 1 degC is the only part of the AGW hypothesis that might be termed “settled science”, but the IPCC doesn’t want us to know that 2X CO2 without feedbacks produces non-catastrophic warming.
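The differentiation in point 3 can be checked numerically. This sketch assumes the usual effective emission temperature of ~255 K and outgoing longwave flux of ~240 W/m^2, neither of which is stated in the comment:

```python
# No-feedback warming for 2x CO2 from the differentiated Stefan-Boltzmann
# law: dW/dT = 4*sigma*T^3, which rearranges to dT ~ (T/4) * (dW/W).
T = 255.0   # K, assumed effective radiating temperature
W = 240.0   # W/m^2, assumed outgoing longwave flux
dW = 3.7    # W/m^2, accepted forcing for a doubling of CO2
dT = (T / 4.0) * (dW / W)
print(round(dT, 2))   # 0.98 K, i.e. "about 1 degC without feedbacks"
```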

4) Your bar graph appears to be wrong. The log2 of zero is negative infinity, so you can’t calculate a temperature rise for the first 20 ppm of CO2. You appear to be calculating the temperature rise associated with a rise from 1 ppm to 20 ppm – a 20X increase or 4 doublings. Using the values in paragraph 3), that temperature rise should be about 4 degC (so you may be off by a factor of 2 somewhere).

5) Using the bar graph turns a simple calculation into something hard to understand (summing up the contributions of many little bars). From 180 ppm CO2 (LGM) to 280 ppm (pre-industrial) is a 1.55-fold increase, and the log2 of 1.55 is 0.64. You can then calculate the increase in radiative forcing using Eschenbach’s (2.94) or the IPCC’s (3.7) factor for the radiative forcing associated with 2X CO2, before converting to degC. Or more simply, a little more than half a doubling is a little more than 0.5 degC without feedbacks.
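Points 4 and 5 reduce to a couple of logarithms; here is the arithmetic written out (my own sketch, using the coefficients discussed above):

```python
import math

# Logarithmic CO2 forcing, dF = F2X * log2(C / C0), with the IPCC's
# F2X = 3.7 W/m^2 per doubling (Eschenbach's MODTRAN case gave 2.94).
F2X = 3.7

def forcing(c_ppm, c0_ppm):
    return F2X * math.log2(c_ppm / c0_ppm)

# Glacial (180 ppm) to pre-industrial (280 ppm): a bit more than half a doubling.
print(round(math.log2(280 / 180), 2))   # 0.64 doublings
print(round(forcing(280, 180), 2))      # 2.36 W/m^2

# Point 4's caveat: "the first 20 ppm" only has a finite forcing relative to
# some nonzero starting level, e.g. 1 ppm -- which is over four doublings.
print(round(math.log2(20 / 1), 2))      # 4.32 doublings
```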

6) Although cold water does hold more CO2 than warm, CO2 won’t get low enough to endanger plants because plants are the main consumers of CO2 not cold water. If you put plants in a sealed system, photosynthesis stops when CO2 level reaches 100 ppm. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC439845/?report=abstract)
This happens because water escapes out the same openings that CO2 enters, making it wasteful to try to conduct photosynthesis when there is relatively little CO2 around. (Photosynthesis also shuts down every night when there aren’t enough photons around.)

7) The IPCC’s models don’t assume that the water vapor feedback begins at 280 ppm. They use a figure of +2.0 W/m^2/degC for the water vapor feedback – whether the temperature was 4 degC lower and CO2 was 180 ppm during the last ice age, or 4 degC and 500 ppm higher in 2100. You are completely correct in calculating that the radiative forcing associated with anthropogenic increases in greenhouse gases (1.5X for CO2 alone, 1.75X for all long-lived GHGs) should have raised temperatures far more than observed if feedbacks amplify direct warming by CO2 from 1 degC/2XCO2 to 1.5-4.5 degC/2XCO2. Although the IPCC doesn’t like to publicize this discrepancy, they “fix” this problem by including cooling from aerosols (global dimming) – a phenomenon that is currently assumed to negate 25-75% of the current warming due to increased GHGs. With uncertainty this large, no one can prove whether this hypothesis is correct – but the science is nevertheless “settled” and we are “>90% certain that the observed 0.7 degC warming is mostly due to man”.

The basic premise or presupposition that CO2 is a so-called “greenhouse” gas has failed to be demonstrated to a level of scientific certainty in my opinion.

(Without scientific certainty of the basic premise, no action should be taken.)

Water vapor and CO2 are not in the same ballpark either in terms of concentration in the atmosphere, and just as important, behavior in the atmosphere.

CO2 is a trace molecule that disperses evenly by rapid diffusion from point sources, with no concentration at any height in the atmosphere or in any geographical area.

Of course, water vapor is both concentrated by geographical area (air masses above geographical areas – to wit, storm fronts, low-pressure and high-pressure systems) and by atmospheric height (from ground fog to stratospheric clouds). And water vapor has reflective properties in cloud cover and heat retention in high-humidity clear areas.

Again, CO2 has none of those physical properties in the Earth’s atmosphere.

There is a woeful lack of basic science that supports the “greenhouse” supposition.

And, unless or until the basic “greenhouse” premise of CO2 is demonstrated in controlled laboratory experiments that are analogous to Earth’s atmosphere (both concentration and behavior), hairsplitting at this ‘parts per million’ or that ‘parts per million’ below some upper limit is a waste of time.

Basic Science has been neglected in the rush to generate momentum for a (dubious) political agenda.

Let me say, and this might bring some brickbats my way: too many folks here on this website have accepted the basic premise of CO2 as a “greenhouse” gas and go on to argue splitting hairs.

In a sense, these folks have conceded more than half the battle (scientific argument) and are already arguing on AGW turf (the position of the deck chairs on the Titanic).

Frank:
Only the upper troposphere cools mostly by radiation and there CO2 is the most important greenhouse gas. (See Lindzen’s 2007 article.)
Henry@ Frank

Sorry Frank, you lost me here. How do you people always seem to know absolutely for sure that CO2 is a greenhouse gas?
What testing can you refer to that confirms that CO2 is a greenhouse gas?
The trick they used (to convince us) is to put a light bulb on a vessel with 100% CO2.
But that is not the right kind of testing.
You must look at the spectral data. Then you will notice that CO2 has absorption in the 14-15 um range causing some warming (by re-radiating earthshine) but it also has a number of absorptions in the 0-5 um range causing cooling (by re-radiating sunshine). So how much cooling and how much warming is caused by the CO2? How was the experiment done to determine this and where are the test results? If it has not been done, why don’t we just sue the oil companies to do this research? (I am afraid that simple heat retention testing will not work here, we have to use real sunshine and real earthshine to determine the effect in W/m3 [0.04%]CO2/24hours)

This happens because water escapes out the same openings that CO2 enters, making it wasteful to try to conduct photosynthesis when there is relatively little CO2 around. (Photosynthesis also shuts down every night when there aren’t enough photons around.)
<<

It depends on what you mean by “photosynthesis.” There is a subset of C4 plants called CAM (it includes many desert plants, ice plants, and bromeliads such as the pineapple) that keep their stomata closed during the day (ostensibly to conserve water) and perform CO2 processing at night.

Frank (08:39:47) :
I wish some of these scientific posts were “peer-reviewed” before being posted at wattsupwiththat so that the site can maintain its scientific reputation. This post might benefit from some checking and revision.

As would yours!

6) Although cold water does hold more CO2 than warm, CO2 won’t get low enough to endanger plants because plants are the main consumers of CO2 not cold water.

George E. Smith (18:24:32) :
And I have to say I am quite suspicious of some of Trenberth’s numbers. Well for a start, I disagree totally with the phony construct of dividing the solar insolation by 4, since the relation between incoming flux and surface temperature reached is non-linear, so that process undervalues the peak surface temperatures reached and so underestimates the LWIR emission.

The division by 4 is necessary to determine the average incoming flux at the surface, if you wanted to calculate the local intensity you could scale by cos() but if you integrate over the whole surface you’d still end up with a factor of 4. “””

Phil, of course I know THAT it is traditional to divide the real TSI numbers by four on the theory that a circle has area pi.r^2, while a sphere has area 4.pi.r^2. I just don’t know why they would do that, since Gaia doesn’t do that.
I also know that at any moment somewhat more than half of the total earth surface, is sunlit; albeit under some cloud cover some places. Based on the
0.5 degree angular diameter of the sun, and the approximately one degree horizon refraction by the atmosphere, I estimate it (stick-in-the-sand wise) at somewhere in the 51 to 52% range.

I’m quite surprised by your 62% average cloud cover figure; 50% is the number I have seen most, but I admit I haven’t done a lot of digging; so I’m ambivalent about it. If it is indeed 62%, then that would give me an even more jaundiced view of the albedo importance of surface ice and snow.

But I reject the idea of quartering the TSI, simply because that might be the average for any single surface location over time.

If I put my hand in the freezer half the time, and in boiling water half the time, on average the temperature might not be too bad. The problem is my thermal time constant is short enough that I reach temperature equilibrium in either location, so I don’t get the benefit of that lower average.

Neither does the earth surface. The thermal effects and weather/climate effects that take place during a surface insolation of 1000 W/m^2 are quite different from what happens at 1/4 of that value; so the problem is a non-linear one. Why not model a real planet that rotates under its sunlight, rather than a fictional one that is uniformly illuminated over 4pi space; even at the poles at midnight.
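The freezer-and-boiling-water point can be made quantitative: because temperature goes as the fourth root of flux, the temperature computed from the average flux is not the average of the local temperatures. A toy two-zone illustration (my own sketch; the 1000 W/m^2 figure is from the comment, and the zero night-side absorption is an idealization):

```python
# T = (W / sigma)^(1/4) is concave in W, so "temperature of the mean flux"
# and "mean of the local temperatures" differ substantially.
SIGMA = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant

def temp(flux):
    return (flux / SIGMA) ** 0.25

# Toy two-zone planet: 1000 W/m^2 absorbed on the day side, 0 on the night side.
day, night = 1000.0, 0.0
t_of_mean = temp((day + night) / 2)         # temperature of the averaged flux
mean_of_t = (temp(day) + temp(night)) / 2   # average of the local temperatures
print(round(t_of_mean, 1))   # ~306 K
print(round(mean_of_t, 1))   # ~182 K -- a very different climate
```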

As to the atmosphere + clouds, the atmosphere treats incoming and outgoing (radiation) similarly (in an optical sense); allowing of course for the spectral differences. That’s not so for the clouds.

The sun is a near point source, 0.5 deg angular diameter, so clouds cast a shadow with a half-degree penumbral edge. In that shadow zone the surface is cooler; depending on the density of the cloud, the cloud attenuates the complete beam as seen in the shadow zone.

BUT the surface in that shadow zone is a diffuse radiator; at least Lambertian, in the case of an optically smooth surface such as calm water, and more likely near isotropic for other rough surfaces. So the same cloud that formed that shadow zone, can only intercept a small fraction of the outgoing diffuse LWIR from that very same shadow zone. The difference is most noticeable in the case of essentially total opacity of the cloud. No sunlight reaches the ground (in the shadow zone) yet almost all of the LWIR being emitted from that same spot, still escapes around the cloud.

Yes I know that there can be cloud layers larger than the cloud height, where my model only applies to the perimeter of the cloud rather than the total area; but that doesn’t change the fact that clouds are a lot more effective in blocking incoming sunlight than they are in blocking the diffuse outgoing LWIR. And yes I also do realize that the cloud will also intercept diffuse LWIR from outside the shadow zone.

If the cloud tops are hotter than the cloud bottoms (don’t dispute that), then they also will radiate more upwards, than the bottoms do downwards. I still say the exit path is favored over the earthly direction.

And Trenberth does label that 324 W/m^2 as BACK RADIATION; he does not mention any back conduction or convection; but he does show some rain; but surprisingly no snow. I haven’t given a lot of thought to the Latent heat consequences of precipitation. Too many balls in the air at one time for this old head.

James! I am puzzled how your answer ended up before my posting here, but I am glad that we seem to agree!
It appears that no testing has been done in the manner that we both would like to have seen it done.
In addition, I should tell you that they recently discovered that there is also absorption of CO2 in the UV region… I think this makes it even more complicated….so it also acts a bit like ozone….

It appears that most of us have suffered through this book during our education. I’ve never had a problem with Reif’s actual content, but his way of packaging the explanations leaves a lot to be desired. My own TD prof, way back then, bitched about it constantly, but for some reason continued to require the book for his courses. We came to the conclusion that he liked to have something that “bugged him” so that he would be inspired to teach us the “right” way. “””

Dr Bill, I have to plead ignorance of Reif’s textbook, since my early education was half a world away; “far flung” is the word we use. So I can’t comment intelligently on it, but if he says Kirchhoff doesn’t require thermal equilibrium, he is plain wrong on that factoid.

A whole lot of science is plagued by incorrect application of principles in situations where they simply don’t apply. They may still be valuable principles; but we can’t use them willy nilly as if they are universally true. I think a whole lot of second law utterances, suffer from that same problem.

As in LWIR photons from earth are not allowed to land on the sun, because it is hotter than the earth. Electromagnetic radiation (photons) and “heat” are two different species, that have no means of communicating with each other; hell one of them isn’t even a noun.

My students were all pre-med, so becoming fluent in even elementary Optics and Atomic Physics, was not out front and center in their career objectives.

My, My, My. What a confused mess this post is. Either David Archibald doesn’t have the slightest grasp of logical consistency, or he does and his post is deliberately deceptive. You decide.

He begins with the basic point that without any greenhouse gases in the atmosphere the temperature would be 30 DegC colder. This is not contentious, its basic Thermodynamics. And the known logarithmic radiation behaviour of CO2 is also well known. “””

Well Glenn, you apparently are the possessor of the Rosetta stone.

I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” well known information, for some time, so far with no luck whatsoever.

I would appreciate a citation, preferably from some established scientific journal; or either a physical model of the process that leads to such a logarithmic radiation behavior, or else a good measured or credible proxy data set showing such a relationship experimentally. I have lots of both measured and proxy-reconstructed data sets over a variety of time scales out to so far 600 million years, and CO2 “doublings” of five times (well, actually doublings^-1), and so far, none of those data sets I have found demonstrates a logarithmic CO2 radiation behavior; nor have I found any physics textbook or journal papers on any physical process leading to such a result.

George E. Smith (09:54:47) :
Phil, of course I know THAT it is traditional to divide the real TSI numbers by four on the theory that a circle has area pi.r^2, while a sphere has area 4.pi.r^2. I just don’t know why they would do that, since Gaia doesn’t do that.

Presumably because they need to equate incoming with outgoing?

I’m quite surprised by your 62% average cloud cover figure; 50% is the number I have seen most, but I admit I haven’t done a lot of digging; so I’m ambivalent about it. If it is indeed 62%, then that would give me an even more jaundiced view of the albedo importance of surface ice and snow.

If the cloud tops are hotter than the cloud bottoms (don’t dispute that – I do, it’s the other way round!), then they also will radiate more upwards, than the bottoms do downwards. I still say the exit path is favored over the earthly direction.

Henry Pool (10:00:16) wrote: “In addition, I should tell you that they recently discovered that there is also absorption of CO2 in the UV region… I think this makes it even more complicated….so it also acts a bit like ozone….”

Henry, if I understand your comment correctly, it suggests that possibly CO2 high in the atmosphere absorbs UV radiation (energy) before it gets into the lower atmosphere, thus, reducing energy that could serve to warm the atmosphere and/or surface.

Now, I may misunderstand the implications you report or the results of the mechanism the report suggests, but this possible mechanism certainly hasn’t been incorporated into the theoretical models used to predict AGW as a result of Man-caused CO2 concentrations in the atmosphere.

If so, then there is another reason to reject calls for limiting CO2, let alone severe CO2 reduction.

There is so much Science doesn’t know about CO2 concentrations in the atmosphere.

That’s a state of knowledge that hardly justifies government intervention and regulation likely to depress economic activity right at a time when the U.S. economy and most of the world’s economy is already depressed.

George E. Smith (10:29:26) :
“”” Glenn Tamblyn (19:25:01) :
And the known logarithmic radiation behaviour of CO2 is also well known. “””

Well Glenn, you apparently are the possessor of the Rosetta stone.

I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” well known information, for some time, so far with no luck whatsoever.

I would appreciate a citation, preferably from some established scientific journal; or either a physical model of the process that leads to such a logarithmic radiation behavior, or else a good measured or credible proxy data set showing such a relationship experimentally. I have lots of both measured and proxy-reconstructed data sets over a variety of time scales out to so far 600 million years, and CO2 “doublings” of five times (well, actually doublings^-1), and so far, none of those data sets I have found demonstrates a logarithmic CO2 radiation behavior; nor have I found any physics textbook or journal papers on any physical process leading to such a result.

So you could do us all a favor if you have the answer.

For the general behavior you could try here; it’s a long-used approach of astronomers. Basically weak absorbers are ~linear (Beer’s Law), moderate absorbers ~log, and strong absorbers ~square root (scan down to “Equivalent Width Versus Line Strength”): http://web.njit.edu/~gary/321/Lecture6.html
You can find a detailed derivation using Voigt profiles in some advanced optics texts.
The GHGs come from all three regimes: CFCs; CO2; CH4 & N2O
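Two of those regimes can be seen numerically. A minimal sketch (my own, not from the linked lecture): the equivalent width of a single Lorentz-profile line grows linearly with optical depth when weak and as the square root when strongly saturated. (The logarithmic regime belongs to Doppler-core saturation, which this toy profile omits.)

```python
import math

# Equivalent width of one absorption line with a Lorentz profile
# phi(x) = 1 / (1 + x^2):  W(tau0) = integral of (1 - exp(-tau0 * phi(x))) dx.
# Weak lines (tau0 << 1) grow linearly with tau0 (Beer's Law regime);
# strongly saturated Lorentz lines grow ~ sqrt(tau0) (damping-wing regime).
def equiv_width(tau0, xmax=2000.0, n=400000):
    dx = 2.0 * xmax / n
    total = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * dx   # midpoint rule over [-xmax, xmax]
        total += (1.0 - math.exp(-tau0 / (1.0 + x * x))) * dx
    return total

# Weak regime: doubling the absorber roughly doubles the width (linear).
r_weak = equiv_width(0.02) / equiv_width(0.01)
# Strong regime: it takes QUADRUPLING the absorber to double the width (sqrt).
r_strong = equiv_width(400.0) / equiv_width(100.0)
print(round(r_weak, 2))    # ~2.0 for a 2x increase
print(round(r_strong, 2))  # ~2.0 for a 4x increase
```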

“It is immediately obvious when the air is contaminated with volcanic CO2, because as you might imagine the CO2 levels spike off the charts. These samples are not used for baseline CO2 measurements…”

If it was totally “either or”, that would be ok. If however, the atmosphere sometimes contains a little volcanic CO2, you can get whatever answer you want by adjusting the cut-off point.

Hear, hear, and what about the underwater volcanoes / volcanic activity just off the shores of Mauna Loa island, how are they dealt with?

I say again. The samples are taken up at something like a 13,000 feet elevation or so. At night, the island is colder than the ocean. Cold air sinks, and blows down the volcano on all sides. This air has travelled thousands of miles across the Pacific, so it is free from recent additions of CO2 and is well mixed.

CO2 from the volcano, on the other hand, is not well mixed. It is generally more concentrated near the ground. And it changes quickly with time, as an errant wind blows it about.

As a result, there is very little difference between two consecutive days’ good samples. And there is virtually no difference between good tower and ground-level samples. And there is virtually no difference between the multiple good samples taken over the course of each night.

This makes it very easy to identify bad samples, they stick out clearly against a background of good samples.

Regarding undersea volcanoes, the wind is not coming off the sea. It is coming downwards from the upper troposphere. That’s why the measurements are taken there.

As I said, I’m as skeptical as a guy gets, and I’ve looked hard at this question. Temperature measurements are a huge problem … but in my opinion, background CO2 measurements are not.

3) The IPCC’s analysis also uses a logarithmic relationship between CO2 and radiative forcing. The accepted coefficient is 3.7 W/m^2 for 2X CO2 rather than the 2.94 from Eschenbach’s post (a minor difference).

Frank, the 2.94 was the calculation by MODTRAN for a specific situation of clouds and latitude. You are correct that the IPCC uses 3.7 W/m2 as a global average, as do I. However, I have never seen a scientific explanation for the derivation of that number (3.7 W/m2), I use it simply because I have nothing better.

George Smith. If you care to continue writing, I’d be happy to read what you have to say. However, I think I’ve exhausted my “understanding” of “grey body” absorption and “grey body” emission. So, unless a specific point comes up that I’m almost positive I know is incorrect, I’m going to let this subject go. I still believe it is incorrect to use a T^4 law for emission when the emissivity is a function of frequency. Furthermore, although it’s a matter of semantics, I believe a “grey body” is a surface whose absorptivity and emissivity are equal and independent of frequency. The absorptivity/emissivity of a grey body surface may change with time, but at any instant in time, the emissivity and absorptivity are equal. I may be wrong, but as the saying goes: “that’s my story and I’m sticking to it.”

Again, thanks for the stimulating discussion. “””

Well Reed, I think it might be helpful, if you adjusted your understanding of what “Grey Body” really means.

First off it is apparent to me, that you do understand what a “Black Body” is.

Of course it is a quite fictional edifice, as are ALL scientific models; but it happens to be one which developed completely from first principles, along with the new introduction of Planck’s constant (h), and it is one of the crown jewels of modern physics. Arguably (h) is not even a new introduction, since it is also enshrined in Einstein’s E = h.nu; which won him a Nobel prize in Physics, whereas E = m.c^2 did not. As it turns out, with proper selection of materials and cavity construction, very close laboratory approximations to a Black Body are possible, so they are very useful devices, and along with other methods, have enabled the verification of Planck’s law to very high precision, and with the help of astronomy, over extremely wide spectral ranges. So few would venture to dismiss black body radiation. The Planck law sets an upper boundary to the radiation AT ANY WAVELENGTH that can be emitted by any physical body; solely as a result of the Temperature of that body.

Many real bodies either absorb or emit “almost as much” EM radiation as a BB over a broad enough spectrum to almost qualify as black bodies, falling short either at the spectrum edges, or in the completeness of absorption or emission as the case may be; the result is that their absorption or emission spectrum at a particular temperature “looks like” that of a black body, but with less emission or absorption than a true black body.

A real body that absorbs or emits some fraction of the energy of a true black body, that is reasonably constant over a wide enough spectral region is referred to as a “Grey Body”. Wide enough spectral region practically means, from about 0.5 of the spectral peak wavelength, to 8 times the peak wavelength.

So for a solar spectrum absorber, that would be from about 250 nm to 4.0 microns wavelength. That contains 98% of the solar spectrum, with one percent lost beyond each end.
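The “0.5× to 8× the peak wavelength” rule of thumb can be checked directly against Planck’s law. A small numerical sketch (pure-stdlib midpoint integration, assuming a solar effective temperature of ~5778 K) reproduces the ~98% figure quoted above:

```python
import math

def planck_lambda(lam, T):
    """Blackbody spectral radiance per unit wavelength, W·m^-3·sr^-1 (Planck's law)."""
    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    return (2*h*c**2 / lam**5) / math.expm1(h*c / (lam*k*T))

def band_fraction(T, lam_lo, lam_hi, n=50000):
    """Fraction of total blackbody power emitted between lam_lo and lam_hi."""
    sigma = 5.670374419e-8
    total = sigma * T**4 / math.pi        # whole-spectrum integral of planck_lambda
    d = (lam_hi - lam_lo) / n
    s = sum(planck_lambda(lam_lo + (i + 0.5)*d, T) for i in range(n))
    return s * d / total

T_sun = 5778.0                            # approximate solar effective temperature
lam_peak = 2.897771955e-3 / T_sun         # Wien's displacement law, ~0.50 microns
frac = band_fraction(T_sun, 0.5*lam_peak, 8.0*lam_peak)
print(f"fraction of power between 0.5x and 8x the peak wavelength: {frac:.1%}")  # ~98%
```

The integral lands at roughly 98%, with about a percent lost beyond each end of the band, consistent with the figures in the comment.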

The “Grey” part of the description really has two facets. It’s not black, so the absorptance (or emittance) is less than 1.0. BUT it is also NOT colored, if the absorptance or emittance is reasonably constant over that wavelength range.

So a real body could be a grey emitter or a grey absorber; but it is less important that it be both.

A spectrally selective absorber, such as would be designed for the solar thermal collector I mentioned, that absorbs all or most of the solar spectrum (0.25 to 4.0 microns) but has a very low emissivity in the spectral range from 4.0 microns to say 40 microns (so it doesn’t want to emit a lot of LWIR), is certainly not a black body, and it isn’t a real grey body either, although for the incoming solar spectrum it is “grey enough”.

We might call such a body a “blue” body. Conversely, a body that did the reverse spectrally we would call a red body; or maybe pink, if it didn’t absorb or emit very strongly. This is somewhat analogous to the terms “blue noise” and “pink noise” that analog circuit designers talk about, to distinguish it from spectrally flat “white” (Gaussian) noise.

Pink noise is often also “1/f” noise, where the noise peak amplitude can grow without limit, but with ever-diminishing frequency of occurrence. It can be shown that for true 1/f noise, the energy content is the same in any octave of the spectrum.

So a noise glitch that occurs only once every 24 hours on average may be very large, but there’s not much energy associated with the average of such events. I often say that the “Big Bang” was simply the bottom end of the 1/f noise spectrum.

But back at the “grey body”: the ocean is a pretty good example of a grey body. Water has a refractive index of about 1.333 over quite a large spectral range, certainly most of the solar spectrum. There are some anomalous index regions, often around 1.0 microns, due to a molecular resonance absorption, but 1.33 works over a wide range. The normal-incidence Fresnel surface reflection coefficient is given by ((n-1)/(n+1))^2.
So we have ((4/3 - 1)/(4/3 + 1))^2 = ((1/3)/(7/3))^2 = (1/7)^2 = 1/49, about 2%.

This is the normal incidence reflection coefficient of water; which means that 98% of the normal incidence solar spectrum range of wavelengths propagates into the water, where ultimately it is absorbed by something unless the water is shallow.
At angles other than normal incidence you have to take polarization into account, and the reflection coefficient for one polarization goes to zero at the Brewster angle of incidence, B = arctan(n), which is 53 deg for water.

At 53 deg off axis, the reflected light is plane polarized; but now the reflection coefficient for the reflected polarization has increased somewhat over the 2%. The net result is that the total reflection coefficient remains almost constant at the 2% level out to the Brewster angle (53 deg for water), and then both polarizations experience increased reflection, up to 100% at 90 deg incidence (grazing).
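The Fresnel arithmetic above can be sketched numerically (assuming a constant n = 1.333; real water is dispersive, as noted):

```python
import math

def fresnel_R(theta_deg, n=1.333):
    """Reflectances (Rs, Rp) of an air-to-water interface from the Fresnel equations."""
    ti = math.radians(theta_deg)
    tt = math.asin(math.sin(ti) / n)    # Snell's law: refraction angle in the water
    rs = (math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))
    rp = (n * math.cos(ti) - math.cos(tt)) / (n * math.cos(ti) + math.cos(tt))
    return rs**2, rp**2

brewster = math.degrees(math.atan(1.333))  # arctan(n), ~53.1 deg for water
print(f"normal incidence: Rs = Rp = {fresnel_R(0.0)[0]:.4f}")          # ~0.02, the 2% above
print(f"Brewster angle {brewster:.1f} deg: Rp = {fresnel_R(brewster)[1]:.1e}")  # ~0
print(f"near grazing (89 deg): Rs = {fresnel_R(89.0)[0]:.2f}")         # climbs toward 1
```

Normal incidence gives the ~2% figure, Rp vanishes at the Brewster angle, and both polarizations climb steeply toward total reflection near grazing, as described.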

So overall, allowing for diffuse illumination, water absorbs about 97% of incident sunlight, and reflects about 3%.
So reasonably, at least for the solar spectrum, we can regard (deep ocean) water as a grey body with an absorptance of 97%.

But we have to be careful, because extreme depth is the source of most of that absorption; so we should not apply that to shallower waters, which are quite transparent over much of the solar range.

So just what the total spectral emissivity curve for water looks like, I am not sure I know; nor how it depends on water depth. I would still expect the surface to emit rather greyly in the 300K region LWIR spectrum.

I have not done any extensive analysis, as to what happens to 4th power laws when you have objects that are not approximately grey over the spectral range of interest; but I get wary, when they diverge significantly from greyness.

Phil. If the emissivity is a function of frequency, then the total power radiated by a planar unit of surface area at a temperature T is NOT the product of (a) the Stefan-Boltzmann constant, (b) the “frequency averaged emissivity”, (c) the size of the surface area, and (d) T^4. It may be a reasonable approximation, but that remains to be seen. I therefore request that you provide me with a model of emissivity as a function of frequency. For example, you might say “the emissivity is a step function: 1 for frequencies from 0 to X Hz, and 0.4 for frequencies from X Hz to infinity.” Once we have quantified the behavior of the emissivity as a function of frequency, we can compute (at least numerically) (a) the average emissivity using your definition

Similarly for the Earth’s emissivity:
∫ε(ν)Iearth(ν)dν/∫Iearth(ν)dν

Where Iearth is the emission spectrum of the earth

of “average emissivity”, and (b) using Planck’s law modified to include a frequency dependent emissivity, the total power radiated by a square meter of surface area at a temperature T, which we’ll assume is either 255 degrees or 288 degrees Kelvin (your choice). We can then compute the value of the product of (a) the Stefan-Boltzmann constant, (b) one square meter, (c) the average emissivity, and (d) T^4. It may turn out the two powers will be approximately equal, in which case the use of the T^4 law is a reasonable approximation. If they are significantly different, then the T^4 rule is a poor approximation and should not be used.
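Reed’s proposed computation is easy to run numerically. The sketch below makes two assumptions he stated for illustration: his step-function emissivity (1 below a cutoff, 0.4 above), with the cutoff arbitrarily placed at 3e13 Hz (about 10 microns). It shows that when the “average emissivity” is weighted by the Planck spectrum at the same temperature, the ε̄·σT^4 product matches the exact integral essentially by construction, while ε̄ itself shifts with temperature:

```python
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
sigma = 5.670374419e-8

def planck_nu(nu, T):
    """Blackbody spectral radiance per unit frequency."""
    return (2*h*nu**3 / c**2) / math.expm1(h*nu / (k*T))

def eps(nu, cutoff=3e13):
    """Reed's illustrative step function: emissivity 1 below the cutoff, 0.4 above.
    The 3e13 Hz (~10 micron) cutoff is an arbitrary choice for the experiment."""
    return 1.0 if nu < cutoff else 0.4

def integral(f, lo, hi, n=50000):
    """Midpoint-rule numerical integration."""
    d = (hi - lo) / n
    return sum(f(lo + (i + 0.5)*d) for i in range(n)) * d

def powers(T):
    hi = 2e14  # the Planck tail beyond this is negligible at terrestrial temperatures
    num = integral(lambda nu: eps(nu) * planck_nu(nu, T), 1e9, hi)
    den = integral(lambda nu: planck_nu(nu, T), 1e9, hi)
    eps_bar = num / den                # Planck-weighted "average emissivity" at this T
    exact = math.pi * num              # true emitted power per m^2
    approx = eps_bar * sigma * T**4    # "average emissivity" times the T^4 law
    return eps_bar, exact, approx

for T in (255.0, 288.0):
    eps_bar, exact, approx = powers(T)
    print(f"T={T:.0f} K: eps_bar={eps_bar:.4f}  exact={exact:.2f}  approx={approx:.2f} W/m^2")
```

The two powers agree at each temperature, but the average emissivity itself differs between 255 K and 288 K, so a single ε carried across temperatures is only approximate, which is the nub of Reed’s point.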

James F. Evans (11:08:25) :
Henry Pool (10:00:16) wrote: “In addition, I should tell you that they recently discovered that there is also absorption of CO2 in the UV region… I think this makes it even more complicated….so it also acts a bit like ozone….”

Henry, if I understand your comment correctly, it suggests that possibly CO2 high in the atmosphere absorbs UV radiation (energy) before it gets into the lower atmosphere, thus, reducing energy that could serve to warm the atmosphere and/or surface.

Well, I’m rather more sceptical than you; I want to know who ‘they’ are and where they published.

I believe a lot of people here have never seen or made real-time measurements of CO2 in air themselves. And I believe there is confusion about what the “background CO2 level” is. The background CO2 level was introduced by C. Keeling, and you will find it in the higher troposphere (4-8 km or up) or over the sea surface (marine boundary layer, MBL).
Let’s check an Ameriflux station, Harvard Forest, far from human influence, influenced only by soil, vegetation and wind. You can easily measure changes there of 100-200 ppm per week, up to 500 ppm and more. But that’s not essential. The annual average near ground is within 1% of the background CO2 level measured by aeroplane over this station at 1-8 km altitude. Do you need more evidence?

Therefore they use it as a background station in the WDCGG list.
This fact I have used to calculate the upper tropospheric background level at the historical stations. And that’s why the Giessen data by Kreutz are valid and supply a precise base to do calculations.

If the cloud tops are hotter than the cloud bottoms (I don’t dispute that; I say it’s the other way round!), …..
I was thinking of a thin (but dense) cloud layer, where the top could be solar heated, while the bottom was somewhat shielded from the sun above. I agree on the tall cloud case.

Phil. (11:15:25) :

George E. Smith (10:29:26) :
“”” Glenn Tamblyn (19:25:01) :
And the known logarithmic radiation behaviour of CO2 is also well known. “””

Well Glenn, you apparently are the possessor of the Rosetta Stone.

I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” …..
For the general behavior you could try here; it’s a long-used approach of astronomers. Basically, weak absorbers are ~linear (Beer’s Law), moderate absorbers ~log, and strong absorbers ~square root (scan down to “Equivalent Width Versus Line Strength”): http://web.njit.edu/~gary/321/Lecture6.html
You can find a detailed derivation using Voigt profiles in some advanced optics texts.
The GHGs come from all three regimes: CFCs; CO2; CH4 & N2O

Well I’ve always understood Beer’s law to be the simple non-fluorescing absorption case where t = exp(-alpha.x), and the absorption is somewhat large; which ought to be the case for CO2 inside its absorption band.

And there’s no question that when the absorbing species is sparse, as often in the astronomy case, the absorptance is roughly linear with distance, because the absorptance is low; but then: exp(-x) = 1 - x + …, so the absorptance 1 - t is approximately alpha.x.
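The small-optical-depth point is a two-line numerical check (the tau values are illustrative, not CO2 numbers):

```python
import math

# Beer's law: transmission t = exp(-tau), absorptance A = 1 - t.
# For small optical depth tau, exp(-tau) = 1 - tau + tau^2/2 - ..., so A ~ tau.
for tau in (0.01, 0.1, 1.0, 3.0):       # illustrative optical depths
    A = -math.expm1(-tau)               # 1 - exp(-tau), computed without cancellation
    print(f"tau={tau:<5} A={A:.4f}  linear approx={tau:.4f}")
```

At tau = 0.01 the linear approximation is essentially exact; by tau = 3 the band is saturating and the absorptance has flattened out near 1, which is where the logarithmic and square-root regimes of the curve of growth take over.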

But all the texts I’ve read say: “‘Climate sensitivity’ is the permanent increase in mean global surface temperature as a result of a doubling of CO2.” I’m not questioning whether the CO2 (or other GHG) absorption of surface-emitted LWIR radiation is logarithmic with CO2 abundance IN A PARTICULAR LOCATION; but the available LWIR to absorb varies by more than an order of magnitude depending on where you are on earth. The resulting atmospheric temperature increase is non-linear with respect to that order-of-magnitude-variable LWIR CO2 absorption (in a way that overestimates the temperature increase), and none of this simply relates to the change in mean global surface temperature, since the effect of that warmer atmosphere on the surface depends on the nature of the surface and the local thermal processes, so it too is not linear. Etc., etc.

So there’s a big difference between Beer’s law maybe governing the CO2 absorption of LWIR in the 15 micron band, in some exponential/logarithmic relationship, and the surface temperature following along; with total disregard for anything else going on, such as cloud formation stepping in to interfere with what CO2 is trying to do.

The assumptions are mind boggling. Surface emitted LWIR is uniform and constant all over the earth, and CO2 distribution is the same all over the earth, and nothing else has any say in what the Temperature of the atmosphere is, and somehow the Temperature of the atmosphere sets the surface temperature of the earth. Meanwhile the most common molecule on the planet, just sits by and lets all that happen.

…… Dr Bill, I have to plead ignorance of Reif’s textbook, since my early education was half a world away; “far flung” is the word we use. So I can’t comment intelligently on it, but if he says Kirchhoff doesn’t require thermal equilibrium, he is plain wrong on that factoid.

A whole lot of science is plagued by incorrect application of principles in situations where they simply don’t apply. They may still be valuable principles; but we can’t use them willy nilly as if they are universally true. I think a whole lot of second law utterances, suffer from that same problem.

As in LWIR photons from earth are not allowed to land on the sun, because it is hotter than the earth. Electromagnetic radiation (photons) and “heat” are two different species, that have no means of communicating with each other; hell one of them isn’t even a noun. ……

George: No, Reif doesn’t mangle Kirchhoff. The essential problem in much of this, which I believe to be primarily a misunderstanding (although not entirely so), is that the equality of absorptivity and emissivity applies to an object in thermal equilibrium at a location. People don’t always adhere to this when doing calculations and drawing conclusions. Fundamentally, I believe that long-term progress in climate analysis will only stem from treating extensive variables like energy. Unfortunately, climate science is really now just in its infancy, and has been hijacked for other purposes before having had a chance to mature.

Regarding the “heat is not a noun” issue, you have no idea how many times I have tried to beat that notion into the heads of my students! HEATING is a process, as you rightly imply.

“However, I have never seen a scientific explanation for the derivation of that number (3.7 W/m2) ….”

Gee. And here I was done thinkin’ that all them good ole “official science type” boys up at the Un’s IPCC scientific types would of gone up and done did perfectfully and completely esplain’ that little bitty minor point.

Guess they done did fergit doin’ that ‘midst all them other emails ’bout cursing people who really check things ……
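For what it’s worth, the 3.7 W/m^2 figure usually traces to the simplified expression fitted to radiative-transfer calculations in Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m^2, which builds in the logarithmic behaviour discussed above. A quick check:

```python
import math

def co2_forcing(C, C0=280.0):
    """Simplified CO2 forcing fit (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2."""
    return 5.35 * math.log(C / C0)

print(f"doubling 280 -> 560 ppm: {co2_forcing(560.0):.2f} W/m^2")  # ~3.71
```

That fit is where the per-doubling number comes from; whether the fit itself is adequate is, of course, exactly what’s being argued in this thread.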

The discussion here does nothing to change my opinion that the understanding of the quantitative physics on both sides of the AGW debate is pathetic when compared to, e.g., undergraduate electrodynamics. However, this blog may be the only public place where some of these essentials are being worked through. Clearly Reed Coray and I (and I’ll mention Marty Hertzberg), and Phil, are within a fat epsilon of the same understanding. I again suggest my planetary temperature page as a better introduction to the most basic quantitative physics than any I have found, though I would welcome links proving me wrong.

Other than terawatts of geothermal and manmade heat, and some particulate energy from the solar wind and suchlike, the mean temperature of the earth is determined by its radiant exchange with the celestial sphere around it, a tiny fraction of which is at close to 6000 K. I find these essentially one-dimensional energy budgets, such as Eschenbach apparently adapts from Trenberth, rather useless and misleading. They can’t even model changes in spectrum with latitude or day and night.

The Stefan-Boltzmann relationship, Power = sb * T^4, modified by Kirchhoff’s observation 151 years ago that at any frequency absorptivity = emissivity, must hold between our sphere and the celestial sphere. Any conduction and convection within that shell is constantly driven in the direction of satisfying that balance. No “feedback” or “runaways” can affect that balance. Phil’s equations are on the right path; it is the correlation of the spectrum of an object with the spectra of its sources and sinks that counts.

The only definition of gray which is analytically useful is having a flat spectrum. I think the use of the term albedo ought to be limited to this case. In that case, a = e is a constant across the spectrum, the correlation with any source or sink is the same, and the gray-value albedo drops out of the equation, leaving the pure ratio of correlations. Thus, as has been commented, the ubiquitous notion that changing the albedo of a uniform gray body will change its temperature is wrong. (I believe this was Kirchhoff’s actual insight.) And, particularly given the width of Planck distributions, nothing with an absorptivity less than 1 can have an effective emissivity equal to 1.
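The “gray value drops out” claim is easy to make concrete. Under the stated definition (a = e, flat across the spectrum), radiative balance for a rapidly rotating sphere at 1 AU gives a·S/4 = a·σT^4, so the gray value cancels:

```python
# Radiative equilibrium of a uniformly gray, rapidly rotating sphere at 1 AU.
# With absorptivity a = emissivity e flat across the spectrum, a cancels:
#   a * S/4 = a * sigma * T**4   =>   T = (S / (4*sigma))**0.25
S = 1361.0                 # solar constant at Earth's orbit, W/m^2
sigma = 5.670374419e-8     # Stefan-Boltzmann constant
T_eq = (S / (4 * sigma)) ** 0.25
print(f"gray-body equilibrium temperature: {T_eq:.1f} K")  # ~278 K, for ANY gray value
```

The answer is the same whether the gray value is 0.1 or 0.9; only a spectrally selective (non-gray) body can sit above or below this temperature, which is the point about correlating object spectra with source and sink spectra.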

Given any particular object spectrum, and source and sink spectra, actual “greenhouse effects” can be calculated. I could extend my array-language implementation to do this in a few lines of code, but I spend more time on all this than I can afford as it is. I’d rather see someone perhaps translate the code into a more common language like MATLAB and calculate the equilibrium temperatures for various substances, e.g., CO2 and H2O, and scenarios, e.g., the planet’s surface spectrum, its lumped spectrum as seen from space, etc. These could then be confirmed in lab experiments, and after several decades of Lysenkoism, we could see the beginnings of a “Climate Science” deserving of the name.

Incidentally, the massive effect of “greenhouse gases” on the variance, as opposed to the mean, of our temperature seems almost to be a concept alien to the CS community. The GHGs do facilitate the transfer of heat to and from the air every day and night, but I have never seen any discussion of that life-sustaining effect.

2) The absorptions of carbon dioxide, water vapor and other greenhouse gases overlap, making it misleading to say that carbon dioxide contributes only 10% of the greenhouse effect. As one goes up in altitude (and temperature drops), the relative importance of water vapor and carbon dioxide to the greenhouse effect changes dramatically. Most heat is removed from the earth’s surface by evaporation (latent heat) and convection to the upper troposphere, NOT by direct radiative cooling to space. Only the upper troposphere cools mostly by radiation, and there CO2 is the most important greenhouse gas. (See Lindzen’s 2007 article.)

Frank, I agree. You can’t just give a single figure. I just went to MODTRAN, which gives the following figures for standard conditions (the ones that MODTRAN opens with, except holding relative humidity steady):

Cooling if we remove all GHGs: ~25.2°C
Cooling if we remove CO2: ~12°C
Cooling if we remove H2O: ~15.3°C

As you point out, the bands overlap, so removing both gives less than the sum of the individual coolings.

Finally, as you point out this does not include any losses in the system from evaporation or convection.

In other words, on a theoretical Earth, if there were no CO2 in the atmosphere, we’d be about 12°C [Edited to remove typo of “7°C, thanks, Phil] colder. Well, that is, we would be that much colder if there were no thermostatic mechanism regulating the earth’s temperature.
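The overlap point can be illustrated with a toy calculation: within a shared band, transmissions multiply while absorptions don’t add, so the combined effect of two absorbers is always less than the sum of their separate effects (the optical depths below are made-up illustrative numbers, not CO2/H2O values):

```python
import math

# Two absorbers sharing a spectral interval, with optical depths tau1, tau2
# (hypothetical numbers). Transmissions multiply: t = exp(-tau1) * exp(-tau2).
tau1, tau2 = 1.0, 1.5
A1 = -math.expm1(-tau1)              # absorption with gas 1 alone
A2 = -math.expm1(-tau2)              # absorption with gas 2 alone
A_both = -math.expm1(-(tau1 + tau2)) # both present
print(A1 + A2, A_both)               # the naive sum exceeds the combined absorption
```

This is the same arithmetic behind the MODTRAN figures: ~12°C + ~15.3°C for the individual removals exceeds the ~25.2°C for removing everything at once.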

Phil. If the emissivity is a function of frequency, then the total power radiated by a planar unit of surface area at a temperature T is NOT the product of (a) the Stefan-Boltzmann constant, (b) the “frequency averaged emissivity”, (c) the size of the surface area, and (d) T^4. It may be a reasonable approximation, but that remains to be seen. I therefore request that you provide me with a model of emissivity as a function of frequency. For example, you might say “the emissivity is a step function: 1 for frequencies from 0 to X Hz, and 0.4 for frequencies from X Hz to infinity.” Once we have quantified the behavior of the emissivity as a function of frequency, we can compute (at least numerically) (a) the average emissivity using your definition

Similarly for the Earth’s emissivity:
∫ε(ν)Iearth(ν)dν/∫Iearth(ν)dν

Where Iearth is the emission spectrum of the earth

of “average emissivity”, and (b) using Planck’s law modified to include a frequency dependent emissivity, the total power radiated by a square meter of surface area at a temperature T, which we’ll assume is either 255 degrees or 288 degrees Kelvin (your choice). We can then compute the value of the product of (a) the Stefan-Boltzmann constant, (b) one square meter, (c) the average emissivity, and (d) T^4. It may turn out the two powers will be approximately equal, in which case the use of the T^4 law is a reasonable approximation. If they are significantly different, then the T^4 rule is a poor approximation and should not be used.

In the IR there is little variation of the surface of a solid or liquid from the Planck equation; consequently, compared with a gas spectrum, using a single number (essentially almost 1.0) and Stefan’s law is going to do reasonably well, at least to the accuracies of the information involved.

So just what the total spectral emissivity curve for water looks like, I am not sure I know; nor how it depends on water depth. I would still expect the surface to emit rather greyly in the 300K region LWIR spectrum.

My bible, Geiger’s “The Climate Near The Ground” (Amazon), gives the spectral emissivity of water in the 9 to 12 micron region as 0.96. I don’t believe that it depends on the water depth, as IR is totally absorbed by 1 mm of water. Finally, 9 microns is near the peak wavelength of radiation from a blackbody at 300 K.

Geiger also says “The surface will be treated as a blackbody [for longwave emission/absorption] throughout the remainder of the book since, for the range of natural surface emissivities, the departure of Tr [blackbody radiation temperature] from Ts [true surface temperature] is small.”

Well Willis, it is not to be argumentative; just inquisitive. The case for the deep oceans being a pretty good grey body with alpha about 0.97, simply rests on the belief that only about 35 is reflected in any spectral range of interest (to us), and if it is not reflected, then it at least proceeds into the water, and if the thin surface layer doesn’t absorb it, as you point out happens for the 300K region LWIR, then it proceeds deeper until something else does absorb it, so it is in effect an anechoic chamber, or a cavity absorber. So I think the case for near BB Grey body absorption is pretty “robust” to use the adjective du jour.
But when it comes to the emission from the oceans, I’m not so sure; in the sense that “I don’t know.”

If we are prepared to say this is a case where Kirchhoff’s law does apply, at least somewhat (for reasons as yet unknown, to me at least), then yes, we would assume that the deep oceans are a pretty good grey body emitter.

But WHY?! If we can’t ascribe the total absorptance to the surface (in the solar-spectrum visual range), then what of the emissivity in the same range? Which is why the shallow water issue arises.

Now if we assume that the top few microns to mm surface layer suffices to absorb the likely LWIR 300 K spectrum, then the shallow water issue doesn’t arise, and maybe we are safe in expecting a high LWIR emissivity.

I can attest to the fact that very deep (thousands of feet) clean sea water (Sea of Cortez), when shaded from the blue sky and seen through good wide-band grey polarized glasses to remove the surface reflection (yes, of course I look at it at the Brewster angle), looks very spookily black to me.

I’m sure planet earth looks very black from outer space, if you filter out the atmospheric scattered blue light; well, talking oceans here, not brown ground.

So it is in the shallow water visible spectrum range, where I don’t know what to expect.

But that very thin surface LWIR trap is why I blanch at the idea of regarding the downward back radiation from the atmosphere (ok Phil; clouds too) as simply another forcing W/m^2 to store in the deep oceans along with the solar energy.

The solar insolation on the other hand should only result in weak evaporation, because of the transparency and hence deep penetration of most of the high energy (irradiance) portion of the spectrum, which blows right on by the surface layer.

I have to plead guilty of being a wavelength oriented spectroscopist, rather than frequency; even though I know that the answer to the coffee pot question:- “What’s Nu ?” is E/h

So I’m uncomfortable when thinking about the Planck Law in its frequency oriented version; just force of habit. And that is a bit weird, since when I worked at Tektronix, in the early 1960s, “Frequency” was a swear word; a mere figment of the imagination; real things happened in the time domain.

So maybe I still regard frequency as an upside down view of reality; but I know that chemists and molecular spectroscopists think in cm^-1; maybe if they used terahertz instead, we could be friends.

But regardless of that, or irregardless as the case may be, I don’t think any of that infringes on the Stefan-Boltzmann Law. The total energy still goes as T^4

Perhaps it is that I am comfortable with the Planck function being a function of the single variable T.lambda.

Well Willis, it is not to be argumentative; just inquisitive. The case for the deep oceans being a pretty good grey body with alpha about 0.97, simply rests on the belief that only about 35 is reflected in any spectral range of interest (to us), and if it is not reflected, then it at least proceeds into the water, and if the thin surface layer doesn’t absorb it, as you point out happens for the 300K region LWIR, then it proceeds deeper until something else does absorb it, so it is in effect an anechoic chamber, or a cavity absorber. So I think the case for near BB Grey body absorption is pretty “robust” to use the adjective du jour.
But when it comes to the emission from the oceans, I’m not so sure; in the sense that “I don’t know.”

Inquisitive is good.

What is the “35 is reflected”? Thirty-five what?

Now if we assume that the top few microns to mm surface layer suffices to absorb the likely LWIR 300 K spectrum, then the shallow water issue doesn’t arise, and maybe we are safe in expecting a high LWIR emissivity.

Well Willis, if my fingers actually did what my brain tells them to do, you would see that 35 is texting code for 3%; it saves wasting a finger on the shift key.

So you don’t “expect” a high emissivity for water; so 97% isn’t high enough for you ?

I do appreciate those little (here) thingies you scatter around, especially that one for water emissivity; just how beautiful is that absorption edge at 6 microns; I’ll have to see if I can dig up any refractive index curve corresponding to that wavelength. And too damn bad that your plot stops at 3 microns, because that clearly is the start of another absorption edge, and I just happen to know that water has its very highest (IR) absorptance right at 3 microns, where the alpha is about 10,000 cm^-1, and my handbook source also confirms that edge at 6 microns, where the alpha is maybe 3000 cm^-1.

My IR handbook says “Tilt” below around 160 nm so it doesn’t show that 10^6 UV peak at your last (here).

For visible light, oceans tend to absorb around 96% of the incoming light when at a high angle of incidence wrt the horizon. As I recall, it gets even better further into the IR. This is subject to the state of the surface as well as to the angle of incidence, but low angles of incidence (wrt horizon) are not where there’s much incoming power per area. Considering that there is slightly more solar power in the IR (mostly near) than in the visible region (with the balance mostly being UV), it should be no surprise that the surface reflects little. We do have the clouds and atmospheric scattering that make for a rather bright blue planet despite the lack of all that much reflection from the surface.

The net result, as Willis stated, is a rather robust ability to use Stefan’s law and the gray-body approximation. One can go all the way to using line-by-line spectral data, molecule by molecule for gas, and it provides close to the same results as lesser approaches, but that is a whole lot more time-consuming and difficult to accomplish, considering very little is gained for most of the conceptual discussions associated with this thread.

What’s more, whatever benefit in accuracy occurs is substantially negated by the presence of clouds as well as by the h2o vapor cycle and convection.

This is not to say that AGW or CAGW is correct. One can even use the simpler Stefan’s-law approach to provide ‘robust’ ‘proof’ that the Earth is quite insensitive to variations in forcing compared to the CAGW claims.

2) The absorptions of carbon dioxide, water vapor and other greenhouse gases overlap, making it misleading to say that carbon dioxide contributes only 10% of the greenhouse effect. As one goes up in altitude (and temperature drops), the relative importance of water vapor and carbon dioxide to the greenhouse effect changes dramatically. Most heat is removed from the earth’s surface by evaporation (latent heat) and convection to the upper troposphere, NOT by direct radiative cooling to space.

No, most surface heat loss is via radiation.

Only the upper troposphere cools mostly by radiation, and there CO2 is the most important greenhouse gas. (See Lindzen’s 2007 article.)

Frank, I agree. You can’t just give a single figure. I just went to MODTRAN, which gives the following figures for standard conditions (the ones that MODTRAN opens with, except holding relative humidity steady):

Cooling if we remove all GHGs: ~25.2°C
Cooling if we remove CO2: ~12°C
Cooling if we remove H2O: ~15.3°C

As you point out, the bands overlap, so removing both gives less than the sum of the individual coolings.

Finally, as you point out this does not include any losses in the system from evaporation or convection.

In other words, on a theoretical Earth, if there were no CO2 in the atmosphere, we’d be about 7°C colder.

Why 7°C and not ~12°C? And of course according to the C-C (Clausius-Clapeyron) equation the water vapor would drop to ~65% of its former value (@288 K), which of course would cause a further drop in T, let’s say a further 4°C; now the water vapor’s down to ~50%, so it gets a bit cooler, etc. Of course if the initial drop is actually 12°C then T really plummets!

Well, that is, we would be that much colder if there were no thermostatic mechanism regulating the earth’s temperature.

I agree with your last comment 100%. We don’t really know. I have yet to see any proof that CO2 is a greenhouse gas, i.e. that the cooling caused by CO2 is less than the warming…..see argument below….

Henry @ Phil.
I believe we had this argument before.
here is the famous paper that confirms to me that CO2 is cooling the atmosphere by re-radiating sunshine (12 hours per day): http://www.iop.org/EJ/article/0004-637X/644/1/551/64090.web.pdf?request-id=76e1a830-4451-4c80-aa58-4728c1d646ec
they measured this radiation as it bounced back to earth from the dark side of the moon. Follow the green line in fig. 6, bottom. Note that it already starts at 1.2 um, then one peak at 1.4 um, then various peaks at 1.6 um and 3 big peaks at 2 um.
This paper here shows that there is absorption of CO2 at between 0.21 and 0.19 um (close to 202 nm): http://www.nat.vu.nl/en/sec/atom/Publications/pdf/DUV-CO2.pdf
There are other papers that I can look for again that will show that there are also absorptions of CO2 at between 0.18 and 0.135 um and between 0.125 and 0.12 um.
We already know from normal IR that CO2 has big absorption between 4 and 5 um.

So, to sum it up, we know that CO2 has absorption in the 14-15 um range causing some warming (by re-radiating earthshine, 24 hours per day) but as shown and proved above it also has a number of absorptions in the 0-5 um range causing cooling (by re-radiating sunshine). This cooling happens at all levels where the sunshine hits on the carbon dioxide same as the earthshine. The way from the bottom to the top is the same as from top to the bottom. So, my question is: how much cooling and how much warming is caused by the CO2? How was the experiment done to determine this and where are the test results? If it has not been done, why don’t we just sue the oil companies to do this research? (I am afraid that simple heat retention testing will not work here, we have to use real sunshine and real earthshine to determine the effect in W/m3 [0.04%]CO2/24hours)

I am going to state it here quite categorically again that if no one has got these results, then how do we know for sure that CO2 is a greenhouse gas?

Henry Pool (03:58:24) :
I am going to state it here quite categorically again that if no one has got these results, then how do we know for sure that CO2 is a greenhouse gas?
===================
I’ve recently read an article arguing the opposite: i.e. that ALL atmospheric gases are in a sense “greenhouse” gases, simply because they are all heated by contact with the surface and then by convection. And anything that is heated (above absolute zero) radiates energy (“heat”) back. The article is here: http://www.americanthinker.com/2010/02/the_hidden_flaw_in_greenhouse.html

Well I suppose there must be some abysmal reasoning flaws in that article, but the following sounds like a fun thought exercise. Let’s say you eliminate the two main components of the atmosphere, nitrogen and oxygen, plus any trace gases that are transparent to infrared radiation. Now you have an atmosphere whose thickness and density are a tiny fraction of the real one, and is composed only of orthodox greenhouse gases. What would the effect of this radical thinning be on surface temperature? Would it not be a lot cooler?

Thanks! Interesting article. I liked it. It makes sense to me. I would indeed argue the same points and possibly evaluate the (so-called) greenhouse gases individually at various concentrations against a constant background of 20% oxygen and 80% nitrogen. For my testing there would not be much point in removing that same background, because the reality is that it is here. I think you are right though. My instinct also tells me it would be a lot cooler in the atmosphere without the oxygen and nitrogen.

“”
Francisco
… Well I suppose there must be some abysmal reasoning flaws in that article, but the following sounds like a fun thought exercise. Let’s say you eliminate the two main components of the atmosphere, nitrogen and oxygen, plus any trace gases that are transparent to infrared radiation. Now you have an atmosphere whose thickness and density are a tiny fraction of the real one, and is composed only of orthodox greenhouse gases. What would the effect of this radical thinning be on surface temperature? Would it not be a lot cooler?
“”
Eliminate the O2 and N2 and basically you’ve got Mars. Actually, Mars has more like 40 times the amount of CO2 in a column as does Earth’s atmosphere. That’s 5 additional doublings’ worth of it. The actual Martian warming is under 10 deg C relative to a gray body of the same albedo; that value may be under 5 deg C. The albedo does vary from place to place and potentially time to time. What is going on here, though, is that the total atmospheric pressure is far lower, resulting in far less line broadening, which makes the absorption lines narrower and able to capture less total power. Of course, that goes on way up in our atmosphere too, where the pressure is less, but not nearer the surface.

The non IR emitting atoms and molecules are doing two things. First, they provide pressure which broadens the lines and permits greater power absorption (and emission). Second, they help store energy temporarily and help disperse it around the general area and to other ghg molecules. Note that there is very little storage capacity in the atmosphere compared to the radiative and convective heat flow.

Ultimately, one has the problem of the geometry. One side (at the bottom) is heated by the surface (via all mechanisms) and the other side (at the top) is “heated” by space, radiatively, at a temperature of under 3 K. Any slab of air in between with ghgs will absorb some power and will radiate at a rate related to both its absorption ability and its temperature. Were space at the same temperature as the surface, the whole atmosphere would reach thermal equilibrium at that same temperature: the absorption from the outbound flux in a slab would equal the inbound absorption, which would equal the power radiated inbound and outbound from the slab. However, with space effectively at 0 K we get a thermal gradient, because energy (power = energy / time) must balance: what comes in from below (plus back radiation from above, if any) must equal what radiates outbound plus what radiates inbound, and that means the temperature must be less.
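The bottom-heated, space-cooled slab picture above can be illustrated with the textbook single-gray-slab balance: a fully absorbing layer heated only from below, facing ~0 K space, settles at a temperature below the surface. A minimal sketch (this one-slab idealization is mine, not something computed in the thread):

```python
# Toy single-slab radiative balance: a fully absorbing gray layer over a
# surface at Ts, with space at ~0 K above. In equilibrium the slab absorbs
# sigma*Ts^4 from below and emits sigma*Tslab^4 both upward and downward:
#   sigma*Ts^4 = 2 * sigma*Tslab^4   =>   Tslab = Ts / 2**0.25
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def slab_equilibrium_temp(ts_kelvin: float) -> float:
    """Equilibrium temperature of a fully absorbing slab heated from below."""
    return ts_kelvin / 2 ** 0.25

ts = 288.0
tslab = slab_equilibrium_temp(ts)
# The slab is necessarily cooler than the surface, as argued above.
```

The same energy-balance bookkeeping, layer by layer, is what produces the thermal gradient described in the comment.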

RE: David Archibald (17:17:30) : “That is a very good point. Perhaps the physical chemists amongst us could put that graph together.”

This is an example of the type of chart I had in mind, perhaps zoomed in on the critical 5 to 20 micron range, the Earth thermal radiation band. I believe a linear frequency scale would be the best indicator as radiated energy is proportional to the frequency bandwidth.
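As a rough check on why 5 to 20 microns is called the Earth thermal radiation band, one can integrate the Planck function for a 288 K surface: roughly 70% of total emission falls in that window. A sketch (the integration scheme and limits are mine, not Spector’s):

```python
import math

# Physical constants
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_lambda(wl_m: float, temp_k: float) -> float:
    """Planck spectral radiance per unit wavelength, W m^-3 sr^-1."""
    return (2 * H * C**2 / wl_m**5) / math.expm1(H * C / (wl_m * KB * temp_k))

def band_fraction(lo_um: float, hi_um: float, temp_k: float, n: int = 5000) -> float:
    """Fraction of total blackbody emission between lo_um and hi_um (trapezoid rule)."""
    def integral(lo, hi):
        step = (hi - lo) / n
        wls = [lo + i * step for i in range(n + 1)]
        vals = [planck_lambda(w, temp_k) for w in wls]
        return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    # 0.5-200 um is effectively the whole spectrum at 288 K
    total = integral(0.5e-6, 200e-6)
    return integral(lo_um * 1e-6, hi_um * 1e-6) / total

frac = band_fraction(5, 20, 288.0)  # most of the surface emission is in this band
```

This is why widening the CO2 blocking bands within this window matters more than absorption anywhere else in the spectrum.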

I must say after years of following these kinds of discussions from the perspective of a curious layman with some scientific background, that our actual ability to quantify the effect, if any, of a given increase in atmospheric CO2 concentrations — putatively attributed to humans — this ability is just about non-existent at present. This is the only rational conclusion I can reach from the stunningly wide range of divergent opinions and approaches by all kinds of people with a solid background in these matters. I see nothing remotely resembling a “consensus” on ANY of the countless aspects of this puzzle. What I DO see very clearly, is that at every single step of this tottering tower, a particular hypothesis has been selected among many by the official custodians, and that the only guiding principle of each selection is to provide the necessary prop to the next step. Once that’s done, the previous step is canonically considered as “settled”. But nothing at all seems anywhere near settled or well understood, not even the very core of the assumptions. And empirical experiments are conspicuously absent, partly because some of them may be unfeasible, and partly because the high priests are not interested in setting them up. The effect of the whole thing is rather dizzying. I cannot think of any science, except perhaps sociology, endowed with such evanescence and theoretical prestidigitation.

The bottom line is that the core question of this science is the *ideal* question for the construction of an inexhaustible and indestructible BS generating machine, cloaked as Science, if only because a conclusive demonstration that CO2 emissions pose no grave danger seems as impossible as a conclusive demonstration that imps don’t exist. And as long as there remains a political will at the decision making centers, this may just go on and on and on indefinitely.

Spector (08:58:04) :
RE: David Archibald (17:17:30) : “That is a very good point. Perhaps the physical chemists amongst us could put that graph together.”

This is an example of the type of chart I had in mind, perhaps zoomed in on the critical 5 to 20 micron range, the Earth thermal radiation band. I believe a linear frequency scale would be the best indicator as radiated energy is proportional to the frequency bandwidth.

Already done; the plot at the top left shows the complete spectrum, and the other plots show the result of progressively removing each gas, thus showing the individual contributions:

Eliminate the O2 and N2 and basically you’ve got Mars. Actually, Mars has more like 40 times the amount of CO2 in a column as does Earth’s atmosphere. That’s 5 additional doublings’ worth of it. The actual Martian warming is under 10 deg C relative to a gray body of the same albedo; that value may be under 5 deg C. The albedo does vary from place to place and potentially time to time. What is going on here, though, is that the total atmospheric pressure is far lower, resulting in far less line broadening, which makes the absorption lines narrower and able to capture less total power. Of course, that goes on way up in our atmosphere too, where the pressure is less, but not nearer the surface.

As an illustration of that effect are the comparative partial CO2 spectra at Earth/Mars surface conditions:

Well said Francisco. I’m in the same boat with a mere 4 yr. Physics degree, an interested layman at best. The science seems to be far from settled, in fact, it appears to me there is a tremendous value in the ongoing discussion of the science. Those that persist in claiming the “science is settled” are engaging in advocacy politics and not science.

is that some sort of measured values for Earth vs Martian conditions or is that a synthetic or calculated spectrum from a Hitran database or something similar? Also, is there a particular reason for choosing an 85nm window around 13 microns?

Admittedly, I’ve not looked closely at 13um before and I don’t do frequencies as I prefer wavelengths. I find the Martian graph rather believable. I’m a bit concerned over just how broad the lines are for Earth. They look a little too broad. In fact, they look more like what I would expect in the visible on a scale of maybe 2nm, not 85 nm. That raises the question of is this a one dimensional model or merely a zero dimension, everything at 1 atm pressure, type of model?

I agree with your comments 100%. The science on carbon dioxide is far from settled.
They always refer to the warming properties and forget or ignore the cooling qualities. To tell you the truth, if they ever do the right kind of testing, they will probably find that CO2 causes as much cooling as it causes warming.
I see Phil. did not take my bait again.
But to me it does not really matter anymore. The debate is pointless and unnecessary.
I started my own investigations into global warming in October last year and have already compiled my final report. If you are interested in reading it, here are the results of my investigations and conclusions:
FOR MY CHILDREN, & FAMILY AND FRIENDS LIVING IN THE NORTHERN HEMISPHERE

You may not know this. For a hobby I did an investigation to determine whether or not your carbon footprint, i.e. carbon dioxide (CO2), is really to blame for global warming, as claimed by the UN, IPCC and many media networks. I guess I felt a bit guilty after watching “An inconvenient truth” by Al Gore, so I had to make sure for myself about the science of it all. If you scroll down to my earlier e-mails you will note that I determined that, as a chemist, I could not find any convincing evidence from tests proving to me that CO2 is indeed a major cause for global warming. As my investigations continued, I have now come to a point where I doubt that global warming is at all possible…. Namely, common sense tells me that as the sun heats the water of the oceans and the temperatures rise, there must be some sort of a mechanism that switches the water-cooling system of earth on, if it gets too hot. Follow my thinking on these easy steps:

1) The higher the temp. of the oceans, the more water vapor rises into the atmosphere.
2) The more water vapor rises from the oceans, the greater the difference in air pressure, and the more wind starts blowing.
3) The more wind and warmth, the more evaporation of water (evaporation increasing many times over due to the wind factor).
4) The more evaporation of water, the more humidity in the air (atmosphere).
5) The higher the humidity in the air, the more clouds can be formed.
6) Svensmark’s theory: the more galactic cosmic rays (GCR), the more clouds are formed (if the humidity is available).
7) The more clouds appear, the more rain and snow and cooler weather.
8) The more clouds and overcast conditions, the more radiation from the sun is deflected from the earth.
9) The more radiation is deflected from earth, the cooler it gets.
10) This cooling puts a brake on the amount of water vapor being produced. So now it is back to 1) and waiting for heat to start the same cycle again…
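Steps 1) to 10) describe what a control engineer would call a negative feedback loop, and the reason such a loop behaves like a thermostat rather than a runaway can be shown with a toy iteration (every coefficient here is invented purely for illustration; this is not a climate model):

```python
def step(temp, forcing=1.0, feedback=0.3):
    """One pass through the loop: a fixed heating push, minus a
    cloud-cooling term that grows with temperature (negative feedback)."""
    return temp + forcing - feedback * temp

# Iterate from an arbitrary starting value; because the cooling term
# strengthens as temperature rises, the loop converges instead of running away.
t = 10.0
for _ in range(200):
    t = step(t)
# equilibrium: t settles at forcing / feedback (10/3 in these toy units)
```

A positive feedback (negative `feedback` coefficient) in the same loop would diverge, which is the contrast the comment is drawing.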

Now when I first considered this, I stood in amazement again. I remember thinking of the words in Isaiah 40:12-26.
I have been in many factories that have big (water) cooling plants, but I realised that earth itself is a water cooling plant on a scale that you just cannot imagine. I also thought that my idea of seeing earth as a giant (water) cooling plant with a built-in thermostat must be pretty original….
But it was only soon after that I stumbled on a paper from someone on WUWT who had already been there, done that …. well, God bless him for that!
i.e. if you want to prove a point, you always do need at least two witnesses!
Look here (if you have the time):

But note my step 6. The Svensmark theory holds that galactic cosmic rays (GCR) initiate cloud formation. I have not seen this myself, but apparently it has been proven under laboratory conditions. So the only real variability in global temperature is most likely caused by the amount of GCR reaching earth. In turn, this depends on the activity of the sun, i.e. the extent of the magnetic field the sun exerts over the planetary system. We are now coming out of a period where this field was stronger and more GCR were bent away from earth (this is what we, the skeptics, say really caused “global warming”, mostly).
But apparently the solar magnetic field is now heading for an all-time low.
Look here:

Note that in the first graph, if you look at the smoothed monthly values, there was a tipping point in 2003 (light blue line). I cannot ignore the significance of this. I noted similar tipping points elsewhere at about that same time (e.g. in earth’s albedo, going up). From 2003 the solar magnetic field has been going down. To me it seems certain that we are now heading for a period of more cloudiness and hence a period of global cooling. If you look at the 3rd graph, it is likely that there will be no sunspots visible by 2015. This is confirmed by the paper on global cooling by Easterbrook:

In the 2nd graph of his presentation, Easterbrook projects global cooling into the future. These are the three lines that follow from the last warm period. If the cooling follows the top line we don’t have much to worry about and the weather will be similar to what we had in the previous (warm) period. However, indications are already that we have started following the trend of the 2nd line, i.e. cooling based on the 1880-1915 cooling. In that case it will be the coldest from 2015 to 2020 and the climate will be comparable to what it was in the fifties and sixties. I survived that time, so I guess we all will be fine, if this is the right trendline.
Note that with the third line, the projection stops somewhere after 2020. So if things go that way, we don’t know where it will end. Unfortunately, earth does not have a heater with a thermostat that switches on if it gets too cold. Too much ice and snow causes more sunlight to be reflected from earth. Hence the trap is set. This is known as the ice age trap, and it is why the natural state of earth is that of being covered with snow and ice. This paper was a real eye opener for me: https://wattsupwiththat.com/2009/12/09/hockey-stick-observed-in-noaa-ice-core-data

However, man is resourceful and may find ways around this problem if we do start falling into a little ice age again, as long as we are not ignorant and do not just listen to the so-called climate scientists whose agendas depend on money. A green agenda is still useless if it has the wrong items on it.
Obviously: As Easterbrook notes, global cooling is much more disastrous for humans than global warming.

Henry,
Yes, that the system must be governed by negative rather than positive feedbacks seems to be supported by the relatively narrow range of temperature fluctuations in the past, in the face of great fluctuations in other factors, including a much dimmer early sun, etc.

If you have not seen it, the following half-hour talk by Richard Lindzen makes the main points very well.

Reading a couple of Lindzen papers was in fact how I began to get interested in this topic a few years ago.

There are further basic aspects of the whole thing that I seldom see discussed because they are assumed to be essentially correct, but I have many doubts. The carbon cycle for example, and the modelling assumption that the small amount of human emissions (i.e. small with respect to the total transfers between the different parts of the system) will not be taken by the huge sinks, seems hard to understand.

Consider some more or less official figures for the carbon cycle from the following NASA chart (I should add I once saw a similar IPCC chart with similar figures, and the disclaimer that the expected errors were on the order of more than 20%):

the atmosphere 760 PgC (increasing at a rate of about 3 PgC per year)
the ocean surface layers 800 PgC
the deep ocean 38,000 PgC
plants and soils 2,000 PgC

We see that the human contribution of 6.5 Pg per year is a very small fraction of the total turnover between the atmosphere and the soil/plants and between the atmosphere and the oceans, according to the chart. (And it is only 0.015% of the total carbon in the system.)

It is well known that plants do take up extra CO2 if it becomes available. That’s why you get this

Yet they apparently model all this by assuming that the biomass intake of CO2 is static. This is like assuming a herd of animals will always eat exactly the same amount of food, regardless of how much is available. It is beyond absurd, if true.
Regarding the oceans: I still don’t fully understand the extent to which Henry’s Law should be strictly applied to the atmosphere-ocean system. But if it is applied, then you have a current distribution of 38,800 Pg in the oceans vs 760 in the atmosphere, according to the chart above. That is about 50 to 1, or 98% in the oceans and 2% in the atmosphere. Assuming this arrangement represents partial pressure equilibrium, any additional CO2 released into the atmosphere will find itself distributed in the same way (eventually) at roughly the current temperature. I suppose the only question is to find out what “eventually” means. Much is made of the fact that the oceans will release CO2 when warmed up, but the attempts I have seen at quantifying this show it is very little. Lubos Motl had a post some time ago where he calculated roughly that you need a huge 8 deg C increase in temperature to produce something like a 100 ppm increase in CO2, so this is clearly insignificant. Thus roughly 98% of any human-released CO2 will eventually find its way to the oceans (assuming there are no plants who also want a piece of the pie). Or, put another way, a permanent doubling of atmospheric CO2 would require (eventually) a doubling of ocean CO2 – which is a physical impossibility: there simply isn’t enough carbon around.
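The partition argument in that paragraph is just reservoir arithmetic on the NASA figures already quoted; a quick check (only the 38,000, 800 and 760 PgC values come from the chart, everything else follows):

```python
# Carbon reservoirs from the NASA chart quoted earlier in the thread.
ocean = 800 + 38_000   # surface layers + deep ocean, PgC
atmosphere = 760       # PgC

ratio = ocean / atmosphere                  # roughly 51 to 1
ocean_share = ocean / (ocean + atmosphere)  # roughly 98%

# If new CO2 eventually partitions the same way (the Henry's Law assumption
# in the comment), only ~2% of any human release stays airborne at equilibrium.
airborne_share = 1 - ocean_share
```

Whether partial-pressure equilibrium is actually reached on relevant timescales is exactly the “what does eventually mean” question the comment raises; the arithmetic itself is uncontroversial.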

It seems to me that we are assuming the trickle of a dripping human faucet will not go through the huge sinks of that vast pool. If you’ve ever seen one of those long-tongued, insect-eating animals, it seems to me the system will treat our CO2 contribution with the same calm disregard as those animals swallow a little mosquito within reach of their tongue – and without a second thought.

Again, if the total amount of available fossil fuels in the ground (gas, oil and coal) is 5,000 Pg (according to the same chart above), where are we going to get the extra 39,000 Pg of carbon needed to double the amount in the oceans, if the atmospheric concentration is also to be permanently doubled? Where indeed?

Further, it is perfectly possible that natural variations in the CO2 cycle are of a magnitude larger than our contribution. Much is made of the human “signature” based on the proportion of C13 and so on. But I’ve seen comments (one by Roy Spencer, I think) saying this signature is not necessarily ours.

is that some sort of measured values for Earth vs Martian conditions or is that a synthetic or calculated spectrum from a Hitran database or something similar? Also, is there a particular reason for choosing an 85nm window around 13 microns?

Hitran, I chose a small enough window to show the individual lines.

Admittedly, I’ve not looked closely at 13um before and I don’t do frequencies as I prefer wavelengths. I find the Martian graph rather believable. I’m a bit concerned over just how broad the lines are for Earth. They look a little too broad. In fact, they look more like what I would expect in the visible on a scale of maybe 2nm, not 85 nm. That raises the question of is this a one dimensional model or merely a zero dimension, everything at 1 atm pressure, type of model?

As listed at the side of the plots it’s at surface conditions pathlength 1km.
I just reran it to see if I’d goofed on any of the parameters, seems OK.

Phil wrote at 20:24:16, 9th March:
“Bad math(s) and physics, it isn’t only the radiant energy that can return to the surface, it all can! So more correct accounting is:

390+24+78= 492 out
324=66% back

So as far as the atmosphere is concerned, it gets its energy from the surface in the following proportions:
Latent Heat: 16%
Conduction: 5%
Radiation: 79%

When the surface warms, the Radiation proportion increases as T^4 and the Latent Heat proportion increases as e^(4900/T).
”
I think we are arguing semantics. However there is a point.

The NET flux into the atmosphere due to photon absorption is, on these numbers, 26W/m^2.
This enters the atmosphere at a different level to the other two fluxes: Conduction enters at ground level. Latent heat enters at cloud level. But the majority of absorption of surface radiation is in the first 250m of the atmosphere – more close to the ground and roughly exponentially less the higher you go. Similarly the back radiation mostly originates from this close to the ground layer. (One can quibble with this – obviously this is not true for the “wings” – but it is true for all major emission lines. The greenhouse effect is pretty much done and dusted by 500m as far as the surface is concerned.)

As the surface temperature goes up, evaporation increases and the NET radiation from the surface decreases. For an increase of 3 deg C, the latent heat transfer increases by between 6 and 16 W/m^2 (depending on whom one believes for the rate of evaporation increase). The NET radiation decreases by the same amount (otherwise the surface energy fluxes don’t balance).

So as the surface temperature rises, the net energy flux into the very lowest portion of the atmosphere (conduction+radiation) drops by between 12% and 32%, and the energy flux into the clouds increases by between 6% and 20%.
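The 12-32% and 6-20% figures follow directly from the fluxes quoted in the preceding paragraphs (24 W/m^2 conduction, 26 W/m^2 net radiation, 78 W/m^2 latent heat); a quick arithmetic check:

```python
# Surface energy fluxes quoted in the discussion above.
conduction = 24.0     # W/m^2, sensible heat into the lowest air
net_radiation = 26.0  # W/m^2, NET photon flux absorbed by the atmosphere
latent = 78.0         # W/m^2, latent heat released at cloud level

low_level = conduction + net_radiation  # 50 W/m^2 enters near the ground

# For a 3 deg C surface warming the latent heat flux rises by 6-16 W/m^2
# and the NET radiation falls by the same amount (surface balance).
for d_latent in (6.0, 16.0):
    low_drop = d_latent / low_level  # fraction lost by the near-ground input
    cloud_gain = d_latent / latent   # fraction gained at cloud level
# endpoints: low_drop spans 12%..32%; cloud_gain spans ~8%..~21%
# (the comment rounds the latter to 6%..20%)
```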

I didn’t understand the comment: “it isn’t only the radiant energy that can return to the surface, it all can!”, I wonder if Phil could expand that thought.

is that some sort of measured values for Earth vs Martian conditions or is that a synthetic or calculated spectrum from a Hitran database or something similar?

I can get similar spectra using Spectracalc, which does use the HITRAN 2004 database. But I was only able to replicate his Mars spectrum at a path length of 1,000,000 cm, not the 100,000 that appears next to his graph. And the Earth graph was 100 cm.

For the moment, rather than get involved in complex transmission calculations, I would like to concentrate on just how CO2 affects the basic transparency of the atmosphere in the critical band for radiating heat from the Earth’s surface.

On the Bill O’Reilly show, for example, a few weeks back Bill Nye presented a linear darkening model which implied that each parcel of CO2 added to the atmosphere was having a uniform additive effect over the whole band.

On the other hand, what I see from absorption data plots is that CO2 only blocks transmission at specific frequency or wavelength bands and at these bands we already have zones of 100% absorption of IR emitted from the surface. It appears that all we are getting from added parcels of CO2 is a minor and diminishing increase in the width of these blocking bands.

Hitran is a line database. I didn’t think it had any internal line width calculations etc. What software did you use to create a spectrum? Also, using a 1km single pressure thickness layer is not going to provide accurate results.

You know, if you have a logarithmic response, you really shouldn’t use a linear plot.

I calculated Iout and surface temperature required for Iout to be equal to that at 375 ppmv CO2 with MODTRAN for CO2 levels ranging from 0.01 to 2000 ppmv using the 1976 standard atmosphere and constant relative humidity, 100 km altitude looking down, all other conditions default. Plotting both using a logarithmic axis for the CO2 concentration gives this graph. As you can see, the response isn’t logarithmic at all concentrations. Nor would one expect it to be. At low concentrations, less than 0.1 ppmv, the response is nearly linear. It becomes logarithmic by 10 ppmv and the slope is constant to 2000 ppmv. That doesn’t qualify as “tuckered out” to me. The temperature sensitivity to doubling the concentration is 1.22 degrees for 10 ppmv to 2000 ppmv. The change in forcing for doubling CO2 is 2.86 W/m2. That’s less than the IPCC value because this is calculated for the top of the atmosphere rather than at the tropopause, clear sky conditions and the stratosphere has not been allowed to equilibrate to the change in CO2, as is done to calculate the IPCC forcing.
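For comparison with DeWitt’s MODTRAN numbers, the widely used simplified forcing expression dF = 5.35 ln(C/C0) (the published IPCC fit, not something derived in this thread) gives about 3.7 W/m^2 per doubling; a sketch contrasting the two:

```python
import math

def forcing_ipcc(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified IPCC expression for CO2 forcing: dF = 5.35 * ln(C/C0), W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

per_doubling_ipcc = forcing_ipcc(560.0)  # about 3.7 W/m^2 per doubling

# DeWitt's top-of-atmosphere MODTRAN run gives ~2.86 W/m^2 per doubling;
# combined with his 1.22 deg C per doubling, the implied no-feedback
# sensitivity is:
sensitivity = 1.22 / 2.86  # roughly 0.43 K per W/m^2
```

The gap between 2.86 and 3.7 W/m^2 is the point DeWitt makes himself: his run is top-of-atmosphere, clear sky, without stratospheric adjustment, whereas the IPCC figure is defined at the tropopause after adjustment.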

It’s an extremely complex situation in some ways. Nye is doubtless full of it. Depending upon the overall pressure and the total amount of ghg molecules in the atmospheric column, there are effects. At low pressures, the individual lines are quite narrow and tall. At higher pressures, the bands widen out with less of a peak. They differ line by line as to just how wide the line is and how tall it tends to be. Many are quite potent, requiring only a few cm of path length to block practically all of the power at the peak. There are thousands of these lines, some forming bands. The net result is pretty much as described. At the tropopause, one loses an additional 3.7 W/m^2 for a doubling of CO2. At higher altitudes, some of that loss is made up. This is only relevant to clear-sky paths, which are less than half of the total.

Some factors that seem not to be taken into account include that when clouds are present, one has the re-emission of a continuum from the cloud-top level. This is above most of the H2O vapor, some of the CO2, and a fair amount of the highest-pressure air.

That CO2 absorbs in the atmosphere (and also emits) is not really where the problems are. The real factors are associated with the sensitivity and with the cloud and H2O vapor cycle contribution. One can see there is about 150 W/m^2 of blocked outgoing radiation, and that this is associated with 33 deg C of temperature rise; that amounts to a very low sensitivity. Factors like cloud albedo toss a great deal of unknown into many facets.

DeWitt Payne (14:52:37) :
Re: cba (Mar 11 10:27),
is that some sort of measured values for Earth vs Martian conditions or is that a synthetic or calculated spectrum from a Hitran database or something similar?
I can get similar spectra using Spectracalc, which does use the HITRAN 2004 database. But I was only able to replicate his Mars spectrum at a path length of 1,000,000 cm, not the 100,000 that appears next to his graph. And the Earth graph was 100 cm.

That’s odd, I ran it again through Spectralcalc using the same parameters and got the same results.

cba (15:39:30) :
Also, using a 1km single pressure thickness layer is not going to provide accurate results.

I didn’t do the calculation to simulate a full vertical traverse through the atmosphere, I just did it to contrast the broadening of the spectral lines on Mars and Earth, for which it’s adequate.

Spector (15:32:27) :
For the moment, rather than get involved in complex transmission calculations, I would like to concentrate on just how CO2 affects the basic transparency of the atmosphere in the critical band for radiating heat from the Earth’s surface.

“The argo buoys have been sampling ocean heat for four years now and show a decline in each year. That’s some lag”

Actually they have been sampling reliably since 2003, and deployment of them started earlier. Prior to that we had sampling from XBTs going back several decades. See Domingues et al 2008.

As to the supposed cooling, that was reported in Levitus et al 2009. Their study only looked at the top 700 m of the ocean. Von Schuckmann et al 2009 looked at Argo data down to 2000 m since 2003 and showed continued warming for that body of water. The upper layers may have plateaued or dropped a little, but the total down to 2000 m is still warming.

“The assumptions are mind boggling. Surface emitted LWIR is uniform and constant all over the earth, and CO2 distribution is the same all over the earth, and nothing else has any say in what the Temperature of the atmosphere is, and somehow the Temperature of the atmosphere sets the surface temperature of the earth. Meanwhile the most common molecule on the planet, just sits by and lets all that happen”

And exactly how do you know what the assumptions used in climate models actually are? Simple question. Are you criticising something based on how you THINK it is done, rather than ACTUAL KNOWLEDGE of how it is really done? And if you have actual knowledge of how they do it, care to share your sources.

General rule: Chinese Whispers from the Blogosphere doesn’t count as knowledge. A lawyer would dismiss it as uncorroborated hearsay.

Francisco
“Much is made of the fact that the oceans will release CO2 when warmed up, but the attempts I have seen at quantifying this show it is very little. Lubos Motl had a post some time ago where he calculated roughly that you need a huge 8 deg C increase in temperature to produce something like 100 ppm increase in CO2”.

Henry@Francisco
I doubt this. We were taught at college that to get rid of CO2 in the water you have to boil it. But I am sure just a bit of heat would do the trick. Remember that the heat is transferred from the top downward, so the temp. of the top layer may well initially be a lot higher, releasing CO2. There is also the effect of wind and storms, causing release of CO2 into the atmosphere due to vacuum boiling, etc.

There was a prof. in Australia who charted the lagging of CO2 increases after warming; I think his name is Bob Carter. His studies were quite convincing.

There is also my question whether a general increase in the salinity of the oceans (for example, also more carbonates) due to human activities could trap more heat in the oceans. I don’t know. I have not seen any studies on it, and certainly nobody has ever mentioned this possibility to me as a reason for global warming. I doubt that the conductivity of the oceans has been measured on the scale that temperature has.

Glenn, I remember seeing a NASA satellite study that showed CO2 was not evenly distributed in the atmosphere but rather it was concentrated in a few long bands that circled the earth. Do we know what the implications of this non-uniform distribution are? It would seem to me that a climate model assumption of uniform CO2 distribution might inject significant errors in the calculations.

“The assumptions are mind boggling. Surface emitted LWIR is uniform and constant all over the earth, and CO2 distribution is the same all over the earth, and nothing else has any say in what the Temperature of the atmosphere is, and somehow the Temperature of the atmosphere sets the surface temperature of the earth. Meanwhile the most common molecule on the planet, just sits by and lets all that happen”

And exactly how do you know what the assumptions used in climate models actually are? Simple question. Are you critising something based on how you THINK it is done, rather than ACTUAL KNOWLEDGE of how it is really done. And if you have actual knowledge of how they do it, care to share your sources. “””

Glenn, I don’t recall saying anything about “climate models”; certainly not those of the type you referenced at RealClimate. If somehow I left that impression, that was an error on my part.

My comment, which you referenced above, derives from the NOAA global energy budget cartoon depiction; which apparently is attributed to Dr Trenberth.

The “model” which that diagram clearly espouses is of a planet that is uniformly irradiated over 4 pi steradians, continuously, 24 hours a day, 365 days a year, never changing and uniform everywhere. That is the only way that a real TSI of about 1366 W/m^2, scanned by a 24-hour axial rotation about a tilted axis and a 365-day elliptical orbit about a near point source (0.5 degree angular diameter), can end up as 342 W/m^2 total insolation over the entire surface. Likewise the 390 W/m^2 total surface radiation, again emitted 24/365 from each and every square meter of the surface, corresponds to an isothermal radiator at a BB temperature of 288 K. Contrast that with an actual real planet, which at any one instant of time could have surface temperatures ranging over an extreme span of 150 deg C, from a low of about -90 C to a high of +60 C or even more, so the actual surface radiation available to interact with CO2 or any other GHG, including water vapor, varies by more than an order of magnitude in total radiant emittance.

If Trenberth’s depiction, is in fact not that of a static isothermal uniformly irradiated object, I would appreciate somebody pointing out where on that diagram should one look to find evidence that that is not the case.
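The 342 W/m^2 figure in the comment above is pure geometry; a one-line sketch (the TSI value is the one the commenter uses):

```python
# A sphere intercepts sunlight over a disc (pi * r^2) but radiates and
# averages over its whole surface (4 * pi * r^2), so the time-and-area
# averaged insolation is TSI divided by 4.
tsi = 1366.0  # W/m^2, total solar irradiance quoted above
average_insolation = tsi / 4.0
print(average_insolation)  # 341.5, i.e. the ~342 W/m^2 in the diagram
```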

Glenn, I remember seeing a NASA satellite study that showed CO2 was not evenly distributed in the atmosphere but rather it was concentrated in a few long bands that circled the earth. Do we know what the implications of this non-uniform distribution are? It would seem to me that a climate model assumption of uniform CO2 distribution might inject significant errors in the calculations.

Yes, the implications are zero; a ~2% fluctuation is of no consequence!

How long on average does CO2 slow down OLW energy from escaping to space? Is it on the order of minutes, hours, days, months, or years? What is the corresponding time for H2O to slow down LW from radiating to space?

When sunlight penetrates the ocean as light, for how long, on average, is that energy stored in the ocean before it is radiated as LW?

How effective is a cloud at turning sunlight into LW radiation?

Isn’t the critical factor in measuring global warming the ohc? How effective are ghg at heating the ocean? If sunlight is much more effective at heating the ocean than ghg, then wouldn’t anything that converts sunlight to LW tend to have a long term cooling effect on the ocean?

I understand that clouds can result from increased heat. I don’t really get clouds increasing the ohc. What % of sunlight that hits a cloud is immediately reflected back to space?

Back in 2005 Hansen predicted that OHC was going to increase every year for several years by a significant amount (IIRC, it was something like 10^22 joules/year). That prediction did not happen, as OHC leveled off and even declined in recent years. Where did Hansen go wrong? Has that shaken your confidence in modeling global climate?

The path length problem was that I left the VMR (volume mixing ratio) at the default setting of 0.1. Duh (Or, *smacks forehead* D’oh)! I always thought there should have been a concentration setting somewhere, but never really looked until today. I must have assumed that VMR was some sort of instrument function. 100,000 cm path length with a VMR of 0.000375 looks exactly like your spectrum. So for Mars the VMR would be ~1. Learn something new every day. Spectracalc charges more than I want to spend for a subscription to do their more interesting stuff.

“The path length problem was that I left the VMR (volume mixing ratio) at the default setting of 0.1. Duh (Or, *smacks forehead* D’oh)! I always thought there should have been a concentration setting somewhere, but never really looked until today. I must have assumed that VMR was some sort of instrument function. 100,000 cm path length with a VMR of 0.000375 looks exactly like your spectrum.”

Good, I wondered if it was something like that, I used 0.000385.

“So for Mars the VMR would be ~1.”

Yeah, I used 0.95, again no biggie.

“Learn something new every day. Spectracalc charges more than I want to spend for a subscription to do their more interesting stuff.”

Right, If I need to do some of the fancy stuff I just save everything up and subscribe for a month.

George E. Smith (10:32:17) : “If Trenberth’s depiction, is in fact not that of a static isothermal uniformly irradiated object, I would appreciate somebody pointing out where on that diagram should one look to find evidence that that is not the case.”

We are on the same wavelength. It is in fact logically impossible to explain variations in a radiantly heated object’s temperature due to spectrum with a 1-dimensional sum, because that is computationally equivalent to a point surrounded by a sphere of uniform temperature. By the 0th law of thermodynamics, the point must come to the temperature of the sphere; spectrum makes no difference. As I have repeated, it is the integral over the correlations between the spectra of a sphere in various directions and the spectra of the heat sources and sinks in those directions which determines the sphere’s mean temperature. My few lines of array code accept any partition of a radiated sphere and its surrounding distribution of temperatures. Its answers match those of the one-dimensional simplification in those cases the cruder model can handle, but, I think, it makes the physics much more understandable by correctly implementing the actual 3-dimensional reality.

It is good to see that a number of people here understand how fraudulent the “cold earth” 255 K assertion for a naked earth is.

If you want to see a real world extreme example in real-time, the OLR anomaly at about 160°W to 160°E, the Nino 4 region, has been as high as -50 W/m2 over the past few months.

The El Nino is now dumping its heat into the atmosphere and this area has been nearly continuously covered in tropical convection storms for the past 3 months. As a result, the long-wave radiation is not escaping from the atmosphere – it is being held in. This is a fairly typical response for the end and immediate period after an El Nino, while the opposite happens with a La Nina.

A drop from the normal OLR of 230 W/m2 to 170 W/m2 at the end of January is a big number. Temperatures in this region should increase by 5C as a short-term immediate (speed-of-light) response from this alone (average -30 W/m2 over the past few months). On the other hand, the reduction in sunlight from all the tropical convection storms (if 50% higher which looks reasonable) could reduce temperatures by about 3C.

What are temperatures in the region doing? The closest I could find was Truk in the Caroline Islands (a little north and west of the centre of the region but close enough) which is about 1C to 2C above average over the last 90 days.

So, my estimates above constructed using the Stefan Boltzmann equations are not far off. Clouds and water vapour (with impacts as high as 20 to 30 W/m2 on both sides) have much more impact that a minor GHG with 3.7 W/m2 impact upon doubling.
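The ~5 °C figure above can be roughly reproduced by linearizing the Stefan-Boltzmann law; this is a sketch, not the commenter's actual calculation, and the 302 K tropical surface temperature and unit emissivity are my assumptions:

```python
# Linearize F = sigma * T^4: a sustained flux imbalance delta_f maps to a
# temperature response delta_t = delta_f / (4 * sigma * T^3).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def delta_t_from_forcing(delta_f, t_kelvin):
    """Temperature change (K) needed to radiate away delta_f W/m^2,
    assuming a blackbody (emissivity 1) at temperature t_kelvin."""
    return delta_f / (4 * SIGMA * t_kelvin ** 3)

# A -30 W/m^2 OLR anomaly means ~30 W/m^2 is being retained:
print(delta_t_from_forcing(30.0, 302.0))  # comes out near the 5 C quoted
```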

This does provide a little example of how an El Nino can affect global temperatures as well (the warming in this large region will filter off to the rest of the planet now and the OLR anomalies appear with a lag of a month or two compared to the peak of the El Nino).

Thanks Phil, my original suggestion was for a presentation graphic that could be used by those who might wish to show the real effect that added-CO2 has on the atmosphere. This could serve as an introduction to explain why added CO2 has a logarithmic effect. I believe that Bill Nye’s simplistic linear darkening model is *the* concept generally accepted by the public and that this is much of the motivation behind those who support the fierce return to ‘350’ movement.

I suspect an expanded and perhaps simplified view of the “Radiation Transmitted by the Atmosphere” chart (copied from another WUWT article) might be best suited to that purpose. A similar shaded graphic zoomed to cover the important 5 to 20 micron (or 15 to 60 THz linear frequency) IR range might be very telling. I include this link once more for quick reference.

Greetings, and thank you for the input. I initially missed seeing them, as I think we might have been posting at the same time, and my time this week has been extremely limited. As I was traveling all day yesterday, I’ve got a bit of catch-up to do still.

Henry Pool wrote: “Sorry Frank, you lost me here. How do you people always seem to know absolutely for sure that CO2 is a greenhouse gas?”

If you google “Absorption Spectrum of Carbon Dioxide” you will see many similar figures, some of which have the emission spectrum for the sun and earth superimposed. For example, see: http://chriscolose.wordpress.com/2008/03/09/physics-of-the-greenhouse-effect-pt-1/ The absorption of sunlight by carbon dioxide is negligible because it occurs where the emission from the sun is very weak and the carbon dioxide absorption overlaps with water vapor. On the other hand, the absorption spectrum of carbon dioxide has very strong absorptions at wavelengths where the earth emits.

6) Although cold water does hold more CO2 than warm, CO2 won’t get low enough to endanger plants because plants are the main consumers of CO2 not cold water.

Not true, the ocean is a larger sink for CO2 than plants.

Perhaps you are right. The difference between the Last Glacial Maximum (180 ppm) and pre-industrial (280 ppm) is believed to be due to the extra carbon dioxide held by the colder ocean. It is an interesting question how much colder the ocean would have to be in order to drive the carbon dioxide concentration below 100 ppm. According to the “snowball earth” hypothesis, the oceans got cold enough to freeze over in the distant past when the sun was cooler, but I’m not aware that CO2 ever approached 100 ppm during those times. Those periods were supposedly ended when volcanoes emitted enough GHGs.

Frank wrote
“The absorption of sunlight by carbon dioxide is negligible because it occurs where the emission from the sun is very weak and the carbon dioxide absorption overlaps with water vapor. On the other hand, the absorption spectrum of carbon dioxide has very strong absorptions at wavelengths where the earth emits”

Henry@Frank
1) you are probably only referring to the absorption between 4 and 5um, not the others (see argument below).
2) You cannot say that without actual results from an actual experiment. I had this argument with Phil. before.
here it comes again:

Here is the famous paper that confirms to me that CO2 is cooling the atmosphere by re-radiating sunshine (12 hours per day): http://www.iop.org/EJ/article/0004-637X/644/1/551/64090.web.pdf?request-id=76e1a830-4451-4c80-aa58-4728c1d646ec
Note that they measured this radiation as it bounced back to earth from the moon. Follow the green line in fig. 6, bottom. Note that it already starts at 1.2 um, then one peak at 1.4 um, then various peaks at 1.6 um and 3 big peaks at 2 um.
This paper here shows that there is absorption of CO2 at between 0.21 and 0.19 um (close to 202 nm) (UV): http://www.nat.vu.nl/en/sec/atom/Publications/pdf/DUV-CO2.pdf
There are other papers that I can look for again that will show that there are also absorptions of CO2 at between 0.18 and 0.135 um and between 0.125 and 0.12 um.
We already know from normal IR that CO2 has big absorption between 4 and 5 um.

So, to sum it up: we know that CO2 has absorption in the 14-15 um range, causing some warming (by re-radiating earthshine, 24 hours per day), but as shown and proved above it also has a number of absorptions in the 0-5 um range, causing cooling (by re-radiating sunshine). This cooling happens at every level where the sunshine hits the carbon dioxide, the same as with the earthshine; the way from the bottom to the top is the same as from the top to the bottom. So, my question is: how much cooling and how much warming is caused by the CO2? How was the experiment done to determine this, and where are the test results? If it has not been done, why don’t we just sue the oil companies to do this research? (I am afraid that simple heat-retention testing will not work here; we have to use real sunshine and real earthshine to determine the effect in W/m3 [0.04%]CO2/m2/24hours.)

I am going to state it here quite categorically again that if no one has got these results then how do we know for sure that CO2 is a greenhouse gas?

RE: Henry Pool (07:41:38) : “So, to sum it up, we know that CO2 has absorption in the 14-15 um range causing some warming (by re-radiating earthshine, 24 hours per day) but as shown and proved above it also has a number of absorptions in the 0-5 um range causing cooling (by re-radiating sunshine).”

I believe that most solar radiation is in the 0.2 to 1.5 micron range (1500 to 200 THz, respectively, one micron is nominally 300 THz.) So, other than Rayleigh scattering, there should be no significant CO2 absorption bands in the optical range. I believe the best argument against the hypothetical CO2 crisis is that you cannot kill a horse more than once and most of that horse is already dead.

I suggest looking at the graphics in the WUWT article titled “A Window on Water Vapor and Planetary Temperature, Part 2.”

I ran across an article at RealClimate based on data from the HITRAN spectroscopic archive which seems to indicate that a nominal CO2 concentration designated 1xCO2 (280 ppm, I believe) would yield a CO2 saturated absorption band of 13.5 to 17 microns and increasing the CO2 concentration to 4xCO2 (1120 ppm) only appears to make this range 13 to 17.5 microns. (Ref: “Part II: What Ångström didn’t know.”)

Spector said:
“I believe that most solar radiation is in the 0.2 to 1.5 micron range ”

Henry@Spector
the sun radiates from 0 to 5 um. The energy associated with infrared coming from the sun is 46 or 47%, which I think is still considerable.
You can feel this here in Africa. You cannot stay in the sun for too long; the physical heat is sometimes too much even for 10 minutes’ exposure. Unless during the day the humidity goes up, then you can feel the heat becoming less. So I would not underestimate the total cooling caused by CO2 from all the absorptions in the 0-5 um range, taken together……
I agree that the debate is somewhat moot, to me as well, as I believe that global warming as such is not possible. Earth has its own (water) cooling plant with built-in thermostat. Fluctuations are (most probably) caused by the mechanism that causes cloud formation (the Svensmark theory). But there are an awful lot of people still out there who believe CO2 is a problem. A person is not dead until some doctor declares him dead. So we have to test it.

I am afraid that simple heat-retention testing will not work here; we have to use real sunshine and real earthshine to determine the effect in W/m3 [0.04%]CO2/m2/24hours. We know Svante Arrhenius’s formula was wrong. It appears no one did the correct research to get the correct formula.

Again, I say that if no one has got these results then how do we know for sure that CO2 is a greenhouse gas?

Come on folks, if you’re truly skeptical then where’s your skepticism? Why are you skeptical of climate scientists and not David Archibald? Well, what’s there to be skeptical about?

To begin with, how did Mr. Archibald convert from a radiative forcing (in watts per square meter) to an actual temperature increase? A good skeptic should be curious about that and should look into it. In order to produce his figures, Mr. Archibald has simply assumed that for each 1 W/m2 increase in radiative forcing there is a corresponding temperature increase of 0.1°C. This is what he is calling “Natural Warming”. Time for some skeptical alarm bells to go off!

A climate sensitivity of 0.1°C per W/m2 is an absurdly low value that cannot be taken seriously. Even the prominent climate skeptic Richard Lindzen agrees that without feedbacks in the climate system this value should be about 0.3°C per W/m2 (3 times as high). This is relatively straightforward physics. However, of course there ARE feedbacks in the system and this number is likely quite a bit higher.

What does a sensitivity of 0.1°C per W/m2 imply? Well, the ice ages don’t make sense any more (at all). The 8°C rise coming out of the last ice age would take an astronomical 80 W/m2 of radiative forcing! Also, the 20th century rise in surface temperatures of about 0.7°C would require a radiative forcing of 7 W/m2. Well, for those of you still clinging to the “it’s the sun” theory, this doesn’t jibe very well with an upper-bound increase in solar radiative forcing during the 20th century of about 0.3 W/m2 (then again, changes in solar activity don’t jibe with any accounting of 20th century warming). Okay, so Mr. Archibald’s “Natural Warming” seems to violate fundamental physics and is completely incapable of explaining ANY climatic changes, natural or anthropogenic. Hmmm…
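The implied-forcing arithmetic in that paragraph is just division; a minimal sketch:

```python
# Given a climate sensitivity in C per W/m^2, the radiative forcing
# implied by an observed temperature change is delta_T / sensitivity.
def implied_forcing(delta_t_c, sensitivity):
    """Forcing (W/m^2) needed to produce delta_t_c at the given sensitivity."""
    return delta_t_c / sensitivity

s = 0.1  # C per W/m^2, the value the comment objects to
print(implied_forcing(8.0, s))   # deglaciation: 8 C / 0.1 = 80 W/m^2
print(implied_forcing(0.7, s))   # 20th-century warming: 0.7 C / 0.1 = 7 W/m^2
```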

But what does he compare this to? Well, Mr. Archibald suggests that “some of the IPCC climate models predict that temperature will rise up to 6° C as a consequence of the doubling of the pre-industrial level of 280 ppm.” Really? The upper bound of 6°C is associated with the “A1FI” future emissions scenario which is far more than doubling of CO2: achieving a concentration of nearly 1000 ppm (more than a tripling!) by 2100. And does Mr. Archibald think that CO2 is the only climate driving factor? The rise in CO2 only accounts for about half of the total anthropogenic greenhouse gas forcing over the twentieth century.

So Mr. Archibald’s “Natural Warming” uses an impossibly low climate sensitivity and he then compares it to “Anthropogenic Warming”, for which he uses the absolute upper bound of climate model sensitivities (which is actually for 1000 ppm CO2 not 560 ppm – whoops, a minor oversight that happens to produce an even more “shocking” figure).

Moral of the story? If you multiply something by a small number, you get a smaller value than if you multiply it by a large number. Tada! Mr. Archibald has just demonstrated multiplication! Wait, I thought he was trying to say something about climate…

ATTENTION SKEPTICS! Live up to your title and apply the same level of skepticism to all claims. Why believe Mr. Archibald and not the thousands of trained, competent, and honest independent scientists who are dedicated to understanding our climate system? Mr. Archibald just spoon-fed you nonsense. You should read the label before you swallow.

p.s. Mr. Archibald: where on earth did you get the idea that “plant growth shuts down at 150 ppm” CO2?! That is completely bogus. I should know – it’s what I study. SOME plants have trouble “breaking even” at 150 ppm, but not even close to all. Just for my own satisfaction, tomorrow I will expose a leaf to 150 ppm of CO2 and watch it photosynthesize.

RE Henry Pool (21:02:01) : “how do we know for sure that CO2 is a greenhouse gas?”

I believe you are assuming the term ‘greenhouse gas’ is more restrictive than it actually is. All it means, I think, is that the gas in question has electro-magnetic radiation absorption bands in the far infra red range typical of ‘earthshine’ and is transparent to radiation in the optical range typical of sunshine. I do not think this term requires that an increase of the fraction of these gases in the atmosphere must necessarily produce a corresponding increase in the terrestrial greenhouse effect.

From the data I have seen, increasing the CO2 in the atmosphere to a whopping 1120 ppm will only reduce the width of the window open to earthshine by about a half micron from that of pre-industrial times. In the WUWT article, “A Window on Water Vapor and Planetary Temperature,” it says that water vapor is generally assumed responsible for 70 to 90 percent of the greenhouse effect on earth.

At the Earth, the flux of solar radiation, otherwise known as ‘The Solar Constant’ is about 1368 watts per square meter. According to the Stefan Boltzmann formula, that is enough to cause any flat surface directly facing the sun with no other means of heat dissipation or reflection to reach a temperature of 394 degrees K (121 degrees C.) I believe this is why you can sometimes cook eggs on the hood of a car. The peak heating power from the sun appears to be coming in at wavelengths around 0.5 microns.

Luckily, the Earth reflects an average of about 30 percent of incoming energy and the Earth also has four times as much surface area as it has area facing the sun so it only needs to radiate an average energy of about 239 watts per square meter to expel the energy received from the sun. This reduced average energy flux corresponds to a temperature of 255 degrees K (-18 degrees C.)
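The two Stefan-Boltzmann temperatures quoted in these two paragraphs can be checked in a few lines (a sketch; emissivity 1 is assumed throughout):

```python
# Blackbody temperature for a given radiated flux, T = (F / sigma)^(1/4).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_temperature(flux):
    """Blackbody temperature (K) that radiates `flux` W/m^2."""
    return (flux / SIGMA) ** 0.25

solar_constant = 1368.0
print(bb_temperature(solar_constant))           # ~394 K (~121 C), the flat plate

albedo = 0.30
absorbed = solar_constant * (1 - albedo) / 4.0  # the factor 4 is the sphere/disc area ratio
print(absorbed)                                 # ~239 W/m^2
print(bb_temperature(absorbed))                 # ~255 K (~-18 C), the effective Earth
```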

Greenhouse gases restrict the window available for ‘earthshine,’ so, according to Dr. Miskolczi, the average Earth surface temperature needs to be about 33 degrees C higher than -18 degrees C to force all the received (and internally generated) energy back out.

ABG (11:38:03 , 14th March 2010) wrote:
“A climate sensitivity of 0.1°C per W/m2 is an absurdly low value that cannot be taken seriously. ”

It depends on what boundary one is talking about. At the lower boundary (the Surface) the sensitivity at 15DegC is between 0.095DegC/W/m^2 (Clausius Clapeyron increase of evaporation of 6.5% per degC) and 0.15DegC/W/m^2 (lowest guess increase in evaporation of 2.5% per DegC).

At the upper boundary, say the Tropopause, where the air is cold and there is only radiation to consider, the sensitivity is around 0.4DegC/W/m^2, ie much higher than the surface.

The Climate Crew bang on a bit about the “Radiative Forcing” (ie the energy imbalance) at the Tropopause for an increase in CO2, and take it as read that any increase in Tropospheric temperature is necessarily translated into the SAME temperature increase at the surface. I’m not so sure about that. I would like to see somewhere, the SURFACE BALANCE accounted for – what is the level of cloud influence on insolation and reflection, what is the change in albedo, what is the level of back-radiation, and where does it come from? It would be nice to see a version of the Kiehl-Trenberth diagram at 18DegC for example. Is there some problem? Do the surface fluxes now balance? Or not?

For an Ice Age to be sustained, there would need to be a large increase in the effective albedo of the planet – by about an additional 60%. We know that lots of ice over the land will increase the surface albedo. It seems that the average day in an Ice Age would be cloudy but with very little rain/snowfall (around 40% of today’s values).

That seems to work well, but of course if one assumes a positive cloud/water vapour feedback you could get anything.

I don’t see a low sensitivity as implying that an Ice Age cannot occur, only that it would be a low precipitation, very cloudy, not much weather happening sort of planet. Under those conditions, which seem to be borne out by the guesses of what happened in the last Ice Age, the surface will sustain a non-heating ice box.

The big Ice Age question is what causes the switch from this apparently stable state to and from the present rather more salubrious condition. I like Hoyle’s theory best…

ABG wrote:
“A climate sensitivity of 0.1°C per W/m2 is an absurdly low value that cannot be taken seriously. Even the prominent climate skeptic Richard Lindzen agrees that without feedbacks in the climate system this value should be about 0.3°C per W/m2 (3 times as high).”

ABG is assuming that any feedback must be positive. He starts from the “without feedback” case and seems to assume that there must be at least some positive feedback. A stable system, or one in equilibrium, must have negative feedbacks. Examples of negative feedback would be:
Higher temperatures give more evaporation. The water vapour rises and carries heat upwards. When the water condenses, the heat can escape to space without radiating from the surface. The increased heat loss opposes the increase in temperature.

Higher temperatures give more evaporation. The water vapour rises and creates more cloud cover, reflecting sunlight into space.

There can be negative feedbacks and positive feedbacks. If the net effect was positive, the earth’s temperature would be unstable. For example, a volcanic eruption such as Krakatoa would cause an ice-age.

So Lindzen can agree with 0.3°C per W/m2, but then negative feedback can take that to 0.1°C per W/m2.

Spector said
“I believe you are assuming the term ‘greenhouse gas’ is more restrictive than it actually is. All it means, I think, is that the gas in question has electro-magnetic radiation absorption bands in the far infra red range typical of ‘earthshine’ and is transparent to radiation in the optical range typical of sunshine”.
Henry@Spector
Ok, I think they have to come clear on the definition of a greenhouse gas.
I was assuming that the term “greenhouse” refers to a substance in the atmosphere where the net effect of the cooling and warming properties is warming, rather than cooling. It is true, they also call ozone a greenhouse gas (because it has absorption at around 13) but it has strong absorption in the UV region, thereby blocking a lot of UV from the sun. I assume the net effect of ozone is cooling, rather than warming, but again I could not find the relevant test results on this. As I said and proved before, CO2 is not transparent to sunshine, it has quite a number of absorptions in the 0-5 range, so I donot understand why you keep on re-stating that it is (which also seems to be the common line that is taught at tertiaire institutions).

then you say:” Luckily, the Earth reflects an average of about 30 percent of incoming energy”

Yes, and if you look carefully at the two graphs showing the incoming and outgoing radiation that you referred to earlier (there are better ones that also show the clear gaps caused by CO2 in the incoming solar radiation), you will have noticed that this 30% is due to the combined effect of water, oxygen/ozone and carbon dioxide.

You see what I am getting at? I honestly do not know what the point is of talking about GHGs if you do not know for sure that the net effect of the substance in the atmosphere is warming rather than cooling. You first have to test this! Don’t come to me with an IR spectrum of a substance that has some absorption in the earthshine region and then call it a GHG. Even if it is methane. I think that is stupid. You first have to determine how much cooling it causes (by blocking/re-radiating sunshine), then you have to determine the warming (by trapping/re-radiating earthshine), and then compare the two…

So where are those results? That is what I was looking for… I put it to you that they do not exist. Everybody thought that somebody else would do it; in the end nobody did it.

Carl Chapman wrote:
“There can be negative feedbacks and positive feedbacks. If the net effect was positive, the earth’s temperature would be unstable. For example, a volcanic eruption such as Krakatoa would cause an ice-age.”

Well, first off, your example is not a good one. Aerosols leave the atmosphere in a few years and they don’t have nearly enough of a sustained effect to allow for strong feedbacks to happen.

And of course there can be negative feedbacks – I’ll get to that in a second. With or without feedbacks, IF the ultimate climate sensitivity to a radiative forcing is 0.1°C per W/m2, the climate system makes no sense. It should be so insensitive that temperatures will barely respond unless forcings are HUGE. Given our best estimates of the changes in forcings and temperature over the last century, a reasonable estimate for climate sensitivity with feedbacks is around 0.7°C per W/m2. This implies that positive feedbacks dominate over negative feedbacks.

But again, the main point is that if you compare a small number to a high number, you’re going to see a big difference that will make a shocking(!) graph. Archibald uses an implausibly low number that nobody considers close to reality and compares it to the UPPER BOUND of climate models (and in doing so he mistakenly suggests that this corresponds to a 560 ppm scenario instead of a 1000 ppm scenario AND that all of the warming is due to CO2 alone – wrong). The result – a shocking(!) graph.

Carl Chapman wrote:
“If the net effect was positive, the earth’s temperature would be unstable.”

Not true. If you’re a good skeptic then you should be curious. If you’re curious then look this up and see what people have to say about it. Do you think that nobody’s ever thought about this??

In a system with positive feedbacks, the total gain of the effect (the multiplier of the initial temperature change) is G = 1/(1-f), where f is the feedback factor (in degrees of warming per degree of warming). The feedback responds to the initial warming, and much of the additional warming is due to feedback upon feedback upon feedback, etc., but with diminishing returns. As long as f < 1, this additional warming dampens and G is finite, i.e. there is no runaway effect.

With a doubling of CO2, the direct temperature increase without feedbacks would be about 1.2°C (using 0.3°C per W/m2). A feedback factor (f) of 0.4 would produce a total temp increase of 2°C. If f = 0.7 then the total temp increase would be 4°C. Note that all of the climate model forecasts imply that f < 1 (by quite a bit). If f = 0.95 we would see the temperature increase by 24°C, but still there would be no runaway! The runaway greenhouse claim is a red herring. It has been well studied and it is exceedingly unlikely. In no way do positive feedbacks imply a runaway effect – only EXTREMELY strong positive feedbacks.
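The gain formula in the two paragraphs above can be sketched directly (the 1.2 °C no-feedback value and the f values are the commenter's own):

```python
# Total warming = no-feedback warming times the gain G = 1/(1 - f),
# which stays finite for any f < 1 (no runaway).
def total_warming(no_feedback_dt, f):
    """Equilibrium warming (C) with feedback factor f < 1."""
    if f >= 1:
        raise ValueError("f >= 1 would be a runaway feedback")
    return no_feedback_dt / (1.0 - f)

dt0 = 1.2  # C for doubled CO2 without feedbacks, per the comment
for f in (0.0, 0.4, 0.7, 0.95):
    # matches the 2 C, 4 C and 24 C figures quoted above
    print(f, total_warming(dt0, f))
```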

“ABG is assuming that any feedback must be positive.”
No I am not. I study plants – they are a negative feedback. What I am suggesting is that a climate system where the NET feedback is negative doesn’t fit with reality. We cannot explain observed changes in climate with a net negative feedback. All of the evidence points towards a predominance of positive feedbacks over negative feedbacks.

Please be a good skeptic and investigate your claims before spouting them. Look it up and think it through: Does a sensitivity of 0.1°C per W/m2 make sense? Do positive feedbacks really lead to an unstable climate? No and No.

Skeptics shouldn’t have an agenda, they should have a curious mind that seeks the best answers. In general, scientists are about as skeptical as it gets. We can’t wait to be the one to show that the other is wrong! Then why is there such a broad consensus on the main principles of climate change? Because it’s the best answer – far and away. There’s no agenda.

…
Please be a good skeptic and investigate your claims before spouting them. Look it up and think it through: Does a sensitivity of 0.1°C per W/m2 make sense? Do positive feedbacks really lead to an unstable climate? No and No.

Well, before getting that far, ask yourself “does the idea of a single unchanging climate sensitivity make sense?” I say that climate sensitivity is temperature dependent. This is because, like any heat engine, as the temperature rises, the parasitic losses increase. And with the earth, the throttle also shuts down.

Consider a day in the tropics. When it is cool in the morning, there are no clouds. Temperature rises quickly as the sun’s energy increases, so climate sensitivity is high.

By about 10:30, however, clouds start to form. And although the energy from the sun continues to rise, the temperature rise doesn’t keep up. So the corresponding climate sensitivity (warming per additional unit of solar energy) is lower.

As an example from another field, consider an old, non-aerodynamic car. Give it some gas, and it will go a certain speed. Double the amount of gas, however, and the speed doesn’t double. Why not? Because the aerodynamic drag from the air increases roughly as the square of the speed. So what is the “speed sensitivity” of the car to an increasing input of energy? Well, it depends on the starting speed, just as the climate sensitivity of the planet depends on the starting temperature.

The same is true for the planet as a whole. For example, radiation losses rise as temperature to the fourth power, so as the surface warms, radiation increases faster than temperature. Evaporation (a parasitic loss) also increases faster than temperature. So each additional degree of warming takes more and more energy to achieve.

This is why, for example, the oceans never get much over 30°C. A combination of evaporation, cloud cover, and thunderstorms makes the climate sensitivity over the warmest oceans effectively zero …

As a result, I say that climate sensitivity is a function of temperature, and the idea of a single unvarying “climate sensitivity” for the planet is an incorrect understanding of a dynamic system.
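The temperature-dependence point holds even for the pure radiation term; a minimal sketch (blackbody only, my simplification, ignoring the evaporation and cloud terms the comment also invokes):

```python
# Linearized blackbody sensitivity dT/dF = 1/(4 * sigma * T^3):
# because F rises as T^4, each extra W/m^2 buys less warming at
# higher starting temperatures.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_sensitivity(t_kelvin):
    """C of warming per W/m^2 of forcing for a blackbody at t_kelvin."""
    return 1.0 / (4 * SIGMA * t_kelvin ** 3)

for t in (255.0, 288.0, 303.0):
    print(t, bb_sensitivity(t))  # sensitivity shrinks as T rises
```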

Then why is there such a broad consensus on the main principles of climate change? Because it’s the best answer – far and away.

I despise statements like this; they are so vague as to be meaningless. There is a broad consensus, for example, that the earth has been warming over the last few centuries. On the other hand, there is no consensus on the net effect of aerosols; even the IPCC says we have a low level of understanding of their complete effects. Any time anyone starts talking about “consensus” I tune them out, because I know that they are not talking science any more. See here for more information on the dangers of consensus.

ABG (13:22:01 16Mar10) wrote:
” IF the ultimate climate sensitivity to a radiative forcing is 0.1°C per W/m2, the climate system makes no sense”

I previously posted that:
1. The Surface sensitivity, at equilibrium, is between 0.095 and 0.15 DegC/W/m^2.
2. This is quite different to the upper atmosphere, where the sensitivity is about 0.3 (I said 0.4, but had not allowed for the emissivity, which for the atmosphere is much less than 1).

The Surface sensitivity is derived directly from the First law of Thermodynamics which gives the surface balance equation:

Forcings = Response to Forcings

or, Absorbed Solar Radiation + Back IR Radiation from the Atmosphere = IR Radiation from the Surface + Evaporated water + Direct Conduction into the Atmosphere.

Differentiating, I get:
d(Forcings) = d (IR Radiation from the Surface) + d (Evaporated water), assuming that conduction does not change, which seems most reasonable – equivalent to assuming the same sort of relationship between the boundary layer and the surface at the slightly elevated temperature, at equilibrium.

d(IR Radiation from the surface) is evaluated from Stefan’s law as 4sT^3 dT, where s is 5.67×10^-8. d(Evaporated water) is 78x dT, where x is the fractional change in Evaporation per degree.

Putting reasonable numbers in for x, you get 0.095 to 0.15.
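ABG’s quoted range can be reproduced directly from the derivation above. Taking T ≈ 288 K, a mean evaporative flux of 78 W/m², and x between roughly 2% and 7% more evaporation per degree (the values ABG uses later in this thread), the sensitivity comes out very close to the stated 0.095 to 0.15 DegC/W/m²:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
LATENT = 78.0    # mean evaporative heat flux at the surface, W/m^2 (as assumed above)

def surface_sensitivity(T, x):
    """Equilibrium surface sensitivity dT/dF in DegC per W/m^2.
    From d(Forcings) = 4*sigma*T^3*dT  (surface IR, Stefan's law)
                     + LATENT*x*dT     (evaporation up x fraction per DegC)."""
    return 1.0 / (4.0 * SIGMA * T**3 + LATENT * x)

T = 288.0  # approximate mean surface temperature, K
for x in (0.02, 0.07):  # 2% to 7% more evaporation per degree
    print(x, round(surface_sensitivity(T, x), 3))
# Gives roughly 0.14 and 0.09, bracketing the quoted 0.095-0.15 range.
```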

If one accepts the hypothesis that the thin, thermally feeble upper atmosphere drives the solid, thermally huge Surface, then you can probably get away with claiming a sensitivity of 0.3. On the other hand, given the uncertainty in evaporation, given the poor state of knowledge of the clouds, convection and water vapour, is it reasonable to use an upper atmosphere number for the Surface temperature sensitivity?

Surely, the boundary layer is driven by the surface. The upper atmosphere cannot drive this layer, as it is not invited to the thermodynamic party by the Second Law.
What drives the surface is the Forcings at the surface – and there are only two at equilibrium:
Absorbed Solar Radiation
Back IR Radiation

We know that if the temperature increases, Evaporation increases (there are arguments about how much – the climate science consensus seems to be “as little as we can get away with and still keep our huge positive feedback” – but this area of the “science” is very far from settled), so cloud cover should increase, resulting in a REDUCTION of incident Solar radiation, i.e. a reduction in forcing.

We also know by doing the numbers, that to maintain a Surface temperature 3 Degrees higher than at present, the forcing from Back IR Radiation has to increase by a large amount – by between 22 and 32 W/m^2, even without the additional forcing required to offset the increased cloud.

So according to the We’re-all-about-to-fry-because-of-the-warm-sky brigade:
1. A doubling of CO2 causes a radiative imbalance of 4W/m^2 at the Tropopause. This “radiative forcing” translates to a SURFACE temperature change of about 1DegC, increased by positive feedback to 3DegC.
2. At the surface then, the initial 1 DegC change is forced by a change in surface forcing of between 7 and 11W/m^2 (Where does it come from?) and results in 2 to 7% more evaporation.
3. But that’s not all (you forgot the Steak knives). By the time the system has settled down into an equilibrium state, the 4W/m^2 tropopausal imbalance has been translated by magic to a 22 to 32W/m^2 surface imbalance, this being the change required to maintain the surface at its new elevated temperature.
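The 22 to 32 W/m² figure in point 3 follows from the same surface balance used earlier in the comment: holding the surface 3 DegC warmer requires extra surface IR (4σT³ΔT) plus extra evaporative loss (78x ΔT), with x again between about 2% and 7% per degree:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
LATENT = 78.0    # mean evaporative heat flux, W/m^2 (as assumed in the thread)

def extra_forcing_needed(T, x, dT):
    """Extra surface forcing (W/m^2) required to hold the surface dT warmer:
    increased surface IR emission plus increased evaporative loss."""
    return 4.0 * SIGMA * T**3 * dT + LATENT * x * dT

for x in (0.02, 0.07):  # 2% to 7% more evaporation per degree
    print(x, round(extra_forcing_needed(288.0, x, 3.0), 1))
# Gives roughly 21 and 33 W/m^2, matching the quoted 22-32 W/m^2
# to within rounding -- versus the 4 W/m^2 tropopause imbalance.
```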

… And of course there can be negative feedbacks – I’ll get to that in a second. With or without feedbacks, IF the ultimate climate sensitivity to a radiative forcing is 0.1°C per W/m2, the climate system makes no sense. It should be so insensitive that temperatures will barely respond unless forcings are HUGE.

Well … no. You are assuming that the temperature is set by the change in forcings. However, we have no evidence that this is the case.

For an example from another discipline, consider the temperature of the human body. It is very insensitive to the environmental temperature. When the forcing from the environment goes up or down, the human body temperature barely responds.

But when we get a fever, our temperature can spike radically. All the human body’s lack of sensitivity to forcing shows is that human temperature is not set by the forcing … but that doesn’t mean it can’t change as you claim.

See my post here for a different explanation of what sets the temperature of the earth.

Willis Eschenbach wrote: “You are assuming that the temperature is set by the change in forcings. However, we have no evidence that this is the case.”

We have NO evidence? In which scientific world do you live? You could lean towards the vast body of research linking forcings and temperature OR you could lean toward a single unsupported (by the evidence) hypothesis that the earth will maintain its temperature. It’s certainly possible that tropical clouds could play an important role, but it is not well supported by the data. If it were supported by the data, I would certainly give it consideration – it is an interesting and plausible hypothesis. However, alternative explanations (i.e. forcings change temperature) are supported by the data.

Also, you are very liberal with your use of the word “stable”. Using “bi-stable” doesn’t cut it either as the last several hundred thousand years don’t really show much stability in any way. Huge temperature increases in 5000 years followed by long-term cooling over 50,000-100,000 years with lots of noise in between. If your “thermostat” allows for this much variability, I’m not at all convinced in its ability to dampen future warming.

Putting forth hypotheses is good. Accepting them and promoting them without solid empirical evidence is not.

Willis Eschenbach (23:44:44) :
For an example from another discipline, consider the temperature of the human body. It is very insensitive to the environmental temperature. When the forcing from the environment goes up or down, the human body temperature barely responds.

But when we get a fever, our temperature can spike radically. All the human body’s lack of sensitivity to forcing shows is that human temperature is not set by the forcing … but that doesn’t mean it can’t change as you claim.

Yes, Willis, you picked an example of an organism which has a control mechanism to maintain body temperature; a better example would be a lizard, whose body temperature follows the surrounding temperature! Also it’s not true that the ‘human body temperature doesn’t respond’, try sitting out in the snow or out in the Sahara desert naked.

Willis Eschenbach (23:44:44) :
For an example from another discipline, consider the temperature of the human body. It is very insensitive to the environmental temperature. When the forcing from the environment goes up or down, the human body temperature barely responds.

But when we get a fever, our temperature can spike radically. All the human body’s lack of sensitivity to forcing shows is that human temperature is not set by the forcing … but that doesn’t mean it can’t change as you claim.

Yes, Willis, you picked an example of an organism which has a control mechanism to maintain body temperature; a better example would be a lizard, whose body temperature follows the surrounding temperature! Also it’s not true that the ‘human body temperature doesn’t respond’, try sitting out in the snow or out in the Sahara desert naked.

First, I’m obviously talking about the core body temperature, so it is ingenuous to suggest sitting out in the snow or the desert. Within obvious limits, neither of these change the core temperature.

Second, you assume that the climate has no mechanism to maintain the temperature. This is a very doubtful assumption, as all flow systems which are far from equilibrium have such a mechanism that regulates certain aspects of their behavior. For the climate, this mechanism maximizes the total of work done and turbulent heat loss.

See my posts here and here, and Bejan’s work here among others, for more information on this question.

Willis Eschenbach wrote: “You are assuming that the temperature is set by the change in forcings. However, we have no evidence that this is the case.”

We have NO evidence? In which scientific world do you live? You could lean towards the vast body of research linking forcings and temperature OR you could lean toward a single unsupported (by the evidence) hypothesis that the earth will maintain its temperature. It’s certainly possible that tropical clouds could play an important role, but it is not well supported by the data. If it were supported by the data, I would certainly give it consideration – it is an interesting and plausible hypothesis. However, alternative explanations (i.e. forcings change temperature) are supported by the data.

All that is very fine … but you have only claimed that we have evidence that the long-term temperature of the earth is ruled by the forcings. Until you produce some, all we have is your word that it exists.

Also, you are very liberal with your use of the word “stable”. Using “bi-stable” doesn’t cut it either as the last several hundred thousand years don’t really show much stability in any way. Huge temperature increases in 5000 years followed by long-term cooling over 50,000-100,000 years with lots of noise in between. If your “thermostat” allows for this much variability, I’m not at all convinced in its ability to dampen future warming.

Putting forth hypotheses is good. Accepting them and promoting them without solid empirical evidence is not.

Despite changing locations of the continents, despite strikes by huge meteors, despite millennia long volcanic eruptions, the earth’s temperature has not varied by more than ± 3% in the last half billion years. I call that “stable”. If you can design a system as complex as the climate that has that characteristic, and which does not have some kind of temperature regulating mechanism, I’d love to see it. I propose such a mechanism here …

Henry@ABG
I started my own investigations on global warming in October last year and have already compiled my final report. If you are interested in reading it, here are the results of my investigations/conclusions:
FOR MY CHILDREN, & FAMILY AND FRIENDS LIVING IN THE NORTHERN HEMISPHERE

You may not know this. For a hobby I did an investigation to determine whether or not your carbon footprint, i.e. carbon dioxide (CO2), is really to blame for global warming, as claimed by the UN, IPCC and many media networks. I guess I felt a bit guilty after watching “An inconvenient truth” by Al Gore, so I had to make sure for myself about the science of it all. If you scroll down to my earlier e-mails you will note that I determined that, as a chemist, I could not find any convincing evidence from tests proving to me that CO2 is indeed a major cause for global warming. As my investigations continued, I have now come to a point where I doubt that global warming is at all possible…. Namely, common sense tells me that as the sun heats the water of the oceans and the temperatures rise, there must be some sort of a mechanism that switches the water-cooling system of earth on, if it gets too hot. Follow my thinking on these easy steps:

1) the higher the temp. of the oceans, the more water vapor rises to the atmosphere,
2) the more water vapor rises from the oceans, the more difference in air pressure, the more wind starts blowing
3) the more wind & warmth, the more evaporation of water (evaporation increasing by many times due to the wind factor),
4) the more evaporation of water the more humidity in the air (atmosphere)
5) the higher the humidity in the air the more clouds can be formed
6) Svensmark’s theory: the more galactic cosmic rays (GCR), the more clouds are formed (if the humidity is available)
7) the more clouds appear, the more rain and snow and cooler weather,
8) the more clouds and overcast conditions, the more radiation from the sun is deflected from the earth,
9) The more radiation is deflected from earth, the cooler it gets.
10) This cooling puts a brake on the amount of water vapor being produced. So now it is back to 1) and waiting for heat to start the same cycle again…
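The ten steps above describe a negative feedback loop: warming breeds cloud, cloud cuts absorbed sunlight, and the system is pulled back toward a set point. A deliberately crude numerical sketch shows the behavior being claimed (every coefficient here is invented purely for illustration and not fitted to any climate data):

```python
# Crude sketch of the "water-cooling thermostat" loop above.
# All coefficients are invented for illustration only.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def step(T, solar_in=342.0, cloud_feedback=2.0, T_ref=288.0, heat_cap=50.0):
    """One time step of the toy loop: cloud reflection grows linearly with
    warmth above T_ref (steps 1-8); radiation removes the rest (steps 9-10).
    The 0.877 emissivity factor is tuned so emission balances sunlight at 288 K."""
    absorbed = solar_in - cloud_feedback * (T - T_ref)  # more warmth -> more cloud -> less sun
    outgoing = 0.877 * SIGMA * T**4                     # radiation to space
    return T + (absorbed - outgoing) / heat_cap         # imbalance nudges temperature

T = 292.0  # start 4 K warm; the loop should pull temperature back down
for _ in range(200):
    T = step(T)
print(round(T, 2))  # settles back near the 288 K set point
```

This shows only that such a loop is self-stabilizing if the cloud response is a net negative feedback; whether the real cloud response has that sign and size is exactly what is in dispute in this thread.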

Now when I first considered this, I stood in amazement again. I remember thinking of the words in Isaiah 40:12-26.
I have been in many factories that have big (water) cooling plants, but I realised that earth itself is a water cooling plant on a scale that you just cannot imagine. I also thought that my idea of seeing earth as a giant (water) cooling plant with a built-in thermostat must be pretty original….
But it was only soon after that I stumbled on a paper from someone on WUWT who had already been there, done that …. well, God bless him for that!
i.e. if you want to prove a point, you always do need at least two witnesses!
Look here (if you have the time):

But note my step 6. The Svensmark theory holds that galactic cosmic rays (GCR) initiate cloud formation. I have not seen this myself, but apparently it has been proven under laboratory conditions. So the only real variability in global temperature is most likely to be caused by the amount of GCR reaching earth. In turn, this depends on the activity of the sun, i.e. the extent of the solar magnetic field exerted by the sun on the planetary system. We are now coming out of a period where this field was stronger and more GCR were bent away from earth (this is what we skeptics say really caused “global warming”, mostly).
But apparently now the solar geomagnetic field is heading for an all time low.
Look here:

Note that in the first graph, if you look at the smoothed monthly values, there was a tipping point in 2003 (light blue line). I cannot ignore the significance of this. I noted similar tipping points elsewhere round about that same time (e.g. in earth’s albedo, going up). From 2003 the solar magnetic field has been going down. To me it seems certain that we are now heading for a period of more cloudiness and hence a period of global cooling. If you look at the 3rd graph, it is likely that there will be no sunspots visible by 2015. This is confirmed by the paper on global cooling by Easterbrook:

In the 2nd graph of his presentation, Easterbrook projects global cooling into the future. These are the three lines that follow from the last warm period. If the cooling follows the top line we don’t have much to worry about and the weather will be similar to what we had in the previous (warm) period. However, indications are already that we have started following the trend of the 2nd line, i.e. cooling based on the 1880-1915 cooling. In that case it will be the coldest from 2015 to 2020 and the climate will be comparable to what it was in the fifties and sixties. I survived that time, so I guess we all will be fine, if this is the right trendline.
Note that with the third line, the projection stops somewhere after 2020. So if things go that way, we don’t know where it will end. Unfortunately, earth does not have a heater with a thermostat that switches on if it gets too cold. Too much ice and snow causes more sunlight to be reflected from earth. Hence, the trap is set. This is known as the ice age trap. This is why the natural state of earth is that of being covered with snow and ice. This paper was a real eye opener for me: https://wattsupwiththat.com/2009/12/09/hockey-stick-observed-in-noaa-ice-core-data

However, man is resourceful and may find ways around this problem if we do start falling into a little ice age again – as long as we are not ignorant and do not just listen to the so-called climate scientists whose agendas depend on money. A green agenda is still useless if it has the wrong items on it…
Obviously: As Easterbrook notes, global cooling is much more disastrous than global warming….

Willis Eschenbach (01:26:04) :
First, I’m obviously talking about the core body temperature, so it is ingenuous to suggest sitting out in the snow or the desert. Within obvious limits, neither of these change the core temperature.

I think you mean ‘disingenuous’ and in any case you’re wrong, either of those situations will drop/raise the core temperature and kill you! The human body’s mechanism is to balance heat release with heat loss and uses heat loss coefficient changes to do so in extreme conditions (shivering/sweating), those only have a small range of effectiveness. The whole comfort zone is governed by the shape of the heat release curve which gives a small region of relatively stable temperature with changing heat loss otherwise you’re in trouble!

Second, you assume that the climate has no mechanism to maintain the temperature. This is a very doubtful assumption, as all flow systems which are far from equilibrium have such a mechanism that regulates certain aspects of their behavior. For the climate, this mechanism maximizes the total of work done and turbulent heat loss.

The mechanism is known as the greenhouse effect, same as the human body, loss=release.

I have investigated the results provided by this tool for the default tropical clear air calculation, using 293 W/m2 (292.993) for the nominal required output energy flow at 70 km, Iout. I based this flow requirement on a listed maximum lunar surface temperature. I have tried this with CO2 concentration inputs from 0 to over 70,000 ppm, usually in half-doubling (1.41421…) steps relative to the nominal pre-industrial concentration of 280 ppm.

I am accepting the results provided as-is. I have no documentation on the use of this tool, so I do not know how valid my results are over the full range of CO2 concentrations I tried. I ran this in an attempt to obtain an estimate of the raw CO2 sensitivity of the atmosphere before any positive or negative feedback effects get involved.

My results with MODTRAN indicate that a constant nominal log-doubling slope of 0.9 deg C per doubling applies over a range of 70 ppm to 792 ppm, but from 4,480 ppm to 71,680 ppm the slope rapidly rises from 1.2 to 2.9 deg C per doubling. I calculated the slope from a precision (low residual error) polynomial approximation of the MODTRAN surface temperature results where the output energy flow at 70 km is required to be 292.993 W/m2.

I do not know if the swift slope increase above 4,480 ppm (over 10 times our current level) is due to a MODTRAN model calculation limitation or if this just shows the spectrum data noise floor becoming significant at these CO2 concentrations. Of course, this could also be showing the actual onset of atmospheric window pinch-off.

Swift slope increase? I think not. The rate per doubling is increasing very slowly; the 35840 – 71680 temp offset step is 2.65, and the 2240 – 4480 step was 1.16. On a linear basis, the increase is falling off rapidly. The lower level is 0.516°C per 1000 ppm; the higher level is 0.074°C per 1000 ppm.
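The per-1000-ppm figures quoted here follow directly from the per-doubling temperature steps given: divide each step by the width of its concentration interval.

```python
# Check the quoted linear slopes from the per-doubling temperature steps.
steps = {
    (2240, 4480): 1.16,     # DegC step over the 2240-4480 ppm doubling
    (35840, 71680): 2.65,   # DegC step over the 35840-71680 ppm doubling
}
for (lo, hi), dT in steps.items():
    per_1000ppm = dT / (hi - lo) * 1000.0
    print(lo, hi, round(per_1000ppm, 3))
# Gives about 0.518 and 0.074 DegC per 1000 ppm, essentially matching
# the quoted 0.516 and 0.074: per doubling the effect creeps up, but
# per added ppm it is collapsing.
```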

Just For Reference:
After trying out various relationships with the MODTRAN data, I find that if one defines the variable Z = 0.979 + 0.911*CO2, where CO2 is the CO2 concentration in ppm, then the formula:

The third term, LOG2(Z) divided by 12.7857 and then raised to the seventh power, looks like it would only begin to have an effect greater than one degree K at CO2 concentrations above 7,212 ppm. That assumes that MODTRAN is correctly modeling the real world at those extra-high levels of CO2. Also, I believe this data just shows the raw CO2 effect without any natural compensation or exacerbation.

BTW, Log2(Z)=Ln(Z)/Ln(2) [=0.69315]

It does appear that MODTRAN is now yielding slightly different results (about -0.2 W/m2) than when I first compiled the table in my earlier post.

The seventh power log term that shows up in my MODTRAN data analysis does not change my opinion that the logarithmic nature of the effect of carbon dioxide is one of the key reasons for doubting the claims that anthropogenic carbon dioxide is having a serious effect on our climate. That is because this effect only shows up at very high CO2 concentrations.

The seventh power term seems to become significant around 18 times our current CO2 level. We would have to put about 72 times as much CO2 into the atmosphere as we have in the last century or so to get the CO2 concentration that high. I am not sure that we could do this even if we wanted to.

As CO2 is not, as some appear to have been conditioned to think, an actual source of heat, it seems only natural to assume that the range of any simple linear logarithmic effect approximation would be limited by a pinch-off effect. This is because the gradually expanding CO2 absorption band is progressively reducing the width of the earthshine transmission window and surface temperatures must increase to force the same amount of power through this narrowing constriction.

There are also other trace isotopic varieties of CO2. The most common of these on Earth (about one percent of natural CO2) is 13CO2, with an extra neutron in its carbon atom; a rarer form (about 0.4 percent) has two extra neutrons in one of its oxygen atoms. Each of these abnormal CO2 ‘isotopologues’ may have its own unique absorption spectrum, and I suspect these could become significant at 10 to 100 times our current CO2 concentration levels.

Also, once again, I do not know how accurately MODTRAN models the real world at these extra high CO2 concentrations. Perhaps someone with a better understanding of this whole problem could explain why the seventh power term appears to fit the data so well.