844 Responses to “Unforced variations 3”

@ #692John Peter
In case you have not read Wood’s experiment, it is of minor interest: it shows that greenhouses keep warm air inside primarily by preventing convective mixing with the cooler air outside, rather than by blocking surface IR from escaping through the glass.

I found a version of it here in case you have not read it. It’s not a full paper, just a note of the experiment (plus some incorrect musings on the capacity of the atmosphere to absorb infrared radiation from the ground). Apart from showing how greenhouses keep warm, it has no significance in relation to climate science: http://www.wmconnolley.org.uk/sci/wood_rw.1909.html

John Peter@892 – Start by spelling the man’s name properly: Arrhenius. Then learn the difference between nominal terms and descriptions. The “greenhouse effect” is an accepted rubric for the observed physics.
Next show some search abilities and reading comprehension, like the beginning of the Wiki article you cherry-pick [that everyone here is no doubt familiar with]: “The greenhouse effect was discovered by Joseph Fourier in 1824, first reliably experimented on by John Tyndall in 1858, and first reported quantitatively by Svante Arrhenius in 1896”. You also failed to note the further “(…) Arrhenius’ greenhouse law reads as follows:
if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression. This simplified expression is still used today.”
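Arrhenius’ law as quoted is just a logarithm; here is a minimal Python sketch of the geometric-in, arithmetic-out behavior, using an illustrative 3 K per doubling (my assumption, not a value from the quote):

```python
import math

def delta_t(c_ratio, s_per_doubling=3.0):
    """Warming for a CO2 concentration ratio C/C0 under Arrhenius' law:
    geometric increases in CO2 give arithmetic increases in temperature.
    The 3 K per doubling is illustrative, not Arrhenius' own figure."""
    return s_per_doubling * math.log(c_ratio, 2)

steps = [delta_t(2 ** n) for n in range(4)]  # 1x, 2x, 4x, 8x CO2
print(steps)  # each doubling adds the same increment
```

Doubling the gas repeatedly adds the same temperature increment each time, which is exactly the geometric/arithmetic statement.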

R. W. Wood did not “demonstrate” anything except some conjecture on details of the process, which he also did not fully understand. [Read the whole page, slowly, with comprehension]

John Peter, if you get to make the rules for the discussion, of course you can prolong it indefinitely.

But the point is clear: you know about Akasofu because of Heartland, not because of anything published in the scientific literature.

If you were to look up the science you’d know why. You’re just doing the “what about this? Oh. Well, then what about _this_?” pattern we see often.

Look, it’s tough to come in to a science education site and have people say that you’re just tossing out talking points. It’s not about you, this is something that’s always happening. You’re the one right now who’s most actively pulling stuff out of the “climate debate” area and posting it here asking people to deal with it as though it were science.

Take each new thought that occurs to you. Put your very best skeptical thinking hat on. John Mashey gave you a good example earlier of how to think through this stuff.

Look it up. Tell us what you find in the science journals — not the ‘summary’ stuff you find in sites that lie about the science.

Who lies consistently? CO2Science is one of the worst providers of very well crafted misinformation, and is relied on by many copypasters on blogs.
Watch out especially for their stuff, whether first or second or thirdhand:

@ 700 John E. Pearson
I’ll just point out for the record that Angstrom and Koch did not ‘shoot down’ Arrhenius’ work at all, but they probably delayed further research by some decades. As Weart notes:

“These measurements and arguments had fatal flaws. Herr Koch had reported to Ångström that the absorption had not been reduced by more than 0.4% when he lowered the pressure, but a modern calculation shows that the absorption would have decreased about 1% — like many a researcher, the assistant was over confident about his degree of precision.(9*) But even if he had seen the 1% shift, Ångström would have thought this an insignificant perturbation. He failed to understand that the logic of the experiment was altogether false.“

If you read the paper by Arrhenius that I linked to in post #697, you will see that Arrhenius stated some assumptions (eg clouds will not change), presumably so as to simplify his calculations, which he would have made without the benefit of even an HP scientific calculator :D However his results are remarkably similar to those still held today. It’s well worth a read: http://web.lemoyne.edu/~giunta/Arrhenius.html

John Peter, Where in the hell are you getting your information? You are spouting utter falsehoods – knowingly or not. The experiment by Wood did not disprove the greenhouse effect due to CO2, but rather showed that that effect had nothing to do with keeping greenhouses warm. Arrhenius knew the mechanism of the greenhouse effect when he did his calculations. See this post by the Rabett:

Also, you do know that Spencer is a physicist, do you not? I used to work down the hall from him at the International House of Physics when I was an editor. You really ought to read history. You would clearly learn a lot.

No, Koch was Angstrom’s go-fer and misunderstood the behavior of CO2 under pressure. Wood didn’t believe the result. He measured a different way, using quartz (which is IR-transparent) instead of glass. Wood’s result was the proof that Arrhenius was wrong.

Your ref to Weart leads (finally) to:

”…already in 1896 Arrhenius somewhat inaccurately wrote, “Fourier maintained that the atmosphere acts like the glass of a hothouse,” Arrhenius (1896), p. 237; the word “greenhouse” perhaps first appeared in this context in a study which explained how greenhouses keep the warmed air from rising and blowing away, and that this matters more than the fact that infrared radiation from within does not escape through the glass (which is more like what happens in the atmosphere), Wood (1909); for the science, see also Lee (1973); Lee (1974); possibly the first widely seen use of the phrase “greenhouse effect” was in a 1937 textbook (repeated in later editions), wrongly describing “the so-called ‘greenhouse effect’ of the Earth’s atmosphere” as an effect “analogous to that of a pane of glass.” Trewartha (1943), p. 29…

Experts could dismiss the hypothesis because they found Arrhenius’s calculation implausible on many grounds. In the first place, he had grossly oversimplified the climate system. Among other things, he had failed to consider how cloudiness might change if the Earth got a little warmer and more humid.(6) A still weightier objection came from a simple laboratory measurement. A few years after Arrhenius published his hypothesis, another scientist in Sweden, Knut Ångström, asked an assistant to measure the passage of infrared radiation through a tube filled with carbon dioxide. The assistant (“Herr J. Koch,” otherwise unrecorded in history) put in rather less of the gas in total than would be found in a column of air reaching to the top of the atmosphere. The assistant reported that the amount of radiation that got through the tube scarcely changed when he cut the quantity of gas back by a third. Apparently it took only a trace of the gas to “saturate” the absorption — that is, in the bands of the spectrum where CO2 blocked radiation, it did it so thoroughly that more gas could make little difference.

Still more persuasive was the fact that water vapor, which is far more abundant in the air than carbon dioxide, also intercepts infrared radiation. In the crude spectrographs of the time, the smeared-out bands of the two gases entirely overlapped one another. More CO2 could not affect radiation in bands of the spectrum that water vapor, as well as CO2 itself, were already blocking entirely. These measurements and arguments had fatal flaws. Herr Koch had reported to Ångström that the absorption had not been reduced by more than 0.4% when he lowered the pressure, but a modern calculation shows that the absorption would have decreased about 1% — like many a researcher, the assistant was over confident about his degree of precision. But even if he had seen the 1% shift, Ångström would have thought this an insignificant perturbation. He failed to understand that the logic of the experiment was altogether false.

The greenhouse effect will in fact operate even if the absorption of radiation were totally saturated in the lower atmosphere. The planet’s temperature is regulated by the thin upper layers where radiation does escape easily into space. Adding more greenhouse gas there will change the balance. Moreover, even a 1% change in that delicate balance would make a serious difference in the planet’s surface temperature. The logic is rather simple once it is grasped, but it takes a new way of looking at the atmosphere — not as a single slab, like the gas in Koch’s tube (or the glass over a greenhouse), but as a set of interacting layers.”
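Weart’s “set of interacting layers” picture can be sketched with the standard textbook N-layer gray model; the perfectly absorbing, non-overlapping layers and the 255 K effective temperature are idealizations, not a real profile:

```python
def surface_temp(n_layers, t_eff=255.0):
    """Equilibrium surface temperature under n fully absorbing,
    non-overlapping longwave layers: T_s = T_eff * (n + 1) ** 0.25.
    Textbook gray-atmosphere idealization; 255 K is Earth's
    effective emission temperature."""
    return t_eff * (n_layers + 1) ** 0.25

# Each added layer warms the surface even though every individual
# layer is already completely opaque ("saturated") to LW radiation:
for n in range(4):
    print(n, round(surface_temp(n), 1))
```

The point of the sketch is exactly Weart’s: adding an opaque layer still changes the balance, because the escape to space happens from the topmost layers, not from a single saturated slab.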

From Wikipedia:
Robert Williams Wood (May 2, 1868 – August 11, 1955) was a physicist and inventor. He is often cited as being a pivotal contributor to the field of optics and is best known for giving birth to the so-called “black-light effect”. Wood’s patents and theoretical work shed much light on the nature and physics of ultra-violet radiation and made possible the myriad of uses of uv-fluorescence which became popular after World War I.

I would be remiss if I did not thank you all for contributing so much to my education. I really do appreciate it even though my posts do not (always?) show it. So thanks very much.

I know that good science is not done on blogs, even on as fine an example as RealClimate. I am a skeptic because, right or wrong, that is how I was trained to approach science. I will never (too strong a term?) be a denier, for two reasons:

1)- Should we cause a “tipping point” or too many tipping points, the results would be, to say the very least, uncomfortable.

We can show that we have given the planet more CO2 than it can conceivably use, at least that’s what I believe. So what could possibly be wrong with cutting back some?

Risk much greater than the cost, go for it.

2)- I am a nuclear enthusiast and would like to see nuclear plants phase out coal plants. Not only do coal plants produce a lot of unnecessary carbon, but the mining wrecks our mountains and streams. No matter how much coal folk claim they won’t.

I am trying to “learn” climate science and you all help me a lot. I try to follow and examine any links you share. I prefer papers to web sites and peer reviewed papers the most. Though most of my opinions may seem, to say the least, ignorant, they also may have some value to you because they are “outside the box”. At least that is what I always believed.

@ #709 John Peter says: “Wood’s result was the proof that Arrhenius was wrong.”

John, you need to do some more reading on the topic of the greenhouse effect. Wood didn’t do any research at all on what is commonly called the greenhouse effect. Wood’s little experiment was to determine how actual greenhouses stay warm, which despite sounding similar to the greenhouse effect, is quite a different thing.

I suggest you go and read the papers by Arrhenius and Wood to which I provided links. From your comments you either haven’t read these papers or haven’t understood what you’ve read.

(Oops, I see that Ray Ladbury has already picked you up on this one! I’ll let this post ride anyway, if the mods allow it; although I’m starting to think CFU has a point #706)

I forgot to say that, while I can tolerate some of the skeptics, the deniers turn me off. For many reasons I find SA a skeptic, and I have no trouble relating to him and most of his ideas as I understand them.

FWIW, after all your complaints about SA, I haven’t seen any of you critique his letter. That should say something, but I certainly don’t know what.

The paper illustrates what’s wrong with the solar theories, namely that everyone invents a new summary statistic for whatever measures of solar activity are supposed to produce climate change. He uses a standard technique to identify some periodicities, different from anyone else’s periodicities. The author is cognizant of that fact, and urges more research into all of the measurable aspects of solar fluctuation that might matter. He says that the true test of the model will be in predictions of the future (I would find this attractive, as I have said the same myself!). And he wrote that his model cannot predict more than 10 years ahead because the predictions are conditional on measured (and aggregated) solar activity — future solar activity is not yet known.

If his criticisms of IPCC AR4 are objectionable, they should be rebutted, cliched or not.

John Peter — The atmospheric physics of CO2 is known to a fare-thee-well. Try HITRAN or MODTRAN. Guess what? Arrhenius’s formula is quite a good approximation, and Arrhenius himself knew it was approximate.
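For the curious, the detailed line-by-line results (the kind built on HITRAN data) are often summarized by the simplified fit of Myhre et al. (1998), another logarithmic law of Arrhenius’ form; a quick check:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from the simplified expression
    dF = 5.35 * ln(C/C0) (Myhre et al. 1998), a fit to detailed
    radiative-transfer calculations; 280 ppm is the usual
    preindustrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))  # doubling: ~3.71 W/m^2
```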

You can keep an idea bouncing around for weeks here if you manage to keep getting us ordinary readers to retype what we think. But when one of the scientists looks at the ideas, it’s far more useful if we ordinary readers point it out. This is a good example on the major points.

Go through the skeptical exercise yourself; you’ll be able to let go of this once you do.

[Response: Thanks. Anymore on this is OT (it is very tedious). – gavin]

BobFJ (667) — I think GISS forcings do not include so-called internal variability. If not, the positive internal variability as indicated by the AMO won’t be part of the forcing data and so will leave a positive residual; similarly for strongly negative values of AMO.

Yes, I agree. Tamino’s fourth figure would be much better replaced with this GISS graph (a) which appears alongside their graph (b), the latter apparently being the same as Tamino’s fifth figure.

Graphs (a) & (b) appear here and both predate Tamino’s article. They clearly do not include any internal variability such as AMO.

Re John Peter – 709 – that last quoted paragraph is key – along with the realization that opacity varies over wavelength and the CO2 and the H2O greenhouse effects are not completely saturated at all wavelengths; in the case of CO2, the variation of opacity over wavelength gives rise to the approximately logarithmic relationship of radiative forcing to amount once the central portion of the absorption band is saturated at the tropopause level.

BobFJ (724) — Yes, one needs the net of all forcings plus some index of internal variability to reconstruct the instrumental record. On a decadal scale, the AMO acts as such an index, but imperfectly so, since the net of the nonlinear portion of forcings will also affect the AMO.

(ii) The most detailed and accurate observations are limited in time, but proxy records do provide at least some information about what has been going on for 100s, 1000s, 10000s, 100,000s, millions, tens of millions, hundreds of millions and even billions of years; physical predictions can be tested against analysis of paleoclimatic data. Other planets also provide real examples of what happens when the forcings are much, much different (though this must include the change in gravity and atmospheric thermodynamic properties, etc.).

(iii) In terms of global average temperature: When model trends are compared to historical data over the last century, only the time around 1940 sticks out, and this is comparable to unforced variations seen in individual runs (See IPCC AR4 WGI … chapter 9, I think).

PS: model output with only natural forcings and with all forcings only diverges significantly after 1950 or 1960, give or take, and the record follows the model output with forced changes fairly well. In terms of other dimensions of climate (regional variations, oceanic heat gains, etc.), observations can also be compared to model output, and both can be analyzed for features of seasonal changes, response to volcanic eruptions (Pinatubo, for example) and solar cycles, the diurnal cycle, and forms of variability that can be unforced (internally generated), like ENSO, NAM and NAO, SAM, PDO, AMO, QBO, MJO … not all of which are well-reproduced in models, though a model that gets the general responses almost correct wouldn’t necessarily get all the details right (why would poor resolution of a smaller-scale or small-amplitude process necessarily imply a huge error in climate sensitivity, for example? … I don’t know anywhere near everything that’s been done, but one could look at the correlation of climate sensitivity and internal variability among models…). Observations of different aspects of climate trends can be compared for consistency with each other and with models – for example, by forcing a model with observed sea surface temperature variations and then looking at the modelled and observed atmospheric and land-surface responses, etc.

(iv,v) I don’t see a big problem here; of course it can be conceived that climate sensitivity could be different once the climate gets more than 0.8 K warmer (and it probably isn’t exactly the same), but it must be understood that GCMs are not simply tuned to match observed trends; parameterizations are often based on smaller-scale observed relationships that can be tested repeatedly, and, as I understand it, the few that are based on tuning model output to larger-scale climate patterns are tuned to a time average, not a trend, so matching a trend is still a test of the model (see FAQ on climate models parts I and II at this website).

“If most of the present rise is caused by the recovery from the Little Ice Age (a natural component) and if the recovery rate does not change during the next 100 years, the rise expected from the year 2000 to 2100 would be roughly 0.5°C. Multi-decadal changes would be either positive or negative in 2100.”

1.
If the little ice age was forced (solar + volcanic), then the rebound should occur when the forcing changed. The forcing is now changing mostly from anthropogenic effects, so the natural rebound should be quite small (the e-folding time of radiative disequilibrium is equal to the sensitivity times the heat capacity; a residual imbalance can remain as the change in climate penetrates through the heat capacity of the deeper oceans, but a majority of ‘rebound’ should take place in a few decades).
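A rough magnitude check on that e-folding time, taking an ocean mixed layer as the heat capacity; the 0.8 K/(W/m²) sensitivity and 100 m depth are illustrative assumptions, not measured values:

```python
def e_folding_years(sensitivity=0.8, mixed_layer_m=100.0):
    """tau = sensitivity (K per W/m^2) x heat capacity (J/m^2/K),
    converted to years.  Heat capacity is rho * c_p * depth for an
    ocean mixed layer; 0.8 K/(W/m^2) and 100 m are illustrative."""
    heat_capacity = 1025.0 * 3990.0 * mixed_layer_m  # seawater rho, c_p
    return sensitivity * heat_capacity / 3.156e7     # seconds per year

print(round(e_folding_years()))  # on the order of a decade
```

That order-a-decade timescale is why most of any forced “rebound” should be over within a few decades, leaving only the slower deep-ocean tail.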

2.
If the Little Ice Age was internal variability (?), why would the rebound continue indefinitely? Why didn’t the Little Ice Age just get colder and colder and colder and never let up?

Does SA expect the world to naturally warm 0.5 K per century until the end of time? That would be very very weird, and would tend to imply infinite climate sensitivity.

“It is very important for climatology to include some aspects of archaeology and anthropology in studying earth’s climate change, not just computer science.” … “The IPCC climatology is a sort of ‘instant’ climatology.”

This indicates ignorance of how broad the science is. IPCC AR4 WGI has a whole chapter on paleoclimatology.

“(i) natural components are important and significant, so that they should not be ignored,”
google scholar searches:
Results 1 – 10 of about 21,800 for “cosmic rays” climate
Results 1 – 10 of about 3,660 for climate “total solar irradiance”
Results 1 – 10 of about 554,000 for climate “clouds”
Results 1 – 10 of about 50,300 for climate “ocean circulation”
Results 1 – 10 of about 16,600 for climate “atmospheric turbulence”
Results 1 – 10 of about 1,530 for climate “albedo change”
Results 1 – 10 of about 207,000 for climate “aerosol”
Results 1 – 10 of about 262 for climate “geological weathering”
Results 1 – 10 of about 72,700 for climate “carbon cycle”
Results 1 – 10 of about 5,600 for climate “ocean chemistry”
and even
Results 1 – 10 of about 36 for climate “flying spaghetti monster”

“(ii) it is insufficient to study climate change on the basis of data only from the last 100 years,”
google scholar searches:
Results 1 – 10 of about 55,900 for paleoclimatology
Results 1 – 10 of about 19,600 for climate “ice ages”
Results 1 – 10 of about 1,340 for climate “paleocene eocene thermal maximum”

“(iii) it is difficult to make conclusions about causes of the temperature rise since 1975 until we can understand the rise from 1910 to 1940,”
Only if you ignore the rise in anthropogenic CO2 and its infrared absorption.

“(iv) the present GCM modelings are an attempt to simulate the IPCC hypothesis that the present warming (0.7°C/100years) is caused by the greenhouse effect,”
BS; The use of climate models predates the IPCC, and changes have been made continuously to improve their accuracy in representing current climate, narrow the range of CO2 sensitivity, and better represent current and future regional climates. This is the same old denialist accusation that there is a global conspiracy among scientists.

” and thus,
(v) because of these deficiencies, their future prediction is unreliable and uncertain.”
If it’s unreliable, it must be uncertain.
If it’s reliable but uncertain, say 2.5-4.5 deg C, we may be screwed.
If it’s reliable and certain, say 3.3 deg C, we’re definitely screwed.

“If most of the present rise is caused by the recovery from the Little Ice Age (a natural component) and if the recovery rate does not change during the next 100 years, the rise expected from the year 2000 to 2100 would be roughly 0.5°C. Multi-decadal changes would be either positive or negative in 2100.”
And if you add the minimum 2.5 deg C from CO2 + feedback expected from physics (radiative + Clausius-Clapeyron), which Akasofu forgets, you get 3 degrees plus by 2100, depending on how much fossil fuel we manage to burn by then and assuming the CH4 + CO2 emissions from the Arctic meltdown stay relatively small.

“This rough estimate is based on the recovery rate of 0.5°C/100 years during the last few hundred years. Note that this value is comparable with what IPCC hypothesize as the greenhouse effect. The greenhouse effect shown by GCMs should be carefully re-evaluated, if the present rise (0.7°C/100 years) contains significant natural components, such as those I suggest.
I have been emphasizing the importance of “natural components” during the last few years, but it seems that it is too vague to be getting the attention of many climatologists, GCM scientists, and IPCC scientists. I thought that a more concrete term is needed for this purpose. This is why I used the term “Little Ice Age”. I did not talk about causes of the Little Ice Age, because it is out of my own field. As far as the solar effects are concerned, I find many conflicting results in the literature.”

What the literature conflicts with is Akasofu’s preconceived notion that it can’t be CO2.

“I was director of the UAF Geophysical Institute for 13 years and then director of the International Arctic Research Center for 7 years. Although I am not a climatologist, it has been interesting to observe climatology from the point of view of an arctic scientist. In order for the field of climatology and IPCC to be healthy, I want to provide a few criticisms, which I hope are constructive.”
Spoken like every administrative bean counter and most Dept. Chairs I’ve known. A research group I built instrumentation with once put in a ten-turn pot with the largest knob we could find, wired so that it would change the DC offset on the output to the chart recorder a small but observable amount, specifically to give a certain unnamed “scientist” and inveterate knob twiddler an easy target that wouldn’t mess up the data. It worked, too – there are probably still stripcharts with notations where “Dr. X improved the offset” in some musty archive.

“Since I am not a climatologist, all the data presented in my Notes on Climate Change can be found in papers and books published in the past; that is why I do not want to publish Notes on Climate Change as a paper in a professional journal. It is very important for climatology to include some aspects of archaeology and anthropology in studying earth’s climate change, not just computer science.”

He’s not a climatologist, but that doesn’t matter, because climatologists are all just computer scientists – more denialist ad hominem claptrap. At least he didn’t include astrology and thermology.

“The IPCC climatology is a sort of ‘instant’ climatology. Old data, however inaccurate they may be, could be more valuable in predicting future changes than the most accurate (instant) data from satellites.”
The only thing less accurate data is useful for is obscuring the truth.

“Finally, when I sent an early version of my Notes on Climate Change to several distinguished climatologists for their comments, one of them responded that his graduate student is now estimating the “rebounding rate” from the Little Ice Age, thus I suggested that his student should publish it at the earliest opportunity.”
Which raises the question – where is the trampoline that’s storing the energy necessary to drive this rebound?

An historical footnote to the back-and-forth about Arrhenius, Koch, etc., above: Arrhenius and his buddy and sometime collaborator, Nils Ekholm, both mention that human combustion of fossil fuels could eventually affect climate – sometime far in the future.

The first person to attempt to “attribute” modern CO2-mediated climate change to human activity was Guy Callendar, who published his first climate paper in 1938. Callendar is much less known and less appreciated than he should be; he basically brought CO2 climate theory into the twentieth century, collating thirty years’ worth of infrared spectroscopy, compiling the first temperature time series intended for CO2 climate research (and yes, he controlled for UHI, though it wasn’t called that in 1938), and doing pioneering work on the carbon cycle in order to estimate oceanic carbon sinks.

Patrick’s comment on Akasofu is to the point: basically, “rebound” is reified as a real physical effect in theories such as Akasofu’s.

But it isn’t: “rebounds” from Ice Ages (or warm periods, for that matter) must be driven by some real physical process, or we’re not talking science anymore. And isn’t it “warmist” theory that’s a “religion?”

725: Patrick 027 says: “in the case of CO2, the variation of opacity over wavelength gives rise to the approximately logarithmic relationship of radiative forcing to amount once the central portion of the absorption band is saturated at the tropopause level.”

This is a point I would like to understand. I’d love to hear your explanation.

Somewhere at some time I think I wrote that phonons can be thermal or acoustic. No, actually they can be optical or acoustic according to http://www.absoluteastronomy.com/topics/Phonon, and so far as I know, either type can be thermal …

Thinking about solar cells…

When an electron accelerates in an electric or magnetic field (large scale, not atomic scale), it has to change its wavevector. Phonons are involved in those types of state transitions. But it is possible for an electron to move smoothly over short time periods. Do the phonons involved all cancel out somehow, or am I thinking of this all wrong?

This thread collects good general advice for analytical work. There will probably be some overlap with our thread Grand truths about human behavior, but the idea here is more toward prescriptive statements about good practices. Such advice should reach beyond the proverbial, and should be referenced, if possible, to those who have in fact performed at a high analytical level.

Here are a few pages from a draft of a Beautiful Evidence chapter (which is 16 pages long in the published book) on evidence corruption, along with some of the comments on this draft material.

The emphasis is on consuming presentations, on what alert members of an audience or readers of a report should look for in assessing the credibility of the presenter. ….

—–

Also — why wasn’t this news?? A need for clear presentation of difficult information so people can understand — and a good choice.

Edward Tufte Presidential Appointment
THE WHITE HOUSE Office of the Press Secretary
FOR IMMEDIATE RELEASE, March 5, 2010
President Obama announced his intent to appoint several individuals to serve on the Recovery Independent Advisory Panel.

“Edward Tufte is Professor Emeritus of Political Science, Statistics, and Computer Science at Yale University. He wrote, designed, and self-published The Visual Display of Quantitative Information, Envisioning Information, Visual Explanations, and Beautiful Evidence, which have received 40 awards for content and design. He is a Fellow of the American Academy of Arts and Sciences, the Guggenheim Foundation, the Center for Advanced Study in the Behavioral Sciences, the Society for Technical Communication, and the American Statistical Association. He received his PhD in Political Science from Yale University and BS and MS in statistics from Stanford University.”

Mission statement: “To promote accountability by coordinating and conducting oversight of Recovery funds to prevent fraud, waste, and abuse and to foster transparency on Recovery spending by providing the public with accurate, user-friendly information.”

“I’m doing this because I like accountability and transparency, and I believe in public service. And it is the complete opposite of everything else I do. Maybe I’ll learn something. The practical consequence is that I will probably go to Washington several days each month, in addition to whatever homework and phone meetings are necessary.

— Edward Tufte, March 7, 2010”

——-

I’ve been noticing for quite a while that the septic edges of the bogusphere dominate Google Image Search for most any science, health, or economics question I think to check, even when Scholar is pretty good and ordinary Google is the usual melange.

This is a need I’m glad to see getting attention from an expert.
More, I hope, will follow.

732, Hank Roberts: This is a need I’m glad to see getting attention from an expert.
More, I hope, will follow.

This summer Peter Guttorp of the U. of Washington will be one of the speakers in an invited session on statistical analysis of global warming at the Joint Statistical Meetings in Vancouver. He is one of the statisticians who has helped to draft the letters to Congress of the leadership of the American Statistical Association. The Joint Statistical Meetings is the largest convocation of statisticians annually in the world. Their public policy related sessions are usually sharp and lively.

It might be interesting to know Prof. Tufte’s evaluation of “Mike’s trick” to “hide the decline”.

The greenhouse effect was NOT disproved by anybody in 1909! ALL Wood showed was that the name is a misnomer, which has always been known. Yes, greenhouses work by suppressing convection, while “the greenhouse effect” works by atmospheric back-radiation. We know! The atmospheric greenhouse effect is real and if you reject it, you’re not just rejecting Arrhenius (1896), you’re rejecting Callendar (1938), Plass (1956), and everyone since then, because it is very much all the same theory. Weart was right, and Wood’s experiment didn’t show anything interesting.

And 1955 wasn’t “ten years before our first model.” The first GCM (Smagorinsky et al.) WAS in 1955. The first RCM (Manabe and Strickler) was in 1964. Do your homework.

#Hank – Like almost everything else, accountability and transparency are good things – unless taken to excess. Reporting requirements are killing many ARRA programs before they even get started. The $5 billion Weatherization program is a case in point.

“Evidence selection in global warming studies (and the non-warming critiques) is an interesting topic but requires a big piece of research first. Until that work is done, most postings will probably reflect prior views rather than fresh analysis.” — E. Tufte

As atmospheric opacity is increased, the source of emission of the fluxes passing or reaching any point is concentrated closer to that point (or tends to become ‘centered’ on that point – important when scattering is involved, though scattering is a minor issue for LW radiation under Earthly conditions). For emission at LTE, if the emission source lies in a region with an overall spatial trend in temperature, increasing the opacity concentrates that source into regions of more similar temperature, reducing any net flux.

For LW emission to space, increasing greenhouse gas concentration (for the same sign of change over most or all of the vertical column) generally reduces the outgoing LW flux, up to a point, owing to the general temperature decline with height below the tropopause; but as the stratospheric opacity becomes significant, this trend halts (this could be considered saturation, though there is another measure of the greenhouse effect that can be saturated) and can even reverse where and when the emission to space comes from the warm upper stratosphere. And so on if emission became concentrated into even thinner layers like the mesosphere and thermosphere, though this doesn’t happen at any significant interval of wavelengths, so far as I know. (For a purely scattering greenhouse, the temperature of the atmosphere would not have such an effect, and the outgoing LW flux would continue to approach zero.) (For a surface that is not a perfect blackbody, a purely scattering greenhouse works just the same, but adding atmospheric opacity in the form of absorption, starting from none, will not initially reduce the outgoing radiation as much – and, depending on the temperature distribution and the amount of surface reflection, could even increase it at first – because some atmospheric emission will be reflected from the surface and add to the total upward LW flux from the surface. With increasing opacity, however, atmospheric emission eventually dominates, and further increases in opacity reduce the outgoing LW flux as previously stated. The Earth’s surface emissivity in the LW part of the spectrum is not 1, but it is close, though I’m not sure of exact numbers.)

At the tropopause level, the upward LW flux approaches that of a blackbody at the tropopause temperature (for an Earthly greenhouse where absorption and emission are important). At the same time, downward emission from the atmosphere above increases from zero. Because the temperature does not increase too rapidly in the lower stratosphere (in some latitude belts the lower stratosphere is nearly isothermal), the downward LW flux at the tropopause does not, so far as I know, increase and then decrease following the temperature variations above (at least for gases that are well mixed above the tropopause); rather, by the point at which the opacity nearly ‘blocks’ the view of the darkness of space, the colder lower stratosphere is already significantly blocking radiation emitted from the upper stratosphere.

Thus, the net flux (upward minus downward) at the tropopause level saturates as it approaches zero: the upward flux comes to originate from near the tropopause level, and so does the downward flux, so the two become nearly equal. This should lag behind the ‘saturation’ of the outgoing LW radiation, since the stratosphere and troposphere are each optically thinner than the whole atmosphere.
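As a rough numeric sketch of this saturation (a toy model of my own, with an idealized temperature profile and a crude rule that each flux is blackbody emission from one optical depth away; not a real radiative transfer calculation):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Z_TROP = 11.0     # assumed tropopause height, km

def temperature(z_km):
    """Idealized profile: 288 K at the surface, -6.5 K/km to the
    tropopause, then +2 K/km through a warming stratosphere."""
    if z_km <= Z_TROP:
        return 288.0 - 6.5 * z_km
    return 216.5 + 2.0 * (z_km - Z_TROP)

def net_flux_at_tropopause(kappa):
    """Net (up minus down) LW flux at the tropopause, W m^-2, treating
    each flux as blackbody emission from one optical depth away.
    kappa is the optical depth per km of a uniform grey absorber."""
    z_below = max(Z_TROP - 1.0 / kappa, 0.0)   # at worst, the surface
    up = SIGMA * temperature(z_below) ** 4
    if 1.0 / kappa > 40.0:                     # column above too thin:
        down = 0.0                             # it 'sees' cold space
    else:
        down = SIGMA * temperature(Z_TROP + 1.0 / kappa) ** 4
    return up - down

for kappa in (0.05, 0.2, 1.0, 5.0, 25.0):
    print(f"kappa = {kappa:5.2f}/km: net flux at tropopause = "
          f"{net_flux_at_tropopause(kappa):7.2f} W/m^2")
```

As the opacity per km grows, both source levels collapse onto the tropopause and the net flux there tends to zero, which is the saturation being described.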

2.

A graph of the CO2 absorption cross section per unit amount (or of the optical thickness that the absorption contributes to a layer of air), plotted in logarithmic coordinates and averaged over the smaller-scale texture of the spectrum, is to a first approximation a triangle: log(optical thickness) decreases nearly linearly with distance from the peak centered near 15 microns.

This means that if the amount of CO2 is increased by a factor of 10, then, outside a central portion of the absorption band (but within the portion that fits this description; if you go far enough out, things change, but the optical thickness of CO2 there is quite low, so the behavior closer in to the band is what matters for the range of CO2 amounts being considered), the optical thickness at any wavelength increases to the value that another wavelength somewhat closer to the band center previously had. In effect, the band has widened: the interval of the spectrum that exceeds any threshold of significant opacity has gotten wider, by about the same amount. If the central portion of the band has become saturated, then the band-widening effect dominates the spectrum-integrated effect on the LW fluxes. Each multiplication of the CO2 amount by a given factor then produces nearly the same widening and thus approximately the same radiative forcing. The wavelengths at which the effect is greatest are those of intermediate opacity, and they shift outward with increasing amount.
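The widening argument can be sketched numerically. The triangular shape in log coordinates is taken from the description above; the specific numbers (the slope and the peak optical depth) are invented for illustration:

```python
import math

BAND_CENTRE = 15.0   # microns
SLOPE = 10.0         # microns of width per decade of optical depth (invented)
TAU0 = 1.0e3         # optical depth at band centre, reference amount (invented)

def optical_depth(wavelength_um, amount=1.0):
    """Triangular (in log coordinates) band shape, scaled by the amount."""
    log_tau = math.log10(amount * TAU0) - abs(wavelength_um - BAND_CENTRE) / SLOPE
    return 10.0 ** log_tau

def opaque_width(amount, threshold=1.0):
    """Width (microns) of the interval where optical depth exceeds threshold."""
    decades_above = math.log10(amount * TAU0 / threshold)
    return 2.0 * SLOPE * decades_above if decades_above > 0 else 0.0

for amount in (1, 2, 4, 8):
    print(f"{amount}x CO2: interval with tau > 1 is "
          f"{opaque_width(amount):6.2f} microns wide")
# Each doubling widens the opaque interval by the same 2*SLOPE*log10(2)
# microns, which is the sense in which the forcing is logarithmic.
```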

Complexities:

a. The absorption band has some fine texture (lines and relative minima between them). However, one can take the line peaks, the valleys, and various points in between and find, to a crude first approximation, approximately the same trend in log(optical thickness) for each. The interval of the band that exceeds any threshold of opacity will have fuzzy edges, but the band-widening effect is the same.

b. There is some larger-scale bumpiness and variation in the spectrum, including fluctuations in the contrast between line centers and the relative minima, and a few especially strong lines. So far as I know these do form a repeating pattern over the spectrum, but one spread over a range with enough overall difference in optical thickness that the bumps don’t cancel each other out in their effect on each doubling; so each doubling or halving of CO2 won’t be identical to the last or next doubling.

c. The effect at any wavelength is proportional to the blackbody radiation at that wavelength for the temperatures involved (more precisely, it tends to be proportional to the range of blackbody values over the range of temperatures). This is not constant over the spectrum, so each doubling will have its greatest effect over a shifting wavelength interval, with a shifting effect on the radiative forcing. But this doesn’t cause a large difference over a few doublings.

d. The shifting wavelength interval of greatest effect may also encounter different overlaps with other gases. This, along with the variation of blackbody radiation over the spectrum, may make each doubling a bit different from the last or next. (PS: think of two radiation flux spectra, one for no CO2 and one for complete saturation, which is zero flux at the tropopause level; the effect of each doubling should be roughly proportional to the difference between these spectra within the intervals of intermediate CO2 opacity, multiplied by the band widening.)
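A crude numeric sketch of points (c) and (d): the band edge shifts outward with each doubling, and the Planck weighting at the new edge differs slightly, so successive doublings are close to, but not exactly, equal in effect. The triangular-band numbers here are invented for illustration, not fitted to the real CO2 band.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(wavelength_um, temp_k):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    lam = wavelength_um * 1e-6
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp_k))

CENTRE, SLOPE, TAU0 = 15.0, 1.0, 1.0e3  # invented triangular band parameters

def band_edge(amount):
    """Wavelength (microns) where optical depth crosses 1, long-wave side."""
    return CENTRE + SLOPE * math.log10(TAU0 * amount)

for amount in (2, 4, 8):
    edge = band_edge(amount)
    # Weight by the surface-minus-tropopause blackbody difference there:
    weight = planck(edge, 288.0) - planck(edge, 217.0)
    print(f"doubling to {amount}x: edge at {edge:.2f} microns, "
          f"Planck weight {weight:.3e} W m^-2 sr^-1 m^-1")
```

The edge moves outward by the same log10(2) of spectral distance per doubling, but the Planck weight it picks up there drifts, so the forcing per doubling is nearly, not exactly, constant.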

SM 741: I am collecting references for each prediction. The original list came from Gavin from memory; I added a few myself. 14 of 17 have checked out–I have complete references for A) the prediction and B) the observation for those 14. (A startling number of those predictions date all the way back to Svante Arrhenius in 1896.) When I have the remaining three I will add this to my web site.

Thanks. I’ve bookmarked that. It’s grant writing season. Will read carefully later. I once read something by a guy who claimed that Beer’s law explained the logarithmic dependence (of sensitivity on [CO_2]), but I didn’t believe him. Since Beer’s law applies in the case of monochromatic radiation, it seems that you are not invoking anything like it?
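A quick check of the point about Beer’s law (my own illustration, with an arbitrary toy range of absorption coefficients): at a single wavelength, Beer–Lambert transmission falls off exponentially with absorber amount, so the absorbed fraction saturates rather than growing logarithmically. Near-equal increments per doubling only appear once you integrate over a band whose absorption coefficient varies strongly with wavelength.

```python
import math

def monochromatic_absorption(k, amount):
    """Beer-Lambert absorbed fraction at one wavelength."""
    return 1.0 - math.exp(-k * amount)

def band_absorption(amount, n=10000):
    """Mean absorbed fraction over a band where log10(k) varies
    linearly from +3 down to -5 (arbitrary toy range)."""
    total = 0.0
    for i in range(n):
        log_k = 3.0 - 8.0 * i / (n - 1)
        total += monochromatic_absorption(10.0 ** log_k, amount)
    return total / n

for amount in (1, 2, 4, 8, 16):
    mono = monochromatic_absorption(1.0, amount)
    band = band_absorption(amount)
    print(f"{amount:2d}x: single wavelength {mono:.4f}, band mean {band:.4f}")
```

The single-wavelength column races toward 1 and stops changing, while the band mean gains nearly the same increment per doubling, which is the band-integrated behavior discussed above.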

“Evidence selection in global warming studies (and the non-warming critiques) is an interesting topic but requires a big piece of research first. Until that work is done, most postings will probably reflect prior views rather than fresh analysis.” — E. Tufte

That’s from the thread here:

“Corrupt Techniques in Evidence Presentations

Here are a few pages from a draft of a Beautiful Evidence chapter (which is 16 pages long in the published book) on evidence corruption, along with some of the comments on this draft material.

The emphasis is on consuming presentations, on what alert members of an audience or readers of a report should look for in assessing the credibility of the presenter…..”

“Thank you for your recent contribution. It has been placed in a non-public queue for review by the editorial board. Usually the decision to publish is made within 4 days. About 30% of submitted contributions are posted; after publication, about half then survive the occasional reviews of published items. The editors are unable to reconsider their decisions or to answer any queries about editorial decisions (some of which may well be mistaken).”

“Europe could meet all its electricity needs from renewable sources by mid-century, according to a report released Monday by services giant PricewaterhouseCoopers.

A “super-smart” grid powered by solar farms in North Africa, wind farms in northern Europe and the North Sea, hydro-electric from Scandinavia and the Alps and a complement of biomass and marine energy could render carbon-based fuels obsolete for electricity by 2050, said the report…..”
http://www.physorg.com/news189101270.html