The Otto et al. paper has received a great deal of attention in recent days. While the paper’s estimate of transient climate response was low, the equilibrium/effective climate sensitivity figure was actually slightly higher than that in some other recent studies based on instrumental observations. Here, Nic Lewis notes that this is largely due to the paper’s use of the Domingues et al. upper ocean (0–700 m) dataset, which assesses recent ocean warming to be faster than other studies in the field. He examines the effects of updating the Otto et al. results from 2009 to 2012 using different upper ocean (0–700 m) datasets, with surprising results.

Last December I published an article here entitled ‘Why doesn’t the AR5 SOD’s climate sensitivity range reflect its new aerosol estimates?’ (Lewis, 2012). In it I used a heat-balance (energy-budget) approach based on changes in mean global temperature, forcing and Earth system heat uptake (ΔT, ΔF and ΔQ) between 1871–80 and 2002–11. I used the RCP4.5 radiative forcings dataset (Meinshausen et al., 2011), which is available in .xls format here, conformed it with solar forcing and volcanic observations post-2006, and adjusted its aerosol forcing to reflect purely satellite-observation-based estimates of recent aerosol forcing.

I estimated equilibrium climate sensitivity (ECS) at 1.6°C, with a 5–95% uncertainty range of 1.0–2.8°C. I did not state any estimate for transient climate response (TCR), which is based on the change in temperature over a 70-year period of linearly increasing forcing and takes no account of heat uptake. However, a TCR estimate was implicit in the data I gave, if one assumes that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable, since net forcing has grown substantially faster from the mid-twentieth century on than previously. On that basis, my best estimate for TCR was 1.3°C. Repeating the calculations in Appendix 3 of my original article without the heat uptake term gives a 5–95% range for TCR of 0.9–2.0°C.

The ECS and TCR estimates are based on the formulae:

(1) ECS = F2× ΔT / (ΔF − ΔQ) and (2) TCR = F2× ΔT / ΔF

where F2× is the radiative forcing corresponding to a doubling of atmospheric CO2 concentrations.
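As a minimal numerical sketch of equations (1) and (2), the following Python fragment reproduces the arithmetic. The input values are hypothetical round numbers chosen only to land near the best estimates quoted in this article, not the actual study data:

```python
# Minimal sketch of equations (1) and (2). Inputs are hypothetical round
# numbers chosen to land near the best estimates in the text, not the
# study's actual data.

F2x = 3.71   # W/m2, forcing from a doubling of CO2 (RCP4.5 basis)
dT = 0.73    # K,    change in mean global surface temperature
dF = 2.08    # W/m2, change in radiative forcing
dQ = 0.39    # W/m2, change in Earth system heat uptake

TCR = F2x * dT / dF           # equation (2)
ECS = F2x * dT / (dF - dQ)    # equation (1)

print(f"TCR = {TCR:.2f} C, ECS = {ECS:.2f} C")
```

With these illustrative inputs the script gives TCR ≈ 1.3°C and ECS ≈ 1.6°C, matching the best estimates above; note that ECS exceeds TCR precisely because the heat-uptake term ΔQ shrinks the denominator.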

A short while ago I drew attention, here, to an energy-budget climate study, Otto et al. (2013), that has just been published in Nature Geoscience, here. Its author list includes fourteen lead/coordinating lead authors of relevant AR5 WG1 chapters, and myself. That study uses the same equations (1) and (2) as above to estimate ECS and TCR. It uses a CMIP5-RCP4.5 multimodel mean of forcings as estimated by general circulation models (GCMs) (Forster et al., 2013), likewise adjusting the aerosol forcing to reflect recent satellite-observation-based estimates – see Supplementary Information (SI) Section S1. Although the CMIP5 forcing estimates embody a lower figure for F2× (3.44 W/m2) than do those per the RCP4.5 database (F2×: 3.71 W/m2), TCR estimates from the two different sets of forcing estimates are almost identical, whilst ECS estimates are marginally higher using the CMIP5 forcing estimates[i].

Although the Otto et al. (2013) Nature Geoscience study illustrates estimates based on changes in global mean temperature, forcing and heat uptake between 1860–79 and various recent periods, it states that the estimates based on changes to the decade 2000–09 are arguably the most reliable, since that decade has the strongest forcing and is little affected by the eruption of Mount Pinatubo. Its TCR best estimate and 5–95% range based on changes to 2000–09 are identical to what is implicit in my December study: 1.3°C (uncertainty range 0.9–2.0°C).

While the Otto et al. (2013) TCR best estimate is identical to that implicit in my December study, its ECS best estimate and 5–95% range based on changes between 1860–79 and 2000–09 are 2.0°C (1.2–3.9°C), somewhat higher than the 1.6°C (1.0–2.9°C) per my study, which was based on changes between 1871–80 and 2002–11. About 0.1°C of the difference is probably accounted for by roundings and by the difference in F2× factors due to the different forcing bases. But, given the identical TCR estimates, differences in the heat-uptake estimates used must account for most of the remaining 0.3°C difference between the two ECS estimates.
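A back-of-envelope check of this attribution: dividing equation (2) by equation (1) gives TCR/ECS = (ΔF − ΔQ)/ΔF, so each pair of headline best estimates implies a heat-uptake fraction ΔQ/ΔF = 1 − TCR/ECS. Taking the best estimates at face value:

```python
# Dividing equation (2) by equation (1) gives TCR/ECS = (dF - dQ)/dF,
# so each (TCR, ECS) pair implies a heat-uptake fraction dQ/dF = 1 - TCR/ECS.
def uptake_fraction(tcr, ecs):
    return 1.0 - tcr / ecs

print(round(uptake_fraction(1.3, 2.0), 2))   # Otto et al. (2013) best estimates
print(round(uptake_fraction(1.3, 1.6), 2))   # December study best estimates
```

The Otto et al. figures imply heat uptake absorbing about 35% of the forcing increase, versus about 19% for my study, which is consistent with differing heat-uptake estimates, rather than temperature or forcing, driving the ECS gap.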

Both my study and Otto et al. (2013) used the pentadal estimates of 0–2000-m deep-layer ocean heat content (OHC) updated from Levitus et al. (2012), and made allowances in line with the recent studies for heat uptake in the deeper ocean and elsewhere. The two studies’ heat uptake estimates differed mainly due to the treatment of the 0–700-m layer of the ocean. I used the estimate included in the Levitus 0–2000-m pentadal data, whereas Otto et al. (2013) subtracted the Levitus 0–700-m pentadal estimates from that data and then added 3-year running mean estimates of 0–700-m OHC updated from Domingues et al (2008).

Since 2000–09, the most recent decade used in Otto et al. (2013), ended more than three years ago, I will instead investigate the effect of differing heat uptake estimates using data for the decade 2003–12 rather than for 2000–09. Doing so has two advantages. First, forcing was stronger during the 2003–12 decade, so a better constrained estimate should be obtained. Secondly, by basing the 0–700-m OHC change on the difference between the 3-year means for 2003–05 and for 2010–12, the influence of the period of switchover to Argo – with its higher error uncertainties – is reduced.

In this study, I will present results using four alternative estimates of total Earth system heat uptake over the most recent decade. Three of the estimates adopt exactly the same approach as in Otto et al. (2013), updating estimates appropriately, and differ only in the source of data used for the 3-year running mean 0–700-m OHC. In one case, I calculate it from the updated Levitus annual data, available from NOAA/NODC here. In the second case I calculate it from updated Lyman et al. (2010) data, available here. In the third case I use the updated Domingues et al. (2008) data archived at the CSIRO Sea Level Rise page in relation to Church et al. (2011), here. Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2 – which over that period is nearly double the rate of increase per the Levitus dataset, and nearly treble that per the Lyman dataset. The final estimate uses total system heat uptake estimates from Loeb et al. (2012) and Stephens et al. (2012). Those studies melded satellite-based estimates of top-of-atmosphere radiative imbalance with ocean heat content estimates, primarily updated from the Lyman et al. (2010) study. The Loeb 2012 and Stephens 2012 studies estimated average total Earth system heat uptake/radiative imbalance at respectively 0.5 W/m2 over 2000–10 and 0.6 W/m2 over 2005–10. I take the mean of these two figures as applying throughout the 2003–12 period.
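The 0–700-m substitution described above can be sketched as follows. The OHC changes here are made-up placeholder values (in units of 10^22 J per decade), not the real Levitus or Domingues series; only the mechanics are illustrated:

```python
# Sketch of swapping the 0-700 m OHC component, as described above.
# The OHC changes are hypothetical placeholders in units of 1e22 J
# per decade, not the real Levitus or Domingues series.

EARTH_AREA = 5.10e14                      # m2
SECONDS_PER_DECADE = 10 * 365.25 * 86400  # s

def uptake_wm2(d_ohc):
    """Convert a decadal OHC change (in 1e22 J) to a mean heat uptake in W/m2."""
    return d_ohc * 1e22 / (SECONDS_PER_DECADE * EARTH_AREA)

levitus_0_2000 = 8.0      # hypothetical decadal change, 1e22 J
levitus_0_700 = 4.0       # hypothetical
domingues_0_700 = 6.0     # hypothetical (differenced 3-year running means)

# Subtract the Levitus 0-700 m component and add the Domingues one:
combined = levitus_0_2000 - levitus_0_700 + domingues_0_700
print(round(uptake_wm2(combined), 2))
```

Because the Domingues series warms faster than the Levitus 0–700-m series, the substitution raises the total heat uptake and hence, via equation (1), the ECS estimate.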

I use the same adjusted CMIP5-RCP4.5 forcings dataset as used in the Otto et al. (2013) study, updating them from 2000–09 to 2003–12, to achieve consistency with that study (data kindly supplied by Piers Forster). Likewise, the uncertainty estimates I use are derived on the same basis as those in Otto et al. (2013).

I am also retaining the 1860–79 base reference period used in Otto et al. (2013). That study followed my December study in deducting 50% of the 0.16 W/m2 estimate of ocean heat uptake (OHU) in the second half of the nineteenth century per Gregory et al. (2002), the best-known of the earlier energy budget studies. The 0.16 W/m2 estimate – half natural, half anthropogenic – seemed reasonable to me, given the low volcanic activity between 1820 and 1880. However, I deducted only 50% of it to compensate for my Levitus 2012-derived estimate of 0–2000-m ocean heat uptake being somewhat lower than that per some other estimates. Although the main reason for making the 50% reduction in the Gregory (2002) OHU estimate for 1861–1900 disappears when considering 0–700-m ocean heat uptake datasets with significantly higher trends than per Levitus 2012, in the present calculations I nevertheless apply the 50% reduction in all cases.

Table 1: ECS and TCR estimates based on last decade and 0.08 W/m2 ocean heat uptake in 1860–79.

Whichever periods and forcings dataset are used, the best estimate of TCR remains 1.3°C. The 5–95% uncertainty range narrows marginally, to 0.9–1.95°C, when using changes to 2003–12 – which give slightly higher forcing increases – rather than to 2000–09 or 2002–11. The ‘likely’ range (17–83%) is 1.05–1.65°C. (These figures are all rounded to the nearest 0.05°C.) The TCR estimate is unaffected by the choice of OHC dataset.

The ECS estimates using data for 2003–12 reveal the significant effect of using different heat uptake estimates. Lower system heat uptake estimates and the higher forcing estimates resulting from the 3-year roll-forward of the period used both contribute to these ECS estimates being lower than the Otto et al. (2013) ECS estimate, the first factor being the more important.

Although stating that estimates based on 2000–09 are arguably most reliable, Otto et al. (2013) also gives estimates based on changes to 1970–79, 1980–89, 1990–99 and 1970–2009. Forcings during the first two of those periods are too low to provide reasonably well-constrained estimates of ECS or TCR, and estimates based on 1990–99 may be unreliable since this period was affected both by the eruption of Mount Pinatubo and by the exceptionally large 1997–98 El Niño. However, the 1970–2009 period, although having a considerably lower mean forcing than 2000–09 and being more impacted by volcanic activity, should – being much longer – be less affected by internal variability than any single decade. I have therefore repeated the exercise carried out in relation to the final decade, in order to obtain estimates based on the long period 1973–2012.

Table 2, below, shows comparisons of ECS and TCR estimates using data for the periods 1970–2009 (Otto et al., 2013) and 1973–2012 (this study), using the relevant forcings and 0–700-m OHC datasets. The estimates of system heat uptake from two of the sources used for 2003–12 do not cover the longer period. I have replaced them by an estimate based on data, here, updated from Ishii and Kimoto (2009). Using 2003–12 data, the Ishii and Kimoto dataset gives an almost identical ECS best estimate and uncertainty range to the Lyman 2010 dataset, so no separate estimate for it is shown for that period. Accordingly, there are only three ECS estimates given for 1973–2012. Again, the TCR estimates are unaffected by the choice of system heat uptake estimate.

Table 2: ECS and TCR estimates based on last four decades and 0.08 W/m2 ocean heat uptake in 1860–79.

The first thing to note is that the TCR best estimate is almost unchanged from that per Otto et al. (2013): just marginally lower, at 1.35°C. That is very close to the TCR best estimate based on data for 2003–12. The 5–95% uncertainty range for TCR is slightly narrower when using data for 1973–2012 rather than 1970–2009, due to the higher mean forcing.

Table 2 shows that ECS estimates over this longer period vary considerably less between the different OHC datasets (two of which do not cover this period) than do estimates using data for 2003–12. As in Table 1, all the 1973–2012 based ECS estimates come in below the Otto et al. (2013) one, both as to best estimate and 95% bound. Giving all three estimates equal weight, a best estimate for ECS of 1.75°C looks reasonable, which compares to 1.9°C per Otto et al. (2013). On a judgemental basis, a 5–95% uncertainty range of 0.9–4.0°C looks sufficiently wide, and represents a reduction of 1.0°C in the 95% bound from that per Otto et al. (2013).

If one applied a similar approach to the four, arguably more reliable, ECS estimates from the 2003–12 data, the overall best estimate would come out at 1.65°C, considerably below the 2.0°C per Otto et al. (2013). The 5–95% uncertainty range calculated from the unweighted average of the PDFs for the four estimates is 1.0–3.1°C, and the 17–83%, ‘likely’, range is 1.3–2.3°C. The corresponding ranges for the Otto et al. (2013) study are 1.2–3.9°C and 1.5–2.8°C. The important 95% bound on ECS is therefore reduced by getting on for 1°C.
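One way to realise an "unweighted average of the PDFs" numerically is to pool equal-sized samples from each estimate's distribution and read off percentiles of the pooled sample. The sketch below uses hypothetical lognormal stand-ins for the four ECS likelihoods, not the actual distributions from the calculations above:

```python
# Pooling equal-sized samples from several PDFs and taking percentiles
# is equivalent to quantiles of the unweighted average of the PDFs.
# The lognormal parameters are hypothetical stand-ins for the four ECS
# likelihoods, not the actual ones.
import math
import random

random.seed(0)

def lognormal_samples(median, sigma, n):
    mu = math.log(median)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

# (median, log-sd) for four hypothetical skewed ECS estimates
estimates = [(1.6, 0.30), (1.6, 0.28), (1.7, 0.32), (1.75, 0.30)]

pooled = []
for median, sigma in estimates:
    pooled.extend(lognormal_samples(median, sigma, 50_000))
pooled.sort()

def quantile(p):
    return pooled[int(p * (len(pooled) - 1))]

print(f"5-95% range: {quantile(0.05):.1f}-{quantile(0.95):.1f} C")
print(f"17-83% range: {quantile(0.17):.1f}-{quantile(0.83):.1f} C")
```

The skewed (lognormal-like) shape matters: it is why the 95% bound responds much more strongly than the 5% bound to changes in the individual estimates.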

[i] Total forcing after adjusting the aerosol forcing to match observational estimates is not far short of total long-lived greenhouse gas (GHG) forcing. Therefore, differing estimates of GHG forcing – assuming that they differ broadly proportionately between the main GHGs – change both the numerator and denominator in equation (2) by roughly the same proportion. Accordingly, differing GHG forcing estimates do not matter very much when estimating TCR, provided that the corresponding F2× is used to calculate the ECS and TCR estimates, as was the case for both my December study and Otto et al. (2013). ECS estimates will be more sensitive than TCR estimates to differences in F2× values, since the unvarying deduction for heat uptake means that the (ΔF − ΔQ) factor in equation (1) will be affected proportionately more than the F2× factor. All other things being equal, the lower CMIP5 F2× value will lead to ECS estimates based on CMIP5 multimodel mean forcings being nearly 5% higher than those based on RCP4.5 forcings.
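A quick numerical check of this footnote, under the simplifying assumption that the forcing change scales in proportion to F2× while the heat-uptake change ΔQ does not. The input values are illustrative, loosely mimicking the decade-to-2000–09 magnitudes, not actual study data:

```python
# Check of the footnote: if dF scales with F2x but dQ does not, TCR is
# invariant to the F2x basis while ECS is not. Inputs are illustrative.

dT = 0.73                  # K (hypothetical)
dQ = 0.73                  # W/m2 (hypothetical)
dF_per_F2x = 2.08 / 3.71   # assumed fixed ratio of forcing change to F2x

def tcr(F2x):
    return F2x * dT / (dF_per_F2x * F2x)

def ecs(F2x):
    return F2x * dT / (dF_per_F2x * F2x - dQ)

print(tcr(3.71) - tcr(3.44))       # ~0: TCR insensitive to the F2x basis
print(ecs(3.44) / ecs(3.71) - 1)   # ~0.04-0.05: ECS nearly 5% higher with CMIP5 F2x
```

The F2× factor cancels exactly in the TCR expression, while the fixed ΔQ deduction means the ECS denominator shrinks proportionately more than F2× does.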

I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin. People have been discussing estimates of climate sensitivity for something like 40 years. In that time, so far as I can make out, little, if any, progress has been made. Until we know the magnitudes and time constants of all naturally occurring events that cause a change in global temperatures, and so we might have a hope of actually measuring what the numeric value is, all these studies are just a waste of time and money. All people are actually doing is just taking another guess. My best guess is that the climate sensitivity of CO2 is indistinguishable from zero.

I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.

“To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase”

Quite.

GHGs provide an additional radiative window to space that is not provided by a non GHG atmosphere.

Still doesn’t necessarily result in any net thermal effect though once negative system responses are taken into account.

I think that whether the net effect of GHGs is potential warming or potential cooling the air circulation adjusts to negate it.

So an effect of zero or near zero overall but with a miniscule shift in air circulation.

Your updated result for ECS is the most honest, because it is based on the latest OHC data (Levitus 2012) and the latest temperature data (1973–2012). The result shows that ECS = 1.7 +1.0/−0.4 degrees C. The UN goal of limiting climate change to 2 degrees C has apparently been met – congratulations!

Jim, I have been making this point for a while. The climate feedbacks are not scalar, they are complex: they each have a time dimension, a lag, and they are all different, ranging between milliseconds and decades. Feedbacks cannot be added without accounting for time (the phase component). The lack of accounting for time means that transient sensitivity can vary wildly from moment to moment depending on the speed and direction of all the feedback effects on multiple timescales.

Achieving a net gain of 3 in the climate therefore requires a completely implausible loop gain of about 0.95.

In support of your point, sensitivity can only be evaluated by modelling each and every feedback effect, including the lags and amplitudes of each effect. In many cases the feedback amplitudes or phases are dependent on the system itself (consider tropical storm non-linear behaviour)! Sensitivity cannot be a simple number, it is a chaotically varying complex number in both time and space, it is to all intents and purposes unknowable.

Climate science attempts to model this as a simple scalar average, without even knowing if the combination of all the feedbacks represents a stationary function; that is, they don’t even know if the mean of the sensitivity is a constant.

Jim Cripwell may well be right in stating “I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin.” but such studies done studiously are still important and should be welcomed as the effect may be (as an intermediate stage) to reduce the “consensus” estimate of climate sensitivity from the IPCC median of 3C. A generally accepted ECS of 1.65-1.75C is much to be preferred to 3C and could have enormous consequences for policy decisions. It would mean that a doubling of CO2 would not mean a 2C (or higher) increase in global temperatures and would minimise the concept of the impending “tipping point”. We are moving slowly towards the Lindzen & Co view.

Now maybe I’m just not looking in the right place but there seems to be a problem here. There is no cooling effect to be seen. In fact good indications of a short term warming. There is no indication of the marked, permanent negative offset that a linear response would produce to such a negative forcing.

Now if the response to volcanic forcing is not materialising in the climate record, then the linear model is fundamentally inadequate and hence current GCMs as well.

If I am overlooking something obvious, looking at the wrong dataset or misinterpreting what to expect, hopefully Nic or someone can point out where.

John Peter, you write “but such studies done studiously are still important”

To a limited extent I agree. My point is that with our current knowledge of the physics of our atmosphere, no-one has the slightest idea of what happens to global temperatures as we add more CO2 to the atmosphere from current levels. Just about the only things we know about the climate sensitivity of CO2 are that it is probably positive, and that it has a maximum value. If these studies were framed in terms of estimating the MAXIMUM value of climate sensitivity, I would not object. But I do object to claims that these estimates are in some sort of way associated with what the real number is.

bobl says:
May 24, 2013 at 6:47 am
I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas?
Correct.

So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.

No, because the atmosphere is optically thick at the GHG wavelengths, i.e. lower in the atmosphere it absorbs more than it emits. Emission to space only occurs above a certain height and therefore at a certain temperature, as the concentration increases then that height increases and the temperature decreases and hence emission to space goes down.
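A toy numerical rendering of this emission-height argument, with illustrative numbers (a 255 K effective emission temperature and a 6.5 K/km lapse rate are textbook round figures, not values from the comment):

```python
# Toy sketch: raising the effective emission level by 1 km under a
# 6.5 K/km lapse rate lowers the emitting temperature, so band emission
# to space falls per the T^4 (Stefan-Boltzmann) law.

SIGMA = 5.670e-8      # W/m2/K4, Stefan-Boltzmann constant

def emission(T):
    return SIGMA * T**4

T_old = 255.0                 # K, hypothetical effective emission temperature
T_new = T_old - 6.5 * 1.0     # 1 km higher under a 6.5 K/km lapse rate

print(emission(T_new) < emission(T_old))   # True: a colder level emits less
```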

The climate sensitivity estimate in this article is better than highly overstated ones. Still, though:

1) Properly accounting for GCRs+TSI in solar-related change makes the sun contribute several times more to past warming than solar irradiance change alone, even aside from the ACRIM versus PMOD question on solar irradiance history. Whenever cosmic rays are not explicitly mentioned, one can usually assume someone is implicitly ignoring them entirely and treating them as zero effect, which is highly inaccurate. As Dr. Shaviv has noted:

“Using historic variations in climate and the cosmic ray flux, one can actually quantify empirically the relation between cosmic ray flux variations and global temperature change, and estimate the solar contribution to the 20th century warming. This contribution comes out to be 0.5 +/- 0.2 C out of the observed 0.6 +/- 0.2 C global warming (Shaviv, 2005).”*

That leaves roughly on the order of 0.1 degrees Celsius over the past century for net warming from anthropogenic effects / independent components of the longest types of ocean cycles (with likely a large portion of the apparent 60-year ocean cycle being rather sun & GCR generated as looking at appropriate plots suggests) / etc.

Especially considering logarithmic scaling and diminishing returns, human emissions over this century are not likely to contribute more than tenths of a degree warming if even that, even aside from how a near-future solar Grand Minimum starting another LIA by the mid 21st century looks likely. (A mixture of both cooling and warming effects, influence on water vapor, and other complexities apply).

Some illustrations I made a while back:
NOAA humidity data for decades past got drastically changed already since I started posting the above several months ago, but still it provides a number of illustrations.

2a) Considering how many problems there have been with activist-reported (Hansen, CRU, etc.) surface temperature measurements despite such being relatively more readily independently verified than 0–700m ocean heat content, where the latter is talking about mere hundredths of a degree change anyway (with there being quite a reason that such ocean temperature change over hundreds of meters of depth tends to be reported in joules rather than degrees Celsius or Kelvin), OHC has uncertainties, to say the least.

“I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin. People have been discussing estimates of climate sensitivity for something like 40 years. In that time, so far as I can make out, little, if any, progress has been made. ”

This is factually wrong. The first estimates of sensitivity were made over 100 years ago.
Since then the estimate has followed a downward trajectory: from the first report to the fourth, the central value has crept downward. Nic’s work adds to that body of knowledge.

Let me put the importance of this metric into perspective: every degree of C in uncertainty is worth about 1 trillion dollars a year if you are planning to mitigate.

Jim. I suggest you read some of the history of climate science and read some actual papers and work with some actual data.

Nice work. I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by 0.1C per decade from 1979 to the present, what would that do to the sensitivity calculation? The purpose of course is to show people how they can locate and communicate their doubts WITHIN the framework and language of their opponents.

As you have shown it is much more effective to question the science from the inside rather
than attack the character and motivations of people from the outside. You’ve shown that there IS A DEBATE and you’ve shown people how to join that debate. You’ve shown that the consensus is broader and more uncertain than people think, not by questioning the existence of the consensus but by working with others to demonstrate that some of the core beliefs ( how much will it warm) admit many answers.

Nic Lewis, thanks for this post. WE posted a different way to derive basically the same answer. Good to see so many data sets and methods converging on something just over half of the AR4 number. It will be very interesting, and important, to see where AR5 comes out given the Otto co-authors. Either the C gets removed from CAGW, or the process is plainly shown to be utterly corrupted.

“bobl says:
May 24, 2013 at 7:23 am
Stephen, no, must have an effect however miniscule, some change needs to drive the air current change, you can have a negative feedback, but there must be a net change to drive the effects. ”

The effect would be a change in atmospheric heights and the slope of the lapse rate, which is then compensated for by circulation changes.

Assuming there is a net thermal effect from GHGs in the first place. Some say net warming, others say net cooling.

Doesn’t matter either way. The system negates it by altering the thermal structure and circulation of the atmosphere.

I can’t actually prove that with current data so will just have to wait and see but it seems clear to me from current and past real world observations of climate behaviour.

In order to understand science you need a healthy dose of caution. The limits of our data and understanding mean we must pepper our conclusions with appropriate caveats and/or uncertainty ranges. You seem to completely misunderstand this and instead favour the idea of perfection or nothing. The unfortunate truth is that most of the time science is about being less wrong rather than about being right; you need to moderate your skepticism appropriately.

To get to an official ECS ~ 1 will take ln(1/3) = −0.04t, i.e. t ≈ 27 yrs.
Hmm, if time elapsed since consensus ECS~3 has been just 5 years, then we would have 13 years to wait for consensus ECS~1. This assumes stalwart resistance and represents an outer limit. Lambda is probably not a constant here – I would go for half the 13 years ~6years.

My take: ECS finally turns out to be vanishingly small (i.e. there is a governor on climate responses, à la Willis Eschenbach), then TCR is larger than ECS and within a few years it declines to the minor ECS figure and natural variability is basically all that is left. How’s that for a model!

Mechanisms controlling atmospheric CO2 follow both geological and biological processes. Each pathway operates with different time constants and amplitudes over any time scale. The atmosphere has evolved in composition due to biology. Physicists can’t understand how the atmosphere behaves because they don’t include biology. Except for Argon, the atmosphere is completely biological in origin. Biology also alters surface albedo.
All evidence points to those supporting the “essentially zero” climate sensitivity on a planetary scale. The satellite data support zero temperature increase since 1980. Quality surface thermometers also show zero warming, eg the Antarctic science stations, Amundsen-Scott, Vostok, Halley and Davis.
CO2 follows biology, biology follows temperature.

I have trouble reconciling the reality of surface radiation measurements with climate sensitivity calculations based on TOA calculations. BSRN measurements indicate that since 1992 short-wave radiation has increased by 3 W/m2 per decade, likely due to global brightening (fewer clouds), while long-wave radiation (including GHG back radiation) has increased by 2 W/m2 per decade.
Considering that SW (visible light) is much more easily absorbed by the oceans than thermal long-wave radiation, it would seem that the 0.4 to 0.6 W/m2 of ocean flux could be attributed mostly to the short-wave contribution, or simply to changes in cloud cover. AGW proponents will claim that the global brightening is a positive feedback, of course. How much of the 2 W/m2 per decade increase in long-wave surface radiation is due to the ocean releasing heat versus GHG back radiation?

However, I do agree with Mosh’s last comment: Nic is taking a very wise approach and doing the difficult task of injecting some reason into the thinking in small, digestible pieces. Congratulations on finding the right balance between being honest and being effective ;)

I have absolutely no intention whatsoever of moderating my skepticism. There is no empirical data whatsoever to support the hypothesis of CAGW, and until we get such empirical data, I will continue to believe that CAGW is a hoax. The warmists have been conducting pseudo- science for years, trying to pretend that the estimates they have made on climate sensitivity have a meaning in physics. IMHO, as I have noted, I think these estimates are completely worthless.

I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.

Surely this has to increase losses to space overall.

What am I missing?

Bob, what it is that you are missing is an understanding of the fundamental difference between atomic or molecular line/band spectra emission/absorption radiation, which is entirely a consequence of atomic and molecular structure of SPECIFIC materials; and THERMAL RADIATION which is a continuum spectrum of EM radiation, that is NOT material specific, and depends (spectrally) ONLY on the Temperature of the material. Of course, the level of such emission or absorption depends on the density of the material (atoms/molecules per m^3).

Spectroscopists have known since pre-Cambrian times that the sun emits a broad spectrum of continuum thermal radiation, on top of which, as discovered by Fraunhofer and others, there is a whole flock of narrow atomic or molecular spectral lines at very specific frequencies, that are characteristic of specific elements or charged ions in the sun.

So-called “Black Body Radiation ” is an example of a thermal continuum spectrum.

I deliberately said “so-called”, because nobody ever observed black body radiation, since the laws of Physics prohibit the existence of any such object.

Well some folks think a black hole might be a black body.

By definition, a black body absorbs 100% of electromagnetic radiation of ANY frequency or wavelength down to, but not including zero; and up to, but not including infinity.

Yet no physical object (sans a black hole) is able to absorb 100% of even ONE single frequency, or wavelength; let alone All frequencies or wavelengths. To do that, the body would have to have a surface refractive index of exactly 1.0, the same as the refractive index of empty space. That would require that the velocity of light in the material be exactly (c).

Both of these, and (c) = 2.99792458E+8 m/s, are exact values; the only such fundamental physical constants that are exact.

So a material with a product of permeability and permittivity = 1 / c^2 would have a velocity of EM radiation also equal to (c). But that is not sufficient.

Free space vacuum, also has a characteristic impedance = sqrt( munought / epsilonnought) which is approximately 120 pi Ohms, or 377 Ohms.

And when a wave travelling in a medium of 377 Ohms, such as free space, encounters a medium of different impedance, there is a partially transmitted wave, and a partially reflected wave; so no total absorption.

So any real physical medium, must have a permeability of munought, and a permittivity of epsilon nought, at all frequencies and wavelengths, in order to qualify as a black body. It would be indistinguishable from the vacuum of free space.

The point of all this, is that real bodies only approximate what a black body might do, and only do so over narrow ranges of frequency or wavelength, depending on their Temperature.

And in the case of gases like atmospheric nitrogen and oxygen, the molecular density is extremely low, so the EM absorption doesn’t come anywhere near 100%, even for huge thicknesses of atmosphere. But the absorption per molecule is not zero, as some people assume, so even non-IR-active, non-GHG gases do absorb and emit a continuum spectrum of thermal radiation based on the gas temperature.

Experimental practical near black bodies, operate as anechoic cavities, where radiation can enter a small aperture, and then gets bounced around in the cavity and never escapes. Some derivations of the Planck radiation law are based on such cavity radiation.
In the case of a “black body cavity”, the required conditions are that the walls be perfectly reflecting of ALL EM radiation, and also have zero thermal conductivity so that heat energy cannot leak out through the walls.

Once again, such conditions are a myth, and no ideal black body cavity can exist either.

So we have the weird circumstance that black body radiation has never been observed by anybody, and simply cannot exist, yet all kinds of effort went into theoretical models of a non-existent non-phenomenon, and gave us one of the crown jewels of modern physics: the Planck radiation formula.
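For what it's worth, that crown jewel is easy to evaluate numerically. A minimal sketch of Planck's law in SI units (the constants are the standard SI values; the ~5800 K temperature is just a roughly solar example):

```python
import math

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength, T):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2 * h * c**2 / wavelength**5
    x = h * c / (wavelength * kB * T)
    return a / math.expm1(x)

# Locate the peak of the curve for a ~5800 K source, scanning in nanometres:
T = 5800.0
peak_nm = max(range(100, 3000), key=lambda w: planck(w * 1e-9, T))
print(peak_nm)   # near the Wien's-law value 2.898e-3 / 5800 ~ 500 nm
```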

Steven Mosher says:
May 24, 2013 at 8:47 am
“You’ve shown that the consensus is broader and more uncertain than people think, not by questioning the existence of the consensus but by working with others to demonstrate that some of the core beliefs ( how much will it warm) admit many answers.”
So it is not a consensus after all. Good to see that the 3 C consensus is breaking up. We will all benefit from that (other than the rent seekers).
I also applaud the fact that Steven Mosher has transformed into something less cryptic than usual. Long may it continue as he often has something valuable to add when the notion takes him.

Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. The greatest pressure is on the TCR value since sufficient time has now passed without significant warming to rule out a high value for this number. The ECS on the other hand makes predictions that cannot be fully falsified for hundreds of years so I expect we’ll see people continuing to defend high numbers here for some time. I expect estimates of TCR and ECS will continue to fall if we see cooling over the next decade. These numbers in any case are still based on a simple forcing model with feedback which I don’t think is at all realistic.

I expect the immediate response of the most alarmed will be to start talking up the ECS and downplaying the TCR. However these ECS values are not really alarming. Over the longer term we are staring down the barrel of the next ice age. I find it reassuring to think that our influence on the planet might allow us to dodge this calamity. In fact I am more concerned that ECS might not be big enough to allow this to happen.

The problem is that ECS is bigger than TCR because of long term feedbacks to warming that depend on slow processes like the melting of ice sheets or warming of the deep oceans. But in the context of a planet that should be heading into an ice age the effect of added CO2 may not be to warm but merely to offset the expected natural cooling. If the greenhouse effect is not actually warming the planet but simply staving off the descent into the next ice age then none of these feedback effects will come into play.

Steven Mosher wrote:
“I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by .1C per decade from 1979-current, what would that do to the sensitivity calculation?”

Good point, Steve. That assumption would reduce the increase in global temperature between the 1860–79 mean and the 2003–12 mean from 0.76 C to about 0.68 C. All the climate sensitivity estimates, and their uncertainty ranges, would then reduce by about 11%. So a sensitivity of 1.7 C would change to just over 1.5 C, for example.
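Treating the forcing and heat-uptake terms as unchanged, an energy-budget sensitivity estimate scales linearly with the temperature change, so the arithmetic can be checked in a couple of lines (a sketch using the figures above):

```python
dT_original = 0.76   # deg C, 1860-79 mean to 2003-12 mean
dT_adjusted = 0.68   # deg C, after the assumed land microsite bias is removed

scale = dT_adjusted / dT_original
print(round((1 - scale) * 100, 1))   # ~10.5% reduction, i.e. "about 11%"

# Sensitivity estimates scale by the same factor:
print(round(1.7 * scale, 2))         # a 1.7 C estimate becomes ~1.52 C
```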

What I would like to see is a negative (below 1 deg C) ECS/TCR, i.e., what minimums over the next decades / century would it take for both estimates to get tickets to the Lindzen and Choi ball game (~0.7 deg C)?

In the previous article, Willis questioned why the volcanic forcings were being spread back in time by a running mean filter. It was confirmed by Nic that this was the case but he stated that it was immaterial to the findings of Otto et al 2013. This is probably true.

Now that Nic has kindly linked to a source of the forcings used, I have plotted it up against UAH TLT and TLS and marked in the dates of the two major eruptions.

I chose the SH extra-tropical region since this shows no visible impact from El Chichon and allows us to see the background variation in temperatures that was happening at that time. (Note stratospheric temps tend to vary in the opposite sense, so I have inverted and scaled to give a ‘second opinion’ on the background variations.)

Now we see that the effects of the back-spreading of the forcing data produce a totally false correlation with natural variations of temperature that preceded the eruption. This has nothing to do with forcing or the model and is entirely a result of improper processing. The distorted form of the forcing data just happens to correlate with the natural temperature background around the time of the event.

Incidentally, I remain even more convinced now of my initial assessment that this is a five-year running mean, not a three-year one as suggested by Willis and confirmed by Nic. I would ask Nic to check his source of information, because it seems pretty incontrovertible from this that it is affecting two points either side, not one; hence it is a 5-point filter kernel.
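The kernel width is easy to diagnose from an impulse response: a centred N-point running mean spreads a single spike to (N-1)/2 points on either side. A minimal sketch of that check (illustrative only, not the actual Otto et al. processing):

```python
import numpy as np

impulse = np.zeros(11)
impulse[5] = 1.0   # a single 'eruption-year' spike

for n in (3, 5):
    kernel = np.ones(n) / n
    smoothed = np.convolve(impulse, kernel, mode="same")
    offsets = np.nonzero(smoothed)[0] - 5
    print(f"{n}-point running mean -> nonzero at offsets {list(offsets)}")

# A 3-point mean touches one point either side of the spike; a 5-point
# mean touches two, which is the signature described above.
```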

So why was this done? There is no valid reason, and it has to be an intentional act: you can’t accidentally run a filter on one of your primary inputs.

Whoever had the idea to “smooth” the volcanic forcings: are they also introducing this practice elsewhere than Otto et al., where it may be falsely improving the ability of the hindcasts to reproduce key features of the temperature record?

What I love about science are the necessary assumptions that are made in order to carry out a calculation. You know the kind of thing I mean: ‘let’s assume a value for such and such’, or let’s invent a concept like a ‘black body’, which of course cannot exist but is nonetheless useful in carrying out a calculation. Well, here are a couple of observations from ‘real life’ which in my opinion seem to render ‘sensitivity’ calculations almost completely irrelevant…
Let’s assume (see what I did there?) that the increase of CO2 concentration from 350 to 400ppm does indeed capture sufficient energy to raise the overall temperature of the atmosphere by say 1 degree C. Let’s then assume that excess heat is eventually transported by ocean currents towards the polar regions. In the case of the Arctic Ocean in winter, sea Ice cover is reduced thereby allowing ‘larger volumes of warmer’ water to come into contact with the atmosphere at a time when there is no solar input (indeed conditions are ideal for heat loss to space).
Could it not then be argued that a slight heating of the atmosphere would cause, and be balanced by, a subsequent polar cooling effect?
Indeed, could it be further argued that Arctic Ocean heat loss could be a self-amplifying effect (a bit like the Warmist ‘feedbacks’), subsequently causing ‘runaway cooling’?

Phil. says:
May 24, 2013 at 8:04 am
No, because the atmosphere is optically thick at the GHG wavelengths, i.e. lower in the atmosphere it absorbs more than it emits. Emission to space only occurs above a certain height and therefore at a certain temperature, as the concentration increases then that height increases and the temperature decreases and hence emission to space goes down.

You are oversimplifying the situation.

First, the GHE is real and works off of radiation from the surface. Bobl wasn’t referring to this process.

Second, thermalization and radiation of atmospheric energy (not surface energy) is basic physics. This works in parallel to the GHE and this is what Bobl was asking about. Since the density of the atmosphere is reduced the higher you go, the average distance the radiation travels until re-absorption (or loss to space) is computable, let’s assume X meters upwards. It looks like any flow through a pipe. Now, if you add more CO2 you increase the probability of these events occurring which increases the flow of energy at all levels of the atmosphere towards space. Essentially you create a wider pipe. If climate models ignore this process it’s not surprising they get the wrong answer.

Nic Lewis’s work (a significant contribution) and its implications need to be put into perspective. His work doesn’t seem to take into account the paleo record, nor should it necessarily do so. But the extremely short sample period needs to be recognized.

Additionally, from my reading of his results (as well as Dr. Otto’s apparently); at most, we may have a reprieve of ten or fifteen years before the same effects are upon us.

1) “… if one makes the assumption that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable [based on another assumption that] the net forcing has grown substantially faster from the mid-twentieth century on than previously.”
***
2) “… estimates based on changes to the decade to 2000–09 are arguably the most reliable, since that decade has the strongest forcing… .” [assumes the forcing is of any significance at all]
***
3) “…forcing was stronger during the 2003–12 decade…” [assumes significant forcing causation]
***
4) “… Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2… ”
***

From statements like those quoted above, this well-executed paper appears to be a careful attempt to both: 1) deprogram genuinely brainwashed AGW cult members by gingerly casting doubt upon their core beliefs; and 2) provide a face-saving way for AGW crooks who know better to back down from their lies.

It is not, nevertheless, robust, open debate.

When a debate opponent has no evidence to back up their conjectures, when that opponent offers only assumptions and speculation, then, no matter how complicated their math, it adds up to no more than “I simply believe this.” There is nothing to debate. The above is only playing their imaginary game. It may get them to change their behavior slightly, but not significantly. It’s like going along with a person having a psychotic episode just enough to get them out of the middle of the road and onto the shoulder. “Yes, yes, my good fellow, those tiny green men most likely do want you to go with them, but, I know that they want you to walk on the shoulder, not down the centerline. There’s a good lad. Just keep to the right (or left, in the U.K.) of that solid white line there. Good luck!”

While it is shrewd not to try to tell them “TINY GREEN MEN DO NOT EXIST,” the above really isn’t a debate.

Conclusion: While scientific discussion is very important, the main goal is to save our economies, thus we must win over the voters. And that debate needs to be simply and powerfully stated. In terms such as:

“All people are actually doing is just taking another guess.” [Jim Cripwell]

“Climate science attempts to model this as a simple scalar average, without even knowing if the combination of all the feedbacks represents a stationary function. That is, they don’t even know if the mean of the sensitivity is a constant.” [bobl]

“Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. ” [IanH]

Thank you Richard, that’s exactly what I was trying to say, I was thinking about how energy lost from the surface, by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.

It seems to me that increasing the CO2 concentration increases the probability of such an interaction, and therefore must increase the emission to space. Does this component, for example, form part of the increased IR emission in the CO2 emission bands seen in the satellite record?

This isn’t much more than a thought at the moment, but it seems to me that this is just a question of conservation, i.e. energy in vs. energy out: anything that increases energy out must result in an overall cooling. Granted, it could be stratified, cooling in the upper atmosphere only, but given the convection processes at play… increasing the efficiency of radiation must increase the temperature difference, increasing the rate of convective and conductive heat transport to match.

This question has rocked my world, so to speak. I can’t reconcile this with a warming effect, and to date I have been firmly of the opinion that CO2 warms. That’s still true if one only considers radiation; in that case radiation to space should decrease as GHGs rise, because the radiation never reaches from the surface to height. But likely not if convective heat is radiated to space by GHGs. In that case there is always plenty of energy drawn from the thermal energy of the surrounding N2 and O2 to feed into the pipe…

Thank you Richard, that’s exactly what I was trying to say, I was thinking about how energy lost from the surface, by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.

It seems to me that increasing the CO2 concentration, increases the probability of such an interaction, and therefore must increase the emission to space.
———————————————————-
I think you’re also assuming that the radiation always has to be outwards (are you?). The reality is that the CO2 molecule has basically a 50/50 chance of radiating up and out or down and in. The net effect is to increase the transit time of the photon and increase the energy content of the atmosphere and the surface as a result. Of course this is happening at all levels of the atmosphere just to make it more complicated. Finally, it can be directly observed just by measuring the radiation from a dark sky at night.
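The “increased transit time” picture can be illustrated with a toy one-dimensional random walk: a packet of energy re-emitted up or down with equal probability takes, on average, of order N^2 re-emission events to escape a column of N absorbing layers. This is only a sketch of the statistical idea, with none of the real radiative-transfer physics:

```python
import random

def mean_escape_steps(n_layers, trials=2000, seed=1):
    """Average number of re-emission events before a packet starting at the
    surface (level 0) escapes out of the top of an n_layers-deep column.
    Each event moves it one layer up or down with probability 1/2; the
    surface simply re-emits it upward."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        level, steps = 0, 0
        while level < n_layers:
            level += 1 if rng.random() < 0.5 else -1
            level = max(level, 0)   # reflect at the surface
            steps += 1
        total += steps
    return total / trials

for n in (5, 10, 20):
    print(n, "layers:", mean_escape_steps(n))
# Doubling the column depth roughly quadruples the mean transit time
# (the exact expectation for this walk is n*(n+1) steps).
```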

‘Climate’ and ‘Climate Change’ are interpretations, in part based on the psychological state of the ‘observer’ at any particular time and therefore not physical in any way or form, i.e. fantasies or phantasms.

Fantasies and phantasms have no sensitivity, not even memory, they are only apparitions.

The fundamental problem with Climate Alchemy is that it starts from the premise that the ~15 µm CO2 IR band, emitting at ~220 K to space, controls IR energy flux to space, because if you double CO2 it reduces that band’s emitted flux by ~3 W/m^2.

However, at present CO2 level, that band is ~8% of OLR. 92% of the OLR comes from cloud level, the H2O bands and in the atmospheric window, near the surface temperature.

The premise has to shift to accepting that the Earth self regulates OLR equal to SW energy IN and the variations about the set point are oscillations as long time constant parts of the system adapt.

You may have mastered puppets, but you need to work on your physics. Next time your thermometer says it’s 104 degrees out, tell yourself that that sweat pouring down your face is just a “phantasm”. I’m sure you’ll feel a lot cooler. JP

atarsinc: 12.32 am: ‘Care to describe that mechanism? Or are we supposed to just take your word for it.’

Very simple. The OLR bite idea, whilst superficially appealing, can only apply if it is proved that no extra atmospheric heat from increased CO2 leaves directly to space via the atmospheric window.

The system has many ways of doing this so we see the null point of the heat engine. The real GHE is the rise in surface temperature above the no GHG state, ~11 K, and is completely separate from the LR difference of surface temperature to the tropopause.

No CO2-AGW means no feedback via H2O and no positive feedback.

In reality we see extreme negative feedback with the oscillations from long time constant parts of the system, e.g. ENSO.

There appears to be a substantial difference between the UAH Version 5.5 global anomaly vs the HadCRUT4 & GISS global anomalies.

It appears HadCRUT3 to HadCRUT4 has been adjusted to match GISS; HadCRUT3 to HadCRUT4 looks to be a 0.1 C temperature increase for recent temperatures.
I thought it odd that Wood for Trees does not include the UAH global anomaly data, which would enable an easy comparison of the different data sets.

Has the sensitivity analysis been done with the UAH Version 5.5 global anomaly data?

Comments:
1. There seems to be a concerted effort to manipulate the temperature data sets to reduce past temperatures and increase current ones. James Hansen’s actions (and what he states in his books/papers) and the climategate emails appear to indicate that there are some people in senior positions who will support the adjustment of data and analysis to push an agenda.
2. It is surreal that those people who are pushing the agenda appear to have absolutely no understanding of the implications of what is required for a true, worldwide reduction in CO2 emissions (say a true reduction of 20% moving to 60%), not just spending money on green scams. The scams do not work. For example, EU carbon emissions have increased when the carbon content of goods manufactured abroad and imported into the EU is included. The scams do not even reduce CO2 emissions locally. For example, the conversion of food to biofuel has resulted in a net increase in CO2 emissions if all the energy inputs to grow and convert the food are included in the calculations. That scheme is significantly worse than burning fossil fuel, as virgin forest is being cut down to grow food to convert to biofuel. The same frustration over temperature data manipulation applies to cooked-book scam calculations. Western countries are spending money on green scams which only result in higher energy costs and job losses in Western countries. The scams do not significantly reduce CO2 emissions in Western countries. Regardless, world emissions are increasing and will continue to increase.
3. If there truly was an AGW crisis and world CO2 emissions were required to be significantly reduced the only viable solution is a massive move to nuclear and wartime like restrictions for all countries. For example, draconian restrictions on private and business travel (airlines go bankrupt, no more tourism travel, forced living in apartments rather than separate housing, and so on.), rationing of energy per person, and so on. There has been zero discussion of the implications of true world reduction in CO2 emissions.
4. The current plan is spend money on scams and accept increased job loss in Western countries and hope for a fairy with a magic wand. Facts are facts. Ignoring reality does not change reality.

What is important here is that Nic has established AND got accepted this improved method of extracting these parameters. The results, however, do depend upon the accuracy of the OHC data for the difference between TCR and ECS.

The researcher in charge of the OHC record found a notable drop in 2006 and was about to present his results to a conference when he was persuaded, on the basis of a disparity with the TOA energy budget, to “correct” the error in OHC.

This resulted in a significant number of XBT data being deemed incorrect, and the cooling was removed. He then had to change his conference address to explain it was all a big mistake.

However, there was a significant change in length of day that would suggest his original findings were correct (just not politically correct).

Global temperature change causes large shifts of water from the oceans to the atmosphere, and length of day thus bears a strong resemblance to the temperature record.

At some time someone will need to re-examine the objectivity of removing inconvenient data and then reassess OHC. That can be the basis of next year’s incremental step towards a more accurate assessment of ECS.

@bobl
> To me this implies that all IR radiation to space from the atmosphere must
> be from a greenhouse gas? …
> … What am I missing?

The ‘elephant in the room’ that you are missing is that most of the IR radiated to space from a planet is from the surface. The GHE does add significantly to that, mostly from water vapor. CO2 plays a very minor role, along with dust and other microscopic suspended matter.

Tsk Tsk says:
May 24, 2013 at 7:27 pm
I think you’re also assuming that the radiation always has to be outwards (are you?). The reality is that the CO2 molecule has basically a 50/50 chance of radiating up and out or down and in. The net effect is to increase the transit time of the photon and increase the energy content of the atmosphere and the surface as a result. Of course this is happening at all levels of the atmosphere just to make it more complicated. Finally, it can be directly observed just by measuring the radiation from a dark sky at night.

While your statement is true the average of all radiation emitted from thermalized energy is towards space. This is as I said above. Any radiation emitted towards space travels a longer distance before re-absorption than radiation emitted toward the surface. This is due to the density differences. Hence, we can model all these radiation events as the statistical average with all radiation travelling a small distance outwards. Adding CO2 increases the number of radiation events thus increasing the flow of energy to space.

Keep in mind we are not discussing surface energy. The surface energy radiated upward is absorbed as well and the more CO2 the more that gets absorbed and radiated back towards the surface. I’ve read that about 90% is immediately re-emitted. Hence, 10% is thermalized and will participate in the above process as will latent energy, conductive energy and energy absorbed in the atmosphere from the sun.

John Day says:
May 25, 2013 at 5:17 am
@bobl
> To me this implies that all IR radiation to space from the atmosphere must
> be from a greenhouse gas? …
> … What am I missing?

The ‘elephant in the room’ that you are missing is that most of the IR radiated to space from a planet is from the surface.

Is it? From what I’ve read the vast majority on Earth is from the atmosphere (almost 90%). According to the KT energy budget around 30% of the sun’s energy is absorbed directly in the atmosphere just to start with. Add to that latent energy, conductive energy, thermalized energy and radiation absorbed by the atmosphere and you aren’t left with much that passes through. Much of it may have started from the surface, but the final radiation event takes place in the atmosphere.

The drop in sensitivity from simply factoring in a microsite bias also applies to any other temperature record adjustments. For example, by removing the OHC adjustments Greg Goodman mentioned, or all the historic adjustments that lower previous temperatures, I would expect to see a much, much smaller sensitivity. IOW, their calculations are only as good as the data itself.

I wonder what the number would be if only raw temperature data was used …

@Richard M
> around 30% of the sun’s energy is absorbed directly in the atmosphere just to start with…

I think you’re confusing absorption with albedo. 30% of the sun’s energy is reflected by the clouds, 3% is absorbed by the clouds. The atmosphere absorbs only 13% of solar irradiation directly (not counting clouds). Of the non-reflected energy, 84% reaches the surface and oceans, and 4% of that is reflected back. The remainder (~80%, aka the ‘elephant’) is re-radiated as IR heat. Yes, 90% of this is absorbed by the atmosphere, but its source was the surface. And yes, that heat absorbed by the atmosphere, mostly by water vapor, is significant (as I said in my post) as GHE warming. CO2 plays a minor role, again as I said in my post.
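The bookkeeping in that chain can be checked directly. A sketch using the comment's own percentages (which are round illustrative numbers, not authoritative budget figures):

```python
# Fractions of the energy NOT reflected straight back to space:
to_surface = 0.84          # reaches the surface and oceans
surface_albedo = 0.04      # of that, reflected straight back

absorbed_at_surface = to_surface * (1 - surface_albedo)
print(round(absorbed_at_surface, 2))   # ~0.81, the "~80%" elephant

# Of the surface's re-radiated IR, 90% is absorbed by the atmosphere:
absorbed_by_air = absorbed_at_surface * 0.90
print(round(absorbed_by_air, 2))       # ~0.73
```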

You can literally (almost) see the GHE effect of water vapor in NOAA’s 6-micron IR imagery, where the white areas denote sections of highest absorption by H2O in the atmosphere. Black represents direct IR heat radiating from the surface. (It’s a negative image, somewhat unintuitive at first glance): http://www.goes.noaa.gov/GSSLOOPS/ecwv.html

Where are the analogous imagery loops of roiling clouds of CO2 absorbing IR heat? I’ve never seen any, have you?
:-|

Regarding John Day’s comments at 8:14 above, the IPCC tells us in Chapter 8 of their AR4 report per Dr. James Hansen that “…a doubling of atmospheric CO2…with no feedbacks…the global warming from GCMs would be around 1.2°C…”

Has anyone ever seen a similar climate sensitivity value for a doubling of water vapor, without feedbacks, published anywhere?

We often read about feedbacks and how water vapor, being a strong greenhouse gas, increases in the warm-up and produces a positive feedback, presumably some of this from its greenhouse-effect properties alone. So how much would that be without consideration of its latent heat, cloud albedo and cloud insulation properties?

If the average world temperature is around 60°F and it goes up to around 61°F due to a doubling of CO2, the average concentration of water vapor in the air might go from around 1200 ppm to 1300 ppm, or around 8%, far short of doubling. So how much warming would an 8% increase in water vapor produce?

“…..My take: ECS finally turns out to be vanishingly small (i.e. there is a governor on climate responses à la Willis Eschenbach), then TCR is larger than ECS and within a few years it declines to the minor ECS figure and natural variability is basically all that is left. How’s that for a model!”

The most reliable aspect of OHC data, from Argo or otherwise, is the relative data on vertical heat movement (as opposed to global heat budget speculation). Thus this new data, by showing more heat uptake at 2000 m than 700 m, points to vertical mixing and downward movement of heat. That’s why it’s getting so bloody cold. It’s odd that Niño 3.4 seems to be taking a dive now after a rather anomalous peak in March-April; normally Niño 3.4 peaks either in January (El Niño-like event) or in the summer (La Niña-like event). It’s also noteworthy that total NH sea ice remains close to the winter maximum.

“Has anyone ever seen similar climate sensitivity value for a doubling of water vapor without feed backs published anywhere?”

Not possible. If, as I maintain, the surface temperature is set in the radiating zone above 14,000 feet and teleconnected to the surface via the lapse rate, then the water content of the atmosphere is approximately fixed at some value less than 100 percent relative humidity. Exceed this value and it rains.

My two cents’ worth: it is highly unlikely that this generation or the next will see any significant change in climate, be it cooling or warming, unless there is massive volcanic activity or a giant meteorite hits us. So saying we are going into an ice age is just as dumb as the warmista proposals currently in vogue, although I would just love to see a massive ice age coming tomorrow just to win the argument against the warmistas. Although current low solar activity WILL affect climate over 100’s to 1000’s of years, we only live 100 yrs (we hope!) at most. LOL

Nic // Why not do a meta analysis to collapse those wide C.I. values. The consistency between the various results suggests that the C.I. is too large.

The errors in the various estimates will be highly correlated, unfortunately, so a meta analysis would not be straightforward and, I suspect, would bring little improvement over the best constrained single estimate, that for the latest decade (or maybe the last 13 or 14 years).

We need to get a better constraint on aerosol forcing, in particular, to bring down the CI. Interestingly, some of the inverse studies give much more tightly constrained estimates of aerosol forcing than do the direct satellite observation based studies. Eg, Forest et al 2006 and my objective Bayesian reworking of it, Lewis 2013, Aldrin et al 2012, and probably Ring et al 2012 if they could be bothered to work out the CI. However, that does ignore model error.

What else is missing? SURGE events that make an overall short term balance within the planetary heat budget; heat dissipation and cold dissipation at the tropopause (think stratwarm & chill) via the narrow bands of the spectrum. It would behoove all to know what the subsurface temperature balance looks like over time at -1m through -100m, including below the ocean basins, which is a trick and a half and unachievable…
Equilibrium happens, but at what levels/forcings?

I (and many others) have been suggesting for years that volcanoes are used as a fudge factor to give the impression that models and their projections (predictions) are more capable than they truly are. Now we see the role of smoothing in bringing about this impression.

You are right to observe that there ought to have been no smoothing of volcano forcing.

Greg (Greg Goodman says: May 25, 2013 at 3:31 am) also points out the questionable adjustment made to the ARGO data. It is important to bear in mind that this makes the ARGO data set potentially unreliable, at least as far as its tuning to pre-ARGO data sets. The full implications of this questionable adjustment should not be overlooked whenever one discusses OHC or ARGO data.

Steve Mosher says, “Let me put the importance of this metric into perspective: every degree of C in uncertainty is worth about 1 trillion dollars a year if you are planning to mitigate.”

Are you saying that the “C” in CAGW will be a trillion dollars in damage per 1 degree rise, or are you claiming it will cost one trillion dollars to change the T by one C? Either way I call B.S. on your numbers, so please show me your power. (I actually think a drop of one C would be far more costly, and reduced CO2 means less food, and more land and water required to produce said food.)

bobl says:
May 24, 2013 at 5:30 pm
Thank you Richard, that’s exactly what I was trying to say, I was thinking about how energy lost from the surface, by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.

1. CO2 molecule takes up energy through collision with non radiating gas
2. C02 molecule emits photon
————————————————————————————————–
I see all of this as a function of the residence time of the energy involved. So a GHG decreases the residence time of energy received via collision from a non-GHG, but can, 50% of the time, increase the residence time of outgoing energy received as OLWIR from the surface, by directing said energy back towards the surface. Clearly, if the GHG cools the upper atmosphere relative to a non-GHG molecule, then conduction from below, as well as convection, accelerates upwards.

An interesting thought experiment is what would happen in an atmosphere with zero GHG. According to radiation theory the atmosphere would be far cooler (some say 30 degrees) than the surface. However, the hotter surface would then continually net-conduct energy to the atmosphere just above it; the atmosphere above the surface would then cool by conducting energy to ever higher elevations, and the lower atmosphere would continually receive ever more energy via conduction from the surface. Eventually, as energy is never lost, the atmosphere would establish an equilibrium with the surface. The lapse rate would be set via the molecules per m^2 at the established T, not by the different vibrational rates of each molecule, as they would equalise, but by the number of molecules hitting the measuring instrument (the more mass per m^2, the higher the specific heat per m^2). Eventually, in this non-GHG world, you would not have back radiation to the surface, but “back conduction” to the surface, thereby increasing the specific heat above the S-B equation.

So this is my assertion, based on David's Law of physics, which reads: “Only two things can affect the energy content of any system in a radiative balance. Either a change in the input, or a change in the ‘residence time’ of some aspect of those energies within the system.”

26 May: UK Telegraph: Louise Gray: Hay Festival 2013: global warming is ‘fairly flat’, admits Lord Stern
Lord Stern, who originally warned the Government about climate change, has admitted that global warming has been “fairly flat” for the last decade.
“I note this last decade or so has been fairly flat,” he told the Telegraph Hay Festival audience.
He said the reasons were because of quieter solar activity, aerosol pollution in certain parts of the world blocking sunshine and heat being absorbed by the deep oceans.
Lord Stern pointed out that all these effects run in cycles or are random so warming could accelerate again soon.
“In the next five to ten years it is likely we will see the acceleration because these things go in cycles,” he warned…
He said it was an “illusion” to claim that the short term flat line in global warming means that global warming is no longer a threat.
“It is a dangerous extrapolation of the short term phenomenon into a long term trend when the underlying responses for long term trends in terms of rising greenhouse gases are well understood and clear.”
Lord Stern also said he has written to the Prime Minister urging him to introduce a target to decarbonise electricity by 2030 as part of the Energy Bill, currently going through Parliament.
***He said investors need the policy clarity in order to build the infrastructure Britain needs in future…
http://www.telegraph.co.uk/culture/hay-festival/10081250/Hay-Festival-2013-global-warming-is-fairly-flat-admits-Lord-Stern.html

Richard M says:
May 25, 2013 at 7:04 am
The drop in sensitivity by simply factoring in a microsite bias also applies to any other temperature record adjustments. For example, by removing the OHC adjustments Greg Goodman mentioned, or all the historic adjustments that lower previous temperatures, I would expect to see a much, much smaller sensitivity. IOW, their calculations are only as good as the data itself.

I wonder what the number would be if only raw temperature data was used …

first principles will get you to 1.2C. I think we put the line of demarcation at 1C. But as folks get more evidence that might be pushed down.

Here’s the difference between a lukewarmer and a CAGWer: looking at the same PDF for sensitivity that ranges from 1 to 6, the lukewarmer will note that over half of the PDF falls below 3C. The CAGWer will talk about everything above 3C.

Simple point. The range is uncertain. There is room for many beliefs. But I discount people who say they KNOW the value is low. I discount people who say they FEAR the value is high.
Just look at the best knowledge we have; don’t whine that this knowledge is imperfect. Look at the best understanding and describe it without overreaching. The PDF runs from about 1 to 6. It’s a good bet that the true value is less than 3C. If you can say that, you are a lukewarmer.

A. Humans add GHGs
B. GHGs warm, they do not cool the planet.
C. How much? Over time periods of 100 years or less, doubling is more likely to create less than 3C warming than it is to create more than 3C warming.
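The “over half the PDF falls below 3C” point above can be illustrated with a toy distribution. A minimal sketch, assuming a purely hypothetical lognormal sensitivity PDF spanning roughly 1–6 C (the mu/sigma values are made up for the illustration; they do not come from any published study):

```python
import math

# Hypothetical lognormal sensitivity PDF, median 2.5 C (illustrative only).
mu, sigma = math.log(2.5), 0.45

def cdf(x):
    """Lognormal CDF via the error function."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# Most of the mass sits between 1 and 6 C, yet well over half lies below 3 C,
# while a long warm tail remains above it.
print(round(cdf(3.0), 2))   # about 0.66 of the mass below 3 C
```

A right-skewed distribution like this is how “it’s a good bet the true value is less than 3C” can coexist with a tail reaching 6C.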

According to the data from NASA last year: “Carbon dioxide and nitric oxide are natural thermostats,” explains James Russell of Hampton University, SABER’s principal investigator. “When the upper atmosphere (or ‘thermosphere’) heats up, these molecules try as hard as they can to shed that heat back into space.”
That’s what happened on March 8th when a coronal mass ejection (CME) propelled in our direction by an X5-class solar flare hit Earth’s magnetic field. (On the “Richter Scale of Solar Flares,” X-class flares are the most powerful kind.) Energetic particles rained down on the upper atmosphere, depositing their energy where they hit. The action produced spectacular auroras around the poles and significant upper atmospheric heating all around the globe.
“The thermosphere lit up like a Christmas tree,” says Russell. “It began to glow intensely at infrared wavelengths as the thermostat effect kicked in.”
http://science.nasa.gov/science-news/science-at-nasa/2012/22mar_saber/

Nic Lewis says:
May 24, 2013 at 2:11 pm
Steven Mosher wrote:
“I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by .1C per decade from 1979-current, what would that do to the sensitivity calculation?”

Good point, Steve. That assumption would reduce the increase in global temperature between the 1860-79 mean and the 2003-12 mean from 0.76 C to about 0.68 C. All the climate sensitivity estimates, and their uncertainty ranges, would then reduce by about 11%. So a sensitivity of 1.7 C would change to just over 1.5 C, for example.
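The “about 11%” follows directly from the energy-budget formula being linear in the temperature change. A quick sketch; only the 0.76 C and 0.68 C figures come from the comment above, while the forcing and heat-uptake values are hypothetical placeholders:

```python
# Energy-budget sensitivity estimate: ECS = F2x * dT / (dF - dQ).
F2x = 3.71          # forcing for doubled CO2, W/m^2 (commonly used value)

def ecs(dT, dF, dQ):
    return F2x * dT / (dF - dQ)

dF, dQ = 1.9, 0.4   # hypothetical net forcing and heat uptake, W/m^2

base = ecs(0.76, dF, dQ)    # with the observed 0.76 C warming
biased = ecs(0.68, dF, dQ)  # with 0.08 C of microsite bias removed

# The forcing terms cancel in the ratio, so the reduction is just 1 - 0.68/0.76.
reduction = 1 - biased / base
print(round(reduction * 100, 1))   # about 10.5%, i.e. roughly the 11% quoted
```

Because ECS scales linearly with dT, every estimate and its uncertainty bounds shrink by the same fraction, which is why 1.7 C drops to just over 1.5 C.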

I hope Anthony and others take note of this. What this does is allow people to frame their concerns about temperature accuracy in the LARGER PICTURE of the scientific debate over sensitivity.

Look at the equations for ECS and TCR. People need to relate their skepticism to these equations.
That way they are part of the debate.

In order to understand science you need a healthy dose of caution. The limits of our data and understanding mean we must pepper our conclusions with appropriate caveats and/or uncertainty ranges. You seem to completely misunderstand this and instead favour the idea of perfection or nothing. The unfortunate truth is that most of the time science is about being less wrong rather than about being right; you need to moderate your skepticism appropriately.

#####################################

Yes, Many have pointed out to Jim that he needs to be skeptical about his skepticism.
Skepticism in science is a TOOL, it is not a philosophy. By Jim’s definitions we can never know anything, which is fine for philosophical skepticism, but death to science which operates on incomplete, inconclusive, data and induction. There is a debate in science. people can join that debate by following Nic’s path. So there is a seat for everybody at that table. Crying that the table doesnt exist will get you ignored. Complaining that you want to eat at a different table, will get you ignored. There is a debate. Its a scientific debate. Its about climate science and the most important question we can ask: How much warmer? Arm waving will get you ignored.
Attcking fundamental physics will get you ignored. Screaming fraud will get you ignored. There is a debate. There is a question on the table and open seats. Do like Nic and take a seat.

John Day says:
May 25, 2013 at 8:14 am
@Richard M
> around 30% of the sun’s energy is absorbed directly in the atmosphere just to start with…

I think you’re confusing absorption with albedo. 30% of the sun’s energy is reflected by the clouds, 3% is absorbed by the clouds. The atmosphere absorbs only 13% of solar irradiation directly (not counting clouds). Of the non-reflected energy, 84% (the ‘elephant’) reaches the surface and oceans, 4% of that is reflected back. The remainder of that (~80%, aka the ‘elephant’) is re-radiated as IR heat. Yes, 90% of this is absorbed by the atmosphere, but its source was the surface. And yes, that heat absorbed by the atmosphere, mostly by water vapor, is significant (as I said in my post) as GHE warming. CO2 plays a minor role, again as I said in my post.

I am using the KT energy budget diagram as my source. Where did you get your numbers? In addition, my 30% was computed after you subtract out albedo and includes clouds, which may have led to some confusion. If you add the atmospheric sources you have 78+80+17 = 175 W/m2, and that does not include thermalized energy. That’s a lot of energy.

Clearly CO2 can interact with any molecule in the atmosphere so I didn’t limit it. I agree the effect is small but so is the greenhouse contribution of CO2 compared to water vapor and clouds. We have two small effects and we don’t know exactly how they work in total. That was my point.

I happen to agree. The problem is very biased people think they know how to adjust the data. As has been proven time and again in medical research, researcher bias always affects the results towards the bias. We can be 99+% assured that the data is biased on the warm side. Being mathematically inclined, I’m more of the opinion that the errors are likely to cancel out and it’s probable we don’t even know all of the factors that make it bad. That makes using the raw data as good (and probably better) as anything else.

Steven Mosher says:
May 26, 2013 at 8:08 am
____________________________
Attempts at appearances of reasonableness only go so far.
There isn’t any compelling argument that a doubling of CO2 produces anywhere near as much as 3C.
All this middle-road stuff only puts half your headlights in the opposing lane.

Steven Mosher. You have made this personal. Let me reply in the same spirit. You misquote what I write, and give a completely false impression of what I believe. I have stated over and over again that CAGW is a perfectly viable and reasonable hypothesis. But that is all it is; just a hypothesis with no empirical data to back it up and prove it is correct. You accuse me of bringing nothing to the table. That is false. You claim that by bringing hypothetical estimates of a numeric value of climate sensitivity to the table, you are somehow proving that CAGW is correct. You insist that there is no categorical difference between estimates and measurements. All I try to point out is that, until you have actual measurements of climate sensitivity, you cannot make CAGW any more than a hypothesis. I don’t bring nothing to the table. I point out that what you and the warmists bring to the table is not proper physics.

The fundamental issue, which you refuse to discuss, is whether the IPCC statements in the SPMs that there is a 95% or 90% probability of something being correct have any basis in science. I maintain that while CAGW is a viable hypothesis, that is all it is, and the IPCC claims of high probabilities that things are correct are scientific garbage.

Steven’s claim that you must “sit at the table” (i.e. accept the basic paradigm) to have an impact on a scientific discussion is nonsense. Both ulcers and plate tectonics are examples that prove his assertion is wrong (and there are many others). Sorry Steven, making erroneous assertions to try and force people to your way of thinking only detracts from your credibility.

Steven Mosher says:
May 26, 2013 at 8:08 am
ruvfsy says:
May 24, 2013 at 7:50 pm
So Mosh,
Where is the fine line between denialism and lukewarmerism?
1.2 per doubling of CO2?
##################################
first principles will get you to 1.2C. I think we put the line of demarcation at 1C. But as folks get more evidence that might be pushed down.
————————————————————————
Let it be recorded here, Steven Mosher considers Richard Lindzen to be a denier.

Also Mr Mosher, I notice you avoided an answer to my questions with regard to your trillion dollar per degree C statement.

I’m a believer in the standard assertion that GHGs like H2O, CO2, and O3 can and do absorb small sections of the LWIR surface-emitted radiation, and thereby raise the Temperature of that atmosphere at whatever altitude the absorption occurs; quite high, in the case of O3.

Now whatever means, by which, the Temperature of the atmosphere is increased; the result of such a higher Temperature, has to be an increase in the rate of radiative cooling of that atmosphere to outer space. Higher Temperature things radiate faster; all else being equal.
Well increasing atmospheric Temperature, also should enhance convective transport of heat energy to higher altitudes, to be lost to space.

So increasing the atmospheric Temperature, beyond what the natural altitude lapse rate would dictate, should permit faster cooling to space, with a lower surface Temperature; because of the rapid radiative transfer of energy from the surface to higher altitude GHGs, bypassing the slower conduction/ convection mechanisms.
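The “higher Temperature things radiate faster” point above is just the Stefan-Boltzmann law, under which radiated power goes as the fourth power of absolute temperature. A minimal sketch (the 255 K figure is an illustrative choice of effective emission temperature, not a number from the comment):

```python
# Stefan-Boltzmann: power radiated per unit area scales as T^4.
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiated(T, emissivity=1.0):
    return emissivity * sigma * T**4

p_cool = radiated(255.0)           # an illustrative emission temperature
p_warm = radiated(258.0)           # roughly 1% warmer

# A ~1% rise in absolute temperature gives ~4% more radiated power,
# which is the sense in which a warmed layer sheds heat to space faster.
print(round(p_warm / p_cool, 3))   # about 1.048
```

All else being equal, then, any process that warms an emitting layer steepens its rate of radiative loss.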

For the life of me, I can’t even imagine by what mechanism, a higher atmospheric Temperature, can transport heat energy down to the surface, contrary to what the second law mandates for actual heat energy transport. The net heat flow must be from the warmer surface to the cooler upper atmosphere. True; the smaller Temperature gradient between a lowered Temperature surface , and a Temperature enhanced upper atmosphere (via LWIR radiation), must diminish the rate of conductive heat energy transport, upwards in the atmosphere.

And of course, we know that downward LWIR radiation from the warmed atmosphere is strongly absorbed in a thin layer of mostly ocean surface, leading to accelerated evaporation, and transfer of latent heat energy back into the atmosphere.

Readers of WUWT, and all friends of “Blackbody Radiation” (including moi) might be interested in the following narrative, from the June 2013 issue of Optics & Photonics News; a regular publication of The Optical Society of America; one of the foundation bodies of the American Institute of Physics; in their “Optical Feedback” (letters) section, from one Edward Collett of New Jersey.
He comments on an article in the April 2013 OPN issue regarding a less than happy episode between Albert Einstein, and S.N. Bose, of Bose-Einstein Statistics renown.
Bose had published a paper on this statistics, and Einstein followed with a paper he couldn’t have written without seeing the Bose paper. Einstein never thought to include Bose by invite, as a co-author of his paper.

It seems that in 1905, Einstein had recognized the inconsistency of Max Planck’s derivation of the Black Body Radiation formula.

“it was part quantum and part classical”.

Einstein spent the next 20 years trying to formulate Planck’s radiation law in completely Quantum Mechanical terms. He failed and by 1925, Einstein had not developed any part of modern Quantum Mechanics. No wonder Einstein didn’t like the whole idea of quantum theory.

The above freely excerpts from Collett’s letter, and he is relating from the literature on Bose and Einstein.

Yes Max Planck suggested that EM radiation came in integral chunks; same as pumpkins on a vine.

You can pick two pumpkins, or 17 pumpkins, but not 2.71828 pumpkins; nor photons either.

Planck placed no restrictions on the size (energy) of either photons, or pumpkins. He simply said that the photon energy, and the associated wave frequency were related by E = h (nu)

That makes h = energy times time (action), so h is the action in each cycle of the associated wave frequency of the photon.

(nu) can range from zero to infinity, sans both ends, without restriction, so the photon energies are in no way quantized; just as the mass and size of pumpkins are not quantized.

Quantum theory deals with the actual physical energy levels of electrons et al in real physical materials. BB radiation theory, incorporates the real physical properties of no material of any kind; and is in every way non-existent; yet such an important step in the evolution of modern physical understanding of our universe.

By all accounts, S.N. Bose was one very smart guy, and apparently a very nice guy too. One of the great Indian physicists, like Raman and Chandrasekhar. Forgive me if I have left out any of the well known Indian Physics “biggies”.

@george e. smith
> Bose had published a paper on this statistics, and Einstein followed with a
> paper he couldn’t have written without seeing the Bose paper. Einstein
> never thought to include Bose by invite, as a co-author of his paper.

Einstein had indeed seen Bose’s paper, because Einstein wrote it! Bose (the ‘bos’ in boson) had requested Einstein to translate it into German for him. He had unsuccessfully tried to get it published elsewhere, but was rejected because his startling new theory about the distribution of energy in a photon gas didn’t coincide with the ‘consensus’ theory (classical Maxwell-Boltzmann distribution of ordinary gases).
http://en.wikipedia.org/wiki/Satyendra_Nath_Bose

As for Planck’s Black-Body Radiation Law being “part quantum and part classical” and Einstein’s involvement, you’ve got that twisted too. Planck formulated the law in 1900 using only empirically derived constants, under classical assumptions. It wasn’t until 1914 that he further expressed it as a statistical distribution:
http://en.wikipedia.org/wiki/Planck's_law#CITEREFPlanck1914

So that statement about “in 1905, Einstein had recognized the inconsistency of Max Planck’s derivation of the Black Body Radiation formula” is BS. (What inconsistency?) What Einstein did in 1905 (besides Relativity) was to explain the photoelectric effect (scattering of photons as light). It was for this photoelectric work that Einstein received his only Nobel Prize in 1921.

Planck rightfully deserves credit as the ‘father of quantum theory’ because the Planck Relation (with its famous constant h: e=h*nu) and the radiation law are fundamental “planks” (sorry) in Quantum Theory. Planck’s Relation (like Einstein’s e=mc2 and Boltzmann’s S=k*logW) is amazing because it reveals a remarkable relationship between two worlds that previously seemed unrelated.

Einstein gets supporting credit too, for his photoelectric effect, which was one of the earliest portrayals of photons as particles.

@george e. smith
> Bose had published a paper on this statistics, and Einstein followed with a
> paper he couldn’t have written without seeing the Bose paper. Einstein
> never thought to include Bose by invite, as a co-author of his paper……”””””

Well John, if you read my post, you would see I merely relayed the essence of a letter published in OPN for June 2013.

I suggest that the best place for your learned rebuttal of that “BS”, is for you to send it to the OPN feedback column, for them to publish; they always seek to learn the truth, so they would be most appreciative of Wiki’s definitive reporting on the matter.

As for Einstein having “written” Bose’s circa 1924 paper, your Wiki source merely says that Einstein simply translated it into German; he did not “write it”.

Yes Einstein belatedly received his Nobel prize in physics for the work on the photo-electric effect.

And for your information, the photo-electric effect has nothing whatsoever to do with “the scattering of photons as light”.

It relates to the emission of electrons from certain metals, when irradiated by EM radiation.

Classical physics, had no explanation for the PE effect (and still doesn’t).
The emission of electrons (or not) is quite unrelated to the intensity of the EM radiation; other than the number of electrons emitted (if any).
What determines the emission (or not) is the frequency or wavelength of the radiation. No matter how weak the irradiance, even down to a single photon, if the photon energy [h.(nu)] exceeds a certain threshold, electron emission can occur (and with quite high quantum efficiency).

But even a kilowatt of power from a CO2 laser, at 10.6 microns wavelength, will not release a single photo-electron from a material that will emit an electron with as high as 90% QE, from a single 2 eV photon.
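The CO2-laser figure above can be checked with E = h*c/lambda, using the standard constants (values rounded for the sketch):

```python
# Photon energy at the CO2-laser wavelength, E = h*c/lambda.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

E_co2_laser = h * c / 10.6e-6 / eV   # 10.6 micron photon, in eV
print(round(E_co2_laser, 3))          # about 0.117 eV
```

At roughly 0.117 eV per photon, a 10.6 micron photon carries well under a tenth of a 2 eV emission threshold, so no intensity of such light can liberate a photoelectron from that material; only photon energy, not total power, decides whether emission happens at all.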

So if you rely on wiki for your source of factual scientific information; don’t be surprised if they sometimes feed you “BS”.

And do send your rebuttal to OPN feedback column; I can’t wait to see it in print.

@george e. smith
> … if you read my post, you would see I merely relayed the
> essence of a letter published in OPN for June 2013…
Yet, you had no problem in letting dbstealey and the rest of us think it was your analysis too. George, no offense, but you occasionally pontificate a bit too much on topics you haven’t quite mastered. So, as remediation, I suggest that _you_ write the rebuttal to OPN.

> And for your information, the photo-electric effect has nothing whatsoever
> to do with “the scattering of photons as light”.
Oops, typo, I meant to say “as electrons”. Mea culpa, peccavi. [And perhaps I should have put “scattering” in quotes.] But, PE _is_ a kind of scattering, in a broad sense (if you squint a little, so you can’t distinguish between photons and electrons). So, particles (photons) arrive, interact with atoms and leave in different directions (as electrons). That is ‘scattering’, which is what I was trying to convey (by slightly abusing the term).

>… Einstein simply translated into German; he did not “write it”.
Now you seem to be trying to hide your previous endorsement of this allegation that Einstein had somehow plagiarized Bose’s idea: “Bose had published a paper on this statistics, and Einstein followed with a paper he couldn’t have written without seeing the Bose paper.”

The historical facts are: 1) Bose wrote a paper but could _not_ get his paper published 2) he sent the paper to Einstein and asked for his help to get it published.

So the above allegation is clearly false and misleading. Of course Einstein had seen the paper! That was the point I was making! So Einstein translated (i.e. ‘re-wrote’) the paper in German. Do you disagree with Wikipedia and other references on the historical accuracy of this? [BTW, Wikipedia is a collection of peer-reviewed documents (not that that guarantees total accuracy of course)]

Einstein was no saint, but it is well known that he did enthusiastically endorse Bose and his work to the international scientific community. Subsequently, Bose was promoted and given a 2-year paid sabbatical to visit Europe to collaborate with his “peers” (even though he did not have a PhD). Dirac named the ‘boson’ in his honor.

Well John, I just lost an hour’s worth of typing, when PG&E decided to put a four way stop sign on my local power grid. My LED reading lamp, was only out for sixty seconds; but my internet was out and the loss was total, when I tried to post it in the dark.

So just a short comment. Wiki of course does NOT publish peer reviewed papers. They publish what someone unknown wrote. Now they certainly list peer reviewed papers, such as the German Bose paper you mentioned.

But did you look up that paper itself to be certain that the wiki author correctly quoted from it, or from any of the other references?

Some of the Wiki authors, are English language impaired, so what they write versus what they cite in bibliographies, are two quite different things.

As for what I personally write, 95% of it I simply type from memory; so yes, sometimes I disremember it. The other 5%, is typically data directly excerpted from reference handbooks, and other widely available texts. I almost always cite my sources, when I do that.
As for Wiki, I NEVER consult them for information. I do sometimes check them when others, such as you, give specific links to them. Mostly, that is a waste of time.

And Stealey evidently had no trouble discerning the difference between what I excerpted from Collett’s letter and what was subsequently my own personal input. So what was your problem with that?

You should really start thinking for yourself; there is little of this that any WUWT reader can’t understand. So stop citing Wiki references, unless you first read the peer reviewed papers they list, to ensure they quoted them correctly.

@george e. smith
>Wiki of course does NOT publish peer reviewed papers.
Wikipedia is not a system for publishing scholarly papers (yet). It is an on-line encyclopedia that may be reviewed, collaborated or edited by anyone, including experts and yourself. In theory, such a ‘collaborative public encyclopedia’ cannot possibly work, but in practice it works remarkably well. It’s not perfect, but, like a living thing, Wikipedia is evolving and getting better.

George, I know you want to change the subject (which is already far off-topic from ‘aerosol adjusted forcings’) but you really need to face the music and apologize for retweeting those errors in that OPN letter. Just say you’re sorry for any misinformation that you may have inadvertently said or repeated in regard to the 1924 Bose-Einstein paper. There, I said it for you.
:-|

You’ll see that Bose received full credit as the only author of the paper (“Bose, Dacca University, India”). Einstein’s name doesn’t appear until the end, in the Translator’s Note, where he warmly praises the work as “important progress in my opinion”. Einstein was already famous in 1924, so that tiny endorsement gave Bose the huge opportunity he was seeking.

@george e. smith
>Wiki of course does NOT publish peer reviewed papers.
Wikipedia is not a system for publishing scholarly papers (yet). It is an on-line encyclopedia that may be reviewed, collaborated or edited by anyone, including experts and yourself. In theory, such a ‘collaborative public encyclopedia’ cannot possibly work, but in practice it works remarkably well. It’s not perfect, but, like a living thing, Wikipedia is evolving and getting better.

George, I know you want to change the subject (which is already far off-topic from ‘aerosol adjusted forcings’) but you really need to face the music and apologize for retweeting those errors in that OPN letter. Just say you’re sorry for any misinformation that you may have inadvertently said or repeated in regard to the 1924 Bose-Einstein paper. There, I said it for you…..”””””

Stop making stuff up John. I don’t “tweet”, whatever the hell that is. You can’t seem to understand that the issue is NOT that the first paper by Bose WAS published with Bose as author, which he was; and of course in Einstein’s translation. The letter writer’s comment related to a SECOND PAPER written by Einstein as sole author; but which, according to history, Einstein could not have written, but for the fact that he had already seen the earlier Bose paper. The letter writer asserts, based on his understanding of the history, that Einstein should have acknowledged the earlier Bose work, and invited him to be co-author of the second paper which drew heavily on Bose’s work. You have not even acknowledged the existence of the later Einstein paper.

You are the one who is challenging the correctness of the letter writer’s assertions. You owe it to him, to write your objections to the OPN editorial staff, rather than shoot blanks here at WUWT.

I have no cause to alter a syllable of what I posted.

And there was no Bose-Einstein paper in 1924; there was a Bose paper, followed later by an Einstein paper, that drew heavily on the former, without attribution.

As for aerosol adjusted or any other “forcings”, these are childish gobbledegook, trying to simplify an overly complex chaotic system. The actual observationally measured real climate data, to the extent there is any, has not been explained by any of these ramblings that purport to blame a single minor component of atmospheric physics for changes in the climate. We don’t even have any credible climate data predating about 1980, due to the erroneous substitution of ocean water temperatures from uncontrolled depths for actual lower atmospheric Temperatures. Not only are they quite different, but they also are not even correlated, so the error is uncorrectable.

@george e. smith
>As for aerosol adjusted or any other “forcings”, these are childish gobbledegook…
Then you should have posted that opinion/comment to Anthony and Nic, instead of posting your other off-topic remarks here in this WUWT article.

So now you owe _me_ a really big apology for _your_ lack of due diligence…….””””””

Well, I have often said: “Ignorance is not a disease; we are all born with it; but stupidity has to be taught, and there are plenty willing and able to teach it. ”

If you check the record, you will see it was YOU who transferred the focus from Einstein’s johnny-come-lately paper, to the earlier landmark Bose paper which inspired it.

So don’t come wailing due diligence to me. If you actually learned anything in school you wouldn’t have to google wiki to find out how to spell a publication you never ever heard of.

As for dinosaurs; they managed to survive for 140 million years, just by being big, and mean, and ugly; whereas human “intelligence” is maybe 10% of the way to its first million years of survivability testing by Mother Gaia; and may not last longer than the next generation of time wasting juvenile distraction toys lets it run.

@george e. smith
> If you check the record, you will see it was
> YOU who transferred the focus from
> Einstein’s johnny-come-lately paper,

What johnny-come-lately paper? I still don’t know the identity of this so-called “paper”, which Einstein allegedly plagiarized from Bose, even though I very specifically asked you to provide a reference for it before any discussion took place. So, how could _you_ “focus” on a paper whose identity has not been cited?

Actually, I would still like to know which paper you were “focusing” on. Then we can have a nice long discussion on it, and its impact on “aerosol-adjusted forcings”. I’m sure Anthony and Nic would appreciate devoting even more bandwidth to that.

You’ll probably say “Look it up yourself”, but it’s funny that didn’t stop you from pontificating on it, a paper whose existence you still can’t provide a citation for.
:-|

@george e. smith
> If you actually learned anything in school you
> wouldn’t have to google wiki to find out how
> to spell a publication you never ever heard of.

George, let me try to fill in another, very annoying gap in your knowledge. You may be surprised to learn that the word “wiki” was not coined by Jimmy Wales, and it is _not_ a general abbreviation for “Wikipedia”. There are tons of wikis around on the Internet. They were invented by Ward Cunningham in 1994, long before Wales invented Wikipedia in 2001:
http://en.wikipedia.org/wiki/Wiki

So, Wikipedia consists of many, many thousands of wikis (“articles”), which is why you see the term ‘wiki’ in every single Wikipedia URL reference. But there are tons of wikis on the Internet that don’t belong to Wikipedia.

So please stop using ‘wiki’ as an abbreviation for Wikipedia. (Unless you’re just trying to irritate me, then that’s OK)

> Well, I have often said: “Ignorance is not a disease;
> we are all born with it; but stupidity has to be taught,
> and there are plenty willing and able to teach it. ”

So who taught you to never use Wikipedia? That’s like saying you won’t use any textbook if it contains any factual errors or typos. By doing this you are deliberately depriving yourself of a rich source of knowledge. After using Wikipedia for a while, you’ll soon learn to judge the reliability of the articles, and use the contained links to learn more, and navigate around the bogus information that you occasionally find in some Wikipedia wikis.

As I have often said: A person who teaches himself stupidity has a fool for a teacher.

And George Smith’s long professional career was in the electro-optics field, which trumps ‘studying on your own’. You really should listen to George, who has probably forgotten more than most folks will ever learn about the subject. He has been helping readers understand the subject for many years here. You showed up only in the past few months, that I know of.

When you need 3 posts to respond to one of George’s comments, you’re taking it way too personally. You could learn something by reading, instead of reacting. George Smith knows what he’s talking about WRT optics.

The 2009 Nobel Physics Prize co-winner, was a long time Bell Labs researcher. For 30 years, our careers intertwined, but I never met him. Often, at a technical conference, I would go to the registration desk, only to be told that I was already pre-registered, and there were messages for me on the message board. Well it was the Bell labs CCD inventor. By some quirk of fate, the director of R&D at Beckman Instruments, at that time, was also a George E. Smith. Three of us with similar jobs and job functions.

There are folks who know me, as well as the Nobel laureate, and have for decades. The inventor of the first practical “visible” LED knows both of us well. The inventor of the first practical LED (GaAs infrared) was Bob Baird at Texas Instruments in the early 1960s. He is also still active in the industry, but he would NOT know me.

As for learning in school: the best we can hope for, John, is that they teach us HOW to learn. The actual material they teach is not that important, so long as we learn how to replace it with what we really want to know.

If you don’t already have it, a small, cheap ($11) book I would heartily endorse, for your reading enjoyment as well as for reliable information, is George Gamow’s “Thirty Years that Shook Physics”, subtitled The Story of Quantum Mechanics. It’s extremely readable and informative.

I am fortunate to be able to chat with a PhD physicist who was a student of Gamow. He is also a top medical doctor; not just any medical doctor, but a world-famous star of television news. Well, he was, back in the days of Mercury, Gemini, and Apollo, when astronauts needed their vitals monitored. As it so happens, he is currently studying quantum mechanics at Stanford University.

There is some not-very-accurate bio of mine bobbing around on the web, circa 2000; but no, I am not the Nobelist.

Those must have been interesting times for you, George, with three George E. Smith’s working in the same field!

Let’s give our little discussion a rest. Sorry I gave you such a hard time. My only motive is to seek the truth. I like reading about the history of physics, because it gives us some insights into physics today.

Best regards,
John Day

Never any hard feelings, John; at my age my hide is well tanned, like a good Texas saddle.

And I really, seriously commend that George Gamow book to you; it’s a Dover paperback, available at any Barnes and Noble or on Amazon. And although maybe not a very big name in early 20th-century physics, he was actually there in the midst of all those biggies while all that earth-shattering stuff was going down. You will get a lot of enjoyment, and good learning too, out of it.