These are two reasons why the Lewis & Crok estimates of future warming may be biased low. Nevertheless, their methods indicate that we can expect a further 2.1°C of warming by 2081-2100 using the business-as-usual RCP 8.5 emissions scenario, much greater than the 0.8°C warming already witnessed.

From comments by Ed Hawkins:

John, Lewis & Crok are part of the consensus. Their estimates are well within the IPCC range. And, yes, their method is observationally based, but it still uses a model, and as described in the post this approach is no panacea. Ed.

Comment from Nic Lewis:

Piers Forster
Hi Piers, I’ve just seen your comments on my and Marcel Crok’s report. In your haste you seem to have got various things factually wrong. You claim that the warming projections in our report may be biased low, citing two particular reasons. First:

I spent weeks trying to explain to Myles Allen, following my written submission to the parliamentary Energy and Climate Change Committee, that I did not use for my projections the unscientific ‘kappa’ method used in Gregory and Forster (2008).

Unlike you and Jonathan Gregory, I allow for ‘warming-in-the-pipeline’ emerging over the projection period. Myles prefers to use a 2-box model, as also used by the IPCC, rather than my method. His oral evidence to the ECCC included reference to projections he had provided to them that used a 2-box model.

I agree that the more sophisticated 2-box model method is preferable in principle for strong mitigation scenarios, particularly RCP2.6. If you took the trouble to read our full report, you would see that I had also computed forcing projections using a 2-box model. The results were almost identical to those using my simple TCR-based method – in fact slightly lower.

So your criticisms on this point are baseless.
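The comparison described in the comment above, between a simple TCR-scaling projection and a two-box energy balance model, can be sketched as follows. This is an editorial illustration only: all parameter values are assumptions chosen for demonstration, not the values used in the Lewis & Crok report.

```python
# Illustrative sketch (not the actual Lewis & Crok calculation) of a
# simple TCR-scaling projection versus a two-box energy balance model.
# All parameter values below are illustrative assumptions.
import numpy as np

F2X = 3.7      # W/m^2, forcing from doubled CO2 (standard value)
TCR = 1.35     # K, illustrative transient climate response
lam = 1.3      # W/m^2/K, climate feedback parameter (illustrative)
gamma = 0.7    # W/m^2/K, deep-ocean heat uptake coefficient (illustrative)
C, C0 = 8.0, 100.0   # heat capacities, W yr m^-2 K^-1 (illustrative)

years = np.arange(91)        # 90-year projection window
F = 0.04 * years             # illustrative linear forcing ramp, W/m^2

# Simple method: warming scales with forcing via TCR
T_simple = TCR * F / F2X

# Two-box model: surface box T coupled to a slow deep-ocean box Td
T = np.zeros(len(years))
Td = np.zeros(len(years))
for t in range(1, len(years)):
    dT = (F[t-1] - lam * T[t-1] - gamma * (T[t-1] - Td[t-1])) / C
    dTd = gamma * (T[t-1] - Td[t-1]) / C0
    T[t] = T[t-1] + dT       # yearly Euler step
    Td[t] = Td[t-1] + dTd
```

With suitable parameter choices the two projections can be brought close together; the claim in the comment is that, for the scenarios considered in the report, the two methods gave nearly identical results.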

Secondly, you say:

“in Figure 3, the CMIP5 models have been reanalysed using the same coverage as the surface temperature observations. In this figure, uncertainty ranges for both ECS and TCR are similar across model estimates and the observed estimates. This indicates that using HadCRUT4 to estimate climate sensitivity likely also results in a low bias.”

I have found that substituting data from the NASA/GISS or NOAA/MLOST global mean surface temperature records (which do infill missing data areas insofar as they judge justifiable) from their start dates makes virtually no difference to energy budget ECS and TCR estimates. So your conclusion is wrong.

Perhaps what your figure actually shows is closer to what we show in our Fig. 8: that is, most CMIP5 models project significantly higher warming than their TCR values imply, even allowing for warming-in-the-pipeline.

Armour et al.

One of the most insightful papers on climate sensitivity that I’ve read in a long time is by Kyle Armour et al. Kyle Armour also gave an invited talk in the same APS session as mine earlier this week.

Time varying climate sensitivity from regional feedbacks

Kyle Armour, Cecilia Bitz, Gerard Roe

The sensitivity of global climate with respect to forcing is generally described in terms of the global climate feedback: the global radiative response per degree of global annual mean surface temperature change. While the global climate feedback is often assumed to be constant, its value, diagnosed from global climate models, shows substantial time variation under transient warming. Here a reformulation of the global climate feedback in terms of its contributions from regional climate feedbacks is proposed, providing a clear physical insight into this behavior. Using (i) a state-of-the-art global climate model and (ii) a low-order energy balance model, it is shown that the global climate feedback is fundamentally linked to the geographic pattern of regional climate feedbacks and the geographic pattern of surface warming at any given time. Time variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating feedbacks of different strengths in different regions. This result has substantial implications for the ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation also reveals fundamental biases in a widely used method for diagnosing climate sensitivity, feedbacks, and radiative forcing: the regression of the global top-of-atmosphere radiation flux on global surface temperature. Further, it suggests a clear mechanism for the ‘efficacies’ of both ocean heat uptake and radiative forcing.

A paper published last December in the SIAM/ASA Journal on Uncertainty Quantification argues that it is more appropriate to compare the distribution of climate model output data (over time and space) to the corresponding distribution of observed data. Distance measures between probability distributions, also called divergence functions, can be used to make this comparison.

The authors evaluate 15 different climate models by comparing simulations of past climate to corresponding reanalysis data. Reanalysis datasets are created by assimilating historical weather observations into a single fixed model version throughout the entire reanalysis period, which reduces the effect of model changes on the climate statistics. The observations are used to reconstruct atmospheric states on a global grid, thereby allowing direct comparison to climate model output.

It has been argued persuasively that, in order to evaluate climate models, the probability distributions of model output need to be compared to the corresponding empirical distributions of observed data. Distance measures between probability distributions, also called divergence functions, can be used for this purpose. We contend that divergence functions ought to be proper, in the sense that acting on modelers’ true beliefs is an optimal strategy. The score divergences introduced in this paper derive from proper scoring rules and, thus, they are proper with the integrated quadratic distance and the Kullback–Leibler divergence being particularly attractive choices. Other commonly used divergences fail to be proper. In an illustration, we evaluate and rank simulations from 15 climate models for temperature extremes in a comparison to reanalysis data.
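The two proper divergences singled out in the abstract can be sketched numerically. The snippet below computes the integrated quadratic distance and the Kullback–Leibler divergence between binned (histogram) estimates of two distributions; the data are synthetic stand-ins for model output and observations, and the bin choice is an illustrative assumption.

```python
# Hedged sketch of two "proper" divergences between binned densities.
# Synthetic data stand in for model output and observations.
import numpy as np

rng = np.random.default_rng(0)
model = rng.normal(0.2, 1.1, size=5000)   # synthetic "model output"
obs = rng.normal(0.0, 1.0, size=5000)     # synthetic "observations"

bins = np.linspace(-6, 6, 49)
p, _ = np.histogram(model, bins=bins, density=True)
q, _ = np.histogram(obs, bins=bins, density=True)
w = np.diff(bins)        # bin widths

# Integrated quadratic distance between the two binned densities
iqd = np.sum((p - q) ** 2 * w)

# Kullback-Leibler divergence KL(q || p); a small floor avoids log(0)
eps = 1e-12
kl = np.sum(q * np.log((q + eps) / (p + eps)) * w)
```

Both quantities are non-negative and equal zero only when the two binned distributions coincide, which is what makes them usable for ranking model simulations against reanalysis data.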

Damn! I’ve been to Niagara Falls many times but never seen it like that. I’d love to visit it right now!

Interestingly despite the frigid cold there is very little snow in western New York where I grew up. The reason is the Great Lakes are almost totally frozen over so there’s very little “lake effect” snowfall.

Time “constants”
Climate sensitivity depends on time horizon and thus on the time “constants” involved. e.g. see:
Scafetta N., 2008. Comment on `Heat capacity, time constant, and sensitivity of Earth’s climate system’ by Schwartz. Journal of Geophysical Research 113, D15104. DOI: 10.1029/2007JD009586. PDF

Over short times of up to 2 years, Scafetta finds a short-term time constant tau1 = 0.39 ± 0.1 years (~5 months). Over a longer 20-year period, he finds a longer-term time constant tau2 of 8.1 ± 2 years to 12 ± 3 years.

Thus, the Earth’s climate may have entered a new mode (a new ~30-yr episode) near the turn of the 20th century: no further temperature increase, a dominantly negative PDO index and a decreasing AMO index might be expected for the next decade or two.

“Interestingly despite the frigid cold there is very little snow in western New York where I grew up. The reason is the Great Lakes are almost totally frozen over so there’s very little “lake effect” snowfall.”

Lake effect snow! Ocean effect snow! It snows more when oceans are warm and sea ice is thawed. It snows less when oceans are cold and frozen. That adjusts albedo and keeps temperature in bounds. Look at actual data. It always gets cold after it snows more in a warm period. It always gets warm after it snows less in a cold period. If you disagree, explain this in a different way that is reasonable.

Will Happer identified the fatal flaws in the IPCC IR physics 21 years ago. He was ignored. What he did not work out was that the aerosol physics of Carl Sagan, used to claim the ‘indirect aerosol effect’ supposedly hides AGW by making the clouds backscatter more solar energy, is the wrong sign.

So, the real AGW we had in the 1980s and 1990s was from Asian aerosols reducing cloud albedo, proved by the increase in Ocean Heat Content. The fact that we now have no warming shows that effect, which also explains the end of ice ages, has saturated and there is no significant CO2-AGW.

This is easily explained by the real IR physics, which is that increased ‘Forcing’ reduces NET surface IR and mechanisms in the atmosphere which keep LW out = SW in. We are now heading to a new Little Ice Age. Eco-fascists threatening thermageddon to establish a new Pol Pot Regime are desperately trying to panic politicians and public. However, they have lost.

Welcome to climate science, mpcraig. Processes that aren’t well understood are “parameterized” in the models with constant or nearly constant values. Global average humidity, cloud cover, albedo, etc. Mostly stuff to do with the water cycle but also aerosols, solar “constant”, and others.

Engineers will usually call such parameterizations “fudge factors”. If you have enough fudge factors you can paint any picture you want but you’ll probably get in trouble when the rubber meets the road, so to speak. The climate change narrative is in trouble now because of it.

“While the global climate feedback is often assumed to be constant, its value, diagnosed from global climate models, shows substantial time variation under transient warming.”

I think mpcraig was asking why it’s assumed to be constant and hence my explanation that it’s poorly understood and poorly understood processes tend to be parameterized with constants that engineers usually refer to as “fudge factors”. So ECS is treated like any other poorly understood process even though technically it is, as Armour states, a value diagnosed from models, which I also stated in my response.

AK I think what mpcraig is getting at is that ECS is produced as a constant for all scenarios for all time. I suspect his concern that ECS changes as the environment changes is well founded. In particular I think it’s capped pretty hard by the water cycle. Ocean surface temperature for instance just doesn’t go over 30C except in rare conditions. I believe that’s because so much water vapor is generated that clouds choke off the source of the warmth – solar short wave – and it doesn’t get warmer.

We can see this ceiling hit reliably at the beginning of every interglacial period where there’s runaway warming, temperature shoots up like a rocket, then hits a ceiling (the same ceiling) every time. The Holocene was an exception. Something happened during the runaway warming that took the wind out of its sails. The Younger Dryas in my opinion is the signal event. The Greenland Ice Sheet was spared, sea level rise stopped 6-9 meters short of the Eemian peak, and temperature peak stopped about 3C short of the Eemian peak.

Given the Eemian interglacial was 125,000 years ago we can rule out anthropogenic greenhouse gases and establish a ‘natural’ (as if humans aren’t naturally evolved organisms) variability for interglacial sea level and temperature that is 6-9 meters higher than present and 3C warmer. Ergo, whatever caused the naturally higher peaks in the Eemian might still be at work, just delayed by something like the Younger Dryas, in the Holocene.

1. In very simple “effects” climate models, ECS would be an input.
2. In GCMs it’s an output.
3. There are some studies that suggest that as an output it is variable, not a constant.
4. For the purpose of estimation, it is assumed to be constant over the period of interest.

One can question the assumption of linearity over time, or question whether it is really a constant, but you need a viable alternative to this assumption.

I offered hydrologic cycle as a mechanism that changes ECS in particular albedo changes driven by cloud type and extent. Global average specific humidity is a parameterized constant (fudge factor) in climate models as is albedo IIRC. I’m certainly not alone or in bad company with this unless you want to shrug off, for example, Richard Lindzen and Roy Spencer.

“AK I think what mpcraig is getting at is that ECS is produced as a constant for all scenarios for all time. I suspect his concern that ECS changes as the environment changes is well founded.”

I’ve always said it was a myth. Not a lie (that’s a metaphorical use of the word “myth”), but a metaphorical story intended to incent some action(s). As an actual number representing the effect of changing pCO2 on “Global temperature” (another myth), I wouldn’t be at all surprised if it could change, say, from 1.1 to 1.2 due to the volcanic destruction of a mountain peak in the Andes, or the disappearance of a glacier or even year-round ice-pack in West Tibet, or a variety of other changes to the boundaries of what has been called a “spatio-temporally chaotic system”.

I was actually answering mpcraig’s question: people want a constant they can feed into subsequent “predictive” models (of, e.g. the effect of certain changes to pCO2). “[T]he point of Armour et al.” is that they’re asking the impossible. IMO using any number for those predictive models is a mistake, because they yield results that people will depend on when they shouldn’t.

Mosher’s “…need a viable alternative.” is the basis of many of the problems that science encounters in its history. There is an inherent conservatism of thought in science that would surprise many lay folk who believe that scientists are out there looking for “new stuff.” Being great believers in KISS and the logic of Roger of Occam, there is a propensity to force the shoe to fit, even when it doesn’t fit very well, because the shoe has adequate parts to cover the foot. It’s important to realize that this is both a rational and very useful approach, but when it dominates a debate to the exclusion of alternative ideas, the science involved may be hanging itself with its own tie.

The geosynclinal theory of orogeny [GTO] (mountain building events) lasted as long as it did, despite very obvious crippling, even outright paradoxical, issues, because there was “no viable alternative” until the actual nature of mid-ocean ridges was finally established. The GTO was scientific, using known laws and properties of rock to construct an explanation of how mountain ranges with sedimentary rock at the crest came to exist. It also seemed to adequately explain how immensely deep sedimentary sequences came into being. What it could not explain well were horizontal offsets, overturned sedimentary sequences, or volcanic arc-trench pairings. Large horizontal transform faults like the San Andreas could not be properly dealt with. The simple truth was that most geologists simply could not accept the idea that the planetary crust might be both being steadily renewed and in constant motion in all planes.

The discovery of symmetric magnetic banding along mid-ocean ridges that correlated with known geomagnetic reversals finally showed where new crust formed and that the rates of crustal formation varied immensely around the globe. The resistance to this theory was pervasive and lasted for over a decade.

I had a structural geology professor who absolutely refused to consider the “absurdity” of continental drift ten years after the early publications and insisted that all students be conversant with geosynclinal theory instead. Right now, I would say that, were it not for the political-economic linkage, climate science would be in similar straits, except that there is no clear front runner as a “viable alternative.”

Yeah right. Mosher, a guy with an undergraduate degree in English and philosophy, has surveyed the entire atmospheric physics literature and unilaterally determined (but failed to publish his findings) that there are no viable mechanisms described that could make ECS a variable. ROFL

the fact remains that there isn’t even a single runner in the field for a viable alternative [to AGW as the principal cause of past warming, representing a serious potential future threat to humanity and our environment]. There are plenty of cranks. None of them viable.

You are wrong there, Mosh.

There are “viable alternatives”.

One very obvious alternative is that we are uncertain about the magnitude of natural factors in the past warming, so we cannot attribute it to AGW with any certainty, and therefore cannot project with any certainty what AGW will do to our climate in the future (a position which, it appears, our hostess shares).

Now if you had written:

the fact remains that there isn’t even a single runner in the field for an identified mechanism for a viable alternative [to AGW as the principal cause of past warming, representing a serious potential future threat to humanity and our environment]

You would be getting a bit closer to being right, depending on how one wants to classify the cosmic ray cloud mechanism (Svensmark et al., Shaviv), which has been verified but not yet quantified at CERN.

Good question. IMO it is spatially and temporally chaotic. I agree with SM that there are no viable alternatives to the existing treatment in the models and the estimates based on observational data, but I don’t find that argument very compelling regarding future predictive ability.

Question for the room:
If the stadium wave paper is essentially correct what will these observational based sensitivity measurement methods yield?

“AK I think what mpcraig is getting at is that ECS is produced as a constant for all scenarios for all time. I suspect his concern that ECS changes as the environment changes is well founded. In particular I think it’s capped pretty hard by the water cycle. Ocean surface temperature for instance just doesn’t go over 30C except in rare conditions. I believe that’s because so much water vapor is generated that clouds choke off the source of the warmth – solar short wave – and it doesn’t get warmer.
We can see this ceiling hit reliably at the beginning of every interglacial period where there’s runaway warming, temperature shoots up like a rocket, then hits a ceiling (the same ceiling) every time.”

Thanks, David for putting this so clearly. If such a thermostat exists then climate sensitivity to unit increase of atmospheric CO2 must decrease towards zero as more of the ocean surface warms to around 30C. Willis Eschenbach has assembled a lot of data pointing in this direction. What does the mainstream have to say?

I don’t think the mainstream has anything to say about why SST is capped at 30C. It was actually Willis Eschenbach who got me thinking about it. He wrote an article about ARGO a year or three ago and noted that, with rare exceptions, SST doesn’t exceed 30C. Readings up to and including 30C are quite common, then there’s a rare few scattered readings between 30C and 35C, and none over 35C.

35C happens to be the highest mean annual temperature ever recorded, set in 6 consecutive years in a tropical desert in Africa around 1960. Notably, CO2 growth since 1960 has not enabled the breaking of that record, which is also interesting. Of more interest: a blackbody at 35C has a radiant emittance of 511 W/m2, which is almost precisely the average insolation falling on the surface of tropical deserts. Even more interesting: the ocean’s average basin temperature is 4C, and the radiant emittance of a blackbody at that temperature is 334 W/m2, which is almost precisely the solar constant of 1366 W/m2 divided by 4, the divisor when the light is projected onto a rotating sphere.

Something else to think about: a deep body of water illuminated from the top has the essential characteristic of a greenhouse gas that distinguishes it from non-greenhouse gases, i.e., transparency to shortwave and opacity to longwave. Shortwave energy from the sun penetrates, at the speed of light and unimpeded, to a depth of hundreds of feet (impurities in the water, not the water itself, eventually absorb and thermalize the energy), and then that water must be mechanically lifted to the surface in order to lose the thermal energy. Easy in, not so easy out: the classic GHG effect. I rather think the global ocean is more responsible for greenhouse warming than is the atmosphere.

Lastly I have to consider what happens if the ocean is entirely or almost entirely frozen over, which has happened more than once in the Earth’s history. This would effectively shut down the vast majority of the GHG effect, as water vapor is frozen out of the atmosphere everywhere and shortwave cannot penetrate the ocean. Precipitation would stop. CO2 sinks would stop working. Volcanoes, however, would continue belching soot and CO2, and after millions of years would darken the frozen surface and build up atmospheric CO2 levels, the combination of which would eventually cause a melt.
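The blackbody arithmetic in the comment above is easy to check with the Stefan–Boltzmann law. A quick sketch (the blackbody idealization is, of course, an assumption for real surfaces):

```python
# Checking the back-of-envelope blackbody numbers quoted above with
# the Stefan-Boltzmann law, j = sigma * T^4 (idealized blackbody).
SIGMA = 5.670374e-8  # W m^-2 K^-4

def emittance(celsius):
    """Blackbody radiant emittance in W/m^2 at the given temperature."""
    return SIGMA * (celsius + 273.15) ** 4

print(round(emittance(35)))  # ~511 W/m^2, the figure quoted for 35C
print(round(emittance(4)))   # ~335 W/m^2, close to the ~334 quoted for 4C
print(1366 / 4)              # 341.5 W/m^2, insolation averaged over a sphere
```

So the quoted 511 W/m2 figure is exact, and the 334 W/m2 figure is within a watt or so of the blackbody value at 4C.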

Springer, SST appears to be capped at 30C because “average” ocean temperature is at 4C. The Dead Sea, which is landlocked, has a higher maximum surface temperature, ~35C. The difference should be close to the rate of poleward advection, or about 32 Wm-2, which establishes the sea ice edge. If, in about 400 years, the “average” ocean temperature rises to 5C, then the maximum SST will rise to about 31C, increasing the overall poleward advection by about 5 Wm-2, with a corresponding increase in high-latitude SST/surface air temperature. So SST is not “fixed” by any means at 30C, just limited by ocean sub-surface heat transport. It is an energy balance thing :)

Brian Rose has been doing quite a bit of work with Aqua and ridge world models.

“Springer, SST appears to be capped at 30C because “average” ocean temperature is at 4C.”

The upper and lower ocean are well stratified so that makes no sense at all.

“The Dead Sea which is land locked has a higher maximum surface temperature, ~35C.”

The Dead Sea is too small and too close to land to make and retain its own cloud cover cum thermostat.

“So SST is not “fixed” by any means at 30C, just limited by ocean sub-surface heat transport.”

We observe it not going above 30C in the open ocean. Yes, it is fixed at the surface at a maximum of 30C, as stratification prevents the mixed layer from any significant vertical mixing in any fixed column. The mixed layer temperature varies by latitude while the abyss temperature is the same everywhere.

David, in the ice ages bioproductivity probably increases as dust levels rise, increasing the mineral fertilization of the oceans, and the land that is left is well fed year round by glacial rivers. The flocculating effects of dust will send more marine biomatter down to the bottom, to be mineralized, cutting the levels of atmospheric CO2.

Your point about water behaving like a GHG is very important. The effect of 1 W/m2 of light at 500 nm is quite different from the effect of 1 W/m2 of light at 10,500 nm.

I didn’t criticize ARGO for sparse areal coverage, except perhaps underneath sea ice, which is probably important and is about 15% of total ocean area. I criticized it for not diving below 2000 meters, when the average ocean depth is 4000 meters and the cold conveyor belt returning from the poles rides the bottom, so we miss that, which is even more important. In neither case does this make a whit of difference to maximum surface temperature measurements. Are you high?

Steven, is it inconsistent for one to be skeptical of the usefulness of Argo in assessing OHC trends, but find Argo data useful for getting a good idea on maximum SSTs? Aren’t recorded SSTs over 30C rare, from Argo and other methods of measuring SSTs?

jim2, “CD – The 30 C limit of SST is hit during the day. It is a fast mechanism. Ocean heat uptake is a slow mechanism. I would expect it to operate over decades or centuries.”

It will take decades or centuries to shift, which is why I said it could take 400 years to shift one degree.

Springer, by sticking to “fixed” or physically limited to 30C, you are missing the subtle heat flows, other than latent, that currently help maintain an ~30C maximum SST. If the subsurface heat transport changes, the apparent maximum SST will change. You need to describe the controlling mechanisms instead of declaring that they exist. Brian Rose is attempting to do that with more complete ocean models.

CO2 vapor pressure in the atmosphere naturally goes up and down with temperature, just like in a carbonated drink. This is very basic physics. Open a hot and a cold carbonated drink and see the difference. They use this relationship and just say that the CO2 difference makes the carbonated drink get cold or warm. The oceans are huge carbonated drinks. They just have it backwards. The temperature difference makes the CO2 difference. They have cause and effect backwards.

Mpcraig: The planet’s mean temperature “should” be a function of the rate at which radiant energy enters and leaves. We call net changes in those rates forcings. For well-behaved functions, small changes will be approximately linear (consider the Taylor expansion). Within some linear range, climate sensitivity can be treated as a constant.
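The linearization argument above can be made concrete with the textbook no-feedback case: expand the outgoing blackbody flux j(T) = σT⁴ to first order around a reference temperature, and the "sensitivity" dT/dF = 1/(4σT³) comes out locally constant. The reference temperature and the 3.7 W/m² doubling forcing below are the standard illustrative values, not anything specific to this thread.

```python
# Sketch of the Taylor-expansion point: linearizing j(T) = sigma*T^4
# around a reference temperature gives a locally constant sensitivity
# dT/dF = 1/(4*sigma*T^3). Textbook no-feedback values, for illustration.
SIGMA = 5.670374e-8   # W m^-2 K^-4
T0 = 255.0            # K, effective emission temperature of the Earth

dj_dT = 4 * SIGMA * T0 ** 3   # first-order Taylor coefficient, W m^-2 K^-1
sens = 1.0 / dj_dT            # no-feedback sensitivity, K per (W m^-2)
dT_2x = sens * 3.7            # warming per CO2 doubling (F_2x ~ 3.7 W/m^2)
print(f"{sens:.2f} K per W/m^2 -> {dT_2x:.1f} K per doubling")
```

This recovers the familiar ~1 K no-feedback warming per doubling; the whole debate about ECS is over how feedbacks scale that number, and over whether the linear treatment holds.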

Climate sensitivity is obviously a function of time. TCR and ECS are climate sensitivity at two different times.

What if the relationship between temperature and radiation is not a well-behaved function? Then linearity may not be a reasonable assumption. We experience temperature variability that is not driven by forcing: ENSO, ice ages, ? If the relationship is chaotic on the time scale over which climate sensitivity is being measured, then an assumption of linearity is problematic.

tonyb, “Does the higher heat have anything to do with its extremely high salinity?”

Not really. The higher salinity is due to a net excess of evaporation over precipitation, i.e., the rain is more likely to fall on land than return to the ocean in the area of the evaporation. The north Atlantic is more saline than the south Atlantic.

jim2, daily variations are one thing, max SST another. The Persian Gulf, Arabian Sea, Dead Sea, or any tropical location where energy is not easily advected poleward can be as warm as 36C. The opening of the Drake Passage reduced “global” mean surface temperature by about 3C. The closing of the Isthmus of Panama also had a large “global” impact. Prior to that the limit was ~36C, and only ocean heat transport efficiency changed.

a. You don’t need a viable alternative if the null hypothesis has not yet been disproven, i.e., as yet it still looks exactly like a random walk of natural variability, which should not be underestimated even if it is not well understood, because it somehow managed to get the Earth in and out of ice ages all by itself.
b. ECS is a dependent variable that is directly forced by the assumptions built into the code re water vapour and clouds, plus the parameterised independent variables such as aerosols and natural variability effects, and several other unsubstantiated and pessimistic guesses. At the end of each timestep the overall value is updated. As Lindzen explained to the UK CC select committee, for lower sensitivities it takes just a few years to equilibrate, but for higher sensitivities it takes thousands of years. Hence for the higher sensitivities only the transient response can be used. These higher sensitivities depend on water vapour and cloud feedback assumptions. Read again: assumptions!

I hope this clears up the Mosher-based diversions that are based mainly on his listening to other people who also knew naff-all about discrete numerical methods.
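The equilibration-time claim attributed to Lindzen above can be sketched with a one-box energy balance model: for C·dT/dt = F − λT, the e-folding time is τ = C/λ, so a smaller feedback parameter λ (higher sensitivity) means slower equilibration. The heat capacity and the λ values below are illustrative assumptions, not anyone's published numbers.

```python
# Sketch: in a one-box model C*dT/dt = F - lam*T, the e-folding time
# is tau = C/lam, so higher sensitivity (smaller lam) equilibrates
# more slowly. Parameter values are illustrative assumptions.
C = 100.0   # effective heat capacity, W yr m^-2 K^-1 (illustrative)

results = []
for lam in (2.0, 1.0, 0.5):       # feedback parameter, W m^-2 K^-1
    ecs = 3.7 / lam               # equilibrium sensitivity, K per doubling
    tau = C / lam                 # e-folding time: years to ~63% of equilibrium
    results.append((ecs, tau))
    print(f"ECS {ecs:.2f} K -> tau {tau:.0f} yr")
```

Real-world timescales are far longer than this toy model's because the deep ocean adds a much larger, slowly accessed heat reservoir, but the qualitative scaling (higher ECS, longer equilibration) is the point being made.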

It’s all about feedbacks. These spell the difference between beneficial warming predominantly in high latitudes during the winter (ECS 1.2C) and catastrophic warming all over all the time (ECS > 2.0C).

Steve, they typically run conferences every two years in the US and in Europe, out of step, so the venue alternates: Europe, US, Europe, US. It is typically too expensive to send Ph.D. students and post-docs over the pond, but it is good for them to meet the people on both sides.

Judith: it seems a bit odd that we have to wait until AR6 to have the best and brightest review the state of the science in two areas: sensitivity and attribution. It seems a bit odd that we have to address the issue with the tired old machinery of publishing word-limited articles in peer-reviewed literature. And as much as I like technical discussions on blogs, I think it’s not up to the task at hand. An annual assessment of the state of the science would be a welcome addition.

I think Climate Assessment Reports should be discontinued altogether. What other branch of science has a body like the IPCC that produces 5-year assessments that read like a bandwagon manifesto? It’s a political body and the reports are propaganda.

Who suggested a body like the IPCC? Not me. So it doesn’t need to be anything like the IPCC, but that was a cute strawman.

If you look around you will find many areas of research, particularly in medical research, that have annual conferences and symposia on the state of the science. That would allow a place for publications like Nic’s.

Here’s a small sample of what you think doesn’t exist. They are not exactly what I would suggest, but they do demonstrate that
A) annual assessments in other sciences exist
B) it doesn’t require the IPCC
C) you are dumb

Sensitivity and attribution, two issues related and both at best quantifiable only roughly. Not really within a margin of error that is of much meaning.

It seems highly unlikely we will ever get a meaningful estimate of long-term ECS, and the theoretical number is most likely of little meaning to humanity due to the timescales involved. It would seem that the best number would be an estimate of TCR over the next 100+ years, but I am not sure that is really very important either. Unless we have a better understanding of the probable changes in both atmospheric and ocean flow patterns over timescales important to humans, how robust is our decision-making process going to be? It seems we are destined to make decisions based largely on emotional responses to the situation.

We should not “have to wait for AR6” to know whether or not CAGW (as outlined by the IPCC in AR4 and, in a slightly watered-down version, in AR5) is a viable hypothesis.

But, Steve, I sense that IPCC does not want this question answered definitively, because it fears that the answer will be as the L&C paper concludes and several recent observation-based studies on 2xCO2 ECS have suggested.

(Namely, that 2xCO2 ECS = 1.8C+/-, and that we will almost certainly not exceed 2C added warming over the next 100 years, a level at which the net overall impact on human society will still have been positive, according to the Richard Tol study).

Such a conclusion would eliminate the AGW fear factor and make IPCC redundant, and no bureaucratic organization wants to be disbanded.

So it is a matter of survival for the IPCC to keep the ECS question wide open rather than resolved.

Steve: you will have to find another organization than the U.N. Framework Convention on Climate Change and its IPCC to do a useful assessment of the climate science. They are totally biased, by design:

“The ultimate objective of this Convention and any related legal instruments that the Conference of the Parties may adopt is to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner.”

One can’t really trust the analysis on this subject by an organization whose raison d’etre is “dangerous anthropogenic interference with the climate system.”

David Springer Yes.
The IPCC are not subtle, either. The whole publishing sequence, where the politically edited summary is published first, followed months later by the “Science”, after the “Science” has been edited to support the summary, is farcical. It highlights the craven stupidity of the MSM. Selected environmental editors and reporters should be culled, bulldozed and burnt to celebrate Earth Hour every year.

I just thought I would chime in and say that Nic is probably the most underrated scientist working in the climate science field. The level of examination and detail in this discussion is beyond what you normally find in any scientific field.

Those who discount this study or his other work are destined to eat their words.

The foundation for this paper WAS published in the Journal of Climate.

These methodologies fail at the outset by erroneously admitting assorted GCMs as embodying evidence of humanity’s tampering with the global climate, without any finding whatsoever concerning the scientific validity of the methodology employed by the supposed experts of single-causation AGW model-making. From the beginning the models demonstrate their inherent unreliability: the marked differences between model predictions and actual conditions will always be the very most we can ever presently know for sure.

Take a look at the second picture down. Due to squabbles in the mid 19th century, that part of the sea wall carrying the line was never built to the same standards as the rest of the wall. It is single skin with no outer sea wall. It is that precise part that collapsed.

The line is some 160 years old. Over that time I calculated that it has been pounded by some 750,000 damaging waves (and many millions of much smaller ones).

It was well known that it was very vulnerable. Indeed the line was breached the very first year of operation by a storm and it has been breached numerous times since.

We really should have looked to the past in order to see what sort of standards this piece of infrastructure needed to be raised to. We have been fortunate in having a relatively benign climate the last 50 years which has lulled us into a false sense of security, but as our long history shows, that is the exception rather than the norm.

The worrying thing is that extremes appear to occur more during cold periods than warm ones. IF the climate is cooling (and I only say IF) we might get more of these events. If it stays in a benign warm period there are likely to be fewer extreme events (compared to those of the past, say pre-1850).

Personally I can’t see the climate sensitivity cited, and I have now looked at records going back 1000 years; things were much, much worse at times in the past. However, whether or not I believe the sensitivity claimed, we should at the very least expect that events as bad as those of the past will happen, and plan accordingly.
tonyb

The Sahel Drought of the 1970s (during that Global Cooling scare) was one of the monster climate events of its century, though nothing killed like the Chinese flooding in the 1930s. What’s worth remembering is the much greater scale of drought in the Sahel during the LIA. Put that together with dynasty-ending drought in China, Cambodia etc during the LIA, and it’s pretty clear you don’t need the “new climate” to belt humanity around in the worst possible way. Not that major warmings don’t have their hazards in places like SoCal. The only real “extreme” would be an absence of extremes – which is not a fun thought.

Climate Change! You even get big freezes during warmings and big heats during coolings. Can’t win some times, can you?
But, like Tony says, you can at least patch a seawall, and generally use your loaf.

Maybe you’d like to discuss the physics here rather than the fact that, as we all know, there is still some (natural) long-term warming (about half a degree) yet to come before 500 years of cooling, as between MWP and LIA.

Tonyb
Fine sensible comment as usual. What about part 2 of sea level rise.

I am so curious about times past like the Romans, because I read we are going into El Nino again. Can’t wait till they blame the coming rains in CA on global warming, after they blamed the drought on it. Plus the extra snow in the east and north of the US.
Scott

Thank you. I am currently working on ‘Tranquillity, Transition and Turbulence’, which traces climatic history through its transition from the tranquillity of the MWP to the turbulence of the LIA. You certainly wouldn’t have wanted to live through many periods of the LIA, when weather extremes were of a momentous nature.

Each major article takes up to a year to research, although it invariably gathers material that can be used elsewhere. For example, TTT is invaluable for seeing what sea level was doing as the snow fell and ice formed after the warmth of the MWP, and then started to melt from around 1750.

So I do have some material that will come in useful for the second sea level article. best regards

I have a question for the authors. I am assuming that the observational evidence used in the various studies from which this paper is derived did not take into account the multi-decadal dominant phases of ENSO? Is that true and if so, could the authors of this paper comment on whether they think this should mean that even these sensitivity estimates are too high, as they would have been skewed in the latest period by attributing too much of the temp rise to CO2?

Evaporation is obviously a difficult concept. It causes cooling at the surface and warming in the upper atmosphere. Where there is less water to evaporate – the surface stays warmer. Where do you think that might be webby? Should this make a difference to full depth tropospheric heat content? Or just the vertical distribution of heat?

Judith, thanks for this technical thread. The other was getting bombed.

One of the things I find fascinating is that Guy Callendar deduced the log nature of CO2 climate response in 1938. His hand-drawn curve says a doubling would end up causing 1.67 degrees C. He had little climate data, no satellite energy budget information, and no fancy GCMs. Just a general steam engineer’s understanding of how things actually work. Posted a while ago at Climate Audit. Maybe just a coincidence, but I don’t think so. Like Millikan’s oil drop experiment to determine the charge of the electron, the scientific method eventually converges on the ‘observationally correct’ answer. Less model and more observation seems to be doing the same for a general notion of ECS for our world as it presently exists in (the tail end of?) the Holocene. Loehle just published essentially the same result using completely different methods (along the thinking lines of Akasofu for subtracting natural variability from temperature data). See Loehle, A minimal model for estimating climate sensitivity, Ecological Modelling 276: 80-84 (2014). An unpaywalled version is available.

Much of the kerfuffle about this new paper is about how it got published where, and what it says about the IPCC (and by inference, about the AR4/AR5 ECS contributors). “How the IPCC buried Evidence” is a rather strong title for the long technical version, but correct. It has immediately provoked the usual inflamed immune response. I have yet to see a substantive criticism of the ‘new’ ECS estimates.

I think when ocean surface temperature reaches about 30C that’s the end of any further greenhouse warming for that region, as that produces enough clouds to block further shortwave heating, and ultimately shortwave is the only source of heat. So we’re left with the ocean surface warming into higher and higher latitudes. ECS is given as a global average, so if the above is true it must be variable, decreasing as an increasing percentage of SST hits the ceiling temperature. Over land it is a different story, as there isn’t a practically infinite supply of water to produce clouds.
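
The bookkeeping implied by this argument can be sketched in a few lines. This is my own toy construction with made-up numbers, not anything from the comment: `effective_sensitivity` is a hypothetical helper showing that if a fraction of the ocean sits at a convective ceiling and stops responding, the area-weighted global sensitivity must decline.

```python
# Toy illustration (my own construction): if a growing fraction of the ocean
# surface sits at a ~30C convective ceiling and no longer warms, the
# area-weighted global sensitivity shrinks accordingly.
def effective_sensitivity(s_base, ocean_frac=0.71, capped_frac=0.0):
    """Global-mean sensitivity when `capped_frac` of the ocean stops responding.

    s_base      -- sensitivity where the surface still responds (e.g. C per doubling)
    ocean_frac  -- fraction of the globe that is ocean (~0.71)
    capped_frac -- fraction of the ocean already at the ceiling temperature
    """
    responsive_area = 1.0 - ocean_frac * capped_frac
    return s_base * responsive_area

print(effective_sensitivity(3.0, capped_frac=0.0))  # -> 3.0 (no ceiling reached)
print(effective_sensitivity(3.0, capped_frac=0.3))  # 30% of ocean at the ceiling
```

The point of the sketch is only that a regionally saturating response forces the global-average number downward over time, which is the commenter's claim stated arithmetically.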

Land shows a faster response to forcing while the ocean has the lag due to heat uptake. The heat uptake lag is fat tailed in that it has a quick transient followed by a gradual tail. The land has no thermal mass behind it so shows a more immediate response.
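
A minimal sketch of that lag structure, with assumed timescales rather than fitted ones: land responds on a single short timescale, while the ocean response is a quick transient plus a long "fat" tail, here modeled as the sum of two exponentials.

```python
import math

# Fractional response to a unit step forcing (all timescales are assumptions
# for illustration, not fitted values).
def land_response(t, tau=1.0):
    """Land: one short timescale, near-immediate equilibration."""
    return 1.0 - math.exp(-t / tau)

def ocean_response(t, fast_tau=3.0, slow_tau=100.0, fast_frac=0.6):
    """Ocean: a quick transient plus a slow, fat-tailed remainder."""
    fast = fast_frac * (1.0 - math.exp(-t / fast_tau))
    slow = (1.0 - fast_frac) * (1.0 - math.exp(-t / slow_tau))
    return fast + slow

for years in (1, 10, 50):
    print(years, round(land_response(years), 2), round(ocean_response(years), 2))
```

Even decades after the step, the ocean in this sketch has realized only part of its eventual response while the land is essentially done, which is the qualitative behavior the comment describes.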

Web, an axiom isn’t a proof.
You have no way of knowing if the reason the land warms more than the oceans is because latent heat transferred from the oceans, has a bigger effect on land than on other bits of the ocean.
The BEST animations show waves of heat forming in the Pacific, and also the Atlantic, and crashing into the continents, causing waves of heat to ripple across the land.

You can get a good enough fit to the temp curve with a few major combinations of factors. Projecting it forward is an orders-of-magnitude more difficult problem – as can be seen in the divergence of the Lean and Rind graph from what we know actually happened.

The real question is about post 1975 warming. Is it mostly anthropogenic?

Available data says it isn’t. Available ERBS data shows cooling in IR (+0.7W/m2) and warming in SW (-2.1W/m2) – from cloud changes associated with ocean and atmosphere circulation feedbacks. Available ISCCP-FD data shows something very similar for broader coverage.

Only a God’s eye view enables proper attribution – else it is all just assumption and presumption.

JC SNIP Skippy has nothing to compare to the excellent fit produced by the consensus science CSALT model
Note how a training interval that stops at 1960 is able to provide enough scaling information to predict the detailed warming up to the present time.
The scaling pertains to CO2 forcing, SOI ENSO effects, etc.
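
For readers unfamiliar with what such a fit involves, here is a bare-bones version of the idea on synthetic data. This is my own illustration, not the CSALT code; the generated series, noise levels and coefficients are all invented. Regress temperature on log(CO2) and an ENSO-like index over an early training interval only, then extrapolate.

```python
import numpy as np

# Synthetic "temperature": a log-CO2 trend plus an ENSO-like term plus noise.
rng = np.random.default_rng(0)
n = 150
co2 = 290.0 * 1.004 ** np.arange(n)          # assumed exponential CO2 growth
enso = rng.standard_normal(n)                # stand-in ENSO index
temp = (2.0 * np.log(co2 / co2[0]) / np.log(2.0)   # 2C per doubling, assumed
        + 0.1 * enso
        + 0.02 * rng.standard_normal(n))

# Train on the first 80 points only, then predict the full record.
X = np.column_stack([np.ones(n), np.log(co2), enso])
coef, *_ = np.linalg.lstsq(X[:80], temp[:80], rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred[80:] - temp[80:]) ** 2)))
print(round(rmse, 3))                        # small out-of-sample error by design
```

Note that the good out-of-sample fit here is guaranteed because the synthetic data were generated by the same model being fit. The dispute in this thread is precisely whether collinear real-world inputs let such a regression attribute the rise correctly, which this exercise cannot settle.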

Cumulative SOI, because it is known to everyone but webby that this system adds to and detracts from warming on a decadal basis. The radiative flux data is particularly telling, and choosing to ignore it is a proclivity of certain types.

Collinearity is the problem – over a relatively short period where there is a large increase in anthropogenic forcing – as in the Ammann graphic from NOAA – and where there are other changes in forcing – as in the ERBS and ISCCP-FD data. Or by simply realising that decadal changes in Pacific conditions add to… You can get the increase by scaling one and totally missing the other.

There are a number of ways of doing this very simple exercise – and webby’s is particularly clumsy. And you can tell I am bored because…

JC SNIP Skippy believes that you must integrate the SOI for some reason, yet an integrated SOI will generate a step change when an El Nino hits. He apparently thinks that the temperature will rise and stay up there, which is what happens when a pulse is integrated.

In fact the temperature tracks the spikes of the SOI and doesn’t have the memory of an integrated SOI. Everyone in the world knows this based on what happened in 1998, but this aussie character has to buck the system.

He says these things because he is motiv@ted to say them, no matter if they are correct or not.

JC SNIP Of course the excursions of the SOI are related to the pause. However, it is not an integration that shows the average excursion but a running mean with the appropriate window size. That is also a lagged filter, and I use 6 months to get the best fit. It is well known that global temperature spikes lag the SOI spikes by about 6 months.
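
The difference between the two treatments being argued over is easy to demonstrate with a synthetic series (my own toy example, not either commenter's code): integrate a single El Nino-like pulse and you keep a permanent step; take a lagged running mean and the signal decays back to baseline.

```python
import numpy as np

soi = np.zeros(120)
soi[40:46] = 1.0                          # one 6-month El Nino-like excursion

integrated = np.cumsum(soi)               # cumulative SOI: a step that persists
window = 6                                # 6-month running mean, per the comment
running = np.convolve(soi, np.ones(window) / window)[: len(soi)]

print(integrated[-1], running[-1])        # -> 6.0 0.0
```

The integrated series ends at 6.0 and never comes back down, while the running mean peaks during the event and returns to zero afterwards, matching the 1998 behavior cited above.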

I’m afraid you haven’t yet explained WHY the warming of the land and sea surface temperatures moved in sync over more than 30 years from 1950 to 1986 and then suddenly started diverging over the next 27 years.

The explanation you gave either applies for the first 36 years or it applies for the next 27 years, but it CANNOT apply for both.

So back to my question, Webby:

Why did land and sea warm in almost perfect sync for 36 years and then diverge for the next 27 years?

Please no double talk. Just a straight answer will do – even if that answer is a simple “I don’t know”.

Loehle is again up to his old tricks of ignoring the gradual knee in the log(co2) sensitivity curve.

Is that the knee in the log curve, to the left of which it heads off to negative infinity? Based on that, Loehle also ignores the fact that it would be infinitely cold if the CO2 concentration were zero! My point being, the CO2 response may be “logarithmish” in some region, but it obviously can’t be a log response over the entire range. So I don’t see how one can validly assert that the shape of the CO2 response must have the same characteristics as the log curve.
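
The divergence is easy to exhibit numerically. For illustration only, I assume 2C per doubling and a 280 ppm baseline; each halving of concentration subtracts the same fixed amount, so the pure log form runs off to minus infinity as concentration approaches zero.

```python
import math

# A pure log response, dT = S * log2(C / C0), run downward in concentration.
def dT(c, c0=280.0, s=2.0):
    """Warming relative to the c0 baseline under a pure log response."""
    return s * math.log2(c / c0)

for c in (280, 140, 70, 1, 0.01):
    print(c, round(dT(c), 1))
# 280 -> 0.0, 140 -> -2.0, 70 -> -4.0, 1 -> -16.3, 0.01 -> -29.5
```

This supports the comment's point: whatever the true response is, it can only be log-like over some middle range of concentrations, not over the whole axis.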

We can see they’ve been in perfect sync since about 1998 with land lagging ocean by about 2 years. If you play around with the offsets you can see there hasn’t been any trend as you’d expect from TCS vs. ECS but rather an isolated period in time from 1980 to 1998 when land temp rose faster than ocean temp by 0.45C.

WebHubTelescope’s toy curve-fitting exercise doesn’t capture such interesting bits, of course; it just blurs everything together into gross trends.

Even now the land surface is continuing to warm at twice the rate of the ocean surface. What will Max do? Will he try to turn the clock back 100 years to when the land was warming at twice the rate of the ocean surface?

Manacker, too bad you cannot admit the land warming at twice the ocean warming since 1960, and also at twice the ocean surface warming since 1880. Must be a blind spot. Don’t use WoodForTrees, as it promotes intellectual laziness.

No Web. CSALT is intellectual laziness. It can’t connect events such as the 1980 and 1998 El Ninos as the start and end points of a temporary divergence in rates of land vs. SST warming. To do that requires eyeballs, brain, disparate facts, intuition, and knowing how to bring them all together in ways that can’t be done with a few simple equations. You’re letting a computer do your thinking for you. Computers don’t think; they compute. Only brains can think. The intellectual laziness is all yours. Or maybe not all brains can think and yours is one that cannot. I’ll leave that as an open question.

maksimovich, I don’t read too much into the absolute values, more the rates. The absolute values would depend on the mean latitudes of ocean and land, for example. I don’t know how they normalize these.

The “skeptics” are all wound up over not being able to comprehend the significant difference between land and ocean surface warming rates. This has been ongoing since we first started to emit large quantities of CO2 into the air.

Look at the long-term trends for the land to sea surface warming rates.
The Ocean / Land warming ratio has been around 0.5 since 1880.

These numbers do not lie. The ocean surface has warmed up by over 0.6C while the land by over 1.2C. Again, look at the graph and try to argue against the general trend.

What separates the dogs that don’t hunt, such as your springer spaniels, from the rest of us scientists is that we use math to do the detailed book-keeping for us. We realize that all the information can’t be retained and juggled in our head, so we use a model such as CSALT to disambiguate and constrain the valid solutions.

A good example of this is when MiCro says

“Mi Cro | March 9, 2014 at 8:25 am: The big issue is using global series for regional effects. The El Ninos reposition large amounts of warm water, then jet streams move moist heated air over the continents in different places.”

And of course all this model will get is ridicule because as the springer spaniel says ” You’re letting a computer do your thinking for you. “ — whereas in actuality, this is a case of pure algebra. No computer required.

It is more a matter of putting the pencil to paper, which dogs can’t do because they only have pause.

Above is a plot of land-only temperature from 1900 in red smoothed by a 10-year mean to get rid of the noise, SST in blue with the same smoothing, and Mauna Loa CO2 in blue smoothed by a 1-year mean to get rid of seasonal noise and normalized for comparison with temperature.

Nothing more than arithmetic. No algebra required to produce this plot.

Points to ponder:

1) land/ocean warming was identical from 1900 to 1940 with 0.4C total

2) cooling occurred between 1940 and 1980 with land cooling faster than ocean

3) warming occurred from 1980 to 2010 with land warming much faster than ocean

4) CO2 content grew consistently through the entire record

There is no consistent response from the climate to increasing CO2. Land sometimes cools and sometimes warms, ocean sometimes cools and sometimes warms, sometimes land warms faster and other times land cools faster and yet other times there is no difference between land and ocean.

Yet another interesting tidbit your simple algebra equations didn’t reveal is that land/ocean warming/cooling rate became the same again beginning in 1998 coincident with the beginning of “the pause”.

A consistent picture emerges from these plots and the cleaner version I posted above. Before 1980, the ocean was able to keep up with the forcing change, including a period of reduced forcing after 1940 (solar, aerosols?) where land responded slightly faster, if you look closely. Then after 1980, which is a period in which we have emitted half of all the CO2 since the industrial age, the land far outpaces the ocean. This is a period of more rapid forcing changes than before, so it makes sense that we see this divergence. Obviously, no proposed natural variability could produce such a divergent land/ocean response. It is a response to external forcing that exhibits the different inertias in the system.

“The stadium wave obviously impacts ocean and land differently and with different lags.”

Yeah. Obviously. For forty years from 1900 to 1940 the old stadium wave kept land and ocean temperature rise in perfect synchronization. Then for the next 40 years from 1940 to 1980 it cooled land faster than the ocean. Then for the next 20 years from 1980 to 2000 it warmed land faster than the ocean. Then for the next 14 years (and counting) land/ocean warming is back in synchronization.

Yeah boy, that stadium wave sure is a rock solid explanation. /sarc

Meanwhile CO2 just keeps on climbing with no correlation whatsoever with land or ocean warming or cooling.

Jim D: The picture that emerges is that land can warm faster than the ocean, it can cool faster than the ocean, and it can warm or cool at the same rate as the ocean. Meanwhile CO2 rises steadily in the background with no correlation whatsoever with what land or ocean temperature is doing. Thanks for playing. Better luck next time.

It is obvious that land temperatures can show lags with respect to sea surface temperatures. The stadium wave shows lags with respect to different ocean bodies. That is why the longer-term defluctuated trends need to be compared.

We have become victims of instantiate thinking – i.e., a reductionist logic where we accept as our understanding of the whole of something nothing more than the creation of the mind of someone else, based only on a few snapshots. For example, with respect to global warming alarmists, we are looking through the blinking eyes of government scientists who in turn have turned the multiple interacting instantiations of an infinitely grander reality ruled by natural causes – comprised of global atmospheric, oceanic and celestial dynamics – into arrays of numbers, to make still images that are simple enough to be easily understood and yet have little if any usefulness outside a digital world.

Steven Mosher said;
“One can question the assumption of linearity over time, or question whether it is really a constant, but you need a viable alternative to this assumption.”
No, you’re supposed to establish rigorous ‘ab initio’ robustness *before* you go spraying prediction from your (semi) intelligently designed teleological apparatus.

So was the following assumption actually explicitly laid out ab initio?

We’re assuming the “climate sensitivity” is a constant that is effectively non-responsive to any sort of changes to boundary conditions of what is a very complex non-linear system that exhibits temporo-spatial chaos.

It has certainly been implicit since day one. My early questions on the subject were met with contemptuous (and contemptible IMO) intellectual hooliganism, which is part of why I concluded the whole “global warming” thing was largely a “stalking horse” for a socialist agenda.

“The history of every major galactic civilisation tends to pass through three distinct and recognisable phases, those of Survival, Enquiry and Sophistication, otherwise known as the How, Why and Where phases.
For instance, the first phase is characterised by the question
How can we eat?,
the second by the question
Why do we eat?,
and the third by the question
Where shall we have lunch?”

Using a single constant is reasonable to nail things down; Sophistication comes later.

AK makes assumptions about a socialist agenda. That’s so funny on a thread that discusses assumptions. Ironic.

AK made a conclusion “about a socialist agenda.” Subject to reevaluation with further evidence.

OTOH, among the further evidence I have is a statement of yours: something about people who won’t “cross that thin green line” losing it or something when the subject of CO2 capture came up. How you can tell their agenda “isn’t about carbon”.

I’m sure there are other valid reasons for working so hard to blur the distinction between mitigation and remediation than using the urgency of putting an end to fossil carbon emissions to support ending the Industrial Revolution. I just can’t think off-hand what they are.

Unless it’s what Napoleon (IIRC) said about never attributing to human malice what can be explained by human stupidity. I suppose their general failure to pick up on the real threat from “sleeper weeds” and other direct effects of increased pCO2 on ecosystem stability would militate towards that.

But I prefer to give people the benefit of the doubt: better red than dumb.

WebHubTelescope: Start from the position of arguing why the current TCR we are seeing of 2C is wrong.

That is an estimate based on a particular model fit to extant data, that assumes TCR to be fixed. The model has not been tested against future data. We can’t tell whether it is wrong or right. There are other models that fit the data approximately as well, that have no CO2 effect at all, but they also have not been tested against future data, so we can’t tell how accurate they are either.

However, warmer conditions produce more cloud cover, so the assumption of a constant TCR is likely wrong. That undermines the possibility that the model with current parameter estimates will continue to fit well in the future.

The assumption of constant TCR (and constant ECS) is made because models with non-constant TCR are intractable or will likely produce singularities in the estimation procedure.

Warming from 1880 is not trustworthy in hundredths of degrees average across the entire globe. Get serious. Use satellite data from 1979. It’s solid data, it’s global, it’s precise enough for the purpose, and with 35 years that’s enough for two and a half Santers (Santer = 14 years).

WebHubTelescope: Take the warming from 1880 and the growth of CO2 from 1880 and it shows a TCR of 2C.

Take the warming from 1960 and the growth of CO2 from 1960 and it shows a TCR of 2C.

That is an example of modeling extant data. You can model extant data with a constant climate sensitivity, as I wrote. If cloud feedback blunts or reverses the effect on warming at CO2 levels higher than they are now, then the climate sensitivity is not constant. Whether cloud feedback is + or – isn’t known.

The slow feedbacks can also start to kick in. These include albedo changes and greenhouse gases that may start outgassing. There is no evidence that clouds will compensate for the higher specific humidity that is leading the fast feedbacks.

Start from the position of arguing why the current TCR we are seeing of 2C is wrong.

Because we are not seeing it, is the first reason, Webby.

bi2hs and Matthew Marler have explained why.

It is based on an “argument from ignorance” (the logic of: “we can only explain X if we ASS-U-ME Y”).

But it goes back to this logical fallacy:

1. Our models cannot explain the early 20thC warming cycle.
2. We know that the statistically indistinguishable late 20thC warming cycle was caused by CO2.
3. How do we know this?
4. Because our models cannot explain it any other way.

The models explain the early 20th century warming perfectly well. The log sensitivity takes care of this.

You’re wrong on two counts there, Webby.

In AR4 IPCC concedes that the models missed the early 20thC warming (see the discrepancy between the observations and the climate model predictions for the period 1910-1944 in FAQ 8.1, Figure 1). The models predict around one-third of the observed warming (0.21C vs. 0.53C over the 35-year period).

In AR4Ch.9 (p.691) IPCC states

Detection and attribution as well as modelling studies indicate more uncertainty regarding the causes of early 20th-century warming than recent warming.

Oops!

So much for your first mistake.

Now to “log sensitivity”:

According to ice core estimates, atmospheric CO2 was at 299 ppmv in 1910 when the early 20thC warming cycle started and 309 ppmv in 1944 when it ended.

The statistically indistinguishable late 20thC warming cycle started around 1975, when Mauna Loa measured 330 ppmv and ended in 2005 with CO2 at 379 ppmv.
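
Using the concentrations quoted above, the constant-log-sensitivity expectation for the two warming cycles can be computed directly. The 2C-per-doubling figure is assumed purely for the sake of argument.

```python
import math

def co2_warming(s, c_end, c_start):
    """Warming expected from a CO2 change under a constant log sensitivity s."""
    return s * math.log(c_end / c_start) / math.log(2.0)

early = co2_warming(2.0, 309.0, 299.0)   # 1910-1944, per the ice-core figures
late = co2_warming(2.0, 379.0, 330.0)    # 1975-2005, per the Mauna Loa figures
print(round(early, 2), round(late, 2))   # -> 0.09 0.4
```

In other words, under a constant log sensitivity the CO2 rise alone would account for roughly four times as much warming in the later cycle as in the earlier one, which is the point being made about the two "statistically indistinguishable" cycles.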

Loehle obviously did it wrong, otherwise he would have arrived at a value close to 2C for TCR, not the low-ball estimate of 1C that the denier cabal has agreed to present as a result of back-door meetings.

Thanks for sharing the link and for your TCS derivation, which I believe can be reasonably thought of as an upper bound.

Your observational data has a 35% increase in CO2 over 132 years – much slower than the standard 100% rise over 70 years used in a GCM-derived TCR. Therefore, it has a certain (significant) built-in component of equilibrium response, which ought to make your forecast run hot. Strangely, a few had criticized your 82-yr forecast because it did not use an ECS!

And I noticed a similar complaint against Lewis and Crok for using a 70 yr TCS in a 66-84 year forecast. (wth?!)
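
The rate mismatch being flagged here is worth a line of arithmetic (my own, using the figures from the comment): a 35% rise over 132 years is a far gentler ramp than the idealized 1%-per-year increase (doubling in about 70 years) used to define TCR in model experiments.

```python
# Annualized CO2 growth rates implied by each scenario.
observed = 1.35 ** (1 / 132) - 1      # 35% over 132 years
idealized = 2.0 ** (1 / 70) - 1       # doubling over 70 years
print(round(100 * observed, 2), "% vs", round(100 * idealized, 2), "%")
# -> 0.23 % vs 1.0 %
```

The observed ramp is roughly a quarter of the idealized one, which is why the commenter argues the historical record carries a built-in component of equilibrium response that the 70-year TCR definition does not.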

McShane and Wyner found absolutely no signal whatsoever in Mann’s proxy data and yet here we have Thorarinsdottir, et al., looking for the optimal method of determining whether the ‘fingerprint’ of man still may properly be found within the output of assorted GCMs.

Snow depth varies a lot during the winter months, but it’s gone by August, so only what falls in one winter is there. According to the Russkies who endeavoured to measure it, the average maximum depth is 34 centimeters, reached in May.

OK David, thanks, a little something for the pair of us to ponder.
Imagine that changes in the amount of heating at the equator alters the salinity of the surface of the Arctic. Now how much ice will be generated during the Arctic night, as a function of salinity? Will salty brines sink before ice forms, as they become dense more quickly than low salt brines, that need a lot of water removed?

Dr. Curry,
I assume this sentence in Nic Lewis’s comment is missing the word “no” between “virtually” and “difference.” “I have found that substituting data from the NASA/GISS or NOAA/MLOST global mean surface temperature records (which do infill missing data areas insofar as they conclude is justifiable) from their start dates makes virtually difference to energy budget ECS and TCR estimates.”

There IS a link between magnetism and the scattering (birefringence) of light by ice crystals, called the Cotton-Mouton effect. For ice, the effect is positive, by .004. More magnetism, more “scattering”.

We know that atmospheric H2O is more prevalent than CO2, by a factor of 2.0 percent to 0.04 percent or 50 to 1.

Evidence for the prevalent existence of ice crystals in the atmosphere… A Government of Canada web site says this: “Although there is still some debate in the scientific community about how the electrification of clouds actually occurs, it is agreed that the separation of positive and negative charges must occur within a cloud for lightning to take place. It is also generally agreed that ice must be present within a developing storm for it to eventually form lightning.” (NOTE: of the 3,000,000,000 lightning strikes per year, 90% or cloud to ground. Much more lightning occurs in lower latitudes; Florida is the record holder.)

So if you think of ice crystals as tiny blinds, and magnetism as the pull cord, and relate the recent very low magnetic strength of the sun, the amount of time the sun took to reverse poles this year, and the strength of the sun’s magnetic field relative to the earth’s magnetic field, you build a very strong case for a direct link between the sun and the impact of the greenhouse effect of H2O in the form of ice crystals.

(NOTE: the strength of the sun’s magnetic field is directly proportional to overall solar activity, and follows the solar cycle like a puppy dog)

The “transience” of H2O in the atmosphere (cycle time 9-10 days) makes its position more likely a “regulating” factor than a spiralling cycle factor.

CONCLUSION:

With the contribution of H2O to the overall greenhouse effect being so vague (36-72%), and given that it is variable, you have a very large potential for magnetism-related variances in the GWP of H2O to overshadow CO2.

FURTHER NOTES:

The sun is very weak, magnetically right now.
The last solar magnetic pole reversal took much longer than normal, 9-10 months vs 3-4 months. For a large part of last year, the sun had TWO south poles.
The North Magnetic Pole is actually a south pole, magnetically speaking, that is why the North end of magnets point to it.
The earth has an orbit that is typically off by 7 degrees from the equator of the sun’s magnetic field. For half the year we are slightly more in one end of the Sun’s magnetic field than the other. But remember, every 11.5 years the sun’s field “FLIPS”.
The earth is “tilted” by 23 degrees from our orbit. But the “tilt” of the magnetic poles is different, and is currently about 10 degrees off from the geographic north pole, making it a “blend” between the tilt of the earth and the inclination of our orbit.
The magnetic poles move significantly, and not always at the same rate; the north pole was travelling south from at least 1600 to about 1800, at which point it turned north again. It moved 1100 kilometers in the 20th century.
The shape of our “magnetosphere” is not regular, nor is it static. It is “squashed” on the solar side and stretched on the non-solar side. It even reacts to CMEs, getting more “squashed” on the solar side and more extended on the non-solar side.
The heliopause is roughly 38 billion kilometers in diameter, or about 125 AU in radius, meaning that earth’s orbit around the sun is about 1% of this distance.
Earth is the densest planet in the solar system, but it is not the most magnetic: Jupiter’s magnetic field is by far the strongest of the planets, with Saturn’s second.
Jupiter orbits the sun every 11.86 years, and the sun has a roughly 11-year cycle, with variances related to the strength of preceding cycles and the length of time it takes the sun’s poles to flip.

It is difficult to conceptually “see” the relationship between all the angles, orbits and rotations. I wish I had a big physical model of the sun and earth, and am thinking of trying to construct one.

p.s. Judith, as this is the thread for technical discussions, can we have a thread created for less technical discussions too? For jokes, poems, waxing quixotic? Also, I was really pleased that magnetism was mentioned in your address to the APS. I think it will eventually get a higher billing in discussions and analysis as time progresses.

I read your post up to the “NOTE: of the 3,000,000,000 lightning strikes per year, 90% are cloud to ground,” then stopped reading. I am not sure what you mean by this, and also not sure what difference it makes in your argument. However, we see lightning as coming down to us, but what actually happens (from what I have read on the matter) is that lightning proceeds from cloud to ground and from ground to cloud at the same time, meeting somewhere in between. I am sure that I have seen time-lapse photos to this effect, because it is unintuitive.

It seems to me there is more metal in the northern hemisphere, and more meteor strikes (the one creating the nickel deposit in Sudbury, Ontario; the one making the initial depression where the Gulf Coast is).

There obviously has been more life in the northern hemisphere, as life created the continents, and the oil deposits.

The issue of birefringence may have more impact in the northern hemisphere than around the equator, because the angle of incidence of light passing through the atmosphere is lower, and therefore the path of the light often passes through the atmosphere for a longer distance. And there is more land mass to accumulate ground-based ice.

The poles are where magnetic field strength is the greatest, but there could be less impact at the south pole because it is dry there, a large part of Antarctica technically being desert, I think.

I added the note about the lightning because it is evidence of the existence of ice crystals in the atmosphere even in warmer climates near the equator (it may seem counterintuitive to some). Water in liquid form does not cause birefringence; only water in ice form does, something to do with its crystalline structure. Does that help it seem less of a distraction and more of a relevant factor?

Ice crystals are hexagonal and therefore have the property of birefringence. What I don’t get yet is how that fact, or the fact that ice exists in clouds, shows causation of the earth’s magnetic field changing climate.

I have a graph from somewhere that shows lightning declining from roughly 2,000,000 to 1,000,000 to 500,000 strikes over 2008-2010, during the same period as a decline in solar activity. It says it is from WeatherMatrix, data by NOAA, so I am guessing US. I can re-source it.

But I have also read from some US weather service or other that lightning used to be a hard statistic to get worldwide, so I don’t know the validity of the graph I downloaded.

Interestingly, both wildfire statistics and death-by-lightning statistics have patterns that somewhat match solar activity in terms of shape. But then in warmer years there would be more people outdoors more of the time, and there is the same potential for a false relationship with wildfires, which are easier to start some years than others. Still interesting.

This article from NASA in 2003 is titled “SURPRISE, Lightning has a big effect on the atmosphere”: http://www.nasa.gov/centers/goddard/news/topstory/2003/0312pollution.html
It says:
– Lightning, however, directly releases NOx throughout the entire troposphere.
– During the summer months, lightning activity increases NOx by as much as 90 percent and ozone by more than 30 percent.

So if solar activity affects lightning strikes, and lightning strikes affect NOx and ozone, then there is a causal relationship of some sort to be found between solar activity and climate. But I can’t imagine this hasn’t been thought of, found, and correlated correctly before; it’s too obvious, and too much in the realm of awareness of climatologists. I like looking for things I imagine climate investigators wouldn’t have found (like variable birefringence, and variable radioactive decay), because they are recent, and more of a physics researcher’s type of deal.

Please allow me to correct myself: the graph I had in my hand was UAH satellite-based temperature, not solar activity, and lightning tracks against it amazingly well. The lightning information I saw was here; it seems legit, and makes sense that it would, but I don’t have corroboration. http://nrm.salrm.uaf.edu/~dverbyla/gradstudents/dorte.html

There doesn’t seem to have been a lot of statistical data gathering for lightning, as I said.

I found this statement:
Using Very Low Frequency (VLF) wire antennas that resemble clotheslines, Prof. Price and his team monitored distant lightning strikes from a field station in Israel’s Negev Desert. Observing lightning signals from Africa, they noticed a strange phenomenon in the lightning strike data — a phenomenon that slowly appeared and disappeared every 27 days, the length of a single full rotation of the Sun.
at this web site: http://www.sciencedaily.com/releases/2009/11/091111142518.htm

So, looking for a causal relationship between the solar cycle and climate change that may so far have slipped through the cracks: perhaps the chain from solar activity to lightning to NOx/O3 production to temperature hasn’t been put together yet?

I really do question the IPCC’s assessment of the limitation of solar variation on changing our climate.

I suppose you could call it a remodel. If it conforms with observation, I can’t see how it could be challenged; but if there are accelerated future projections, then those assumptions certainly could be, I would think.

Wikipedia: “Climate sensitivity is the equilibrium temperature change in response to changes of the radiative forcing. Therefore climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data.”

Accurately? Probably between 1.1 and 2.6? Why does official climatology make itself critically dependent on a non-measurable quantity? A castle built on sand? The buzzword CMIP means “Coupled Model Intercomparison Project”. Why are we comparing models to models, and not to reality? What does this have to do with science?

Agreed. The fact we make claims, and policy, and sway the spending of so much effort and so many resources on a premise, is boggling to me.

Before we impose tariffs, and lay people off, and close plants, and exchange funds nationally, I would have expected to see the term Global Warming AMOUNT replace Global Warming POTENTIAL.

But then look at the stupidity of fears related to Y2K, 2012 (as wrongly related to “crossing the galactic plane”), acid rain, the hole in the ozone layer, and so on. We are always seeing monsters under the bed and reacting before we shine a flashlight under there and make sure we need to. Mass belief that we matter, but fear that we might not, that we might be at fault, that we can do something about it: all extensions of human personality traits.

I am sorry, but I am a fundamentalist who believes in the scientific method. Unless and until climate sensitivity has actually been measured, any estimates are nothing more than guesses. So my guess that the climate sensitivity for CO2 added to the atmosphere from recent levels is 0.0 C, to 1 decimal place or 2 significant figures, is just as good, or as bad, as any other number. All we can do is get an idea of what the maximum value of CS might be from observed data.

There is a reason why parliament listens to Nic and not to you.
There is a reason why the GWPF publishes Nic and not you.

Here is a hint: you don’t understand the scientific methodS.

I’ll repeat this advice to skeptics: if you want a debate you are welcome to join it. Look at what Nic has done. Look at what McIntyre has done, Troy Masters, Anthony Watts… they all get a real hearing. The Cripwells and Cottons and Springers of the world, on the other hand…

We had a handbag fight where you insisted that there was no categorical difference between estimates and measurements. I suppose you still believe this. I don’t, and I do indeed understand the scientific method. Would you like to educate me, or are you afraid that I will prove you are wrong again?

Jim, you don’t have to be right, only righter.
Newton measured the speed of sound first by clapping, and then by having a blacksmith bang out a beat while he walked away until the image was back in sync with the sound.

A hearing? The internet never forgets Mosher. That’s why I invented it. One of the reasons anyway. Whatever I have to say about climate is, was, and continues to be dated and recorded for posterity. If you were a bit scientifically literate you’d know that, for example, Gregor Mendel posthumously became the acknowledged father of genetics. His work was ignored for 30 years. I shall be vindicated sooner or later. It’s just a game in any case. Don’t take yourself so seriously. No one else does.

“Jim out of curiosity how does one directly measure climate sensitivity?”

At the time I am writing this, Jim has not responded.

This is my ‘cut’. Jim is free to correct me or chastise me, as appropriate.

First, a definition of terms:

a. climate science: the study of the earth’s climate system, the factors that influence it, and how they all work together to produce the micro and macro climates that we find ourselves surrounded by.

b. Climate Science: the study of how atmospheric CO2 controls the Temperature of the Earth (TOE), why adding Anthropogenic CO2 (ACO2) to the atmosphere is causing the TOE to rise precipitously and dangerously, and, in coordination with political bodies, recommending policies that should be implemented to ‘control’ ACO2.

I will assume from the title of Dr. Curry’s post that your question is in the context of ‘Climate Science’.

The short answer is: One doesn’t. And one can’t.

However, we’ll rephrase the question as:

Postulate (not theory): The concentration of CO2 in the atmosphere is the primary factor controlling the TOE.

To Prove: If the concentration of atmospheric CO2 doubles, how much will the TOE increase?

Data:

a. Except for regular seasonal variations, the concentration of atmospheric CO2 has been increasing monotonically with time since we began monitoring it.

b. Over that same monitoring period we have observed extended periods during which the TOE has increased, decreased, or remained statistically flat. (We are experiencing such a period now, where the slope of the TOE trend line over the last 17, and counting, years has been so small that there is ongoing controversy as to its sign.)
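The sign-ambiguity claim in (b) can be made concrete: over a short window, the standard error of an ordinary-least-squares trend in noisy data can exceed the trend itself, so the fitted slope’s sign is not statistically resolved. A minimal sketch with invented numbers (the trend and noise magnitudes here are illustrative, not real temperature data):

```python
import random
random.seed(0)

# Synthetic "17-year" annual series: tiny trend plus larger year-to-year noise.
n, true_slope = 17, 0.003   # deg C / yr, illustrative only
years = list(range(n))
temps = [true_slope * t + random.gauss(0, 0.1) for t in years]

# Ordinary least squares slope and its standard error.
xm, ym = sum(years) / n, sum(temps) / n
sxx = sum((x - xm) ** 2 for x in years)
slope = sum((x - xm) * (y - ym) for x, y in zip(years, temps)) / sxx
resid = [y - (ym + slope * (x - xm)) for x, y in zip(years, temps)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5

print(f"slope = {slope:+.4f} +/- {se:.4f} deg C/yr")
# When |slope| is comparable to (or smaller than) ~2*se, the sign of
# the trend is not statistically resolved over this short window.
```

The point is not the particular numbers but the ratio: with noise of this size, 17 annual points give a slope uncertainty of roughly the same order as the trend itself.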

Climate Science claims to have solved the problem.

Completion of the exercise, using actual TOE and CO2 history, is left to the student, as is evaluating the methodology used by Climate Science to determine if their solution is convincing.

Bob Ludwick, that is very unclear to me. Maybe leave out the stuff that I think is about the manipulation of the science by the political process and just tell me how one directly measures sensitivity. The last part hints at measuring forcing change (CO2) and energy accumulation (TOE??) and working it out from there. Isn’t that the observational approach of L&C?

“Bob Ludwick, that is very unclear to me. Maybe leave out the stuff that I think is about the manipulation of the science by the political process and just tell me how one directly measures sensitivity.”

Sorry HR. I was just trying to make the point that Jim has tried to make, often, that absent the political process there is no actual, empirical evidence that there is a ‘sensitivity’ to ACO2 to measure, directly or indirectly. Until there is, it is political, not scientific, to postulate one, proclaim that you have measured it, and use it in models, the outputs of which are cited as justification for political control of ACO2.

The example I used, of the input (CO2) increasing monotonically while the output over which it is putatively the prime influence (TOE) goes up, down, and sideways, leads Jim (speaking out of turn for Jim) and me to conclude that trying to determine the ‘sensitivity’ to a doubling of an input, when even the direction of the output is in doubt, never mind the magnitude, is intrinsically political.

For ‘climate science’ the existence of a ‘CO2 sensitivity’ is a theory subject to empirical confirmation, or not. As far as I know, actual data has neither confirmed nor falsified it.

For ‘Climate Science’ the existence of a ‘CO2 sensitivity’ is an axiom. A given. All data, if properly modeled and analyzed, confirms it. Climate Science is currently haggling over its magnitude, trying to establish a sensitivity range so that the minimum is plausibly near to observations while keeping the maximum high enough to make ACO2 catastrophe believable.

Bob, you seem to be living in a world of absolutes where we must live in complete darkness until the moment the full truth is revealed to us. Science doesn’t progress like that; you use what you have to hand and try to improve on it.

The difficulties/uncertainties you talk about seem to be acknowledged by all.

Jim
There is no categorical difference. There are practical differences.
Sometimes they matter, sometimes they don’t.

Let me define categorical for you.

There would be a categorical difference if one gave you knowledge and the other did not.
There would be a categorical difference if one could never be wrong and the other were always correct.

We prefer direct measurement. Why? Because it requires the fewest assumptions (like: my measuring standard is unchanging) and its intersubjective agreement is high: if you and I measure, we should both get the same thing.
But we can’t measure everything directly, so we have indirect measurement. That’s where we measure one thing (length and time) and infer another thing (speed) by using physical theory.
And some things are hard to measure indirectly, and so we can only estimate.

So there is a continuum between direct measures and estimates. But the difference is not categorical. There is no line, no place along that line, where we can say with CERTAINTY that knowledge is on one side and ignorance is on the other side. We can certainly say that we prefer measurements. We can say that measurement trumps estimation. But drawing a line and saying there can be no knowledge and no practical know-how when one estimates reduces our scope of knowledge to the point where we are know-nothings. That is why you are a know-nothing.
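The length-and-time-to-speed example above can be written out, with a naive first-order uncertainty propagation, to show how an indirect measurement stacks assumptions on top of direct ones while still yielding knowledge. The numbers here are invented for illustration:

```python
# Indirect measurement: measure distance and time directly,
# infer speed through the physical relation v = d / t.
# For a quotient, fractional uncertainties add to first order.

def inferred_speed(d, t, d_err, t_err):
    """Return inferred speed and a first-order propagated uncertainty."""
    v = d / t
    v_err = v * ((d_err / d) + (t_err / t))
    return v, v_err

# Invented example: 100 m measured to +/-0.5 m, 9.8 s measured to +/-0.1 s.
v, v_err = inferred_speed(100.0, 9.8, 0.5, 0.1)
print(f"v = {v:.2f} +/- {v_err:.2f} m/s")
```

The extra assumption relative to a direct measurement is the physical relation itself; the uncertainty budget makes explicit how far the inference can be trusted.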

And I don’t have to prove you wrong. Working engineers, working operations researchers, working scientists do science every day with estimates. We make things that work and we predict the future. You are a do-nothing.

“Science doesn’t progress like that; you use what you have to hand and try to improve on it.”

OK. I’ll tell you what I have at hand. It may be wrong. If convincing evidence is presented that it IS wrong, changing my mind is not a problem.

We have climates. Plural. They ALL change regularly, and have done so, within bounds, with or without ACO2, for as far back as we can estimate them. They are what they are. The idea that they are static absent human influence is unconvincing. There is certainly no overall ‘climate of the earth’ that can be uniquely identified as changing dramatically. Our climates, individually and collectively, have remained within historical bounds while CO2 has increased by a third. We have convincing evidence that within the last few thousand years, during periods in which CO2 was supposedly lower than that of today, and when there was no arguable ACO2 influence on it, we have had extended periods during which the TOE was higher than today and extended periods during which it was lower. Historically, during all time scales examined, climate has been dynamic; it has changed continuously.

We are told by Climate Science that the atmospheric concentration of CO2 is the dominant factor in ‘setting the thermostat of the earth’ and that doubling it from current levels will force the TOE to rise precipitously and dangerously. Since we have had the capability to measure atmospheric CO2 precisely, it has risen monotonically. During that same period we have had periods during which the TOE has risen, periods during which it has fallen, and periods during which it has remained static. We are currently experiencing a period (17 years and counting) where the slope of the temperature trend is close enough to zero that whether it is positive or negative is debatable.

Given the evidence that the current set of climates are unremarkable compared to historical climates with far different CO2 levels and the fact that the current trend in the TOE seems indifferent to the CO2 trend, absent heroic and controversial flogging (modeling, kriging, filling, adjusting, etc) of raw sensor data, I see no scientific reason to postulate ANY measurable ACO2 sensitivity, which Climate Science has done, nor do I see a scientific reason to ‘go to the mats’ to discredit any data, individual, or organization that challenges the CO2 thermostat postulate, which Climate Science also does routinely.

The traditional ‘scientific method’ is to observe some phenomenon or another, propose a theory that potentially explains the observations, collect relevant data, and determine if the data confirms or refutes the theory. Rinse and repeat.

Climate Science went about it a bit differently: it entered the scene by postulating, not theorizing, that CO2 controlled the TOE, that ACO2 was raising the overall level of atmospheric CO2 rapidly, that the rise in overall CO2 was causing the TOE to rise precipitously, that the rising temperature would prove catastrophic if not halted by controlling ACO2, that the science was settled, and that it was time to quit talking and take action. That was 25 or more years ago, and the collective efforts of Climate Science have not been able to make a convincing, empirical argument that there is such a thing as CO2 ‘climate sensitivity’, let alone measure (per HR’s original question) a meaningful value for it. It is simply postulated to exist, and the ranges of values published are, as Kristian pointed out (March 6, 6:05 PM), simply the output of a classical ‘circular reasoning’ exercise. Meanwhile, the climates continue to do what they have always done: change, within historical limits that modern climates have not approached since ‘catastrophic climate change’ (nee ‘catastrophic global warming’) became a multi-billion dollar industry.

Until Occam’s Razor demands otherwise, the best estimate of ‘climate sensitivity’ is: ‘If it exists at all, it is too small to measure with the climate data acquisition system in place today.’

Going back to ‘using what I have and trying to improve on it’: how is it a scientific improvement to postulate, not propose, the existence of an ACO2 thermostat that cannot be calibrated, in order to explain data that can be explained as well or better by ignoring ACO2?

Steven. With money and litigation there are other means of making parliamentarians “listen” – as you shall see when my book is printed. Truth will prevail. Just give me 8 to 10 years to get the truth out there.

Lewis & Crok assert that “The [climate models] overestimate future warming by 1.7–2 times relative to an estimate based on the best observational evidence.” To examine support for this, we can compare how the same models perform in simulating the past (i.e., the observed rates of warming). From the looks of it (http://www.cato.org/blog/climate-insensitivity-what-ipcc-knew-didnt-tell-us-part-ii), and if the past is any judge of the future, it seems bad for the models and good for Lewis & Crok.

With no general physical understanding of quasi-stationary non-equilibrium thermodynamic systems at all, especially in the irreproducible case (when microstates belonging to the same macrostate can evolve to different macrostates in a short time), the current climate modelling paradigm is doomed to failure anyway.
I would not trust “observational” estimates of climate sensitivity either, because all datasets were tampered with based on the same flawed computational models.
Let me introduce a blatant example. Figure 2.5 (at the bottom of page 55) of Energy and Climate: Studies in Geophysics (1977) shows “Recorded changes of annual mean temperature of the northern hemisphere”, according to the caption (between 1880 and 1975). If we compare it to HadCRUT4 Northern Hemisphere temperatures, a current dataset that covers the interval for which data were known in 1977, the current dataset shows only one third of the cooling over the four decades between the mid-thirties and mid-seventies compared to the dataset assembled by Budyko (1969) and updated after 1959 by H. Asakura of the Japan Meteorological Agency.
Now, history is not supposed to change, but in climate science it does. And all this because computational climate models had insurmountable difficulties in hindcasting the mid 20th century cooling. Solution? Rewrite datasets retrospectively, make that inconvenient cooling all but disappear. Huh.

Understand what BP is saying. He sounds like a smart guy and it sounds like he knows his physics. Yet the only way he can reconcile the actual TCR of 2C and ECS of 3C is by imagining that the historical temperature records have been tampered with enough to wipe out any real warming that we have seen.

“Yet the only way he can reconcile the actual TCR of 2C and ECS of 3C is by imagining that the historical temperature records have been tampered with enough to wipe out any real warming that we have seen.”

You are not forced to misrepresent what I’ve said, are you? Moreover, it is not nice to disseminate misinformation, like your “actual TCR of 2C and ECS of 3C”. The entire thread is about a much lower climate sensitivity, buried deep inside the IPCC AR5 report, never making it into the SPM (Summary for Policymakers). According to Lewis and Crok, the best estimate of TCR (Transient Climate Response) is 1.35°C, and that of ECS (Equilibrium Climate Sensitivity) is 1.75°C, based on observational datasets available today. Your offhanded 2-3C, preceded by a heavy “actual”, is 50-70% higher than that, with no further elaboration whatsoever. In a technical thread you are supposed to explicate what’s wrong with other people’s estimates before throwing in your own.

What I was actually implying is that the current state of observational datasets is not independent of the computational climate models just shown to be flawed by those same datasets. It has nothing to do with conspiracies either. It is an openly stated strategy of mainstream climate science to uncover biases in observational datasets using models. However, as models are invariably running high, this method itself introduces a one-sided bias; that is, actual climate sensitivity may well be even lower than the one found by the authors, should we have datasets not tainted by adjustments to meet models as much as possible.

A favorite straw man for the alarmists, with a recent boost from Lewandowsky. If you quarrel with the science, if you don’t trust the models, if you’re not sold on the IPCC’s 95 percent confidence, it’s because you’re a conspiracy nut. How convenient. I know this is a technical thread, but I’m discussing alarmist propaganda methods, which after all involves a technique…Hence, it’s technical.

You call a multivariate curve-fitting exercise “science”. Don’t you feel how pathetic it is? Any underlying low-frequency signal would do the same job for you as CO₂ concentration does, as long as it keeps increasing with time.

Judith, have there actually been measurements made to verify the positive feedback loop? If so, could you maybe provide some citations?

If not, how can this then be put into a model, as it is then a pure assumption not based on facts? (Correlation is not causation; or we might as well put into the model the temporal relationship between the number of soda cans sold and GSTA… both have increased over time, but are totally unrelated.)

It could be done experimentally, but hasn’t been.
You could track the ground, and the air at various altitudes, both ahead of and inside the umbra of a total solar eclipse, so that you could measure the up/down radiative fluxes and temperature.

‘In climate modeling, scientists have assumed that the relative humidity of the atmosphere will stay the same regardless of how the climate changes. In other words, they assume that even though air will be able to hold more moisture as the temperature goes up, proportionally more water vapor will be evaporated from the ocean surface and carried through the atmosphere so that the percentage of water in the air remains constant. Climate models that assume that future relative humidity will remain constant predict greater increases in the Earth’s temperature in response to increased carbon dioxide than models that allow relative humidity to change. The constant-relative-humidity assumption places extra water in the equation, which increases the heating.

Many have questioned whether this prediction of a wetter future atmosphere is right, including Dessler and Minschwaner. “There’s no theoretical, simple line of reasoning that should say that it [relative humidity] should be constant,” says Ian Folkins, an associate professor of atmospheric sciences at Dalhousie University in Halifax, Nova Scotia, Canada. Critics of the constant-relative-humidity assumption have said that compensating effects will prevent large quantities of extra water from entering the atmosphere, explains Dessler. “The atmosphere is very efficient at generating dry air. Increases in these processes could balance increased evaporation in a warmer climate, leading to little change in the humidity in the atmosphere.” Like air running over the cooling coils in an air conditioner, he adds, air that rises to high altitudes cools off and water condenses out, leaving the air drier.’ http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf

The long-held assumption of constant RH is built into the math no matter how it is done. Although models that allow RH to change seem to have slightly less water in the atmosphere, the difference is not huge. This has led Isaac Held to argue that constant RH should be used to explicitly evolve cloud feedbacks.
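The “extra water” implied by the constant-RH assumption can be illustrated with the standard Clausius-Clapeyron behaviour of saturation vapour pressure, here via the Bolton (1980) approximation. The specific temperatures chosen below are illustrative assumptions, not values from the quoted passage:

```python
import math

def sat_vapor_pressure_hpa(t_celsius):
    """Saturation vapour pressure over water (hPa), Bolton (1980) fit."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# At constant relative humidity, the actual vapour pressure scales with
# the saturation value, so each degree of warming adds roughly 6-7%
# more water vapour to the air.
t0, t1 = 15.0, 16.0  # illustrative temperatures, deg C
growth = sat_vapor_pressure_hpa(t1) / sat_vapor_pressure_hpa(t0)
print(f"Water vapour increase per +1 C at constant RH: {100 * (growth - 1):.1f}%")
```

This roughly 7%-per-kelvin scaling is exactly why models that hold RH fixed carry more water, and hence more greenhouse warming, than models that let RH drop.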

My comment on this topic is at the Ed Hawkins site Judith Curry links to at the beginning of her post. It’s listed there at 7:28 PM, March 6. Since the details are there, I’ll only summarize my thoughts briefly here, as follows. The Lewis and Crok estimates are not those of ECS but a different phenomenon, “effective climate sensitivity”, which probably underestimates ECS. In contrast, estimates of ECS itself probably haven’t changed much in either direction in recent years. The Armour et al paper Dr. Curry cites discusses this in more detail, but a more recent Armour paper raises questions as to whether there is any discernible relationship between “effective climate sensitivity” and ECS, and this was part of Armour’s recent APS talk. The Lewis and Crok TCR estimate is more credible than their ECS analysis, but may still be an underestimate, based on the most recent aerosol forcing data.

Lewis and Crok suggest a 2xCO2 transient climate response (70 years) at a “best observationally-based estimate” of 1.35°C, from which they posit a 2xCO2 ECS (equilibrium) “best observationally-based estimate” of 1.8°C.

What would be your conclusion on an equilibrium CS value, based on their observationally-based TCR estimate?

Max, there is no fixed relationship between TCR and ECS, since the ratio depends on the efficiency of heat uptake by the ocean. However, using a median ratio of 0.56 (transient vs equilibrium response), the Lewis and Crok TCR estimate of 1.35°C would translate into an ECS of about 2.4°C.
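The arithmetic in the comment above is a one-liner; the 0.56 transient-to-equilibrium ratio is the assumption quoted from the comment, not a fixed physical constant, since it varies with ocean heat uptake:

```python
# Convert a TCR estimate into an ECS estimate using an assumed
# median TCR/ECS ratio (0.56, as quoted in the comment above).
# The ratio is NOT fixed physics: it depends on ocean heat uptake efficiency.

def ecs_from_tcr(tcr, ratio=0.56):
    """ECS implied by a TCR estimate and an assumed TCR/ECS ratio."""
    return tcr / ratio

tcr_lewis_crok = 1.35  # deg C, Lewis & Crok best estimate from the thread
print(round(ecs_from_tcr(tcr_lewis_crok), 1))  # prints 2.4
```

Changing the assumed ratio moves the answer substantially, which is the commenter’s point that there is no fixed TCR-to-ECS relationship.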

While the Otto et al. paper of which Lewis was a part did acknowledge that their estimate may be low due to Armour’s arguments, Lewis himself has made no acknowledgement or reference to the Armour paper, even though Armour’s argument is directly critical of methods such as Lewis’s, which use past data to estimate ECS. Lewis does need to address Armour’s argument, or acknowledge, like Otto, that his ECS estimate could be low because of those considerations.

‘Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.’

What is the sensitivity at a tipping point? Remember that these happen with some regularity; we call them internal variability, or oscillations, or some such. This is a complex system, with control variables and multiple feedbacks, that exhibits abrupt change. ‘Climate sensitivity is then defined mathematically as the derivative of an appropriate functional or other function of the system’s state with respect to the bifurcation parameter.’

Sensitivity is a concept that needs fundamental rethinking; it is locked into a false dichotomy of low or high values. Dynamic sensitivity is low away from tipping points and high at tipping points. This is the essential paradox of climate. As Wally Broecker put it, climate is an unpredictable wild beast, and we are poking at it with sticks. At this juncture, and for decades more, the beast is hiding its claws and teeth. But this is by no means guaranteed into the future, and very bad outcomes are possible within as little as a decade.

‘What will happen in the longer term is far less certain. If man continues to enrich the atmosphere of our planet with greenhouse gases such as carbon dioxide, the global climate could get close to the point where the North Atlantic current is interrupted. Due to the inaccuracy of present climate models, it is not possible to date to make a clear statement as to where exactly this point lies and whether and when it might be exceeded. It is another characteristic of non-linear systems that the prediction of future trends is considerably more difficult than with linear systems. The existence of critical thresholds which, if exceeded, would mean a sudden qualitative change of climate is in principle easily understood. But where exactly these points lie, and the behaviour of the climate system near such points, is difficult to calculate.’

It is a new zeitgeist – needed to make sense of things and to properly calculate risk and consequence – but we are locked into the old world views whereby a single number can be nominated for warming in a hundred years and be seriously regarded. Regardless of whether the number is high or low – I don’t have that particular level of forbearance. It is all just utter and unmitigated nonsense from both sides in the climate war. It is unimaginative and ill-informed claptrap no more meaningful than the chatter of monkey tribes.

I have no axe to grind – the solutions remain the same. We might much more usefully talk about solutions – as I do frequently – than chatter on with tribal monkey nonsense.

There is a mathematically objective risk profile – a virtual certainty of climate shifts several times this century and an utter ignorance of what that will entail. This is the system at which we are poking sticks.

The question you need to ask yourself is – do you feel lucky? Well, do you, punk?

You haven’t answered my question. Do you feel lucky? If not being lucky has some serious potential consequences – it becomes a matter of managing risk where you can.

How to do this? Taxes and caps are nonsense – unworkable magical solutions to a small fraction of the problem. But – on the other hand – it is utter insanity to equate not knowing and inaction to rational responses. More monkey chatter.

Steve. I was having a look through the BEST work and have noticed something interesting.
You have a hot spot around the Iranian city of Bam. The temperature rises in the late 90’s, then there is a big jump at the beginning of 2004.
Now Bam had a few earthquakes in the late 90’s and then a big one, magnitude 6, at the end of December 2003. The city was abandoned and rebuilt over the last decade.
Have a look at the raw 1993-2004 and 2004-2013 data for Bam; you can see a change in the rate of warming by eye.

Instead of looking at light indices for population, how about looking at large earthquakes in inhabited and uninhabited regions? If the UHI effect is real, just after an earthquake you get a population collapse and slow rebuilding.
We know where and when all the earthquakes happened since 1900, and can have a guess from 1850.

Lewis and Crok are wrong because their basic assumptions are wrong. They, and you Jeff, need a paradigm shift in your thinking, because planetary temperatures are not controlled by this imaginary radiative forcing concept which I debunked two years ago in my peer-reviewed paper “Radiated Energy and the Second Law of Thermodynamics.” Temperatures are set by the gravito-thermal gradient (modified by inter-molecular radiation) both above and below any planetary surface.
When a photon from a cooler source strikes a warmer target, that target “recognises” that this photon has the same characteristics of photons which it can emit. It has exactly the right frequency (thus energy) that is required to cause an electron in the target to move up between two quantum energy states. But the process is immediately reversed, and a new identical photon is emitted as part of the target’s “quota” as per its Planck function. So the target does not need to convert some of its own thermal (kinetic) energy to electron energy (then to a photon) and so it cools more slowly as a result of the back radiation, as we all know.
But non-radiative processes can increase their rate of cooling to compensate for slower radiative cooling.
Furthermore, most slowing of surface cooling (and cooling of the 2 metre high surface layer of the troposphere where we measure climate) is caused by slowing of conduction into nitrogen and oxygen molecules.
Have you ever wondered why the temperatures don’t keep falling at a rapid rate all through the night when upward convection almost ceases? It’s due to the fact that the “environmental lapse rate” is a state of thermodynamic equilibrium in which the non-radiative processes have a propensity to form a -g/Cp thermal gradient, but this is reduced (usually by no more than about a third) by the temperature-levelling effect of inter-molecular radiation. For example, on Uranus the -g/Cp gradient of about 0.76 K/km is reduced to about 0.72 K/km by radiation between just a small percentage of methane molecules, whereas on Venus it is reduced by more like 25% by carbon dioxide, which thus leads to a significantly lower surface temperature on Venus. On Earth it is reduced mostly by water vapour, which reduces the insulating effect of the atmosphere by inter-molecular radiation, just as it reduces the insulating effect between the panes of double-glazed windows by helping energy leap-frog across the gap at the speed of light, overtaking the far slower diffusion heat transfer.
And it is because of all this that we actually have evidence that the gravito-thermal gradient exists, and thus the greenhouse conjecture is demolished and there is zero (warming) sensitivity to carbon dioxide. It actually cools by a mere 0.1 degree at most.

Double glazing works by limiting the movement of air away from the surface. A stable air gap is a poor conductor.

So limited advection and poor conduction of heat.

While incoming SW goes right through the glass – outgoing IR is blocked. Just like the atmosphere.

The standard model says that more greenhouse gases in the atmosphere will increase IR photon absorption and scattering. Increasing the energy content of the atmosphere and resulting in increased downward IR emission – decreasing surface IR losses and increasing surface temperature. The increased temperature at the surface – both oceans and land – will increase losses until a nominal equilibrium is reached. The delay between the increased heat content of the atmosphere and a surface energy equilibrium is a measure of thermal inertia – most obvious in the oceans.
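The standard-model picture in the paragraph above – a forcing, a restoring temperature response, and ocean heat capacity setting the delay – can be sketched with a zero-dimensional energy balance model. The forcing, feedback parameter and mixed-layer depth below are illustrative assumptions for the sketch, not fitted values:

```python
def temperature_response(forcing_wm2, feedback_wm2k, heat_cap_j_m2k,
                         years, dt_years=0.1):
    """Zero-dimensional energy balance: C dT/dt = F - lambda*T.

    Warming relaxes toward the equilibrium F/lambda with e-folding
    time C/lambda; a large ocean heat capacity C is the 'thermal
    inertia' that delays the approach to equilibrium.
    """
    seconds_per_year = 3.156e7
    T = 0.0
    t = 0.0
    while t < years:
        dTdt = (forcing_wm2 - feedback_wm2k * T) / heat_cap_j_m2k
        T += dTdt * dt_years * seconds_per_year
        t += dt_years
    return T

# Assumed round numbers: 3.7 W/m2 forcing (doubled CO2), a net
# feedback parameter of 1.2 W/m2/K, and a 150 m ocean mixed layer
# (~4.2e6 J/m3/K * 150 m = 6.3e8 J/m2/K).
F, lam, C = 3.7, 1.2, 6.3e8
print(round(temperature_response(F, lam, C, 100), 2))  # -> 3.08
```

With these numbers the e-folding time C/lambda is about 17 years, so after a century the surface has essentially reached the nominal equilibrium of F/lambda ≈ 3.1 K.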

The first approximation of the lapse rate is derived by combining hydrostatic –

dP = -g ϱ dz

with thermodynamic considerations –

dW = dU

– where the work lifting or lowering an air parcel is equal to the change in internal energy – and substituting for work and internal energy terms.
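Carrying out that substitution gives the familiar first approximation dT/dz = -g/Cp. A minimal numerical check for dry air on Earth, using standard values for g and Cp:

```python
# Dry adiabatic lapse rate from combining hydrostatic balance
# (dP = -g*rho*dz) with the first law (dW = dU), which reduces to
# Cp dT = -g dz, i.e. dT/dz = -g/Cp.
g = 9.81      # m/s^2, Earth surface gravity
cp = 1004.0   # J/(kg K), specific heat of dry air at constant pressure

lapse_rate_K_per_km = (g / cp) * 1000.0
print(round(lapse_rate_K_per_km, 2))  # -> 9.77 (the dry adiabatic value)
```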

It is a fact of life – but quite irrelevant to the energy implications of added greenhouse gases.

This is little reason to revise the standard model – and none at all for Doug Cotton to continue to waste our time.

I am comparing the difference between dry air and moist air in the gap between double glazing. Those involved know dry air is a better insulator. Likewise in the troposphere. My study of 30 years of temperature data for several locations on each of three continents has shown (with statistical significance) that dry regions have higher mean daily maximum and minimum temperatures than regions with high precipitation at similar altitudes and latitudes. I refer you to my next comment below.

Yes General – I can do the math too, but I’m waiting for your answer to the two questions below. Meanwhile I quote below from my book.

So the above considerations lead to the inevitable conclusion that, at thermodynamic equilibrium, there is in fact a temperature gradient maintained by gravity because all molecules at a higher altitude have a lower mean KE (hence a cooler temperature) than those at a lower altitude. Furthermore, the difference in the mean KE should equal the difference in gravitational PE. This means that it is the sum (PE + KE) which is homogeneous at thermodynamic equilibrium, because when that is the case no further work can be done. Hence we can calculate what the temperature gradient ought to be, at least in the absence of any inter-molecular radiation, and these will be the only calculations we need in this book.

Let us consider a thought experiment in which a region of a non-radiating gas of mass M all happens to move downwards by a small height difference, H in a “closed system” where g is the acceleration due to gravity. The loss in PE will thus be the product M.g.H because a force Mg moves the gas a distance H. But there will be a corresponding gain in KE and that will be equal to the energy required to warm the gas by a small temperature difference, T. This energy can be calculated using the specific heat Cp and this calculation yields the product M.Cp.T. Bearing in mind that there was a PE loss and a KE gain, we thus have …

M.Cp.T = – M.g.H, hence T/H = -g/Cp

But T/H is the temperature gradient, which is thus the quotient -g/Cp.

This result is well known, as is the fact that the atmospheres of all planets exhibit a similar temperature gradient that can be calculated from the gravitational force on that planet and the mean specific heat of the gases in its atmosphere.

Moist or dry air in double glazing? Neither advection nor conduction is materially influenced by water vapour.

Evaporation is obviously a difficult concept as well Doug. It causes cooling at the surface and warming in the upper atmosphere. Where there is less water to evaporate – the surface stays warmer. Where do you think that might be Doug? Should this make a difference to full depth tropospheric heat content? Or just the vertical distribution of heat? Does this have anything to do with the energy dynamics of greenhouse gases?

“Where there is less water to evaporate – the surface stays warmer. Where do you think that might be Doug? Should this make a difference to full depth tropospheric heat content? Or just the vertical distribution of heat?”

A1: dry areas, obviously.
A2: No
A3: Yes

Does any of this surface temperature change and vertical heat distribution affect the TOA radiative balance? I guess that depends on whether the change in vertical heat distribution affects the effective radiative height, right? More heat higher up would result in more heat escaping at TOA, wouldn’t it? At least until it “re-balances”.
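The effective radiative height idea in the questions above can be put in rough numbers. This is only a back-of-envelope sketch with assumed round values (OLR ≈ 240 W/m², surface ≈ 288 K, mean lapse rate ≈ 6.5 K/km):

```python
# Effective emission temperature from sigma*Te^4 = OLR, then the
# height at which the mean lapse rate brings the surface temperature
# down to Te: warming aloft raises Te at a given height, letting more
# heat escape at TOA until the balance is restored.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def emission_height(olr_wm2, t_surface_k, lapse_k_per_km):
    t_emit = (olr_wm2 / SIGMA) ** 0.25       # ~255 K for 240 W/m2
    return (t_surface_k - t_emit) / lapse_k_per_km  # km

print(round(emission_height(240.0, 288.0, 6.5), 1))  # -> 5.1
```

So the effective radiating level sits near 5 km; shifting heat (or greenhouse gases) vertically moves this level, which is what the "re-balancing" in the comment amounts to.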

Next: moist lapse rate is smaller than dry lapse rate – doesn’t this suggest that water vapour feedback is overall negative for near surface air temp and significantly so?

Now, temp isn’t all of climate, but I think it’s fair to say it’s certainly been pushed by the climate science community as both a diagnostic of GHG effects, and a result of same – at least until recently, when it no longer seems to support their case. It matters little if this always was the wrong metric to use; it was used – loudly so. If this has now changed, how much will Joe Public accept the change in metric, how many changes of metric will they accept, and how many excuses for modelling’s predictive failures will they accept before they call BS? I think we will soon find out…

Of course, none of the above says anything at all about the rightness or wrongness of any particular climate theory, but it will certainly affect the political response (ie, what actions, if any, are taken). Neither does the distinction between a prediction and a projection matter much to Joe Public – they consider the prediction/projection distinction – if they consider it at all – to be simply an excuse for getting it wrong, IMO.

An interesting discussion of evaporation. What have you to say then about the rate of evaporation increasing as the temperature gap increases? Are these engineers wrong about the air temperature being involved? …

Since all ‘observationally based’ climate sensitivity studies start out with the assumption that some or all of any measured (or stated) global warming during some period of time is caused by a rise in atmospheric CO2 concentration, they are basically all founded on a purely circular argument: ‘There is a positive climate sensitivity to a rise in CO2, because we see warming and we assume that some or all of this is caused by the rise in CO2.’

I am deeply unhappy with the very notion of climate sensitivity as something physical. But once they postulate it, a question of “can an observed increase in CO2 cause an observed temperature history” becomes a legitimate one, and data can be analyzed to determine the sensitivity.

Of course, the relationship is so complex that no definitive answer has been found – and may never be found: how come no one has thought of oceans before the pause?

Physics going back a century tells us that doubling CO2 causes warming, and furthermore that warming leads to more water vapor in the atmosphere that causes more warming. There is nothing circular about that. There was a direct prediction made before any of the warming was measurable or even before CO2 levels and changes were well known.

Yes, natural variability was way off, that means attribution was also wrong. Many feedbacks in existing models will need to be revised significantly.

I suspect water vapor is one. I think during Pinatubo attribution studies, they considered drying from cloud formation, temp, wind,… but I don’t think they considered the change in SW radiation on bodies of water and moist surfaces.

“Physics going back a century tells us that doubling CO2 causes warming, and furthermore that warming leads to more water vapor in the atmosphere that causes more warming. There is nothing circular about that. There was a direct prediction made before any of the warming was measurable or even before CO2 levels and changes were well known.”

Higher temp causes less evaporation than direct sunlight, and so clouds come into the picture. This has been known for some time, Jim D

It’s a continuum. Air at higher temperature can hold more water vapor. This has been known forever. To first-order, clouds form along a phase line of Pressure/Volume/Temperature. This means that with higher surface temperature, the cloud bank will likely just move higher in altitude.

What kind of difference will this make? The difference will be determined by how much more a higher concentration of water vapor leads to a greater GHG effect versus how much difference a higher average cloud bank makes.

The water vapor factor is what is known as a first-order positive feedback while the cloud elevation factor is known as a second-order effect. The lapse rate modification may be greater than the cloud altitude effect.

Skeptics, please calculate the second order effects of cloud elevation changes. Consider it a homework problem — can’t be too hard for you since you seem to have all the answers.

“Physics going back a century tells us that doubling CO2 causes warming, and furthermore that warming leads to more water vapor in the atmosphere that causes more warming. ”

I am pretty sure that you are wrong.

Why?

What you have described is a positive feedback loop that will drive itself to the maximum temperature achievable. Since water vapor is constantly being put into the atmosphere and the temperature has observably NOT been driven to the max, it is pretty clear that, whatever the effect of atmospheric CO2, the process does not proceed as you have described.

Maybe we could agree that on a planet with no water the influence of CO2 in the atmosphere would be easier to model. Unfortunately, this pesky water evaporates, water vapor increases the greenhouse effect, then condenses in clouds which counteract the greenhouse effect – all sorts of complications. Why don’t we follow an example of Al Gore & Comp and simply refuse to acknowledge the presence of water?

Before the end of the eighteenth century, most people believed that evaporation required the presence of air to dissolve the water. Even today, many people believe that saturated air is holding as much water vapor as it can and that warm air holds more water vapor than cool air. …
…atmospheric thermodynamics text by Bohren and Albrecht (1998) discusses this issue. A lucid non-mathematical explanation with illustrative examples is given by Bohren (1987).

Bob Ludwick, no a positive feedback is not necessarily a runaway feedback. In this case it is calculated as a finite amplification factor of about 2-3. Common mistake, easily made by engineers who think a positive feedback always runs away. Monckton is still confused by this issue too, so you are in “good” company.
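The distinction being made here is just the geometric series: a positive feedback with loop gain f < 1 converges to a finite amplification of 1/(1 − f) and only runs away for f ≥ 1. A minimal sketch, with the f values chosen purely to reproduce the 2-3x amplification factor quoted in the comment:

```python
# Sum the feedback series 1 + f + f^2 + ... directly; for |f| < 1 it
# converges to 1/(1 - f) rather than running away.
def amplification(f, n_terms=200):
    return sum(f ** k for k in range(n_terms))

for f in (0.5, 0.6, 0.67):
    print(round(amplification(f), 2))  # -> 2.0, 2.5, 3.03
```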

thisisnogood, warmer air can hold more water vapor, which is why the tropics rain so much. It doesn’t always hold more water vapor (e.g. deserts), but it does when it has been over warm water for a while. What was your question again? Maybe you think warmer oceans like in the tropics don’t have more water vapor over them?

CG, so far the cloud feedback isn’t working in the direction hoped for by the “skeptics”. The period of strongest warming in the 90’s also had lessening clouds, which it won’t take long to realize is not what you wanted to happen.

JimD – You are assuming that cloud formation is a function of temperature. If cloud formation depends on some other factor, then warmer temperature as a function of less cloud cover is exactly what would be expected – by skeptics, skeptical scientists, or most other people who grasp the role of clouds as sun shades.

It is a common assumption by skeptics that the only thing that can rescue them from the positive water vapor feedback is a negative cloud feedback, which just hasn’t materialized in the observations, only the opposite, if anything.

Clouds are predominantly a feedback – but to ocean and atmospheric circulation rather than to warming. It seems to vary decadally and in discrete jumps at times of climate shifts: less cloud following the 1976/1977 climate shift and more after the 1998/2001 shift.

GS, some people count the Planck response (the actual warming) as a negative feedback, and in that case it is correct that the warming is limited by that response, but for climate the response is not usually considered part of the feedback simply because that is what is being calculated.

Need to be careful about making these sweeping generalizations about cloud increases and decreases over any given period of time, as certain cloud types increase and certain types decrease over the same period.

For climate, it is the low clouds that count most. Their reduction leads to net warming through the loss of albedo, while they also contribute little to reduce the IR emission. The graphic is fairly graphic about the trend there during a time of warming.

thisinogood, I am not sure if you are playing word games. Is it the word “holds” you don’t like? Let’s say tropical air has 20 grams water vapor per kilogram of air, as it often does. Colder air would be far oversaturated with that value, and just would not “hold” it but would form clouds instead and rain it out. The bottom line is that tropical air can hold it, while colder air can’t.

‘A positive feedback occurs when a change in one component of the climate occurs, leading to other changes that eventually “feeds back” on the original change to amplify it. The classic ones in climate are the ice-albedo feedback (melting ice reduces the reflectivity of the surface, leading to more solar absorption, more warming and hence more melting) and the water vapour feedback (as air temperatures rise, water vapour amounts increase, and due to the greenhouse effect of the vapour, this leads to more warming), but there are lots of other examples. Of course, there are plenty of negative feedbacks as well (the increase in long wave radiation as temperatures rise or the reduction in atmospheric poleward heat flux as the equator-to-pole gradient decreases) and these (in the end) are dominant (having kept Earth’s climate somewhere between boiling and freezing for about 4.5 billion years and counting).’

‘The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.’

thisisno, you are the one who brought up “dissolved”. Get that straight. Air is a mixture of gases, one of which is water vapor. You could say that air contains water vapor, or even holds it. In fact the water vapor CONTENT is how it is quantified.

WHUTTY and Jim D try to play as if they didn’t hold the erroneous beliefs:

“Air at higher temperature can hold more water vapor. This has been known forever.”

“warmer air can hold more water vapor”

“Let’s say tropical air has 20 grams water vapor per kilogram of air, as it often does. Colder air would be far oversaturated with that value, and just would not “hold” it but would form clouds instead and rain it out. The bottom line is that tropical air can hold it, while colder air can’t.”

“Before the end of the eighteenth century, most people believed that evaporation required the presence of air to dissolve the water. The term saturation vapor pressure arose because it was believed that this was the maximum amount of water vapor that could be dissolved in air. People erroneously believed that warmer air could dissolve more water vapor than cooler air. However, studies by De Luc and Dalton in the late eighteenth century cast doubt on these conclusions. The publication of Dalton’s paper in 1802 finally resolved the issue. Dalton showed that the pressure of a gas is independent of the amount of other gases present. Because air is mostly empty space, each gas acts individually as if it alone existed. Most gases are indefinitely soluble in other gases (Ostwald 1891). In an equilibrium state, the amount of vapor above a liquid depends almost entirely on the temperature of the liquid. John Dalton concluded that the vapor pressure of water in air is independent of the existence of the air (Brutsaert 1991, Cardwell 1968, Greenaway 1966, Ostwald 1891, Dalton 1803).

Even today, many people believe that saturated air is holding as much water vapor as it can and that warm air holds more water vapor than cool air. Unfortunately, this mistaken belief has even made its way into some textbooks. A new general meteorology textbook (Nese et al. 1996) written by faculty members of The Pennsylvania State University should help to dispel this commonly held myth. In addition, a new atmospheric thermodynamics text by Bohren and Albrecht (1998) discusses this issue. A lucid non-mathematical explanation with illustrative examples is given by Bohren (1987).

Air does not hold water vapor. Water vapor is not dissolved in air. This can easily be demonstrated by a thought experiment. Imagine a closed container containing a beaker of pure water and a beaker of ocean water. Place the two solutions side by side so that they are at the same atmospheric temperature and pressure. The air above these two solutions is at the same temperature and pressure. If air “holds” water vapor, then the two solutions should have the same saturation vapor pressure. However, the saturation vapor pressure above the saline solution is less than that above the pure water. In the saline solution, the salt ions replace some of the water molecules so that fewer water molecules are available for evaporation (see Footnote 1). Therefore, the presence of the salt reduces the rate of evaporation from the saline solution compared to the solution of pure water. This then is the reason why the saturation vapor pressure above the saline solution is less than that above pure water. Note that the presence of initially identical air above the solutions could not account for this difference.

Saturation vapor pressure is actually something of a misnomer. The term saturation probably is an historical remnant from pre-Dalton times. It probably should be called equilibrium vapor pressure because, by definition, it is the water vapor pressure that occurs when a phase change is taking place. The higher the liquid water temperature, the more energetic are the liquid water molecules. The more energetic these molecules are, the more readily they can leave the liquid interface. This increases the amount of evaporation and therefore the saturation vapor pressure. The temperature of the air has nothing to do with it except that it can be eventually warmed or cooled depending on the temperature of the liquid surface. “Saturation” occurs when the evaporation rate equals the condensation rate and the air is in equilibrium with the liquid. ”
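The quoted passage’s central point – that the equilibrium (“saturation”) vapor pressure is set by the temperature at the liquid surface, rising steeply with it – can be illustrated numerically. The Bolton (1980) fit used below is a standard approximation to the Clausius-Clapeyron relation:

```python
import math

def e_sat_hpa(t_celsius):
    """Equilibrium ('saturation') vapor pressure over liquid water in
    hPa, via the Bolton (1980) approximation to Clausius-Clapeyron."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# The equilibrium vapor pressure rises roughly 7% per kelvin --
# about a doubling for every 10 C of the water's temperature.
print(round(e_sat_hpa(20.0) / e_sat_hpa(10.0), 2))  # -> 1.9
```

Note the function takes the liquid-surface temperature, in line with the quote: the air’s temperature enters only indirectly, by warming or cooling that surface.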

You boys can play all you like now. Your erroneous conceptions have been dealt with. It’s up to you to accept that you’ve been holding up a myth, or continue to play your games, boys!

Warmer air can hold more water vapor. Take air with a given amount of water vapor in it and enclose it in a large sealed container. Drop the temperature of container down to well below freezing. Next move the contents of that container to another one at room temperature. That container full of air will have less water vapor in it than it did before.

I don’t care if this is a trick because the water vapor condensed out while it was in a cold container. Nature is tricky too.

“Warmer air can hold more water vapor. Take air with a given amount of water vapor in it and enclose it in a large sealed container”

No. Take a sealed container that has air and water vapor in it.

“Drop the temperature of container down to well below freezing. Next move the contents of that container to another one at room temperature. That container full of air will have less water vapor in it than it did before.”

I don’t care if this is a trick because the water vapor condensed out while it was in a cold container. Nature is tricky too.”

There are those who stick to the belief that warmer air can and therefore does hold more water vapor than colder air, following in lockstep with the Clausius-Clapeyron equation.

This is the way the climate models cited by IPCC are programmed, and the IPCC reports are full of references to the prediction that relative humidity remains constant with warming, i.e. that specific humidity (water vapor content) increases accordingly.

The study found that water vapor content (specific humidity) did, indeed, increase with warming, BUT constant relative humidity was not maintained: water vapor increased by less than one-fourth the amount required to maintain constant relative humidity (i.e. the air became less humid with warming).

NOAA radiosonde data going back to 1948 even show that the specific humidity (i.e. the absolute water vapor content) has decreased over the longer term as the temperature increased, rather than increasing to maintain constant relative humidity, as assumed by the IPCC climate models as the basis for the postulated strongly positive water vapor feedback.

IOW the climate models cited by IPCC grossly exaggerate the positive water vapor feedback, and hence the 2xCO2 climate sensitivity at equilibrium (ECS).

Several recent observation-based studies, such as the one by Lewis and Crok cited here, also confirm that the model-predicted 2xCO2 ECS is exaggerated by a factor of around 2 – and is really somewhere around 1.8C (rather than over 3C, as claimed by IPCC).
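The constant-relative-humidity bookkeeping in this argument can be made explicit. Assuming the usual ~7% per kelvin Clausius-Clapeyron scaling (an approximation) and, purely for illustration, the ~0.8 K of observed warming mentioned in the head post:

```python
# Clausius-Clapeyron gives roughly +7% saturation vapor pressure per
# kelvin, so constant relative humidity requires specific humidity to
# rise by about that much per degree of warming. The comment's claim
# is that observations showed under one-fourth of this.
CC_RATE = 0.07  # approximate fractional rise in e_sat per K

def constant_rh_vapor_increase(dT_kelvin):
    return (1.0 + CC_RATE) ** dT_kelvin - 1.0

dT = 0.8  # K, illustrative: roughly the observed warming to date
full = constant_rh_vapor_increase(dT)
print(round(100 * full, 1), round(100 * full / 4, 1))  # -> 5.6 1.4
```

So constant RH over 0.8 K of warming would need roughly a 5-6% rise in specific humidity, and the "less than one-fourth" claim corresponds to under about 1.5%.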

“There are those who stick to the belief that warmer air can and therefore does hold more water vapor than colder air, following in lockstep with the Clausius Clapeyron equation.

This is the way the climate models cited by IPCC are programmed, and the IPCC reports are full of references to the prediction that relative humidity remains constant with warming, i.e. that specific humidity (water vapor content) increases accordingly.

But this is apparently not how nature works in real life.”

Max, the decline in the meteorological pan evaporation rate at the same time as temps increased supports it too.

thisisnotlettingitgo, so it is OK when Max says “hold”, but not when warmists do. I see.
Actually, most would agree that RH has dropped especially over land where warming has outpaced the ocean that supplies the moisture, but even so, the absolute water vapor has increased. This doesn’t mean that the oceans won’t warm, just that they are slower to warm, but we already knew this. It is a transient climate state, not an equilibrium one. There is a distinction in how RH behaves in these.

As pointed out (with references) to thisisnotgoodtogo, water vapor over the tropical ocean (as physically measured by Minschwaner + Dessler) increased slightly with warming, but by less than one-fourth the amount needed to maintain constant relative humidity. This was a short-term study only involving a few years. This would suggest a much smaller (but still positive) water vapor feedback than that ASS-U-MEd by the IPCC models.

The longer-term record (NOAA radiosondes since 1948) shows that the absolute water vapor content has decreased with warming rather than increased. This would suggest a negative water vapor feedback.

IOW the short and long-term observations do not support the model-predicted strongly positive water vapor feedback ASS-U-MEd by IPCC.

Take the temperature down to 0K. See how much water vapor is left.
Push the temperature up another 100C, see what kind of steam bath we would be in.

That’s one of the things I learned about how to apply physics. You look at boundary conditions and you can fill in the gaps. First-order effects are the direct interpolation or extrapolation, while second-order effects are the wiggles in the curve that fill in the detail.

You look at boundary conditions and you can fill in the gaps. First-order effects are the direct interpolation or extrapolation, while second-order effects are the wiggles in the curve that fill in the detail.

Based on assumptions not valid in systems demonstrating even temporal chaos, much less “spatio-temporal” chaos. This has certainly been well-known since the ’90’s, so you could make a case for getting your money back, depending on when you went.

JimD, “so far the cloud feedback isn’t working in the direction hoped for by the “skeptics”. The period of strongest warming in the 90′s also had lessening clouds, which it won’t take long to realize is not what you wanted to happen.”

This is one of the major problems; it’s chicken vs egg. Did the decreasing clouds drive a good share of the temps independent of CO2 warming? Did this increase low-troposphere water vapor rather than the slight increase in IR? This is a much more plausible scenario. (And, if heat caused fewer clouds and more water vapor, this feedback would likely run away unless other, negative feedbacks quickly overwhelmed it.) Perhaps adjust the data set for the calculated, theoretical GHE based on measured concentrations and then compare to albedo to look for a response or leading relationship.

JimD, “It is a common assumption by skeptics that the only thing that can rescue them from the positive water vapor feedback is a negative cloud feedback, which just hasn’t materialized in the observations, only the opposite, if anything.”

I think neither part of this statement is true. There is nothing to indicate any cloud feedback. More importantly, I believe it is more common to believe that water cycle efficiency is a primary negative feedback. Clouds are more likely to be a forcing than a feedback.

Thisisnotgoodtogo, regarding capacity for water vapor: Jim is right, this is just semantics for the intent of this discussion. Your point could be relevant in a more in-depth discussion of how H2O behaves in the atmosphere on a micro level, but no one has brought the conversation there. As water vapor increases, the energy available to keep it from condensing (or, more correctly, to evaporate water) diminishes. There are processes which are likely important, like whether energy comes from radiation or conduction, and the temp/pressure/composition of the atmosphere.

“aaron | March 8, 2014 at 9:35 am |
Clouds are more likely to be a forcing than a feedback.”

What the hell does that mean? I do not speak ‘climate science’, but I know that a ‘forcing’ can be stated as arbitrarily negative or positive, can be arbitrarily termed external or internal and the ‘feedback’ is not used in its normal mechanistic sense and thus the line-shape of the Stefan–Boltzmann equation is a ‘feedback’, but the formation of clouds over warm water isn’t.

You are writing gobbledegook.

State what you mean, simply, in terms of heat fluxes from a surface to space, over the diurnal and seasonal cycles.

aaron, over the last few decades to a century the CO2 net forcing change has been by far the dominant one. The warming we are seeing is simply from that. The clouds may be responding, but if there is a direction detected, it is in the positive feedback direction, which, by the way, the GCMs also derive for it.

Jim D, “over the last few decades to a century the CO2 net forcing change has been by far the dominant one. The warming we are seeing is simply from that.”

This is an absurd conjecture. We simply can’t know this from the data available.

“The clouds may be responding, but if there is a direction detected, it is in the positive feedback direction, which, by the way, the GCMs also derive for it.”

This is wrong. Look at the ISCCP cloud analysis posted early in the thread, particularly low- and mid-level clouds. They are pretty much constant during the warming. They decrease after the ’98 El Niño and decrease during the hiatus. Your previous statements are misleading, and likely dishonest given how clear this is from the data/graphs.

Aaron, you’re wrong. Jim D was wrong, of course, in more than one way.
The discussion was not about semantics, as you could tell if you looked more carefully; see my refusal to accept any alternate words for “holds” that he tried.

As you could tell if you looked more carefully, Jim D changed his tune from warm “tropical air”, to “warm ocean” as he tried to squirm out of his predicament.

Doc, I’m just saying that the cloud response to temperature isn’t likely to be significant, and that natural variation in cloud cover (probably quasi-cyclical, related to ocean circulation patterns) likely drove temperatures up coincidentally during the 30 yrs prior to this decade (leading to misattribution of warming).

aaron, the forcing change from CO2 is now approaching 2 W/m2. This is already four times larger than the negative forcing from a Maunder Minimum, and skeptics have no trouble accepting solar causes for the cooler period associated with that. This is just a resistance to face the truth, and to have a consistent opinion on forcing. We also see short volcanic forcing changes of W/m2 magnitudes that show us that cooling occurs in response to them, and skeptics seem to accept that too. It really is an anything-but-CO2 attitude that I find hard to understand from a scientific perspective. Forcing is forcing, no matter where it comes from.
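For scale, the “approaching 2 W/m2” figure can be reproduced with the standard simplified logarithmic forcing expression. A sketch, with the commonly quoted round numbers (5.35 coefficient, ~280 ppm pre-industrial, ~400 ppm at the time of writing) rather than values from this thread:

```python
import math

def co2_forcing(c_now, c_ref, alpha=5.35):
    """Simplified CO2 radiative forcing in W/m^2 (logarithmic approximation)."""
    return alpha * math.log(c_now / c_ref)

# Pre-industrial ~280 ppm to ~400 ppm (assumed round numbers):
print(co2_forcing(400.0, 280.0))   # ~1.9 W/m^2, "approaching 2"
print(co2_forcing(560.0, 280.0))   # ~3.7 W/m^2 for a full doubling
```

The logarithmic form is why each additional increment of CO2 adds less forcing than the last.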

“Physics going back a century tells us that doubling CO2 causes warming, and furthermore that warming leads to more water vapor in the atmosphere that causes more warming. There is nothing circular about that. There was a direct prediction made before any of the warming was measurable or even before CO2 levels and changes were well known.”

JimD, you’re not very well-versed in the scientific method, are you?

When are you people going to understand that the assertion ‘more CO2 makes the world warmer’ needs to be empirically substantiated? It’s not enough to just state it. It needs to be backed up by observations from the real world. It’s not enough to just state it and then point to some warming and say: ‘Look! We’re right!’ Because behind this lies the A PRIORI ASSUMPTION that this warming is partly or totally caused by the very mechanism that you wish to show (prove) the effect of.

You do realise that such an approach is not science? You do know that it is nothing but circular reasoning?

Look, it’s fine to have an hypothesis. But you can’t just take for granted that it’s correct before you start out EVEN IF you think that the ‘theory’ behind it SHOULD lead to what you suggest. Because that is PRECISELY what you’re setting out to find out. Empirically. If it ACTUALLY works like that in the real world, not just on paper. And if you can’t find it in the real world, it is the PAPER version that needs work, not reality. There is clearly something you hadn’t considered. YOUR physics isn’t the complete picture.

So where are the Earth system observations SHOWING the causal relationship +CO2 >> +T, JimD? And where are the empirical studies SHOWING it, not just CLAIMING it?

The only direct correlation we can point to in today’s Earth system between atmospheric CO2 content and temperature is in the annual cycle, which always goes T >> CO2. The opposite relation has NEVER been found or shown anywhere!

Yet, the AGW hype moves on, based solely on the assertion that this is still the way it just IS, because ‘we’ says so, and because ‘our’ models, based on ‘our’ circular reasoning, show it.

Doc, though, looking at http://isccp.giss.nasa.gov/climanal7.html, I may be very wrong about clouds responding to forcing. When temps were rising, there were more low clouds. When warming stopped, low clouds decreased and mid clouds increased (the opposite of what WHT says the models say: we have more low clouds while temps rise, and when warming stops, the cloud height rises).

Kristian, the current atmospheric temperature is explained by the GHG effect, which is known very precisely. It is in weather models that couldn’t even get the surface temperature right without the correct physically based radiative transfer. Knowing and having verified the extent of the current greenhouse effect makes it possible to project the effects of perturbations. It is not as if unknown physics is needed to explain the current temperature of the earth. The physics is very well known and textbook stuff. Judith has a textbook on it.

Mi Cro, GCMs do a good job of representing current climate, and they need CO2 as part of the radiative physics to do that. Take the CO2 out and you get the ice-world kind of results shown by Lacis in his control-knob experiment.

Kristian, you seem to be under the impression that CO2 effects can’t explain the current state of the atmosphere, so that we can’t use these same effects to predict the future, which I have said is wrong. Physics explains the current state of the atmosphere, and just as well predicts what a perturbation does, as has been known for a century, even before warming was known. You have the sequence of events backwards, and are therefore making a wrong conclusion. Physics first, warming second, prior physics explains warming third. Not circular at all.

“Physics explains the current state of the atmosphere, and just as well predicts what a perturbation does, as has been known for a century, even before warming was known.”
No it doesn’t. The physics has been applied as a hypothesis, and every applied model of this hypothesis overestimates temperature. But it was adopted by those who loathe human influence on the planet and who keep shouting that the ‘science is settled’; if it were settled, the models would actually match measurements.

Mi Cro, sure you can go ahead and invent some other physics to explain the current climate. Good luck with that. No results on that front yet, but I am sure people are working hard on the not-CO2 theory.

Let me try to get it out of the gobbledygook jargon favored by climatologists.

Everyone over about 10 years old knows that clouds cool the surface of the Earth during day time by blocking incoming solar rays.

After around age 16 we also learn that cloudy nights tend to be slightly warmer than clear nights.

So we see that clouds impact the temperature of Earth’s surface.

But as IPCC concedes, “clouds are the largest source of uncertainty”.

IPCC likes to propagate the fantasy that clouds only react to some other “forcing” (CO2, wink-wink) and moreover, that they act as a positive feedback, amplifying the GH effect of CO2 (double wink, wink), making the diabolical gas even more of a threat to our planet (triple wink, wink).

Pallé et al showed us that cloud cover decreased over the latter 20thC (while it was warming) and increased starting around 2000 (when it stopped warming).

CAGW aficionados (like Jim D) point to this as evidence of a “positive cloud feedback”, completely ignoring the possibility that the clouds themselves could be driving the temperature change rather than the other way around.

A 5% change in cloud cover would have the same impact on our planet’s climate as the theoretical impact from a doubling of atmospheric CO2.
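As a back-of-envelope check on that 5% claim, using assumed round numbers that are not from this thread (a net cloud radiative effect near -20 W/m2 and ~65% mean cloud cover, with the CRE scaled linearly in cover):

```python
import math

NET_CRE = -20.0     # assumed net cloud radiative effect, W/m^2 (net cooling)
MEAN_COVER = 0.65   # assumed global mean cloud fraction

# A 5-percentage-point reduction in cover, scaling the CRE linearly with cover:
delta_f_clouds = -NET_CRE * (0.05 / MEAN_COVER)   # ~ +1.5 W/m^2 of warming
delta_f_2xco2 = 5.35 * math.log(2.0)              # canonical ~3.7 W/m^2

print(delta_f_clouds, delta_f_2xco2)
```

Under these rough assumptions the cloud term comes out the same order of magnitude as the doubling forcing, though a factor of two or so smaller, so the claim reads best as an order-of-magnitude statement rather than an equality.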

So what is driving the clouds?

The sun (the Svensmark hypothesis and/or some other mechanism)? Cyclical ocean patterns (ENSO, PDO, AMO, etc.)? A combination of the above? Who knows? (Not IPCC.)

The surface hasn’t even been scratched yet, Doc, and Judy Collins had it right, when she sang: “I really don’t know clouds at all…”

Jim D says, March 8, 2014 at 3:15 pm:
“Kristian, you seem to be under the impression that CO2 effects can’t explain the current state of the atmosphere”

That’s not what I’m saying at all. I’m saying that it is not shown that the increase in atmospheric CO2 is what caused global temperatures to rise. It is CLAIMED to be the cause. It is ASSUMED to be the cause. And then the evidence of WARMING is taken as confirmation of the original assumption, which is the one to be tested. Hence, dead circular.

Theoretical physics or models based on it can’t confirm your hypothesis, JimD. Empirical observations from the real Earth system can.

So I ask you again: “Where are the Earth system observations SHOWING the causal relationship +CO2 >> +T, JimD? And where are the empirical studies SHOWING it, not just CLAIMING it?

(The only direct correlation we can point to in today’s Earth system between atmospheric CO2 content and temperature is in the annual cycle, which always goes T >> CO2. The opposite relation has NEVER been found or shown anywhere! Or are you in possession of evidence never before presented to the world that does show precisely this?)”

““Climate Science is based on an axiom: Atmospheric CO2 warms the earth in proportion to its concentration. Models assume it and data is collected and analyzed–and adjusted–to confirm it.”

Wrong”

“An axiom, or postulate, is a premise or starting point of reasoning. A self-evident principle or one that is accepted as true without proof as the basis for argument; a postulate.”

‘Wrong’ isn’t very helpful. Which part of my statement was wrong?

Wrong as in Climate Science does NOT treat CO2 as the principal factor controlling the TOE as an axiom? And that I can therefore ignore all those comments describing CO2 as the earth’s thermostat? And, more importantly, the laws and regulations passed citing the CO2 thermostat as justification? If not, which physical variable has MORE influence on the TOE than CO2? And where are the peer reviewed papers describing them? And if it is NOT an axiom, rather than a theory, why is anyone who questions the influence of CO2 instantly attacked by Climate Science, personally and professionally? Is the empirical evidence of CO2 as the ‘thermostat which controls the TOE’ so overwhelming that only a fool or an amoral shill of the fossil fuel industry would question it?

Or did Climate Science (the multi-billion dollar industry, not ‘climate science’) simply start out by collecting generic climate data and, in the process of analyzing it, deduce, from empirical evidence, that among all the factors which influence climate CO2 was the primary influence on the TOE?

Wrong as in models do NOT assume, as a given, that CO2 is the major influence on the TOE and that observed variations cannot be explained without making that assumption?

Wrong as in Climate Science does NOT have a history of making adjustments to raw sensor data which consistently result in reducing older TOE data?

Hehe, I have to repeat the Q.E.D. it seems. I specifically mention your disposition towards showing warming and just assuming that the CO2 increase caused it. Your graph is one of the more duplicitous I’ve seen used to ‘prove’ the point.

No, JimD, you have to show the following relationship: +CO2 >> +T. Like you have T >> CO2 across the annual cycle.

Kristian, I showed you why it is not circular. Here it is again. 1) physics (circa 1900) predicts that adding CO2 causes warming, 2) warming is observed (circa 1930), 3) that same physics was confirmed by the warming that happened after the prediction. Very linear. Not circular.

That was Guy Callendar in the 1930s. He is normally a hero of the skeptics, for his low (basically no-feedback) sensitivity estimate, so it is sad that you have now turned on the poor guy. But, yes, he obtained a measurable, if highly uncertain, global warming effect, and he knew what it was immediately because science decades before him predicted it. It turns out that there probably were solar effects going on at that time too, however, and the observed warming was more than he expected.

Jim D That “physics going back a century” includes the 19th-century Clausius (“hot to cold”) statement of the Second Law of Thermodynamics, which physicists threw out the window long ago. I am talking 21st-century physics, about which you appear to know nothing.

“Kristian, I showed you why it is not circular. Here it is again. 1) physics (circa 1900) predicts that adding CO2 causes warming, 2) warming is observed (circa 1930), 3) that same physics was confirmed by the warming that happened after the prediction. Very linear. Not circular.”

And I showed you why it is IS circular, JimD. Let’s look at your points and see how they hold up to the scientific method.

1) Physics predicts that adding CO2 causes warming.

What physics? The physics don’t predict it. People predict it, based on their HYPOTHESIZING about the effect certain physical relationships might have on the grand-scale Earth system. It in no way follows that even if their hypothesis is based on some physics (what scientific hypothesis about the workings of the natural world isn’t?), it HAS TO be right – the effect MUST be real and observable, so that all we need to do is FIND it.

No, a hypothesis like this needs to be tested against reality. Always. It is never confirmed already before it leaves the writing desk, even if the ‘theory’ seems sound on paper (it darn well should!). It is NEVER confirmed until the causal relationship behind the suggested effect is actually observed in nature.

2) Warming is observed.

Yes, but is that causal link observed? +CO2 >> +T? Where? When? How?

3) That same physics was confirmed by the warming that happened after the prediction.

Yup, and this is where the argument comes full circle.

You have only observed warming. You haven’t found the cause of the warming. What you HAVE done is predicted warming through your hypothesis and then, when warming is observed, simply concluded that the hypothesis is right. WITHOUT KNOWING THAT THE WARMING IS CAUSED BY THE MECHANISM YOU SUGGESTED – YOU JUST ASSUME IT.

The “climate sensitivity” debate may become scientific one day…when the simplistic framing of the question is replaced by a formulation that encapsulates the NON-EQUILIBRIUM nature of the problem. Until then, we have mostly yadda, yadda from very predictable mouths.

(1) Without looking up Wikipedia, state in your own words what you believe the Second Law of Thermodynamics says.

(2) Imagine a process in an isolated system wherein (one-way) radiation from a cooler atmosphere is assumed to penetrate 2mm beneath the warmer surface of a lake and raise the temperature of that layer of sub-surface water. Does entropy decrease, stay the same or increase?

1.) Second Law of Thermodynamics
The information entropy of Doug Cotton’s comments does not decrease whether isolated or not.
2.) Imagine a process in an isolated system … of sub-surface water. Does entropy decrease, stay the same or increase?
Yes.

‘The second law of thermodynamics is a profound principle of nature which affects the way energy can be used. There are several approaches to stating this principle qualitatively. Here are some approaches to giving the basic sense of the principle.

1. Heat will not flow spontaneously from a cold object to a hot object.
2. Any system which is free of external influences becomes more disordered with time. This disorder (more precisely, multiplicity) can be expressed in terms of the quantity called entropy.
3. You cannot create a heat engine which extracts heat and converts it all to useful work.
4. There is a thermal bottleneck which constrains devices which convert stored energy to heat and then use the heat to accomplish work. For a given mechanical efficiency of the devices, a machine which includes the conversion to heat as one of the steps will be inherently less efficient than one which is purely mechanical.’ http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/seclaw2.html#c2

What was the question again? The second law is obtained from macrostate statistics. The warmer gets cooler and the cooler gets warmer, with a few losses – entropy – thrown into the mix. This in no sense precludes IR emission from the cooler to the warmer in the microstate; it simply means that the net photon flux is in one direction.

Do you have a problem with the second law as well? Odd indeed.

Entropy always increases – although some have argued that life temporarily creates order out of disorder.
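The macrostate point, that two-way photon exchange still produces net entropy, can be checked directly. A minimal sketch with assumed illustrative blackbody temperatures (288 K surface, 255 K atmosphere are round numbers, not values from this thread):

```python
SIGMA = 5.670e-8              # Stefan-Boltzmann constant, W m^-2 K^-4
T_HOT, T_COLD = 288.0, 255.0  # assumed surface / atmosphere temperatures, K

q_up = SIGMA * T_HOT ** 4     # emission from the warmer surface
q_down = SIGMA * T_COLD ** 4  # "back radiation" from the cooler atmosphere
q_net = q_up - q_down         # net heat flow is warm -> cold (positive)

# Entropy bookkeeping for the net transfer: the cold body gains q/T_cold,
# the hot body loses q/T_hot, and the sum strictly increases.
ds_total = q_net / T_COLD - q_net / T_HOT
print(q_net, ds_total)
```

Photons flow both ways, yet the net heat flux and the total entropy change both come out positive, exactly as the second law requires.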

When you find yourselves stumped by these questions, go back to this comment and study every sentence carefully so that, if you wish to take me on about the valid physics therein, you keep to debating the points I have made, not diverging to an altogether different paradigm and reiterating the standard IPCC propaganda with which I am probably as familiar as anyone here.

No Doug – there is absolutely no need to waste any more time. The standard model of atmospheric energetics is central to our understanding of how atmospheric physics works. It does not contradict the 2nd law, and the lapse rate is irrelevant to photon absorption and emission in a greenhouse-gas-rich atmosphere.

No General. Physicists long ago discarded the mid-19th-century Clausius statement of the Second Law about heat transferring only from hot to cold, because it is not general enough, General.

That cannot be the case in the troposphere of Venus, for example, or that of Uranus where the base of the troposphere is hotter than Earth’s surface, even though no significant solar radiation gets down there.

The solar energy absorbed in the upper cooler layers of these tropospheres is transferred by diffusion to lower warmer layers, and actually raises the temperature of the Venus surface from about 732K to 737K during its 4-month-long daytime. The process does not violate the Second Law, but is in fact a corollary thereof. It must happen, and you cannot explain energy flows that raise the temperature of the Venus surface using radiation calculations alone.

The second law of thermodynamics states that “Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased.”

Now, I have described a process in which, if it were possible, the sum of the entropies of all bodies taking part in the process is decreased and so it is not possible.

Instead, what happens at the very surface of the water is that the photons merely raise electrons between energy states and then a new identical photon is emitted in a resonating (or “pseudo”) scattering process, wherein there is no conversion of the extra electron energy to kinetic energy in any of the translational, vibrational or rotational degrees of freedom.

For an explanation see my peer-reviewed paper “Radiated Energy and the Second Law of Thermodynamics” published two years ago on several websites in March 2012 and easily found on Google.

I don’t really give a rat’s arse about Venus. Don’t know – don’t care. Are you talking about lapse rates again? It is not relevant to the physics of IR photon absorption and emission in Earth’s atmosphere.

“Instead, what happens at the very surface of the water is that the photons merely raise electrons between energy states and then a new identical photon is emitted in a resonating (or “pseudo”) scattering process, wherein there is no conversion of the extra electron energy to kinetic energy in any of the translational, vibrational or rotational degrees of freedom.”

The reality is that warming of the oceans proceeds from SW radiation penetrating to more or deeper levels – and then turbulently mixed into the mixed layer above the thermocline.

The losses are in IR emissions, as evaporation and in conduction. Increased downward IR flux from a warmer atmosphere – decreases the net flux of IR from the surface up. Ocean temps increase until a new equilibrium is reached.

There is no need for pseudo physical concepts such as pseudo scattering.

I don’t care if you, General, don’t give a stuff about Venus. You can’t make the facts disappear by burying your head in the carbon dioxide hoax.

What happens on Venus and Uranus provides clear-cut evidence that radiation from a cooler atmosphere cannot transfer thermal energy to a warmer surface and thus raise its temperature.

You cannot explain the fact that the Venus temperature is rising by 5 degrees over 4 months of sunlight when you realise that less than 20W/m^2 of direct solar radiation strikes the surface. Can you (or any reader) imagine Earth’s surface rising to 737K with only about 10% as much insolation striking it?

Radiation from the atmosphere cannot assist the Sun in raising the surface temperature of any planet, and Earth is no exception.

Recent estimates place the mean surface temperature of the Moon below 200K. That’s all the Sun can do. And in fact it cannot raise the 1cm thin surface layer of the ocean to anywhere near the observed mean sea surface temperature of about 294K, because that thin 1cm surface layer is transparent, and no black or grey body is transparent. Probably well over 99% of solar radiation passes straight through the surface layer of 70% of Earth’s surface, namely the oceans. It is not radiation which has established and now maintains sea surface temperatures.
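For what it’s worth, a sub-200 K lunar mean is not in tension with Stefan–Boltzmann as such; it falls out of the T^4 nonlinearity on a slowly rotating body. A sketch with assumed round numbers (1361 W/m2 solar constant, 0.11 albedo, a ~100 K nightside), none of which come from this thread:

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S, ALBEDO = 1361.0, 0.11  # assumed solar constant and lunar Bond albedo

# Uniform-temperature (fast-rotator) equilibrium temperature: ~270 K.
t_eff = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

# Slow rotator: each sunlit patch sits near local equilibrium with its own
# cosine of solar zenith angle mu; area-weighting makes mu uniform on (0, 1].
n = 100_000
t_day = sum(
    (S * (1 - ALBEDO) * (i + 0.5) / n / SIGMA) ** 0.25 for i in range(n)
) / n                               # hemispheric mean dayside temperature
t_night = 100.0                     # assumed cold nightside temperature
t_mean = 0.5 * (t_day + t_night)    # ~200 K, well below t_eff
print(t_eff, t_mean)
```

Because emitted flux scales as T^4, a body with a large day/night contrast necessarily has a mean temperature below its uniform-temperature equilibrium value, so the two numbers measure different things.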

So it is totally incorrect to think you can calculate Earth’s surface temperature using S-B calculations.

And if that temperature cannot be raised to the observed value, then it is irrelevant that back radiation is slowing cooling. Slowing cooling from what temperature?

I don’t care if you think the ocean surface temperature is what it is because of penetrating solar radiation to the bottom of the thermocline. In fact it is actually due to the pre-determined thermal profile in the troposphere and the conduction to and fro across the water-air boundary. It would be too much of a coincidence if the ocean surface just happened to rise from below to the “correct” temperature which then corresponded with the calculated thermal profile.

But whatever you believe to be the reason, my point is that the temperature cannot be calculated in the way the models do using the S-B equation, simply because that layer is transparent. So too is the 1cm surface layer of much of the solid surface in the sense that solar energy passes through it and warms lower regions by conduction every sunny morning. So the ocean surfaces and the grass don’t get as hot as a black asphalt surface which is somewhat closer to being a grey body. But the models assume they would.

In fact it is primarily solar energy which maintains sub-surface temperatures on planets that have surfaces, because, even if there is nuclear energy generation or whatever, it is usually not sufficient and needs to be supplemented by solar energy moving up the equilibrium thermal gradient in the crust and mantle, where such gradients can also be calculated from the g/Cp quotient.

You would do well to remember that physics is universal – applying throughout the universe. So if your GH conjecture doesn't work on other planets something is wrong with it.
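The g/Cp quotient invoked above is, in standard atmospheric physics, simply the dry adiabatic lapse rate, and its value is a one-liner (the cp here is the usual textbook figure for dry air):

```python
G = 9.81     # gravitational acceleration, m s^-2
CP = 1004.0  # specific heat of dry air at constant pressure, J kg^-1 K^-1

lapse_rate = G / CP * 1000.0   # dry adiabatic lapse rate, ~9.8 K per km
print(lapse_rate)
```

The observed tropospheric lapse rate is smaller (roughly 6.5 K/km) because moist processes and radiation modify the dry adiabat.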

That’s more or less what I meant by deeper levels. Some 3% of incident SW penetrates to 100m depth in the oceans.
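Taking that 3%-at-100m figure at face value, an effective extinction coefficient follows from Beer–Lambert attenuation. This is a deliberately crude one-exponential sketch; real seawater attenuation is strongly wavelength-dependent, as the absorptance discussion above notes:

```python
import math

FRAC_AT_100M = 0.03                       # quoted fraction reaching 100 m
k = math.log(1.0 / FRAC_AT_100M) / 100.0  # effective extinction, ~0.035 per m

def frac_remaining(depth_m):
    """Fraction of surface SW surviving to a given depth (one-band model)."""
    return math.exp(-k * depth_m)

print(frac_remaining(1.0))    # ~0.97: most SW passes straight through the top metre
print(frac_remaining(10.0))   # ~0.70
```

Even this crude model shows why SW heating is distributed through the upper tens of metres rather than deposited at the skin.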

The real atmospheric physics of the Earth work that way. Solar energy warms the oceans and surface. Increasing greenhouse gases increases the energy content of the atmosphere and everything that proceeds from that.

The increased downward IR in a greenhouse gas rich atmosphere reduces IR losses from the surface. The net flux of IR photons from the surface is reduced.

This is better than introducing novel physical processes such as pseudo scattering into the mix, aye Doug? Or waffling on about Venus?

Doug Cotton, having read as much as I can about CO2 and the physics involved over the last several years, I find your paper to be quite an eye opener. I saw a little gif animation showing the molecule vibrate, hold the radiation, and then emit it (in any direction). Now you’re telling me it doesn’t work like that? I don’t doubt your credentials and you seem to be highly educated and very bright, but I’ve only seen this one account of this particular information. Why are you right and most of the rest of science mistaken? If you could explain that in layman’s terms I’d appreciate it. BTW Robert Ellison should not be categorized as simply a warmist; he is mostly accused of being a skeptic, and these simplistic labels aren’t very useful.

“(2) Imagine a process in an isolated system wherein (one-way) radiation from a cooler atmosphere is assumed to penetrate 2mm beneath the warmer surface of a lake and raise the temperature of that layer of sub-surface water. Does entropy decrease, stay the same or increase?”

You show the absorptance spectrum of water and then conclude: “The water spectrum is ‘black’ from 2 microns to 1.2 mm. The skin layer of the ocean is cooler than a mm below because it radiates and evaporates, but absorbs almost no solar energy.”

To understand the thermal energetics of the massive ocean, you have to INVERT the absorptance in order to recognize the depth to which energy at different wavelengths penetrates. UV and LWIR radiation is almost completely absorbed within a fraction of a millimeter and mainly feeds the surface evaporation. Visible light and adjacent wavelengths are what heat the deeper layers. Neither IR nor microwaves penetrate deep into the bulk mass of the oceans. That is very much NOT a black or gray body!

General thinks he can teach me about how downward radiation from the atmosphere slows that portion of surface cooling that is due to radiation. Of course it does – physics tells us that.

I wrote a peer-reviewed paper saying just that two years ago, General.

Again I quote from my book, the text for which was finalised over a month ago …

“Whilst it is correct to say that the area under the Planck curve represents the total radiative flux for the indicated temperature, it is important to understand that not all of the electromagnetic energy in the emanating radiation actually came from thermal energy in the body itself. In the case of the Earth’s surface, much of the electromagnetic energy in the emanating radiation comes from the electromagnetic energy in radiation from the cooler atmosphere. This radiation is immediately re-emitted by the surface without any of its electromagnetic energy being converted to thermal energy. The incident radiation from a cooler source does, however, slow the rate of radiative cooling of the warmer target because the target does not have to use as much of its own thermal energy in order to fulfil its “quota” of radiation as is determined by the area under its Planck curve.

“So, yes, the so-called “back radiation” does in fact slow that portion of surface cooling which is itself due to radiation. However, because its energy does not go through the complicated process of being absorbed and converted to thermal energy, the back radiation can have no effect on the rate of non-radiative cooling of any planet’s surface. We cannot just add together the radiative flux from the Sun and that from the colder atmosphere, because the latter is all pseudo scattered. The only radiation that can increase the temperature of the surface must come from a hotter source, namely the Sun. The IPCC authors incorrectly assume that we can compound the effect and they explain it away by saying that we only need to consider the “net” effect of radiation entering and leaving the surface. But the radiation from the colder atmosphere has no warming effect and does not transfer any thermal energy to the surface. There is not necessarily any simultaneous corresponding outward radiation causing a net effect. If, for example, it were possible for radiation from the atmosphere to penetrate a little below the surface of warmer water then, if it could increase the temperature of that water, the extra energy would not need to exit the water by way of radiation. If any such radiation could increase the temperature of the warmer water this would be an outright violation of the Second Law of Thermodynamics, because every independent heat transfer must obey that law. There is no dependency between the initial heat transfer (if it could happen) and any subsequent heat transfer back to the atmosphere. Hence it is totally invalid to “excuse” the assumed process with any “net” effect.

“Thus the Solar radiation getting through to a planet’s surface is the only radiation that plays a part in determining a planet’s surface temperature. Neither on Earth or Venus (or any other planet with a significant atmosphere) does that radiation account for the actual observed mean planetary surface temperature, and this fact alone is sufficient to put to rest all the literature and “settled science” which blames global warming on back radiation from the radiating gases in the atmosphere.”

I’ve long suspected CO2 simply works both ways, filtering incoming radiation from the sun during the day and trapping outgoing radiation from the Earth at night. There shouldn’t be any amplification or forcing; the missing heat never makes it to the earth’s surface in the first place. It’s just a bandpass filter.

CO2 in the atmosphere does absorb some solar, but since that absorbed energy cannot increase the temperature of the source, the impact of a doubling of CO2 with respect to incoming is negligible. The amount of solar absorbed may change significantly with solar variations, but how much the solar near-infrared varies is not very well known.

According to NASA, about 20% of incident solar radiation is absorbed by the atmosphere, and a further 30% is reflected by clouds, which, I remind you, comprise the most prolific greenhouse gas, water vapour, along with suspended droplets.

It is these greenhouse gases that do most of the absorbing and also emitting of the same energy back to space. If some of what they emit from the cold atmosphere strikes the warmer surface it cannot raise the temperature of that surface. It can only slow that portion of surface cooling which is itself due to radiation. So it has far less effect than it would have had if the insolation had not been absorbed on the way in.

Preventing incident solar radiation reaching the surface leads to a lower maximum temperature from which cooling starts each afternoon. Slowing cooling does not raise that maximum temperature: it merely causes the warmth of the day to extend longer into the evening (quite pleasant for most of us) but the temperature still gets down to the supported temperature when cooling slows right down in the early pre-dawn hours. This supporting temperature is predetermined by solar flux, gravity, height of troposphere and weighted mean specific heat of gases, with some cooling effect also due to the reduced gradient caused by inter-molecular radiation between the greenhouse molecules like water vapour.

So the overall effect of all that absorption and reflection of insolation by greenhouse gases is a cooling effect, as real world data confirms in a comprehensive study I’m publishing in April.

Doug, how does the surface know if a particular photon has been emitted from a body warmer or colder than itself?
A photon of 2 microns could have been emitted from the colder atmosphere or from a supernova; so how does an electron on the surface know whether to get excited or not?

Something all should consider is the obvious fact that all our temperature measurements showing (natural) global warming are made in the first two metres of the troposphere where weather stations must be placed. But the vast majority of the radiation from the surface passes straight through this mere 2 metres which is obviously a very small percentage of the height of the troposphere.

So the temperatures that we measure are primarily determined by sensible heat transfers due to kinetic energy being shared when molecules collide. That is why, at least in calm conditions, the temperature of the first 2m of air above the ocean surface is very similar to that of the first 1mm of the water surface, because it is only molecules in that 1mm (or in fact far less) which can collide with air molecules. In fact it is the predetermined thermal profile in the troposphere which determines the ocean surface temperature by diffusion and conduction, not the other way around.

Now, the models do not calculate the temperature of that 1mm, fairly transparent, surface layer of water by somehow working out how much of the energy in the warmed ocean thermocline will rise to the surface and what the temperature would thus be, or by any calculations involving sensible heat transfer in the troposphere.

Instead the models do a most ludicrous calculation using the Stefan-Boltzmann law, which is only for black and grey bodies that do not transmit any incident radiation, quite unlike that 1mm ocean surface layer.

If the models were to use S-B calculations in any remotely valid way, they should calculate the percentage of solar radiation that is actually absorbed in the first 1mm (or even less) and use that far, far smaller radiative flux in their calculations. That would then give totally incorrect results, of course, because radiative flux is not the primary determinant of planetary surface temperatures, as is blatantly obvious on Venus.

This is seriously warped. The standard 2m enclosure is just a standardized temperature measurement – nothing more nothing less. It is such an old system that I doubt that there is much rationale to it other than convenience.

The ocean temps are taken from sea water samples at the surface.

The skin temperature is a function of loss of energy from the surface in IR and as vapour loss. This is a fast process that leaves the top microns cooler than the underlying water column. Mixing is a slower process whereby new water is constantly brought to the surface.

It is apparent that there is a one way flow of energy – from the Sun to the oceans to the atmosphere and back to space. The atmosphere does not warm the oceans.

These are such baby physics that it is just incredible that other ideas are even possible.

“In fact it is the predetermined thermal profile in the troposphere which determines the ocean surface temperature by diffusion and conduction,”
—–
Completely erroneous. The flow of sensible and latent heat is quite strongly from ocean to atmosphere and it is the ocean that therefore drives the short-term variability in tropospheric temperatures. This is seen most readily in ENSO fluctuations, which of course reflect the charge-discharge cycle for the IPWP.

What I’m saying is at the forefront of research in this area, and only two of us in the world appear to be onto explaining it.

Of course I am aware of the standard assumptions and all the IPCC propaganda which I quote in my book. However, those assumptions do not allow explanation as to why the rate of surface cooling slows as much as it does in the early pre-dawn hours. That is when the thermal profile in the troposphere is “supporting” surface temperatures.

Furthermore, the whole thrust of the IPCC radiative forcing greenhouse conjecture is based on the incorrect assumption that the Earth’s surface receives additional energy from radiation from the cooler atmosphere, because they realised that the solar radiation alone was not enough (even using absorptivity of 95%) to explain (using SBL) the mean 287K surface temperature.

Well, not only should you not count the back radiation (because it does not increase the temperature to which the Sun could raise the surface) but also you should not use absorptivity of anything like 95% when explaining the temperature of 70% of Earth’s surface – the oceans. It doesn’t matter if their temperatures are measured in the water or just above. The point is that the ocean skin temperature is not at all determined by the radiation passing through that skin, because, by definition, a black or gray body is not at all transparent. The emissivity of the ocean surface skin is nothing like its far lower absorptivity, because it is transparent. But the IPCC explanations very clearly claim that they can use the S-B calculations with all this extra back radiation to explain that “33 degrees of warming.”

That 33 degrees is actually over 40 degrees (caused by the gravito-thermal effect and the supporting thermal profile in the troposphere) with an adjusting cooling effect mostly due to water vapour, which reduces the thermal gradient primarily by inter-molecular radiation and only very slightly as a result of latent heat transfer. The latent heat assumption is another fictional “guess” of climatology. Other radiating gases on other planets also reduce their thermal gradients because of the temperature levelling effect of inter-molecular radiation.

To prove me wrong you would have to explain by some other process just precisely how the required thermal energy gets into the surface of Venus to actually raise its temperature from 732K to 737K during the course of its 4-month-long day. Only one other author has explained this in the same (and correct) way that I have, based on the process described in statements of the Second Law of Thermodynamics and the consequent isentropic state.

In my mind, I explain the lower (essentially no water-vapor feedback) result by considering that in the troposphere, where the water molecules are, any greenhouse effect is eliminated by an equal increase in convection to maintain thermodynamically stable lapse rates. Is my innocent notion correct? I hope it is, because it would so nicely align the measurement with fundamental physics.

It Is wrong to assume that a planet’s troposphere only develops a thermal gradient (aka lapse rate) as a result of upward rising advection from a surface that receives and absorbs incident Solar radiation.

Uranus has no such surface at the base of its nominal troposphere, and no significant incident Solar radiation getting down there through the 350 km of hydrogen and helium above it. But it’s 320K down there – hotter than the Earth, but nearly 30 times further from the Sun.

Piers Forster, in his comments on Ed Hawkins blog, lays his cards on the table in his second paragraph: “Climate sensitivity remains an uncertain quantity. Nevertheless, employing the best estimates suggested by Lewis & Crok, further and significant warming is still expected out to 2100, to around 3°C above pre-industrial climate, if we continue along a business-as-usual emissions scenario (RCP 8.5), with continued warming thereafter.”
That of course is nonsense and a denial of reality so I wrote a comment disproving it. I for one think that trying to play the climate modeling game is a waste of time and accordingly do not get myself into specifics of what Lewis & Crok are saying. Below is my refutation of Piers Forster’s claim:

I take it that your claim of further and significant warming out to 2100 is based on the canonical theory of global warming from the IPCC that totally ignores observations of nature. I hate to tell you, but that is a pseudo-scientific claim. There is no greenhouse warming now and there has been none for the last seventeen years. That is two thirds of the time that the IPCC has existed. There is something wrong with an allegedly scientific organization that denies an observed fact for most of its history. The only responses to it I have seen are laughable attempts to find that missing heat in the bottom of the ocean or in other contortions of reality. And not one of their “scientists” has bothered to apply the radiation laws of physics to the absorption of IR by carbon dioxide.

It so happens that in order to start a greenhouse warming by carbon dioxide you must simultaneously increase the amount of carbon dioxide in the atmosphere. That is necessary because the absorbency of that gas for infrared radiation is a property of its molecules and cannot be changed. Since there has been no warming at all in the twenty-first century, we have to look at twentieth century warmings to see how they meet this criterion.

There are two general warming incidents in that century, plus a separate one for the Arctic. The first warming started in 1910, raised global temperature by half a degree Celsius and then stopped in 1940. The second one started in 1999, raised global temperature by a third of a degree Celsius in only three years, and then stopped. Arctic warming started suddenly at the turn of the twentieth century after two thousand years of slow, linear cooling. There is also a warming that starts in the late seventies and raises global temperature by a tenth of a degree Celsius per decade in ground-based temperature curves. Satellite temperature curves indicate no such warming in the interval from 1979 to early 1997, which makes it a fake warming.
Fortunately we do know what carbon dioxide was doing when each of these warmings started, thanks to the Keeling curve and its extension by ice core data. And this information tells us that there was no increase of atmospheric carbon dioxide at the turn of the century when Arctic warming began. And there was no increase either in 1910 or in 1999 when the other two warming periods got started. Hence, there was no greenhouse warming whatsoever during the entire twentieth century. This makes the twentieth century entirely greenhouse free. The twenty-first century is also greenhouse free, thanks to that hiatus-pause-whatchamacallit thing. And this takes care of your claim above that “Climate sensitivity remains an uncertain quantity.” It is not uncertain at all and has a value of exactly zero. No matter how much carbon dioxide you plow into the atmosphere it will never raise global temperature. To get warming you have to pray for wild cards like the 1910 and 1999 warmings, both of which had natural causes. There were only two of them last century if you don’t count the Arctic. You just may get lucky again in another fifty years or so but who knows? In the meantime, make use of this science and start dismantling the devices and plans you have made for changing the climate. They don’t work but cost a lot.

A quote from early (p. 88): Within more recent years the concept of a climate system has become firmly established. The basis for this view is the realization that the underlying ocean and land surfaces (and the ice, snow, lakes, rivers, and living things that are often found between these surfaces and the atmosphere) are not mere inert boundary conditions, to be taken for granted in seeking explanations for the atmosphere’s behavior. On the contrary, they possess their own internal dynamics, and for them the atmosphere is one of the boundary conditions. Together with the atmosphere they form a larger system that may logically be studied as a single entity.

From a statistical point of view, it is appropriate to view the climate as the distribution (changing over time) of climate variables.

It would be less tautological had he said distribution of weather variables, but the focus is on the “distribution”.

I have written of “climate system” and “distribution of measured variables” in some of my posts. I may have gotten the language from some of Prof. Guttorp’s earlier papers and talks, but for us statisticians the focus on “distributions” instead of means is most common.

Time variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating feedbacks of different strengths in different regions.
Computer models work on 3 principles.
1. CO2 levels causing AGW.
2. Positive feedback to increase the sensitivity to 0.2 degrees of warming per decade.
3. Predicting a logarithmic increase in CO2 leading to an exponential increase in temperature.
Because the North and South Hemispheres [more land mass in the north] heat and cool at slightly different rates there will be some variation around a trendline which starts out at 0.2 degrees and then veers exponentially upwards.
That is all a computer model should be expected to show after a large number of runs with those parameters and the same starting point.
To make it more human-appearing [natural], the computers are then given the best guesses of ENSO, PDO, currents, tides and volcanoes.
Because different models use different guesses the graphs are made to show dips which should never be there in the first place and hence vary from each other.
Catastrophe [say 8 degrees warming] will still strike comfortably within 10 years of each graph.
Please just put in the start point, sensitivity, an averaged component for the various volcanoes etc and give the finished product and argue about it.
Mr Lewis has more than ample skills for this, even Tamino could do it [and no doubt has done it].
Then the discrepancy between the model and the observed data, with all its variables, can be compared.

Actually this by Forster is wrong.
“Nevertheless, their methods indicate that we can expect a further 2.1°C of warming by 2081-2100 using the business-as-usual RCP 8.5 emissions scenario, much greater than the 0.8°C warming already witnessed.”

8.5 W/m2 and 1.75 C ECS gives 4 C warming (3.2 C more than now) under this scenario (twice the IPCC recommended value). Besides which, Lewis distinctly said the second highest scenario (6 W/m2) gives 2 C, which is actually 2.8 C above preindustrial levels; but he was maybe using his TCR, not ECS, not caring what happens beyond 2100.
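As a sanity check on those figures, here is a minimal sketch assuming the usual scaling ΔT = ECS × ΔF / F_2x, with F_2x ≈ 3.7 W/m² per CO2 doubling (the scenario forcings and the 0.8 C observed figure are taken from the comments above):

```python
# Hedged sketch: scale the equilibrium sensitivity by forcing
# relative to one CO2 doubling.
F_2X = 3.7        # W/m^2 per CO2 doubling (canonical value)
ECS = 1.75        # deg C, the Lewis & Crok best estimate
OBSERVED = 0.8    # deg C of warming already witnessed

warming_rcp85 = ECS * 8.5 / F_2X   # RCP8.5 reaches ~8.5 W/m^2
warming_rcp60 = ECS * 6.0 / F_2X   # next-highest scenario, ~6.0 W/m^2

print(round(warming_rcp85, 1))              # 4.0 deg C above preindustrial
print(round(warming_rcp85 - OBSERVED, 1))   # 3.2 deg C more than now
print(round(warming_rcp60, 1))              # 2.8 deg C above preindustrial
```

The outputs reproduce the 4 C, 3.2 C and 2.8 C figures quoted in the comment.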

manacker, isn’t it like burying your head in the sand to use TCR to compute what is required for 2 C rather than ECS? In the end it is ECS that matters. It is no victory if we hold the temperature to 1.99 C on January 1st 2100, if the CO2 in the atmosphere is over 800 ppm by then and still rising, because that means a lot more warming to come. What matters is the ppm, which means the fossil Gigatonnes burned.

If you start off with an exaggerated 2xCO2 climate sensitivity and compound that with an exaggerated future CO2 level, it’s pretty obvious that you will end up with a ridiculously exaggerated theoretical warming.

If your intent is fear mongering, that is what you would do.

But not too many people fall for it, Jim – just those who really want to.

On a lighter note I have just had a terrible feeling about the Mauna Loa CO2 data. I understand it goes up in a sawtooth pattern, but so regular for so long?
So straight with the temperature pause?
So straight with all the known atmospheric variables.
For so long.
It looks clockwork not real. It looks like a computer generation.
Does anyone know of any other long-running CO2-measuring laboratories anywhere else in the world, and whether any of them have graphs we are permitted to see?
If not, why not?

I’ve had similar concerns, but I think it is because the see-saw is between two reservoirs, one of course much larger than the other. Also, is it even settled for sure that what the see-saw represents is seasonal fixing and release by Northern Hemisphere vegetation? It seems that’s been controversial; at least some other mechanisms have been proposed.
=============

Cape Grim on the NW tip of the Australian island state of Tasmania.
The Cape Grim station is run by the CSIRO.
Cape Grim is situated at a latitude just north of 41 degrees South, some hundreds of kilometres south of the latitude of the South African land mass to the west, so it has no land masses for some 6500 km west and upwind of the station until South America / Argentina.
Cape Grim is probably a much better station than Mauna Loa with no volcanic activity or any other sources of potential pollution impact on the measurements other than the usual ever changing ocean influences.

At almost 200 comments in a technical thread on TCR vs. ECS, and so far not a single match with the string “sigm”. No sigma analyses, no discussion of the sigmoid relationship of transients and means in lagged response dynamic systems.

Does anyone want to shine a light on the MAIN point of the topic?

For those not familiar, typically in nature one sees “S”-shaped (i.e. sigmoid) curves in plots of such phenomena as population growth and the mass of trees; double sigmoids form the undulations, oscillations, waves and similar patterns or non-regular pseudopatterns; and of course there is the mark on Superman’s chest.
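For concreteness, the canonical sigmoid is the logistic function; a minimal sketch (the parameters are illustrative only, not fitted to any climate series):

```python
import math

def logistic(t, k=1.0, t0=0.0):
    """Standard sigmoid: flat tail, sharp rise around t0, flat tail again."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

print(logistic(0.0))                                    # 0.5 at the midpoint
print(logistic(-6.0) < 0.01 and logistic(6.0) > 0.99)   # True: the flat "tails"
```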

Black swans are often outliers at the start of a fishtail shape that begins a sigma curve in systems undergoing external forcings and chaotic disruption, and clusters of extreme events are generally predicted in complex systems.

So when something really huge is about to happen in a chaotic system as its former equilibrium state is disturbed and it shifts to a new ordering, we expect to see... Wait, this just in: the pattern of shifting frequencies of extremes across some 50 or more essential climate variables observed currently exactly matches such a shift, and it is very unlikely we will not see the sharp rising portion of the sigma curve following these indicators, with everything for the past half century plausibly described as “the flat portion of the S”.

‘In the Earth’s history, periods of relatively stable climate have often been interrupted by sharp transitions to a contrasting state. One explanation for such events of abrupt change is that they happened when the earth system reached a critical tipping point. However, this remains hard to prove for events in the remote past, and it is even more difficult to predict if and when we might reach a tipping point for abrupt climate change in the future. Here, we analyze eight ancient abrupt climate shifts and show that they were all preceded by a characteristic slowing down of the fluctuations starting well before the actual shift. Such slowing down, measured as increased autocorrelation, can be mathematically shown to be a hallmark of tipping points.’ http://www.pnas.org/content/105/38/14308.full
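The “slowing down, measured as increased autocorrelation” in that quote is easy to make concrete. Below is a minimal sketch of lag-1 autocorrelation applied to two hypothetical series (this is not the paper’s data or code, just an illustration of the indicator it describes):

```python
def lag1_autocorr(x):
    """Lag-1 autocorrelation; critical slowing down pushes this towards 1."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A persistent ("slowed down") series vs. a rapidly fluctuating one:
print(lag1_autocorr([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))   # 0.7
print(lag1_autocorr([1, -1, 1, -1, 1, -1, 1, -1]))     # -0.875
```

A rising value of this statistic in a sliding window is the “hallmark of tipping points” the authors refer to.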

The latest shift was to cooler conditions for some 20 to 40 years from 1998 – and more cooling beyond that is certainly possible. Current conditions are well within late Holocene limits – despite rather silly scare mongering about minor effects attributable to greenhouse gases thus far. There are sufficient real issues without inventing more.

We do have, as I say, an intractable quandary: an unmistakable risk factor for climate shifts as greenhouse gas emissions rise from 4% to 8%, 16%, etc. as economies grow this century – and non-warming – or even cooling – for another decade to three.

I enumerated 50 essential climate variables long months ago here; their metrics are available through WMO and other sources; do you want the analyses and code in R? Perhaps Bayesian Additive Regression Trees in R (BART R)?

Or would you trust any of the authors of the most recent 10,000 climate studies to show you their math and code? Actually, I wouldn’t mind seeing their math and code, too, but I don’t really have the time to correspond with tens of thousands of people in a field notorious for losing the scraps of paper they scribbled their notes on.

Or do you mean on the general properties of sigma curves? A primer on sigmoid analyses? Chaos Theory in general? How low down do you want me to drill, on a blog, reproducing here things you could find in standard texts, journals and web searches?

At almost 200 comments in a technical thread on TCR vs. ECS, and so far not a single match with the string “sigm”. No sigma analyses, no discussion of the sigmoid relationship of transients and means in lagged response dynamic systems.

It appears the natural tendency of threads to grow in a roughly sigmoid pattern (within the usual random variability), the positive feedback mechanisms that dominate at this stage in online discussions, and the atmosphere of collegial respect in a technical thread – offset by the negative feedbacks of stronger moderation and fewer denizens with anything to say that withstands technical requirements – all combined to get the thread up over 250 in less than a day, without substantially increasing its net technical merit.

It is quite reasonable to ask that wild speculation has at least some basis in the science.

This does not – it freely invents climate shifts based on specious arm-waving towards climate variables, the WMO and chaos textbooks.

It is quite reasonable to ask for science where none seems to inform the comments – other than in the purloining of half comprehended jargon and specious claims of – inter alia – 10,000 climate studies.

Just one will do to enable discourse on the relevant points. Otherwise it is all just meaningless and supercilious verbiage.

1) This climate sensitivity is with respect to CO2 only. The climate is sensitive to many other radiative forcings from cloud coverage, aerosols, ocean cycles, etc. IPCC excludes cloud and ocean in its assessment reports but their forcings are probably greater than CO2. (Note the “pause” is sometimes attributed to PDO, ENSO, AMO) The uncertainty in aerosol forcing is about -2.5 W/m^2 (IPCC estimate). This means it could cancel the forcing of CO2 = +1.66 W/m^2. This is obvious from the IPCC AR4 radiative forcings chart but not emphasized.

Even if CO2 doubles, it is uncertain temperature will rise by X amount because that assumes all other forcings remain constant. “All things being equal” The climate is always changing. “All things are never equal”

2) The relevant sensitivity is TCR, not ECS, because the latter takes thousands of years to attain. It is only flawed models excluding deep ocean mixing that assume ECS can be attained in decades.

3) Any TCR estimate above 1 C implicitly assumes positive feedback. Lindzen and Spencer have studies based on satellite data indicating that the climate has strong negative feedback (approx. 6 W/m^2/K). This means it’s possible TCR is less than 1 C.

The easiest blunder to pick in the energy budget diagrams is an apparent assumption that the absorptivity of the surface skin of the oceans is equal to its emissivity.

This skin does not act remotely like a black or gray body, because, by definition, such bodies do not transmit any radiation. In contrast the surface skin transmits nearly all solar radiation and pseudo scatters all radiation from cooler regions of the atmosphere.

It is quite obvious that the IPCC authors think they can calculate Earth’s surface temperature from the absorbed radiative flux. They found they had to boost the flux with back radiation to get anywhere near the “right” answer by counting both lots of radiation and ending up with even more flux than that in insolation at TOA.

But they should have counted perhaps only 0.1% of the flux as being absorbed by the transparent ocean surface skin.

‘When dealing with non-black surfaces, the relative emissivity follows Kirchhoff’s law of thermal radiation, which states that emissivity is equal to absorptivity. Essentially, an object that does not absorb all incident light will also emit less radiation than an ideal black body.’

That Universe Today article, in stating “In general, the duller and blacker a material is, the closer its emissivity is to 1. The more reflective a material is, the lower its emissivity”, leaves out some very important exceptions to the “general” rule. In this case those important exceptions are water, snow, and ice, which all have emissivities very close to one. Snow is highly reflective in shortwave and is a very common surface on the Earth, but its emissivity, which is a longwave measure, is 0.969 – 0.997.

You have absolutely no understanding of when Kirchhoff’s law is applicable and when it is not. If more than 99.9% of incident solar radiation passes through that 1mm (or 1cm) surface layer of the ocean, then that thin layer is absorbing less than 0.1% of that incident solar radiation. It is not a layer of black asphalt over the whole ocean. It is a transparent 1mm (or 1cm) thin body, and transparent bodies do not obey Kirchhoff’s law for the simple reason that they can transmit radiation, not absorbing much at all, yet get warmed by non-radiative heat transfer processes and thus emit far more radiation than they absorb. That is where the energy diagrams and models go haywire.

“In contrast the surface skin transmits nearly all solar radiation and pseudo scatters all radiation from cooler regions of the atmosphere.”

WTF is “pseudo scatter”? The surface skin absorbs longwave radiation from the atmosphere. Being a good absorber at that frequency, Kirchhoff’s law informs us it’s also a good emitter. Water is transparent to shortwave and opaque to longwave. It’s actually the impurities in the water, not the H2O molecule, which absorb and thermalize the solar shortwave. The impurities then kinetically transfer the solar energy to surrounding water molecules. Water is in fact a greenhouse fluid, as it has exactly the same properties that distinguish a greenhouse gas from non-greenhouse gases: it is transparent to shortwave and opaque to longwave. It’s actually an UBER-greenhouse fluid because it is opaque to all longwave frequencies, not just narrow bands like CO2 and CH4. So the solar energy absorbed at depth via shortwave cannot exit at depth as longwave; instead that warm water must be mechanically transported to the surface, where it radiates longwave and, to a much greater degree, evaporates. Evaporation is much more efficient at cooling than radiation where unsaturated air moves across the water surface. Energy flow (and fluid and electrical flows too, for that matter) partitions across possible paths ratiometrically by the impedance of the available paths.
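That last point, flows partitioning ratiometrically by impedance, can be illustrated with a toy parallel-path calculation (the numbers here are hypothetical, not measured ocean fluxes):

```python
def partition(total_flow, impedances):
    """Split a flow across parallel paths in inverse proportion to impedance."""
    conductances = [1.0 / z for z in impedances]
    g_total = sum(conductances)
    return [total_flow * g / g_total for g in conductances]

# A path with one third the impedance carries three times the flux:
print(partition(100.0, [1.0, 3.0]))   # ~ [75.0, 25.0]
```

This is the same rule as current dividing between parallel resistors; the analogy to evaporation vs. radiation at the sea surface is the commenter’s, not a standard result.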

That answer as to whether carbon dioxide has any warming effect can only be clarified with valid physics, and I believe I have done that, along with only one other author, who, I now learn, published something similar to what I did, maybe a month before me, in October/November 2012. We both applied valid physics and came up with the same explanation for all planetary atmospheric, surface, crust, mantle and core temperatures observed in our Solar System.

I’m not going to reproduce my paper or my book (or his) here, but there are numerous comments I’ve written here and on Roy Spencer’s threads that should give anyone a fairly good summary of what we are saying, and why carbon dioxide and water vapour cool rather than warm.

Until climate models come to terms with the fact that they do not account for long-term (i.e. greater than 15 years) observed variability as measured in the data, any sensitivity calculations are fairly pointless IMHO.

I am not sure that any value so calculated is particularly useful. If the range implied by natural variability is high, then any such sensitivity value would just alter the natural range of that rather than be useful on its own.

The facts are that ALL of the models effectively flat-line above 15 years until you get to century periods. That makes it difficult to see how the various medium-term outcomes then stack up.

1. First principles calculations: a simple first-principles approach will get you a lambda of around 0.4 C per W/m^2. Engineering codes used to predict forcing from doubling CO2 get you 3.7 W/m^2 per doubling. 3.7 x 0.4 gets you roughly 1.5 C per doubling.

2. Paleo studies use no GCMs. Paleo estimates range around 3 C +/- 1.5 C.

3. Observational studies: relaxation response from volcanoes, satellite data, land surface data… all get you results with no GCM.
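The arithmetic in point 1 can be checked directly. A minimal sketch, assuming the no-feedback Planck response λ = 1/(4σT³) evaluated at the effective emission temperature (this gives ~0.27 rather than 0.4; higher values typically arise when the calculation is referenced to the warmer surface):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0     # effective emission temperature of Earth, K
F_2X = 3.7        # W/m^2 per CO2 doubling

lam = 1.0 / (4 * SIGMA * T_EFF ** 3)   # no-feedback Planck response
print(round(lam, 2))                   # 0.27 C per W/m^2
print(round(lam * F_2X, 1))            # ~1.0 C per doubling, no feedbacks
print(round(0.4 * F_2X, 1))            # 1.5 C, the comment's arithmetic
```

Either way, a lambda in the 0.27-0.4 range multiplied by 3.7 W/m² lands in the 1-1.5 C band quoted above.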

Steve: OK. So you can get an outline answer by using the mathemagics that the models use without running the models themselves.

Still does not explain the variability in the data sets on time scales from above one year to below a century, which is clearly visible in the measurements to date.

The satellite data shows exactly that

Here we have a high quality filter at annual resolution that shows the way in which the temperatures have evolved since 1979 (per UAH). Care to take a moment to think how you would describe that observed state of affairs?

Just how does the modelled system or its methodology demonstrate the above observed behaviour? What short-term feedbacks do you have to use to create such an outcome in the models? Why do they not have such mechanisms?

Your first principles calculations are wrong because your assumptions are wrong. Planetary surface temperatures are determined primarily by the autonomous thermal gradient which evolves spontaneously at the molecular level in their tropospheres. The gradient (aka lapse rate) does not require any surface warmed by direct solar radiation, or any upward rising advection, or any internal energy generation or energy release through cooling.

If the height of Earth’s troposphere were, say, 10Km more than it is, then the mean surface temperature would be in the vicinity of 30 to 40 degrees warmer than it is. You will not get that “answer” using GH radiative forcing conjectures. Yet you need look no further than Venus and Uranus to see examples of temperatures at the bases of their tropospheres.

The fact that the IPCC, Armour, Lewis or anyone else labels climate sensitivity as “caused by energy imbalance”, “calculated with regional feedbacks”, “instrumental or proxy derived” or “estimated from a slab ocean version of the model” does not mean that the climate sensitivity value obtained by any of these methods is scientifically valid.
In fact, the value of climate sensitivity due to CO2 doubling ends up, by any of those routes, a product of tuning; so ECS in the range [1.5 – 4.5]K is a fictitious value.
Judith Curry, as I sent you 3 pdfs and you have not posted them in your blog, the best thing I can do is to update my “Refuting …”:https://docs.google.com/file/d/0B4r_7eooq1u2VHpYemRBV3FQRjA
with the basic ideas in those 3 pdfs + those 3 more ideas to come, and upload that update into google docs. There anyone could read my ideas: about ECS and attribution uncertainties; about timescales; about reliability of models; about proxy uncertainties and about manipulation. Let’s see if the readers of this blog are then ready for a “technical discussion” that you Judith could host.

blueice2hotsea thanks for commenting the old version of my “Refuting …”: your welcoming “to the IPPC Post Modern World” is very funny.
But I am not resentful with JC. As the readers of this blog are not kindergarten kids, there is no need to create 6 easy-to-read pdfs. A new version of my “Refuting …”, focusing this time on refuting the 1552 pages of the final WGI AR5, will do the job.

For Steven Mosher. I thought you were going to educate me on what the scientific method is. I understand the difference between estimates and measurements. I thought I understood the scientific method. Where am I wrong?

You think that there is one method. You mistake the methods of experimental science for the methods of observational science.
It’s a pretty simple mistake you make.
Note, a lot of OR is observational science. We almost never do experiments.

Take plate tectonics as an example.
In the latter, nobody did experiments to test the theory of how the continents came to be arranged. A theory was proposed to explain what HAD HAPPENED. The theory might be used to make predictions about what the earth will look like in 1 billion years, but nobody tests that science by doing experiments.

Take astronomy for example; you’ll see the same type of reasoning.

Take OR. In OR, before Desert Storm, we were asked to predict the outcome of various strategies and tactics. There was no lab to perform these experiments in. So we applied the best tools we had to the problem and ruled out certain options without doing a single experiment.

The problem is that in the early 20th century certain philosophers tried to define “the scientific method” using a very small sample of science.
They did bad science on the study of science.

Science broadly speaking is understanding how things work. In some cases the best way to do this is with controlled experiments. In other cases where controlled experiments are not possible or too costly we come to understanding using other tools.

Now of course you could argue that geology (plate tectonics) is not a science, or that OR is not a science, or that astronomy is not a science… but then you’ve just turned the debate into a semantic dispute. To which one asks: is semantics a science? Or rather, what experiment did you do to prove that your idea of science was correct?

Steven, IMO it’s not about science or not science, or experiment or not experiment; it’s about taxing everyone who uses energy multiple trillions on very thin evidence — some might even call it supposition. Proxies and models don’t exceed the bar of evidence for me; absorption spectra as lab work do, but I want confirmation from measurements in the wild.

Seeing a model of temp going up doesn’t exceed the bar either. And I have put the effort into looking at the data, and I see the data saying it’s not CO2 causing the temps to go up. They’re going up, but the cause looks to me to be redistribution of heat stored in the oceans.

Is there any way I can get you to use the 100+ million surface measurements as tests of the daily rate of cooling, and not as an input to a GAT series? The answer is in this data; it’s just that no one is asking the right question.

Operations research is clearly a technology – even an art form – rather than science. Science still relies on hypothesis, analysis and synthesis. The hypothesis contains the seed of a testable idea, the analysis relies on more or less reliable data sometimes collected over centuries and the synthesis creates theories based on observation.

“What I am really trying to do is bring birth to clarity, which is really a half-assedly thought-out-pictorial semi-vision thing. I would see the jiggle-jiggle-jiggle or the wiggle of the path. Even now when I talk about the influence functional, I see the coupling and I take this turn – like as if there was a big bag of stuff – and try to collect it in away and to push it. It’s all visual. It’s hard to explain.” Richard Feynman

The theory explains the observations. It is not science without this process – one that involves empirical research pioneered as early as 1600BC. There is no other way of knowing that can be called scientific.

@ Steven Mosher | March 7, 2014 at 2:30 pm |
…
Take plate tectonics as an example.
In the latter, nobody did experiments to test the theory of how the continents came to be arranged. A theory was proposed to explain what HAD HAPPENED. The theory might be used to make predictions about what the earth will look like in 1 billion years, but nobody tests that science by doing experiments.

Take astronomy for example; you’ll see the same type of reasoning.
…
OK, those are good examples. Those have to be confirmed by observation. Even for plate tectonics and astronomy, theories can be created and tested via observation. Think Mercury and Relativity.

So, along those lines, we need to ensure that every hypothesis in climate science is testable by observation. Thanks for that clarity!

Yes, let us take plate tectonics. After the recent major Japanese earthquake which triggered the massive tsunami (I forget the year), the amount the plate moved was MEASURED using GPS. So yes, we measure how much the earth moves. We measure the distance between Europe and North America routinely, and many other such distances.

One can use science to solve problems by means other than the scientific method. But there is only one scientific method.

“In some cases the best way to do this is with controlled experiments. In other cases where controlled experiments are not possible or too costly we come to understanding using other tools”

However, the fact that you have a huge number of people in the field who are not experimentalists means that cheap, basic, controlled experiments are not done, because the majority of ‘climate scientists’ do not have the training, background or imagination to design controlled experiments.

“Take plate tectonics as an example.
In the latter, nobody did experiments to test the theory of how the continents came to be arranged. A theory was proposed to explain what HAD HAPPENED. The theory might be used to make predictions about what the earth will look like in 1 billion years, but nobody tests that science by doing experiments”

I rarely take issue with Mosher – his confused grammar, execrable spelling and chaotic sentence formation render his comments semi-comprehensible at best. One is never sure what it is he has actually tried to say.

In the quote above, however, his meaning seems clear enough and is very WRONG.

Every drillhole, seismic line, outcrop map, airborne geophysical survey, unexpected exposure in an operating mine and so on is an experiment, in part testing the understanding of and predictions from plate tectonic theory. Scale of experimentation is also an obvious factor – e.g. an operating mine is a larger-scale experiment; many simultaneously-operating mines comprise a regional-scale experiment.

If one doubts that (and I think many do), then try this typical question from an accountant:

“before I supply you with the $$capital, tell me what that drillhole will find ?”

Think about it – if the result of the drillhole was *known* before drilling, it would not need to be drilled.

Any prediction of the results (which applied geologists are required to make in order to procure $$capital) depends in part on existing data and in part on WHERE you think the rock pile you are exploring sits within the theoretical tectonic cycle.

Do results disprove the tectonic theory? No – the aim is to test, quantify and modify the theory (amongst many other aims). Is a prediction of continental emplacement 1bn years from now an aim? Of course not – too silly.

“Climate” science theory requires that humidity increases as atmospheric CO2 concentration increases (not linearly) as a positive feedback. So the theory is testable by experimental observation. So where are the empirically observed results for this prediction?

“Take plate tectonics as an example.
In the latter, nobody did experiments to test the theory of how the continents came to be arranged. A theory was proposed to explain what HAD HAPPENED. The theory might be used to make predictions about what the earth will look like in 1 billion years, but nobody tests that science by doing experiments.”

Plate tectonics may have been an unfortunate example of science proceeding without benefit of experimental evidence.

In point of fact, the movements of the various tectonic plates and sub-plates are routinely measured using a variety of techniques, with GPS being the most obvious and apparently the most precise.

(My question, repeated from an earlier thread:)
Using _your own_ methods of Bayesian inference: what is the rate of change of TCR and ECS for each month of the Pause? How much do they go down with every month of flat temperatures?
Can I assume that we all agree that the answer must be a non-zero negative number? How much good news are we getting every month?

You could track conditions in front of and within the umbra of a total solar eclipse, on the ground and in the air at various altitudes, so that you could measure the up/down radiative fluxes and temperature.

Why bother with an eclipse? The Sun goes down every day, and in the upper latitudes the length of day/night changes as the seasons progress. You can also see the transition point where the planet goes from warming to cooling (as well as cooling to warming) as the length of day changes. (This last image shows that the rate of change in temperature as the length of day and night increased shifted from the 50s to 2000, but looks to be changing direction from 2000 to 2011.)

If you want a steady state, an eclipse is more transient than the day/night cycle; also, the ratio of day to night changes slowly, while IR absorption by CO2 should happen almost immediately – just monitor the temp drop after sunset.

Not quite all of it. The mean annual temperature of a desert is higher than that of a moist region if latitude and altitude are similar. The tropical desert, climate type BWh, has the highest mean annual temperature of all climate types.

I’m not sure. I took a quick look at a couple of 10×10 lat/long blocks in North Africa and southeast Asia, all from 10–20°N, and most of the tropical locations had higher average temps (and lower Tmax) but a narrower daily range (20°F) vs the deserts of North Africa (25°F).

” A communications professor at the University of Houston studying the issue found anonymity contributes to less civil discourse. He looked at online comments in newspapers for more than a year and half and found 53 percent of comments were uncivil in papers that allowed anonymity. That percentage dropped to 29 percent when newspapers required names or links to Facebook accounts.”

The state of climate science reminds me of the state of the field of underwater acoustics (my field), about 100 years ago.

This duality is interesting and perhaps even informative and can help frame the state of climate modeling.

“Back then” in acoustics we were assuming very simple things, like the attenuation of the signal being proportional to the square of the range (spherical spreading), or 20*log10(R). We knew things were not that simple, but didn’t have the foggiest idea why. At low frequencies in shallower water at longer ranges the attenuation was often less, sometimes far less, and cylindrical spreading seemed to fit better. Then it was observed that instead of a monotonic loss of signal as range increased, there were often abrupt cut-offs (shadow zones) and abrupt returns of the signal (boundary interactions and convergence zones), but the physics was not understood, and so indistinguishable from magic. Folklore soon developed – sensing was better here than there, in the morning vs. night, summer vs. winter, etc. We knew propagation was complicated, and some basic relationships were understood or approximated, but these often broke down and nobody really knew why.
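To make concrete how different the two spreading assumptions above are, here is a minimal sketch of geometric transmission loss. The function name and the 10 km example range are illustrative, not from any acoustics text quoted here:

```python
import math

def transmission_loss_db(r, mode="spherical", r_ref=1.0):
    """Geometric spreading loss in dB relative to r_ref (same units as r)."""
    if mode == "spherical":    # intensity falls as 1/R^2  ->  20*log10(R)
        return 20 * math.log10(r / r_ref)
    if mode == "cylindrical":  # intensity falls as 1/R    ->  10*log10(R)
        return 10 * math.log10(r / r_ref)
    raise ValueError(mode)

# At 10 km the two assumptions differ by 40 dB -- a factor of 10,000 in intensity,
# which is why getting the spreading law wrong mattered so much.
print(transmission_loss_db(10_000, "spherical"))    # 80.0
print(transmission_loss_db(10_000, "cylindrical"))  # 40.0
```

The 40 dB gap at range is the scale of the mystery the early acousticians faced.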

I get the sense that this is approximately where we are in climate science.

Fast forward 100 years and now we have computers, highly detailed descriptions of the ocean including range-dependent sound velocity profiles and reasonably accurate bottom maps and types, bottom loss vs. angle and frequency, sub-bottom characterizations, the parabolic equation, small sampling grids, and now we can, kind of, predict propagation. The physics behind what is going on is pretty well known now. We do careful experiments and measurements sort of agree with predictions. Measurement and theory are not always exact matches (that is being kind) and differences are hand-waved off because we didn’t know this or that with enough precision. Maybe the tide was out and the depth was actually a meter less than we thought. Oops. Often we’ll go back (after the fact) and tweak this or that parameter or input variable and show that, lo and behold, our predictions match observations. So it’s not perfect, but at least we’re tweaking parameters that are built into a general universal physical framework and the basic physics is pretty well understood.

Very similar to those fundamental assumptions in the early days of acoustics is climate science’s claim of a very simple relationship between temperature and CO2, something like T = T0 + TCR·log(CO2/CO2_0)/log(2).
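That zeroth-order relation is easy to sketch in code. The 280 ppm baseline and a TCR of 1.8 are illustrative placeholder values for this sketch, not figures asserted anywhere in this thread:

```python
import math

def simple_temp_anomaly(co2_ppm, co2_ref=280.0, tcr=1.8):
    """Zeroth-order relation: tcr degrees of warming per doubling of CO2.
    co2_ref and tcr are illustrative placeholders, not measured constants."""
    return tcr * math.log(co2_ppm / co2_ref) / math.log(2)

print(round(simple_temp_anomaly(400.0), 2))  # ~0.93 above the 280 ppm baseline
print(round(simple_temp_anomaly(560.0), 2))  # 1.8 -- exactly one doubling
```

Like the spherical-spreading law in early acoustics, this is a one-knob formula for a system that is plainly more complicated than one knob.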

Presumably we’re way past this in climate model sophistication, however, we’re observing the real world and it’s not behaving the way we expect. The more data we get the less well it agrees.

So we add a bunch of knobs and tweak them so that the hind-cast looks good and it still doesn’t predict the future very well at all. If you add enough knobs you’re no longer doing physics, you’re doing curve-fitting, and that really doesn’t tell you a single thing about what’s really going on – i.e. the physics.
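The knobs-versus-physics point can be made concrete: a fit with enough free parameters reproduces the “hindcast” exactly and still says nothing about the future. A sketch using exact polynomial interpolation on made-up numbers (nothing here is climate data):

```python
def interpolating_poly(xs, ys):
    """Return the unique degree-(n-1) polynomial through n points (Lagrange form)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# "Hindcast": a noisy upward trend over 8 time steps (invented values).
xs = list(range(8))
ys = [0.10, 0.35, 0.10, 0.45, 0.30, 0.60, 0.40, 0.75]
p = interpolating_poly(xs, ys)

# With 8 free "knobs" the fit over the training period is exact...
print(abs(p(3) - ys[3]) < 1e-9)  # True
# ...but two steps past the data the curve bears no relation to the trend.
print(p(10))  # runs into the hundreds -- wildly off the ~0.9 the trend suggests
```

Perfect agreement with the past, achieved by adding parameters, is exactly what carries no predictive information.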

It appears to me that climate science is where underwater acoustics is about 100 years ago. This is not a dig. This is simply a statement of fact. It is a far harder problem and much more difficult to verify a hypothesis since to do so means maybe waiting 50-100 years.

Take the pause for instance. Not predicted by something like 98% of climate models. The pause is the bane of the climate modeler. So now they are scrambling, but the important distinction is that they are scrambling to add unknown dependent variables into the basic physics to see if this will “fix it.”

Nobody seems to be talking much about negative feedback mechanisms. These are not favorable to the meme of runaway warming, so perhaps it’s lack of interest and funding. Ocean algae, trees, grass. All of these respond to CO2 and consume CO2. That’s a negative feedback. Have models run these out to 2100?

And our modeling doesn’t seem to take into account the fact that we have a paleolithic and Holocene climate history of 1,000,000 years or more that we don’t seem to understand in the least. Why does the world spend 95% of its time in an ice age? Why only 5% in interglacial periods? What mechanism caused the transitions back and forth? We are poised on the cliff of the next glacial period… will CO2 tip us in or keep us out? We don’t have the foggiest idea from the reading I’ve done so far. (Please provide a reference if you have one; I’d love to see it.) If we don’t understand what happened in the past and why, can we have a hope of understanding what will happen in the future?

We don’t even know what ECS is!!! One single number. More to the point, we don’t even seem to know whether it is a single number, or varies with other dependent variables.

This all tells me that we are in a process of discovery of the basic physics of climate science, and that the basic physics is not at all understood.

Your comparison of the present-day state of knowledge in “climate science” to underwater acoustics 100 years ago unduly slights the latter. At least the fundamental physics of acoustic wave propagation was well established back then; what remained to be learned were real-world effects of density stratification and other factors of the medium. By contrast, there is scarcely an analogous comprehension of the physics that governs the climatic quasi-steady-state of a complex system that never achieves thermodynamic equilibrium. The radiative greenhouse paradigm, with its ubiquitous reliance on the Stefan-Boltzmann equilibrium black-body relationship between energy and temperature, doesn’t begin to provide a comprehensive scientific framework for understanding a capacitive system whose major component–the oceans–transfers thermal energy primarily by evaporation, not radiation.

It is of course your prerogative whether or not you choose to believe me and learn from my published paper “Radiated Energy and the Second Law of Thermodynamics” and my book “Why it’s not carbon dioxide after all” that is based on my paper “Planetary Core and Surface Temperatures” now withdrawn from PSI because of my disagreement with the radiative “physics” of Joe Postma and Pierre Latour.

To my knowledge, only one other author has put forward the same valid explanation of planetary atmospheric and surface temperatures, although I have extended it to explaining all crust, mantle and core temperatures of planets and satellite moons such as our own. My hypothesis is supported by all known observed and estimated planetary temperature data. It explains, for example, exactly how the required energy gets into the surface of Venus in order to actually slowly raise its temperature by five degrees over the course of its 4-month-long day. I have explained why the base of the nominal Uranus troposphere is hotter than Earth’s surface, even though there is no significant energy input from internal generation or direct solar radiation.

My interest in physics dates back to when I was awarded a scholarship in physics by Prof Harry Messel and his team at Sydney University, under whom I studied for my first degree with a major in physics. I subsequently turned to more lucrative business ventures, operating an academic tuition service (where I personally helped tertiary physics students) and writing medical, dental and mathematics software from which I have earned several million.

In the last four years (in semi-retirement) I have turned my attention to comprehensive study of the very latest concepts in physics pertaining to thermodynamics (especially the Second Law) and radiative heat transfer. No one has successfully rebutted what I have written in numerous comments and the above papers. But unless people are willing to learn from me, I will not waste my time.

Very, very briefly, I have proven beyond reasonable doubt that the gravito-thermal effect is a reality, and I have debunked all known papers and articles that attempted to prove isothermal conditions would prevail in any troposphere, even one without so-called greenhouse gases. Because of this autonomous thermal gradient (which is a direct corollary of the Second Law) I have explained what happens in all planetary tropospheres and just exactly why atmospheric and surface temperatures are what is observed, and how the energy gets there primarily by non-radiative processes, just as it gets to that thin layer of the ocean surface. It is only molecules from the very top of that thin surface layer of oceans (and solid crust) which affect the temperatures we measure for climate records. It is not radiation which is the primary determinant of such temperatures, but non radiative processes transferring energy that has been absorbed elsewhere, both from above and below. And perhaps the most remarkable deduction that I have made is that even subsurface temperatures are governed by the gravito-thermal effect, and solar energy can “creep” up the thermal profile above and below planetary surfaces as it restores thermodynamic equilibrium in accord with the Second Law. This is a whole new paradigm.

Then prove me wrong, Richard, with valid current physics, not the 19th century Clausius (“hot to cold”) statement of the Second Law which has been discarded by physicists. Others can judge if I’m heeding what you say.

I think you’ll find that I have responded to nearly everyone, but do please link me to any comment on any climate blog (directed to myself) that I may not have noticed.

So yes, I’m throwing down the gauntlet to you, Richard, to prove your claims that my physics is wrong in any way whatsoever.

Why would I bother? You rather so plainly do NOT understand this at all. Trying to follow you down the rabbit hole is not something I wish to engage in. Please retire to your lair, muttering about how unfair this all is….

Highly relevant to the issue of sensitivity, indeed the whole validity or otherwise of the greenhouse conjecture, is the trillion dollar question, was Loschmidt right about the gravito-thermal effect? If so, the sensitivity to water vapour, carbon dioxide and its colleagues is negative not positive.

Josef Loschmidt was a brilliant physicist who was first in the world to estimate reasonably closely the size of air molecules – not bad in the 19th century. He obviously thought a lot about molecules, kinetic theory, and how molecules exchanged PE and KE in free flight. Like me, he came to the conclusion that a thermal gradient evolves autonomously at the molecular level in a gravitational field.

But I take it that you, Judith Curry, and others here think Loschmidt was wrong on that point. Of course you can’t prove he was, and the evidence in all planets, the outer crust of Earth, the Moon’s core etc all supports what he said. But, no, you think he was wrong. Try proving it and I’ll show you where you are wrong, or any of the papers that purport to do so.

Maybe you should run a blog on it, Judith and, if so, I’d be happy to write an article.

The lapse rate combines hydrostatic and thermodynamic considerations in the well known first approximation. The fact that warm air rises, expands and cools is a fact of life. The maths of the first order approximation is very simple. I went into it a little bit earlier – no need to make a song and dance of such simple math.
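The “very simple math” of the first-order approximation is just the dry adiabatic lapse rate, g/Cp; a two-line sketch with standard textbook constants:

```python
# Dry adiabatic lapse rate from the first-order hydrostatic/thermodynamic
# balance: dT/dz = -g / Cp (no condensation, no radiation).
g = 9.81             # gravitational acceleration, m s^-2
cp_dry_air = 1005.0  # specific heat of dry air at constant pressure, J kg^-1 K^-1

lapse_rate_per_km = (g / cp_dry_air) * 1000.0
print(round(lapse_rate_per_km, 2))  # ~9.76 K per km of altitude
```

That is the whole derivation at first order; everything contentious in this exchange is about what modifies it (latent heat, radiation), not about this balance itself.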

It still has nothing to say on the emission and absorption of IR photons in the atmosphere. The fact that Doug still can’t seem to understand that the second law is derived from macrostate statistics is the fundamental flaw in his thinking. And it is such a basic error. Photons move both ways between atmosphere and surface – and the net is up. Necessarily.

There is nothing mysterious about heat moving from the Sun, to the surface, to the atmosphere and back to space. Nor is there anything mysterious about a two way flux of photons between atmosphere and surface. The 2nd law applies if the net is up. It is just accounting after all.

Well, General, the troposphere is in the realm of “macrostate statistics” such as I use, but a fundamental assumption of Kinetic Theory is that the dynamics of the molecules can be treated classically. Feel free to edit Wikipedia if you think otherwise.

Besides, warm gas rising on Uranus is not a fact of life (or no life) out there. There’s no significant incident solar radiation or internal energy reaching the base of its nominal troposphere where it’s hotter than here on Earth, even though nearly 30 times further from the Sun.

On Venus, when the Sun rises, most of its incident energy is absorbed by carbon dioxide in the colder upper half of the troposphere and above. Some of that now-warmer gas “falls” to the far hotter surface, and that’s how we can explain why the surface temperature rises by 5 degrees over the course of the 4-month-long Venus day.

Oh, and there’s no hot air rising in the outer 10 km of Earth’s crust, where a borehole in Germany enabled temperature measurements that conformed with the expected gravito-thermal gradient.

You know, General, you really don’t have to teach me what climatologists teach their climatology students who teach their climatology students … I’ve spent thousands of hours learning about all the details of the hoax, so that I could be watertight in my book when debunking it.

So please keep your arguments to physics in which you hopefully have at least a pass degree in order to participate in this somewhat esoteric discussion of 21st century breakthroughs in thermodynamics and the physics of radiative heat transfer. Otherwise you’ll be out of your depth, as have been hundreds of climatologists I’ve debated over the last three or four years on several climate blogs.

For a very comprehensive analysis of the climatology conjecture that the Second Law can still apply to the “net” of two or more independent processes, see my peer-reviewed paper “Radiated Energy and the Second Law of Thermodynamics” published on several websites two years ago, and which you’ll easily find on Google.

Then consider the fact that the second law of thermodynamics states that “Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased.”

Notice the words “every process” (singular) and the word “proceeds” which rules out going backwards first before you go further forwards. Radiation does not have to be two-way. Just consider radiation from a colder atmosphere penetrating just below the surface of warmer water. Is it absorbed just below the surface like solar radiation (both visible and IR) can be down to 10 metres or more in the ocean thermoclines? If not, then why not, and what does happen?

Doug, I get the Magratheans to build me a planet that has a surface of titanium oxide, an atmosphere of CO2/SO2/H2S, and large amounts of sulphur on the ground. Give it a nice, slow rotation, and come daybreak the sulphur will warm up, sublimate, and then move to the dark side. Temperature will rise and then fall as a function of light flux. I am not sure what the sublimation temperature of sulphur is on Venus, but it is in the right ballpark.

You have not explained how, when most of the incident solar energy gets absorbed in the upper troposphere and above (all of which is relatively cold) it then moves up the temperature gradient and into the surface, warming the far hotter surface to a higher temperature still during the planet’s daytime.

Why should I be interested in other people’s “opinions” as if that is what decides the issue. Physics speaks for itself when correctly applied. If you wish to write a comment correctly applying physics then I will be quite interested to see what you have to say based on correct physics and supported by empirical data – as all of what I write is.

But, Richard, opinions such as you enunciated in your last comment are not of relevance to my thoughts, and defamatory slurs are like water off a duck’s back.

Doug: You have not convinced me that you have a clue what you are talking about. Nor have you convinced many others. That apparently does not prevent you from thread-bombing everybody else, trying to SHOUT that you are right.

Richard, I will be investing thousands over the next 8 to 10 years in this campaign to bring home the truth about why it’s not carbon dioxide after all.

I am not the slightest bit interested in assertive statements such as you make about me being “wrong.”

No one has proved me wrong. No one has proved Teofilo wrong when he has said the same as myself quite independently. No one has proved Loschmidt wrong.

Provide your full name, Richard, and state your acquired qualifications in physics and the branch thereof in which you specialise. Thermodynamics and radiative heat transfer are my specialties, in which I have written papers.

That really depends on how you fill in what will happen after the ‘full kernel’ (which does not change with new data) bit, up to the current day (the latest actual figure is the dot).

It looks like the land/ocean curves are about to swap sides of the global figure again, and thus the ocean will get relatively warmer. Given the other historical observations, though, it does not look like it will be a large swing.

And you, Richard, just can’t explain any reason based on valid physics which negates anything I have said. I can easily detect that your understanding of thermodynamics and radiative heat transfer is not up with the 21st century advances in this field.

For Steven Mosher. About your many “methods of observational science” discussion with Jim Cripwell, please note that in the magazine Science, 4 Oct. 2013, in an article by R.A. Kerr, it is said: “‘Equilibrium climate sensitivity is kind of an odd diagnostic, since it represents something that has never been, and will never be, observed in nature,’ writes modeler and IPCC author Gerald Meehl of the National Center for Atmospheric Research in Boulder, Colorado, in an e-mail.”
Maybe, Steven Mosher, you need to explain to Gerald Meehl your many “methods of observational science” for the ECS parameter.

Thinking some more about this overnight, what Steven Mosher fails to realise is that different problems can be solved using a variety of scientific techniques. In my own career, this includes problems solved using the various techniques of operations research.

But in the case of CAGW, there is only one way to show whether the hypothesis of CAGW is correct or not, and that is to use the scientific method — the way the majority of problems are solved in physics. CAGW will remain a hypothesis unless and until climate sensitivity has actually been measured, something that is beyond the capabilities of current technology.

Operations Research is very different from applying the well defined laws of physics. Climatologists get wrong answers in their “fissics” because they do not understand the laws, the limitations and the prerequisites for those laws, or indeed much about thermodynamics and the physics of radiative heat transfer.

I can detect this lack of understanding in the comments they make, which are rarely doing any more than reiterating the pseudo fissics they were taught in climatology school and would never have learnt in a school of physics.

Because taking a high-quality annual low-pass filter to either anomalies or raw data will produce the same graph? True, the left-hand scale will have different numbers on it: anomalies will have a zero at the centre, raw would have some positive value. Other than those scaling offsets the outcome will be the same.

What this does show in detail is how the tropics have reacted over the last 34 years. It allows a direct comparison between land and ocean in a way that allows for eyeball-style predictions. I could work this up into an expert predictor based on past cyclicity, but that will wait for now.

What I regard as surprising is the interchange between land and ocean around the central global figure. AFAIK that has no current explanation.

In general I find it is climatologists who are desperate, because their income depends on the carbon dioxide hoax, which I will ensure completely crumbles within 8 to 10 years. You have no idea what I plan in publicity, advertising and litigation. If you live by fraud you will pay for the deception you cause.

Dear everyone, I realize after the fact that I misinterpreted the title of this thread: it wasn’t meant as a fully open technical thread, but rather an open technical discussion about the article in question. My apologies.

The 30 °C limit happens during the afternoon and it’s non-linear. If ocean heat capacity were the primary mechanism, there wouldn’t be such a sharp cutoff. The ocean isn’t the moderator for tropical SST.

This paper is often misquoted in climatology circles as supposedly rebutting the existence of the Loschmidt gravito-thermal effect. But does it? In fact the conclusions are inconclusive. The existence of the gravito-thermal effect debunks the greenhouse radiative forcing conjecture altogether.

The authors assert that “convective turbulent motions are now taken into account, albeit implicitly. Their role is to mix the potential temperature field, to strive to homogenize it.”

This is not necessary, as there is no reasonable evidence of such convective turbulence existing on some other planets, notably Uranus. Instead it is the actual movement of molecules between collisions which provides the random mixing they claim requires advection. (They are not even precise in their terminology, because “convection” can include diffusion.)

They deduce in fact two conclusions using different constraints. However the constraint that leads to their deduction of isothermal conditions is not appropriate. It involves assuming constant enthalpy and this implies that there is a compensating increase in mean molecular total energy that is offset by the reduction in density at higher altitudes. This means that the molecules would be retaining equal kinetic energy, whilst gaining gravitational potential energy, that being offset by the reduction in total numbers so that total enthalpy remains constant. There is no justification for this assumption and the constraint is not a reality.

Furthermore, they introduce “constancy of the integrated potential temperature as a single additional constraint” and then they admit “but this choice is of course open for debate.” Well, of course it is open for debate because there is no logic supporting it. What they are doing is trying to find a reason for the wet lapse rate being less than the dry one. They know that isentropic conditions lead to the dry rate (-g/Cp) but what they don’t realise is what I have explained in my book about the temperature levelling effect of inter-molecular radiation.

As I have said all along, the empirical evidence that water vapour cools rather than warms supports the fact that the gravito-thermal effect produces the dry gradient which is then reduced in magnitude by the inter-molecular radiation, not primarily the release of latent heat.

All in all, this is a very wishy-washy paper. Whilst their computations are OK, they do not engage in any detailed discussion or reasoning as to what would be the correct constraints. It would have been appropriate to start by considering a sealed, perfectly insulated cylinder of ideal non-radiating gas. If they had done this there would have been no ambiguity about the constraints or any need to discuss advection. This is the approach I have taken in my papers and the book. Once we accept that the gravito-thermal gradient evolves spontaneously at the molecular level without any need for advection, then it is not hard to extend the concept to a troposphere which has a propensity to approach such a thermal gradient, modified by inter-molecular radiation.

“convective turbulent motions are now taken into account, albeit implicitly. Their role is to mix the potential temperature field, to strive to homogenize it.”

This is not necessary, as there is no reasonable evidence that such convective turbulence exists on some other planets, notably Uranus. Instead, it is the actual movement of molecules between collisions which provides the random mixing they claim requires advection. (They are not even precise in their terminology, because “convection” can include diffusion.)


(Reposted from Lucia):
I’m curious why I have not seen the following reaction from believers in AGW to Nic Lewis’s paper: “Oh, thank God! I don’t know if this is right, but I pray that it is. It would be such incredible good luck. Mitigation was failing so badly, no one is serious, no one is doing it nearly fast enough. This would be such a gift: it would be like _several_ _successful_ Kyoto accords. Just like that, we have more time, total damage is much less severe. A wonderful reprieve.”
Why do all the accounts look like this: “Lewis’s paper just illustrates one of several possibilities, that climate sensitivity may be _slightly_ lower than we thought.” Take a look – they all add the word “slightly”, or “a little”, or “a tiny bit”. Or, “we’d have an extra _few years_.” Remember that they are describing Lewis’s value, which is about a third smaller, and where very high sensitivities are almost wiped out.
Isn’t this (potentially) great news for everyone?
I don’t mean to be cynical. I imagine that they have already set their minds on severe mitigation, and therefore their only reaction is, “Enemy. Trying to stop us. Resist.” They can’t see anything else.
Of course, if some of them really like the de-industrialization that serious mitigation requires, low climate sensitivity would be a really annoying setback.

An isothermal profile in a gravitational field is not isentropic, for the simple reason that, firstly, you are assuming all molecules have the same kinetic energy, while, secondly, we know the ones at the top have more gravitational potential energy.

So, consider the following initial state …

Molecules at top: More PE + equal KE

Molecules at bottom: Less PE + equal KE

In such a situation you have an unbalanced energy potential because the molecules at the top have more energy than those at the bottom. Hence you do not have the state of maximum entropy, because work can be done.

Let’s consider an extremely simple case of two molecules (A & B) in an upper layer and two (C & D) in a lower layer. We will assume KE = 20 initially and give PE values such that the difference in PE is 4 units …

At top: A (PE=14 + KE=20) B (PE=14 + KE=20)

At bottom: C (PE=10 + KE=20) D (PE=10 + KE=20)

Now suppose A collides with C. In free flight it loses 4 units of PE and gains 4 units of KE. When it collides with C it has 24 units of KE which is then shared with C so they both have 22 units of KE.

Now suppose D collides with B. In free flight it loses 4 units of KE and gains 4 units of PE. When it collides with B it has 16 units of KE which is then shared with B so they both have 18 units of KE.

So we now have

At top: B (PE=14 + KE=18) D (PE=14 + KE=18)

At bottom: A (PE=10 + KE=22) C (PE=10 + KE=22)

So we have a temperature gradient because mean KE at top is now 18 and mean KE at bottom is now 22, a difference of 4.

Note also that now we have a state of maximum entropy and no unbalanced energy potentials. You can keep on imagining collisions, but they will all maintain KE=18 at top and KE=22 at bottom. Voila! We have thermodynamic equilibrium.
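The collision bookkeeping above can be sketched in a few lines of Python. This is only the toy arithmetic of the four-molecule example: the helper names and the PE difference of 4 units are assumptions of this illustration, not a physical model.

```python
# Toy sketch of the four-molecule example: two collisions starting
# from uniform KE = 20, with a PE difference of 4 between the layers.
dPE = 4  # PE difference between top and bottom layers (toy units)

def collide_after_fall(ke_faller, ke_target):
    """Faller converts dPE of PE to KE in free flight; the pair then shares KE equally."""
    total = (ke_faller + dPE) + ke_target
    return total / 2, total / 2

def collide_after_rise(ke_riser, ke_target):
    """Riser converts dPE of KE to PE in free flight; the pair then shares KE equally."""
    total = (ke_riser - dPE) + ke_target
    return total / 2, total / 2

# A falls from the top onto C; D rises from the bottom to B.
ke_A, ke_C = collide_after_fall(20, 20)   # both end with KE = 22 at the bottom
ke_D, ke_B = collide_after_rise(20, 20)   # both end with KE = 18 at the top

print("top KE:", ke_B, ke_D)        # 18.0 18.0
print("bottom KE:", ke_A, ke_C)     # 22.0 22.0
```

Further collisions computed the same way leave the 18/22 split unchanged, matching the claimed equilibrium.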

But, now suppose the top ones absorb new solar energy (at the top of the Venus atmosphere) and they now have KE=20. They are still cooler than the bottom ones, so what will happen now that the previous equilibrium has been disturbed?

Consider two more collisions like the first.

We start with

At top: B (PE=14 + KE=20) D (PE=14 + KE=20)

At bottom: A (PE=10 + KE=22) C (PE=10 + KE=22)

If B collides with A it has 24 units of KE just before the collision, but then after sharing they each have 23 units. Similarly, if C collides with D they each end up with 19 units of KE. So, now we have a new equilibrium:

At top: C (PE=14 + KE=19) D (PE=14 + KE=19)

At bottom: A (PE=10 + KE=23) B (PE=10 + KE=23)

Note that the original gradient (with a difference of 4 in KE) has been re-established as expected, and some thermal energy has transferred from a cooler region (KE=20) to a warmer region that was KE=22 and is now KE=23. The additional 2 units of KE added at the top are now shared as an extra 1 unit on each level, with no energy gain or loss.
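The second collision sequence can be checked the same way. Again, this is only the toy arithmetic of the example above, with assumed helper names; the solar input is represented simply by resetting the top-layer KE to 20.

```python
# Toy sketch of the second stage: top layer boosted from KE = 18 to
# KE = 20 by solar input, bottom layer at KE = 22, PE difference of 4.
dPE = 4

def share(total_ke):
    """Two colliding molecules share their combined KE equally."""
    return total_ke / 2

# B (top, KE=20) falls onto A (bottom, KE=22): (20+4) + 22 = 46 -> 23 each.
ke_B = ke_A = share((20 + dPE) + 22)
# C (bottom, KE=22) rises to D (top, KE=20): (22-4) + 20 = 38 -> 19 each.
ke_C = ke_D = share((22 - dPE) + 20)

print("bottom KE:", ke_A)           # 23.0
print("top KE:", ke_D)              # 19.0
print("gradient:", ke_A - ke_D)     # 4.0, re-established
```

The 2 units added at the top end up as 1 extra unit on each level, with the KE difference of 4 restored, as the text describes.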

That represents the process of downward diffusion of KE to warmer regions which I call “heat creep” as it is a slow process in which thermal energy “creeps” up the sloping thermal profile. It happens in all tropospheres, explaining how energy gets into the Venus surface, and explaining how the Earth’s troposphere “supports” surface temperatures and slows cooling at night.