Is Earth in energy deficit?

Unlike many fiscal budgets, earth’s energy budget is widely believed to be in surplus.

With each year of increasing greenhouse gas concentrations, earth is modeled to send less energy outward than it receives from the sun. This energy surplus, as understood, continues until the global average temperature rises enough to restore balance by emitting more energy in accordance with the Stefan-Boltzmann law. Indeed, the concept of ‘missing heat’ implies that a surplus of energy exists to be missed. And the NASA GISS Model E projects a trend of increasing energy surplus. The Model E runs for the “Dangerous Human-Made Interference” (2007) A1B scenario (available at link) yield this projection for net radiance at the top of the atmosphere:
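The restoring mechanism can be sketched with a back-of-envelope calculation. This is a minimal illustration, not from the post itself: it linearizes the Stefan-Boltzmann law around the standard ~255 K effective emission temperature (an assumed textbook value) to estimate the no-feedback warming needed to offset a given imbalance.

```python
# Rough estimate of the warming needed to radiate away a TOA imbalance,
# using the linearized Stefan-Boltzmann law: dF/dT = 4*sigma*T^3.
# The 255 K effective emission temperature is a standard textbook value.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def warming_to_restore_balance(imbalance_wm2, t_emit=255.0):
    """No-feedback temperature rise that offsets a radiative imbalance."""
    dF_dT = 4.0 * SIGMA * t_emit ** 3  # W m^-2 per K of warming
    return imbalance_wm2 / dF_dT

print(round(warming_to_restore_balance(1.0), 2))  # ~0.27 K per W/m^2
```

So, absent feedbacks, a persistent 1 W/m^2 surplus would be erased by roughly a quarter degree of warming at the emission level.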

The CFSR is the first reanalysis from NCEP to use radiance observations from the menagerie of past satellites. The CFSR also uses the AER RRTM radiative model to fill in the gaps of satellite data. The RRTM is the same radiative code used by many climate models. By subtracting the top of the atmosphere outgoing infrared from the net shortwave radiative flux, one arrives at the net radiative flux. And by dividing the outgoing shortwave radiative flux by the incoming shortwave radiative flux, one arrives at albedo. Examples for March of 1979 appear as:
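The two derived fields described above are simple arithmetic on the flux fields. A minimal sketch, assuming gridded TOA fluxes are already loaded as arrays (the variable names and numbers here are illustrative, not actual CFSR variable names):

```python
import numpy as np

# Illustrative 2x2 grids of TOA fluxes, all in W/m^2 (made-up values)
net_sw = np.array([[240.0, 300.0], [180.0, 260.0]])  # net shortwave absorbed
olr    = np.array([[235.0, 280.0], [200.0, 250.0]])  # outgoing longwave (IR)
sw_up  = np.array([[100.0,  90.0], [130.0, 105.0]])  # reflected shortwave
sw_dn  = np.array([[340.0, 390.0], [310.0, 365.0]])  # incoming shortwave

# Net radiative flux: net shortwave minus outgoing infrared
net_radiance = net_sw - olr      # positive = energy surplus at TOA

# Albedo: outgoing shortwave divided by incoming shortwave
albedo = sw_up / sw_dn           # fraction of sunlight reflected
```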

Due to missing values, all data for the year 1994 are excluded. By calculating the spatially weighted global annual averages, the time series of various fields yield interesting results. The data for the top of the atmosphere net radiance appear as:
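The "spatially weighted global average" can be sketched as follows: on a regular lat-lon grid, cell area is proportional to the cosine of latitude, so grid values are weighted accordingly. The field values below are made up for illustration only.

```python
import numpy as np

lats = np.array([-60.0, 0.0, 60.0])   # cell-center latitudes, degrees
field = np.array([[1.0, 1.0],          # a field on a 3 x 2 lat-lon grid
                  [3.0, 3.0],
                  [1.0, 1.0]])

# cos(latitude) is proportional to grid-cell area on a regular grid
w = np.cos(np.deg2rad(lats))

# Average over longitude first, then take the area-weighted meridional mean
global_mean = np.average(field.mean(axis=1), weights=w)
print(round(global_mean, 3))  # 2.0: the equatorial band dominates
```

Note that the unweighted mean of the same field would be 5/3; the weighting pulls the result toward the larger, low-latitude cells.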

The CFSR Net Radiance data indicate radiative deficit following the El Chichon volcanic eruption in 1982, and again following the Mount Pinatubo volcanic eruption in 1991. Also, the peak net radiative surplus appears during 1997 which coincides with the anomalously warm El Nino event. I was quite surprised, however, to note that the years 2001 through 2008 indicate net radiative deficit and that the overall trend was toward decreasing net radiance.

Should I have been surprised? Perhaps not. Net radiation, particularly the shortwave component, is known to be quite difficult to measure because shortwave reflection varies greatly with respect to the angle of observation, depending upon the composition, size, shape, and orientation of clouds and earth’s surface. Further, the very process of reanalysis can add spurious errors. That is why NCAR (the National Center for Atmospheric Research) warns that reanalysis should not be equated with “observations” or “reality.”

Still, while not “observation” nor “reality”, the CFSR does represent a best assessment of the recent climate based on observations and the same radiative codes that lie within the prognostic climate models.

So what does this imply?

To the extent that the CFSR radiance is accurate, it implies that earth was in radiative deficit, not surplus, for the decade of the 2000s and that for this decade, there is no ‘missing heat’ to be found.

The CFSR net radiative deficit also implies that energy loss to space, rather than shifting of energy within the climate system may be responsible for the negative trend since 2001 in many of the global temperature data sets.

Biosketch: Steve McGee has a bachelor of science degree in meteorology. His long career of software engineering includes the development of numerous defense-related systems providing analysis and display of weather and atmospheric effects.

JC note: This post was submitted via email. Since this is a guest post, please keep your comments relevant and civil.

I think we can all agree:
1. Insolation is not the only source of heat. A small amount of energy comes up from the Earth’s core.
2. A large amount of insolation is immediately tied up in photosynthesis, and will not be released until the cells decay.
Based on these givens, are there any ‘accurate’ estimates of what percent of insolation is available for radiating into space?

I took the number (133 TW average power, while the solar radiation that hits the Earth is 172500 TW) from an old estimate that was easy to find. A more original source for that is the book Odum: Ecology (1972), but I picked it from a secondary source.

The best approach is probably to start from estimates of biomass growth. That’s not a fully unambiguous concept, because part of the photosynthesis is followed very soon by respiration. Only part of the biomass ends up as biologic material for a significant period. The best concept to consider is perhaps Net Primary Productivity, which is estimated to be 100 Gt carbon per year in several sources, including the recent book Vaclav Smil: Harvesting the Biosphere.

I haven’t checked before how this value relates to the 133 TW, but let’s do it now. 100 Gt/a is 3170 t/s. Thus 133 TW would require that the energy content of biomass is 42 GJ/t, which is exactly the lower heating value of oil. Thus the values are roughly consistent.
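The arithmetic above can be verified in a couple of lines (using a year of 3.156e7 seconds):

```python
# Check: does 100 Gt carbon/yr of net primary productivity imply
# ~42 GJ/t of biomass energy content at a 133 TW power flow?
SECONDS_PER_YEAR = 3.156e7

npp_t_per_s = 100e9 / SECONDS_PER_YEAR      # 100 Gt/a in tonnes per second
print(round(npp_t_per_s))                   # ~3169 t/s, matching the ~3170 above

energy_per_tonne = 133e12 / npp_t_per_s     # J per tonne implied by 133 TW
print(round(energy_per_tonne / 1e9))        # ~42 GJ/t
```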

With all noted caveats, this is interesting, and actually falls in line with the actual measured optical depth increase of the stratosphere during the time period of the last 10-15 years (from natural aerosols of volcanic origin). There are four key dynamics that need to be looked at for an accurate analysis of the Earth’s energy balance over such a short time frame.

1) How much less solar across all wavelengths has been reaching the Earth due to the quiet solar activity- with the sun being at the lowest TSI since the Dalton Minimum.
2) How much less SW has been reaching the surface because of the increased uptick in aerosols from the broad increase in volcanic activity during the past 15 years?
3) How much less net energy have the oceans transferred to the atmosphere because of the cool phase of the PDO?
4) How much energy have the oceans been retaining, given that they are the key energy storage mechanism for the planet.

These four questions need to be answered before one can accurately discuss Earth’s energy balance over the past 15 years.

“The level of aerosols is unusually low at the moment and has been for a decade, so one should have observed warming.”
—–
Completely false. The optical depth has been increasing for over a decade. I would advise you not to get your data from WUWT.

R Gates –
1) Since the IPCC assign minimal effect to TSI, it seems unlikely that this is a significant factor. If you think it is significant, please provide a link to TSI data or some calcs.
2) DocMartyn (next comment) posts that aerosols have decreased. If you think they have increased, please provide a link to aerosol data.
3) According to the IPCC, the PDO has no effect on global temperature – see their table of forcings, which is used as the total of all drivers of global temperature and in particular of all the warming of the late 20thC. If you think it has a significant effect, how much of the late 20thC warming do you think it caused? You can’t have cooling now without warming then.
4) Good question. It would appear to tie in with Q3. ie, oceans can be expected to net release more heat during ocean warming phases such as the PDO, and to net absorb more heat during ocean cooling phases. If that seems counter-intuitive, bear in mind that ocean phases are based on movement of water, particularly vertical movement. If I’m correct, then this factor is omitted from the IPCC report too.

You may be correct in saying that the questions need to be answered before “accurately discussing Earth’s energy balance”. Funny that the IPCC didn’t wait for the answers.

Also, the PDO is not a forcing but natural variability in the ocean that translates into the net energy flow from ocean to atmosphere. La Niña events tend to occur more frequently during this cool phase, and during these, slightly less net energy is transferred to the troposphere; given that over 50% of the energy in the troposphere comes from the ocean, a slowdown in this rate can make a big difference in tropospheric temperatures.

TSI has an easily observable small effect on temperature. This effect is seen as a regular part of the variability over the 11-year solar periods. The whole climate science community surely agrees on that.

What has been stated in addition in many places is that it’s not likely to have much influence on the long term trend. IPCC AR4 as an example gives a small nonzero estimate (range 0.06 – 0.30 W/m^2) for the change in radiative forcing from 1750 to 2005.

TSI change is small but power spectrum change is much larger where UV waxes and wanes while lower frequencies move opposite to keep total power near the same. This changes which levels of the atmosphere or surface absorb and reflect and how much. If near infrared increases while UV decreases then troposphere warms and stratosphere cools. Complicating matters is that anthropogenic CFC emissions have influenced amount of UV absorbing ozone in the stratosphere. Potentially this could explain a great deal without resort to solar magnetic activity throttling cosmic ray flux at TOA and hypothetically influencing cloud dynamics.

I’m afraid we’re still at the point where it’s apt to say to the climate science boffins “There’s more in heaven and earth, Horatio, than is dreamed of in your philosophy”.

Are you stupid or what, Pukite? Natural cycles that are net neutral if you wait long enough can last at least up to ~100,000 years the length of a glacial/interglacial cycle. We haven’t been observing long enough to have seen complete cycles. We haven’t had satellites for even a full cycle of the AMDO you blithering nitwit.

Let me apologize in advance if not a blithering nitwit but are rather just dishonestly playing one.

Except that the enhanced Brewer-Dobson circulation, both predicted and observed, as well as the broadening of the tropical tropopause, both predicted and observed, are in disagreement with the idea of a static GH response via a “hotspot”. This is an old objection from outdated theory. Probably need to put the ghost of John Daly to rest.

note how the numbers have changed, but let us focus on one.
The number given for back radiation was 324 in 1997 and 333 W/m2 in 2009.
According to Trenberth, 100% of the IR radiation is absorbed by the Earth’s surface. If the surface absorbs 0.98 of it rather than 1.0, then 6.6 W/m2 bounces off into space. Relatively small changes in the reflectivity of Earth’s surface to IR from the atmosphere change the budget in a manner hugely different from the quite modest changes postulated for a doubling of CO2.
To pretend one can actually measure the annual outgoing radiation from Earth at +/-0.01% is arrogant hypocrisy. This sort of measurement accuracy is difficult to achieve even in daily whole-room calorimetry: you cannot build a thermally isolated animal chamber that will allow you to measure the heat output of an animal to 0.01%, yet supposedly you can do it for planet Earth over decades, using satellite spectroscopy.
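The arithmetic in the comment above checks out: the 333 W/m2 figure is the 2009 back-radiation value quoted earlier, and the 2% surface IR reflectivity is the commenter's assumption.

```python
# Reflected back-radiation under the commenter's assumed 2% IR reflectivity
back_radiation = 333.0     # W/m^2, the Trenberth 2009 value quoted above
ir_reflectivity = 0.02     # i.e. surface absorbs 0.98 rather than 1.0 (assumed)

reflected = back_radiation * ir_reflectivity
print(round(reflected, 2))  # 6.66 W/m^2, close to the 6.6 quoted
```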

Lacking empirical data on heat fluxes natural variability of tropospheric temperatures may have two origins:
– variability in the net flux between oceans and atmosphere,
– variability in the net flux at TOA.

Intuitively I have not seen any reason to consider the first alternative more likely than the second one. If this interpretation of the empirical data is even partially confirmed, we seem to be seeing the second alternative.

These two are related fundamentally, but long term changes in flux from ocean to atmosphere is driven by the thermal gradient between ocean and space– which of course is driven by non-condensing GH gas concentrations in the atmosphere.

The idea that I have in mind for long is that changes in the state of the oceans affect clouds and through that albedo and TOA balance. Thus the natural variability would be controlled by oceans, but the largest variability of energy fluxes would be at TOA. It would still be variability, the recent hiatus would soon be over, and a period of relatively rapid warming would have its turn.

Naturally I have not had much trust in the correctness of this hypothesis, and I have noticed that it has not even been discussed much by mainstream scientists, as far as I know. I have wondered whether they have good reasons to consider this hypothesis excluded, and why they don’t give their arguments for that.

Based on the above I’m certainly receptive to empirical data that supports the idea. I’m, however, waiting for confirmation, and comments by real experts.

Pekka, your ideas seem quite logical and based on solid concepts, and I too would like to hear what “experts” might think. Given that 50% of the energy in the atmosphere at any given time has come from the oceans, any short- or longer-term changes to this will make a difference to net TOA. But net TOA is a proxy for measuring ocean-to-atmosphere flux: not perfect, not exhaustive, and not causative.

Everybody who slept under the stars knows that under a tree the night is warmer than in a place with an unobstructed sky view – a radiative effect. Do radiative models actually consider the temperature of the ground, the effects of a thermal capacity of the ground, and the leaf cover? I did not undertake a thorough analysis of the models. Does anybody know if the models treat deserts differently from jungles?

Tangentially related to this is the approximately 5% increase in green ground cover over the planet in the past 30 years or so. This has altered the albedo, but has also stored some energy by converting energy to mass. Also, of course, it has altered the moist enthalpy levels (a better overall metric of energy in the troposphere), as Roger Pielke Sr. has well pointed out.

Yet Total Precipitable Water has fallen steadily (but slowly) for 15 years, the time during which temperature has stabilised. The implication is that the hydrological cycle rate has increased enough to more than counter the increased evapo-transpiration.

Miskolczi predicted, according to his virial theorem model, that pH2O would fall as pCO2 rose, and apparently showed this to be true in 61 years of radiosonde data. His theory has been criticised. I’m developing a new model of the process which shows how the hydrological cycle rate increases, and how this leads to exact (on average) compensation of the ‘CO2 bite’ warming as the atmosphere maintains SW IN = OLR, with falling TPW as the main part of the control system.

This is standard reasoning for us engineers but IPCC alchemy is based on an imaginary positive feedback which, if real, would already have made us into another Venus!

The planet has remarkable temperature stability within narrow limits no matter what the sun throws at us (‘Faint Sun Paradox’) and it does so through a PID control system; we oscillate slowly about the null point due to ENSO, the ‘I’ part.

Go further into the irreversible thermodynamics of OLR and the flora and fauna react to that external thermodynamic constraint! Each interglacial will have a different dominant land-based life form, but the real dominant species is phytoplankton in the oceans, which control biofeedback.

In effect I and others are doing the mathematical engineering analysis of Gaia. Come back J Lovelock – you were right but didn’t study engineering so missed out on the beautiful simplicity with which you can quantify the system!

The approximately 0.8 to 1.0 w/m2 TOA imbalance, most likely caused by the increase in GH gases, will go into multiple parts of the climate system, but the biggest sink is naturally the ocean. The Earth system can naturally respond through negative feedbacks, but, too rapid a change can overwhelm the system. The oceans have buffered the troposphere from the majority of the effects, but they are paying a big price to do so.

@AlecM
“The planet has remarkable temperature stability within narrow limits no matter what the sun throws at us (‘Faint Sun Paradox’)”
I’m pretty confident I have a good explanation for this incredible stability, which also solves the Faint Young Sun Paradox.
And it also explains why the average surface temperature on Earth is > 90 K higher than the Moon’s.
Perhaps we can discuss:
email wouters at multiweb dot nl

Gates, you say that “The Earth system can naturally respond through negative feedbacks, but, too rapid a change can overwhelm the system.” To my less-informed view, the latter comment is an assertion which I do not think has been demonstrated. Even if true, the question is whether or not a change of sufficient magnitude and rapidity is at all likely. Several posters over a long period have referred to self-regulatory mechanisms which have constrained the earth’s temperature within fairly narrow bounds. You may have done this in the past, but could you please indicate the grounds for thinking that a system-overwhelming change is likely, what its likelihood is, and what magnitude of change might be involved?

In short, as a layman and former policy adviser, I’ve not seen sufficient grounds for taking the “rapid,disruptive, highly-damaging” change possibility into account for current policy, perhaps you can demonstrate them?

Gates, you say that “The Earth system can naturally respond through negative feedbacks, but, too rapid a change can overwhelm the system.” To my less-informed view, the latter comment is an assertion which I do not think has been demonstrated.

I agree. There have been rapid warming events in the past, and whenever they occurred, most life thrived (e.g. the Younger Dryas, the greening of Greenland; rapid warming over Ireland about 16,000 years ago caused the ice to retreat after the last glaciation, and life thrived).

Warmer is better. Warming is good.

So, Gates needs to show the evidence that warming over a century or so would be dangerous or catastrophic.

Trapped water from melted Ice Sheets broke ice dams and dumped into the Northern Oceans and chilled the world and raised sea level such that Water got into the Arctic and turned snowfall on before Greenland melted and prevented the Major Warming that would have caused another Major Ice Age.

Ten thousand years of a well bounded temperature is the new normal and it will continue until something major changes.

And sudden warmings in Ireland 16,000 years ago when the ice retreated in about a decade (I understand).

And rapid warmings that greened Greenland and the Danes moved in. Life thrived.

So as not to lose my main point in arguments about details I’ll repeat it. My main point is that there have been many rapid warmings in the past, and life loved them. So I ask where is the persuasive case and evidence that a future sudden warming would be dangerous or catastrophic?

“You may have done this in the past, but could you please indicate the grounds for thinking that a system-overwhelming change is likely, what its likelihood is, and what magnitude of change might be involved?”
—–
The issue of this century– all related to the same rapid transfer of carbon from the lithosphere to the atmosphere and hydrosphere- is the health of the oceans. This transfer is altering the oceans so rapidly that every major expert on the oceans is warning of dramatic changes (and none good) already starting and that will accelerate in the coming decades. The magnitude is huge, the effects far reaching, the urgency to do something to slow the changes pretty strong.

This all comes about because the oceans have taken the bulk of the effects thus far from anthropogenic emissions. This is really where policy makers should be looking– and it should be remembered that the history of life on Earth is such that as go the oceans so goes the land eventually.

Sorry, Gates, that answer on oceans is not sufficient. I raised it with somewhat different questions in response to your assertions on the previous thread, perhaps you could provide some specific links which would enable me to understand and perhaps assess the particular threats to the oceans (or elsewhere) which will “overwhelm the system,” and allow me to consider policies which might address them?

There is no question the oceans need our very focused policy attention. They have been taking the bulk of anthropogenic effects, but are at a breaking point in terms of the collapse of the ocean biosphere. There is not one expert on this topic who is not concerned.

So as not to lose my main point in arguments about details I’ll repeat it. My main point is that there have been many rapid warmings in the past, and life loved them. So I ask where is the persuasive case and evidence that a future sudden warming would be dangerous or catastrophic?

Yes, life was much better in the Roman and Medieval and current Warm Period than it was in the cold periods that always come before and after the Warm Periods. We have warmed again, out of the Little Ice Age and we will cool again into the next Little Ice Age or something similar.

It snows more when the oceans are warm and the water surface is wet.
It snows less when the water is cold and the surface is frozen.
This Wonderful Polar Ice Cycle more tightly bounds temperature by using Albedo.

Consensus theory makes earth cold and adds ice when the water is frozen.

Mother Earth makes earth warm and adds ice when water is wet, Albedo increases and then it gets cold.

Mother Earth turns off the snowfall by letting the cold water freeze on top and quit letting moisture get into the air.

The Polar Ice Cycles use snowfall and Albedo to tighten the temperature bounds around the SET POINT.

THE SET POINT IS THE TEMPERATURE THAT POLAR SEA ICE MELTS AND FREEZES.

Small changes in Earth’s Albedo due to Ice Extent and Clouds that always increase after the Polar Sea Ice Melts and that always decrease after the Polar Sea Ice Freezes on Top and turns off the snowfall.

Sherlock Holmes and Dr. John Watson went on a camping trip.
After sharing a good meal and a bottle of wine, they retire for the night.

At about 3 AM, Holmes nudges Watson and asks, “Watson, look up into the sky and tell me what you see?”

Watson said, “I see millions of stars.”

Holmes asks, “And, what does that tell you?”

Watson replies,

“Astronomically, it tells me there are millions of galaxies and potentially billions of planets.
Astrologically, it tells me that Saturn is in Leo.
Theologically, it tells me that God is great and we are small and insignificant.
Horologically, it tells me that it’s about 3 AM.
Meteorologically, it tells me that we will have a beautiful day tomorrow. Why, what does it tell you?”

Holmes replies, “Watson, you idiot. Someone has stolen our tent!”

So I took my new IR thermometer out under clear skies, both day and night, and the sky reading was somewhere below -60 to -70F; I’m not sure exactly, because it was colder than the instrument could stabilize its sensor at. Air temp was ~22F. FYI, I live at 41N, 81W.

R. Gates aka Skeptical Warmist | November 29, 2013 at 11:03 am said: ”There is no question the oceans need our very focused policy attention. They have been taking the bulk of anthropogenic effects, but are at a breaking point in terms of the collapse of the ocean biosphere”

Relax, Gates, relax and stop worrying without reason. If the oceans get warmer, for any reason, evaporation increases; evaporation is a cooling process and equalizes in a jiffy.

When evaporation increases, so do the clouds; clouds are the sun-umbrellas for the land and oceans. The creator of this perfect planet has solved all the problems by creating the laws of physics. Don’t live in fear, you will wet the bed…

I have another “real” data point for shortwave surface data during the El Chichon eruption. From 1980 to 1982 I worked at a solar furnace in the desert of NM. We measured surface shortwave solar irradiance from morning to late afternoon for each day we worked at the furnace (generally weekdays and sometimes also on Saturdays). The data was plotted in real time, so that the value climbed to solar noon and dropped off in the afternoon. We would write down daily solar-noon peak values and watched how they changed over the years. We used a calibrated Eppley pyrheliometer to obtain the data.

We noticed a seasonal difference in values at solar noon. Namely, “clear sky” summer to late-summer readings would often max out at about 850 watts/m^2, while “clear sky” fall readings could often reach over 1000 watts/m^2. We attributed this difference to the very moist atmosphere in the summer and the very dry atmosphere in the fall.

Then in March 1982, El Chichon erupted. Over the next few months our peak solar-noon readings steadily dropped until, about 6 months later, we noted about 100 watts/m^2 less in the peak solar-noon readings from our instrument than in previous years. This continued into the fall but steadily returned to normal through the next 6 months. This was a fairly short-lived but very dramatic change in surface levels, at one point on the globe, that was clearly tied to this volcanic activity.

It is odd that our surface plot of a drop of many tens of watts/m^2 would mimic a TOA plot of a watt or two per meter squared (see above). I am not sure what it means. But I am certainly doubtful that we can measure an ongoing 0.6 watt/m^2 build-up when we have these dramatic surface fluctuations caused by a single volcanic eruption. It seems to me that the natural noise in the irradiance data far outstrips this tiny imagined energy build-up.

You are right! It ended up with the Feds which was actually probably the circular file. Memory can be a dangerous thing but I am almost sure that all values given are within 90% of the real thing. The drop in surface irradiance due to El Chichon may have been in the range of 95 to just over 100 watts/m^2. The way that material from the eruption was carried aloft and circulated globally could probably be ascertained by plotting the daily solar max values, but I am sure we never did that.

It might be worth going back to the folks that operate the furnace today to see if the actual data might be recoverable. It is nearby and I am retired, so I might just do that.

I don’t know what “reanalysis” is, let alone how to do it. Does everyone performing “reanalysis” come to the same conclusion? And is it something the public can perform themselves with publicly available data?

If the answer to either of these is no then why should the public be expected to believe it?

What I haven’t been able to find however is the promised “nearly real time” data that would permit extending Steve’s last graph beyond 2009. The next four years would bring things a bit more up to date.

The basis for Steve’s trend line would seem to be the seven years 2001-2007. 2001-2006 are clustered closely together averaging around half a W/m2 below 0 while 2007 spikes down to -1.5 W/m2.

It would be nice to see some other reanalyses just to get an idea of what the variance is between reanalyses. Otherwise it’s hard to distinguish noise in the temperature data from noise in the reanalysis methodologies.

Reanalyses were discussed also in the recent thread Uncertainty in Arctic temperatures. The paper of Chung et al. notes that more recent data from the CFSR reanalysis was not available at the time they did their work.

In one sense reanalysis involves making sense of all the thermodynamic variables as a collection. These all have to fit together following the first law. The CSALT model is an example of this operating at the highest possible aggregation ignoring the spatial character of the earth and simply considering the mean values.

It works well and is important because it places the understanding of the trends and fluctuations within arms reach.

Thanks, Web. I’ll take that as a positive answer to my question, “Is [reanalysis] something the public can perform themselves with publicly available data?”

Now that you and I have both forsaken analytic models (filtered sawtooth, cubic) of global temperature for the time being, it would seem the remaining significant difference between us would be your focus on “all the thermodynamic variables” (C, S, A, L, and T) vs. my goal of reconciling just T and C, with other variables being brought in only when I’m convinced they’re absolutely essential to the reconciliation.

2. Having chosen T and C as my variables of primary interest, I’m an equal opportunity employer of all possibly relevant other variables, including those that might not previously have occurred to me or anyone else. That said, prospective employees must convince me they’re essential.

I wouldn’t call this “reanalysis” however. My goal is merely more clarity in the reasoning behind the already well-understood relationship (certainly qualitatively) between C and T, along with any quantitative improvements that might result from the clarity. I respect the long history of that reasoning, and want merely to see it sharpened up to meet today’s standards of criticism.

I see nothing unreasonable in advocates of oil and coal asking scientists to set that bar higher. Given the economic scale involved, it’s a very reasonable demand.

Who should pay for that research is an interesting question. There is a long and understandable history of those whose ox is being gored being reluctant to fund or even permit publicly accessible research whose conclusions might handicap them. That sort of question is above my pay grade.

Vaughan,
I agree with your view of the inessential variables other than C=CO2 and T=Temperature and consider them similar to nuisance variables. In statistics, these are the variables that are in the background as noise and are not important — IOW, they just get in the way and are considered a nuisance.

However, the ankle-biting members of Team Denier don’t consider these variables a nuisance, and instead consider them their salvation. They are a salvation in that they will show that it is ABCD (Anything But Carbon Dioxide). So the only way around this is to show how the variables are used to construct the long-term fluctuating trend.

That’s why I am working on the CSALT model. It is clear that over 0.9 of the correlation coefficient is immediately recovered by just incorporating ln(CO2), but with the rest of the CSALT variables we can get the CC up to R=0.99, with each of the nuisance variables contributing fractional increases. This has the effect of substantiating the role of CO2 and marginalizing the role of the nuisance variables in the long term trend.
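The fitting idea described above can be sketched in a few lines. This is not the CSALT model itself; all data below are synthetic, and the point is only to show how adding a regressor beyond ln(CO2) raises the correlation between a temperature-like series and its fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
co2 = 315.0 + 1.5 * np.arange(n)                        # made-up CO2 ramp, ppm
cycle = 0.1 * np.sin(2 * np.pi * np.arange(n) / 11.0)   # a "nuisance" oscillation
temp = 3.0 * np.log(co2 / co2[0]) + cycle + 0.02 * rng.standard_normal(n)

def r_of_fit(regressors, y):
    """Correlation between y and its least-squares fit on the regressors."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

r_co2_only = r_of_fit([np.log(co2)], temp)
r_with_cycle = r_of_fit([np.log(co2), cycle], temp)
print(r_co2_only < r_with_cycle)  # the extra regressor raises the correlation
```

As in the comment's description, ln(CO2) alone already recovers most of the correlation, and each additional variable contributes only a fractional increase.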

@WHT: However, the ankle-biting members of Team Denier don’t consider these variables a nuisance, and instead consider them their salvation.

You’d make an interesting lawyer, Web. Normally the counsel for the defense addresses his argument to the jury, not to the prosecuting attorney.

CE has been constituted by a prominent climate scientist to put climate science on trial. As such it works much like a courtroom. The respective counsels speak up, the jurors listen in silence and then leave the courtroom to mull over which arguments they found most convincing.

One difference is that on CE the occasional juror pipes up to express appreciation for the forum. Another (presumably) is that being on CE’s jury is more a pleasure than a duty.

So the only way around this is to show how the variables are used to construct the long-term fluctuating trend.

There may be more than one way to skin this CO2-Aerosols-Temperature CAT. ;)

Loehle and Scafetta 2011 did a better job of deconstructing the global temperature record than either Vaughan Pratt or Paul Pukite (WHT), they did it earlier, and they published it in an atmospheric science journal.

Neither of you have added a single notable bit of knowledge to the body of climate science. Keep trying though. Stranger things have happened than either of you two solid state electronics clowns doing something significant outside your ostensible areas of expertise.

Loehle and Scafetta missed the attribution of CO2 to the warming of the earlier 20th century.
They also missed the attribution of Curry’s Stadium Waves to long term fluctuations in temperature.

In the latest version of the CSALT model, there is some attribution given to Scafetta’s 9.1 year cycles and it appears that the barymetric velocity of the sun has an influence on the temperature as well.

But none of these impact the value of TCR, which is at 2C for a doubling of CO2.

Loehle and Scafetta model the slope of CO2 radiative forcing as a steady zero throughout the 100 years 1850-1950, and then an equally steady 0.66 °C/century throughout the 150 years 1950-2100.

One can see at least three advantages for this model. It lends itself to computation: one can readily see, without resorting to a calculator, that it forecasts rises of 0.33 °C for 1950-2000 and 0.66 °C for both 2000-2100 and 2100-2200. When illustrating it at the blackboard it can be drawn very accurately using a straightedge. And it can be taught in elementary school.
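For concreteness, the two-segment model as described can be sketched in a few lines. This is a minimal reading of the description above (flat before 1950, then a steady 0.66 °C/century, measured relative to 1950); it is an illustration, not L&S’s actual code:

```python
def ls_trend(year):
    """Two-segment CO2 warming trend as described: flat before 1950,
    then a steady 0.66 C/century, relative to 1950."""
    return 0.0 if year <= 1950 else 0.0066 * (year - 1950)

rise_1950_2000 = ls_trend(2000) - ls_trend(1950)   # 0.33 C
rise_2000_2100 = ls_trend(2100) - ls_trend(2000)   # 0.66 C
```

The half-century rise of 0.33 °C and century rise of 0.66 °C drop straight out, which is the “no calculator needed” point.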

Likewise Planck’s law can be drawn as a triangle, or more accurately as a trapezoid. This makes it easy to compute where the percentiles of Planck’s law fall, and easy to draw at the blackboard.

Loehle and Scafetta assume that the Keeling curve has the form of an exponential. They conclude that its log has the same slope in 2010 as in 1958.

Apparently it’s never occurred to them to look at the log of the actual Keeling curve. Had they taken 1.5 times the log base 2 (the formula assuming a climate sensitivity of 1.5 °C per doubling of CO2, the lowest value conceivable to IPCC authors), they would have observed that its slope was 0.57 °C/century in 1960, and 1.21 °C/century by 2010.

Far from being a constant 0.66 °C/century over the 50 years from 1960 to 2010 as L&S claim, the slope more than doubled!
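The slope-doubling claim can be sanity-checked numerically. The sketch below uses an illustrative raised-exponential fit to the Keeling curve (preindustrial baseline 280 ppmv, anthropogenic component doubling every 32.5 years; these parameter values are my assumptions for illustration, not taken from the comment) and differentiates 1.5·log2(CO2):

```python
from math import log2

def keeling_fit(year, base=280.0, t0=1790.0, doubling=32.5):
    """Illustrative raised-exponential fit to the Keeling curve (assumed parameters)."""
    return base + 2.0 ** ((year - t0) / doubling)

def slope_c_per_century(year, sensitivity=1.5, h=0.5):
    """Centered-difference slope of sensitivity * log2(CO2), in C/century."""
    f = lambda y: sensitivity * log2(keeling_fit(y))
    return 100.0 * (f(year + h) - f(year - h)) / (2.0 * h)

s1960 = slope_c_per_century(1960)   # roughly 0.55 C/century
s2010 = slope_c_per_century(2010)   # roughly 1.3 C/century
# the slope more than doubles over those 50 years
```

With these assumed parameters the 1960 and 2010 slopes land close to the 0.57 and 1.21 °C/century quoted above, and the ratio comes out above 2 regardless of the sensitivity chosen, since sensitivity cancels in the ratio.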

A slightly higher climate sensitivity of 2 would entail respective slopes of 0.76 and 1.62 °C/century at 1960 and 2010.

Anyone claiming that this slope is constant during 1950-2100 is either confused or bent on fraud. I have trouble picturing Springer capable of the latter.

Considering that 1.5 C per 3.7 Wm-2 represents the Planck response of a surface at -30 C, the approximate temperature of the assumed isothermal effective radiant layer, perhaps the inconceivable is something the IPCC authors might have spent more time on.

What’s your definition of “no feedback sensitivity”, capn? I have no intuition for what it might mean in practice.

But ok, let’s go with 0.8 C per doubling. It remains the case that the slope of the log of the Keeling curve doubles between 1960 and 2010. This doesn’t make L&S’s claim of constant slope over that period any less confused/fraudulent (pick one).

Vaughan, no-feedback sensitivity basically means that water vapor/clouds are not assumed to triple or quadruple the impact. That reduces the response enough that it is close to linear, not exactly but close, depending on what number you settle on. 0.8 C is close to linear with the Mauna Loa/Law Dome data, and if you consider that 334.5 Wm-2 is the effective energy of a surface at 4 C, which is about the average temperature of the DWLR surface and the average temperature of the oceans, it is not divorced from reality.

Kimoto considers latent, sensible and radiant heat and arrives at a Planck feedback parameter of ~0.73 C per 3.7 Wm-2 for the “average” global surface. Unfortunately, Kimoto didn’t have the Stephens et al. Earth energy budget at the time, so the ~18 Wm-2 error in the K&T budget caused some issues.

However, if you are concerned with the quality of life of the inhabitants of the upper troposphere, 1.5C would be your number.

@cd: No feedback sensitivity basically means that water vapor/clouds are not assumed to triple or quadruple the impact.

1. What’s your basis for assuming this is the only feedback? Couldn’t higher temperatures release more CO2 from the oceans, for example?

2. Does “no feedback sensitivity” have a physical meaning in the sense that it can be measured? If so how would you measure it? If not, why is a parameter with no physical meaning relevant to anything?

Webster, “Bingo. And because of that bone-headed move L&S come up with a TCR of CO2 of ~1.3C instead of the 2C if they had done it correctly, as Vaughan is demonstrating.”

You not getting this is understandable; Vaughan however is a different situation. Prior to 1950 the impact of CO2, assuming a no-feedback sensitivity (which is debatable, but between 0.7 and 1.5 C, with the normal range 1 to 1.2 C per 3.7 Wm-2), is less than 0.15 C, so assuming a linear trend of zero is within the error margin of the surface temperature data. From 1958 to 2010 the Mauna Loa data were available; from 2010 to 2100 they can assume any rate they like. Assuming no change in the rate of increase, you get about 560 ppm by 2100, or a doubling from pre-industrial. Since they are assuming a linear trend you get a linear increase. With no feedback at the lower range for CO2 only, you get a nearly linear trend matching the nearly linear assumption.

Vaughan has this magic 1.5 C per 3.7Wm-2 or a doubling of CO2 in his head probably because he reads in different climate circles. 1.5C is based on the effective radiant layer assumed to be approximately -30C degrees which is where water vapor influence is low enough to allow CO2 to do its magic relatively unmolested. Since water vapor and cloud feedbacks are the largest unknown, some don’t assume they will amplify the impact of CO2, hence, no feedback climate sensitivity. There are still some old codgers that don’t assume things because they are popular.

If you really want to know how that works, consider a surface at 4 C, which has an effective radiant energy of 334 Wm-2. Adding 3.7 Wm-2 to 334 gives you 337.7 Wm-2, which by S-B would be a surface at about 4.7 C, or about 0.8 C per 3.7 Wm-2, ignoring potential latent and sensible cooling of that surface.
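That inversion of the Stefan-Boltzmann law is easy to check. A minimal sketch, using the 334.5 Wm-2 quoted earlier for a 4 C surface:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_temp_k(flux_wm2):
    """Invert Stefan-Boltzmann: T = (F / sigma)^(1/4), in kelvin."""
    return (flux_wm2 / SIGMA) ** 0.25

t_before = sb_temp_k(334.5)        # ~277.1 K, i.e. ~4 C
t_after = sb_temp_k(334.5 + 3.7)   # same surface after a 3.7 W m^-2 addition
delta = t_after - t_before         # ~0.76 C, i.e. roughly 0.8 C per 3.7 Wm-2
```

The warming per 3.7 Wm-2 comes out just under 0.8 C at this temperature, consistent with the back-of-envelope number in the comment.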

@WHT: And because of that bone-headed move L&S come up with a TCR of CO2 of ~1.3C instead of the 2C if they had done it correctly, as Vaughan is demonstrating.

How is TCR defined these days? AR4 defined it as being when CO2 increased 1% a year. Doubling time at that rate is 69.7 years. L&S claimed 0.66 °C per century, which equals 0.66/0.697 = 0.95 °C/doubling as the climate sensitivity. Considerably lower than 1.3C. One of the above assumptions must be wrong if they got 1.3C.
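The doubling-time arithmetic here is easy to reproduce. A quick sketch of the two numbers as stated in the comment (the 1 %/yr growth rate being AR4’s TCR definition as quoted above):

```python
from math import log

# CO2 growing at 1 %/yr doubles in ln(2)/ln(1.01) years
doubling_years = log(2) / log(1.01)            # ~69.7 years

# the ratio computed in the comment: 0.66 divided by 0.697
quoted_ratio = 0.66 / (doubling_years / 100.0)  # ~0.95
```

This just reproduces the arithmetic as written; whether dividing rather than multiplying by the doubling period is the right way to convert °C/century into °C/doubling is part of the dispute in this thread.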

Vaughan, “1. What’s your basis for assuming this is the only feedback? Couldn’t higher temperatures release more CO2 from the oceans, for example?” Since the impact of a doubling of CO2 is pretty well known to be ~3.7 Wm-2 +/- a touch, it is one of the very few knowns in this whole problem. “Couldn’t higher temperatures release more CO2?” Yes, but that is limited to a small fraction of the current rate, and biological uptake “could” reduce that. It is left as an unknown, not assumed negligible, but just unknown.

“2. Does “no feedback sensitivity” have a physical meaning in the sense that it can be measured? If so how would you measure it? If not, why is a parameter with no physical meaning relevant to anything?”

Yes and no. If you can find a spot that sits still long enough, you could measure it. Until then it is approximated using the Stefan-Boltzmann law. This is the famous, “all things remaining equal” argument for CO2 impact. Planck Parameter is probably a better name, but “no feedback sensitivity” is a product of GHE theorists.

@cd: Vaughan has this magic 1.5 C per 3.7Wm-2 or a doubling of CO2 in his head

Moi?

Where did I fix 1.5? I gave 1.5 first, then 2, and then 0.8 when you asked. I only started with 1.5 because that’s AR4’s lower limit. I have no personal attachment to or knowledge of 1.5.

My point was that the slope of log(Keeling) more than doubles between 1960 and 2010, contrary to L&S. This fact is independent of whether you take CS to be 0.5, 0.8, 1.5, 3, or 10.

As for moi, I interpret HadCRUT4 as demonstrating a climate sensitivity of around 1.73-1.76 C/2xCO2 over the period 1850-2014 if you don’t compensate for Hansen-type ocean delay. 0.8 may be the no-feedback value, but temperature rises a lot more than your no-feedback value as CO2 rises.

Vaughan, “Where did I fix 1.5? I gave 1.5 first, then 2, and then 0.8 when you asked. I only started with 1.5 because that’s AR4′s lower limit. I have no personal attachment to or knowledge of 1.5.”

here, “Apparently it’s never occurred to them to look at the log of the actual Keeling curve. Had they taken 1.5 times the log base 2 (the formula assuming a climate sensitivity of 1.5 °C per doubling of CO2, the lowest value conceivable to IPCC authors), ”

That was just before this,

“Anyone claiming that this slope is constant during 1950-2100 is either confused or bent on fraud. ”

I don’t have any heartburn turning that into two straight lines since the margin of error is PDS. This is one of those much ado things.

I have other issues with the paper like lack of a believable mechanism, but not with that simplification, must be the slide rule training.

Sorry, cd, but I’m really not following you here. You quoted me as saying “Had they taken 1.5 times the log base 2 (the formula assuming a climate sensitivity of 1.5 °C per doubling of CO2, the lowest value conceivable to IPCC authors”. What the IPCC sets as their lower limit has absolutely nothing whatsoever to do with what I think CS is. It was simply one possible starting value I picked to illustrate my point. Any other number serves equally well to illustrate that point, and to ward off any possibility of that sort of confusion I immediately repeated my point substituting 2 for 1.5.

If that doesn’t make it obvious to you that I couldn’t care less about 1.5 then we have a serious failure to communicate.

Also your graph didn’t clarify anything for me. How does it refute the important point that the slope of log(Keeling) doubles over its range? Calling it “much ado about nothing” fails to grasp the significance of a slope that doubles in 50 years. That’s a key fact! If your graph doesn’t reflect it then it’s unrelated to reality.

Below are the conclusions Loehle and Scafetta draw. Mauna Loa records do not go back before 1950 so I have no idea where Pratt is getting his data that there must be a CO2 signal prior to that date. The consensus (not that bandwagon science has any intrinsic merit) is that there is none due to concomitant aerosol production. I have no particular reason to reject the aerosol negation hypothesis.

Since Loehle and Scafetta published in a peer reviewed climate journal I suggest Pratt and/or Pukite exhibit the same professionalism with their criticisms.

1) The estimated AGW component matches theory, since the log of an exponential rise in carbon dioxide should give an approximate linear trend (as in fact the climate models do). The timing of AGW effects (beginning in 1942) also matches expectations.

4) Warming due to anthropogenic GHG+Aerosol of 0.66 °C/century is not alarming, in comparison to the IPCC projected 2.3 °C/century. This 0.66 value is an upper bound in our estimation (due to possibly poorly corrected UHI and LULC effects that may explain part of the observed warming trend since 1950).

Vaughan, “Calling it ‘much ado about nothing’ fails to grasp the significance of a slope that doubles in 50 years. That’s a key fact! If your graph doesn’t reflect it then it’s unrelated to reality.”

I guess the reality of 2100 could be debated. The curve is based on continued linear growth in CO2 concentration not unsustainable exponential growth and dTCO2(0.8) is 0.8ln(Cf/Co)/ln(2) using the BEST CO2 estimate. You get a nearly linear increase in T through 2100 with a no feedback sensitivity of 0.8C per doubling.
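The near-linearity claim can be checked with the formula given above, dT = 0.8·ln(Cf/Co)/ln(2). The sketch assumes linear CO2 growth from ~390 ppmv in 2010 to 560 ppmv in 2100, per the scenario described; the 390 starting value is my assumption:

```python
from math import log

def dT(C, C0=280.0, S=0.8):
    """No-feedback warming per the comment's formula: S * ln(C/C0) / ln(2)."""
    return S * log(C / C0) / log(2.0)

def co2_linear(year):
    """Linear CO2 growth: ~390 ppmv in 2010 to 560 ppmv in 2100 (assumed endpoints)."""
    return 390.0 + (560.0 - 390.0) * (year - 2010) / 90.0

t0, t1 = dT(co2_linear(2010)), dT(co2_linear(2100))
# worst departure of dT from a straight line drawn between 2010 and 2100
max_dev = max(
    abs(dT(co2_linear(y)) - (t0 + (t1 - t0) * (y - 2010) / 90.0))
    for y in range(2010, 2101)
)
# max_dev comes out at a few hundredths of a degree: nearly linear indeed
```

The log of a linear ramp is mildly concave, so the departure from a straight line stays small over this range, which is the point being made.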

If you think the slope “has” to double in 50 years or is likely to double in 50 years you would have a different vision of the future than people assuming linear growth as a reference.

That uses the same assumption of 560 ppmv by 2100 with linear growth and compares 0.8, 1.6 and 3.2 C sensitivities. Calling it “a key fact” fails to consider that the rate of CO2 increase over the next 90 years is not a known fact.

Pratt needs to accept the fact that economically recoverable fossil fuels are finite, we’ve reached peak oil already, energy cost is rising which has capped the rate of consumption, and that the growth experienced from 1950 to 2000 in anthropogenic CO2 production is not sustainable. Anyone who doesn’t acknowledge this is either confused or bent on fraud. I have trouble picturing Pratt capable of the latter.

From the article:
World oil production surpassed 75 million barrels per day for the first time ever in December 2011, at 75.45 million barrels, and went even higher in January of this year at 75.58 million barrels, setting a new monthly production record, according to data recently released by the EIA. The red line in the graph shows the upward linear trend in world oil production from 1973 onward, with daily production increasing by almost 600,000 barrels per day on average every year since 1973.

And the fact of the matter is that, due to the logarithmic decline in warming efficacy of each incremental increase in CO2, it must increase exponentially in order to sustain a linear rise in temperature.

And the temperature rise during this period of time is not linear by any stretch of the imagination and in fact most of it occurred in stair step fashion in a single decade. This stair step increase in global average temperature is not at all consistent with the smooth steady growth of a well mixed greenhouse gas over 50 years. The stair step can be easily seen in a 10-year running mean.

This is an absolute killer of a chart, where I use Scafetta’s cycles to pin the ears back on the data via the CSALT model:

I will have a post up on this soon. Suffice it to say that, since Scafetta’s cycles are invariant properties of the earth’s orbit, they have implications for forecasting.

This, together with the work that I have done on fossil fuel depletion modeling, means that we finally have a good set of high-fidelity yet first-order system models to be able to forecast global warming in the extended future.

Webster, “I will have a post up on this soon. Suffice it to say that, since Scafetta’s cycles are invariant properties of the earth’s orbit, they have implications for forecasting.”

Of course they do; unfortunately they lack mechanisms that are compatible with GHE-theory concepts of “forcing”, which tend to ignore the impact of internal heat transport effected by “regional” impacts on tides, currents and temperature gradients, all that silly fluid dynamics stuff.

jim2, “Downwelling IR from CO2 is insignificant in the presence of clouds. That, plus clouds reflect SWR, rendering that portion a non-player.”

That depends on what “surface” you are considering. Cloud cover attenuates the CO2 related DWLR below the clouds. Above the clouds, CO2 is still there doing its thing. Adding more CO2 will increase the effective temperature at the cloud tops which increases upper level convection. MODTRAN does a pretty good job.

If you use a tropical atmosphere with an altostratus cloud base, the difference between 400 ppm and 1600 ppm is 0.6 Wm-2 at the surface and 9 Wm-2 at 4000 meters. If the temperature at 4000 meters is 4 C, that 9 Wm-2 increases the temperature there to 5.8 C, provided convection doesn’t increase. With clear skies but a moist atmosphere, the quadrupling produces 3.7 Wm-2 of impact at the surface. With that surface at 27 C, the 3.7 Wm-2 has a 0.6 C impact on the surface. Since we only have 400 ppm versus the ~280 ppm “normal”, the impact at 4000 meters should be about 0.4 C if convection is not producing a negative feedback. But since the “Climate Science Guys” would rather smear “surfaces” than isolate surfaces, they lose sight of the “signatures” they should be looking for.

I think it is pretty comical blowing the sign on cloud “forcing”, but then good help is hard to find.

Perhaps if Pukite or Pratt were to actually publish their curve fitting exercises in a climate journal they might be comparable to Loehle & Scafetta 2011. Surely you boys don’t expect your unpublished blog science to be taken seriously, right?

Loehle and Scafetta model the slope of CO2 radiative forcing as a steady zero throughout the 100 years 1850-1950, and then an equally steady 0.66 °C/century throughout the 150 years 1950-2100.

You must not be aware that “CO2 forcing” is actually a basket of different anthropogenic forcings, both positive and negative, including methane, black carbon, nitrogen and sulfur compound aerosols, ozone, and others. Written in the scriptures of bandwagon climate science is a screed talking about aerosols negating CO2 forcing before 1950. Please make a note of it.

I already have, Webster. Combined solar and volcanic forcing have been underestimated due to not properly considering the impact of internal heat transport and the “memory” of the oceans. Based on the Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction, the ocean tropical zones, which have the majority of the heat capacity, hit a Little Ice Age minimum in ~1700 AD which was ~0.9 C lower than the 0-2013 “normal”. The recovery from that minimum was delayed/complicated by volcanic/low solar activity, mainly in 1816 and 1900, which had internal delays on “global” impact on the order of 25 years. The recovery from that depression was amplified by albedo change, increased atmospheric water vapor and land amplification, and based on global diurnal temperature ranges the full recovery was basically complete in ~1985, though there is a lagged Pinatubo impact nearly finished.

If you include recovery you will overestimate CO2-equivalent forcing by a factor of two or more. The AMO/PDO are basically features of the internal hemispheric equalization required during the recovery, and the 30N-60N SST makes a marvelous index for combined AMO/PDO “global” climate impact. Indian Ocean SST, thanks to the reduced THC impact, makes a marvelous “global” temperature proxy, and if you smooth the IO SST with an 8.5-year moving average, the residual makes a perfectly marvelous ENSO proxy.

Once you get all that natural recovery/variability out of the way you can start estimating the actual CO2/land and water use impacts.

So unless your and L&S’s models can predict volcanic activity, I doubt they will be much good at predicting future climate.

“You must not be aware that “CO2 forcing” is actually a basket of different anthropogenic forcings, both positive and negative, including methane, black carbon, nitrogen and sulfur compound aerosols, ozone, and others. Written in the scriptures of bandwagon climate science is a screed talking about aerosols negating CO2 forcing before 1950. Please make a note of it.”

Oh, I sarcastically thank you very much. The commenter doesn’t realize that the CO2 control law shorthand is to include all those into the log sensitivity. The aerosol negative feedback shifts a log sensitivity from the 4’s and 5’s into the 2’s and 3’s as James Hansen has described and that Andrew Lacis has described in his recent paper that he posted here.

As Lacis says, the CO2 is only about 20% of the forcing, so obviously aerosols are counteracting that, but since it all follows the Beer-Lambert law, this is a good shorthand.

Cappy,
You seem to be missing an important premise in your argument. A warming recovery as you suggest implies that there is an energy term which drives it.
Where is this term?
Where is this thermal source coming from?
Is it coming from the depths of the ocean?
Is there some sort of heat source down there in the deep?

The beauty of the CSALT model is that it accounts for all the energy terms to a correlation coefficient of over 0.99 during the last 130+ years.

If you aren’t just blowing smoke, and you have an additional energy term, I can add it to the model. Just state what it is.

Webster, “If you aren’t just blowing smoke, and you have an additional energy term, I can add it to the model. Just state what it is.”

It doesn’t require an extra energy term, it is just efficiency. You can play with entropy if you like, but one layer’s entropy is another layer’s input. Simply a cooler hemisphere/layer loses less energy to space and gains more energy from its surroundings. The sun is always there, the SST is always warmer than the depths, the oceans are always warmer than the land and the tropics are always warmer than the poles. Change the internal mixing efficiency and you change the rate of heat gain or heat loss. There is only around 5Wm-2 equivalent in energy required to power the mixing and that is mechanical energy. That is why there is a good correlation with solar/lunar, SOI, ENSO AMO etc. because they are all related to mixing efficiency.

Toggweiler’s quick Shifting Westerlies paper has a lot more information than you think at first glance.

@DS: Mauna Loa records do not go back before 1950 so I have no idea where Pratt is getting his data that there must be a CO2 signal prior to that date.

Are you saying you don’t know even one method?

Here are three for starters. 1. Use Hofmann’s raised-exponential formula [Hofmann et al 2009]. 2. Fit the last 50 years of CDIAC emissions data (which goes back to 1751) [Houghton et al] to the Keeling curve and hindcast. 3. Use the CO2 content of Antarctic firn air (annual values available from 1850 to 1978). All three methods give similar results for the period 1850-1978, as well as being in close agreement with the Keeling curve for 1958-1978.
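As an illustration of method 1, here is a minimal raised-exponential sketch. The parameter values (280 ppmv preindustrial baseline, anthropogenic component doubling every ~32.5 years) are illustrative guesses of mine, not Hofmann’s published fit:

```python
def co2_raised_exp(year, base=280.0, t0=1790.0, doubling=32.5):
    """Raised exponential: preindustrial baseline plus an exponentially
    growing anthropogenic component (illustrative parameters)."""
    return base + 2.0 ** ((year - t0) / doubling)

c1900 = co2_raised_exp(1900)   # hindcast: ~290 ppmv
c1960 = co2_raised_exp(1960)   # ~318 ppmv, close to Mauna Loa's ~317
c2010 = co2_raised_exp(2010)   # ~389 ppmv, close to Mauna Loa's ~390
```

Once a curve of this form is anchored to the Keeling era, values before 1958 fall out of the same formula, which is the hindcasting point being made.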

@cd: If you think the slope “has” to double in 50 years or is likely to double in 50 years you would have a different vision of the future than people assuming linear growth as a reference.

In this thread the only future projections that have come up have been made by you in your graph and by L&S in their paper. I’ve expressed no “vision of the future” here. Instead I have an understanding of CO2 to date in which the slope of the log of the Keeling curve doubled between 1960 and 2010. That’s the “key fact” I’m talking about, quite apart from any future scenario. There’s no “has to” about it, it’s already happened!

Based on the blatantly false premise that that slope doesn’t change at all between 1960 and 2010, L&S extrapolate that false assumption to 2100.

Extrapolating the truth to 2100 is one thing. Extrapolating such a wildly inaccurate claim about the past 50 years is quite another!

Vaughan, “Extrapolating the truth to 2100 is one thing. Extrapolating such a wildly inaccurate claim about the past 50 years is quite another!”

They are matching the residuals with a straight line starting in 1942, which is more than just the Keeling curve; it is the estimation of all possible positive and negative influences based on their model. I really don’t see how this is a wildly inaccurate claim. Their model may suck, and their restricted access to data and code does suck, but I don’t see where they are making any wildly inaccurate claims. Using the CO2 data with a lower sensitivity, there is not a lot of difference other than a more gradual transition, with the maximum change in slope around 1958 when the Mauna Loa data started.

Perhaps you could overlay their chart with your interpretation of the glaring error?

WebHubTelescope:This together, with the work that I have done on fossil fuel depletion modeling means that we finally have a good set of high-fidelity yet first-order system models to be able to forecast global warming in the extended future.

Maybe. I think you ought to wait until after you have published in the peer-reviewed literature and after the model has survived at least 20 years’ worth of testing by out-of-sample data, before you make that claim.

You use the word “forecast.” Is that your way of warning us that even you do not trust it to make accurate “predictions”?

I’m saying Mauna Loa is the only continuous record using directly comparable methods. I’m saying the consensus climate science bandwagon says anthropogenic global warming was not significant prior to 1950.

“The curve you’re displaying there has a slope that increases from 0.85 ppmv/yr in 1960 to 2.1 ppmv/yr in 2010. If you’re saying that’s not a factor of 2, you’re right: it’s a factor of 2.5!”

Cherry picking. There was rapid growth in the first decade of the record which hasn’t been duplicated since.

In the past 40 years the trend has gone from 14 ppm/decade to 20 ppm/decade. That’s an increase of less than 50%. That’s not enough to drive the exponential increase required to sustain a linear increase in temperature. A nearly 50% increase in the decadal rate is needed in the coming decade for your assertion to remain true for the period 1973 to 2023. I say it won’t happen. Want to bet on it?

“Unless I overlooked something, those two were all you had in the way of technical content.”

You didn’t think that the logarithmic decline curve in CO2 warming efficacy was technical? Wow. You do realize that sensitivity is given per CO2 doubling because of that (non?) technical fact, right?

the slope of the log of the Keeling curve doubled between 1960 and 2010

This is roughly correct.

The compounded annual growth rate (CAGR) or exponential growth rate of atmospheric CO2 has levelled off since the early 2000s at around 0.52% per year from a rate of around 0.3% per year in the 1960s/70s.

At the same time, human population (the folks causing this increase in CO2) grew from 3 billion to 7 billion.

But the CAGR of human population is slowing down, from around 2.0% per year in the 1970s to around 1.2% today.

Population is projected to grow from today’s 7 billion to around 10.2 billion by 2100 (US Census Bureau + UN estimates), or at a CAGR of around 0.4% per year.

So it is quite clear to me that the rate of increase of human generated CO2 is not going to continue to grow at an ever increasing exponential rate when human population growth rate slows down to one-third of the present rate.

Let’s take 3 cases:

If we ASS-U-ME that CO2 concentration will continue to grow at the current exponential rate (CAGR) of 0.52% per year, we would arrive at a CO2 concentration of 620 ppmv by 2100.

Per capita CO2 emissions from fossil fuel combustion increased by around 10% from 1970 to today (using CDIAC figures), from 4.02 to 4.42 tons. If we ASS-U-ME that CO2 emissions are tied directly to human population, but that per capita emissions will increase another 30% from now to 2100, we arrive at a CO2 concentration of around 660 ppmv by 2100.

If we, however, ASS-U-ME that the logarithmic rate of increase of atmospheric CO2 caused by human CO2 emissions will continue to grow exponentially independent of human population, we arrive at a CO2 concentration of around 940 ppmv by 2100.

This latter case seems absurd to me, because it would imply that every man, woman and child on Earth would be emitting CO2 at a rate higher than that of US citizens today.

Even you will have to agree that this is not very likely, Vaughan.

It’s more likely that we end up with a business-as-usual increase to around 620 to 660 ppmv by 2100.

This could be reduced by around 60 to 80 ppmv by the implementation of “no regrets” initiatives (see recent ASME report), so the likely range covering various initiative scenarios is between 570 and 660 ppmv.

And (at the IPCC’s 3.0 C climate sensitivity) the net impact of all “climate initiatives” on the “globally and annually averaged temperature” at equilibrium is around 0.5 C.
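The first scenario’s arithmetic can be checked in one line. The ~396 ppmv starting value for 2013 is my assumption, roughly the Mauna Loa level at the time of these comments:

```python
# Case 1: CO2 compounding at a constant 0.52 %/yr CAGR from ~396 ppmv in 2013
c2100 = 396.0 * 1.0052 ** (2100 - 2013)
# comes out near 620 ppmv by 2100, matching the figure quoted above
```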

@cd: Perhaps you could overlay their chart with your interpretation of the glaring error?

Near as I can tell their ten-parameter model fits just fine. (12 on the face of it, but two can be absorbed into the other 10.)

Last December Mike Rossander was able to get a terrific fit to HadCRUT3 with just four sine waves and nine parameters. If L&S can’t get an even better fit with ten parameters then they should go back to the fitting room.

The lesson from last December was that even an unreasonable model like four sine waves can be made to fit well. One must therefore judge reasonableness of ten-parameter models by other than how well they fit the data.

In the meantime I’ve decreased the number of parameters in my 2012 spreadsheet by 9, so as to avoid the interminable arguments about parameter counting. I’m now 10 parameters ahead of L&S. The trick is to base the analysis on the observed data instead of coming up with questionable formulas with sine waves and polynomials, as L&S do and as I used to do. If I’ve understood WHT, he’s made the same move. It’s a big improvement over the arbitrariness of analytic models.

Vaughan, I am a bit torn on the value of any fit that involves not knowing why there is a fit. Natural recovery is going to have a similar ln curve, since water vapor responds to any temperature change for whatever reason, so I have been looking into finding some “out of sample” data to test a few of these fits. BEST, btw, scales nicely to “global”, adding a hundred years, if you would like to give that a shot. There is also Tmax and Tmin for gut checks.

This stochastic resonance tends to model some of the decays from perturbations pretty well, though I am more into weakly damped responses at the moment.

The two trend lines in your graph show the slopes at respectively 1977 and 2007 (a reasonable approximation to the Mean Value Theorem for the Keeling curve), which are 30 years apart.

If you click on Raw Data under the graph you’ll see that the 1975 slope is 1.3584, the 2006 slope is 2.0171, which is 1.485 times greater. Squaring that gives an estimate of the slope increase for 60 years. The square implies a factor of 2.2 increase in slope.
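The extrapolation arithmetic, as a sketch (the two slope values are taken from the comment above):

```python
ratio_30yr = 2.0171 / 1.3584          # slope increase over the ~30 years 1975-2006
ratio_60yr = ratio_30yr ** 2          # extrapolated to 60 years at the same growth
ratio_50yr = ratio_30yr ** (50 / 30)  # ~1.9: the slope roughly doubles in 50 years
```

Compounding the 30-year ratio to 50 years gives a factor close to 2, and to 60 years a factor of about 2.2, matching the figures in the surrounding comments.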

I appreciate your confirmation of my observation that the slope doubles in 50 years.

@cd: Vaughan, I am a bit torn on the value of any fit that involves not knowing why there is a fit

My feeling precisely. My preference nowadays is for what one might call “emergent models.” This would be a waveform extracted from the data by filtering, and then compensating for any attenuation by turning around and fitting that “emergent model” to the data to give a scaling factor. This is better than simply guessing the factor as I did in my poster last year.

BEST btw scales nicely to “global” adding a hundred years if you would like to give that a shot.

I noticed that in your graph but I didn’t see the accompanying documentation. Maybe your graphs could include links to their documentation.

This Stochastic resonance tends to model some of the decays from perturbations pretty well though I am more into weakly damped responses at the moment.

I thought you needed an actively responding system in order to get any mileage out of stochastic resonance. Isn’t climate data a bit too passive for that to work?

@max: The compounded annual growth rate (CAGR) or exponential growth rate of atmospheric CO2 has levelled off since the early 2000s

Max, we already had the conversation about whether it was meaningful to talk about the CAGR of total atmospheric CO2. Unless you’re having a senior moment you should recall that I don’t believe exponential growth of total CO2 is physical. I’m not up for any more metaphysical speculations on how CO2 accumulates.

In case you want to scale to GISS loti global the factor is 0.65. Excellent fit back to 1900 then some more serious wanderings.

I am not sure about the stochastic resonance, or at least how much it applies, but a more “global” ENSO does seem likely to be a resonance, since there is a 27-29 month lag relative to solar/QBO that tends to fit the form. If it does apply, it should relate to all time scales, at least until perturbations subside.

Hmm, interesting. Can you really convert land data into sea data? How well does that work on data sets that have both?

Mosher said something a while back about working on BEST sea temperature. I’ve been away from blogs for a while (and have to leave this one for a while now too) so haven’t kept up with this. Have you heard anything? (I haven’t seen him on this thread.)

“Matthew R Marler | November 30, 2013 at 7:03 pm |
Maybe. I think you ought to wait until after you have published in the peer-reviewed literature and after the model has survived at least 20 years’ worth of testing by out-of-sample data, before you make that claim.”

“Perhaps you could overlay their chart with your interpretation of the glaring error?”

The error by L&S is glaring in comparison to their fastidious search for an ABCD answer.

Just use the crumbs that Scafetta leaves behind and there you go. The deniers do all the hard work in trying to deceive, so all one has to do is kick over the rocks and collect the spoils.

The CSALT model is an example of everyone being right to a partial degree — Curry is right with the Stadium wave, Scafetta is right with his orbital parameters, Crowley is right with the Aerosols, Bob Carter is right with his SOI contribution, sunspots are partially right, and of course whoever figured out the importance of CO2 is right.

Like a blender, we apply the variational principle to thermodynamics and see what pops out. Observational evidence such as the long-term trend, pauses, and natural fluctuations are all characterized.

Vaughan , “Hmm, interesting. Can you really convert land data into sea data? How well does that work on data sets that have both?”

If you can convert proxies into global temperature you should be able to convert land instrumental into global temperatures with a better idea of what areas don’t convert well to “global”. If your model is really good, it should be able to explain any divergences.

If I limit the land temperatures to 60S-60N and less than 300 meters in altitude, I should get a much better fit than with stations poleward of 65S/65N and higher than 300 meters, because that region would be more anti-phase to “global” temperatures. If I really know what is happening, I should be able to explain the shift in “global” diurnal temperature range circa 1985.

Climate science has its own bull$hit detector built right in :) You just look for where the arm waving starts.

And you were unable to show me how the increase of atmospheric CO2 caused by human CO2 emissions would grow independently of the growth of human population (the folks emitting the CO2 to start off with).

Because you can’t.

Yet that is what your silly projection ASS-U-MEs.

People sitting in ivory towers often study one aspect to death while ignoring the real world around them.

I happen to be using data from various skeptical scientists.
1. Scafetta thinks that his complex Moon-Sun-Earth-Jupiter orbital tug is the main contribution.
2. Curry thinks that her Stadium Wave theory is a contributor.
3. Bob Carter was one of the first to note that the ENSO SOI value followed the temperature excursions.
4. The “It’s the Sun, stupid” people think that sunspots are the key.

@cd: Perhaps you could overlay their chart with your interpretation of the glaring error?

Capn, to emphasize my point that with ten parameters L&S ought to be able to achieve an excellent fit with their model, I decided to overlay their piecewise-linear “natural+anthropogenic” warming trend with the log of what the CDIAC emissions data (including land use changes) hindcasts from Mauna Loa for atmospheric CO2. These are shown in blue and green, respectively, here.

The two curves can be seen to be in excellent agreement.

The main take-away point is that L&S have elected to call the CO2 we emitted between 1850 and 1942 “naturally caused”. (On that basis Nebraskan climatologists could appeal to the L&S paper to justify talking about our pre-1942 CO2 emissions in their research reports on the ground that those have been duly certified as naturally caused. Seems fair, wasn’t humanity closer to nature before it invented the bomb?)

Here’s the MATLAB gibberish that plotted these two curves. Non-coders should avert their eyes and skip a paragraph or three.

The first line of the code loads the year and cumulative emissions data from a useful file cumemit.csv obtained by summing CDIAC data since 1850. cumemit(:, 1) is a column vector of years (so t gets set to the years 1850:2012) while cumemit(:, 2) is a column vector of the corresponding accumulated amounts of emitted CO2 since 1850 inclusive. cumemit(163, 2) is 549.8, meaning that the total CO2 emissions from 1850 to 1849+163 (= 2012) is 549.8 gigatonnes of carbon (= 2016 gigatonnes of CO2).

1.5 — The climate sensitivity (CS) best matching the L&S model. (The best match to HadCRUT3 is with CS about 1.75.)
0.44 — The fraction of emitted CO2 retained in the atmosphere
2.13 — gigatonnes of carbon per ppmv of atmospheric CO2 (= 5.148*12/28.97)
280 — Nominal value for CO2 in 1850. The “1+” therefore effectively adds 280 to emitted CO2. If emitted CO2 is zero then so is AGW because log2(1) = 0. This is why the Y-axis can reasonably be labeled “Expected AGW since 1850 assuming CS = 1.5”. The expectation is based on observations and physics.

The number 0.47 in the latter raises L&S’s model so that it starts just below 0, to match the other curve. The remaining numbers are from their paper.
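For anyone without MATLAB, here is a rough Python sketch of the green curve’s formula as I read the description above. This is a hedged reconstruction, not Vaughan’s actual code: the parameter values come from the list above, and the cumemit.csv data is represented only by the single 2012 total quoted earlier.

```python
import math

# Hedged reconstruction of the described computation (not the original MATLAB).
CS = 1.5                           # climate sensitivity, deg C per doubling of CO2
AF = 0.44                          # fraction of emitted CO2 retained in the atmosphere
GTC_PER_PPMV = 5.148 * 12 / 28.97  # ~2.13 GtC per ppmv of atmospheric CO2
C0 = 280.0                         # nominal 1850 CO2 level, ppmv

def expected_agw(cum_emissions_gtc):
    """Expected AGW since 1850 (deg C) from cumulative emissions in GtC."""
    retained_ppmv = AF * cum_emissions_gtc / GTC_PER_PPMV
    return CS * math.log2(1 + retained_ppmv / C0)

# The text gives cumemit(163, 2) = 549.8 GtC for 1850-2012:
print(round(expected_agw(549.8), 2))  # about 0.74 deg C
```

Note that zero emissions give exactly zero warming, which is why the Y-axis can reasonably be labeled “Expected AGW since 1850 assuming CS = 1.5”.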

Hopefully this makes clearer my crack earlier here about modeling Planck’s law as a triangle. L&S have chosen to model a physically well-motivated curve with two straight lines, one called natural and the other called natural+anthropogenic. The slope of the former is 0.16 °C/century, that of the latter 0.16 + 0.66 = 0.82 °C/century.

Max (Manacker, not Planck) will now start complaining that I’m asking people to project the blue curve to 2100 in an unrealistic way. At least Max was kind enough to allow that projecting the green curve to 2100 was also unrealistic, so I shouldn’t complain about his offer to split the difference. :)

@max: And you were unable to show me how the increase of atmospheric CO2 caused by human CO2 emissions would grow independently of the growth of human population (the folks emitting the CO2 to start off with).

Yes, just as I haven’t been able to show you that I’ve stopped beating my wife. You’ve never challenged me on either.

Ask and ye shall receive. But don’t complain if you don’t ask.

If per capita fuel consumption continues to rise, as it’s been doing nonstop for centuries now, and if all human population growth were to suddenly come to a halt, human CO2 emissions would then grow independently of the growth of human population.

That relation between CET and BEST looks very interesting, capn. Can you quantify the correlation? While it may be easy for you to see the correlation by eye, it’s difficult for me, so some correlation coefficients would help there.

Even more help would be some way of displaying the CET and BEST data that makes the correlation more visible. Your 27 month moving average filter leaves too much noise behind to see the trends clearly. Web might recommend a Pratt filter for that, but it looks to me like a Gaussian filter 5 or 10 years wide would be just as effective for that job.
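As a throwaway illustration of the kind of Gaussian smoother I mean, here is a minimal pure-Python sketch (the inputs are hypothetical placeholders, not CET or BEST data; for 10-year smoothing of monthly data sigma would be on the order of tens of samples):

```python
import math

def gaussian_smooth(series, sigma):
    """Smooth a list of floats with a Gaussian kernel of width sigma (in samples)."""
    radius = int(3 * sigma)  # truncate the kernel at 3 sigma
    offsets = range(-radius, radius + 1)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in offsets]
    total = sum(kernel)
    kernel = [w / total for w in kernel]
    smoothed = []
    for i in range(len(series)):
        acc = wsum = 0.0
        for k, w in zip(offsets, kernel):
            j = i + k
            if 0 <= j < len(series):
                acc += w * series[j]
                wsum += w
        smoothed.append(acc / wsum)  # renormalize near the edges
    return smoothed

# Hypothetical example: a single spike gets spread out by the filter.
print(gaussian_smooth([0.0, 0.0, 1.0, 0.0, 0.0], 1.0))
```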

@cd: I think the L&S reference to 1950s magic is a bit of a tweak instead of an attempt to misrepresent.

Could be. Presumably those Christians that were thrown to the lions weren’t guilty of fraud, though those doing the throwing might have thought they were and therefore imagined they were doing the right thing.

I wonder whether Mashey or Oreskes would consider L&S’s paper fraudulent. And would Richard Dawkins have rooted for the lions?

Vaughan, ” Can you quantify the correlation? While it may be easy for you to see the correlation by eye, it’s difficult for me, so some correlation coefficients would help there.”

Not with a “normal” method. The main correlations are actually perturbations, and each region has a different damped response. To show that I would have to use different smoothing that more closely matches the response time of each series, which tends to pad the correlations, making them pretty unconvincing. The reason for including mean sea level is to have another reference that is more “global”. The reason for using the 27 month smooth is that it picks out a good bit of the ENSO for timing reference. So it’s not really a correlation between BEST and CET so much as a correlation between the causes and the individual series.

The post also shows the CSALT defluctuated fit to the ln(CO2) signal, which is much better than what L&S can offer:

This is closer to what Vaughan is describing with his own model fit to the CDIAC data. We both use the historical CDIAC data because the scientists went to the trouble of estimating the historical record, and until something better comes along, we will continue to use it.

For the record, I also include a permanent reference to the Pratt filter in the blog post. For attribution’s sake, of course.

In defense of ignoring human population growth rates in estimating future CO2 levels resulting from human emissions of CO2, you write:

If per capita fuel consumption continues to rise, as it’s been doing nonstop for centuries now, and if all human population growth were to suddenly come to a halt, human CO2 emissions would then grow independently of the growth of human population

Per Capita CO2 generation from fossil fuels increased by 10% from 1970 to today, based on CDIAC figures on CO2 and US Census Bureau figures on world population.

In my estimate, which arrives at 650 ppmv CO2 by 2100, I have used US and UN projections of future population growth and have ASS-U-MEd that the global per capita CO2 would increase by another 30% by 2100.

Your estimate of around 1000 ppmv would require the average world-wide per capita CO2 emission to be higher than that in the USA today.

Obviously bonkers, Vaughan.

I’ll repeat this, since it seems you failed to write it down:

To ignore projections of future human population growth in making a projection of future CO2 levels caused by human emissions of CO2 is silly.

@cd: The reason for using the 27 month smooth is that it picks out a good bit of the ENSO for timing reference

Even though I can’t see it myself, that’s very good news because it means there should be a way to make the timing reference clearly visible. Doing so would greatly improve the strength of your connection between CET and BEST.

@max: And you were unable to show me how the increase of atmospheric CO2 caused by human CO2 emissions would grow independently of the growth of human population (the folks emitting the CO2 to start off with).

And you were unable to show me why my proof of it was unconvincing. I thought it was a very simple proof, but apparently you didn’t since you simply blew it off without even making an attempt to show what was wrong with the proof.

On this thread you pointed out that “the slope of the log of the Keeling curve doubled between 1960 and 2010”.

I pointed out to you that this occurred as human population was growing at a rate of 2% per year. This rate has already started to slow down, and is projected to slow down even further (to a rate of around 0.4% per year), so it would be absurd to ASS-U-ME that the slope of the log of the Keeling curve would continue to increase over the rest of this century despite this projected slowdown in population growth.

Then I added:

If we, however, ASS-U-ME that the logarithmic rate of increase of atmospheric CO2 caused by human CO2 emissions will continue to grow exponentially independent of human population, we arrive at a CO2 concentration of around 940 ppmv by 2100.

This latter case seems absurd to me, because it would imply that every man, woman and child on Earth would be emitting CO2 at a rate higher than that of US citizens today.

On this thread you pointed out that “the slope of the log of the Keeling curve doubled between 1960 and 2010”.

I pointed out to you that this occurred as human population was growing at a rate of 2% per year. This rate has already started to slow down, and is projected to slow down even further (to a rate of around 0.4% per year), so it would be absurd to ASS-U-ME that the slope of the log of the Keeling curve would continue to increase over the rest of this century despite this projected slowdown in population growth.

Then I added:

If we, however, ASS-U-ME that the logarithmic rate of increase of atmospheric CO2 caused by human CO2 emissions will continue to grow exponentially independent of human population, we arrive at a CO2 concentration of around 940 ppmv by 2100.

This latter case seems absurd to me, because it would imply that every man, woman and child on Earth would be emitting CO2 at a rate higher than that of US citizens today.

And you were unable to show me why my proof of it was unconvincing. I thought it was a very simple proof, but apparently you didn’t since you simply blew it off without even making an attempt to show what was wrong with the proof.

Here is what I believe you were referring to as your “proof”:

If per capita fuel consumption continues to rise, as it’s been doing nonstop for centuries now, and if all human population growth were to suddenly come to a halt, human CO2 emissions would then grow independently of the growth of human population.

Do you dispute any of this?

To this statement of “proof” I replied that the per capita CO2 generation from fossil fuels increased by 10% from 1970 to today, and that projecting a further 30% increase in per capita CO2 to 2100 would result in a CO2 level of around 650 ppmv – NOT ~1000 ppmv, which would result from the ASS-U-MEd continued increase in the logarithmic rate of increase of the Keeling curve (or from your projection on an earlier thread).

I also pointed out that to reach 1000 ppmv it would require that every man, woman and child on Earth would generate as much CO2 as US inhabitants do today – an assumption that is blatantly absurd.

So, yes, I responded to your “proof”.

You apparently just did not read my response to you.

It can happen.

But, Vaughan, we have truly beaten this dog to death.

Unless you have something new to add, let’s cap it off and move on to something else.

A reanalysis has a different purpose from climate models. Typically they are done with the aim of providing best-guess gridded fields of atmospheric variables every 6 hours with a model constrained by observations. The addition of observations every 6 hours means it doesn’t conserve things like mass, water and energy that continuously running climate models should, so their budgets would not be accurately closed. Reading the BAMS paper we find that surface precipitation persistently exceeds evaporation for the CFSR, so it is not conserving water, and we would also not expect energy to be conserved because temperatures are adjusted. The observational corrections do this as a price to pay for fitting them better.

If you want to know the weather patterns at a given date and time, that is what the reanalysis gives you, but things like radiative fluxes and rainfall totals are by-products and not the main outputs. Climate models are tuned to give reasonable energy budgets and water budgets, but may have regional biases in the atmosphere that are uncorrected by data, and will not correspond to any real date and time. Mainly with these you are looking at monthly means and their interannual variability, and the effects of changing scenarios, but they may be verified with reanalyses mean temperatures, humidities and winds.
The bottom line is they are different things for different purposes.

Vaughan, it is the best guess of the upper air and surface temperature, winds and humidities with high time resolution, so it can be used for past trends. Some reanalyses go back several decades, but the constraining data gets thinner before the satellite era, especially over oceans and the southern hemisphere.

Vaughan Pratt: The curve you’re displaying there has a slope that increases from 0.85 ppmv/yr in 1960 to 2.1 ppmv/yr in 2010. If you’re saying that’s not a factor of 2, you’re right: it’s a factor of 2.5!

However the curve I was referring to was not the Keeling curve but its log, whose slope just slightly more than doubles between 1960 and 2010 (regardless of what base log you use).
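To spell out the arithmetic: the slope of log(C) is C'/C, independent of the base, so the growth rates above combined with approximate Mauna Loa concentrations for those two years (roughly 317 and 390 ppmv, my added figures) give the ratio directly:

```python
# Growth rates quoted above (ppmv/yr) and approximate Mauna Loa levels (ppmv).
c_1960, rate_1960 = 317.0, 0.85
c_2010, rate_2010 = 390.0, 2.1

# d/dt log(C) = C'/C, so the base of the logarithm cancels out of the ratio.
log_slope_1960 = rate_1960 / c_1960
log_slope_2010 = rate_2010 / c_2010
print(round(log_slope_2010 / log_slope_1960, 2))  # slightly more than 2
```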

I noticed the same thing about the curve provided by David Springer. Clearly the linear least squares line is inadequate for those data, but wft does not provide an option (at least I found none) for fitting polynomials.

Any projection of future atmospheric CO2 growth (extension of the “Keeling curve”) that ignores the projection of human population growth (the folks allegedly generating the CO2 that cause this increase) is patently absurd to start off with, regardless of how it is constructed.

That was my point to Vaughan Pratt in comment #419832 above.

This is how people sitting in ivory towers crank out silly projections that make no sense at all.

@MM: Clearly the linear least squares line is inadequate for those data, but wft does not provide an option (at least I found none) for fitting polynomials.

Can’t you do it yourself? Although MATLAB is a bit pricey, R is a free download. And if you have Excel, it has a built-in Solver that is ideal for fitting anything you want, not just polynomials.

For the Keeling curve however, even simpler is just to go to the literature. Hofmann et al’s paper fitting a raised exponential to the Keeling curve, titled “A new look at atmospheric carbon dioxide,” appeared in Atmospheric Environment, 43:12 2084-2086 in 2009. That was two years before the 2011 L&S paper.

While I’m flattered that Springer wants to see this raised-exponential model published by me before he’ll take it seriously, I see no point in my publishing a model of the Keeling curve that’s been out there for more than four years already. Springer should read the paper instead of complaining that I haven’t published a fit that is (a) much closer to the Keeling curve than L&S’s model and (b) not mine.

The L&S model also hindcasts insanely: it goes abruptly from a linear slope to dead flat exactly at 1950. That’s completely unphysical: no way does that have anything to do with the growth of CO2 since 1850 or the Arrhenius logarithmic law, DS’s protests to the contrary notwithstanding. Climate models should respect the physics of what they’re modeling.

Sure, but why? You were obviously correct and DS was obviously wrong — you can tell that by eyeballing the curve with a straightedge. The only importance to the fact that wft did not fit a polynomial was that I could not illustrate the polynomial using the same tool that DS used.

@MM: The only importance to the fact that wft did not fit a polynomial was that I could not illustrate the polynomial using the same tool that DS used.

True enough, I’d overlooked that. Though you’ll notice that about half the time I point to MATLAB-generated graphs because WfT wasn’t up to the task.

If there were a good way to filter viruses, a version of WfT that took arbitrary code in any of R, MATLAB, or Excel/VBA might be an improvement over WfT. Or perhaps some intermediate approach that merely added certain simple R/MATLAB/VBA formulas to WfT.

“The L&S model also hindcasts insanely: it goes abruptly from a linear slope to dead flat exactly at 1950. That’s completely unphysical: no way does that have anything to do with the growth of CO2 since 1850 or the Arrhenius logarithmic law, DS’s protests to the contrary notwithstanding. Climate models should respect the physics of what they’re modeling.”

That is so true. This is the growth of CO2 and the defluctuated CSALT signal trend, which follows that trend the best.

The curve labeled “fluctuation” comprises the nuisance variables that capture the energy not related to temperature rise.

BTW, I found the limitations of WoodForTrees make it near useless. The infrastructure I use for the CSALT model is a Semantic Web DSL written in Prolog which interfaces cleanly to R. This is pretty neat, because as with other dynamic languages, I can update new code while the server is running.

“The L&S model also hindcasts insanely: it goes abruptly from a linear slope to dead flat exactly at 1950. That’s completely unphysical: no way does that have anything to do with the growth of CO2 since 1850 or the Arrhenius logarithmic law, DS’s protests to the contrary notwithstanding. Climate models should respect the physics of what they’re modeling.”

An emeritus biology professor (lots of mysteries in biology) once told me “Hypotheses have to make sense. Facts don’t. Write that down.”

In fact he’s the guy (passed away a couple years ago) who got me hooked on saying “Write that down.”. I’m carrying it on in his memory.

So Vaughan, write that down. Your protest that the facts don’t make sense doesn’t hold any water.

Great. But for the record there are not one but two claims you’re proposing to move on from.

1. You claim it’s obviously impossible for human CO2 emissions to grow independently of the growth of human population. I claim it’s obviously possible. I’m happy to leave it at that.

2. You claim I forecast 1000 ppmv on “some CE thread.” I claim you’re making a strawman argument and that I did no such thing. I’m happy to leave it at that.

(It is true that Hofmann et al fit a raised exponential to the Keeling curve which, when evaluated at 2100, yields 1027.65 ppmv. But that could only serve as a forecast on the assumption that CO2 emissions will continue to track their current trajectory, and I’ve never claimed that! In contrast Loehle and Scafetta are happy to call their extrapolation to 2100 a “preliminary forecast” without any discussion of expected changes in population growth or CO2 emissions, which I imagine you would agree is far-fetched.)

I wanted to cap this off, and then you came back with some silly claims, which need clearing up.

1. You claim it’s obviously impossible for human CO2 emissions to grow independently of the growth of human population. I claim it’s obviously possible. I’m happy to leave it at that.

2. You claim I forecast 1000 ppmv on “some CE thread.” I claim you’re making a strawman argument and that I did no such thing. I’m happy to leave it at that.

Wrong on both counts, Vaughan.

I do not “claim it’s obviously impossible for human CO2 emissions to grow independently of the growth of human population”. I just claim that, while it may be theoretically “possible”, it is a stupid assumption, because it is more logical that there is some sort of correlation between the number of humans emitting CO2 and the total amount of CO2 emitted by humans. I believe a more rational approach is to examine what has happened in the past: from 1970 to today per capita human CO2 emissions from fossil fuels increased by 10%. So it is reasonable to estimate that per capita emissions could increase by another 30% by 2100.

You write that a raised-exponential fit to the Keeling curve, when evaluated at 2100, yields 1027.65 ppmv (I had figured roughly 980 ppmv). Either projection is absurd, if US Census Bureau and UN projections of population growth are correct, because they would imply that every man, woman and child on the planet emits as much CO2 as US inhabitants do today. (And BTW the US per capita emission is decreasing today, as is that of most industrially developed nations.)

@Max: Either projection is absurd, if US Census Bureau and UN projections of population growth are correct, because they would imply that every man, woman and child on the planet emits as much CO2 as US inhabitants do today

With the premise that per capita fuel consumption will not change between now and 2100, yes, I fully agree with you that this contradicts 1000 ppmv. However the Hofmann et al article projected on the basis of both population and per capita fuel consumption increasing. My poster (which was a whole year ago, a lot can change in a year!) simply reproduced their formula while finding their premises quite reasonable.

Your premise that per capita fuel consumption (hence emissions) is constant is an interesting one in the light of historical data for world population in millions (table from Wikipedia) vs. CO2 emissions in MtC (megatons of carbon, source CDIAC). Here’s a table whose rightmost column shows the per-capita emissions for the indicated year in units of tons of carbon per person per year.

From 1850 to 1980 we see continually rising per-capita emissions, starting at 43 kg of carbon per person per year in 1850 rising to 1.2 tons in 1980. Then very interestingly it drops back and settles down to the range 1.11-1.17 tons from 1985 to 2000.

Had we been judging your premise in 2000 it would appear you were right.

But then in 2005 the rate rises to 1.25, with a further rise to 1.31 in 2010.

It’s hard to know what to conclude from this. If the 18% rise in per capita consumption for the decade 2000-2010 were to be sustained for the rest of the century, we’d be up to 5 tons per person in 2100. With world population at 10 billion the world would then be emitting 50 GtC of carbon per year. If on the other hand this was just a fluke and the steady rate during 1985-2000 were to resume at say 1.3 tons/person for the entire century, then it would only be something like 13 GtC.

Given the variability of per capita consumption over decadal periods it might be more reliable to extrapolate on the basis of much longer periods, say a century. Annual per capita emissions increased from 0.324 to 1.11 tons during the 20th century, a factor of 3.4 (implying a CAGR of 1.25% and a doubling period of 57 years). If the same thing happened in the 21st century it would rise from 1.11 to 1.11*3.4 = 3.8 tons per year per person in 2100, or a total of 38 GtC for 2100.
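The century-scale arithmetic in the previous paragraph, sketched in Python for anyone who wants to vary the inputs (all figures are the ones quoted above):

```python
import math

percap_1900, percap_2000 = 0.324, 1.11             # tC per person per year
growth_factor = percap_2000 / percap_1900          # ~3.4 over the 20th century
cagr = growth_factor ** (1 / 100) - 1              # about 1.25%/yr compounded
doubling_years = math.log(2) / math.log(1 + cagr)  # roughly 56-57 years
percap_2100 = percap_2000 * growth_factor          # ~3.8 tC/person/yr if repeated
total_2100_gtc = percap_2100 * 10e9 / 1e9          # ~38 GtC/yr with 10 billion people

print(round(cagr * 100, 2), round(doubling_years, 1), round(total_2100_gtc, 1))
```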

If you know things I don’t about future per capita emissions I’m all ears. It’s not something I know how to forecast accurately at all. But I won’t argue with your figure of 10 billion people in 2100, I don’t have enough data or insight to contradict either you or your sources on that number.

An explanation that I imagine Max would prefer is that he’s right. After carefully rereading this thread I realized that on Dec. 2 at 1:32 am Max did indeed allow a 10% rise in per capita emissions over the past four decades. 18 hours later I was still falsely claiming he was ignoring the possibility that they could increase at all.

Relative to the actual increase however, 10% over four decades is pretty close to no increase at all. I would apply for a moral victory if I thought that was worth anything on CE, but I have to be realistic.

Hypotheses have to make sense. Facts don’t.

The linear trends above, along with PDO- and AMO-synchronous 20/60 year sines of 0.1 and 0.3 (IIRC) amplitude, are all that’s needed to very faithfully reconstruct the HadCRUT3 temperature record for the same period. HadCRUT3 is an observation. L&S is a decomposition of the observations into the signals needed to reconstruct it. They hazard guesses as to what caused the component signals, but the decomposition is simple and factual.

Facts don’t have to make sense. Guesses (hypotheses) have to make sense. Objecting that no explanation for the signals makes sense is simply an argument from ignorance. The signals remain the same whether you boys can make sense of them or not. Write that down.

Define “far”.
I would define “far” as: the earth’s oceans have been warming for more than 10,000 years, and could continue to warm for another 10,000 if we don’t enter a glacial period before those 10,000 years are done.

And ocean equilibrium is the only real equilibrium there is; atmospheric “equilibrium” could be explained in terms of temperature bouncing up and down daily and seasonally, which is a stretch to call an equilibrium. But even if one chooses to, such atmospheric equilibrium is insignificant/unimportant in terms of global climate.

The energy “surplus or deficit” probably varies slightly on a day-to-day basis, but I don’t “hope” for a significant prolonged “deficit” (think wooly mammoths).

The measurements on this are so sloppy and inaccurate (and we are talking about minuscule differences between very large numbers) that I do not believe one can draw any meaningful conclusions as to whether there is a net energy “deficit” or a “surplus”.

At any rate, the oft-cited 0.9 W/m^2 “surplus” of Trenberth et al. is a “plug number” they took over from an earlier Hansen et al. paper, which arrived at the number by circular logic and arithmetic roundup from 0.85 W/m^2.

Unlike many fiscal budgets, earth’s energy budget is widely believed to be in surplus.

With each year of increasing amounts of greenhouse gasses, earth is modeled to send less energy outward than it receives from the sun. This energy surplus, as understood, continues until the global average temperature rises sufficiently to restore balance by emitting more energy in accordance with the Stefan-Boltzmann Law. Indeed, the concept of ‘missing heat’ implies that a surplus of energy exists to be missed. And the NASA GISS Model E projects a trend of increasing energy surplus. The runs of Model E for “Dangerous Human-Made Interference” (from 2007) A1B scenario ( available at link) yield this projection for net radiance at the top of the atmosphere:

THIS IS ALL BASED ON MODEL OUTPUT. MODEL OUTPUT HAS NOT MADE A SKILLED FORECAST IN TWO DECADES. WHY DOES ANYONE CONSIDER THIS WORTH THE TIME IT TAKES TO READ IT?

If you define “in equilibrium” as “steady with multi-decadal swings of a few tenths of a degree superimposed on an even smaller decadal warming trend since we have been emerging from a colder period known as the LIA”, that sounds about OK.

Otherwise forget about “equilibrium”. It exists in theoretical physics, but not in our planet’s climate.

What the GISS models (or any other ones) predict for the future is pure fantasy, based on assumed inputs backed by theoretical deliberations.

As any ocean-going tourist can tell you, Max, equilibrium is all either in the head or the toilet bowl. For the first two days you’re violently ill and then equilibrium is restored.

So far no one has developed an instrument that can tell whether an ocean liner at sea is in equilibrium or not, at least not by the above criterion. Only when a big storm whips up can state-of-the-art instruments detect a general loss of equilibrium among the passengers. The crew remain fine.

Much the same sort of argument could be used to show that Planck’s derivation of his eponymous law was based on circular reasoning. Likewise least-squares estimation of parameters of a model looks like circular reasoning. Much of physics is based on what you would consider “circular reasoning”.

Global average temperature is just one of the thermodynamic variables that determine the free energy balance of the earth. According to the CSALT model we can track these variables accurately and then attribute the GHG portion of the warming. There is no missing energy, as nature will not allow energy to get lost. It is more a matter of how good a job we humans do at book-keeping.

The conclusion of continuing energy uptake is based on estimates of ocean heat content. The absolute accuracy of the determination of TOA energy balance is not sufficient to confirm or refute that conclusion. The paper discusses only the period 2001-10. Over that period the results of Steve McGee do not show dramatic changes. Such changes occur just before this period.

Thus the data presented in the paper of Loeb et al. and that presented by McGee on directly determined TOA imbalance are not contradictory, but the OHC measurements give a positive heat balance of 0.50 ± 0.43 W/m^2 (uncertainties at the 90% confidence level), while the curve presented by McGee has, since 2001, a negative average of about -0.5 W/m^2.

This discrepancy and the large uncertainties in the determination of the TOA imbalance tell about the limits of present knowledge.

That is an interesting link. The “benchmark” sensitivity is approximately 3.2 to 3.3 W/m^2/K, which for a true ideal blackbody surface would correspond to a temperature of about -30C. Obviously, the majority of the surface is not at -30C, and includes water vapor, making it a less than ideal blackbody. Stephens et al. pointed out the difference is roughly +/- 0.4 W/m^2 at the TOA and +/- 17 W/m^2 at the true surface in terms of uncertainty. The problem is of course water vapor, specifically saturated and supersaturated water vapor, which is common down to about -30C and can be found in mixed phase below -40C.
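The “about -30C” figure can be checked from the Stefan-Boltzmann law: for a pure blackbody the Planck response is dF/dT = 4σT³, so a sensitivity of 3.2-3.3 W/m^2/K pins down the emitting temperature. A quick sketch (my own check, not from the linked source):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(planck_response):
    """Temperature (K) at which dF/dT = 4*sigma*T^3 equals the given W/m^2/K."""
    return (planck_response / (4 * SIGMA)) ** (1 / 3)

for s in (3.2, 3.3):
    print(s, round(blackbody_temp(s) - 273.15, 1))  # roughly -32 to -29 C
```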

Using a 1000mb reference as a “surface”, especially when the average land elevation is ~700 meters (~930mb) and the planetary boundary layer is ~2500 meters (850mb), and assuming that the impact of 3.2-3.3 Wm-2 will produce 1 C of warming, has a few issues: latent heat, sensible heat/convection and advection all tend to reduce the “surface” impact of ~3.2 Wm-2 relative to an estimated effective radiant layer at -30C.

Murphy et al. have written what they refer to as “An observationally based energy balance for the Earth since 1950”.

They conclude

About 20% of the integrated positive forcing by greenhouse gases and solar radiation since 1950 has been radiated to space. Only about 10% of the positive forcing (about 1/3 of the net forcing) has gone into heating the Earth, almost all into the oceans.

But wait!

This was written in 2009, when there had been around six years of data on ocean warming from ARGO. These data are still quite sporadic and earlier results showed very slight cooling while later results (after applying some corrections to the raw data) showed very slight warming (of around 0.05ºC per decade).

Prior to the ARGO data there are no meaningful observationally based data of ocean heat content, so how can a meaningful “observationally based” balance be made?

I’d say it can’t.

The “integrated positive forcing by greenhouse gases and solar radiation” is a model-generated estimate.

Only 10% of this “positive forcing” went into heating the Earth, mostly into the oceans but the change in ocean temperature prior to 2003 is not much more than a wild guess.

I’d say we need a lot more than this study to establish observationally whether or not the Earth is in energy balance – let alone to quantify any imbalance that might exist.

Brandon, if you were positive someone was being vitriolic in their response to you, and you complained to them about it, and they denied they were being vitriolic, how would you respond?

I would, like with all things, explain to the person why I believe what I believe and provide them an opportunity to convince me I am wrong. In the process, I would naturally review what I felt was vitriolic to check my beliefs.

To be more specific, I would highlight the portions of their response which I believed were bitter or caustic and explain why I felt they could not be interpreted in a more neutrally emotive way.

I would pay special attention to the subject of any criticisms made in their response. Vitriol is emotive, and as such, it usually focuses on individuals or groups. Debate does not. Debate usually focuses on what was said and how it was said, not on who said it. It intentionally has participants remove personalities from focus in order to foster a more neutrally emotive discussion.

For a non-hypothetical comment, I’d recommend looking at two things. 1) Look at the difference in the manner of the comments I made. 2) Look at your responses to manacker on this page. The former will show a clear change in manner, and the latter will show you being more snide, caustic and bitter than I was in the comment in question.

Chris G | November 28, 2013 at 5:39 pm said: “We conclude that energy storage is continuing to increase in the sub-surface ocean.”

Chris, how can the phantom heat bypass the surface of the water? Do you believe in shonky science and guillotining physics?

When water gets warmer, evaporation increases; evaporation is a cooling process. Higher evaporation means more clouds, and clouds are sun-umbrellas for the land and oceans. What happened to your knowledge of physics? Why do you let yourself be brainwashed by the outdated conspiracy? Tragic…

What surprises me is that this evidence for a radiative deficit is being found now. With the GHG warming being such a huge topic I would have thought that satellite measurements would have been at the heart of the science for decades.

Or do climate scientists only bother about the output from their models?

About 10 years ago I spent quite a lot of time looking through all available data in preparing my lectures on the energy economy. I had already written reports on fossil fuel resources for the Finnish Ministry of Trade and Industry and read what was available at that time. One of the sources that I found very useful was the US Energy Information Administration. They had just published, in 2000, a study on long-term world oil supply. Its main conclusions are presented in these slides

Slide 9 is interesting. At first sight it appears to show a change in the estimate of resources after 40 years of no change, but looking more closely, the whole change is in the estimate of the recoverable share, as explained on slide 10. Slides 14-19 present a set of overly simplistic scenarios, which do, however, give a good basis for drawing rough conclusions. Making more realistic assumptions on the rates of increase and decline, and rounding off the top, the conclusion is that a decline is not many decades in the future.

More data has become available since, and the OECD/IEA has, under the leadership of Birol, improved its analyses and publications, which are unfortunately mostly not free. The extensive analyses show that maintaining the required rate of investment is very difficult and costly. That explains the otherwise very strange situation that the oil price has remained high in spite of poor economic development. With stronger growth the price would surely be much higher. Alternatives like the tar sands of Canada cannot fill the gap.

Oil is the easiest case to analyze, but the situation is not all that different for natural gas. The problems are coming somewhat later, but they are coming.

Coal and oil shale are more plentiful, but most of their total resources are very difficult to recover.

Many different approaches have been used in studying the likely future of fossil fuel supply. One is extrapolation of past production. The curves on slides 11 and 12 are examples of that. I don’t like that approach, as production ultimately depends on demand as long as there are no strict limitations. Demand may grow more slowly for various reasons and thus lead that approach to wrong conclusions. A better way is to study the resource base and required investments in a dynamic setting, as the IEA has done. My thinking is built mostly on what has been published on that basis.

The increase was 12% (not 10%, as I stated earlier, based on another estimate for year 2011).

Either way, an estimate of an additional 30% increase in the global per capita CO2 emission by 2100 seems reasonable to me on a business as usual basis. The UN estimates a population of 10.1 billion by 2100. This gets us to an annual emission from all sources of around 66 GtCO2 per year and around 650 ppmv atmospheric CO2 by 2100 (an increase of around 255 ppmv).

The highest per capita emitter was the USA at around 17 tons but this is down by around 20% from the 1980 average of 21 tons. Other “industrialized nations” have also reduced per capita CO2 emissions. The total for all industrialized nations today (incl. USA) is around 10 tons CO2 per capita, down from 12 tons in 1980.

Non-industrialized nations (including developing giants like China and India) have increased their per capita CO2 significantly, from 2.3 tons in 1980 to 3.8 tons today.

To get to 1100 ppmv CO2 (IPCC RCP8.5 estimate) means an increase of 705 ppmv over today. This would require that annual CO2 emissions from all sources reach 100 Gt by 2100, and that every man, woman and child on this planet emit as much CO2 as inhabitants of the “industrialized nations” do today. This would mean an increase in global per capita CO2 emissions of 220%!
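Both the ~66 Gt and the ~100 Gt figures above are easy to re-derive. A rough sketch, where today's ~35 GtCO2/yr baseline, the ~5 t global per-capita figure, the 7.8 GtCO2-per-ppmv conversion, the linear ramp and the ~50% airborne fraction are my assumptions, not numbers from the comment:

```python
# Rough check of the business-as-usual arithmetic: ~30% growth in global
# per-capita emissions and a UN population of 10.1 billion by 2100.
pop_2100 = 10.1e9          # persons (UN estimate cited above)
per_capita_2100 = 6.5      # t CO2/person/yr, ~30% above today's ~5 t (assumed)

annual_2100 = pop_2100 * per_capita_2100 / 1e9   # Gt CO2/yr
print(round(annual_2100))                        # ~66 Gt CO2/yr, as stated

# Cumulative emissions on an assumed linear ramp from ~35 Gt/yr today to 2100,
# with ~7.8 Gt CO2 per ppmv and an assumed ~50% airborne fraction.
years = 2100 - 2013
cumulative = (35.0 + annual_2100) / 2 * years    # Gt CO2
delta_ppmv = cumulative * 0.5 / 7.8
print(round(delta_ppmv))       # ~280 ppmv, the same ballpark as the ~255 quoted

# RCP8.5-style check: everyone at today's "industrialized" ~10 t/person
print(round(pop_2100 * 10 / 1e9))                # ~101 Gt CO2/yr, i.e. ~100 Gt
```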

This does not seem very likely to me, for several reasons:
– Industrialized nations are reducing their average per capita CO2 emissions today, and will undoubtedly continue doing so
– The “underdeveloped” nations in Africa, etc. are very unlikely to reach comparable industrial development and per capita CO2 emissions
– Developing giants, like China and India, are already beginning to switch to alternate energy sources, especially nuclear
– Fossil fuel resources are limited: according to an estimate by WEC (2010), the inferred total recoverable fossil fuel resources remaining on our planet represent around 85% of all the resources that were ever there (i.e. we had used 15% of the original total by 2008); the remaining fossil fuels constrain the maximum CO2 we could possibly generate from fossil fuels to around 980 ppmv
– As fossil fuels become scarcer over the course of this century, more difficult to produce and, hence, costlier, new technologies, which will be economically competitive, will undoubtedly emerge, replacing fossil fuels

So I do not share your opinion that we will ever reach CO2 levels exceeding 1000 ppmv.

But, hey, if you want to believe this, go right ahead. It’s a free world.

Well, let’s see. The oceans are getting warmer, there is an accelerating loss of ice mass, my grown children have never lived in a month with below average temperature, Hadley cells are expanding poleward and so are climate zones, and this list goes on. If you believe in conservation of energy, the earth is receiving more than it is sending out.

ARGO misses measurement of more than half the ocean. The buoys don’t dive beneath ocean that is subject to freezing over, they do not dive beyond the average depth of the ocean, and they do not dive over the continental shelves. Your statement is in fact a speculation based on incomplete data. Adding insult to injury, ARGO initially found OHC decreasing in the covered volume, which initiated a pencil-whipping session that changed the polarity of the tiny signal.

“there is an accelerating loss of ice mass”

Not really. Southern hemisphere ice is growing. Reliable data is not available for very far into the past.

“my grown children have never lived in a month with below average temperature”

Anecdotal. They probably live on land, in or near a city, and have experienced the growth of urban heat islands.

“Hadley cells are expanding poleward and so are climate zones”

Only since the late 1970s. Few if any argue that the warm half of the 60-year Atlantic Multidecadal Oscillation, which ran from ~1970-2000, does not influence climate on the same decadal timescale. We don’t know whether the cold side of the cycle is equal and opposite in magnitude to the warm side.

“and this list goes on”

Please continue it then to see if there’s something on the list that can’t be explained in terms of natural variation.

“If you believe in conservation of energy, the earth is receiving more than it is sending out.”

If you believe in the greenhouse effect, this should have resulted in a runaway greenhouse millions of years ago, when CO2 levels were as much as 10 times greater than today. There is almost certainly something at play which sets a ceiling temperature for the earth. Less certain, but still highly probable IMO, is that the ceiling is established by negative feedback from clouds. When the ocean is mostly low-albedo liquid, it can only get so warm before it generates enough high-albedo clouds to starve it of shortwave heating, and a temperature cap is established. Empirically (ARGO) the cap appears to be about 30C. Where ocean temperature goes, land temperatures follow, constrained within a larger range around the SST value depending on how far inland (continentality). The arrangement of the continents, and the ocean currents thus formed, plays a crucial role. It would appear a greenhouse can run away only as far as the poles becoming temperate zones, with no change in the equatorial zone other than its expanding toward the poles.

During the majority of the earth’s history there were no polar ice caps. The planet was green from pole to pole. This is the most bountiful configuration for productivity of the biosphere. People who think ice is good and conducive to life on this planet aren’t playing with a full deck.

@DS: During the majority of the earth’s history there were no polar ice caps. The planet was green from pole to pole. This is the most bountiful configuration for productivity of the biosphere. People who think ice is good and conducive to life on this planet aren’t playing with a full deck.

Maybe warmth is conducive to life, but how about to big brains?

Life on Earth had a billion years to evolve big brains. Don’t you find it a little suspicious that we primates didn’t evolve seriously large brains until the last 1% of that period when Earth got really cold?

Natural selection is at its most productive in the face of deadly adversity. When deadly cold sets in, more deadly than the nuisance of saber-tooth tigers (velociraptors were gone long before the Eocene Optimum), bigger brains are better equipped to come up on short notice with anti-cold measures.

I take your use of all caps to mean that I’ve hit near the heart of an emotionally sensitive subject for you.

Aside from that, your assumption is easily proven wrong. For instance, the heat content of the ocean is large, and it takes hundreds, if not thousands, of years for ocean circulation to produce a new equilibrium.

The Heat Content of the Oceans is huge and so you can never push them out of equilibrium.

According to NOAA, the Ocean Heat Content was tiny in 1985, in fact zero; see the graph here.

Presumably you meant “heat capacity.” At 5.6E24 J/K for the oceans, this is indeed huge compared, say, to a bathtub of water with heat capacity 240 kJ/K (2.4E5 J/K). To warm such a bathtub one degree would take 240 kJ of energy. A 1 kW water heater could do that in 4 minutes.

To warm the oceans one degree would take 5.6 yottajoules (5.6E24 J). That’s a lotta joules.
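The bathtub comparison is easy to verify; a quick sketch (the ~60 kg of water in a bathtub is my assumption):

```python
# Verify the bathtub arithmetic: heat capacity, and time for a 1 kW heater
# to raise the water one degree. A bathtub of ~60 kg of water is assumed.
SPECIFIC_HEAT_WATER = 4186.0                   # J/(kg K)
bathtub_J_per_K = 60.0 * SPECIFIC_HEAT_WATER   # ~2.5e5 J/K, i.e. ~250 kJ/K
seconds = bathtub_J_per_K / 1000.0             # 1 kW delivers 1000 J/s
print(round(seconds / 60, 1))                  # ~4.2 minutes, i.e. "4 minutes"

OCEAN_J_PER_K = 5.6e24                         # ocean heat capacity quoted above
print(OCEAN_J_PER_K / bathtub_J_per_K)         # ~2.2e19 bathtubs per ocean-degree
```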

Now who is the “you” in your “you can never”? If Chris G. then it’s understandable. But why should a huge and very hot object like the Sun be unable to shift the heat content of the oceans significantly?

Fortunately Earth radiates more or less all of the incoming shortwave energy from the Sun back to space as longwave energy. If it didn’t the oceans would boil dry in a few centuries.

But if you put anything in the path of the outgoing longwave radiation or OLR, trouble can brew.

And so can the oceans. We now have the technology to brew the oceans, namely a massive injection of CO2. What a neat experiment!

Kim is correct that the Earth is currently cooling. As people keep pointing out, the past decade has cooled, as can be seen from the trend line for 2001-2010 in this plot since 2001.

Reading between the lines of Kim’s poetry, he seems to be expecting that the decade 2011-2020 will also cool.

But as you can see from the trend line for 2011-2014 in the same graph, the coming decade 2011-2020 is off to a great start by warming at a rate of 4.5 °C/century.

While I imagine Kim is expecting this rapid rise in temperature to turn around real soon now, the past century of global temperature makes this extremely unlikely. As can be seen from the trend lines here and here, every 20 years the Earth spends 10 years warming and 10 years cooling. In particular every odd decade has warmed relative to the even decade on each side.

There is no reason to expect a break in that long-standing pattern other than pure wishful thinking. And who better than a poet to articulate wishful thinking?

Not to worry. It will turn around again in 2021 or so, giving climate skeptics their next opportunity to point out how cold it’s getting. The current opportunity is merely getting old rather than cold.

Sorry, Max, but you’re making my point for me, namely that the most recent even-numbered decade has declined relative to the odd-numbered decades on each side. This has been an unbroken pattern for well over a century now. The period 2003-now that you selected shows a decline because most of it is in the even-numbered decade.

The solar cycle is between 10 and 12 years. Over the past century it’s been around 10.3 years and so is drifting later each decade, losing one year every three decades or so. Currently the temperature turnaround seems to be happening around 2011 or 2012. Since 2011 shows a rise I’d say closer to 2011 than 2012.

Did Vaughan Pratt seriously just argue there’s a pattern so we shouldn’t expect the pattern to be broken? That’s all sorts of nonsense.

If one wants to argue the pattern should be expected to continue, one needs to look at why the pattern happens (and why it isn’t blatant cherry-picking). Along those lines, I remember discussing this same topic on this site before. Someone insisted the odds of the sign “flipping” between two periods was 50-50 if there isn’t an underlying cycle when it obviously isn’t. I came up with a simple demonstration proving them wrong, but I don’t know if anything came from it.

Odd/even cycles are used to describe the solar attenuation of GCRs by the magnetic fields.

Figure 20 also shows that the shapes of the cosmic ray maxima at sunspot cycle minima are different for the even and odd numbered cycles. The cosmic ray maxima (as measured by the neutron monitors) are sharply peaked at the sunspot cycle minima leading up to even numbered cycles and broadly peaked prior to odd numbered sunspot cycles. This behavior is accounted for in the transport models for galactic cosmic rays in the heliosphere

@BS: Did Vaughan Pratt seriously just argue there’s a pattern so we shouldn’t expect the pattern to be broken? That’s all sorts of nonsense.

You have a problem with Laplace’s Rule of Succession, Brandon? This is the rule that says that if n trials lead to m successes, the unbiased probability of a success at the next trial is (m+1)/(n+2). With zero trials the unbiased probability of a success at the first trial is 1/2 (unbiased as when tossing a fair coin). With 10 trials all successful the next trial will succeed with probability 11/12.
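The rule as stated can be written down directly; a minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(m, n):
    """Laplace's Rule of Succession: after m successes in n independent
    trials, with a uniform prior on the unknown success probability,
    the expected probability of success on the next trial is (m+1)/(n+2)."""
    return Fraction(m + 1, n + 2)

print(rule_of_succession(0, 0))    # 1/2 - no data, fair-coin expectation
print(rule_of_succession(10, 10))  # 11/12 - ten successes in ten trials
```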

But that’s if there’s no physical explanation. If the pattern is well correlated with the solar cycle for example the probability of further successes naturally becomes higher, since we’ve reliably recorded 24 cycles since the 17th century and have anecdotal evidence that the solar cycle is far older.

However you misrepresent my argument. I’m not predicting a bump in 2000; I’m merely pointing out that it happened right on schedule given all the previous bumps. These can be exhibited simply by bandpassing HadCRUT with a 20-year bandpass filter that rejects the considerable amount of noise in HadCRUT at all the other periods on either side of 20 years.
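A 20-year bandpass of this general sort can be sketched with nothing more than two moving averages. This is only an illustration of the technique, not the filter actually used on HadCRUT; the window lengths and test signals are my choices:

```python
import numpy as np

def bandpass_20yr(x):
    # Difference of two centred moving averages: the 10-yr mean keeps
    # periods longer than ~10 yr, the 30-yr mean keeps periods longer
    # than ~30 yr; subtracting passes roughly the 10-30 yr band.
    short = np.convolve(x, np.ones(10) / 10, mode="same")
    long_ = np.convolve(x, np.ones(30) / 30, mode="same")
    return short - long_

t = np.arange(300.0)                     # 300 years of annual samples
cycle20 = np.sin(2 * np.pi * t / 20)     # in-band: a 20-year cycle
noise3 = np.sin(2 * np.pi * t / 3)       # out-of-band "noise"

out = bandpass_20yr(cycle20 + noise3)
# Away from the edges, out is dominated by the 20-year component.
```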

Herman Alexander Pope: NO, EARTH IS ALWAYS IN OR VERY NEAR EQUILIBRIUM.

The Earth is never “in” equilibrium. How “near” to equilibrium depends on how you define distance, and what distance qualifies as “near” or “very near”. If 288K is the current equilibrium value, most places on Earth are within 10% of equilibrium most of the time. The deep ocean, for example, at about 280K, is only 8/288 away from equilibrium. Missouri summer daytime and polar winter nighttime are much farther than that.

Vaughan is describing, not prescribing. He is simply describing the even and odd decadal shifts. Scafetta has tried to pin this down with his orbital parameters. I have added these to the CSALT model because it is straightforward to add cyclic waveforms.

Note that I have a Pratt 12-9-7 filter on the GISS data waveform and the CSALT model captures all the intricate fluctuations. Very little subjective fitting going on here, this is all based on data, and the free energy variational algorithm determines the assignment of the factors.

Vaughan said:

“But that’s if there’s no physical explanation. If the pattern is well correlated with the solar cycle for example the probability of further successes naturally becomes higher, since we’ve reliably recorded 24 cycles since the 17th century and have anecdotal evidence that the solar cycle is far older.”

Gotta listen to Vaughan, as he is an exemplar of someone who is open to new ideas but is a steadfast believer in logic. Gravity is one of the 4 known forces and solar EM is another, and these cycles do exchange energy with the earth and so are candidates for a free energy variational approach.

Forgive me for being more blunt than usual, but I was at a charity dart tournament tonight, and there was a fair amount of alcohol consumed. With that in mind, here is my answer:

No. I have a problem with idiots promoting that rule when it doesn’t come close to being applicable. I have a problem with morons pretending that rule is some simplistic tool that can be used without any consideration for the assumptions the rule is predicated upon. In short, I have no problem with that rule; I have a problem with the people so lazy they don’t bother to understand what the rule is – generally while seeking to apply it. To wit:

This is the rule that says that if n trials lead to m successes, the unbiased probability of a success at the next trial is (m+1)/(n+2). With zero trials the unbiased probability of a success at the first trial is 1/2 (unbiased as when tossing a fair coin). With 10 trials all successful the next trial will succeed with probability 11/12.

That is not what the rule says. You are ignoring a fundamental aspect of the rule. The rule says, quite explicitly, each iteration of the experiment must be independent of all prior iterations. If that is violated, the rule is inapplicable. Given I’ve previously shown (on this very site) there is no reason to make an assumption of such independence, it is fair to reject that assumption. As such, it is fair to dismiss the application of the rule in its entirety.

It is incumbent upon you to demonstrate the appropriateness of the assumptions for the rule you seek to apply prior to applying that rule. A failure to do so is a failure to make a sensible argument. In fact, it is a failure to make a coherent argument. It is nothing more than making an argument by waving your hands and shouting, “I’m right!”

However you misrepresent my argument. I’m not predicting a bump in 2000, I’m merely pointing out that it happened right on schedule given all the previous bumps.

Continuing with my theme of bluntness… bull. I never represented your argument as “predicting a bump in 2000.” You’re just making things up in order to claim I’ve misrepresented you… in the same post you (mockingly?) asked me if I had a problem with a rule you misrepresented.

We now have the technology to brew the oceans, namely a massive injection of CO2.

Oops!

My BS meter just pegged on that one!

There is not enough inferred possible recoverable fossil fuel left on Planet Earth (WEC 2010) to raise CO2 levels high enough to warm the ocean by more than around 0.03C – even if IPCC’s arguably exaggerated 2xCO2 ECS of 3C is correct and all remaining fossil fuels are totally consumed.

Max, solar cycles are 11 years, and the pattern only has a couple of cycles recorded. If the long/short GCR peak flux were randomly distributed, the observed pattern has about a 25% chance of appearing. The first cycle can’t really be categorized and neither can the last, because they’re incomplete. This is about as dependable as flipping a coin four times, getting head-tail-head-tail, and presuming you’re going to get a head on the fifth flip. Duh.

@BS: The rule says, quite explicitly, each iteration of the experiment must be independent of all prior iterations. If that is violated, the rule is inapplicable.

Certainly, but the more likely situation in which that condition is not met favors a larger expectation than (m+1)/(n+2), not a smaller one.

One way to put this is to say that the strong rule (with the condition you mention) is that the expectation equals (m+1)/(n+2) while the weak version (omitting that condition) only says that the expectation exceeds (m+1)/(n+2). This is because violation of independence typically entails a positive correlation more often than a negative one.

Were it the other way round the weak version would instead say that the expectation is less than (m+1)/(n+2).

If you know in advance which way the correlation is likely to go then you can pick the appropriate version of the weak rule.

In this case it seems to me that a very strongly positive correlation is most likely, close to 1 in fact!
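Whether dependence pushes the conditional expectation above or below (m+1)/(n+2) can be probed numerically. Here is a toy persistence model of my own choosing, not anything from the thread: each trial repeats the previous outcome with probability r, and is otherwise a fresh Bernoulli(p) draw with p uniform on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def next_success_prob(r, n_sims=200_000):
    """Monte Carlo estimate of P(trial 11 succeeds | trials 1-10 all
    succeeded) under the persistence model: each trial copies the previous
    outcome with probability r, else is a fresh Bernoulli(p) draw."""
    p = rng.uniform(size=n_sims)
    outcome = rng.uniform(size=n_sims) < p      # trial 1
    all_ok = outcome.copy()
    for _ in range(9):                          # trials 2-10
        copy = rng.uniform(size=n_sims) < r
        fresh = rng.uniform(size=n_sims) < p
        outcome = np.where(copy, outcome, fresh)
        all_ok &= outcome
    copy = rng.uniform(size=n_sims) < r         # trial 11
    fresh = rng.uniform(size=n_sims) < p
    nxt = np.where(copy, outcome, fresh)
    return nxt[all_ok].mean()

print(next_success_prob(0.0))   # independent trials: ~11/12 = 0.917
print(next_success_prob(0.5))   # positive persistence: noticeably above 11/12
```

Under this particular model, positive persistence does raise the conditional expectation above 11/12; other dependence structures need not behave the same way.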

Certainly, but the more likely situation in which that condition is not met favors a larger expectation than (m+1)/(n+2), not a smaller one.

One way to put this is to say that the strong rule (with the condition you mention) is that the expectation equals (m+1)/(n+2) while the weak version (omitting that condition) only says that the expectation exceeds (m+1)/(n+2). This is because violation of independence typically entails a positive correlation more often than a negative one.

There are an infinite number of cases. I am at a loss as to how you could possibly know what a violation of independence “typically entails.” I certainly don’t see why you would view it as so obvious that you can simply state it as fact.

I especially don’t know why you think it’s that obvious when I don’t even know why you’re discussing the issue of positive vs. negative correlation. There’s no reason one would be forced to pick between the two. There are a multitude of other possibilities. You seem to have just hand-waved them all away so you could hand-wave your way into a position on a false dichotomy.

On top of that, you’ve suggested using prior knowledge to influence the results but only knowledge which promotes your conclusions. You’ve made no effort to give a fair or balanced examination of our prior knowledge, missing at least one glaringly obvious contrary example (I’m curious if you can guess which it is). That is, you’ve blatantly cherry-picked evidence to promote your results.

My apologies for the misunderstanding.

I find it interesting you apologize for “misunderstanding” me. The complaint I raised was one of misrepresentation. A misunderstanding may be a component in that, but it is only one component. It’s especially interesting to me given you tacitly acknowledged misrepresenting Laplace’s Rule of Succession (after suggesting I don’t accept it) yet made no effort to address that misrepresentation either. Behavior like that tends to get in the way of reasonable discussions.

Ultimately, you’ve offered no more basis for your argument now than you had prior to my initial comment. As such, I’m just going to repeat what I said before:

Did Vaughan Pratt seriously just argue there’s a pattern so we shouldn’t expect the pattern to be broken? That’s all sorts of nonsense.

If one wants to argue the pattern should be expected to continue, one needs to look at why the pattern happens (and why it isn’t blatant cherry-picking).

Schrodinger’s Cat > If this is all old stuff and well known, is there a deficit or not?

Chris G > Well, let’s see. The oceans are getting warmer, there is an accelerating loss of ice mass, my grown children have never lived in a month with below average temperature, Hadley cells are expanding poleward and so are climate zones, and this list goes on. If you believe in conservation of energy, the earth is receiving more than it is sending out.

Ducks the question, which was about the measured radiative balance, not the effects you allege are a consequence of an alleged positive radiative balance.

Even if it turns out the oceans are indeed warming – and we are very far indeed from knowing that – if the radiative balance were negative, that would mean ocean warming et al. are completely irrelevant to CO2 / AGW.

@BS: If one wants to argue the pattern should be expected to continue, one needs to look at why the pattern happens (and why it isn’t blatant cherry-picking).

You’ve expressed the opinion that I’m lazy, an idiot, and a moron. It follows that any argument I could present to you, however perfectly logical, would be dismissed by you as the conclusions of a lazy, idiotic moron, and that any resemblance to logical precision would be entirely accidental.

Attempting to convince you of anything would therefore be a complete waste of both your time and mine.

You’ve expressed the opinion that I’m lazy, an idiot, and a moron. It follows that any argument I could present to you, however perfectly logical, would be dismissed by you as the conclusions of a lazy, idiotic moron, and that any resemblance to logical precision would be entirely accidental.

Attempting to convince you of anything would therefore be a complete waste of both your time and mine.

It’s interesting you never said a word about me portraying you as lazy, an idiot and a moron, yet you now cite it as a reason not to continue the discussion. If it truly were such a reason, the conversation should have ended prior to now. The fact it didn’t shows you’re being selective with your arguments. It’s cheeky to not say a word about certain contents of a comment when you respond to it yet later claim those contents prove you shouldn’t respond.

A fair reading of this exchange is this: I insulted you for saying something stupid. You responded by tacitly acknowledging what you said was stupid. I then discussed additional points you raised. You responded by saying I insulted you, so further discussion is pointless.

That’s silly. What’s next? Are you going to randomly stop a discussion with someone because they insulted you three months ago? And what about the people you routinely insult? Does the fact you insult them mean further conversation is pointless?

More importantly, what are you smoking? Your argument here is a total non sequitur. There is nothing about thinking someone is lazy, an idiot or a moron that would mean I must dismiss all their conclusions out of hand. There is nothing about it that would indicate I think “any resemblance to logical precision [from them] would be entirely accidental.” Lazy, idiotic morons are still capable of making perfectly sound arguments.

Maybe you dismiss what people say out of hand if you hold certain negative views of them, but I don’t. I’ve been convinced I’m wrong by people who admitted they are stupid. Heck, the most informative conversations are often those I have with people of lesser intelligence as getting them to understand/agree with me often requires far more structured thought on my part.

The only way your comment supports your conclusion is if we decide you making things up and misrepresenting me proves further discussion would be a complete waste of time. That’d be an interesting argument to advance.

Not at all. I was extending you the opportunity of getting the reasoning back onto a reasonable track. Instead you rejected it and continued your vitriolic diatribe. At that point I bailed.

There was nothing vitriolic about the comment in question. Every critical remark of you in it was founded in a clear argument and made without any special harshness. In other words, it did exactly what you claim you were giving me the opportunity to do.

I don’t see how doing exactly what you claim you wanted me to do is rejecting the opportunity to do it. I certainly don’t see it explaining why you waited to make an issue of how I portrayed you until after I stopped portraying you that way.

That said, what’s more interesting to me is you simply ignored my rebuttal of the logic of your argument. Even if one believes your portrayal of this exchange, it doesn’t follow that further discussion would be pointless. The only way you can advance that argument is by falsely claiming I’d reject anything you conclude, no matter how logical, out of hand because of my opinion of you. That’s a massive misrepresentation of me, and it’s actually rather offensive.

If you want to leave the discussion you can. However, the record will show the reason you’ve given for doing so is based entirely upon a flagrant and illogical misrepresentation of me you’ve been informed of.

I didn’t know you had Attention Deficit Disorder, David. I pointed this out a year ago, and Loehle and Scafetta pointed it out in an interesting 2011 paper that you appear to have either not read or forgotten all about. They have a 20-year cycle that is exactly what you’re referring to as “numerology.”

@BS: the latter will show you being more snide, caustic and bitter than I was in the comment in question.

You barged in here with “Did Vaughan Pratt seriously just argue there’s a pattern so we shouldn’t expect the pattern to be broken? That’s all sorts of nonsense.”

If I’d been barging into your discussions with all sorts of nonsense the way Max Manacker does habitually with mine, I would consider that reasonable. But I do not pester you the way Max pesters me. Your insulting “that’s all sorts of nonsense” was not in response to anything I’d said to you, you just barged in here spontaneously with that little outburst. Hence any comparison with my slowly accumulating impatience with Max has no relevance here.

If you don’t think “that’s all sorts of nonsense” is an insult then you have no conception of civil discourse.

I could continue like this with a great many such insults you’ve showered on me. However in the interests of being constructive let’s focus on your more technical remarks, starting at the very beginning.

@BS: I’ve previously shown (on this very site) there is no reason to make an assumption of such independence.

You seem to have the Laplace Rule of Succession backwards here. Laplace understood very well that we had independent reasons to expect the Sun to rise tomorrow. His point however was that if the only thing we knew was that the Sun had risen 10 times in a row, and absolutely nothing else, what then should our expectation be of its rising tomorrow?

Your argument only makes sense if we have other knowledge. You claim that we do, yet you refuse to say what that knowledge is, merely claiming that it is incumbent on others to produce that knowledge.

On that basis I’d say that it is you that doesn’t understand the Laplace Rule of Succession. Those claiming there is other knowledge are the ones on whom falls the obligation to produce that other knowledge. Those lacking such knowledge are fully justified in applying Laplace’s rule to obtain the best estimate given their state of knowledge.

If you have knowledge they don’t then you’re guilty of insider trading.
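Laplace's rule itself is simple enough to state concretely: after s successes in n trials, and knowing nothing else (a uniform prior over the success probability), the best estimate of success on the next trial is (s+1)/(n+2). A minimal, purely illustrative sketch:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's Rule of Succession: posterior probability that the
    next trial succeeds, given `successes` out of `trials`, assuming
    a uniform prior and no other knowledge."""
    return Fraction(successes + 1, trials + 2)

# Laplace's example: the Sun has risen 10 times in a row and we
# know absolutely nothing else about it.
p = rule_of_succession(10, 10)
print(p, float(p))  # 11/12, about 0.917
```

With other knowledge in hand the uniform prior no longer applies, which is exactly the point under dispute above.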

“Since many processes do take place at constant pressure, or approximately at atmospheric pressure, the enthalpy is therefore sometimes given the misleading name of ‘heat content’. It is sometimes also called the heat function.”

The article on enthalpy expands on this as follows.

“Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH”
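The origin of the misleading "heat content" name can be made explicit with the constant-pressure case:

```latex
% First law for a closed system doing only pV work, at constant pressure:
\Delta U = q_p - p\,\Delta V
% With H = U + pV, a change at constant pressure gives
\Delta H = \Delta U + p\,\Delta V = q_p
% so the heat absorbed, q_p, equals the change in enthalpy -- hence the
% loose name, and hence why only changes \Delta H are ever measured.
```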

@max: There is not enough inferred possible recoverable fossil fuel left on Planet Earth (WEC 2010)

Since the 2010 estimate of recoverables was a lot higher than two decades ago, why are you assuming that all of a sudden it will stop increasing?

If that were the case there’d be no further exploration needed for new reserves of coal, oil, and natural gas.

This is similar to your argument based on rising per capita emissions suddenly stopping rising.

You’re also neglecting CO2 from biofuels. What’s your upper bound on that? Biofuels aren’t popular today, but that will change if and when your imagined scenario of exhaustion of fossil fuels ever becomes a reality.

to raise CO2 levels high enough to warm the ocean by more than around 0.03C – even if IPCC’s arguably exaggerated 2xCO2 ECS of 3C is correct and all remaining fossil fuels are totally consumed.

Citation needed. Suppose CO2 were to stop today at 400 ppmv and grow no further. Do you believe that the oceans would immediately cease warming?

The oceans have a huge heat capacity, which means that the additional wattage of modern global warming can only heat them very slowly. The slowness of the heating effect is not a reason to assume ocean warming will stop immediately if CO2 stops rising, quite the opposite in fact.

From a policy perspective, an expected shortage of fossil fuels should lead to many of the same choices as mitigation of climate change. It’s quite possible that problems in the availability of fuels will be more powerful in driving change than attempts to agree on climate policies.

The significance of the limits of availability of fossil fuels is a point on which I agree largely with Max and WHT. Max seems to take that as a reassuring observation, while WHT and I conclude that it strengthens the case for near-term action.

So all three of you believe that the past history of steadily rising reserves thanks to efficient exploration programs is on the verge of coming to an abrupt end, albeit with opposite implications?

It’s almost like some people may have examined the information and considered why a change in pattern might happen rather than merely assuming the pattern will continue indefinitely. That’d be shocking if it weren’t abundantly reasonable.

Interestingly, I disagree with all three of those people on the reason they think the pattern will change. I think the pattern is likely to continue (albeit, in a reduced form). However, I don’t think they deserve to be mocked for their beliefs, and I can even articulate the reasoning behind those beliefs.

But please, provide us more derogatory remarks. I’m sure that will convince everyone you’re right to not even attempt to rebut views which have been subject to a great deal of discussion. Just make sure you don’t do what I did – don’t examine the issue in great detail. If you do, you might be asked to co-author a post on this site about the issue rather than be expected to just mock people.

That sort of thing is horrible. Who could imagine having reasonable discussions about points of disagreement?

Your last post raised several separate but related points, so I will address each one separately.

@max: There is not enough inferred possible recoverable fossil fuel left on Planet Earth (WEC 2010)

Since the 2010 estimate of recoverables was a lot higher than two decades ago, why are you assuming that all of a sudden it will stop increasing?

I am not assuming anything, Vaughan. I have just taken the most recent comprehensive report on world fossil fuels (WEC 2010), which gives an estimate of a) “proven fossil fuel reserves” and b) “inferred possible recoverable fossil fuel resources” on our planet in 2008.

There are several estimates of the first figure, and they all agree fairly closely. The second, much larger figure is more uncertain: these are potential resources that have not yet been “discovered” by actual exploration work, but are “inferred” to exist and be recoverable, based on various estimates. Most estimates of remaining fossil fuel reserves are much more pessimistic, with alarming predictions of impending “peak oil” or “peak fossil fuels” in some cases. (Ask Web.)

The WEC estimate represents 85% of all the recoverable fossil fuels that were ever on our planet, i.e. we have used up 15% of the original total and 85% is still available for use. The study also infers that we have enough fossil fuels to last us 150-200 years at future consumption rates, so “peak fossil fuels” is a long way off. I have seen several more pessimistic estimates, but no comprehensive estimate more optimistic than this one.

If this estimate is correct, then the maximum CO2 level from human fossil fuel combustion would be

If that were the case there’d be no further exploration needed for new reserves of coal, oil, and natural gas.

Wrong, Vaughan. It takes exploration and development work to convert “inferred possible recoverable” resources to “proven reserves” (if you’re lucky – sometimes you just end up with a “dry hole”).

This is similar to your argument based on rising per capita emissions suddenly stopping rising.

Wrong, again, Vaughan. I have made no such assumption. I have looked at the past data (CDIAC and US Census Bureau) and seen that the per capita CO2 generation increased by 10% from 1970 to today. On this basis, I have assumed that the per capita emissions would continue to rise, by an additional 30% by 2100.

You’re also neglecting CO2 from biofuels. What’s your upper bound on that? Biofuels aren’t popular today, but that will change if and when your imagined scenario of exhaustion of fossil fuels ever becomes a reality.

Biofuels are carbon neutral, Vaughan. It takes as much CO2 to create them as they release when they are combusted. So they have no net impact on atmospheric CO2 concentrations.

to raise CO2 levels high enough to warm the ocean by more than around 0.03C – even if IPCC’s arguably exaggerated 2xCO2 ECS of 3C is correct and all remaining fossil fuels are totally consumed.

Citation needed. Suppose CO2 were to stop today at 400 ppmv and grow no further. Do you believe that the oceans would immediately cease warming?
The oceans have a huge heat capacity, which means that the additional wattage of modern global warming can only heat them very slowly. The slowness of the heating effect is not a reason to assume ocean warming will stop immediately if CO2 stops rising, quite the opposite in fact.

Using 980 ppmv as the upper limit constrained by fossil fuel availability, and the IPCC estimate of 2xCO2 radiative forcing multiplied by IPCC’s factor of around 2.5 to include the impact of postulated net positive feedbacks, I would arrive at enough energy to warm the ocean by around 0.03C.

This is a rough estimate. Gimme a better figure, if you don’t like that one.

Hope this has answered all your points to your satisfaction.

Max

PS Pekka, you are right. Fossil fuels are a finite resource. We all agree with Web on that. They will, some day, be replaced (for combustion use) with an economically viable alternate, and will probably be used only for higher added-value end uses (chemicals, fertilizers, pharmaceuticals, etc.). IMO this will undoubtedly occur within the next century, long before they are completely exhausted.

The estimates of resources of fossil fuels originally in place have been rather stable for decades, and the remaining part of them is reduced when the fuels are used.

What has not been going down with consumption is the size of reserves, as resource development brings a new fraction of resources into reserves every year at roughly the same rate that earlier reserves are used. There have been a few major changes in production technology that have helped in that, most of them related in various ways to horizontal drilling. Those changes have made undersea oil fields economic, and they are also an essential factor in the fracking of natural gas. In general terms such technology changes were anticipated in the estimates of ultimately recoverable resources, but fracking of gas has produced positive surprises.

Based on everything we have learned so far, the positive surprises may give a decade or two more time before maintaining the hoped-for level of production becomes too expensive and difficult. We have already seen high oil prices for several years; they are really a sign of the approaching difficulties in maintaining the production rate. It’s more likely that such problems get gradually worse than that they will be resolved by new technology breakthroughs in the production of fossil fuels. Here we are discussing problems that build up over a few decades, i.e. perhaps during the period of positive overall impacts of CO2 increases. Thus resource scarcity may very well bring the first strong reason for reducing the use of fossil fuels.

Climate change becomes a more important factor when we judge the use of lower-quality fossil fuels, like difficult-to-produce coal and oil shale, over much longer periods.

It takes exploration and development work to convert “inferred possible recoverable” resources to “proven reserves” (if you’re lucky – sometimes you just end up with a “dry hole”).

This has been true for over a century, Max. Throughout that time new reserves have continually been brought to light. Please give a more plausible reason why all of a sudden new reserves are no longer going to come to light.

I’ll believe that the rate of discovery of new reserves is going to slow down when there’s a sign of it happening. So far there is no sign. Costs are sky-rocketing, but that’s no impediment to ongoing successful discoveries, because so is demand.

I have looked at the past data (CDIAC and US Census Bureau) and seen that the per capita CO2 generation increased by 10% from 1970 to today.

Surely you’re joking, Mr Manacker. 10% in 42 years? That is nowhere near what all the historical records are saying. I have no idea how you could have got such a ridiculously low figure.

Here’s what the US Census Bureau has to say about population and what CDIAC has to say about CO2 emissions for 1970 and 2012.

If 38.4% over 42 years is sustained for another 87 years (to bring us to 2100), that comes to 1.384^(87/42) = 1.96.

Per capita emissions in 2012 was 10607/7000 = 1.515 tonnes of carbon per capita for that year. Multiplying that by 1.96 gives 2.97. So using those figures, in 2100 the world will emit somewhere around 30 gigatonnes of carbon.
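The arithmetic in those two steps can be checked directly; the 10-billion world population for 2100 is an assumption introduced here only to convert per capita tonnes into a global total:

```python
# 38.4% growth over 42 years (1970-2012), compounded out another
# 87 years to 2100:
growth_factor = 1.384 ** (87 / 42)      # ≈ 1.96

# 2012 per capita emissions: 10607 MtC over 7000 million people.
per_capita_2012 = 10607 / 7000          # ≈ 1.515 tC per person

# Projected 2100 per capita emissions:
per_capita_2100 = per_capita_2012 * growth_factor   # ≈ 2.97

# Assumed world population of 10 billion in 2100:
total_2100_GtC = per_capita_2100 * 10e9 / 1e9
print(round(growth_factor, 2), round(total_2100_GtC, 1))
```

which lands at roughly 30 GtC, matching the figure quoted above.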

This is in the middle of the range of 13 GtC to 50 GtC that I indicated earlier and therefore should not trigger anyone’s BS meter except yours. I suggest you take yours in for recalibration.

@PP: We have already seen high oil prices for several years. They are really a sign of the approaching difficulties in maintaining the production rate. It’s more likely that such problems get gradually worse than that they will be resolved by new technology breakthroughs in production of fossil fuels.

I think I understand your reasoning here, Pekka. However I’ll buy it when I see a genuine slowdown in the rate of discovery of new reserves.

It’s very reasonable to assume that increasing costs will become an obstacle to maintaining the current pace of discoveries when alternative energies become practical. I don’t have any insight into whether we’re within one decade or five of this happening, though I’m happy to be persuaded of any given number.

Surely you’re joking, Mr Manacker. 10% in 42 years? That is nowhere near what all the historical records are saying. I have no idea how you could have got such a ridiculously low figure.

I’m not going to venture into this specific issue, but it’s interesting to look at per capita emissions. Doing so shows population growth tends to lead to decreased per capita emissions (beyond a point). This is related to the fact population growth as a factor is confounded by things like industrialization. As such, when you say:

That’s an increase of 38.4%, vastly more than 10%!

If 38.4% over 42 years is sustained for another 87 years (to bring us to 2100), that comes to 1.384^(87/42) = 1.96.

Per capita emissions in 2012 was 10607/7000 = 1.515 tonnes of carbon per capita for that year. Multiplying that by 1.96 gives 2.97. So using those figures, in 2100 the world will emit somewhere around 30 gigatonnes of carbon.

You’re posting nonsense. An increase in per capita CO2 emissions is determined by both the change in population and the change in CO2 emissions. In effect, you’ve taken two linear trends for values (CO2 emissions and population growth) whose growth is non-constant, arbitrarily picked endpoints for both lines, divided one by the other and created some new value which has little to no basis in anything. You may have an excuse in that manacker suggested much of this analysis, but the reality is this entire “analysis” is over-simplified and has little to no value.

Neither CO2 emissions nor population growth can actually be linearized the way it seems you two are trying to do, and we certainly can’t linearize a combination of the two values. Even if we could, we wouldn’t linearize them by averaging endpoints. That’s not how you calculate trends.

I don’t know exactly what you guys are arguing, but in my experience, you can’t have much of a discussion if both sides use inappropriate analyses. One bad analysis presented in opposition to another bad analysis won’t show much.

About 10 years ago I spent quite a lot of time looking through all available data in preparing my lectures on the energy economy. I had already written reports on fossil fuel resources for the Finnish Ministry of Trade and Industry and read what was available at that time. One of the sources that I found very useful was the US Energy Information Administration, which had just published, in 2000, a study on long-term world oil supply. Its main conclusions are presented in these slides.

Slide 9 is interesting. At first sight it appears to show a change in the estimate of resources after 40 years of no change, but looking more closely the whole change is in the estimate of the recoverable share, as explained on slide 10. Slides 14-19 present a set of overly simplistic scenarios, which do, however, give a good basis for drawing rough conclusions. Making more realistic assumptions on the rates of increase and decline, and rounding off the top, the conclusion is that a decline is not many decades in the future.

More data has become available since, and OECD/IEA has, under the leadership of Birol, improved its analysis and publications, which are unfortunately mostly not free. The extensive analyses show that maintaining the required rate of investment is very difficult and costly. That explains the otherwise very strange situation that the oil price has remained high in spite of the poor economic development. With stronger growth the price would surely be much higher. Alternatives like the tar sands of Canada cannot fill the gap.

Oil is the easiest case to analyze, but the situation is not all that different for natural gas. The problems are coming somewhat later, but they are coming.

Coal and oil shale are more plentiful, but most of their total resources are very difficult to recover.

Many different approaches have been used in studying the likely future of fossil fuel supply. One is extrapolation of past production; the curves on slides 11 and 12 are examples of that. I don’t like that approach, as production ultimately depends on demand as long as there are no strict limitations. Demand may grow more slowly for various reasons and thus lead that approach to wrong conclusions. The better way is to study the resource base and required investments in a dynamic setting, as IEA has done. My thinking is built mostly on what has been published on that basis.

From time to time I encounter some concept or method that I find useless. Unlike you however I don’t go round telling other people that they have to find it useless too merely because I do.

As a rule, it helps to clearly state who or what you are responding to when you respond to it. The most common method of doing so is to provide a quotation to which you’re responding. Failing that, the next most common method is to state which comment you’re referring to. If all that fails, one generally manages to at least state who they’re responding to.

That said, I guess you’re responding to me. It’s hard to tell because you didn’t bother to quote me saying whatever you think I said. I’m not sure what part of my remarks you think said what. You could solve that by quoting my words and stating your interpretation of them. Why you didn’t, I have no idea.

Regardless, I’ll make myself clear. I have never told anyone they must find anything useless. Anyone claiming I have ever done so is just making things up.

Pekka, thanks very much for the pointer to those slides. Given that they were presented nearly 14 years ago, it would be very interesting to see how the key figures in them have changed over that time.

My impression is that forecasts for recoverable reserves of fossil fuels have changed a lot over the last five years. So these April 2000 slides in conjunction with key numbers from their 2013 or better yet 2014 counterpart would be very informative in supporting or refuting my contention that reserves are going to continue to be revised upwards for many decades to come.

The most optimistic (or pessimistic depending on your interpretation) projections of these 2000 slides are the ones I would find most plausible, in particular the 5% recovery level which with R/P = 10 gets us back down to today’s emission levels in 2060.

By that time I would expect interest in biofuels to be picking up, wouldn’t you?

But I would also expect that by that time these estimates for ultimate recoverables will have been substantially revised upwards.

But all that’s just for oil. I don’t know about NG but presumably coal will still be in full swing well after 2060.

The new World Energy Outlook 2013 of IEA gives an estimate of about 3400 billion bbl for the original value of ultimately recoverable resources of conventional crude oil. About 1/3 of that has been used so far. Of the remaining 2200 billion bbl, about 900 billion is classified as known oil; the rest is about half and half reserve growth and undiscovered. The value of 3400 billion bbl is more than the central estimate of 3000 billion presented in 2000 by EIA, but well below the upper limit of 4000 billion bbl.

Both the older EIA numbers and the newer IEA numbers are largely based on work of US Geological Survey. This fact sheet presents some new results from USGS. Other fact sheets of interest include 2012-3050 and 2012-3051.

One question is how much oil can ultimately be recovered; another is how long the production rate can be maintained at a level that does not restrict activities still dependent on oil. IEA does not extend its analysis beyond 2035. Its estimate is that production of conventional crude oil remains essentially at its present level over that period or decreases a little, while other fossil sources (natural gas liquids and unconventional oil) are produced in larger quantities. Presently these sources cover about 20% of the demand; the estimate for 2035 is about one third. (In the scenario of strong climate policies the total supply goes down about 15%.)

HermannAlexanderPope: Earth is the sum of all of its parts. All of its parts has its own equilibrium and ALL of its parts is at or near to its equilibrium.

That is an idiosyncratic use of the word “equilibrium”, and even then still depends on what you mean by “near” equilibrium. Central Missouri, for example, has temperature highs above 100F in the summer daytime, and temperature lows below -10F in the winter nighttime. How is that “near” equilibrium?

That would be true if the land on which the biofuels are grown had previously been desert, concrete, or something that didn’t previously consume CO2. But if the biofuels are grown on land that would otherwise have been used to produce food, lumber, etc. then they are not carbon neutral because replacing foodstock, trees, etc. by plants (sugar cane etc.) used for biofuels does not increase the removal of CO2 from the atmosphere.

So if you’re advocating growing biofuels in deserts, or trashing cities and roads to replace them with biofuel plantations, you’re right. Otherwise you’re wrong.

One point I overlooked at first in the swirl of Brandon’s nags was this insightful criticism.

@BS: Even if we could, we wouldn’t linearize them by averaging endpoints. That’s not how you calculate trends.

This is a reasonable point: I had looked only at 1900 and 2000. Brandon would like the intermediate points to carry weight.

Since there are no datapoints between 1900 and 1950, I redid the trend analysis using the 13 datapoints between 1950 and 2010 inclusive, which are spaced evenly at 5-year intervals. Since per capita fuel consumption is hypothesized to grow exponentially I took its log before fitting a linear trend line. Doing so produced a CAGR of 0.95%, somewhat lower than the 1.3% I’d obtained looking only at 1900 and 2000. Good call, Brandon.

This decreases the 38 GtC forecast for 2100 by that method to 1.31*1.0095^90 = 3.068 tons per person per year in 2100, or around 30 GtC assuming a world population of 10 billion.
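The fit-then-project method described above can be sketched as follows. The 1950-2010 series here is a synthetic stand-in (the actual datapoints are not reproduced in the comment), so only the projection step uses the quoted figures:

```python
import numpy as np

# Fit a straight line to the log of per capita consumption; the slope
# of that line gives the compound annual growth rate (CAGR).
years = np.arange(1950, 2011, 5)             # 13 points at 5-year spacing
synthetic = 1.0 * 1.0095 ** (years - 1950)   # stand-in series growing 0.95%/yr

slope, intercept = np.polyfit(years, np.log(synthetic), 1)
cagr = np.expm1(slope)
print(f"CAGR = {cagr:.2%}")                  # recovers 0.95%

# Projection step with the quoted figures: 1.31 tC/person grown at
# 0.95%/yr for 90 years, times an assumed 10 billion people in 2100.
per_capita_2100 = 1.31 * 1.0095 ** 90        # ≈ 3.068 tC/person
print(round(per_capita_2100 * 10, 1))        # ≈ 30.7 GtC
```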

On the one hand that is somewhat below the 38 GtC figure based only on the 1900 and 2000 datapoints; on the other it is still well within the uncertainty range of 13-50 GtC in 2100 that I’d started with.

While it continues to be unclear to me why Brandon has such a strong negative reaction to the notion of per capita fuel consumption, I’m not going to lose any sleep over it, just typing time.

That would be true if the land on which the biofuels are grown had previously been desert, concrete, or something that didn’t previously consume CO2. But if the biofuels are grown on land that would otherwise have been used to produce food, lumber, etc. then they are not carbon neutral because replacing foodstock, trees, etc. by plants (sugar cane etc.) used for biofuels does not increase the removal of CO2 from the atmosphere.

So if you’re advocating growing biofuels in deserts, or trashing cities and roads to replace them with biofuel plantations, you’re right. Otherwise you’re wrong.

Let’s analyze what you wrote.

Biofuels (by definition) require as much carbon to create as they release when they are burned. Period.

How the biofuels are created is another question.

Growing biofuels does not necessarily mean reducing crop growth (or human food consumption). The US corn ethanol fiasco is no model for biofuel creation. Biofuels from food crops are not the answer.

Nor does it mean destroying forests to do so.

Biofuels could be an answer, however, if produced sustainably.

There are a lot of studies out there on biofuels from biomass: this could be crop residues (rice hulls or corn stalks converted to methane, for example) or other dedicated biofuel crops. Sweden and Finland supply around 20% of their primary energy from biomass.

The cumulative reduction in CO2 generated depends on the rate of growth of the specific crop being used. Some studies have suggested the use of a fast growing crop, such as switch grass.

If developed properly, biomass can and should supply increasing amounts of biopower. In fact, in numerous analyses of how America can transition to a clean energy future, sustainable biomass is a critical renewable resource.

and

Most scientists believe that a wide range of biomass resources are “beneficial” because their use will clearly reduce overall carbon emissions and provide other benefits. Among other resources, beneficial biomass includes
1. energy crops that don’t compete with food crops for land
2. portions of crop residues such as wheat straw or corn stover
3. sustainably-harvested wood and forest residues, and
4. clean municipal and industrial wastes.

And then there is the research work on growing algae as a potential source of biofuel as Diesel replacement.

“We are surrounded by insurmountable opportunities” (as they say) regarding biofuels, Vaughan (and these could result in a net reduction of CO2 generated).

To that I would add (5) salt-tolerant varieties of plants, such as salt-tolerant soybeans and mangrove trees, grown with seawater irrigation on land that is now desert. Only experiment can tell how that would actually work out, but there is a lot of arid land within reasonable distance of the sea, and there are a lot of salt-tolerant species, both native and developed by breeding.

Sustainable biomass growth (let’s take the example of a fast growing crop, such as switch grass) replaces itself, year after year.

Whether or not the land that was used to produce this crop was once used for something else is immaterial in the long run. This effect can only be counted for the first growing season, if at all, and not for all the subsequent ones, where the biofuel gobbles up as much CO2 to grow as it releases as a fuel.

Biofuels from crop waste are even better: this simply collects the fuel and uses it productively as an energy source, rather than allowing the crop waste to turn into CO2 through natural decomposition.

I am not necessarily a big proponent for bio fuels (and particularly not for silly boondoggles like the US “corn for ethanol” fiasco), but they could result in a reduction of CO2 emissions, if properly conceived and managed.

I noted, but didn’t comment, that Vaughan’s idea of what’s carbon neutral and what isn’t is oddly wrong. He seems to be having a series of senior moments. Plants are carbon neutral. Desert is carbon neutral. Don’t matter what was on the land prior to conversion to biofuel crop. The only thing that isn’t strictly carbon neutral is if trees were being grown and lumber harvested to make durable goods like furniture or homes; in that case the carbon in the wood might be delayed ten or a hundred years until the wooden item is discarded, so in the end even trees are carbon neutral.

If someone is either so willfully ignorant or uninformed about what constitutes “carbon neutral,” it’s not likely that any discussion with them will be productive. If Vaughan happens to reply “Oh, you’re right, Max. Silly me.” I’ll take back the non-productive part, but still, do you want to try to teach grade-school environmental basics to a doddering PhD who thinks he knows it all? Not my cup of tea.

The World Energy Outlook 2013 IEA estimate of total original recoverable world oil resource of 3,400 billion bbl (of which 1,200 billion bbl have already been used up) or the 3,900 billion bbl highest USGS survey estimate are both quite a bit lower than the WEC 2010 estimate of “total inferred recoverable oil resources” of 5,100 billion bbl (incl. shale).

So far, WEC 2010 gives the most optimistic estimates I have seen on “total inferred recoverable fossil fuel resources”.

These are the estimates that represent 85% of all the fossil fuels that were ever on our planet (i.e. we have used 15% of that total to date), which I have used to establish the fossil fuel constraint on human CO2 emissions and, hence, atmospheric concentrations.
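Max’s fossil-fuel-constraint reasoning can be sketched as a back-of-envelope calculation. The conversion factor (~2.13 GtC per ppm of CO2) is standard; the remaining-inventory figure and the ~50% airborne fraction below are placeholder assumptions, not numbers taken from WEC 2010:

```python
# Back-of-envelope ceiling on atmospheric CO2 from burning a fossil-carbon
# inventory. GTC_PER_PPM is the standard conversion; the inventory and
# airborne fraction are illustrative assumptions.
GTC_PER_PPM = 2.13        # ~2.13 GtC raises atmospheric CO2 by 1 ppm
AIRBORNE_FRACTION = 0.5   # rough share of emissions that stays airborne

def co2_ceiling(start_ppm, remaining_gtc, airborne=AIRBORNE_FRACTION):
    """CO2 concentration (ppm) if the whole inventory were burned."""
    return start_ppm + airborne * remaining_gtc / GTC_PER_PPM

# e.g. an assumed ~3,000 GtC of remaining recoverable carbon, from 400 ppm:
print(round(co2_ceiling(400, 3000)))  # → 1104 ppm
```

The point of the sketch is only that a finite recoverable inventory puts a hard upper bound on the concentration, whatever the exact resource estimate turns out to be.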

Recoverable is a moving target and subject to a lot of pencil whipping depending on what value is preferred by the target audience. It’s a moving target because as technology improves what wasn’t recoverable before becomes recoverable. There’s many times more fossil fuel beneath the surface than what is currently recoverable. This scares people like WebHubColonoscope who envision things like methane clathrates becoming marginally recoverable at some point. The marginally recoverable stuff will put a lot more CO2 into the air per joule of usable energy than the easily recoverable fuels.

In any case there’s almost certainly a carbon-neutral alternative that will be mature in a few decades at most at cost less than fossil fuels ever were. Synthetic biology is the front runner IMO. It’s in its infancy and already pilot plants are producing fuel at prices competitive with oil. Synthetic biology has cost/performance improvement potential that is like Moore’s Law for Semiconductors. In fact even computers in the not too distant future can be grown instead of manufactured. See here: http://en.wikipedia.org/wiki/DNA_computing It’s really hard to overstate the potential of synthetic biology.

It will be new technology, not a top-down direct or indirect carbon tax, that will eventually lead to the replacement of fossil fuels as the principal source of energy with something that is more economically viable.

Repeating the Biofuels 1.0 meme from the 1980s in boldface does not make it true, Max and David.

A decade ago I attended a meeting of biofuel company CEOs titled “Biofuels 2.0” at one of the regular MIT/Vlab meetings. As befits CEOs and their marketing lieutenants, they were all amazingly upbeat about the immediate future of biofuels.

Noticeably absent from this meeting however was any mention of plants being carbon neutral. The focus had shifted to the 120 petawatts of energy from the Sun that were reaching the surface of the Earth and being converted by plants to easily burnt biofuels. Carbon neutrality apparently had gone out the window. A review of the recent literature (links below) shows why.

Have you ever considered why CO2 plummeted from 6000 ppm to 180-280 ppm over a period of a few hundred million years?

It’s because plants are far from carbon neutral. They consume it voraciously when it’s available.

Today plants are starved of carbon because over that period they’ve used up most of the atmospheric CO2 and have had to learn to get along with a pitiful 180-280 ppm range. Things got so bad for plants that recently they introduced their C4 model designed to economize on carbon. So far it has not superseded the older and less efficient C3 biochemical mechanism, but give it time.

If you put all the carbon a plant has absorbed from the atmosphere back into the atmosphere by burning it, then it is carbon neutral.

Otherwise plants are carbon sinks. So much so that they’ve learned not to consume what little carbon remains, as then photosynthesis would grind to a halt for lack of carbon. This is the only reason why the carbon cycle today is in equilibrium: the plant kingdom needs that equilibrium to survive!

One might imagine that plants would be smart enough to figure this out, and have crematoria for the vegetable kingdom in order to put back into the atmosphere all the carbon they’d absorbed from it. They might for example have invented the car salesman.

The problem is that plants have the brains of a vegetable. By not seizing that opportunity back when they had the chance, they’ve allowed a huge and unstoppable ecosystem of microscale and even nanoscale carbon consumers to evolve that is contributing to the slow but steady sequestration of carbon deep inside the Earth.

This is true for land plants, especially kudzu and the vicious Venus flytraps, but it is even more true for the meek seafaring plants because the meek have inherited 70% of the Earth. Carbon sequestration in the ocean is a major part of the carbon cycle, and consists of a steady rain of carbon falling to the ocean bottom, with diatoms playing a central role.

So yes, burning biofuels is carbon neutral. But no, plants are not carbon neutral.

At this point I hear grumbling: Pratt’s a knuckle-dragging moron from the 11th century BC, tell him to please shut up.

Fair enough, I’ve said my piece. Let me just leave you with a few likeminded commenters on this delicate issue.

“It’s taken decades to sort whether or not plants are carbon neutral.”

Yeah, but those decades were in the 19th century. The terrestrial carbon cycle isn’t complicated except in details too small to worry about with regard to partial pressure in a well-mixed global atmosphere. As I explained, the only thing not strictly considered carbon neutral in the terrestrial biosphere is trees, but in the long run even trees don’t last forever, and when the wood burns or decays the carbon therein is returned to the atmosphere. The following may help you if you aren’t beyond help:

So if you want to point to massive deforestation to plant annuals used in biofuel production you have a point. One that everyone else here either did or would readily concede because it’s common knowledge. But that deforestation usually happens anyway because people want to grow some kind of money crop including selling lumber from old growth trees.

So either stop the babbling and point to a specific case of biofuel crops displacing old growth forest or STFU.

David Springer: Plants are carbon neutral. Desert is carbon neutral. Don’t matter what was on the land prior to conversion to biofuel crop.

Perennials, including switch grass, store substantial amounts of carbon in their root systems and leaf litter. Even when much is harvested for fuel, much is stored year after year. So on the whole they are slightly carbon negative, even when harvested for biofuel. It makes a great difference if the desert is converted to mangrove forest, as has been done now on thousands of acres in, for example, Eritrea: (http://learningenglish.voanews.com/content/gordon-sato-mangrove-poverty-manzanar/1562373.html).

David Springer: As I explained the only thing not strictly considered carbon neutral in the terrestrial biosphere is trees but in the long run even trees don’t last forever and when the wood burns or decays the carbon therein is returned to the atmosphere. The following may help you if you aren’t beyond help:

Your link does not actually support your claim. In most undisturbed biomes (forests and grasslands included), carbon accumulates year after year in roots and humus — even forest and range fires leave behind much unburned carbon, some as roots, some as charcoal, some as soil. Switchgrass and other grasses in temperate savannahs can develop root systems up to 2 feet thick. Not even rotting converts all of the carbon to CO2.

If you have a better adjective than “useless” expressing the meaning of your phrase “has no basis in anything” I’m fine with using that instead.

A great many people consider the concept of “per capita” very useful. Given that the term is well-defined, fairly accurately estimated, and of interest to many, I’m sure they would be quite offended to be told that it “has no basis in anything” because they would infer (as I did) that you are telling them the concept is useless.

If you didn’t intend people to draw that inference then you should choose a wording for whatever you meant that doesn’t carry that implication.

Vaughan Pratt, you seem to have a problem getting comments to land in the right spot. It’s only by chance I even saw this one, since even a search for my name wouldn’t have found it.

Then again, maybe it’s unfortunate. Your response is nothing but a waste of time, as the “new value” I referred to was not the concept of per capita. That’d have been clear to readers if you had bothered to quote what you were responding to, not just some convenient subset of it.

Even worse, you’ve blatantly misquoted me. I never said anything “has no basis in anything.” I specifically said it has little or no basis anything. I have no idea how you could come up with such an obvious misquotation, especially not when you started your comment by quoting me accurately.

Not only did you misrepresent what I was talking about, you blatantly misquoted me to exaggerate what I said in a way which created a straw man. You then had the audacity to say:

If you didn’t intend people to draw that inference then you should choose a wording for whatever you meant that doesn’t carry that implication.

It’s cheeky to tell me to “choose a wording” that accurately reflects my intended meaning while blatantly misquoting me (as well as misrepresenting what I referred to). When your own comment contains both an accurate quote and a misquotation of the same words, I don’t think my choice of wording matters much.

@BS: Even worse, you’ve blatantly misquoted me. I never said anything “has no basis in anything.” I specifically said it has little or no basis anything. I have no idea how you could come up with such an obvious misquotation.

That deserves to go in “Best of the Internet.” I will enjoy quoting you accurately for years!

Someone we know couldn’t design an experiment to demonstrate the heat trapping of CO2. Funny stuff. Failing that he decided to simply accept CO2 warming as an article of faith and moved away from experiment to a mathematical abstract in an Excel spreadsheet which he paid to have displayed in a poster at the AGU conference. It should be made into a poster of why not to send your kids to Stanford anymore. ;-)

“Vaughan Pratt | December 4, 2013 at 10:42 am |
@BS: created some new value which has little to no basis in anything.

If you have a better adjective than “useless” expressing the meaning of your phrase ” has no basis in anything” I’m fine with using that instead.

###########################

1. Vaughan appears to quote BS first “created some new value which has little to no basis in anything.”

2. Vaughan then paraphrases BS: “If you have a better adjective than “useless” expressing the meaning of your phrase ” has no basis in anything” I’m fine with using that instead.”

3. BS then accuses Vaughan of misquoting:
“Even worse, you’ve blatantly misquoted me. I never said anything “has no basis in anything.” I specifically said it has little or no basis anything.”
Brandon misquotes himself. He said “little TO no,” not “little OR no.”

4. But Vaughan did quote BS, then one sentence later he drops the qualifier “little”.

5. Nobody is misled by this. It’s not a misquote. Note that Vaughan is looking for a clarification: what adjective can be used to express the meaning of “no basis in anything,” or “little to no basis”? We all know what Vaughan is asking. It’s a simple request. But instead of answering, Brandon misreads the request and then misquotes himself.

Here is what he could have said:

Vaughan, my actual quote was “little to no” basis, not merely “no basis.” The adjective you could use would be …

He had that free choice. Now we get to ask: why does Brandon misquote himself? And why avoid Vaughan’s question? It’s an easy question.

I don’t know whether to try to explain that no one is saying that any large portion of the energy is accumulating in the atmosphere, or to point out that changing the volume of a given amount of a gas does not change its energy content. Either or both indicate you don’t know what you are talking about, and it looks like you have chosen to believe someone else who doesn’t know what they are talking about simply because they are telling you what you want to hear. Your understanding of physics is too low to ascertain the validity of the arguments you are hearing; so why else would you choose to believe what you found at that site?

Energy becomes heat and reaches equilibrium, and we measure it with temperature. Energy is stored in plants and fossil fuels and wherever, but the energy in air and water and rocks and dirt is measured by checking the temperature and the properties of the substance.

Chris G | November 28, 2013 at 7:00 pm | Reply
or to point out that changing the volume of a given amount of a gas does not change its energy content. Either or both indicate you don’t know what you are talking about,
——————-
Increasing the volume of the gas lowers its temperature, and yes, energy is lost to work. It is an extremely well-known physical law and the basis behind meteorology.

“energy is lost to work.”
What work do you think is done?
Raising the altitude of the atmosphere above? That would imply that the energy is merely transferred from thermal to gravitational potential, not lost. And, it would raise the radiative TOA.

It’s called wind. Ever heard of it? It pushes stuff around like water in the ocean creating currents and waves and vertical mixing. In extreme cases it pushes stuff really hard and can uproot trees, smash houses, make cows and cars fly, and build massive storm surges. It’s powerful stuff. That’s some of the work that happens. It also moves water vapor from lower to higher altitude which then falls as rain and makes rivers and stuff as the water returns to sea level.

@Genghis: Increasing the volume of the gas lowers its temperature and yes energy is lost to work.

Seems like people are talking at cross purposes here. The above is true for adiabatic expansion. However, Stefan said “When is extra heat, for any reason -> troposphere expands instantly”, by which presumably he means that heating a gas at constant pressure causes it to expand (this is the circumstance for which the constant-pressure specific heat capacity c_p is defined). This, of course, is not adiabatic expansion.

Pretty much every statement that’s been made here is true for some thermodynamic setup.
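For the adiabatic case everyone is discussing, the standard dry-air relation T2 = T1·(P2/P1)^((γ−1)/γ) makes the cooling concrete. A minimal sketch (the parcel’s starting state below is an illustrative assumption, not a value from the thread):

```python
# Temperature of a dry air parcel after adiabatic expansion:
# T2 = T1 * (P2/P1)**((gamma - 1)/gamma), gamma ≈ 1.4 for dry air.
GAMMA = 1.4

def adiabatic_temperature(T1_kelvin, p1_hpa, p2_hpa):
    """Parcel temperature after an adiabatic pressure change p1 -> p2."""
    return T1_kelvin * (p2_hpa / p1_hpa) ** ((GAMMA - 1) / GAMMA)

# Illustrative parcel lifted from 1000 hPa at 288 K up to 700 hPa:
T2 = adiabatic_temperature(288.0, 1000.0, 700.0)
print(round(T2, 1))  # → 260.1 K: the parcel cools, but the energy is not
# "lost" -- thermal energy is converted into work of expansion.
```

The same formula with heat added at constant pressure would instead show the parcel expanding while its temperature rises, which is the other setup being argued about above.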

Chris G | November 28, 2013 at 7:00 pm said: ”I don’t know whether to try to explain that no one is saying that any large portion of the energy is accumulating in the atmosphere”

Hi Chris, oxygen & nitrogen are so sensitive that they expand in accordance with the extra heat and increase the troposphere as much as necessary – and release that much extra heat. O&N are 998,999 ppm of the atmosphere – they regulate the heat, not a trace gas such as CO2.

Chris, I know what you know, the mainstream propaganda – you don’t know what I know. I wouldn’t have been on the net writing, with English not my first or second language, if I didn’t have all the proofs. My theory is the most solid, because I have the most solid proofs. You can find those proofs on my website. Be fair to yourself and broaden your mind, instead of all the narrow propaganda crap; in the end the truth will win: http://globalwarmingdenier.wordpress.com/climate/

Chris G | November 29, 2013 at 12:35 pm | said: ”Get a clue. Expanding a gas does not change the energy content.”

Chris, expanding gases (O&N) create winds – horizontal winds cool the surface / vertical winds cool the planet. During the day it is warmest close to the surface – vertical winds increase and waste that heat upward – nothing to do with any CO2, the pagan beliefs.

In the end I will win, because the propaganda by Warmists and fake skeptics is just that: outdated propaganda.

That’s true for adiabatic expansion. But how do you know Stefan doesn’t have isentropic expansion in mind, where it is the entropy that does not change while work is performed on the environment? From context he could have meant the latter.

WebHubTelescope (@WHUT) | November 30, 2013 at 1:23 am said: ”Stefan is otherwise extremely confused (or more likely engaging in Larrikin games) if he thinks something will swallow up his excess energy”

Hello Telescope! Now that you are an adopted Australian, become a Larrikin – don’t hold back!

Still you are with your crappy formulas – time for you to recognize that the atmosphere has oxygen & nitrogen / winds; they cool the atmosphere, not your lousy CO2.

When you wake up from your trance, you will realize that the atmosphere is not made up of CO2 and methane, but of oxygen & nitrogen – and there is the thing called vertical and horizontal winds. The warmer it gets -> the more those vertical winds speed up -> take the heat up and equalize in a jiffy. Why do you think the oxygen & nitrogen expand INSTANTLY when warmed up extra?

Chris G | November 29, 2013 at 12:35 pm said: ”Expanding a gas does not change the energy content”

Chris, you are wrong, because of your pagan belief that the atmosphere is made of CO2 and methane only!!!

Because it is hottest close to the ground, air / oxygen & nitrogen collect the heat by horizontal wind -> expand and get pushed up by colder air from above (vertical winds created). Same as when you blow up a balloon on the bottom of a swimming pool -> the balloon / hot air goes up where it is cold and wastes the heat. Stop believing in outdated propaganda; learn that the atmosphere has oxygen & nitrogen, and what the horizontal and vertical winds are. Normal physics recognizes that oxygen & nitrogen exist; they create winds by shrinking / expanding. Why do you think the O&N expand instantly when warmed up? Because they have nothing better to do; or because they are regulating ”OVERALL” always to be the same amount of heat in the troposphere.

Chris, I know what you know / what the propaganda stands for – but it is all wrong to ignore the gases that are 998,999 ppm of the atmosphere! Grow up!

When O&N close to the ground warm up by taking heat from the ground -> they expand and go up where it is cold, to waste the heat – that is power – a hot-air balloon can lift 6 people + the basket + the gas bottle + the heavy balloon, up for hours, because the hot air wants to go up.

Chris G | November 29, 2013 at 12:45 pm |
“energy is lost to work.”
What work do you think is done?
—————
Expanding the parcel of air does work, the same as driving a piston in a motor. Why don’t you look up the term adiabatic?
—————-

“Raising the altitude of the atmosphere above? That would imply that the energy is merely transferred from thermal to gravitational potential, not lost. And, it would raise the radiative TOA.”

————-
No, the parcel of air cools as it expands. Look at the IR of the tops of hurricanes: they are extremely cold. It actually lowers the effective radiative level.

This is meteorology 101. I am really surprised at the basic ignorance of it.
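The meteorology-101 point being argued — a rising, expanding parcel cools — is usually summarized by the dry adiabatic lapse rate Γ = g/c_p. A quick sketch using textbook constants (not figures from the comment):

```python
# Dry adiabatic lapse rate: a rising dry parcel cools at Γ = g / c_p.
g = 9.81       # m/s^2, gravitational acceleration
c_p = 1004.0   # J/(kg·K), specific heat of dry air at constant pressure

lapse_K_per_km = g / c_p * 1000.0
print(round(lapse_K_per_km, 2))  # → 9.77 K of cooling per km of ascent
```

That roughly 10 K/km figure is why hurricane cloud tops, many kilometres up, appear so cold in the IR.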

Chris G | November 29, 2013 at 12:45 pm said: “‘energy is lost to work.’ What work do you think is done?”

Chris, you are confused like the rest of them, because you work on an atmosphere without oxygen & nitrogen, which are 998,999 ppm! Vertical winds take the heat from the ground up to the edge of the troposphere / stratosphere (where it is very cold) and waste it.

Today it is warm; by midnight those vertical winds will cool the atmosphere by 10–15 °C in ten hours. The propaganda thinks that those winds cannot cool by an extra 2 °C in 100 years?! What a joke the brain-washers are playing on you lot…

This was in response to one message from David Springer making two good points:

(i) “Recoverable is … a moving target because as technology improves what wasn’t recoverable before becomes recoverable. There’s many times more fossil fuel beneath the surface than what is currently recoverable.”

and

(ii) “In any case there’s almost certainly a carbon-neutral alternative that will be mature in a few decades at most at cost less than fossil fuels ever were. Synthetic biology is the front runner IMO.”

I agree with both of David’s points (unusual given how rarely I agree with even one).

Regarding the first, I made that point early on to Max (and discussed it with Pekka in more detail) with no success. I have no idea whether it’s because David worded it differently or Max merely needed time to think it over — perhaps both.

Regarding the second, more relevant than DNA computing is Artificial photosynthesis. The relevant quote from that article is “artificial photosynthesis may become a very important source of fuel for transportation; unlike biomass energy, it does not require arable land.” [Italics mine.]

This is exactly my objection to biofuels: they require arable land. Arable land should be reserved for plants which are not burnt for fuel so as to give their carbon a better shot at sequestration, expanded on in my 1:57 am message (currently in link moderation). Once you take sequestration into account it becomes clear why plants are not carbon neutral. Artificial photosynthesis is carbon neutral, as David astutely points out.

A prerequisite for growing biomass in deserts is that the desert first needs to be made arable. But now you have the paradox of land that according to the above should be reserved for non-biofuels.

As to where the disagreements originated, part of the fault is mine for failing to keep track of all of Max’s comments and therefore reading some things into his earlier comments that were opposite to explicit statements in his follow-up comments that I missed. My apologies to Max for contributing to the negative tone of the discussion in this way. If it was counterproductive that was not my intent, quite the opposite in fact.

Or even carbon negative if some of it is used for other purposes than Carbon-based fuel.

Incidentally you’ll see from that Wikipedia article’s talk page that in 2008 I was confused myself about whether biofuels were carbon neutral—when I created the article I neglected carbon neutrality altogether, but in response to an immediate complaint I subsequently had a shot at fixing both this and the nagging tone here. The article has evolved further since then, reflecting the ongoing debate about that topic in preference to any absolute claim of carbon neutrality. Although sequestration obviously happens it’s not a simple thing to observe in nature.

“Regarding the second, more relevant than DNA computing is Artificial photosynthesis.”

Artificial photosynthesis has no relevance to synthetic biology except as a competitor. The point about DNA computing was that clean renewable fuels produced by synthetic organisms are just the tip of the tip of the iceberg when it comes to the engineering opportunities therein. In principle, genetically engineered bacteria can be programmed to construct almost anything at any scale with molecule-by-molecule precision. Given the known range of environments and materials which extremophiles can work with, even the sky isn’t the limit, because some can survive even outside the atmosphere. If some solid-state electronic device were to outperform chemical photosynthesis, we’d still be able to program bacteria to build the artificial leaves. It’s exactly equivalent to a robotic workforce where the robots can build copies of themselves as needed, depending on the scale and duration of the task.

“The relevant quote from that article is ‘artificial photosynthesis may become a very important source of fuel for transportation; unlike biomass energy, it does not require arable land.’”

My emphasis above.

This bolded part is either dated or ignorant, and anyone here who has read my commentary for long knows that. I have on very many occasions talked about Joule Unlimited and Joule Fuels, which has bioengineered and patented several strains of cyanobacteria which in close to one step turn sunlight, non-potable water, and CO2 into ethanol and diesel. They do it on non-arable land. In partnership with Audi, they subcontracted with Fluor to build a scalable demonstration plant in New Mexico and also have a small research plant near me in Leander, TX. The board of directors for Joule, headquartered in perhaps the #1 hotbed of bio-tech, Massachusetts, beginning with the world-renowned geneticist George Church, reads like a Who’s Who in the energy and genetic engineering fields.

At least watch the video. Joule is already producing unsubsidized ethanol and diesel well below the cost of fossil derived equivalents.

In fact, it is my contention that carbon is such a versatile building block for virtually any durable or non-durable good, and that the most available source of carbon for making stuff is atmospheric CO2, that before several more decades have passed civilization’s CO2 concern will be that too much is being removed from the atmosphere to build durable goods. Mark my words. Synthetic biology is transformative technology with potential equal to or greater than that of any transformative predecessor, including fire, agriculture, metallurgy, writing, and so forth.

As the earth warms and cools – IF the earth warms and cools – Miami Beach is not where you will find the bigger effects. Ocean currents in Miami do not start in the cold north. You can forget the Miami ice except on rare occasions, or never.

“Thus the earth gives out to celestial space all the heat which it receives from the sun, and adds a part of what is peculiar to itself.”

In other words, the Earth is cooling. It cannot be otherwise. Heat is not a fluid called “caloric”. You cannot store it. All physical bodies radiate energy. All. No exceptions. If they radiate more than they receive they cool.

The Earth radiates away more than it receives. Fourier could see that, unlike so-called “climate scientists” a couple of hundred years later.

Look around you. You are no longer surrounded by molten rock – it has cooled. Insolation is ephemeral. The daytime heating ceases at solar noon, and cooling commences, continuing until after dawn the following day. No heat is “trapped”. Things on the surface heat up, and then they cool. Try and stop Nature operating if you wish. Best of luck with that.

‘The failure of many expensive ventilation systems to confer the comfort expected from them has been due to neglect of such facts as those here cited. The attempts to “renew” the air by displacing a certain volume at regular intervals were primarily based on the theory that good ventilation was due to freedom from the chemical constituents of expired air. We now know that this practice did not achieve the end aimed at, because the essential factors in good ventilation are not freedom from carbon dioxid or from a mythical organic poison, but are coolness, dryness and motion.’

Yep, the number one cause of sick building syndrome was mold aggravated by energy saving reduced ventilation, condensation due to tight vapor barriers and fear of Clorox/maintenance. People tend to look for everything but the obvious.

ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers, even has some pretty good literature on the efficiency of radiant barriers. As long as they are clean and dry they work pretty well. Got to watch that water and aerosols though :)

So what the plot says is that it appears we are losing heat at the TOA.
While it is simply a plus or a minus, the net radiation still throws me.
It also appears OHC has been flattening. It appears all three measurements – TOA radiation, atmospheric temperature, and OHC – are stable.

If, as claimed, the oceans are absorbing excess heat (a greater rate of heat storage than “normal”), then that heat isn’t available to warm the atmosphere. This could cause the TOA radiation to decrease – i.e., like the observed trend presented above.

Here’s a thought experiment to dramatize this: If a whole bunch of ice were dumped into the ocean from the *bottom* of an ice cap, it would soak up lots of energy and cool the atmosphere. The TOA radiation would go down, and the earth’s rate of absorbing heat would *increase* (because the TOA is cooler) -until this transient event has been absorbed – which could take a long time.

Ocean heat transport is complex – it isn’t simple convection or conduction. Hence it is not unreasonable for it to fluctuate, altering the temporary global heat flux while not changing the long term trend.
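The thought experiment can be put in numbers with a linearized Stefan–Boltzmann term: cooling the effective emitting temperature by ΔT changes outgoing longwave by roughly 4σT³·ΔT. A hedged sketch (the 255 K effective temperature is the usual textbook figure, not a number from the comment):

```python
# Linearized Stefan-Boltzmann response: a change dT in the effective
# emitting temperature changes outgoing longwave by ~4*sigma*T**3*dT.
SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def olr_change(T_kelvin, dT):
    """Approximate change in outgoing longwave radiation (W/m^2)."""
    return 4.0 * SIGMA * T_kelvin**3 * dT

# Cooling a 255 K effective temperature by 1 K:
print(round(olr_change(255.0, -1.0), 2))  # → -3.76 W/m^2 emitted less,
# i.e. the planet temporarily absorbs more than it emits.
```

So a transient surface cooling (the ice dump) reduces emission by a few W/m² per kelvin, which is exactly why the absorbed-minus-emitted imbalance would rise until the transient is worked off.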

John Moore > If, as claimed, the oceans are absorbing excess heat … then that heat isn’t available to warm the atmosphere. This could cause the TOA radiation to decrease – i.e., like the observed trend presented above.

For there to be any CO2-driven “excess” heat in the first place – to warm the oceans – there first needs to be TOA surplus. If there isn’t a TOA surplus, any ocean warming has nothing to do with CO2.

I suspect that both the GISS and the IPCC models make the same mistake in trying to model climate with a continuous model. Take a look at the dramatic global average temperature change in 1940: from a rise of 0.15 °C/decade to a fall of about the same magnitude, all in one year. In my experience a change of that sharpness and magnitude cannot be produced by a continuous differential-equation model. If this is correct, we must use a discontinuous model. In any event, a single model would have to produce spikes like that at other times between 1850 and 2013.

What physical reality could produce such a discontinuous process? Of course, a quantum model that recognizes that temperatures can change in a series of ‘steps and stairs, up or down’.

This is an approach I have taken in my theoretical model (underlined above).
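The claimed one-year break around 1940 is the kind of thing a simple two-segment least-squares search can probe. A hedged sketch on synthetic data (the series below is invented for illustration and is not the GISS record; the method is generic changepoint fitting, not the commenter’s quantum model):

```python
import numpy as np

def best_break(t, y):
    """Index of the breakpoint minimizing total two-segment fit error."""
    errs = []
    for k in range(3, len(t) - 3):  # keep at least 3 points per segment
        e = 0.0
        for seg in (slice(None, k), slice(k, None)):
            coeffs = np.polyfit(t[seg], y[seg], 1)       # straight-line fit
            resid = np.polyval(coeffs, t[seg]) - y[seg]  # fit residuals
            e += float(np.sum(resid ** 2))
        errs.append(e)
    return 3 + int(np.argmin(errs))

# Synthetic series: +0.015 C/yr before 1940, then an abrupt switch to
# -0.015 C/yr (invented data, purely illustrative).
t = np.arange(1900, 1980, dtype=float)
y = np.where(t < 1940, 0.015 * (t - 1900), 0.7 - 0.015 * (t - 1940))

print(t[best_break(t, y)])  # → 1940.0
```

On real, noisy data the same search gives a distribution of plausible breakpoints rather than one sharp answer, which is essentially the sampling-uncertainty point made in the reply below.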

Rather than demanding a change to a discontinuous model it requires a reassessment of both the sampling errors and the corrections that have been applied and generally accepting that data uncertainty is larger than officially recognised.

Earth’s energy imbalance and its changes will determine the future of Earth’s climate. It is thus imperative to measure Earth’s energy imbalance and the factors that are changing it.

The required measurement accuracy is ~0.1 W/m2, in view of the fact that estimated current (2005-2010) energy imbalance is 0.59 W/m2. The accuracy requirement refers to the energy imbalance averaged over several years. It is this average imbalance that drives future climate. Stabilization of climate requires the energy imbalance averaged over El Nino-La Nina variability and the solar cycle to be close to zero.

There are two candidate measurement approaches: (1) satellites measuring the sunlight reflected by Earth and heat radiation to space, (2) measurements of changes in the heat content of the ocean and the smaller heat reservoirs on Earth. Each approach has problems. There is merit in pursuing both methods, because confidence in the result will become high only when they agree or at least the reasons that they differ are understood.

The difficulty with the satellite approach becomes clear by considering first the suggestion of measuring Earth’s reflected sunlight and emitted heat from a satellite at the Lagrange L1 point, which is a location between the sun and Earth at which the gravitational pulls from these bodies are equal and opposite. From this location the satellite would continually stare at the sunlit half of Earth.

The notion that a single satellite at this point could measure Earth’s energy imbalance to 0.1 W/m2 is prima facie preposterous. Earth emits and scatters radiation in all directions, i.e., into 4π steradians. How can measurement of radiation in a single direction provide a proxy for radiation in all directions? Climate change alters the angular distribution of scattered and emitted radiation. It is implausible that changes in the angular distribution of radiation could be modeled to the needed accuracy, and the objective is to measure the imbalance, not guess at it. There is also the difficulty of maintaining sensor calibrations to an accuracy of 0.1 W/m2, i.e., 0.04 percent. That accuracy is beyond the state of the art, even for short periods, and it would need to be maintained for decades. There are many useful measurements that could be made from a mission to the Lagrange L1 point, but Earth’s radiation balance is not one of them.
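To put the 0.04 percent figure in context, here is a minimal sketch. The ~240 W/m2 typical global-mean outgoing flux is an assumed round number for illustration, not a value from the quoted text:

```python
# Sketch: what fractional sensor stability does a 0.1 W/m2 target imply?
# Illustrative values only: ~240 W/m2 is an assumed typical global-mean
# outgoing flux at the top of the atmosphere.
target_accuracy = 0.1   # W/m2, required accuracy for the energy imbalance
typical_flux = 240.0    # W/m2, assumed mean outgoing flux

fractional = target_accuracy / typical_flux
print(f"required fractional stability: {fractional:.2%}")  # 0.04%
```

The point of the sketch is simply that a 0.1 W/m2 target is a few-hundredths-of-a-percent stability requirement, which is what makes decades-long calibration so demanding.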

These same problems, the changing angular distribution of the scattered and emitted radiation fields and maintaining extreme precision of sensors over long periods, must be faced by Earth-orbiting satellites. Earth radiation budget satellites have progressed through several generations and improved considerably over the past half-century, and they provide valuable data, e.g., helping to define energy transport from low to high latitudes. The angular distribution problem is treated via empirical angular distribution models, which are used to convert measurements of radiation in a given direction into radiative (energy) fluxes.

The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument, which finds a measured 5-year-mean imbalance of 6.5 W/m2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 W/m2 (Loeb et al., 2009).

The problems being addressed with this tuning probably involve the high variability and changes of the angular distribution functions for outgoing radiation and the very limited sampling of the radiation field that is possible from an orbiting satellite, as well as, perhaps, detector calibration. There can be no credible expectation that this tuning/calibration procedure can reduce the error by two orders of magnitude as required to measure changes of Earth’s energy balance to an accuracy of 0.1 W/m2.

These difficulties do not imply that attempts to extract the Earth’s radiation imbalance from satellite measurements should not be continued and improved as much as possible. The data are already useful for many purposes, and their value will only increase via continued comparisons with other data such as ocean heat uptake.

Pekka Pirilä wonders “Have you been reading the same post I see above, or does your reading stop at that sentence?”

Climate Etc readers are invited to verify for themselves that Steve McGee’s quoted passage fairly reflects the overall thrust of his post, as summarized at the conclusions he gives at the end.

Conclusion: Steve McGee would have been better-advised to conclude — less provocatively but more accurately — “Satellite measurements are insufficiently accurate to provide direct evidence regarding earth’s radiative energy balance for the decade of the 2000s.”

And Judith Curry would have been well-advised to spotlight higher-quality satellite-related research here on Climate Etc!

A fan of *MORE* discourse: Steve McGee would have been better-advised to conclude — less provocatively but more accurately — “Satellite measurements are insufficiently accurate to provide direct evidence regarding earth’s radiative energy balance for the decade of the 2000s.”

I am glad to read that you have come around to my view on this topic. Heretofore you have asserted that there has been a persistent positive energy balance. McGee carefully, meaning with caveats instead of “uncritically”, presents a simple empirical relationship: intriguingly, two of the troughs and one of the peaks correspond to well-studied events, and have the expected signs at the times they occur.

Climate Etc readers are invited to verify for themselves that Steve McGee’s quoted passage fairly reflects the overall thrust of his post, as summarized at the conclusions he gives at the end.

Accepting this invitation, I find (among others) the following:

Still, while not “observation” nor “reality”, the CFSR does represent a best assessment of the recent climate based on observations and the same radiative codes that lie within the prognostic climate models.

And…

To the extent that the CFSR radiance is accurate, it implies that earth was in radiative deficit, not surplus, for the decade of the 2000s and that for this decade, there is no ‘missing heat’ to be found. [my bold]

etc.

Conclusion: FOMDeception would, if being honest, never have bothered to post this comment. In fact, IMO, his comments are aimed at readers who are unprepared (through ignorance or laziness) to check his claims, and who will take his BS as representing a valid argument.

If the seas are rising they are gaining heat. If they are gaining heat then we have an energy surplus.

It’s not alarmist evasion; rather, you are too pig ignorant to have figured that out, and those on this thread who are bright enough to figure it out are keeping very quiet so that morons like you can stay ignorant.

global averages of tide gauge data, after correcting for the effects of post glacial rebound on individual station records, reveal an increase in sea level over the last 80 yr of between 1.1 and 1.9 mm yr−1. As part of the process of removing the effects of post-glacial rebound, we fit those effects to the global tide gauge data and obtain very good agreement with results predicted from post-glacial rebound models. This tends to support the post-glacial model results and it suggests that the global tide gauge data are, indeed, capable of resolving changes in sea level at the mm yr−1 level

Another study (Holgate 2007) concluded that the decadal rate of SL rise has bounced all over the place, from a high of over 5 mm/year to a low of -1 mm/year. Over the 20thC it averaged 1.74 mm yr−1 with the first half of the century having a slightly higher average rate of increase (2.0 mm/yr) than the second half (1.4 mm/yr). The Holgate results have been plotted against several other recent estimates of SL rise.
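As a sketch of how such decadal rates behave, the following uses purely synthetic data — an assumed constant 1.7 mm/yr rise plus noise, not Holgate’s actual record — to show that sliding ten-year least-squares trends can swing widely even when the underlying century-scale rate is steady:

```python
import random

# Synthetic annual sea-level series: constant 1.7 mm/yr rise plus noise.
# (Illustrative only; this is not the Holgate 2007 data.)
random.seed(1)
years = list(range(1900, 2000))
level = [1.7 * (y - 1900) + random.gauss(0, 5.0) for y in years]  # mm

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sliding 10-year windows give "decadal rates" that bounce around,
# while the full-century trend stays close to the underlying 1.7 mm/yr.
decadal = [slope(years[i:i + 10], level[i:i + 10])
           for i in range(len(years) - 9)]
print(f"decadal rates span {min(decadal):.1f} to {max(decadal):.1f} mm/yr")
print(f"century rate: {slope(years, level):.2f} mm/yr")
```

This illustrates why short-window rates alone are a poor guide to the long-term trend: even with a perfectly constant underlying rise, measurement noise makes decade-by-decade rates scatter over several mm/yr.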

A study by Wunsch et al. estimated the rate of SL rise over the period 1993-2003 at around 1.6 mm/year, while IPCC reported a rate of 3.1 mm/year, based on satellite altimetry (a totally different method of measurement covering a totally different scope, i.e. the entire ocean excluding coastlines and polar regions, which cannot be captured by satellite altimetry, versus several different coastlines, where humans live). But Wunsch cautioned (http://ocean.mit.edu/~cwunsch/papersonline/Wunschetal_jclimate_2007_published.pdf):

The widely quoted altimetric global average values may well be correct, but the accuracies being inferred in the literature are not testable by existing in situ observations. Useful estimation of the global averages is extremely difficult given the realities of space–time sampling and model approximations. Systematic errors are likely to dominate most estimates of global average change: published values and error bars should be used very cautiously.

every few years we learn about mishaps or drifts in the altimeter instruments, errors in the data processing or instabilities in the ancillary data that result in rates of change that easily exceed the formal error estimate, if not the rate estimate itself.

So, until these bugs can be worked out, we are probably better off sticking with the tide gauge record, rather than relying on the satellite altimetry results.

Earlier tide gauge records showed similar rates of SL rise in the 19thC. A 2008 study by Jevrejeva et al. shows that SL actually sank during the 18thC, but has risen in both the 19th and 20thC.

So it is reasonable to conclude from the data at hand that SL has been rising at around 1.5 to 1.9 mm/year over the past 150-200 years or so.

It is speculated that some of this rise in SL has been caused by thermal expansion of a warming ocean. This sounds reasonable, but is uncertain, as there are no meaningful long-term measurements of ocean temperature.
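As a rough illustration of the thermal-expansion mechanism, here is a back-of-envelope sketch. All values (expansion coefficient, layer depth, warming) are assumed for illustration, not measurements:

```python
# Sketch: steric (thermal-expansion) sea-level rise for a uniformly warmed
# ocean layer. Illustrative only: the expansion coefficient alpha varies
# strongly with temperature, salinity and pressure.
alpha = 2.0e-4       # 1/K, assumed mean thermal expansion coefficient
layer_depth = 700.0  # m, assumed thickness of the warmed layer
delta_T = 0.1        # K, assumed warming of that layer

rise_mm = alpha * delta_T * layer_depth * 1000.0  # convert m to mm
print(f"steric rise: {rise_mm:.0f} mm")  # 14 mm
```

The sketch shows the scale of the effect: a tenth-of-a-degree warming of the upper ocean plausibly yields sea-level changes of order a centimeter, which is why steric rise is hard to corroborate without good long-term ocean temperature data.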

Max of course being the world’s clearing house for all sea level information.

Since Max regards himself as the least biased source in the world of all climate-relevant information, he would gladly serve in this role, even if the stress of the obligation cost him 3 days of his total life span.

Far be it from me to challenge such a distinguished scientific luminary as yourself – unless you happen to set off my BS meter (as you did with your CO2 projection).

But lolwot is another case.

He makes the simple statement (which I agreed is logical, but is unsubstantiated by any actual physical data) that if the SL is rising the ocean must be gaining heat.

The data on SL rise are out there, and they show that the decadal rate of SL rise has bounced up and down (from over 5 mm/year to -1 mm/year) since the tide gauge records started, but over the long haul has been fairly constant at around 1.6 to 2.0 mm/year on average since the 19th century.

However, there are no meaningful data on ocean temperature changes prior to ARGO in 2003, so any postulated increase cannot be corroborated.

A fan of *MORE* discourse: Matthew R Marler and AK, where is the heat-energy coming from that is steadily expanding Earth’s oceans and steadily melting Earth’s ice-caps?

good question. Are you backing away from your earlier statement that TOA measurements are too inaccurate to decide whether there is or isn’t an energy deficit? How about your claim that McGee was uncritical?

That’s ordinary scientific common sense, isn’t it Matthew R Marler and AK? “Scientific common sense”, if it is not an oxymoron, is an insistence that all assertions be tested against every sort of evidence that can be made available.

FOMD > Matthew R Marler and AK, where is the heat-energy coming from that is steadily expanding Earth’s oceans and steadily melting Earth’s ice-caps?

Apart from being a fact-free comment, this completely and deliberately avoids the question at hand, namely measuring the radiation budget.
Your unswerving and unalterable pre-commitment to an alarmist conclusion, no matter what, is, as always, showing: a clear symptom that your primary motivation is political.

I doubt anything Hansen has to offer is worth reading. As for McGee’s work here, the problem is your lie claiming that ‘Steve McGee blithely concludes [uncritically] “Earth was in radiative deficit, not surplus, for the decade of the 2000s.”‘ He has included appropriate caveats, and simply demonstrated that a prima facie look at satellite data suggests “that earth was in radiative deficit, not surplus, for the decade of the 2000s and that for this decade, there is no ‘missing heat’ to be found.”

This simply means that the data seem to be pointing in different directions, and more research and analysis is needed. I see no evidence of “blithe, uncritical, conclusions”.

Josh Willis first detected cooling in ocean surface in 2006.
He was told to get with the program and then retracted his conclusions.
What is a young researcher to do?

JCH > Hogwash. Takmeng Wong would have to be in on it. Graeme Stephens would have to be in on it. And many many others. It’s an absurd contention.

Your claim of hogwash is itself hogwash. Far from it being absurd that researchers are brought into line by the vested interests of their paymasters, it is business as usual. Your claim of absurdity is what is absurd.

The problem continues to be that the ostensible anthropogenic CO2 warming signal is so small that it falls inside the margin of error of the instrumentation used to study temperature and attribution. Being so close to zero, with a margin of error greater than the signal, allows researchers to make reasonable yet biased choices in data processing that skew the signal in the direction of making global warming look at least alarming enough to make more and better instrumentation, and funding for further study, a high priority. Many people realize this and resent the fact that money is being funneled into the wild goose chase called climate change when it could be far more productively spent finding new antibiotics that work against so-called multiply resistant superbugs, defeating malaria, working out cheaper alternatives to finite fossil fuel supplies, designing food crops that use less water, recycling critical agricultural nutrients like phosphorus, and a plethora of other real, immediate problems that, unlike anthropogenic climate change, are both solvable and urgent. The Copenhagen Consensus and its champion Bjorn Lomborg are a rare example of sanity in this crap.

But you have to admit, if any measurement goes against the warming story, scientists dig deep to find out why. If a measurement is consistent, it is “expected” and not much effort is expended to verify it.

foMd
Wow! Following your links I learned that sea levels will rise 34 cm and 11 cm respectively within the next 100 years. Especially the data from the second link made me worry: they come from a gigantic time span of 7 (!) years. But don’t be afraid! In 2010 and 2011 sea levels dropped 0.5 mm per year. Making an extrapolation from that for the next hundred years… And the ice shield of Antarctica was growing over the last years. Ah, got it! You are a clever real estate agent, buying beach houses! Your second link also says that LA uses a gigaton of water each year. Antarctica contains 25 million gigatons of ice. Should be enough for your drinks over the next years.

So the CO2 knob is not making any sound. The idea that variation in 0.04% of the atmosphere could have much effect upon temperature has always been absurd. So the Greenhouse Effect theory has been sufficiently refuted. One could argue that the Greenhouse Effect theory is salvageable, and that it is just that CO2 does not amplify temperature as much as was supposed.

“Mainstream” skeptics, as a matter of politics, do not want to debate whether the Greenhouse Effect theory is valid; one could say they want to focus on the low-hanging fruit, or fight on the best battlefield. Or you could say “mainstream” skeptics want to remain within the actual mainstream of scientific consensus [not to be confused with the phoney consensus of “the science is settled”]. The idea of getting meters of sea level rise within a century, the idea of runaway warming, has always been deviant “science”, never something most scientists gave much weight to. And one could say no one has ever attempted to make any scientific argument for it. That is a religious idea from start to end.
But I would say we always have had, and always will have, religious kooks involved with science, and such kooks have never been a fundamental problem for the progress of science. It is rather the pseudoscience of Greenhouse Effect theory which has been the more serious problem.

The Greenhouse Effect is the idea that Earth would be 33 degrees cooler if not for greenhouse gases; or, that the atmosphere increases the average temperature by 33 C. Keep in mind it’s not 30 C warmer, nor 40 C, but precisely 33 C.
So the idea is that “today”, or in present times, we are 33 C warmer due to the atmosphere, and in particular due to the subset of the atmosphere, about 1% of it, that is greenhouse gases. Or: if you could “somehow” remove these greenhouse gases from the atmosphere, the average temperature would cool by 33 C.

There is no means of doing this, as Earth is covered with water, and even if the water were frozen we would still have water vapor in the atmosphere. CO2 levels on Earth are so low a percentage of the atmosphere that an objective observer could tell there is life on Earth, plant life, because of the low levels of CO2 in our atmosphere, though a more easily detectable aspect of our atmosphere is the large portion of reactive oxygen.
So according to this Greenhouse Effect theory, under present conditions [which are, by the way, poorly defined] we would be precisely 33 C cooler without greenhouse gases. Included in this is the present albedo of Earth; the modern term would be Bond albedo, which is 0.306 according to: http://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html
So according to the theory, if Earth’s albedo were altered, the 33 C number would change: if Earth were snow white and lacked greenhouse gases, Earth’s average temperature would be much more than 33 C cooler.
So it is the existing Bond albedo plus the current level of greenhouse gases which is said to cause Earth to be 33 C warmer.
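For what it’s worth, the origin of the ~33 C figure can be sketched from the Stefan-Boltzmann law using the Bond albedo quoted above. The 288 K surface temperature and 1361 W/m2 solar constant are assumed round values; the answer comes out near 33-34 K depending on the constants chosen:

```python
# Sketch of where the "33 C" figure comes from: compare Earth's mean
# surface temperature (~288 K, assumed) with the effective radiating
# temperature implied by the Stefan-Boltzmann law and the Bond albedo.
sigma = 5.670e-8   # W/(m2 K^4), Stefan-Boltzmann constant
S = 1361.0         # W/m2, solar constant (assumed round value)
albedo = 0.306     # Bond albedo (NASA fact sheet value cited above)
T_surface = 288.0  # K, assumed global mean surface temperature

absorbed = S * (1.0 - albedo) / 4.0   # spherical average, W/m2
T_eff = (absorbed / sigma) ** 0.25    # effective radiating temperature
print(f"T_eff = {T_eff:.0f} K, greenhouse effect = {T_surface - T_eff:.0f} K")
```

Note the commenter’s point in passing: the result depends directly on the albedo fed in, so the “33 C” number is tied to the present Bond albedo rather than being a fixed property of greenhouse gases alone.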

Those who believe in the greenhouse theory also believe greenhouse gases are the effect of warmer conditions; our Earth cannot have an average temperature of around 15 C without having this much water vapor in the atmosphere. If the Earth were to get warmer, this would increase the amount of water vapor in the atmosphere [a warmer atmosphere can potentially hold more water vapor], and a cooler world cannot hold as much water vapor. It is this well-known limitation of water vapor which causes some to imagine that CO2 is a control knob: Earth could have half its atmosphere be CO2 [a comparatively massive increase in CO2] and it would not matter how warm or cold the Earth was, other than that, if cool enough, CO2 could freeze out of the atmosphere at the poles [though such a massive increase of CO2 would not be considered possible under cooler conditions].
A *slight* problem with the CO2 control knob is that the ice core record now indicates that rising global CO2 levels follow warming conditions. So, like water vapor, CO2 levels follow warming conditions.

So at present we know CO2 was an effect of warming conditions in the interglacial/glacial periods of recent [the last million years] time, and at the moment rising CO2 levels do not seem to be adding much to global temperature; it is an opinion [rather than an objective fact] that the recent rise in global CO2 levels may be causing warming.

We also say that recent peaks in global temperature [the 1998 El Nino; some say super El Nino] were caused not by an atmospheric phenomenon but by an ocean phenomenon. Likewise, everyone would expect possible sudden increases in future global temperature to be related to ocean phenomena; that is, ocean phenomena stand in a causal relationship to global temperature.
But oceans are not the atmosphere, and greenhouse theory is about the atmosphere, not about the oceans. The oceans are not supposed to be the causal factor; the atmosphere is supposed to be the cause of increasing or decreasing global temperature.

Greenhouse theory has led to the general idea that we are currently in the warmest period of history, the warmest period ever; it is the Greenhouse theory that has caused nutters to imagine we are on the verge of Earth becoming “Venus-like”.
BUT everyone who knows anything about the history of Earth’s climate knows we are in an Ice Box climate, one of the coolest periods in Earth’s history.
The inescapable facts that we are in a cool period are, one, that we have polar ice caps, and two, that the ocean is pretty darn cold. Either or both tell you we are in an Ice Box climate.
So one can perhaps be considered reasonable to ask what would happen were we suddenly to change from an Ice Box climate to warmer conditions. Maybe that could be reasonable, maybe.
But you would begin by acknowledging that, no, we are not in one of the warmest periods in history; rather, we are in an Ice Box climate.

For tens of millions of years Earth has been in cooler conditions; it has been in an Ice Age. For at least 10 million years this whole period has been what one can call an Ice Age, a period in which we have never left its general character: glacial times in which sea levels are 100 meters lower and there are ice caps in non-polar regions, and interglacial periods in which sea levels rise more than 100 meters, the polar ice caps more or less persist, and the non-polar ice caps have melted. In addition, the average ocean temperature has never gone higher than 10 C.
So not having significant polar ice, and/or having average ocean temperatures above 10 C, would not be conditions of this Ice Age. And it is impossible to get out of such Ice Age conditions within a century [barring events like supernovas, large impactors, our Sun doing something very dramatic, or an end-of-the-world level of global volcanic activity], none of which are related to present or future global CO2 or methane levels.

Finally, when this old theory of the Greenhouse Effect was first proposed, did it allow for or predict what is now known? Does it predict or explain why we are in an Ice Age? Or, said differently, do we need to include the ocean as a significant factor defining global climate? Is it mostly about oceans rather than about air?

Answer: Rather than “backing away”, it is more accurate to say that the climate-science community is advancing toward an affirmative consensus that global measures of heat energy are superior in accuracy and stability to satellite measures of energy radiation.

A fan of *MORE* discourse: Rather than “backing away”, it is more accurate to say that the climate-science community is advancing toward an affirmative consensus that global measures of heat energy are superior in accuracy and stability to satellite measures of energy radiation.

I was not asking about the climate science community (about which you make an undocumented assertion), but about your own conflicting assertions.

The “climate-science community” serially abandons past claims (remember Mann’s hockey stick, formerly at the IPCC web page, since abandoned? And the disappearing Antarctic ice?), in favor of supportive recent trends. Now they abandon the predictions/projections/forecasts/etc of warming in the early 21st century (made by Hansen, for example) and direct attention to the poorly estimated ocean heat content. Big deal.

There was something in the foggy distant past, about 10 years ago, with McIntyre complaining about Mann’s 1999 paper, but paleoclimate has moved on since then, even if the skeptics are stuck in the past. Check out Fan’s last link for a 2013 update on things (MWP, LIA, etc.).

Jim D
There was something in the foggy distant past, about 10 years ago, with McIntyre complaining about Mann’s 1999 paper, but paleoclimate has moved on since then, even if the skeptics are stuck in the past.

Some brand-new hockey stick frauds, to help the faithful forget about the blowback from the old one.

a fan of *MORE* discourse: It is a pleasure to help correct misunderstandings of climate-science, Matthew R Marler!

fwiw, you are still avoiding the questions that I asked you. That is, naturally, your prerogative, but you have clearly been inconsistent on the question of how accurate the TOA data are. And you seem to have missed that McGee was quite open about caveats surrounding his informal observation.

This reanalysis seems to show significant swings for El Chichon, Mt Pinatubo and the ’98 El Nino. These features are present but less marked in my assessment of the length of the Arctic melting season, and both records show the same steady decline since 1989 that is also seen in the Arctic Oscillation index: http://climategrog.wordpress.com/?attachment_id=226

foMd
Oh, you bring in THIS James Hansen, the “best available scientist” of our time, as you mentioned in another post (James Hansen -writing- to Yasuo Fukuda, like Einstein to Roosevelt).
What about James Hansen writing to SONY? One of the trickiest computer game players to the most famous computer game producer?
Having played Climate Doom to the ultimate level, in both directions.

“JC note: This post was submitted via email. Since this is a guest post, please keep your comments relevant and civil.”

As this is a ‘guest post’, I guess I need to direct my comments to you, Judith. There must be a motive behind your publication of Steve McGee’s mail, so I’ll try to expound some of the logic behind my ‘belief’ about your motive for its publication. :)

My ‘belief’ is that you want to improve understanding within the ‘climate science’ arena ‘per se’. However, I’m not so sure that you are going about this in the right way, but that’s just my own belief.

When a ‘temperature based model’ is used for the ‘global energy budget’ there are ‘other attractors’ missing (I’ve not read other thread posts here).

There’s a lot of ‘hidden heat’ moving around in the troposphere. Take time to watch how a cloud disperses and regenerates when you get the chance to. An enormous amount of energy is ‘sourced/sunk’ here, but without a ‘noticeable’ change in temperature. ‘Latent heat’ behaves more like a temperature ‘buffer’ within the atmosphere, which ‘skews’ the thermometer readings. :)

Same for surface temperature. ‘Where insolation rates are equal’, it’s always a ‘lower’ temperature where water is available for evaporation, but a ‘higher’ temperature where there is no water.

suricat | December 14, 2013 at 7:09 pm said: “…it’s always a ‘lower’ temperature where water is available for evaporation, but a ‘higher’ temperature where there is no water”

suricat, where water is available to evaporate, days are cooler BUT nights are warmer! For Vaughan Pratt only the ”hottest” minute in the 24h is important, because they are obsessed / interested in ”heat” only, not reality; BUT for nature / climate that factor is most important.

if you compare the desert vs the rainforest on the same latitude, the desert has much hotter days BUT colder nights than the rainforest. If you take into consideration every individual minute in 24h in both places, they have the same temp… cheers!

“if you compare the desert vs rainforest on same latitude- desert has much hotter days, BUT colder nights than the rainforest.”

Quite right! This is why I mentioned that “‘Latent heat’ behaves more like a temperature ‘buffer’ within the atmosphere, which ‘skews’ the thermometer readings.”, but I guess I should have mentioned that this is also true for surface temperature readings (though more complex). I shouldn’t just ‘assume’ the reader would realise this. Thank you for the prompt. :)

“If you take in consideration every individual minute in 24h on both places = they have same temp.. cheers!”

This would suggest that both ‘radiant energy losses’ and ‘latent energy losses’ are ~equal after conduction/convection is accounted for with surface temperature, but this isn’t so, as ‘cloud albedo’ and ‘surface albedo’ aren’t addressed for the purpose of ‘surface insolation absorption’.

Ocean heat content and sea level rise both independently point to the earth being in energy surplus. Both these things are ignored. Denied. Even with one commenter above inventing conspiracy theories to facilitate the denial.

No, we must only look at the satellite reanalysis data, which even the author of this atrocious post has had to admit is flawed, but nonetheless offers conclusions without any mention of the OHC or sea level data.

Of course, ocean heat content did fall between 2003 and 2009, somewhat consistent with the CFSR:

The OHC is important, of course, but just like the uncertainties that we must contend with in other measurements, so too must we not lose sight of the questions over OHC, including:

What part of the OHC trend is diabatic and not adiabatic?
How is anomalously warmer water fighting the physical force of buoyancy to heat the deeper waters?
And how, if net energy flux is going into the ocean, is there a cooling trend since 2001?
and others.

Your thinking about the thermodynamics of the ocean to atmosphere heat exchange is completely erroneous– but don’t feel bad, as very few have a good grasp of it. Witness, for example, absurd statements like this one from a leading scientist:

“A faster land and ocean surface temperature response to a given forcing will actually slow the rate of increase in the overall ocean heat content because the increased outgoing radiation from a warmer surface means that there is less energy available to heat the ocean.”

This leading scientist, it would seem, has failed to get basic thermodynamics correct. However, it is important to get it right, and thoroughly understand that the net flow of energy is from ocean to atmosphere globally by a very big margin. Thus, except for very isolated areas, the atmosphere does not heat the ocean– it is solar SW that does that for the most part. However, the atmosphere does dictate to a big extent how rapidly the ocean exchanges energy with the ultimate heat sink of outer space. A warmer atmosphere will slow the rate of energy flow from ocean to space.

R Gates, I would not dispute your quoted statement. Perhaps your interpretation is different, but to me it looks fine. It is saying that a forcing change goes either into warming of the surface temperature or increasing the ocean heat content and is generally shared between them. More of one means less of the other.

By way of analogy suppose you put 20% of your monthly salary in an IRA and the remaining 80% in an account that you spend down at 40% a month. If you switch to putting 90% in that account, the account “responds faster” (i.e. grows more quickly) but your IRA now grows only half as fast as before while the 10% extra in your account participates in the 40% outflow. At the end of the year, even though your account may have more in it, your net worth with the 10/90 ratio is less than it would have been with the 20/80 ratio because you’ve been spending more.
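The salary analogy above can be run as a minimal sketch. All the splits and the 40% monthly outflow are the numbers assumed in the paragraph; the monthly salary of 100 is an arbitrary unit:

```python
# Sketch of the salary/IRA analogy: a monthly salary is split between a
# long-lived store (the "IRA", standing in for ocean heat content) and a
# fast-responding account (the "surface") that loses 40% of its balance
# each month. Numbers are those assumed in the analogy; salary=100 is
# arbitrary.
def net_worth_after(months, salary, ira_share, spend_rate=0.40):
    ira, account = 0.0, 0.0
    for _ in range(months):
        ira += ira_share * salary              # slow store, no outflow
        account += (1.0 - ira_share) * salary  # fast store
        account *= (1.0 - spend_rate)          # 40% leaves each month
    return ira + account

w_20_80 = net_worth_after(12, 100.0, 0.20)  # 20% to IRA, 80% to account
w_10_90 = net_worth_after(12, 100.0, 0.10)  # 10% to IRA, 90% to account
print(f"20/80 split: {w_20_80:.1f}; 10/90 split: {w_10_90:.1f}")
```

Running it confirms the point of the analogy: routing more of the inflow through the fast, leaky store leaves a smaller total at year’s end, even though that store itself grows more quickly.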

This is more than just a matter of perspective. The atmosphere simply never ever ever causes a net warming of the ocean. With over 50% of the energy in the atmosphere coming from the ocean, to suggest that it is an either/or proposition of where the energy might go misses the basic dynamic of the overwhelming flow of energy from ocean to atmosphere: it is always flowing from ocean to atmosphere. It is the concentration of non-condensing GH gases in the atmosphere that acts as a regulator (or a thermostat, if you wish), dictating the rate of flow of energy from ocean to space over the long term by altering the thermal gradient of the atmosphere. Solar SW is where the oceans get their energy from, and a warmer or cooler atmosphere in itself does not dictate the flow of SW to the ocean. (The role of clouds is a separate issue.)

Thus, by basic thermodynamics, a warmer atmosphere actually goes hand in hand with even more energy being stored in the ocean as the gradient between ocean and space is less steep.

Finally, in the case of El Niños or La Niñas, from a general perspective: during El Niño, we see an increase in SSTs corresponding to an increase in the rate of flow of energy from ocean to atmosphere. This energy is then seen as an increase in tropospheric sensible heat and in the net outbound radiation at the TOA. During La Niña, the opposite is true, with a net decrease in the rate of flow of energy from ocean to atmosphere, lowered tropospheric temperatures, and thus a reduced net outbound LW at the TOA.

Discussing heating of oceans seems very often to get confusing for semantic reasons. (The same issue is a problem whenever heating or warming of anything is argued about.)

One way of looking at what's going on says that the atmosphere never heats the ocean, as the net heat flux is always from ocean to atmosphere.

The alternative possibility is to consider changes in heat fluxes, not the full heat fluxes. The change in the net heat flux between the oceans and the atmosphere may have either sign. On this approach a warmer atmosphere may very well heat the oceans.

My impression (in the absence of further evidence) is that the "leading scientist" referred to the second way of looking at the situation, and that R. Gates condemned that based on the idea that only the first way is correct. He seems to be alone on this. Or perhaps I have misunderstood what each one is saying.

“One way of looking at what's going on says that the atmosphere never heats the ocean, as the net heat flux is always from ocean to atmosphere.

The alternative possibility is to consider changes in heat fluxes, not the full heat fluxes. The change in the net heat flux between the oceans and the atmosphere may have either sign. On this approach a warmer atmosphere may very well heat the oceans.”

One way to look at it is that whatever heats the ocean causes some evaporation of the ocean.

One must also note that the ocean is in a constant state of evaporation and condensation. Dry air will cause more evaporation- so wind [not heat, but the adding of drier air] can cause evaporation. And one has the opposite with wetter air descending, or somehow arriving at the surface, causing condensation.
But other than this, heat will have the net effect of evaporation in addition to warming the water.

The heat flux from ocean to atmosphere is always the sum of latent heat transfer through evaporation, sensible heat transfer through convection (and conduction at the interface), and radiative heat transfer, whose net is likewise from ocean to atmosphere. All these together compensate for the heating by the sun.
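As a rough illustration of that decomposition, the three components can be written with standard bulk-aerodynamic formulas; the transfer coefficients and sample tropical inputs below are illustrative assumptions, not values from the comment:

```python
# Rough bulk-formula sketch of the three ocean-to-atmosphere heat fluxes
# named above. Coefficients and sample inputs are illustrative only.
RHO_AIR = 1.2       # air density, kg m^-3
CP_AIR = 1005.0     # specific heat of air, J kg^-1 K^-1
LV = 2.5e6          # latent heat of vaporization, J kg^-1
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
C_H = C_E = 1.3e-3  # bulk transfer coefficients (typical order of magnitude)

def ocean_fluxes(t_skin, t_air, q_skin, q_air, wind, lw_down, emissivity=0.97):
    """Return (sensible, latent, net_longwave) in W m^-2, positive upward."""
    sensible = RHO_AIR * CP_AIR * C_H * wind * (t_skin - t_air)
    latent = RHO_AIR * LV * C_E * wind * (q_skin - q_air)
    net_lw = emissivity * (SIGMA * t_skin**4 - lw_down)
    return sensible, latent, net_lw

# Typical-ish tropical values: skin 300 K, air 298 K, wind 7 m/s,
# saturation humidity at the skin ~22 g/kg, air ~15 g/kg, DWLR ~400 W/m^2.
sh, lh, lw = ocean_fluxes(300.0, 298.0, 0.022, 0.015, 7.0, 400.0)
```

With inputs of this kind the latent (evaporative) term comes out largest, matching the usual picture of evaporation carrying the biggest share of the ocean-to-atmosphere flux.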

Everyone who has taken a serious look at this agrees with the above paragraph. We often see accusations that some scientists or commenters supporting mainstream science do not take evaporation into account. Those claims are examples of the strawman fallacy.

We also sometimes see claims that evaporation would increase so much that the other forms of heat transfer would go down, or that evaporation would totally prevent warming of the surface from an increase of DWLR. That's an erroneous claim: all three forms depend on the temperature difference between the skin water and the near-surface atmosphere. In addition, the humidity of the atmosphere affects evaporation and radiative heat transfer, and winds affect evaporation and convection.

As far as I understand your question, the jar behaves in the same way as any body of the same size, heat capacity, and LWIR emissivity. The glass of the jar determines the emissivity, because the glass is so opaque to IR that what's inside does not much affect the emissivity. The emissivity of glass for LWIR is close to 1.0.

In shade in outer space the jar radiates at almost full blackbody intensity as given by the Stefan-Boltzmann law. At night it radiates at the same intensity, but is also heated by IR from all directions; how strongly depends on local conditions. It's also cooled or warmed by convection/conduction. As the rate of warming is not specified in your question, no further answer appears possible.
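For concreteness, the Stefan-Boltzmann flux for a jar near room temperature, with a glass-like emissivity as described above (the 295 K temperature is an illustrative choice):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temp_k, emissivity=1.0):
    """Graybody emission per the Stefan-Boltzmann law, W m^-2."""
    return emissivity * SIGMA * temp_k ** 4

# A glass jar near room temperature (295 K) with LWIR emissivity ~0.95
# radiates on the order of 400 W per square meter of its surface.
flux = radiated_flux(295.0, emissivity=0.95)
assert 380.0 < flux < 430.0
```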

“The heat flux from ocean to atmosphere is always the sum of latent heat transfer through evaporation, sensible heat transfer through convection (and conduction at the interface), and radiative heat transfer, whose net is likewise from ocean to atmosphere. All these together compensate for the heating by the sun.

Everyone who has taken a serious look at this agrees with the above paragraph.”
Sure, all things radiate, convect, conduct, and evaporate heat.

“We often see accusations that some scientists or commenters supporting mainstream science do not take evaporation into account. Those claims are examples of the strawman fallacy.”

I must have missed this common accusation.
I'm just saying that if anything heats the ocean, this would increase evaporation- as well as increasing the other fluxes of heat.

“We also sometimes see claims that evaporation would increase so much that the other forms of heat transfer would go down, or that evaporation would totally prevent warming of the surface from an increase of DWLR. That's an erroneous claim: all three forms depend on the temperature difference between the skin water and the near-surface atmosphere.”

Well, evaporation can be a powerful means to transfer heat- particularly with water. And the total heat content of water vapor in the Earth's atmosphere is massive- more than the heat content of 99% of the rest of the atmosphere. Water droplets also have a pretty high heat content.
And where there are higher levels of water vapor and water droplets it's more significant.
So generally it's more significant in the tropical region.
And basically, the tropics and near-tropics are the region that makes Earth have an average temperature of 15 C.
So between 23°26′ latitude north and south is about 40% of the entire Earth's surface area; just in terms of total area, the tropical region is significant relative to the entire Earth's surface. And in terms of the amount of sunlight this region receives, it's a very dominant region. Extend it out to about ±30° and you have half the world; out to around ±38° is roughly 62% of the surface, a region which receives something like 80-90% of all solar energy.
Conversely, poleward of 38° on both hemispheres combined is under 40% of the entire surface area. And if you go up to the latitudes of Europe, that is a very small portion of the world's surface. And most of such regions are still ocean.
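The area figures are easy to verify: the fraction of a sphere's surface lying between latitudes ±φ is simply sin φ. A quick check (the latitudes are the ones discussed above):

```python
import math

def band_fraction(lat_deg):
    """Fraction of a sphere's surface between +/- lat_deg latitude.

    The area of a spherical zone is proportional to the sine of its
    bounding latitude, so the symmetric band fraction is sin(lat).
    """
    return math.sin(math.radians(lat_deg))

tropics = band_fraction(23.44)  # ~0.40: the tropics are ~40% of the surface
half = band_fraction(30.0)      # ~0.50: +/- 30 degrees is half the world
mid = band_fraction(38.0)       # ~0.62 of the surface lies within +/- 38 deg
```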

So if the obsession is average global temperature, water vapor and water droplets are important aspects. And if you are concerned about places like Europe, heat transport via the oceans is important- direct heat from the Sun is not very significant, particularly in winter.

-“We often see accusations that some scientists or commenters supporting mainstream science do not take evaporation into account. Those claims are examples of the strawman fallacy.”

I must have missed this common accusation.
I'm just saying that if anything heats the ocean, this would increase evaporation- as well as increasing the other fluxes of heat. –

I should note that most of the solar energy reaching the ocean does not heat the ocean skin surface. The ocean is water, and [shallow] water is transparent to somewhere around 99% of sunlight.
And so most of the Sun's energy passes through the skin surface.
Such heat tends to, say, delay the evaporation process; there is little immediate result in terms of evaporation.
And in comparison, any heating possible by air is confined to the skin surface.

Just thought I would clarify that, because my statement could be confused with me somehow suggesting there is some kind of parity among the types of heat fluxes.

An interesting factoid would be the percentage of all sunlight reaching Earth [Earth as a disk above the atmosphere- the “Earth receives 174 petawatts” figure from wiki] which passes through the skin surface of Earth's oceans.

Ok, so one has 70% of Earth's surface area being ocean. But where most of the sunlight goes on Earth, one has a higher percentage of ocean.
Then you have sunlight not getting through the atmosphere, etc.
But just as a guess, it's some number around 50% of this total sunlight energy which would pass through the skin surface of the ocean [or any body of water].

Another factoid could be a different question: what percentage of the energy of sunlight coming directly from the sun and reaching the Earth's surface goes beneath the skin surface of oceans and other bodies of water?
That would be somewhere around 80%.

Then if you exclude land and focus on just bodies of water, one gets a higher percentage of sunlight passing through the skin surface.
Let's define the skin surface for this purpose as the top 1 cm.

[And in the above sentence I meant the latter- sunlight reaching the ocean.]

And finally, rather than confining it to only direct sunlight, the question could include all sunlight reaching the ocean in a direct and indirect fashion- including sunlight which is diffused, scattered, or altered in any fashion other than being converted into heat, and which reaches the ocean surface. What percentage of this passes through the skin surface of water on Earth?
It seems this non-direct sunlight would tend to heat the skin surface more, so not 99% of it passing through.

Or said differently, if you exclude all direct sunlight and focus on the heat caused by indirect sunlight, it seems a much smaller percentage would pass through the top 1 cm of ocean.

Trenberth, Fasullo and Kiehl (2009) estimate the average absorbed flux of solar radiation to be 145.1 W/m^2 for land and 167.8 W/m^2 for oceans. The more recent paper of Stevens and Schwartz (2012) gives essentially the same value (162 W/m^2 as compared to 161 W/m^2) for the global average. Practically all solar SW absorbed by oceans penetrates deeper than 1 cm, typically several meters. A small fraction penetrates much deeper.
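As a consistency check on those numbers, area-weighting the land and ocean values (oceans cover roughly 71% of the surface) reproduces the quoted global average:

```python
# Area-weighted global mean of the absorbed solar fluxes quoted above,
# from Trenberth, Fasullo & Kiehl (2009); the land/ocean split is the
# usual ~29%/~71%.
LAND_FRACTION, OCEAN_FRACTION = 0.29, 0.71
absorbed_land = 145.1    # W m^-2
absorbed_ocean = 167.8   # W m^-2

global_mean = LAND_FRACTION * absorbed_land + OCEAN_FRACTION * absorbed_ocean
# ~161 W m^-2, in line with the 161-162 W m^-2 global figures cited.
assert abs(global_mean - 161.2) < 0.1
```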

“The alternative possibility is to consider changes in heat fluxes, not the full heat fluxes. The change in the net heat flux between the oceans and the atmosphere may have either sign. Based on this approach warmer atmosphere may very well heat the oceans.”
——
You could certainly look at the change in net heat flux, but that would still not get to the underlying overwhelming flow of energy from ocean to atmosphere. If I heat my kitchen oven up to 250 degrees, turn it off and open the door, it is true that it will cool down a bit slower if it is already 80 degrees in my kitchen versus 30 degrees, but the 80 degree kitchen is in no way heating the 250 degree oven. Likewise, different natural variability of both ocean and atmosphere may cause the ocean to lose heat a little faster or slower over various time periods, but the net flow across the planet is always ocean to atmosphere by a wide margin.

Yes, many people are confused about the flow of heat. Heat does not respond to force fields, so that a thermal gradient by itself does not prevent heat from flowing in both directions. On the contrary, the gradient is the result of the distribution of the heat that has flowed.

If you don’t believe me, look at the definition of the Heat Equation. Not a force field in sight, only PDE divergence operators describing the effects of diffusion.
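The point is easy to see in a minimal discretization of the heat equation; the grid size, diffusivity, and boundary values below are arbitrary illustrative choices:

```python
# Minimal 1-D heat-equation sketch (explicit finite differences) backing
# the point above: diffusion is driven only by the local curvature of
# temperature, with microscopic exchange in both directions; only the
# *net* flow runs down the gradient.
N = 50
dt, dx, alpha = 0.1, 1.0, 1.0   # stable: alpha * dt / dx**2 = 0.1 <= 0.5
T = [0.0] * N
T[0] = 100.0                    # hot boundary, held fixed

for _ in range(2000):
    new = T[:]
    for i in range(1, N - 1):
        # discrete Laplacian: no force term, just divergence of the flux
        new[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2*T[i] + T[i+1])
    T = new

# Heat has spread monotonically from the hot end toward the cold end.
assert all(T[i] >= T[i + 1] for i in range(N - 1))
```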

“Trenberth, Fasullo and Kiehl (2009) estimate the average absorbed flux of solar radiation to be 145.1 W/m^2 for land and 167.8 W/m^2 for oceans. The more recent paper of Stevens and Schwartz (2012) gives essentially the same value (162 W/m^2 as compared to 161 W/m^2) for the global average. Practically all solar SW absorbed by oceans penetrates deeper than 1 cm, typically several meters. A small fraction penetrates much deeper.”

So once such an amount of energy passes deeper than 1 cm under the water, this quantity of energy per second is being **trapped** for some period of time.
And there is a lot of such direct radiant energy of the sun being trapped/captured, or whatever, beneath the ocean surface.

The ocean is 70% of Earth's surface area, and according to Trenberth, Fasullo, Kiehl, Stevens, and Schwartz, each square meter of ocean absorbs more energy than a square meter of land.

It's not that I have any particular faith in these 5 guys' papers, but I would agree with them that more than half of the sunlight reaching the Earth's surface goes into the ocean.

And the ocean is quite a different animal than land.
Land functions basically in the opposite way to the ocean.
With land one *cannot* say that practically all the energy penetrates deeper than 1 cm, or that most of this energy travels meters under the surface.
Rather, something close to zero of the sun's radiant energy heats more than a foot under the ground.

So with the land one has about an inch of material in which most of the energy is absorbed. Each day it's warmed, and at night it cools, with a trickle of heat making its way beyond the inch or two of surface.
The air above the land is warmed. There is certainly more heat going into the air than into the soil. Though a warm ocean also warms the air. Roughly, in a regional sense, the land warms the air more during the day than the ocean does; cooler ocean air comes onto the land, and during the night warmer ocean air comes onto the land. The common pattern of offshore and onshore winds demonstrates, in a regional sense, that land is warmed more during the day.
So with land the absorbing is done by a comparatively small amount of material, compared to tens of meters of ocean. Or, if such numbers were accurate, 167.8 W per square meter is warming meters of water while 145.1 W per square meter is warming an inch or two of ground. So warmed ground during the day, per square meter, warms more air and radiates more heat into space, allowing summer conditions with sidewalks almost hot enough to fry eggs and air temperatures over 40 C. Not conditions you find in the middle of the ocean.
But since half the time is night and most of the surface area is ocean, the global air temperature should be mostly set over water rather than land.
Plus another difference: the air temperature over land is usually about 20 C cooler than the surface temperature, whereas because of ocean evaporation [the evaporated H2O gas molecules mixing with air] the air temperature over the ocean is closer to the surface ocean temperature.

The Earth system as a whole is very far from thermodynamic equilibrium. Therefore many of the rules that are true for systems near thermodynamic equilibrium may lead to totally erroneous conclusions.

Small volumes within the Earth system are often near thermodynamic equilibrium, but the Earth system as a whole, or its major subsystems, are not by any interpretation of the word 'near'. With a reasonable interpretation of 'near', the Earth system is a dynamic flow system near a stationary state. Thus it makes sense to consider deviations from the stationary state of the Earth system as a whole, or of subsystems of it.

When we look at the heat transfer between oceans and atmosphere, the stationary state is the one where the temperatures and other state variables, including heat fluxes, stay constant. We may make this statement at different time and spatial scales. The oceans and atmosphere as a whole can be stationary only at the level of annual variables; at smaller spatial scales stationarity may hold at very short time scales as well.

When the dynamic system deviates from a stationary state its state is going to change, sometimes towards a nearby stationary state (whatever ‘near’ means in this case), sometimes it oscillates or diverges in a non-periodic fashion further from the stationary state.

Looking at the system of oceans and atmosphere, in the stationary state:
– the sun warms the ocean mostly at depths from meters to a few tens of meters,
– the heat from the sun gets transferred to the skin either locally or elsewhere through complex circulation,
– the heat is released to the atmosphere (and to a small extent directly to space) by radiation, evaporation, and conduction/convection,
– the heat is radiated to space from atmosphere (or directly from the surface).

All four steps have the same net flux (when the radiation from the surface directly to space is included in two of them); otherwise the system would not be stationary.

The ocean warms when either the energy flux from the sun is increased or the heat loss to the atmosphere is reduced; it cools in the opposite situations.

Increased CO2 in the atmosphere leads to a reduction of heat loss to the atmosphere by two mechanisms:
– more downwelling radiation at the same atmospheric temperature, and thus a smaller net flux of IR from the ocean skin to the atmosphere,
– more CO2 warms the atmosphere directly, and that leads to a reduction in all forms of heat transfer from ocean to atmosphere.

The two mechanisms lead rapidly to a warmer skin temperature of the ocean. That restores the heat flux from ocean to atmosphere to a value closer to its stationary value under the new conditions, but not quite to it until the whole ocean has reached the new stationary state. (That never really happens, as new changes in forcings make the stationary state change as well.)

Internal variability makes everything more complex. Different oscillations of the oceans may often overwhelm the changes in external forcings and the dynamics that try to restore the stationary state, but the basic idea should be clear.
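The relaxation described here can be sketched as a two-box toy model; all coefficients (solar input, exchange coefficient, heat capacity) are illustrative assumptions chosen only to show the direction of the response:

```python
# Toy sketch of the argument above: the ocean is heated at a fixed solar
# rate and loses heat to the atmosphere at a rate proportional to the
# ocean-atmosphere temperature difference. Holding the atmosphere warmer
# (a persistent forcing) reduces that loss, so the ocean drifts toward a
# warmer stationary state. Units and coefficients are illustrative only.

def ocean_equilibrium(t_atm, solar_in=100.0, k=5.0):
    """Stationary ocean temperature where solar_in = k * (T_ocean - T_atm)."""
    return t_atm + solar_in / k

def step_ocean(t_ocean, t_atm, solar_in=100.0, k=5.0, heat_cap=1000.0, dt=1.0):
    """One explicit time step of the toy ocean heat budget."""
    net = solar_in - k * (t_ocean - t_atm)   # net heating of the ocean
    return t_ocean + dt * net / heat_cap

t_ocean = ocean_equilibrium(288.0)           # start in balance with 288 K air
for _ in range(5000):
    t_ocean = step_ocean(t_ocean, 290.0)     # atmosphere held 2 K warmer

# The ocean warms toward the new, higher stationary temperature.
assert t_ocean > ocean_equilibrium(288.0)
assert abs(t_ocean - ocean_equilibrium(290.0)) < 0.01
```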

Coming back to the question: Can a warmer atmosphere warm the ocean?

The answer is clear. If the system deviates from a stationary state by having a warmer atmosphere, the reaction is a reduction in the heat flux from the ocean to the atmosphere, and warming of the ocean.

Pekka Pirilä
“Coming back to the question: Can a warmer atmosphere warm the ocean?

The answer is clear. If the system deviates from a stationary state by having a warmer atmosphere, the reaction is a reduction in the heat flux from the ocean to the atmosphere, and warming of the ocean.”

Warmer air may warm the skin temperature of the ocean.
But it seems quite possible that colder air can warm the ocean- or rather, the ocean skin temperature has little to do with the ocean temperature.
To warm the ocean one needs more direct sunlight. If cooler air results in fewer clouds, and thus more direct sunlight, then cooler air can warm the ocean.
Though if warmer air causes more direct sunlight to reach the ocean, warmer air would then warm the ocean and warm the ocean skin temperature.

But since the ocean skin temperature largely warms the night air, if you can make the night air warmer, the result would be a warmer average air temperature.

But since the ocean absorbs most of its energy beneath the skin surface, and receives most of the Sun's energy delivered to Earth beneath the skin surface, this is how the ocean warms- and how the world is warmed.

As said, the ocean is the opposite of land: increase the land skin surface temperature, particularly the average skin surface temperature, and this warms the ground beneath. Warming of the ocean skin temperature is not a dominant mechanism for warming the ocean beneath the skin surface.

Try once more to understand what I wrote. So far you haven’t got the main point at all (or at least you fail to apply it for most cases).

As an example: The temperature of the atmosphere affects the temperature of the ocean always, day and night, winter and summer. Cloudiness is one factor, but the effects exist even when the cloudiness is exactly the same and only the temperature changes.

Of course the ocean temperature affects the atmospheric temperature even more than the atmosphere affects the ocean temperature, but the influence goes always both ways, and that influence is in turn partially determined by other factors that affect the atmospheric temperature.

gbaikie, the point you’re missing in this discussion with Pekka is that when the atmosphere stays at a fixed temperature, if you wait long enough the fluxes of heat into and out of the ocean will eventually balance.

Now suppose you’re in this balanced situation and suddenly the atmosphere becomes 50 degrees hotter (to pick a ridiculous extreme, maybe a meteor heated it up or something). You now have very hot air above an ocean that up to now had been stable.

Are you claiming that the additional heat from this hotter air will have no impact whatsoever on the ocean? Not even on the top centimetre?

“Try once more to understand what I wrote. So far you haven’t got the main point at all (or at least you fail to apply it for most cases).

As an example: The temperature of the atmosphere affects the temperature of the ocean always, day and night, winter and summer. Cloudiness is one factor, but the effects exist even when the cloudiness is exactly the same and only the temperature changes.”

Assuming CO2 warms the air, then it would warm the ocean skin temperature.
And therefore also increase evaporation.
Or globally, if the ocean skin temperature is increased, one will get more tropical-like conditions. Or temperatures will be characterized by fewer freezing temperatures in higher-latitude regions. And of course there is the same manner of warming of land surface regions, not limited to regions largely influenced by ocean warming effects.

“Of course the ocean temperature affects the atmospheric temperature even more than the atmosphere affects the ocean temperature, but the influence goes always both ways, and that influence is in turn partially determined by other factors that affect the atmospheric temperature.”

I agree, but I believe it is a minor effect.
It's a minor effect even if you assume a very high sensitivity to CO2.
I don't believe you accept the idea of a very high sensitivity to CO2, and I know I don't.

Though I think Earth is warmer than one would assume by imagining Earth as a blackbody because it is a planet covered with oceans, rather than because of the content of its atmosphere. Or: being a planet covered by oceans is a factor which largely affects its global climate.

And I do think the atmospheric composition does affect climate [temperature], but it seems the tropics already have a large atmospheric greenhouse effect- which is part of what explains the tropical climate.

Vaughan Pratt
“Now suppose you’re in this balanced situation and suddenly the atmosphere becomes 50 degrees hotter (to pick a ridiculous extreme, maybe a meteor heated it up or something). You now have very hot air above an ocean that up to now had been stable.

Are you claiming that the additional heat from this hotter air will have no impact whatsoever on the ocean? Not even on the top centimeter?”

Certainly it will, for the top millimeters. Plus wind and waves will mix it up a bit.

The heat from a meteor [line of sight] is very similar to or hotter than the sun in terms of wavelength- so it would punch through the ocean. At depths of normal ocean darkness, it would light up that world. But it's brief and would not do much in terms of warming the ocean.

If a meteor [or a nuclear weapon- they are similar] were to explode over ocean only, one should not get the air temperature increasing by 50 C.
Except that clouds could vaporize, and that steam would heat the air. So any water or water droplets close enough would vaporize or warm significantly.
Of course if it did so over land, within line of sight, it burns everything. Things could instantly go to, say, 1000 C in a second.
But it doesn't do much to the oxygen, nitrogen, or CO2 gas of the atmosphere- though it will make them glow, like a fluorescent light bulb.

But your question is: if the air were suddenly made 50 C warmer, what does that do to the ocean?
It does nothing if the warmer air is not at the ocean surface.

And 50 C warmer air does not stay near the surface; it goes up.
But if 50 C warmer air at some latitude stayed in the area, then the lapse rate would bring the lower air to that warmer temperature- it takes a little bit of time, but given, say, minutes, the lapse rate would transform the temperature of the lower air.

Ok, however we get it, there is 50 C warmer air at the surface of the ocean. So water evaporates fast, but due to the poor heat conduction of water, it does not warm water at depth [more than an inch].
With hours to days of such warmth, the warmth does go to deeper depths, very slowly; but if the sun is out, the water is also going to warm at depth, because all the heat in the water caused by sunlight is blocked from escaping to the surface.

gbaikie, (looks like WordPress lost an indent level, so I hope this shows up in the right place)
If the air was 50 C warmer than the ocean, the main way it would communicate that difference is through the IR emitted by its GHGs, which would result in a lot of downward flux, net into the ocean. Another effect would be the conduction heating by contact at the surface which is less efficient. In the real world the atmosphere is cooler, so the net IR is out of the ocean, but that net is reduced by having GHGs, which at least emit something even when the air is colder. The main thing to realize is that GHGs cause the cooling to be less than otherwise, which means a warmer surface than otherwise.

The case of much warmer atmosphere is not the one I have discussed, as that would lead to an energy flow from atmosphere to the ocean, while I have discussed only the situation where the flow is from the ocean to the atmosphere, and how the temperature of the atmosphere affects the strength of this flow.

Another assumption in what I discuss is that the change that led to the warmer atmosphere is persistent, not a sudden single addition of heat. The change may be considered sudden, but not as an addition of heat; rather as a factor, a forcing, that persists as long as the temperature of the atmosphere has not quite reached the new stationary value, getting weaker the closer the Earth system is to the new stationary state.

The influence of the warmer atmosphere is not restricted to the skin but penetrates deeper into the ocean quite efficiently, because the ocean is not stable and stratified. If nothing else caused mixing, the heating of water by solar SW would lead to it. Without mixing, solar SW would lead to unlimited warming at all depths where it's not totally negligible, i.e. at depths down to hundreds of meters. That mixing would make a thick top layer of the ocean essentially isothermal. The real ocean is not like that, as other forms of mixing dominate- those due to turbulence and large-scale ocean currents.

Independently of the mechanism of mixing, the main point is that whatever happens in the topmost layers affects the rest of the ocean. One rough model of that is effective conduction with a conduction coefficient hugely stronger than the one valid for still water.

The ocean can lose energy only from the very thin skin. Evaporation and conduction directly involve only the topmost molecules, while IR is emitted from the same depths where downwelling LWIR is absorbed, i.e. at depths up to a few µm. The thin skin layer, a fraction of 1 mm thick, is cooled from the top and warmed from below.

Anything that slows down the net heat flux from the ocean through the skin to the atmosphere will lead to warming trend of the ocean, when the heating by solar SW is not changed. A higher temperature of the atmosphere is one such factor, because all mechanisms of heat transfer are weakened, when the temperature of the receiving side rises.

One of the forms of heat transfer is based on evaporation. A higher temperature of the air leads to more evaporation as long as the absolute humidity is the same, but more evaporation leads rapidly to higher absolute humidity, and by that to a reduction of this effect. More evaporation will necessarily lead also to more condensation, where the latent heat is released to the atmosphere. Evaporation/condensation is thus a mechanism of heat transfer, not of heat removal. When more heat is transferred by this mechanism, the temperature difference is reduced, i.e. warmer atmosphere leads also to warmer ocean. Changes in the evaporation do not prevent the ocean from warming, they are part of the totality of changes that cause also the oceans to warm, when the atmosphere warms.
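The humidity dependence described here can be sketched with a bulk evaporation formula; the Tetens saturation approximation is standard, while the wind speed, transfer coefficient, and humidity values are illustrative assumptions:

```python
import math

def q_sat(temp_c, pressure_hpa=1013.25):
    """Saturation specific humidity (kg/kg) via the Tetens approximation."""
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # hPa
    return 0.622 * e_sat / (pressure_hpa - 0.378 * e_sat)

def latent_flux(t_skin_c, q_air, wind=7.0, rho=1.2, lv=2.5e6, c_e=1.3e-3):
    """Bulk latent heat flux, W m^-2 (positive = ocean losing heat)."""
    return rho * lv * c_e * wind * (q_sat(t_skin_c) - q_air)

dry = latent_flux(27.0, q_air=0.010)    # relatively dry overlying air
humid = latent_flux(27.0, q_air=0.018)  # moister air, same skin temperature

# Moistening the air shrinks the humidity deficit and throttles evaporation,
# which is the damping effect described above.
assert humid < dry
```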

@Jim D: If the air was 50 C warmer than the ocean, the main way it would communicate that difference is through the IR emitted by its GHGs, which would result in a lot of downward flux, net into the ocean.

Nice physics question here. As Pekka points out, the air will get humid, making water vapor the dominant GHG, so that's where the radiation will be coming from (the CO2 radiation will be negligible by comparison). But the water vapor is also causing heat to flow from the air to the water as the surface exchanges water molecules in both directions, with the down-going molecules having been heated by the air.

The question then is which effect is stronger: the heat from the IR into the water, or the heat from the exchange of water molecules at the surface?

To model the oceanic mixed layer and ocean breezes, assume the water is being stirred gently and the air is being fanned gently so as to encourage a strong flow of water molecules at the surface. If both are at rest the flow will presumably be weaker.

I would guess that as long as there was some mixing to hasten the exchange of water molecules at the surface, that effect would beat out radiation. This might remain true without mixing.

Quite apart from the exchange of water molecules there is also the heating effect of the air itself via conduction at the skin layer. This too is hastened by stirring and fanning.

Bear in mind that GHGs at 50 C only radiate 40% more strongly than at a room temperature of 25 C. The air may seem unbearably hot yet the radiation need not be that much more.
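That roughly-40% figure follows directly from the fourth-power dependence in the Stefan-Boltzmann law:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_flux(temp_c):
    """Blackbody emission at a Celsius temperature, W m^-2."""
    return SIGMA * (temp_c + 273.15) ** 4

# (323.15 K / 298.15 K)^4 is only about 1.38: air at 50 C radiates
# roughly 40% more than air at 25 C, despite feeling unbearably hot.
ratio = bb_flux(50.0) / bb_flux(25.0)
assert 1.35 < ratio < 1.42
```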

One way of looking at the situation is that all heat fluxes are functions of the temperatures of ocean and atmosphere, humidity levels, and wind speed, to list the most important factors. When the temperature difference between some level within the ocean (say 1 m deep) and one in the atmosphere (say 10 m altitude) is increased, all components of the heat flux grow. The skin temperature also changes a little. It's not really important to know the relative sizes of the changes; the important thing to notice is that the total flux grows rapidly with increasing temperature difference.

From that observation we can conclude that thin layers of ocean and atmosphere reach rapidly a new balance where the temperature difference is only little higher. On the atmospheric side the stability criteria lead to relatively rapid adjustment of both temperatures and moisture levels through the whole atmosphere, while the ocean reacts much more slowly. A change in the energy balance at the TOA leads rapidly to a nearly equal change in the energy balance a little below the surface of the ocean. (Various feedbacks result from that change and affect then both energy balances.)

All the above is part of the fast dynamics; on longer time scales we have the slow dynamics of heat transfer within the ocean. On its properties depends which part of the original forcing is soon canceled by the Planck feedback, and which remains to warm the ocean. The faster the heat transfer within the ocean, the more slowly the surface warms and the larger the part of the forcing that persists.

These issues are also discussed in the 4th and 5th posts of Isaac Held (link in the link list of this site).

This process enables a liquid to exhibit a lower temperature than its environment. In fact, ‘different liquids’ exhibit ‘different temperatures’ (the ‘static temperature’ for ethanol [in ‘free’ evaporation] is less than that for water [within an identical environment]).

The propensity of liquids to become 'gas' robs energy from the liquid/solid that they emanate from. The most recognised description of this process for 'water' is the Clausius-Clapeyron relation.

Ayn Rand/Heartland panel members Keith Lockitch, Fred Singer and Robert Carter provide a spirited defense of Gail’s conspiracy-centered “paymaster” view of science. Steve McGee’s over-analysis of poor-quality satellite data would likely have been well-received at this forum!

What a pity that (as the Ayn Rand Institute video apparently shows) the audience was sparse-to-nonexistent … neither (apparently) has there been any public debate, white paper, or survey article associated with the Ayn Rand/Heartland showcase of climate-change denialism.

Conclusion The conspiracy-centric view of climate-change science is being distilled to greater-and-greater ideological purity within a smaller-and-smaller “bubble” of nuttier-and-nuttier denialists.

As one of the more marginal and nuttier alarmist true believers here, Fan is now reduced to wheeling out the dilapidated old “conspiracy” strawman, in an attempt to deny the blatantly obvious facts that
– the state funds CAGW dogma
– the state stands to benefit from an acceptance of CAGW dogma

The elephant in the room he strains so hard to not see, is the obvious connection.

And you don't need to invent a “conspiracy” theory to explain an organisation acting in its own self-interest. That's what all organisations and people do, and what everybody well understands.

What would require some explanatory theory, is people or organisations consciously not acting in their self-interest.

There is considerable discussion on this thread of the physical processes active in controlling climate.
The key factor in making CO2 emission control policy is the climate sensitivity to CO2. In AR5 WG1 (Section 9.7.3.3) the IPCC says: “The assessed literature suggests that the range of climate sensitivities and transient responses covered by CMIP3/5 cannot be narrowed significantly by constraining the models with observations of the mean climate and variability, consistent with the difficulty of constraining the cloud feedbacks from observations.”
In plain English this means that they have no idea what the climate sensitivity is, and therefore that the politicians have no empirical scientific basis for their economically destructive climate and energy policies. In summary, the projections of the IPCC and Met Office models, and all the impact studies which derive from them, are based on structurally flawed and inherently useless models. They deserve no place in any serious discussion of future climate trends and represent an enormous waste of time and money. As a basis for public policy their forecasts are grossly in error and therefore worse than useless.
How then can we predict the future of a constantly changing climate? A new forecasting paradigm is required. It is important to note that in order to make transparent and likely skillful forecasts it is not necessary to understand or quantify the interactions of the large number of interacting and quasi-independent physical processes and variables which produce the state of the climate system as a whole, as represented by the temperature metric.
A simple rational approach to climate forecasting based on common sense and quasi-repetitive, quasi-cyclic patterns has been developed in several posts at http://climatesense-norpag.blogspot.com
There has been no net warming for 16 years; indeed, the earth has been in a cooling trend since 2003, which will continue until about 2035 and perhaps for hundreds of years beyond that. For estimates of the timing and amount of the coming cooling see the link above.
British, German and Obama’s Climate and Energy policies are based on delusional fantasies of future warming.

@Dr Norman Page: indeed the earth has been in a cooling trend since 2003 which will continue until about 2035 and perhaps for hundreds of years beyond that.

Given that 2003 was only ten years ago, if Dr Norman Page can confidently extrapolate from a single decade to hundreds of years on that basis alone, it’s pretty clear what weight to assign to the good doctor’s input.

The question is how can any of you claim the world is cooling since 2003 when the UAH satellite record shows a positive trend since 2003?

I would remind you deniers that UAH is your record, run by your buddies. HadCRUT and GISTEMP can’t be trusted. Just reminding you of this, because for some reason it seems to have evaded your minds!

@JC: May I take it from your post that you agree that the empirical data shows that the world has, indeed, been cooling since 2003?

Yes indeed, as could have been predicted decades ago. The decline for the past 120 months has been −0.33 °C/century.

The trend of +3.4 °C/century for the last 36 months could also have been predicted back then. Do you expect the next 36 months to suddenly turn around and climb right back down?

I’m not suggesting you should say no; in fact you’d disappoint me if you didn’t say it was pretty certain to go down super fast in order to offset that 3.4 °C/century rise of the last 36 months. You realize that’s ten times the decline of the last 120 months, don’t you?

Clearly you’re not a betting man or by now you’d be looking around for places to hedge your bets.
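For anyone wanting to reproduce slopes like the −0.33 and +3.4 °C/century figures quoted above: they are just ordinary least-squares trends fitted to monthly anomalies and rescaled to a century. A minimal sketch with a synthetic series (illustrative only, not actual HadCRUT or WoodForTrees data):

```python
# Minimal sketch: least-squares trend of monthly anomalies in degC/century.
# The anomaly series below is synthetic, purely for illustration.

def trend_per_century(anomalies):
    """OLS slope of a monthly anomaly series, converted to degC per century."""
    n = len(anomalies)
    xs = range(n)                       # time in months
    mean_x = sum(xs) / n
    mean_y = sum(anomalies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope_per_month = cov / var
    return slope_per_month * 12 * 100   # months -> years -> century

# A series rising exactly 0.001 degC per month should show 1.2 degC/century.
series = [0.001 * m for m in range(120)]
print(round(trend_per_century(series), 6))  # -> 1.2
```

The same rescaling explains why short windows (36 months) swing so much more wildly than 120-month windows.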

“There has been no net warming for 16 years indeed the earth has been in a cooling trend since 2003 which will continue until about 2035 and perhaps for hundreds of years beyond that.”
———
So how is it that the oceans have been storing so much energy, if the Earth has been cooling? Seems the primary storage of solar energy for the planet should be cooling if the Earth is supposed to be cooling. Odd physics you have.

I disagree. I can see nothing inconsistent with the global surface temperatures cooling slightly, while at the same time the oceans are warming slightly. What is inconsistent about these two phenomena?

“I disagree. I can see nothing inconsistent with the global surface temperatures cooling slightly, while at the same time the oceans are warming slightly. What is inconsistent about these two phenomena?”

——
If the oceans are warming and the atmosphere cooling slightly, then the Earth as a system is still very much gaining energy. The atmosphere would have to cool a great deal to balance the energy the oceans were gaining; if it cooled that much, the ocean would develop sea ice all the way to the equator, most energy flow from ocean to atmosphere would stop, and we’d be back at snowball Earth.

The bottom line is that as long as the oceans are gaining energy, the Earth as a system is gaining energy, which is exactly the condition we’ve had for likely 40+ years.

R. Gates you write “The bottom line is that as long as the oceans are gaining energy, the Earth as a system is gaining energy, which is exactly the condition we’ve had for likely 40+ years”

So what? No one, so far as I am aware, has proved that this warming was caused by increased levels of CO2 in the atmosphere. There is no estimate of climate sensitivity in terms of ocean heat content. How much do ocean temperatures rise as a result of a doubling of CO2? And if you have a figure, what is the reference where this number was estimated? There is no Stern Report equivalent for the damage caused by a slight increase in ocean temperatures.

So why the urgency to reduce the amount of CO2 we put into the atmosphere, and thereby considerably reduce our standard of living?

The last 10 years, 120 months, shows an ever so slight warming. So it’s flat.

It’s currently warming at about .5C per decade. That is a rather odd thing to see in the presence of global kooling.

The Eastern Pacific is virtually absent of anomalous cold, so the SAT is warming pretty aggressively during ENSO neutral, which will last into the spring. It looks like ENSO-neutral episodes that include one full calendar year of neutral usually lead to El Niño, and in January the current ENSO-neutral episode will include one calendar year of neutral.

We are basically repeating the 1980s: 0.06C per decade, matching the Storch number of 0.06C per decade for the 2000s. The 1980s led to aggressive warming in the 1990s. No warming until 2035 is not going to come true. Nice wish, but 400 ppm is honkin’.

@Dr Norman Page: If you think I’m extrapolating from 10 years alone you haven’t bothered to read the various posts on the link I provided.

True that. I mistakenly thought you were using the trend since 2003 as the basis. Please ignore that and accept my apologies, I often get things wrong. I’m sure you have compelling evidence of some other kind that the next few hundred years are going to get colder.

@maksimovich: much [accumulated energy in the oceans] =0.06c over the last half century.

…which translated into joules represents close to 0.15% of the Sun’s energy reaching the surface. 120 PW over half a century equals 2E26 J and 0.15% of that is 30E22 J, in the right ballpark for increase in ocean heat content.

That’s a lot of disequilibrium to sustain over 50 years. If that very slow rate of rise of temperature of the ocean is holding back the Planck feedback, 400 ppm may be a lot worse than we’re presently assuming. (Sorry, I was just following a train of thought, it wasn’t intended as a nag.)
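The joules arithmetic above is easy to check. A quick sketch using only the figures quoted in the comment (120 PW reaching the surface, half a century, 0.15%):

```python
# Check the back-of-envelope numbers above: 120 PW absorbed at the surface,
# integrated over half a century, then 0.15% of that total.
# The 120 PW figure is the one quoted in the comment, not an independent value.

SECONDS_PER_YEAR = 3.156e7       # approximate
solar_at_surface = 120e15        # W (120 PW)

total = solar_at_surface * 50 * SECONDS_PER_YEAR   # joules over 50 years
fraction = 0.0015 * total                          # 0.15% of the total

print(f"total over 50 years: {total:.2e} J")   # ~1.9e26 J, i.e. "close to 2E26 J"
print(f"0.15% of total:      {fraction:.2e} J")  # ~2.8e23 J, the ballpark of 30E22 J
```

So the stated ocean-heat-content rise of roughly 30×10^22 J really does come out near 0.15% of the surface insolation over the period.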

“If that very slow rate of rise of temperature of the ocean is holding back the Planck feedback, 400 ppm may be a lot worse than we’re presently assuming.”
—
A lot worse than some are presently assuming. Others are way out in front on this.

Right. A year ago I’d been thinking this delay due to slow warming of the ocean was the equivalent of CO2 waiting 10-15 years to impact us. After more consideration I’m now leaning towards at least 30 years.

Vaughan, “Right. A year ago I’d been thinking this delay due to slow warming of the ocean was the equivalent of CO2 waiting 10-15 years to impact us. After more consideration I’m now leaning towards at least 30 years.”

If you apply a continuous force you will get a continuous response; you just have to wait until the response reaches a measurement threshold. Waiting X years is just waiting for the next natural cyclic phase to amplify the signal. So you really need to understand the range, cause, and timing of internal variability before you can tease out the CO2 signal.

“Right. A year ago I’d been thinking this delay due to slow warming of the ocean was the equivalent of CO2 waiting 10-15 years to impact us. After more consideration I’m now leaning towards at least 30 years.”
—–
The assumption that a warming ocean is not impacting us is not founded on any solid facts.

R. Gates, “The assumption that a warming ocean is not impacting us is not founded on any solid facts.”

I don’t think anyone has assumed that a warming ocean doesn’t have an impact. If the ocean warming were actually uniform, it would be easy to tease out the impact and, more importantly, what is causing what amount of the ocean warming. But taking a partial period of the warming, ~1955 on, and leaping to conclusions is not what most engineers would consider a “valid” approach. Instead of making that assumption, that 1955 was “normal”, some prefer to use all the data that are available to tease out a bit more insight before leaping.

” But taking a partial period of the warming, ~1955 on, and leaping to conclusions is not what most engineers would consider a “valid” approach. Instead of making that assumption, that 1955 was “normal”, some prefer to use all the data that are available to tease out a bit more insight before leaping.

What do you consider “normal” and why?”
______
I find the use of the adjective “normal” to not be very helpful in scientific discussions. Better to simply state the known data, and the impacts of the effects, and the range of uncertainty. If a certain number of Joules per year are being added to the ocean, then how much, where, how do we know, and what are the full range of effects from that addition on biosphere, cryosphere, weather patterns, storms, etc. ? This is all that matters.

R. Gates, ” If a certain number of Joules per year are being added to the ocean, then how much, where, how do we know, and what are the full range of effects from that addition on biosphere, cryosphere, weather patterns, storms, etc. ? This is all that matters.”

And you compare “all that matters” to what? You can avoid answering that by basing your scientific logic on the comedic stylings of Naomi Oreskes, or you can actually consider why you are so certain of the things you are so certain of.

A current best estimate of the rate of ocean heat uptake (there are plenty to go around) is about 0.35 Wm-2 +/- 0.4 Wm-2. If an SSW event causes the NH to be 0.5C below normal for 3 months, you just erased a year’s worth of ocean heat gain. According to RSS, the “average” temperature of the NH lower troposphere is about 4C, i.e. 334.5 Wm-2; reducing that to 3.5C would be 332.1 Wm-2. Divide by two for the hemisphere and by four for the quarter year, and bang, a year’s worth of ocean heat uptake gone. Does that penetrate your climate-change radar? How well did the models do on SSW events prior to the “pause”? SSW events are a bit like pressure relief valves. What about that “certain number of Joules”?
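The 334.5 and 332.1 Wm-2 figures above are just Stefan-Boltzmann blackbody fluxes evaluated at 4 C and 3.5 C. A sketch reproducing the arithmetic (the halving and quartering follow the comment’s own reasoning, not any standard attribution method):

```python
# Reproduce the Stefan-Boltzmann arithmetic in the comment above:
# blackbody flux at 4 C vs 3.5 C, halved for one hemisphere and
# quartered for a 3-month event.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(t_celsius):
    """Blackbody flux sigma * T^4 for a temperature given in Celsius."""
    return SIGMA * (t_celsius + 273.15) ** 4

f_4c = flux(4.0)                      # ~334.5 W/m^2
f_35c = flux(3.5)                     # ~332.1 W/m^2
per_event = (f_4c - f_35c) / 2 / 4    # hemisphere, quarter of a year

print(round(f_4c, 1), round(f_35c, 1), round(per_event, 2))
```

The per-event number lands around 0.3 W/m^2, which is indeed comparable to the quoted ~0.35 W/m^2 annual ocean heat uptake.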

Gates and Pratt, here is something you guys can do at home to impress your friends.

KNMI Climate Explorer has lots of data and a masking option. RSS has their lower troposphere data in degrees K, which is kind of neat. You will have to download the northern hemisphere data separately, but you can actually compare what is happening in the real world with what you were led to believe was supposed to be happening. It looks like lower-troposphere temperatures tend to follow ocean heat content, and even though we have added massive amounts of CO2, the Earth seems to be capable of finding ways to release energy anyway.

Now I know y’all might find this a bit of a shock, but some scientists become more famous for their mistakes than their successes. Climate Science is going to make a whole bunch of scientists real famous.

It does what he wants, so when Cappy says “So you really need to understand the range, cause and timing of internal variability before you can tease out the CO2 signal”, and then he looks at the results of the CSALT model, it only gets him more upset. http://contextearth.com/2013/12/06/tidal-component-to-csalt/

“If a SSW event causes the NH to be 0.5C below normal for 3 months you just erased a years worth of ocean heat gain.”
——-
Very odd way of thinking about SSWs. These are largely advective events involving the transport of large masses of air vertically (up and down) and horizontally in the atmosphere through planetary wave action. They sure stir up the atmosphere, but the net energy lost from the system is unlikely to be a “year’s worth of ocean heat gain”.

R. Gates, ” They sure stir up the atmosphere, but the net energy lost from the system is unlikely to be a “years worth of ocean heat gain”.”

Think about what you just said, “they stir up the atmosphere.” Energy being lost is what stirs up the atmosphere and the larger the stir the larger the loss. Then the second part, “unlikely to be a…” That is why there is math and physics.

A simple way of looking at it is that if you stir up the atmosphere more energy is transferred to its sink – space. If you stir up the oceans more energy is transferred to its sink – the deeper oceans. Don’t stir and you get the maximum energy retained at the warmest surface. You didn’t like my pot lid rattling analogy, but that is pretty much what SSW and deep convection events are, energy release.

Now why would you assume “unlikely” when there is plenty of data available to take a look?

SSWs are largely planetary-wave activity that (when sufficiently large) results in downwelling air over the pole that compresses the air, often highly distorting or even shattering the polar vortex. There is no doubt that there is some radiational loss of energy during these events, as we can measure it quite readily by satellite, but this radiational loss, largely in the LW, involves many orders of magnitude less energy than the SW solar that warms the global ocean continually throughout the year. The measured enhancement of the Brewer-Dobson circulation related to the increase in SSW events, and the effect this seems to be having on the QBO, are strong indications of more, not less, energy accumulating in the climate system.

Well good morning Webster, “That’s what Cappy’s copyrighted “Redneck Physics” approach is all about.”

Simplicity is what Redneck Physics is all about. I had a professor, retired from Bell Labs, who was all about how simplicity is elegance. “Normal” is what is known as a frame of reference. If you don’t know what “normal” is, you don’t have a stable frame of reference. Physics is Phun :)

WebHubTelescope: This, together with the work that I have done on fossil fuel depletion modeling, means that we finally have a good set of high-fidelity yet first-order system models to be able to forecast global warming in the extended future.

Quietly making the completely false assumption that attribution is settled.

Chief doesn’t realize that the ENSO as characterized by the Southern Oscillation Index (SOI) reverts to a mean of zero. It will never get too far away from a zeroed value while the CO2 signal will keep trending up. Not hard to see which one will win.

lolwot is correct in saying:

” Once you’ve bottomed out, as we have, the slack left for causing more cooling is gone.

It’s like a spring that’s been coiled. The only way now is up, as they say.”

This work is worth publication. The negative trend of TOA net radiance is good observational evidence of global warming, in line with my mathematical work. The earth’s energy budget is neither in deficit nor in surplus; it remains constant. Reduction of radiation at the TOA is compensated by a reduction in the potential energy of the atmosphere, such that the total energy of the earth resulting from solar energy exchange remains constant.

I agree it’s worth publication. If it showed more energy entering than leaving at the TOA and had James Hansen’s name attached, it would be fast-tracked through Nature Climate Change so fast heads would spin.

Your comment saying that an imbalance at the TOA can happen while the earth’s energy level remains constant seems to violate the 1st Law of Thermodynamics, so I’m afraid no reputable publisher would touch it. Maybe Principia Scientific would do it.

Not really; this conclusion is based on respecting the laws of thermodynamics. A reduction at the TOA means that energy accumulates at the surface. However, there is a reduction in the energy of the atmosphere, both thermal and potential, which is observed. Do the math: the net energy of the earth is constant.

“The earth’s energy budget is neither in deficit nor in surplus. It remains constant.”
___
Completely erroneous. The net energy in the Earth system is never constant but is constantly changing based on the total net forcing to the system (positive and negative) and how quickly that net forcing is changing. We know that during periods of glacial advance for example, the net forcing is reduced and the net energy in the system is also reduced. We know that during periods of very active volcanic activity, the net forcing is reduced (which also can lead to periods of glacial advance). The point is, the energy budget is always in flux and net forcings are what dictate that budget.

L&S gave away the store as they also desperately tried to divert from the impact of GHGs.

This is how the pause is explained by the addition of the L&S solar system energy terms to CSALT:

There is a factor called orbital and a smaller one called bary that contribute pseudo-cyclic fluctuations to the temperature. Note that the orbital factor reached a minimum this year. We can thank Scafetta for that one.

Based on HadCRUT3, L&S predicted 0.311 in 2011, rising at an average rate of 0.25 °C/century.

HadCRUT3 itself, the data they were working from, registered 0.340 in 2011 and rose at an average rate of 5.3 °C/century.

If the actual rise averaged 5.3 °C/century, by what criterion does 0.25 °C/century count as an “accurate prediction”?

Extrapolating the implied uncertainty in their prediction to 2100, what L&S called their “preliminary forecast” for 2100 looks far worse than anything threatened by the IPCC!

In case anyone wants to check the numbers 0.311, 0.314, and 0.319, a glance at this duplication of L&S’s Figure 5 based on their exact formulas will show that their Figure 5 faithfully reflects both their own formulas and this duplication.

If the figures you cited are correct, it looks like Loehle and Scafetta predicted temperatures stabilizing in HadCRUT3 at about 0.35C where they actually increased to, let’s say 0.41C. So they were off by 0.06C.

IPCC, on the other hand, predicted warming at a rate of 0.2C per decade and the actual over the past decade has come in at cooling of 0.04C per decade.

So IPCC was off by 0.24C per decade, or 4 times as much as Loehle and Scafetta.

“If the figures you cited are correct, it looks like Loehle and Scafetta predicted temperatures stabilizing in HadCRUT3 at about 0.35C where they actually increased to, let’s say 0.41C. So they were off by 0.06C.”

In just 3 years that’s a lot. No sign that temperatures have stabilized either.

The previous time you tried to use trend lines on WfT to make your point, David, you ended up contradicting yourself and making my point instead with the two trend lines you drew on the Keeling curve. They showed the slopes that I was claiming, not the ones you were.

Congratulations, you’ve done it again!

It almost exactly matches L&S 2011.

Looking at their Figure 5 (an exact copy of which can be seen here), this didn’t look even remotely true. However I figured I’d better check it numerically in case my eyesight was failing or something. So I plotted the trend line for the same period (2000 to now) for L&S’s model as shown in their Figure 5, based on the parameters given in their paper. Here are the results for respectively the slope and the lowest point reached by the respective trend lines, Wft followed by LS.

Whereas HadCRUT3 has only two years below 0.4, and seven above 0.44, in contrast all 11 years of L&S for that period are below 0.4, with eight of them below 0.35.

We now have slightly more insight into your definition of “exactly matches”, though not much more since you have to admit your track record for what you’ve been able to coax out of WfT to date hasn’t exactly been stellar.

Vaughan, L&S don’t hindcast all that well either if you have millikelvin standards. Let’s see: your model with the 2.6C sensitivity means you think about 0.8C below the 1955–2012 mean is “normal”. A range of +/- 0.8C is likely about normal, and the data uncertainty is about 0.25C if you consider the whole length. There should be some reasonable margin of error for a planetary-scale, somewhat complex, nonlinear dynamic system, I would think. If not, it will get to be a lot like predicting the stock market.

Webster, “The problem is that L&S do the CO2 incorrectly. It isn’t a piecewise curve like they show but a log sensitivity to CO2 which is continuous.”

Right, and if you take the log of a slight exponential you get a straight line. The difference between their “fat” straight line and a very precise ln is nothing to write home about. Considering the number of linear assumptions used in climate science, I thought it was funny, in a humorous way; I am pretty sure both are quite capable of showing over-precision.

@cd: Right, and if you take the log of a slight exponential you get a straight line. The difference between their “fat” straight line and a very precise ln is nothing to write home about.

cd, forget mathematical models of CO2 like exponentials and just look at actual CO2 data. When you do, you get the blue curve in this plot.

What L&S have done is to give a nonphysical model of this physically based curve by fitting two straight lines to it, connected at 1942. They call the line on the left “natural warming” and the one on the right natural-plus-anthropogenic warming, with no physics justification of either, other than that the CO2 is supposedly growing exponentially.

In a couple of centuries CO2 might be growing at an exponential rate, in which case the log will straighten out. In the meantime it’s going to be some other sort of curve. Furthermore, when CO2 has the form 1+x where x is much less than 1 but growing exponentially, log(1+x) will be approximately linear in x and hence grow as an exponential function of time.
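The small-x claim is easy to verify numerically: for x much less than 1, log(1+x) tracks x to within a relative error of roughly x/2. A quick check with a few illustrative values:

```python
# Numerical check of the point above: when x << 1, log(1 + x) is nearly
# linear in x, so if x grows exponentially, log(1 + x) does too (at first).
import math

for x in [0.001, 0.01, 0.1]:
    rel_error = abs(math.log(1 + x) - x) / x
    print(f"x={x}: relative error of the approximation log(1+x) ~ x is {rel_error:.4f}")
```

The relative error shrinks in proportion to x, which is why the distinction between "log of an exponential" and "linear in the exponential" only matters once x stops being small.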

Vaughan, if their “natural” line is +/- 0.8C wide and the AGW line is +/- 1.5C wide, there is quite a bit of room for wiggles. As I said, if you fit a log function to the CO2 you are assuming you “know” what “normal” is; just follow the fit backwards. If you don’t think you know what “normal” is, you can use something like they did to exaggerate the point for people who think they do. I think you are missing a little unexpected elegant simplicity.

As far as the log(1+x) goes, make that log(n+a): natural and anthro. Natural also follows a log curve; natural should just bend earlier as the system approaches the new “charged”, “recharged”, or whatever you would like to call it, state.

It’s the shape, not the exact value, dopey. There’s been no upward trend since 2000 in HadCRUT3. In 2010 L&S forecast a continuation of no upward trend through 2025. So far they were right. Did your forecast predict this pause, yes or no? Don’t lie. Don’t spit out a novella of obfuscating numbers; just answer the question.

@DS: It’s the shape not the exact value, dopey. There’s been no upward trend since 2000 in HadCRUT3. In 2010 L&S forecast a continuation of no upward trend through 2025. So far they were right.

David Springer is hiding the decline. For the period he indicated, L&S forecast a decline of 0.71 °C per century. This is clear whether you go by the shape or the numbers.

Springer’s creative approach to hiding this decline is to fit a trend line not to L&S but to HadCRUT3 itself, observe a 0.04 °C per century decline, and then claim that this “exactly matches” L&S.

McIntyre would wipe the floor with decline-hiders who used that method.

Did your forecast predict this pause, yes or no?

I’d been assuming up to now that your questions about forecasting were directed to WHT since he’s the one offering forecasts in this thread. I dropped out of the temperature forecasting business 12 months ago.

However if like Max you’re now holding my 2012 poster against me, Figure 7 plotted temperature from 412000 BC (411999 BC if using the astronomers’ definition) to 2098 AD. The part up to 2010 inclusive was simply the extant data sets, only 2011 onwards left the datasets and relied on the analytic model.

So the portion 2000-2010 was vastly closer to HadCRUT3 than L&S but for the boring reason that it was HadCRUT3.

Out of curiosity I went back and checked what figure 7 had extrapolated for the ten years starting 2011. It shows an upwards trend of 3.51 °C per century.

Taking your “So far [L&S] were right” as license to check initial segments of data, I asked WfT for the trend from 2011 to 2021. I formulated this request to be dependent on when you click on it: if 3 years from now you return to this thread and click on this link you’ll get the trend for the first 6 years starting in 2011. For now it’s 3.

Today WfT reports a rise of 5.87 °C per century for 2011-2021. I’m sure we all hope that this rise will at least have slowed to my projected 3.5 °C/century by 2021, and of course preferably much further!

Don’t lie. Don’t spit out a novella of obfuscating numbers just answer the question.

While I’m not sure whether the innumerate would be the first to accuse others of innumeracy, I’m willing to believe this principle would apply to dishonesty. If you’re also illiterate then what I just said is that those accusing others of lying are the ones most likely to be lying.

The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 W m−2 by Hansen et al. (2005)

OK.

So the net imbalance of 0.85W/m^2 (rounded up to 0.9W/m^2) was not based on an estimate by Trenberth et al. It is a model-derived estimate by Hansen and NOT an empirical number based on actual physical observations.

Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 ± 0.15 watts per square meter more energy from the Sun than it is emitting to space.

Summarizing Hansen’s determination of the 0.85 W/m^2 “imbalance” figure: Total forcing is estimated by GISS model simulations to be 1.8 W/m^2 (1880-2003), including 1.6 W/m^2 for all anthropogenic forcing and 0.2 W/m^2 for natural factors (direct solar irradiance only). Observed warming was 0.6-0.7 degC. Assumed climate response is 2/3degC per W/m^2 (equivalent to an assumed 2xCO2 climate sensitivity of 3 degC), therefore 0.65 degC warming is response to ~1W/m^2. But since theoretical forcing was 1.8 W/m^2, this leaves 0.8 W/m^2 still hidden “in the pipeline”.

Checking Hansen’s logic, it is “circular”. He starts out with an assumed CO2 climate sensitivity, then calculates how much warming we should have seen 1880-2003, using his model-based estimates. This calculates out at 1.2 degC. He then ascertains that the actual observed warming was only 0.65 degC. From this he does not conclude that his assumed climate sensitivity is exaggerated, but deduces that the difference of 0.55 degC is still hidden somewhere “in the pipeline”. Using his 2/3 degC per W/m^2, he calculates a net “hidden” forcing = 0.82 W/m^2, which he then rounds up to 0.85 W/m^2. [Follow the pea as it moves quickly under the walnut shells.]

So the whole postulated “imbalance” is simply a model-generated estimate based on circular logic, which got “rounded up” from 0.82 to 0.9 in the process.
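For anyone who wants to retrace the arithmetic attributed to Hansen in the comments above, here is the chain of figures laid out explicitly (these are the numbers quoted in this thread, not independently verified against the paper):

```python
# Lay out the arithmetic attributed to Hansen et al. (2005) in the
# comments above, using only the figures quoted there.

forcing_total = 1.8          # W/m^2, total forcing 1880-2003 (GISS model estimate)
observed_warming = 0.65      # degC, observed warming over the period
response = 2.0 / 3.0         # degC per W/m^2 (assumed sensitivity, ~3C per doubling)

realized_forcing = observed_warming / response     # ~0.98 W/m^2 ("~1 W/m^2" above)
in_pipeline = forcing_total - realized_forcing     # ~0.82 W/m^2, rounded up to 0.85

print(round(realized_forcing, 3), round(in_pipeline, 3))
```

Note how the pipeline figure is entirely a residual: it falls straight out of the assumed sensitivity and the model forcing, with no independent measurement in the chain, which is exactly the circularity the comment is pointing at.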

I don’t remember giving such an estimate. Perhaps you refer to my comment where I reported what two different sources say: Loeb et al, 0.50±0.43 W/m^2 based on OHC, and McGee, roughly -0.5 W/m^2 for the period starting 2001, but not over the whole period that he considered.

Neither of those is my estimate, and McGee’s has been neither published nor verified as technically correctly deduced.

The value of Loeb et al is the only really empirical value of the three (the third being that of Hansen).

Incidentally you can see the theoretical difference between 12 10 8 and 12 9 7 here, namely the dark blue and dark green curves. Although 12 10 8 has the smallest HSSL, if you have significant signal at periodicities of 9 and 7 months (frequencies 1.35 and 1.7), 12 9 7 will take them out better than will 12 10 8. 12 alone will of course take out periodicities of 6, 4, and 3 months.
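The reason 12 9 7 takes out 9- and 7-month periodicities better than 12 10 8 is that a boxcar (moving average) of width w exactly nulls any sinusoid whose period divides w evenly. A toy demonstration with cascaded boxcars on a synthetic signal (this is an illustration of the principle, not the actual filter code behind the plot):

```python
# Sketch of the cascaded moving-average idea above: a boxcar of width w
# completely nulls any sinusoid whose period divides w exactly, so the
# 12-9-7 cascade kills a 9-month cycle that 12-10-8 merely attenuates.
import math

def boxcar(xs, w):
    """Moving average of width w (valid region only)."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

def cascade(xs, widths):
    for w in widths:
        xs = boxcar(xs, w)
    return xs

months = range(600)
nine_month_cycle = [math.sin(2 * math.pi * m / 9) for m in months]

leak_12_10_8 = max(abs(v) for v in cascade(nine_month_cycle, [12, 10, 8]))
leak_12_9_7 = max(abs(v) for v in cascade(nine_month_cycle, [12, 9, 7]))
print(f"12-10-8 residue: {leak_12_10_8:.4f}, 12-9-7 residue: {leak_12_9_7:.2e}")
```

The 12-9-7 residue is zero to floating-point precision (the width-9 stage sums exact full periods), while 12-10-8 lets a small but nonzero fraction of the 9-month cycle through.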

@PP: All the above is part of the fast dynamics; on longer time scales we have the slow dynamics of heat transfer within the ocean. Its properties determine which part of the original forcing is soon canceled by the Planck feedback, and which part remains to warm the ocean. The faster the heat transfer within the ocean, the more slowly the surface warms and the larger the part of the forcing that persists.

Could we model the ocean as a capacitor C connected to a pullup resistor R connected to a voltage source V? In that model a high resistance corresponds to a slow rate of heat transfer to the ocean, while the voltage models the temperature at which the Planck feedback cancels the original forcing.

To the extent that this model works, R governs only how fast the potential across C approaches V, it does not limit how close C can get to V.

A more accurate model would take the MOC into account, which steadily supplies cold water from the poles to the tropics. One might model this as a second resistor R’ in parallel with C. The voltage at C would then converge to R’V/(R+R’) instead of V.
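The circuit analogy above can be simulated in a few lines. A toy sketch with arbitrary component values (pure illustration, nothing here is calibrated to the real ocean), showing that the capacitor voltage settles at R'V/(R+R') once the bleed resistor is included:

```python
# Toy version of the circuit analogy above: source V through pullup R,
# capacitor C to ground (the ocean's heat capacity), and a bleed
# resistor R2 to ground standing in for the MOC's supply of cold polar
# water. All component values are arbitrary illustration numbers.

V, R, R2, C = 1.0, 2.0, 3.0, 1.0
dt, vc = 0.001, 0.0

# Forward-Euler integration of C * dvc/dt = (V - vc)/R - vc/R2,
# run long past the effective RC time constant.
for _ in range(100_000):
    dvc = ((V - vc) / R - vc / R2) / C
    vc += dvc * dt

predicted = V * R2 / (R + R2)   # steady state from the comment: R'V/(R+R')
print(round(vc, 4), round(predicted, 4))  # both ~0.6
```

As the comment says, R sets only the time constant; it is the ratio R'/(R+R') that sets where the capacitor (ocean) voltage (temperature) ends up.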

These issue are discussed also in the 4. and 5. post of Isaac Held (link in the link list of this site).

In post 5 Isaac stresses dependence on latitude. Polar asymmetry is also a factor (ACC?). My picture didn’t include these. But what does he have to say in posts 4 and 5 about the above?

Vaughan, I thought it was pretty much the same thing, except he is comparing a fast upper and a slow lower response. You would be comparing hemispheres if you are trying to include the MOC impact, right?

Then you have an NH average SST close to 20C and an SH SST close to 17C for the 60S–60N latitudes, producing a hemispheric imbalance of ~18 Wm-2. Using 4C as a reference (approximate thermocline temperature), I get an NH R of 0.19 and an SH R of approximately 0.193, which I think compares pretty well with the saturated lapse rate, since we do live on a water world.

I love it! Thanks for digging that up, Howard. (It’s from 1962 but the 1849 date on the cover had me fooled for a second.)

I don’t know if there’s enough data to do it yet, but it would be great to be able to estimate values for R (from the tropics) and R’ (from the poles), whose inverses (conductances) very crudely represent the rate at which heat is transported around the ocean.

However since ocean currents are complex the system is probably best modeled as a large number of capacitors, resistors, voltage sources and current sources. Huang’s recent (2010) book Ocean Currents has a wealth of relevant information for those patient enough to try doing something like that. Maybe there’ll be some stuff on it at AGU FM this week, I’ll keep an eye out.

Since when has “politics” blocked development of hugely expensive military weapons and delivery systems? Even huge boondoggles that few think will actually work, like the notorious Star Wars, get funded. They get funded because all the congress critters, or enough of them to get it authorized, get a taste of the action in jobs brought home to their districts.

That said, $3-6/gal. for jet-A in the middle of the ocean is a great deal. For people who drive cars that don’t run on kerosene and get highly refined gasoline @ $3/gal, which includes expensive additives, retail markup, and plenty of taxes that US warships don’t pay, it’s a whole different ballgame.

@ Pekka Pirilä | December 4, 2013 at 4:13 am | … said:
” The extensive analyses done tell that maintaining the required rate of investment is very difficult and costly. That explains the otherwise very strange situation that oil price has remained high in spite of the poor economic development. With a stronger growth the price would surely be much higher. ”

Here in the US, we still import oil, so we haven’t yet become energy independent. So, IMO, the price of oil has remained high due to its shortage.

Can you elaborate on what is meant by “the required rate of investment is very difficult and costly?”

As long as oil can be produced at a significant profit, the rate of investment is sustainable. So I’m not understanding what is meant by the above. As oil production is still increasing in the US, it is expected that the price will drop into the $80s at some point in the near future. This will cut out higher-priced production, but one effect of the higher prices has been that producers have had time to find ways to explore, drill, and produce from oil shale more cheaply. Technology improvements have been abundant and impressive. So the effect is that more oil can be profitably produced at any arbitrary price point.

Incidentally you’ll see from that Wikipedia article’s talk page that in 2008 I was confused myself about whether biofuels were carbon neutral—when I created the article I neglected carbon neutrality altogether, but in response to an immediate complaint I subsequently had a shot at fixing both this and the nagging tone here. The article has evolved further since then, reflecting the ongoing debate about that topic in preference to any absolute claim of carbon neutrality. Although sequestration obviously happens it’s not a simple thing to observe in nature.

Not counting destruction of old growth forest for farmland (which I don’t believe happened in the US specifically for biofuel crops) the only way a biofuel could not be carbon neutral is if net energy gain is negative. US produced corn ethanol is net positive by a factor of 1.3 and that’s the least efficient biofuel I know of and the most conservative estimate of net energy gain for it. In the earlier days, over 20 years ago, it was reportedly net negative.

Energy balance for corn is somewhat higher now (or at least as of 2008, the most recent data I could find) according to USDA. Input/output energy is 1 unit in, 2.3 units out when including by-products such as cattle feed. The residue left after fermentation has high protein content – only the carbs are taken out. Much more interesting is that the crop residue (stalks and leaves; stover), which is combustible, is discarded. I’m not sure that discarded is the right word, as I believe it’s plowed under, which fortifies the soil for the next crop cycle. The interesting part is that if just half the crop residue were burned to produce energy, the ROIE (return on input energy) would skyrocket to 25 to 1. In other words there’s nearly twice as much energy in the crop residue as was used to produce the crop. Electrical power plants and process heat plants are able to use the stover for fuel.

An explanation that I imagine Max would prefer is that he’s right. After carefully rereading this thread I realized that on Dec. 2 at 1:32 am Max did indeed allow 10% as the rise in per capita emissions over the past four decades. 18 hours later I was still falsely claiming he was ignoring the possibility that they could increase at all.

Relative to the actual increase however, 10% over four decades is pretty close to no increase at all. I would apply for a moral victory if I thought that was worth anything on CE, but I have to be realistic.

I think you owe Max an apology, Vaughn. The global per capita rate of CO2 emission went up barely 5% from 1980 to 2010 based on figures from British Petroleum.

Where did you get data that indicates otherwise?

BP data appears corroborated by other sources for global per capita energy consumption.

Actually I meant to say that global CO2 emission per capita is not projected to grow at all through 2035. If the actuarial weenies are right, global population stabilizes at 9bn circa 2050, so given just conservation, the rising price of fossil fuel, and a modest build-out of known renewable energy sources, we should expect global CO2 emission to stabilize mid-century without any draconian efforts to reduce it. That means anthropogenic warming halts right about the point where even the bandwagon CAGW boffins admit the warming is still a net positive: warming the frozen north in the winter, extending growing seasons, fertilizing the atmosphere, and reducing fresh water requirements per unit of plant growth.

What’s not to like except for the fact that there isn’t enough fossil fuel to sustain the consumption rate more than 100 years or so with current economically recoverable reserves… which is the only compelling reason to pursue alternatives. But it’s a really good reason in and of itself!

@DS: The global per capita rate of CO2 emission went up barely 5% from 1980 to 2010 based on figures from British Petroleum.

Those BP figures show an increase of 10% from 1980 to 2010. You can find those figures, sampled 5-yearly, by taking the fourth column of the table in my comment to Max last Tuesday and converting it from GtC to GtCO2 by multiplying by 44/12. These agree spot on with the BP graph you showed.

If you fit a trend line to the 7 data points for 1980 to 2010 you’ll see that the two ends of the trend line are at 4.14 and 4.56 GtCO2, a rise of 0.42 GtCO2. I leave it to you to calculate what percentage of 4.14 that is. Let me know if you get 5% again.
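The two numbers in dispute can be checked in a couple of lines, using only figures quoted in this exchange (the 44/12 conversion is the molecular-to-atomic mass ratio of CO2 to carbon):

```python
gtc_to_gtco2 = 44.0 / 12.0    # mass ratio of CO2 to its carbon content
start, end = 4.14, 4.56       # trend-line endpoints quoted above (GtCO2)
rise_pct = 100.0 * (end - start) / start
print(round(rise_pct, 1))     # ~10.1: roughly a 10% rise, not 5%
```

So a 0.42 GtCO2 rise on a 4.14 GtCO2 base is about 10%, consistent with the figure claimed in the comment above.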

Vaughn Pratt doesn’t seem capable of admitting mistakes.

Besides the above, this is yet another of your mistakes. So far in this thread alone I’ve apologized to three people for making mistakes.

The reason I admit to my mistakes when I make them is that it would make me look bad not to. But we may be addressing different audiences: you might well lose face with your audience by admitting to a mistake.

Speaking of dummies I’m reminded of your failed attempt to find anything conclusive from your botched attempt to replicate Wood’s experiment. I’d been meaning to ask if you bothered to replicate the pane of regular glass that was placed between the sun and both test chambers. Wood did that so the near infrared from the sun would be absorbed outside the test chambers and not interfere with the results by unevenly heating up the windows into the test chambers. This would explain why you got a much higher temperature reading near the chamber ceiling – you were measuring the temperature near the overheated window.

On the other hand maybe that occurred to you and you weren’t really interested in honestly replicating Wood’s experiment; you were just interested in proving a preconceived notion shared amongst northern California liberals that CO2 is bad stuff, which is in itself strong evidence that stupid is infectious.

“Have you ever considered why CO2 plummeted from 6000 ppm to 180-280 ppm over a period of a few hundred million years?”

Of course I have. It’s because we’re in a frickin’ ice age. Even the interglacial is cold compared to the majority of the earth’s history, when there have been no polar ice caps. There are the remains of temperate forest underneath the Antarctic ice. How does that square with your voracious plant diatribe? Plants that grow voraciously die and decay just as rapidly too. Fercrisakes, Vaughn, start thinking a little harder about this stuff. You glom onto the first conclusion that seems to make sense to you and then you’re immovable.

Max, you need to dig a little deeper. Carbon storage in soil quickly reaches equilibrium. Topsoil doesn’t go down forever. Duh. Granted, in some places it’s been badly depleted by agriculture, but in the big picture human agriculture takes up only a tiny fraction of the earth’s surface. The Big Kahuna (Vaughn, pay attention too) as far as carbon reservoirs go is the global ocean. Terrestrial storage is a pittance in comparison. And the reason why atmospheric CO2 is so low today is because the ocean is freaking cold (ice age, duh) and can dissolve a lot more CO2. Are you boys dumbasses or what? Good grief.

“And the reason why atmospheric CO2 is so low today is because the ocean is freaking cold”
David, did you ever realise that “freaking cold” for the deep oceans is ~15K above the infamous 255K effective temperature for Earth, and, more importantly, ~80K higher than the average surface temperature of our Moon?
If you want an answer to the question why Earth’s average surface temperature is over 90K above the Moon’s, don’t go looking in the atmosphere.

The Big Kahuna (Vaughn pay attention too) as far as carbon reservoirs is the global ocean.

A more accurate statement would be that you were paying attention to me when I pointed out here the diatomaceous rain of carbon to the ocean bottom, as confirmed by Michael Marler (search this thread for “Science Magazine”). This rain process is very slow, but cumulative. Ocean carbon is approximately 40 teratonnes. Carbon in sedimentary rocks and marine sediments resulting from that process is vastly more, at around 80,000 teratonnes. Google “Estimated major stores of carbon on the Earth”.

@David Springer | December 8, 2013 at 1:46 am |
” The only people who think the earth’s average surface temperature is 90K above the earth’s moon are the Sky Dragon Slayer brigade. Go away.”

Don’t know what the Sky Dragons believe, but I do know what Nasa has MEASURED: http://www.diviner.ucla.edu/science.shtml
Average surface temperature for the moon comes in at 197K, earth around 290K. Are you saying that the difference is not 93K?

“Oh come off it, Springer. The voracious plants were long before Antarctica relocated to the South Pole. Look it up. Try Gondwana, and also Oxygen catastrophe.”

Correct. That’s because the earth hasn’t had polar ice caps through most of its history. Life evolved mostly without them. How much CO2 dissolves out of the ocean when it’s much warmer? The high partial pressure of CO2 in the past has nothing to do with plants. It has everything to do with higher ocean basin temperature. Duh.

You can do better than that. Ask your Magic 8-Ball again.
You can do better than that. Ask your Magic 8-Ball again.

Calm down. You’re repeating yourself. Accept the simple fact that you were wrong about plants causing high partial pressure of CO2 in the past: it was because there were no polar ice caps and a much warmer ocean, which dissolves less CO2. You don’t take being corrected very well, do you?

@VP: Have you ever considered why CO2 plummeted from 6000 ppm to 180-280 ppm over a period of a few hundred million years?

@DS: Of course I have. It’s because we’re in a frickin’ ice age.

Springer is presumably appealing to Theory 1, that CO2 does not control temperature, only vice versa. Hence if we’re in an ice age we can expect lower CO2.

This account competes with Theory 2, that reducing CO2 reduces temperature, and that the invention of photosynthesis (which converts CO2 to oxygen) seriously depleted the atmosphere’s CO2.

These would be equally viable theories if Theory 1 could explain the “frickin’ ice age”.

The problem with Springer’s Theory 1 is that the ice age is something God inflicted on us for no apparent reason other than that God has the sorts of fits of impatience one can read about in the Book of Job.

Theory 2 makes more sense. It says that the invention of photosynthesis introduced oxygen around 2.4 Gya.

Back then the atmosphere was largely hydrogen (molecular weight 2) and the simple lightweight hydrides methane (16), ammonia (17), and water vapor (18). The oxygen oxidized the methane to the much heavier CO2 (44), a very stable gas immune to oxidation.

The photosynthesizers then evolved to use CO2 instead of methane as their fuel (good thinking), which 500 Mya was around 6000 ppm.

Around 49 Mya the Azolla event supposedly occurred, dragging CO2 down from its value then of 3500 ppm to 650 ppm in the short space of 800,000 years. This was a short 4 million years before Antarctica bid farewell to Australia and emigrated to the South Pole to raise legions of penguins and leopard seals.

Unlike Theory 2, Theory 1 offers no mechanism by which the temperatures could plummet.

And by the way Vaughn, photosynthesis was “invented” 3.5 billion years ago by the oldest organism on the planet – photosynthetic bacteria – which are still with us today and being bioengineered by Joule Unlimited to solve the renewable energy crisis you atheist dimbulbs are so concerned about. Photosynthesis didn’t reduce CO2, it oxygenated the atmosphere, and this happened long before terrestrial plants arrived on the scene a scant 500 mya, after the Cambrian explosion. It was called the great oxygenation event.

@DS: Accept the simple fact that you were wrong about plants causing high partial pressure of CO2 in the past

Now you’re simply lying. The only reason I can think of for why you make up ridiculous statements like this is to goad people into calling you a liar. Well, congratulations, liar, you’ve finally succeeded.

@DS: Ah Vaughn Pratt now reveals himself as a bigot in an attempt to make me look like a bible thumping Christian…I’m an agnostic.

What? I never mentioned Christianity, and the only reason I can imagine for why you think Job has something to do with Christianity is that agnostics are ignorant about such things.

In any case I don’t care. All I care about is why you believe in your Theory 1, that the temperature declined for some other reason than that the CO2 declined. If it’s an article of faith for you, fine. If not, why did the temperature decline?

@DS: How much CO2 dissolves out of the ocean when its much warmer? The high partial pressure of CO2 in the past has nothing to do with plants.

You must have misread something. It was the low partial pressure of CO2 that had to do with plants. Plants have never raised the CO2 level.

It has everything to do with higher ocean basin temperature.

This doesn’t seem to fit the facts. 3.5 Mya was very hot yet there was no significant CO2 in the atmosphere, just hydrogen and the light hydrides methane, ammonia, and water vapor.

If you were right the high temperatures would have driven CO2 out of the ocean.

Photosynthesis didn’t reduce CO2 it oxygenated the atmosphere and this happened long before terrestrial plants arrived on the scene

You seem to have gotten this from me where I wrote “the photosynthesizers then evolved to use CO2 instead of methane as their fuel” except that you left out the bit about them getting their carbon from methane before they switched to CO2. Apparently only the oxygen output is of interest to you and not the carbon the photosynthesizers were getting as their payoff for photosynthesis.

There is nothing that can be done with a Smartest Guy in the Room, as any attempt to convince him that he is missing out on a huge aspect of bipedal development will simply lead to him thinking that you don’t understand his unique and highly sophisticated perspective. In that sense, the Smartest Guy in the Room is stuck in a state of arrested development, at approximately age fourteen.

He is a lost cause and cannot be rehabilitated.

Don’t worry about it Vaughan old buddy. Jabberwock is a hopeless case. What can you do with such people but descend to their level. As you know – a white trash Aussie like me can’t help but play even dirtier in a street fight – and we never back down. You’ve been very restrained I thought.

What this does to Judith’s ambitions for a civilised eSalon I don’t know. If it was me I’d be throwing Jabberwock off the blog.

Can I do the formalities?

Jabberwock the Jarhead – the tribe has spoken. Hasta la vista baby.

I like that idea. We could take it further. Last survivor wins a brand new only driven on Sundays to church climate model and a petabyte of DNA data storage.

What are you doing up at 3.00am? I never blog at 3.00am – I try to go back to sleep. I hope you are well? I seem to be at the stage of life of surly complaining about my health. I just spent 2 days in bed alternately shivering and sweating and not eating much at all for some 3 days. Last night I ate a chicken wing and some fruit and put my chicken leg in the fridge. All day I have been looking forward to that chicken leg but when I went to get it it was eaten. ‘I didn’t eat it,’ she said, ‘I just nibbled on it.’ I would be really pissed but it just adds to the general air of being hard done by.

This new threading format is difficult to negotiate in these long threads.

@CH: What this does to Judith’s ambitions for a civilised eSalon I don’t know. If it was me I’d be throwing Jabberwock off the blog.

Appreciate the supporting words, mate. Judith’s policy seems to be temporary moderation, perhaps on the ground that permanent moderation can be worked around by reincarnation as a sock puppet. Judith doesn’t have Wikipedia’s 170-person staff to regulate sock-puppetry.

Of course it’s my bloody fault for breaking my vow of silence with Davidwad, which then gives him the opening to see how long he can drag out the unpleasantries. He acquires targets by lobbing shots in various directions that he’s found promising in the past. Once he’s acquired one he focuses on keeping it acquired as long as he can. It must be hard work but apparently he finds it rewarding. I’ll try to limit my vow-breaking to a dull roar.

What are you doing up at 3.00am?

If you mean the time on the comments, that’s East coast time, 3 hours ahead of us on the West coast.

Sorry to hear about your personal climate change, doesn’t sound like fun. I’m doing fine myself, bypasses and radiation are a great comfort in my old age. Hang in there.

You cite a NOAA tutorial suggesting that the total carbon content in all remaining fossil fuels on Earth is around 6,000 Gt and that a significant portion of this is in methane clathrates beneath the ocean.

@VP: The voracious plants were long before Antarctica relocated to the South Pole.

My mistake, that should have read “The voracious plants were long before Antarctica separated from Australia.” I was picturing Antarctica drifting south after the separation around 45 Mya, but it was Australia that drifted north.

Max, you can’t predict what carbon future technology can recover based on today’s technology.

If technology for mining the carbon in sedimentary rocks is developed, you can’t predict the limits of such future technology other than on the basis of the total amount of carbon in sedimentary rocks.

The 40,000 GtC in the ocean is far from being all the carbon; it’s merely the carbon actively participating in the carbon cycle DS was referring to. Carbon sequestered in marine sediments and sedimentary rocks does not participate in the carbon cycle.

The Wikipedia article Abundance of elements in Earth’s crust quotes three sources, respectively 0.18%, 0.094%, and 0.02%. Googling sources for mass of the crust puts it at around 20 exatonnes. Those percentages therefore correspond to crustal carbon masses of about 40, 20, and 4 petatonnes. Even the last is a hundred times the 40 teratonne figure for carbon in the ocean.
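Those products can be reproduced directly; a quick check using the figures quoted above (the comment rounds the exact results of 36, 19, and 4 petatonnes to 40, 20, and 4):

```python
crust_mass_t = 20e18              # ~20 exatonnes of crust, per the text above
ocean_carbon_t = 40e12            # ~40 teratonnes of carbon in the ocean
for pct in (0.18, 0.094, 0.02):   # the three quoted crustal abundances, %
    carbon_t = crust_mass_t * pct / 100.0
    print(f"{pct}% -> {carbon_t/1e15:.0f} Pt, "
          f"{carbon_t/ocean_carbon_t:.0f}x ocean carbon")
```

Even the smallest estimate, 4 petatonnes, is a hundred times the 40 teratonne ocean figure, as stated.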

While we’re obviously far from having the technology to mine all this, a thousand years of technology development could conceivably tap into a large percentage of this resource.

There’s a race here between peak carbon and technology. Who’s to say which will win?

Whether humans will completely abandon carbon-based fuels at some point before then is no less speculative than the above.

This is not to say that a significant fraction of this carbon will end up in the atmosphere since future technology may be able to address that too. Today these are unknowns.

The total amount of carbon including carbonates is hardly relevant when we consider releases of CO2 to the atmosphere from energy production or any similar activity, as the carbon in carbonates is already bound essentially as tightly as it can be.

The temperature of the moon differs wildly at the surface even at adjacent locations because the regolith conducts poorly and there’s no air, so the differing albedo of two side-by-side rocks makes for wildly different temperatures. We determine the mean temperature easily on earth in any one place by digging down until the temperature is constant all year round. That’s about a meter deep in most places. Two Apollo missions placed temperature probes at various depths up to 3 meters. At a depth of one meter the temperature became constant, as one would expect.

The mean measured temperature of the moon is ~250K which is very close to the 255K expected of a blackbody at that distance from the sun. The measured mean temperature of the earth is given at 288K from which we get the very commonly cited figure of 33C for the earth’s greenhouse effect.

@Springer
Since the temperature in deep craters near the Moon’s poles is ~25K instead of 2.77K, one can assume the moon still has internal heat escaping at ~50 mW/m^2 or so. The deeper you drill the warmer it gets (on earth ~25K / kilometre).
The large temperature fluctuations at the Moon’s equator are noticeable in the top 30 centimetres only.
“The mean measured temperature of the moon is ~250K which is very close to the 255K expected of a blackbody at that distance from the sun.”
Not sure where you got this nonsense from, but NASA has measured the Moon’s mean temperature as 197K.
With a TSI of 1364 W/m^2 spread around a whole sphere, the expected BLACKbody temperature would be 278K.
Earth reflects 30%, so its GREYbody temperature would be 255K; the moon reflects 11% and the result is 270K.
In the real world the sun only heats half a planet, so calculating the average greybody temperature for the moon:
Light side is 0.89 x 1364 / 2 = 607 W/m^2, SB -> 322K
Dark side is 2.77K (cosmic background radiation)
-> average greybody temperature for the moon is (322 + 2.77)/2 = ~163K.
Add the “base” temperature of around 30K and we get really close to the MEASURED result of 197K (the rest is due to heat transport from the day to the night side).
The difference between the two methods stems of course from the fourth power in the SB formula.
I’m not sure if NASA gathers around the Principia Scientific flag, but I’m pretty sure they are not cranks, seeing they can get two satellites in orbit around the moon.
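The two averaging conventions being argued over here are easy to compare side by side. A sketch using the TSI and albedo values from the comment above (it only reproduces the arithmetic of each convention and takes no position on which better describes a rotating body with heat storage):

```python
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1364.0              # solar constant used in the comment, W/m^2

def t_uniform(albedo):
    """Absorbed sunlight spread over the whole sphere (divide by 4)."""
    return ((1 - albedo) * TSI / 4 / SIGMA) ** 0.25

def t_two_sided(albedo, t_dark=2.77):
    """The comment's convention: a sunlit hemisphere (divide by 2)
    averaged with a dark side at the cosmic background temperature."""
    t_lit = ((1 - albedo) * TSI / 2 / SIGMA) ** 0.25
    return (t_lit + t_dark) / 2

print(t_uniform(0.30))     # ~255 K, the usual effective temperature for Earth
print(t_uniform(0.11))     # ~270 K for the Moon under the uniform convention
print(t_two_sided(0.11))   # ~163 K under the hemisphere-averaged convention
```

The gap between 270K and 163K comes entirely from averaging T rather than T^4: the uniform-sphere figure is the effective temperature, which by Jensen’s inequality is an upper bound on the area-mean temperature of any body in radiative balance.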

Vaughn, since there were far more plants on the planet prior to the ice age, I don’t understand how you can come to believe that plants lowered the partial pressure of CO2 to the low level of the last few million years. CO2 goes down with temperature. Interglacial is 280ppm and glacial is 200ppm. There are fewer plants during the glacial epochs, so by your logic there should be more CO2? Please explain. Clearly the temperature of the ocean is what controls CO2 partial pressure.

I can’t come to a meeting of the minds with California liberals. For instance you will tenaciously cling to your conclusion that ocean temperature is not the driver of CO2 partial pressure when every bit of data and knowledge about dissolution of gases in water clearly makes that evident. I think you’re dishonest because I have a difficult time believing you’re that ignorant.

The carbon in methane clathrates might not be economically recoverable but a warmer ocean will certainly release it. IIRC methane has a lifetime of about 5 years in the atmosphere before it decomposes into CO2 and H2O. As the ocean gets warmer it raises atmospheric CO2 and as it cools it lowers it. This isn’t disputable except by those who are either ignorant or dishonest or both.

“This doesn’t seem to fit the facts. 3.5 Mya was very hot yet there was no significant CO2 in the atmosphere, just hydrogen and the light hydrides methane, ammonia, and water vapor.”

I think you must mean 3.5bya not 3.5mya. That far in the past the earth had a lot more internal heat of formation left in it, as well as far higher levels of radioisotopes decaying and adding to the crust temperature. That far back in time is not comparable. In general you should stick to the time since the Cambrian explosion, which was 500mya. Terrestrial plants hadn’t yet evolved before that point, so any land masses were quite barren.

I can hardly find a more authoritative source. Your knowledge of the earth’s history and the course of evolution is filled with errors. It’s frustrating. I’ve been obsessed with natural sciences for over 50 years and it’s painfully evident it’s only been a hobby for you, beginning late in life.

I tested it early this year. She’s capable enough to make sock puppets expend more effort defeating IP blocks and anonymous proxy rejection than I’m willing to make, except once or twice just to make a point. If she puts someone in moderation that’s pretty much where they stay until she decides otherwise. I must say she’s one of the most forgiving blog owners I’ve had the dubious pleasure of being 86’ed by. So much so that I felt bad about testing her limits.

“The problem with Springer’s Theory 1 is that the ice age is something God inflicted on us for no apparent reason other than that God has the sorts of fits of impatience one can read about in the Book of Job.”

Vaughan Pratt | December 8, 2013 at 4:02 am |

“What? I never mentioned Christianity, and the only reason I can imagine for why you think Job has something to do with Christianity is that agnostics are ignorant about such things.”

Sorry. I didn’t realize you meant to mock Jews and Muslims too. Usually horse’s asses like you don’t have the balls to do that and just focus on Christians.

Wouters, I left a link to the Apollo 15 heat transfer experiment, which measured the temperature of the moon below the surface, deep enough that it becomes constant.

The moon is dead, by the way; it has no internal heat.

You can either trust the physics for gray bodies and the experiments deployed by Apollo 15 and 17, which agree with the theoretical temperature, or you can keep up with the infrared surface measurements, which measure only the top few molecules of the surface. Up to you.

@ Springer
Have a look at this PDF: http://www.lpi.usra.edu/meetings/leag2012/presentations/Greenhagen.pdf
Page 15 has a comparison between the Apollo 15 & 17 data and the Diviner data.
Get used to it. The calculation of the effective temperature for the moon resulting in 270K is dead wrong. Consequently the result for earth at 255K is equally wrong. Spreading incoming solar over a whole planet was a stupid idea. We have a day and a night side with only one sun.
A greybody at our distance from the sun, with the same albedo as the moon and rotating once every orbit, has a radiative balance temperature of ~163K.
Introducing more rotation will increase this temperature somewhat, but then you have to specify a heat storage capacity.
Now let’s see how our atmosphere can increase the average surface temperature more than 90K above what the sun is doing on the moon.

I was pretty surprised how well the satellite SST and lower troposphere actual temperatures tracked. Unfortunately, both Reynolds and RSS have issues closer to the poles where most everything is happening.

Right, capn, the poles are a huge deal. The MOC conveys ice cold water from the poles to the rest of the ocean, which helps explain why it’s so cold below the main thermocline even far from the poles. (Before I knew about the MOC or thermohaline circulation I used to think it was a combination of the slow diffusion WHT talks about with the deep cold of the last glaciation. However the MOC moves much faster than that, making the last glaciation largely irrelevant to ocean temperatures at all depths.)

There does seem however to be a Hadley cell of sorts in the ocean that causes warm tropical water to dive down at 30 degrees from the equator (both sides), displacing cold water from the depths which then wells up at the equator (more or less, modulo the ITCZ perhaps). Commenter AZ brought this to my attention recently based on ARGO data, very insightful, wish he’d send me an email so I knew who he was. Does any of this gibe with your modeling?

There have only been polar ice caps for the past few million years. In the big picture they’re not a regular feature of the earth. Ocean is a lot darker than rock, and with Greenland and the Antarctic melted the ocean surface is a lot bigger. So the ocean gets both bigger and warmer. But not warmer at the low latitudes and surface; warmer at high latitudes and at depth. The globe gets a greening that stretches from pole to pole, accelerated by the ocean burping enough CO2 out of solution to drive atmospheric partial pressure to a couple thousand ppm.

I don’t get the concerns of the so-called green movement. They should look forward to this and be on-board for burning enough fossil fuel to tip the earth out of the ice age and have it on its way to being green from pole to pole again. It’ll take thousands of years to happen but it’s got to happen fast before the next Milankovitch cycle pushes the planet towards the tipping point back into another glacial epoch. What we should be spending money on is figuring out how much anthropogenic warming is enough to end the ice age. But noooooooooooooooooo…

Deep ocean temps are a balance between the vertical transport by turbulent eddies created both by surface winds and bottom bathymetry and the tendency of warm water to rise to the surface. There is nothing keeping it in the deep.

Cold water sinks in a couple of places on the planet and rises in a few other places. The residence time in the deep ocean is something like 1000 years.

Cold water will tend to stay at the bottom until turbulent abyssal flow drives it to the surface. Deep ocean heat wouldn’t change at all if this was the only factor.

The temperature variability is – as I said – driven by a dynamic balance between turbulent eddies transporting warm water deeper and buoyant rising of warmer water. Warmer water tends not to stay in the deep. It is replaced by water turbulently transported deeper.

Cold water sinks in a couple of places on the planet and rises in a few other places.

Cold water sinks EVERYWHERE on the planet where there is less dense water beneath it. That’s a phuck of lot of places. Do you bother to think about what you write, even for a moment, before you commit it to posterity?

My modeling is mainly static so any asymmetry stands out like a sore thumb. That is what led me to papers by Toggweiler, Stott, Brierley etc. In fact my estimate of the impact of zonal SST temperature gradients of 0.5C agrees with Brierley’s estimate of 0.6C, so I like my static models.

Ideally, you would have Ekman transports starting at the Hadley, Ferrel, and polar cell convergences, but the NH Ferrel-polar cell convergence is erratic thanks to land/ocean distribution, while the ACC keeps the SH near perfect. That asymmetry has a huge impact on ocean heat transport.

Because of that you really need actual temperatures instead of anomalies to see what is going on. Toggweiler’s “Shifting Westerlies” is a nice, short, to-the-point paper on how migration of the thermal equator, aka the ITCZ, has a longer-term impact on climate, likely greater than CO2 forcing.

You spend your time abusing, berating and insulting people and complain about flaming? Such a complete lack of self awareness shouldn’t be surprising.

I thought you might like a picture – showing the locations of the few places where the specific combination of salinity and temperature results in deep water formation. Obviously you didn’t scroll to the end. Practice patience and intellectual rigour, grasshopper.

The Texas A&M University site is one I have visited many times over the years. It is very good. However – if you want to progress beyond kindergarten climate science you need to have broad sources – multiple – and compare and contrast. Not merely Wikipedia or a 30-year-old Mother Earth News article.

Of course Jabberwock suffers from Smartest Dickwad in the Room syndrome.

There is nothing that can be done with a Smartest Dickwad in the Room, as any attempt to convince him that he is missing out on a huge aspect of bipedal development will simply lead to him thinking that you don’t understand his unique and highly sophisticated perspective. In that sense, the Smartest Dickwad in the Room is stuck in a state of arrested development, at approximately age fourteen.

@CH: Please to call it Ekman Transport – in response to equatorial trade winds – rather than a watery Hadley Cell. On the equator – Ekman flows move both north and south.

Thanks for bringing that up, Chief. My explanation of why I believe it’s both can be seen in my November 8 comment here in response to some beautiful animations of ocean flow worked up by AJ (whom I’d meanwhile misremembered as AZ – wish I had his email). I wrote,

“In the Hadley cells (between 30S and 30N) the wind has a strong easterly (i.e. westward) component (geostrophic flow induced by Coriolis force in the atmosphere). Ekman transport then predicts poleward surface currents near the equator (geostrophic flow induced again by Coriolis force, with net rotation 90+90 = 180° from the original equator-ward atmospheric flow at the bottom of the Hadley cell). This would drive the hot equatorial surface water polewards. If it behaved like a Hadley cell this hot polewards surface current would then dive down at 30 degrees latitude in each hemisphere, exactly as shown in your second figure. In the meantime the flow from the equator will draw cold water up from below the equator. That would explain the upwelling there. If that’s not the explanation I’d love to know what is.”
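The sign argument in that quote is easy to check numerically. A minimal sketch of the Ekman balance, M_y = -τ_x/f, where the trade-wind stress value and the latitudes are assumed purely for illustration:

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, s^-1
RHO = 1025.0        # seawater density, kg m^-3 (not needed for the sign)

def ekman_transport_merid(tau_x, lat_deg):
    """Meridional Ekman mass transport (kg m^-1 s^-1) driven by a zonal
    wind stress tau_x (N m^-2): M_y = -tau_x / f."""
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))
    return -tau_x / f

tau_easterly = -0.05  # easterly (westward) stress, assumed magnitude

north = ekman_transport_merid(tau_easterly, 15.0)   # positive: northward
south = ekman_transport_merid(tau_easterly, -15.0)  # negative: southward
print(north, south)
```

With an easterly stress the transport comes out positive (northward) north of the equator and negative (southward) south of it, i.e. poleward on both sides, exactly the surface divergence that would draw cold water up at the equator.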

The beautiful ARGO-derived animations by AJ on which my hypothesis of an oceanic Hadley Cell is based can be seen at

The common element of the first three animations is that they show depth vertically and latitude horizontally, 60S to 60N in the first two and 50S to 50N in the third.

The second animation showing the top 1000 m gives the clearest picture of what I mean. The isotherms show the flow lines, albeit imperfectly (see below). Although the animations don’t show the actual flow, since we know by observation (explained by Ekman) that the flow at the surface is away from the equator we can infer that the warm flow at latitude 30 on each side must be a downwelling rather than up.

Navier-Stokes for an incompressible fluid then tells us that the green flow in the middle must be upwards. We know from Davidwad that this is physically impossible, an apparent inconsistency best dealt with by ignoring Davidwad (which I normally do but failed badly at on this occasion).

The fact that the green isotherm starts at 20 S is not at all surprising given that the ACC does such a great job of stirring up the southern ocean: at 120 Sv it’s a major current. However Navier-Stokes requires a more balanced flow than that or the Northern Hemisphere would drain the Southern! This suggests that there’s an upward flow at 10 N. However the picture north of the equator is muddied by averaging over longitude so one should not read too much into the right hand side of the second animation.

Instead of averaging out longitude and showing evolution from 2005 to 2012 as in the second animation, the third averages time over 2005-2012 and shows a meridional section 0-1000m sweeping westwards across the Pacific from 230E to 160E (New Zealand pops up as a big white bar starting at 174E). Because the latitude is cropped at 50S and 50N you can’t see the cold region in 60S-50S that’s clearly visible in the second animation. (I have no idea why AJ cropped at 50S, and 50-60N could usefully have been kept too since BC and Alaska don’t take out too much of it.)

This third animation shows that the warm downwelling is stronger in the western half. Puzzlingly however there is no sign of the warm upwelling north of the equator seen in the first animation.

The fourth animation explains this puzzle. It starts at the surface of the whole globe from 60S to 60N and sweeps a horizontal section over 0-2000m. Over the range 700-2000m (mostly missing in the other animations) we see great warmth in the Western Indian Ocean and the North Atlantic. In the latter you can see the hot Gulf Stream (or at least its isotherms) diving down from 400m to 1000m (yet another reason to ignore Davidwad) as it flows eastwards and then seemingly turning around over 1000-2000m as though flowing westwards (at least that’s what the isotherms seem to be doing).

This fourth animation gives perhaps the best picture of all. It shows the Hadley cells in all three of the Atlantic, the Pacific, and the Indian Ocean, but (consistently with animation 3) shows that the warm downwelling at 30S and 30N sweeps west. In the Indian Ocean this brings the two downwellings to respectively Madagascar in the south and the Western Indian Ocean in the north. In the Pacific however the warm downwellings are all gone by 1000m (also true around Madagascar). The Atlantic shows the greatest polar asymmetry, with the southern warm downwelling gone by 700m while in the north the US seemingly generates a blast of heat at around 400m that drives east and down as noted above before bouncing off Europe.

It seems likely that the North Atlantic is making the second animation hard to interpret meaningfully on the northern side. Its main advantage may be that it visualizes what I’m imagining to be Hadley cells the most clearly of the four animations.

I spent some time going through Huang’s Ocean Circulation [2010] for information that might explain some of this but came up empty handed, though at 800 pages and a not terribly complete index it’s easy to overlook relevant material.

I’ll make a point of bugging oceanographers about it at AGU this week. With any luck they’ll have already come up with a satisfactory explanation of this ARGO data AJ has visualized so nicely with his animations.

This is endlessly compelling. It is sort of like a moving Rorschach image. Fig. 2 – exotic dancer.

I will come back to it but need to start my week now.

A couple of quick comments.

You can see the upwelling in the central Pacific in this colour enhanced image of the strong 2008 La Nina.

It is a fundamental of ENSO behaviour – as is the Ekman transport of warm surface water to higher latitudes.

Trade winds evolve from a combination of surface properties – SST differentials north and south and east and west leading to Hadley and Walker Circulation patterns. This is why the trade winds have their characteristic south-easterly (SH) and north-easterly (NH) origins.

But I suggest that comparing atmospheric cells with ocean heat transport is a little loose as the processes are very different.

@CH: But I suggest that comparing atmospheric cells with ocean heat transport is a little loose as the processes are very different.

Very loose indeed, Chief. Yet there is one crucial point in common. Both involve flows that move polewards. In both flows it’s the top flow, meaning the one further from the center of the Earth.

Any poleward flow, whether air or ocean, converges on a point, namely the pole. That’s obviously impossibly crowded, so long before reaching that point the flow “realizes” that things are getting crowded and the moving flow has to find some other route. (Although air is compressible, treating it as incompressible is a reasonable approximation for the atmosphere when solving Navier-Stokes.)

In both cases there is only one escape route, namely down and then back to the equator in order to close the loop. This then creates a meridional current.

Two Hadley cells might conceivably suffice. The advantage of three for the atmosphere is that the overcrowding only reaches a factor of cos(30) = 86% at 30 degrees from the equator. If that’s the optimal choice for the atmosphere it may well be optimal for the ocean too.
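The 86% figure is just the shrinking circumference of latitude circles, which a one-liner confirms:

```python
import math

def circumference_fraction(lat_deg):
    """Fraction of the equatorial circumference remaining at a given
    latitude: a poleward flow gets squeezed into a circle of length
    2*pi*R*cos(lat)."""
    return math.cos(math.radians(lat_deg))

print(f"{circumference_fraction(30):.3f}")  # 0.866
print(f"{circumference_fraction(60):.3f}")  # 0.500
```

So at 30 degrees the flow still has about 87% of the equatorial circumference to work with, but by 60 degrees it is down to half, which is the congestion argument in the paragraph above.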

I see no sign of a Ferrel cell (30-60) in the ocean, which would be a big difference.

Although meridional currents in the ocean are well known, like you I’ve never seen the connection made with the atmospheric Hadley cells when talking about meridional currents.

Yet the same mechanism of congestion at latitude 30 degrees applies to both!

Except that flows away from the equator allow cold sub-surface water to rise in the central Pacific.

You’re not a real Texan. We quite like Texans. I once commented that the Great Western was the only pub in the world where a man could drink, spit, swear and ride bulls. Someone linked to a pub in Texas. Cool. You not so much. Remember that you have a standing invitation to the Great Western. Settle it like a real man and not a dickwad.

There’s a lovely discussion of how the ocean modulates atmospheric CO2. Vaughn is under the impression that plants somehow control the partial pressure of CO2 in the atmosphere. I corrected him saying it’s the ocean and the reason there’s so little at present is because the earth’s in an ice age and the global ocean is very cold which causes it to dissolve more CO2 out of the atmosphere. This confirms, clarifies, and elaborates on what I told Vaughn:

The Oceans as a Reservoir of Carbon Dioxide

The oceans are the primary reservoir of readily available CO2, an important greenhouse gas. The oceans contain 40,000 GtC of dissolved, particulate, and living forms of carbon. The land contains 2,200 GtC, and the atmosphere contains only 750 GtC. Thus the oceans hold 50 times more carbon than the air. Furthermore, the amount of new carbon put into the atmosphere since the industrial revolution, 150 GtC, is less than the amount of carbon cycled through the marine ecosystem in five years. (1 GtC = 1 gigaton of carbon = 10^12 kilograms of carbon.) Carbonate rocks such as limestone, the shells of marine animals, and coral are other, much larger, reservoirs. But this carbon is locked up. It cannot be easily exchanged with carbon in other reservoirs.

More CO2 dissolves in cold water than in warm water. Just imagine shaking and opening a hot can of Coke™. The CO2 from a hot can will spew out far faster than from a cold can. Thus the cold deep water in the ocean is the major reservoir of dissolved CO2 in the ocean.

New CO2 is released into the atmosphere when fossil fuels and trees are burned. Very quickly, 48% of the CO2 released into the atmosphere dissolves in the cold waters of the ocean, much of which ends up deep in the ocean.

Forecasts of future climate change depend strongly on how much CO2 is stored in the ocean and for how long. If little is stored, or if it is stored and later released into the atmosphere, the concentration in the atmosphere will change, modulating Earth’s long-wave radiation balance. How much and how long CO2 is stored in the ocean depends on the deep circulation and the net flux of carbon deposited on the seafloor. The amount that dissolves depends on the temperature of the deep water, the storage time in the deep ocean depends on the rate at which deep water is replenished, and the deposition depends on whether the dead plants and animals that drop to the sea floor are oxidized. Increased ventilation of deep layers, and warming of the deep layers could release large quantities of the gas to the atmosphere.

The storage of carbon in the ocean also depends on the dynamics of marine ecosystems, upwelling, and the amount of dead plants and animals stored in sediments. But we won’t consider these processes.
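The hot-vs-cold Coke point can be put in rough numbers with the van 't Hoff form of Henry's law for CO2. The 0.034 mol/(L·atm) reference solubility and the 2400 K temperature coefficient are standard textbook values, used here as assumptions:

```python
import math

KH_REF = 0.034      # CO2 solubility at 298.15 K, mol/(L*atm), assumed
T_REF = 298.15      # reference temperature, K
VANT_HOFF = 2400.0  # -dH_sol/R for CO2 in water, K, assumed

def co2_solubility(T_kelvin):
    """Approximate Henry's-law solubility of CO2 in water, mol/(L*atm),
    via the van 't Hoff temperature dependence."""
    return KH_REF * math.exp(VANT_HOFF * (1.0 / T_kelvin - 1.0 / T_REF))

warm = co2_solubility(298.15)  # ~25 C surface water
cold = co2_solubility(275.15)  # ~2 C deep water
print(f"ratio cold/warm: {cold / warm:.2f}")
```

Deep water at ~2 C holds roughly twice as much CO2 per atmosphere of partial pressure as 25 C surface water, which is the quantitative core of the "cold ocean in an ice age dissolves more CO2" argument.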

I don’t expect you to write all that down, Vaughn, but I would greatly appreciate it if you read and remember it so you know what the phuck you’re talking about next time the subject comes up. Thanks in advance.

Oh, and Vaughn, contrary to what your northern California liberal bigotry informs you about people who don’t share your love of Nancy Pelosi, I didn’t mention anything out of the bible in conjunction with natural science and tend to treat them as Non Overlapping Magisteria. I can help you to understand either but just because I happen to know both don’t confuse that with me being a knuckle dragging bible thumper again. Thanks in advance.

@DS: Vaughn is under the impression that plants somehow control the partial pressure of CO2 in the atmosphere. I corrected him saying it’s the ocean

Can you make a clearer statement of what you believe in this regard than simply that I’m “under the impression that plants don’t control the partial pressure of CO2 in the atmosphere”? You didn’t actually say that this was false, you merely said you “corrected me” (which could mean all sorts of things). Further, a direct negation of my “impression” is clearly contradicted by the well-known fact that plants remove more CO2 by photosynthesis during the daytime than they return to the atmosphere by respiration throughout the 24-hour day.

So what exactly do you claim about the relation between plants and atmospheric CO2?

Chance has provided the preference of rubisco for CO2 over O2, e.g. Nisbet 2012.

We propose the hypothesis that natural selection, acting on the specificity or preference for CO2 over O2 of the enzyme rubisco (ribulose-1,5-bisphosphate carboxylase/oxygenase), has controlled the CO2:O2 ratio of the atmosphere since the evolution of photosynthesis and has also sustained the Earth’s greenhouse-set surface temperature. Rubisco works in partnership with the nitrogen-fixing enzyme nitrogenase to control atmospheric pressure. Together, these two enzymes control global surface temperature and indirectly the pH and oxygenation of the ocean. Thus, the co-evolution of these two enzymes may have produced clement conditions on the Earth’s surface, allowing life to be sustained.

So not only can one say that in order to maintain their supply of CO2 the plants had to regulate their consumption of it, this rubisco-nitrogenase ratio hypothesis provides the mechanism they came up with to achieve it.

A sight better bunch of chemical engineers than triffids, those plants.

Is it conceivable that the C4 plants had been plotting a regime change by using PEP carboxylase to tweak the rubisco-nitrogenase ratio in order to drive CO2 down to the point where it killed off those uber-neat C3 neanderthals? That would be well within their power if relations broke down between them.

If so the C3 plants should be infinitely grateful to humans for foiling that little plot in the nick of time.

C4 plants achieve this by photorespiring less (i.e. losing less CO2 during photosynthesis) at the expense of using roughly twice as much energy to fix carbon. Sounds paradoxical but that’s what great chemical engineering is all about.

The only person in this thread to even mention either “bible thumpers” or “knuckle draggers”, let alone suggest a connection, just happens to also be the only one to level accusations of bigotry.

Reminds me of those who object to being called deniers because that makes them Holocaust deniers.

I deal with them by saying instead that they’re in denial.

For Springer, instead of saying he attributes the “frickin’ ice age” to God I shall in future say he invokes a deus ex machina. That is, he’s appealing to some supernatural force that drove down the planet’s temperature, a force that has no explanation in natural terms.

This is to address your reference to Gondwana which I thought was a bit strange as Gondwana hasn’t been around for hundreds of millions of years and has nothing to do with when Antarctica was covered in forest.

The Antarctic plate has been at the southern pole for at least a couple hundred million years and certainly wasn’t far from where it is now just 40 million years ago.

The Antarctic Ice Sheet began forming at the time of the Eocene-Oligocene Extinction. No one knows what happened, super volcano and/or asteroid impact in most views, but whatever it was ended the age where the earth was green from pole to pole. Mammals at the time were small and shrew-like with an odd resemblance to Stanford faculty photos which is probably just coincidental.

As the age of mammals commenced, the continent of Australia-New Guinea began gradually to separate and move north (55 Mya), rotating about its axis to begin with, and thus retaining some connection with the remainder of Gondwana for about 10 million years.

About 45 Mya, the Indian Plate collided with Asia, buckling the crust and forming the Himalayas. At about the same time, the southernmost part of Australia (modern Tasmania) finally separated from Antarctica, letting ocean currents flow between the two continents for the first time. Antarctica became cooler and Australia became drier because ocean currents circling Antarctica were no longer directed around northern Australia into the subtropics.

So the Australia-Antarctic portion of Gondwana remained intact until 45 Mya. The reason the Antarctic has forests dating back to 45 Mya is the same reason that Australia did: the remainder of Gondwana at that time was warm enough back then to support forests.

I see you admitted elsewhere to being in error about the position of the Antarctic continent, conceding I was correct that it’s been over the pole for a couple hundred million years and that Australia drifted north rather than Antarctica drifting south. Therefore the tropical forest on Antarctica ~40 Mya is not the result of the continent being located at a warmer latitude at that time.

According to Wikipedia, Gondwana began breaking up almost 200 Mya, early in the Jurassic.

Gondwana began to break up in the early Jurassic (about 184 Mya) accompanied by massive eruptions of basalt lava, as East Gondwana, comprising Antarctica, Madagascar, India and Australia, began to separate from Africa.

So boy I guess you got me good on that one where I said Gondwana hasn’t been around for hundreds of millions of years. I should have said almost hundreds of millions of years. The only point I wanted to make however was that it was long gone 40 million years ago and my point still stands thank you very much.

Vaughan Pratt, even though the data get wonky as you near the poles, I did R-values for both starting at 50 degrees.

Notice how the NH R-value decreases while the SH stays stable. NH “sensitivity” is decreasing which is pretty much what should be expected. The whole concept of linear climate “sensitivity” is misguided at best.

Vaughan, “Which is it?” Atmosphere, so you can estimate oceans. What I do with a static model is to assume that there is some “equilibrium” and determine the atmospheric R-value at that point. Then you can estimate what effect a change in the atmospheric R-value would have on the ocean R-value. Then, since it is a three-dimensional problem, do a static balance from the equator to the poles and try to make everything stable. Sounded like a great idea until I found that you can’t reduce it below about +/-10 Wm-2 because of the asymmetrical mixing. I felt real bad about that until I found out Stephens et al. had +/-17 Wm-2 of uncertainty :)

Vaughan, btw, I have tried using a recovery response time to estimate the ocean R-value and it appears to be approximately 1/10 of the atmospheric value, which makes sense using a Carnot efficiency estimate. Still, 0.0192 to 0.02 is a lot of difference when looking for a 1% change.

@cd: I have tried using a recovery response time to estimate the ocean R-value and it appears to be approximately 1/10 of the atmospheric value, which makes sense using a Carnot efficiency estimate.

Excellent! So combining this R-value with the ocean’s heat capacity C, what RC time constant do you get? Two years, 10, 20, 50?
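For what it's worth, here is the back-of-envelope form of Vaughan's question, with every number an assumption chosen only to show the arithmetic (a 100 m effective mixed layer and R = 0.3 K per W/m², not capt's values):

```python
# RC analogy: tau = R * C, with R in K/(W m^-2) and C the column heat
# capacity of the ocean layer in J m^-2 K^-1. All values assumed.

RHO = 1025.0     # seawater density, kg m^-3
CP = 3990.0      # specific heat of seawater, J kg^-1 K^-1
DEPTH = 100.0    # assumed effective mixed-layer depth, m
R_VALUE = 0.3    # assumed thermal "resistance", K per W m^-2

C = RHO * CP * DEPTH                          # ~4.1e8 J m^-2 K^-1
tau_seconds = R_VALUE * C
tau_years = tau_seconds / (365.25 * 24 * 3600)
print(f"tau ~ {tau_years:.1f} years")
```

With these assumed inputs the time constant comes out near four years; a deeper effective layer or a larger R scales tau up proportionally, which is why the answer to "two years, 10, 20, 50?" hinges entirely on which depth of ocean the R-value is taken to govern.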

Vaughan, “But now I have another problem. The vigor of the ACC suggests a lower R-value for SH than NH. Yet your graph seems to show the opposite. Please reconcile.”

The R-value is based on sea surface to ~lower troposphere. There is less turbulent mixing in the SH than NH so the SH is more resistive to heat loss from the sea surface to lower troposphere. The tropics with the most turbulent mixing have the lowest R-value. That seems counter intuitive with the wild surface winds over the ACC, but they are at least consistent surface winds with a tight gradient. Note though that the satellite era includes the strong shift in NH dynamics, the ’76-98 great Pacific climate shift. The average difference is likely not so much.

Because of the ACC I would also expect the Pacific equator-to-pole SST lag to be less in the SH, but both have about a 102-month lag (8.5 years, which Schwartz found). So the sea “surface” R-value should be about the same as the ~0-700 meter value. Below 700 meters there should be longer lags depending on surface mixing/ACC, which impact the MOC. For the Pacific I have a second correlation peak at 75.8 years, the correlation being between the tropics and extra-tropics for both hemispheres.

That is the 360 correlation of the north and south Pacific extra-tropics with the Indian Ocean tropics (25S-25N). I set the axis ticks to 102 months so you can see how often there are synchronizations. These synchronizations agree well with the Oppo IPWP reconstruction and the 2000-year instrumental climate reconstruction I started.

~27 months and ~102 months are somewhat common internal mixing frequencies which seem to produce the pseudo-cyclic patterns, with less-than-perfect harmonics, that can drive you nuts.
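The lag-hunting described above can be illustrated with synthetic data: bury a known 102-month lag in noise and recover it by scanning cross-correlations. Purely a toy, not the actual SST series:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 1200, 102                       # months

base = rng.standard_normal(n + true_lag)
x = base[true_lag:]                           # "leading" series
y = base[:n] + 0.3 * rng.standard_normal(n)   # same signal, delayed, noisy

# Correlate x against y shifted by each candidate lag, keep the best.
lags = np.arange(1, 300)
corrs = [np.corrcoef(x[:-k], y[k:])[0, 1] for k in lags]
best = int(lags[int(np.argmax(corrs))])
print(best)  # should recover the built-in 102-month lag
```

With real data the peak is of course far less clean, and pseudo-cyclic forcing with imperfect harmonics can put comparable peaks at several lags, which is exactly the "drive you nuts" problem.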

Springer, what you are looking at is what throws people off. The laminar barrier is from ~60S to the pole, or the rough wall of the southern polar vortex. That reduces heat loss from the system. The turbulent mixing you are seeing is mainly inside the system and is the reason the SH oceans are so well mixed and retain more heat, but at a lower temperature. What I am talking about is turbulent mixing through the polar low or polar vortex barrier.

Turbulent outside to keep flies out and turbulent inside to keep energy in, which creates a laminar barrier in between. The polar vortex is especially efficient since it has dry air on one side and moist (relatively) air on the inside. That produces a very tight pressure/temperature gradient, much like a horizontal isotherm or thermocline. Park a forklift in the door and you defeat the mechanism. Or as I had to tell the Navy, park planes in the direction of the paint spray hangar air flow or the methyl isocyanate solvent in the paint might just kill a few people.

You’re babbling. You claimed turbulent mixing was greater in the northern hemisphere. I showed you two separate studies showing the mixed layer depth is greater in the southern hemisphere. Man up fercrisakes and either admit you were wrong or provide some evidence you were right.

David, or can I call you SFB, the R-value I posted is the difference in energy transfer between the surface and lower troposphere from 50 to 90 latitude. That defines an insulation barrier. There is more heat transfer through the NH barrier than the SH barrier because there is more turbulent mixing through, as in penetrating, the barrier. There can be any amount of turbulent mixing on either side of the barrier, but unless it mixes through the barrier, there is less heat loss.

Since you are not particularly up on this subtlety of fluid dynamics, nor is Dr. millikelvin, I provided an example of an air curtain to get you started on the right path. I have to admit though that you are in good company; Kevin Trendbirth doesn’t get it either :)

David, since you obviously missed this, “Notice how the NH R-value decreases while the SH stays stable. NH “sensitivity” is decreasing which is pretty much what should be expected. The whole concept of linear climate “sensitivity” is misguided at best.” and this, “The R-value is based on sea surface to ~lower troposphere. There is less turbulent mixing in the SH than NH so the SH is more resistive to heat loss from the sea surface to lower troposphere. The tropics with the most turbulent mixing have the lowest R-value. That seems counter intuitive with the wild surface winds over the ACC, but they are at least consistent surface winds with a tight gradient.” I must think you have become a devotee of Webster’s blog style where you type before you think.

Don’t bother apologizing, just send a small donation to Minnesota Morons on my behalf.

R-value, btw, is dT/dQ in K/Wm-2, which is “sensitivity”. Turbulent mixing through a “thermodynamic” barrier or boundary layer changes the R-value. This is why one should think of individual “shells” or “envelopes” instead of assuming you can average across thermodynamic boundary layers with associated phase changes to estimate a “sensitivity”.

If you want to redefine turbulent mixing you should say so up front. How was I to know you meant something different from the ocean mixed layer, using its depth, like everyone else does, as the standard for how much turbulent mixing is going on? There is more of THAT kind of turbulent mixing, i.e. what everyone else thinks of when you say turbulent mixing, in the southern hemisphere and that’s a fact, jack.

Turbulent mixing, as evidenced by mixed layer depth, is greatest in the southern hemisphere. If you state that it is greater in the northern hemisphere then you are redefining the term “turbulent mixing”. Getting you to admit a mistake is evidently not possible as there appears to be no limit to how low you can go when squirming.

David, the heat can mix into the ocean or out to space. The boundary layer mixing I am estimating is the atmosphere’s, so in the SH there is likely more mixing into the ocean and less mixing to space. My point A is the SST and my point B is the lower troposphere, so that is what I can estimate. In the NH, at least for the satellite period, there is more mixing to space than into the oceans as compared to the SH.

The polar vortex creates the dynamic boundary layer. That is in the atmosphere above the Antarctic or the Arctic. Since there is land under the Antarctic PV, it doesn’t mix into the oceans above latitude 65S. In the Arctic there is sea ice instead of land, so that PV can mix into the oceans, but the temperature differential is much larger out than in, so more heat is lost to space. So you want to know how much mixing and the direction of the heat flow. Right?

The R-value just gives me an estimate of what is happening between the points I pick. So when I say inside or outside, you have to think of what points I am using because I am only considering one boundary layer at a time.

@ captdallas 0.8 or less | November 30, 2013 at 12:28 pm | said
“That depends on what “surface” you are considering. Cloud cover attenuates the CO2 related DWLR below the clouds. Above the clouds, CO2 is still there doing its thing. Adding more CO2 will increase the effective temperature at the cloud tops which increases upper level convection. MODTRAN does a pretty good job.”

jim2, “More IR above the cloud tops should enhance cooling of the Earth, no?” Yes with caveats. More IR above the clouds increases the energy that can be transferred poleward. If it goes south, cooling with some increase in ocean heat uptake. If it goes north, then there is a whole can of worms that has to be considered. The poles are like poles apart :)

DS offered the following in this thread. It might be interesting to see how people’s opinions on these evolved with time.

If in the long run the consensus was that they were mostly correct one could say they’d stood the test of time.

If however the consensus was that they were mostly wrong, DS could still argue that science was based not on consensus but on facts, and that all his statements were scientific facts, consensus be damned.

————————–

Any solar energy stored in chemical bonds of plant matter is released back into the environment when it oxidizes.

we’ve reached peak oil already,

energy cost is rising which has capped the rate of consumption

the growth experienced from 1950 to 2000 in anthropogenic CO2 production is not sustainable

If you believe in the greenhouse effect this should have resulted in a runaway greenhouse millions of years ago when CO2 levels were as much as 10 times greater than today. There is almost certainly something at play which sets a ceiling temperature for the earth.

Plants are carbon neutral.

the only thing not strictly considered carbon neutral in the terrestrial biosphere is trees but in the long run even trees don’t last forever and when the wood burns or decays the carbon therein is returned to the atmosphere.

Artificial photosynthesis has no relevance to synthetic biology except as a competitor.

the ostensible anthropogenic CO2 warming signal is so small that it falls inside the margin of error in the instrumentation used to study temperature and attribution

The global per capita rate of CO2 emission went up barely 5% from 1980 to 2010 based on figures from British Petroleum.

global CO2 emission per capita is not projected to grow at all through 2035.

(in response to “Have you ever considered why CO2 plummeted from 6000 ppm to 180-280 ppm over a period of a few hundred million years?”)
Of course I have. It’s because we’re in a frickin’ ice age.

The Big Kahuna as far as carbon reservoirs is the global ocean. Terrestrial storage is a pittance in comparison.

the reason why atmospheric CO2 is so low today is because the ocean is freaking cold (ice age, duh) and can dissolve a lot more CO2.

Vaughn is under the impression that plants somehow control the partial pressure of CO2 in the atmosphere. I corrected him saying it’s the ocean and the reason there’s so little at present is because the earth’s in an ice age and the global ocean is very cold which causes it to dissolve more CO2 out of the atmosphere.

Cold water sinks EVERYWHERE on the planet where there is less dense water beneath it.

No one who matters cares what happens in the southern hemisphere. All the action is north.

Consensus has to do with politics not science so that can be damned right now regardless.
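The claim above that “something at play … sets a ceiling temperature for the earth” connects to the Stefan–Boltzmann constraint mentioned at the top of the post: a planet’s outgoing longwave emission scales as T⁴, so warming is self-limiting to some degree. A minimal back-of-envelope sketch (the function name and round-number constants are mine, not from the thread) of the effective radiating temperature at which emission balances absorbed sunlight:

```python
# Stefan-Boltzmann balance: the effective temperature at which a planet's
# blackbody emission equals its absorbed solar flux, averaged over the sphere.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # total solar irradiance, W m^-2 (approximate)
ALBEDO = 0.30      # planetary albedo (approximate round number)

def effective_temperature(solar=S0, albedo=ALBEDO):
    """Blackbody temperature emitting the globally averaged absorbed flux."""
    absorbed = solar * (1.0 - albedo) / 4.0  # divide by 4: sphere vs. disk
    return (absorbed / SIGMA) ** 0.25

print(round(effective_temperature(), 1))  # roughly 255 K
```

Because emission grows as the fourth power of temperature, a radiative surplus forces only a modest temperature rise before balance is restored, which is the sense in which a “ceiling” exists; this sketch ignores the greenhouse effect itself, which sets the surface temperature above T_eff.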
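The repeated claim that a cold ice-age ocean “can dissolve a lot more CO2” is, at its core, Henry’s law with a van ’t Hoff temperature dependence. A rough sketch (textbook fresh-water constants; the function names are mine, and real seawater carbonate chemistry and salinity are far more involved):

```python
import math

# Henry's-law sketch of dissolved CO2 vs. water temperature.
# kH(298.15 K) ~ 0.034 mol L^-1 atm^-1 and a van 't Hoff coefficient of
# ~2400 K are standard fresh-water values; seawater is only approximated.
KH_298 = 0.034      # mol / (L * atm) at 298.15 K
VANT_HOFF = 2400.0  # K, temperature dependence d(ln kH)/d(1/T)

def co2_solubility(temp_k, p_co2_atm=400e-6):
    """Dissolved CO2 (mol/L) in equilibrium with partial pressure p_co2_atm."""
    kh = KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))
    return kh * p_co2_atm

cold = co2_solubility(275.0)   # near-freezing polar surface water
warm = co2_solubility(298.0)   # warm tropical surface water
print(cold / warm)             # colder water holds roughly twice as much CO2
```

The exponential form means solubility rises as temperature falls, which is the direction of the argument above: a colder global ocean pulls the atmospheric partial pressure of CO2 down, whatever one thinks of the rest of the claims.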

These were taken out of context which is intellectually dishonest. For instance “The southern hemisphere is growing”. Accurate but in context it was referring to ice extent in the southern hemisphere. “Nobody cares what happens in the southern hemisphere.” Accurate but in context it was a tongue-in-cheek dig to get a heated reaction out of Chief Hyperbologist who flies off the handle at any perceived insult to Australia. And it worked perfectly.

So what happened to your vow not to engage with me? How long did it take you to assemble that particular engagement? Amazing. I must really get under your skin.

Accurate but in context it was referring to ice extent in the southern hemisphere.

An earlier version actually had a comment explaining that this was probably what you meant. Unfortunately you did not have any verbiage explaining this in your original quote, or I would most certainly have included it at the time I stripped all my metacomments out to avoid being accused of biasing anything with my personal interpretations.

Approximately 1% of the time it took you to type the context, based on your taking 10x as long to type something as for me to copy it with ^C^V, and 10x as much context from which it was copied. This freed up the rest of the day to take in the first day of AGU plus an hour’s driving time each way. How was your day?

In context I was discussing Ekman transport at the equator. Jabberwock the Jarhead thinks I am defensive of the entire Southern Hemisphere? Odd indeed – but not that surprising coming from the Jabberwock.