Trenberth and Fasullo wrote a follow-on perspective published in Science entitled “Tracking Earth’s Energy: From El Niño to Global Warming.” The punchline of the paper is that there is observational evidence of missing energy accumulating in the climate system over the last 10 years.

Roy Spencer responded to Trenberth and Fasullo here and here. There is a back-and-forth exchange between Trenberth and Pielke Sr here. The gist of their arguments relates to problems in the measurements and the possible impact of clouds.

This week, there is discussion about a new paper by Knox and Douglass entitled “Recent Energy Balance of Earth,” in press at the International Journal of Geosciences; some text is posted at WUWT. The overall punchline: they find that estimates of the recent (2003–2008) ocean heat content rates of change (observed from Argo floats) are preponderantly negative, which does not support the existence of either a large positive radiative imbalance or a “missing energy.”

Let me know if you spotted any other interesting discussions of this issue.

JC’s comments: I haven’t been following this too closely or reading all the papers in any detail, but here goes. Measuring the Earth’s radiation balance (and changes thereof) is very difficult. Nevertheless, there is the expectation that if we keep dumping more greenhouse gases into the atmosphere, we should see surface temperatures warm, with some allowance for this warming to be masked for short periods of time by natural climate variability. Assuming that the Knox and Douglass analysis holds up, it appears that there is no storage of the heat below the ocean surface. The other choice seems to be a redistribution of clouds that has changed the earth’s radiation balance (Roy Spencer has written on this topic at length). If a climate shift has indeed occurred ca 2001/2002 (see the climate shift thread), then the associated circulation changes would, not surprisingly, be associated with a change in global cloud amount or a redistribution of clouds.

443 responses to “Where’s the “missing” heat?”

I think long-term cloud-cover trends have been dismissed too easily in the Trenberth paper. Other papers, like Palle et al. (2006, Eos), have indicated that cloud albedo decreased during the 1990s and has been going back up in the 2000s, and the effects can be large enough compared to the ~1 W/m2 that is missing. If it is a cloud cover increase, the missing energy is not in the earth system; it never got into it in the first place.

I have recently started thinking that Trenberth’s observation of ‘missing heat’ is quite a good sceptical observation. But the impression I get from his exchange with Pielke is that it has gone beyond just an observation and is morphing into the explanation of what is happening. It’s essentially an observation that something is wrong with our understanding of this problem. Trenberth’s explanations for this problem, dodgy ARGO data or energy hiding in unmeasured regions, seem to be both unrealistic and unsupported by anything in the real world.

The reality is that Trenberth raising this problem should be a spur to further investigation. That would require that all possible causes of the present problem be investigated. So far it looks like Trenberth hasn’t been brave enough to investigate one of the obvious options, which is that his (and other) estimates of the energy imbalance are based on wrong assumptions. The question is whether the rest of the climate science community feels bold enough to give this a full investigation, or whether the post-normal conditions will make an unfettered investigation unacceptable.

“Missing heat” as an observation of the limits of our understanding is fine; as an explanation for what’s going on in the real world, it seems a little embarrassing if it’s the best the great minds of climate science can come up with.

Travesty Trenberth said it all in his famous 2008 NPR interview when he speculated that the missing heat had escaped to space.

And? Are you saying this confirms that CO2 is a serious problem or refutes it?

The temperature is going up whether or not we know why. If you claim insider knowledge into the missing heat then you’re talking about the cause of the temperature increase, not about the fact of it. That makes you more of a theorist than an empiricist.

(Personally I think a lot of the missing heat is in an incorrectly analyzed hydrological cycle. But that’s another story.)

And? Are you saying this confirms that CO2 is a serious problem or refutes it?

Do you always ask unrelated questions?

The temperature is going up whether or not we know why. If you claim insider knowledge into the missing heat then you’re talking about the cause of the temperature increase, not about the fact of it. That makes you more of a theorist than an empiricist.

The question isn’t whether the temp is increasing, but why it’s not increasing faster. If the heat is missing, then it’s either not there at all or it’s hiding someplace. If it’s not there, then the whole thing is moot – except for: where did it go? If it’s hiding, then where? And when and how will it reappear?

(Personally I think a lot of the missing heat is in an incorrectly analyzed hydrological cycle. But that’s another story.)

And that’s possible. But the point here is that you DON’T KNOW. Which is precisely the reason for most of those pesky “denialist” arguments. Not just on this point, but on many others.

The question isn’t whether the temp is increasing, but why it’s not increasing faster. If the heat is missing, then it’s either not there at all or it’s hiding someplace. If it’s not there, then the whole thing is moot – except for: where did it go? If it’s hiding, then where? And when and how will it reappear?

(Sorry, I had to attend to something else for a few days.)

I don’t understand what you’re saying here. Trenberth says he thinks it’s escaped to space; he just doesn’t know by what path. If he’s right (which I believe he is in this case, even though I don’t always agree with him) then it’s not hiding and won’t reappear.

So that leaves only the other alternative you offered, “it’s not there at all.” If it’s in space then of course it’s not here at all, it’s out there in space. You need to clarify what you mean by “it’s not there at all.” What “there” are you talking about?

Or did you mean that if some of it is missing then all of it must be missing and therefore there’s no global warming? If so then I’d love to see how you argue that.

I am glad this thread was posted. It was an NPR story on this very topic in 2008 that provided me with my personal “tipping point” on the global warming issue. Before this, I just assumed, “of course, the planet is getting hotter, every idiot knows that.”

I even remember rolling my eyes and trying to hold back a gusher of sarcasm when one of my friends insisted that no, the planet is not getting hotter. Hearing the story on NPR (which I regarded as an authoritative source, though I feel a bit differently today) made me confront the fact that what “everybody knows” and what is assumed to be so glaringly obvious to any rational person, might just not be the case.

To my knowledge, NPR has never done a follow-up story. I have noted in the past few years that there seem to be few news stories about the Argo ocean temperature sensing system. I’ve often wondered if this is because the actual data produced by Argo simply does not comport with the global warming paradigm to which the currently dominant climate science is committed.

I suspect that if the Argo system was detecting a pattern of changes indicating warming oceans, the public would be quite well informed of its existence and would be regularly reminded of its data trends.

Thanks, Ken, for the link and for your story, witness to an open mind.

For those who don’t follow Ken’s link, here is NPR’s Richard Harris:

‘But if the aquatic robots are actually telling the right story, that raises a new question: Where is the extra heat going?

Kevin Trenberth at the National Center for Atmospheric Research says it’s probably going back out to space. The Earth has a number of natural thermostats, including clouds, which can either trap heat and turn up the temperature, or reflect sunlight and help cool the planet.’

Yep, said it all.

I think I’ve never heard so loud
The quiet message in a cloud.
==============

I suspect we’ll hear kim’s line of reasoning a lot in this thread. I answered it here.

To make it clearer why this reasoning is backwards, let me express Trenberth’s missing-heat claim as the inequality ΔQ ≫ CΔT, asserting that our estimate of the increase ΔQ in heat is much greater than our estimate of the product of the Earth’s thermal capacity C and the increase ΔT in temperature.

This discrepancy can be accounted for by any of the following: we overestimated ΔQ, or we underestimated at least one of C and ΔT. Trenberth’s question is which of the three must be changed in order to make his inequality an equality.

One popular choice is the second, on the ground that it’s the one we seem to understand the least.

For Trenberth to pick the first does not imply that therefore ΔT must be less than we thought. Reducing ΔT makes no sense as an answer to Trenberth’s question. It would only make sense if we had started with the equality ΔQ = CΔT and then said we’d overestimated ΔQ. (Although even in that case it might have been C that we’d overestimated.)

(Personally I think ΔQ has been overestimated by seriously underestimating the capacity of the hydrological cycle, the evaporation heat-pipe carrying surface heat up as water vapor, dumping the heat in the clouds by condensing the vapor to water droplets, and allowing the droplets to fall back to Earth when they get big enough. Far too little of that heat-pipe has been looked at carefully enough to make an accurate estimate of its capacity. It’s already the single biggest heat conduit from the surface of the Earth, so a serious underestimate of its capacity will seriously overestimate ΔQ.)
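To get a feel for the sizes involved, the bookkeeping behind ΔQ ≫ CΔT can be sketched numerically. The numbers below are rough, assumed round values (the ~1 W/m² figure echoes the “missing” flux mentioned elsewhere in the thread), not results from any of the papers:

```python
# Back-of-envelope for the dQ vs C*dT bookkeeping.
# All numbers are assumed round values, not measured results.

SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14          # total surface area of Earth
IMBALANCE_W_M2 = 1.0            # the ~1 W/m^2 "missing" flux discussed above

# Energy such an imbalance would deposit in one year (the dQ side):
dQ = IMBALANCE_W_M2 * EARTH_AREA_M2 * SECONDS_PER_YEAR   # joules/year

# Heat capacity of the upper 700 m of ocean (the C side), assuming
# ~70% ocean cover and the properties of seawater:
OCEAN_FRACTION = 0.7
DEPTH_M = 700.0
RHO = 1025.0                    # kg/m^3, seawater density
CP = 3990.0                     # J/(kg K), seawater specific heat
C = EARTH_AREA_M2 * OCEAN_FRACTION * DEPTH_M * RHO * CP  # J/K

# Warming of that layer needed to absorb dQ in one year:
dT = dQ / C
print(f"dQ per year: {dQ:.2e} J")
print(f"implied upper-ocean warming: {dT * 1000:.1f} mK/yr")
```

The implied warming of the upper ocean is on the order of hundredths of a kelvin per year, which is why small errors in C, ΔT, or the Argo record can swallow the whole imbalance.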

Thanks for that pointer, JCH. No, I hadn’t read either Lyman et al or Trenberth’s comment on it. I interpret the latter as saying “not so fast, guys.” Is that how you read it?

It’s certainly a nice question what a deeper understanding of OHC change will add to our already pretty good (in my opinion) understanding of climate in general. I don’t have any preconceived ideas either way on that. On the other hand I’ve been assuming for some time now that we don’t really understand terribly well how the deep ocean works, Lyman et al’s improved measurements notwithstanding.

So I’d have to say I’m 100% behind Trenberth on this particular point of his.

“It went into space.” I said I want the truth. “It’s in the oceans. Into space (slap.) The Oceans (slap.) Into space, into the oceans (slap, slap, slap.)” I said I want the truth. “It went into space AND into the oceans!”

JCH: That’s correct, Willis is not one of the authors. Knox and Douglass acknowledge his assistance in providing data. Sect. 4, Acknowledgements: “The authors are indebted to Joshua Willis for the Argo OHC data.” HG

Nothing is more stabilizing than ICE and WATER! When Earth is cold and the Arctic Ocean is frozen, there is no source for Arctic Ocean Effect Snow; ice retreats, albedo decreases, and the equilibrium temperature of Earth increases. When Earth is warm and the Arctic Ocean is thawed, there is a source for Arctic Ocean Effect Snow; ice advances, albedo increases, and the equilibrium temperature of the Earth decreases.
Look at the history of the temperature of the earth. Especially look at the ice core data for the past eight hundred thousand years. It has gotten warm, time and time again. Every time that it got warm, it then got cold. When it got very warm it then got very cold. When it got a little warm it then got a little cold. Come on people! Look at the data. Warm melts Arctic ice. Exposed Arctic water causes Ocean Effect Snow, and that makes it cold.
Ice and albedo are what control our temperature.

Herman, I’ve seen you write this post several times before. Are there any published papers or data to support your statements or is it just a personal theory? I’m not being rude, I am just interested to know.

RobB, you did ask if there are any published papers. Yes, my climate theory is derived from the theory of Maurice Ewing and William Donn. They published papers in the 1950s and 1960s. Data on NOAA’s website is consistent with the Ewing and Donn theory. Data on NOAA’s website is inconsistent with modern consensus climate theory. I will state my current theory in a post of my own, soon.

Just for comparison, using the same data, the mainstream view is that the natural thermostat is CO2 itself and that the cooling is caused by rock weathering. Judith’s colleague, Peter Webster, seems keen on this explanation. See here: http://climateaudit.org/2010/06/15/unthreaded-39/

Where he says:
“I think that the concentration of basic or background greenhouse gas is the backbone of Earth’s climate. Largely the balance between source and sink of CO2 determines the background concentration: the carbon cycle. The source is volcanism and the sink is weathering of rocks (rain being slightly acidic due to the dissolution of CO2) and the uptake of CO2 by the oceans. Absorption takes place because of the formation of carbonates by the biosphere and the precipitation of these carbonates to the bottom of the ocean once the beastie dies. The carbon then enters very long geological cycles and is eventually expelled through volcanoes and etc.”

I queried that assumption and I’d have liked an answer to my 2nd question but he didn’t revisit the thread.

Herman,
You are correct, but you missed the retreating ice of an ice age and the cold water left behind. When you run out of ice to melt, that leaves water behind. When that starts to run out, you have more plant and animal life behind it to increase gases, and this then increases atmospheric pressure.

Dig a little deeper and you’ll be pleasantly surprised at the results.

Assuming that the Willis et al. analysis holds up, it appears that there is no storage of the heat below the ocean surface. The other choice…clouds

No storage seems to me an easier explanation than clouds, for which there is no established process. The low penetration of long-wave radiation, and the capacity of evaporation to dissipate superficial heating, should limit the capacity of GHGs to warm the ocean, but they can still warm the land by direct radiation. This would explain a lot of disparate observations: the growing land-sea temperature differential, increasing humidity, and increasing global precipitation.

I think this has been said a while ago by Singer but I have seen little discussion of it. RC did a cursory post on it. AFAIK the coupling constant is uncertain to an order of magnitude.

The consequence of the low coupling of longwave radiation to the ocean is a short characteristic time scale, and rapid equilibration before the main energy store (the ocean) can heat, causing, as Trenberth said, long-wave forcing to be largely radiated to space. Short-wave is a different story, as it penetrates the ocean. All forcings are not created equal.

David – A common misconception is that solar shortwave radiation contributes more to ocean heating than longwave back radiation from atmospheric GHGs. In fact, the back radiation warms considerably more than the solar contribution. This has been discussed in some detail elsewhere, but to be brief, both SW and LW radiation are very rapidly mixed in the upper ocean mixed layer by turbulence (with some help from convection). On a W/m^2 basis, the LW radiation dominates. The net flux from each is outward toward space, and so the effect of back radiation is sometimes described as “reduced cooling”, but at the level of individual photons, the effect is an actual physical warming. In the tropics, evaporative cooling limits the rise in mixed layer temperatures, but because it is indeed a “mixed” layer, with relatively shallow temperature gradients, the evaporation is the result of both the solar and LW contribution. The effect of LW radiation on land and on water differ somewhat in quantity, but not in principle – both are warming phenomena (or if you prefer, “reduced cooling” phenomena).

Fred says…
“In fact, the back radiation warms considerably more than the solar contribution.”

Fred, I will never ever believe that statement. I don’t care how many peer reviewed papers are produced about it or how many lab experiments are done.

MY LIFE EXPERIENCE of owning swimming pools (some very large) has shown me over and over again that a large body of water ALWAYS warms quicker under direct sunlight than not.

At the tropics, up to 1000 W/m2 of direct sunlight can reach the top of the ocean. This light reaches down to about 100 metres depth (about 3%).
Compare that to the pissy backradiation that penetrates only nanometres, and I’m supposed to believe that warms the oceans more than direct sunlight? Pullease

I think the argument about back radiation is that it prevents the heat accumulated during the day from escaping as fast as it would otherwise, not that it directly warms by itself. Then again I may have misunderstood.

Hasn’t this tired old chestnut been trotted out and beaten down enough already?

You’re repeating a basic mistake by people who just don’t grasp the simplest tenets of how to apply Thermodynamics to closed systems, or who do understand but just can’t get over their prejudices.

Heat in the form of radiation doesn’t know what direction it’s flowing; there’s no GPS tracking system inside of photons telling them not to head from a colder body to a warmer one.

Heat backradiated from a colder body to a warmer body will not result in a net heat transfer from the colder to the warmer, if you take the closed system as a whole without cherrypicking just the part of the dynamic exchange that doesn’t make you happy.

Thermodynamics is not violated.

The effect of backradiation is still considerable, measurable, confirmed by instruments in the field and in the lab, verified, well-founded, and simple physics. It has too much support in fact to even be worth posting a link to source material. There’ve been hundreds of posts to previous discussions on this at Climate Etc.

Jim,
Why would the back radiation end up vaporizing any more than the other heat flows do?

Neither SW alone nor LW alone can heat the water as much as LW radiation cools it, but put together they warm. The SW penetrates deeper and heats those layers enough to initiate mixing in the uppermost ocean. This is one reason for a mixing layer of almost constant temperature which extends from a few tens to a few hundreds of meters into the ocean. The winds also mix, but the SW radiation alone would be enough to induce it, as mixing is the only way those layers can release the extra heat they receive from SW radiation.

Jim,
When you consider the real world, you cannot exclude part of the reality without making serious errors.

In this case it is essential that the penetration of SW radiation leads to a mixing layer of almost constant temperature. Thus the surface is not significantly warmer than layers a bit deeper; actually it may be slightly cooler. The evaporation is controlled by the temperatures of the water and air at the surface and the moisture of the air. The mechanisms that heat the surface water are irrelevant when the temperature is known.

When analyzing a system, discerning the workings of the subsystems is often enlightening. That is what I’m doing. The whole may display emergent properties, but an understanding of the parts is helpful in understanding how the entire system works.

Jim,
Analyzing subsystems may be enlightening, but then one must remember that it is also limiting. Otherwise one may be led to such erroneous conclusions as the idea that LW radiation would only cause additional evaporation and therefore not warm seawater with essentially the same efficiency as SW radiation.

Are you claiming: “Otherwise one may be led to such erroneous conclusions as the idea that LW radiation would only cause additional evaporation and therefore not warm seawater with essentially the same efficiency as SW radiation.”
AFAIK, incoming LW does not heat the ocean with the same efficiency as SWR. The Trenberth figures seem to back this up.

The OUTGOING LWR from the skin of the ocean originates thus: 1. The SWR heats the ocean. 2. The heat is then spread predominately by conduction, convection, and mixing from wind. 3. Any water thus heated that moves to the skin radiates away the heat as LWR. But the heat in this scenario originated with SWR. The SWR is absorbed and converted to heat energy.

(I was waiting for Fred to answer BH on the following but he seems not to have so I’ll try.)

At the tropics, up to 1000 W/m2 of direct sunlight can reach the top of the ocean. This light reaches down to about 100 metres depth (about 3%).
Compare that to the pissy backradiation that penetrates only nanometres, and I’m supposed to believe that warms the oceans more than direct sunlight? Pullease

BH, I think I see two problems with this line of reasoning.

First I think you may be miscalculating the ratio of insolation to back radiation. Your figure of a kilowatt only holds at noon, whereas the back radiation of 324W (Fig. 7 of Kiehl and Trenberth) is the average over 24 hours.

In order to average insolation over 24 hours you must divide the noon value by 4. (If it were constant for 12 hours, then divide by 2, but of course it isn’t and the math turns out to require 4.) Hence the correct comparison over 24 hours is 250W vs. 324W.

(It is interesting to ask how constant the 324W is. I used this infrared thermometer (with emissivity set to unity) to directly measure the temperature of the sky vertically a couple of days ago at 8 pm and just now (1:30 pm). Even though the air temperature around the thermometer is 12 °C right now, via a different kind of thermometer, the IR one shows the sky at -10 °C. At 8 pm the other night it was -15 °C.

These temperatures in Kelvin are 263 K and 258 K respectively. So we have 5.67*2.63⁴ = 271W of back radiation from the sky at 1:30 pm and 5.67*2.58⁴ = 251W at 8 pm. (The 5.67 is the Stefan-Boltzmann constant for when temperature is given in units of 100 K.) But this is in the dead of winter at 37 °N, so it is hardly surprising to find it quite a bit lower than the global annual average of 324W. But even then it averages to more than the 250W of insolation. I will make a few more measurements to see how much it varies from day to day. If you run across one of these things and measure the day and night temperatures of the sky, namely the back-radiation, let us know what you get in the heat of the Australian summer.)
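The arithmetic in the comment above can be checked in a few lines. σ = 5.67e-8 W/(m² K⁴); the comment’s 5.67·(T/100)⁴ shortcut is the same formula with T in units of 100 K:

```python
# Stefan-Boltzmann flux for the sky temperatures measured above.

SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def back_radiation(temp_k: float) -> float:
    """Blackbody flux (emissivity set to unity) at temperature temp_k in kelvin."""
    return SIGMA * temp_k ** 4

for label, t in [("1:30 pm sky (-10 C)", 263.0), ("8 pm sky (-15 C)", 258.0)]:
    print(f"{label}: {back_radiation(t):.0f} W/m^2")  # 271 and 251 W/m^2
```

This reproduces the 271W and 251W figures quoted in the comment.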

Second, the reason the sea is not black is that 2% (at normal incidence) is reflected at the surface and a lot more of the light entering the sea is scattered back out. But neither is the sea white, because water absorbs more strongly at longer wavelengths, making the scattered light predominantly blue. Under red light (e.g. a red searchlight aimed down at night) the sea can be expected to look black except for the specular 2% reflection in the surface, consisting of a mirror image of the searchlight. Under back-radiation it is also black.

The point here is that the sea reflects more insolation than it does back-radiation. Hence it absorbs all but about 2% (the reflected part) of the 324W of back-radiation, but significantly less of the 250W of insolation due to not absorbing all the light at the blue end of the spectrum. That can be more than it may seem because human eyes are less sensitive to blue than green and we tend to underestimate the energy of a given brightness of blue.

the IR one shows the sky at -10 °C. At 8 pm the other night it was -15 °C.

Sorry, ignore that. Turns out the thermometer had accumulated lint in its collector from carrying it around in my pocket for weeks. The lint was at body temperature and generating enough thermal radiation of its own to completely throw off the reading for low temperatures like the sky. In future I’ll keep a cover on it to keep out the lint.

Theorists have no appreciation for the hazards of experimental science. Whoever said “seeing is believing” must have been a theorist.

This makes sense given that the tropopause tends to be around −40 °C (typical of the temperature outside your plane, which conveniently is also −40 °F).

Now I want to know what the temperature is (a) at noon, (b) in midsummer, (c) with clouds, (d) at other latitudes. Only then will I feel I’ve confirmed or disconfirmed Trenberth and Kiehl’s average figure for back radiation of 324W.

OK, I’ll concede your 324W to 250W argument as well as conceding the blue spectrum observation (good one).
Shall we say only 81W of SW (that’s just a quarter of 324W LW)?

For the two to have equal warming effect, my SW only has to penetrate 4 times deeper than LW. It does, way way more. Many metres versus less than a millimetre.

Also your LW suffers from quick evaporation especially at low lats and turbulent waters. My SW suffers the same only at the top of the surface. Light penetrating further down doesn’t suffer from evaporation.

Without mixing, the SW would continuously heat the deeper layers that it reaches. That would continue until the water is so warm that the temperature forces mixing. There is no way there could be a significant inversion in the ocean water without a lower salinity at the surface than below.

The consequence is that even a rather weak penetrating radiation leads to layer of essentially constant temperature at the top of the ocean. Wind induced mixing is not needed for this although it may lead to a thicker mixing layer.

The evaporation is ultimately determined by the total radiative energy balance, without significant dependence on the ratio of the SW and LW components.

That would work if you’re comparing apples and apples. But the 324W is an orange in this case.

The reason for dividing the 1000 watts of sunlight reaching the earth’s surface by four is
(i) it doesn’t reach the night time back of the planet, which radiates both up and down both day and night, and
(ii) your 1000W is only at high noon near the equator, which is not representative of the whole planet.

You lose one factor of 2 in the fact that the back of the Earth is not getting any sunlight even though its radiating at the same rate as the front.

You lose the other factor of 2 in the fact that, in say summer, the north pole is getting minimal warmth while the south pole is in pitch darkness for six months.

The former factor of 2 is obvious. The latter factor of 2, bringing the total up to 4, is not so obvious, because doing it right requires integrating over the surface of the earth, which requires high school calculus. That makes it a bit of a challenge for those not equipped to teach beyond elementary school maths. How’s yours, BH?

The 324W of back radiation is an annual average that fluctuates hugely. One might suppose its fluctuation to be dependent on day vs night, but instead it is dependent on cloud vs clear. At 11 am this morning it was a gorgeously sunny day and I measured 232 W/m2. Yesterday it was overcast and I measured 364 W/m2 (and this is midwinter in Palo Alto at 37 N!).

The difference is that clouds
(a) collect heat dumped into them and
(b) radiate heat much more effectively than anything else up there, including CO2, which is constant year-round, unlike water vapour, which is hugely variable.

In contrast, radiation from the Sun is boringly predictable: 1000W reaching the equatorial surface at noon every furshlugginer day. Much less at night and at the poles.

The latter factor of 2 bringing the total up to 4 is not so obvious because doing it right requires integrating over the surface of the earth, which requires high school calculus.

What I meant by this is that computing the area of a sphere takes calculus. Taking the unit of length to be the radius of the Earth, the surface of the Earth has area 4π. But if you were standing on the Moon during an eclipse of the Sun by the Earth, you would see just a circle of unit radius blotting out the Sun. That circle has area π, only one quarter the area of the surface of the Earth.

The heat captured by Earth uses the circle area of π. The heat radiated up and down between the surface of the Earth, the atmosphere, and outer space, uses the sphere area of 4π. If you accept those two area formulas then you have the factor of 4 without any calculus, which is then hidden in the math for the area of a sphere.

Actually, the reason for dividing by four is simple geometry. A sphere has four times the surface area of a circle of the same radius. Hence the amount of sunlight intercepted by the Earth can be represented by a circle of Earth’s radius, but to average that over the entire spherical surface you need to divide by four to account for the area of the sphere.
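The disc-versus-sphere argument can also be checked numerically without any calculus, by summing the projected flux over rings of the sunlit hemisphere (a sketch; S is an arbitrary flux unit):

```python
# Numerical check of the factor of 4: sunlight intercepted by the Earth's
# disc (area pi R^2) vs. the sphere's surface area (4 pi R^2).
# Sum the incident flux S*cos(zenith) over rings of the sunlit hemisphere,
# then divide by the full sphere area; the mean should come out to S/4.

import math

S = 1.0        # flux at normal incidence (arbitrary units)
R = 1.0        # sphere radius (arbitrary units)
N = 100_000    # number of zenith-angle steps

intercepted = 0.0
dtheta = (math.pi / 2) / N
for i in range(N):
    theta = (i + 0.5) * dtheta                        # zenith angle, sunlit side
    ring_area = 2 * math.pi * R**2 * math.sin(theta) * dtheta
    intercepted += S * math.cos(theta) * ring_area    # projected flux on the ring

sphere_area = 4 * math.pi * R**2
mean_flux = intercepted / sphere_area
print(f"mean flux / S = {mean_flux:.4f}")  # ~0.2500
```

The sum converges to exactly one quarter, i.e. a 1000W noon flux averages to 250W over the whole sphere.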

But now that I think of it, your point that insolation penetrates deeper than backradiation is an excellent one. If the skin heating from the latter increases evaporation sufficiently then it might be enough to overcome the big difference between < 250W and 324W.

However the usual turbulence at the surface might be able to mix the skin with what's below faster than evaporation can remove the heat. If I had to guess I'd say the mixing ought to undermine the evaporation. But that's not a knockdown argument, and it would be interesting to quantify the mixing and the evaporation enough to tell for sure.

Vaughan,
If the SW did not penetrate further down to heat the lower layers until mixing starts, and without wind-induced mixing, the temperature gradient of the upper ocean would be much stronger. With the same SW (but assumed non-penetrating) the skin would end up at approximately the same temperature as in the real ocean, as this corresponds to the energy balance, but the layers below would be colder.

Beats me, BH, but maybe we can work on it together. (I’m just a theoretician, all we theoreticians do is try to explain what’s bugging you empirical guys. If we never ever hit a home run we theoreticians would all be out of work.)

From part 1
Finally, we find that the wind power in the tropical Pacific is subject to significant decadal variations, especially related to the climate shift of the late 1970s (e.g. Guilderson and Schrag, 1998; Fedorov and Philander, 2000). Thus, our results indicate that accurate estimates of the global wind power, and hence of the net wind work on the ocean general circulation, require a careful consideration of the tropical ocean.

Page 17

The decrease in the mean wind power in the tropical Pacific over the last 50 years has resulted in a reduction of the ocean available potential energy, which indicates a reduction in the thermocline tilt over the same time interval (Fig. 5). This flattening of the thermocline occurred around the time of the climate regime-shift in the late 1970s (Guilderson and Schrag, 1998) and is associated with a weakening of the zonal winds along the equator (Vecchi et al., 2006). A flatter thermocline can lead to stronger El Niño events (Fedorov and Philander, 2000, 2001).

Fedorov 2009

On interannual time scales, the perturbation available potential energy E is anticorrelated with sea surface temperatures in the eastern tropical Pacific so that negative values of E correspond to El Niño conditions, and positive values correspond to La Niña conditions (Fig. 1). This correlation is related to changes in the slope of the thermocline associated with El Niño and La Niña. When the thermocline slope increases (as during La Niña; Fig. 1a), the warmer and lighter water is replaced by colder and hence heavier water, thus raising the center of mass of the system and increasing its gravitational potential energy.

As the GPE increases there must be a tithe to pay; how much, and from whom, are the relevant issues.


Vaughan,
Your careful breakdown of energy from insolation vs back radiation is very useful for understanding the energy balance of the oceans. Thank you. I believe surface turbulence due to wind and waves has been tested and shown to essentially always increase evaporation, day or night. I will look for the study.

a large body of water ALWAYS warms quicker under direct sunlight than not.

A few degrees of warming during the day is nothing compared to what the back radiation is accomplishing day and night for your swimming pools, namely preventing them from freezing to −18 °C. Don’t underestimate the value of the back radiation to life on Earth, without which it would be Snowball Earth.

Let’s leave life on Earth to one side and go back to the swimming pools.

A similar question to the one I asked above just now about the doldrums around the tropics. My pool doesn’t experience waves or churning. How is it that the 5 ft and 6 ft levels are not frozen? Surely back radiation doesn’t penetrate that deep. Micrometres only, in fact.

Also, as an experienced late-night skinny dipper, I can tell you that when the air T drops, the surface water gets quite cool. I walk over to the deep end to get warm. The deep end didn’t warm via back radiation IMHO, because it’s never warm during overcast days regardless of air Ts.

Here in Brisbane, we’ve had nothing but overcast and rainy days since before Xmas, though Ts have been hovering around 26–29 max and 22–24 min. The kids have used the pool only twice, both sunny days, and they’ve been complaining about the cold water since.

Not too scientific I know, but one can’t beat real life experience ha?

Couldn’t agree more about real life experience, BH. Though some of the $30 or less instruments one can buy on Amazon etc. these days can enhance that experience, at least for those wondering
(a) how strong it is (ans: 324W/m2 or so averaged over the whole planet) and
(b) how much it varies between cloudy and clear days (ans: hugely!).

I would expect the top few inches of your pool to be largely unaffected directly by sunlight, given how deep SW goes in water, as you rightly point out.

One factor is indeed the down radiation, which is at a wavelength that the top few millimetres of your pool will absorb.

But I’m not sure radiation is the biggest factor here, since even though it varies hugely between cloudy and clear days that may not be enough to explain variations in temperature of the top few centimetres of your pool.

More important may be the impact of the patio around your pool. You skinny dippers have surely noticed how unbearably hot the patio gets sometimes on a clear day. Even though the down radiation is maybe a third of what it is on a cloudy day, the direct sunlight heats the ground and hence the bottom few metres of air above it pretty quickly, which then drifts over your pool and heats the top few centimetres of water.

On a hot day you should find that layer much hotter than say a metre down. Let me know if I’m not making sense here.

When the clouds are preventing the sunlight from heating your pool patio, the air cools off and so does your patio and hence the air and hence the top layer of the pool water.

Show this to your kids and ask them whether they think air temperature caused by hot patios explains their complaints about the cold water at the top of the pool. I’ve been finding the current generation of US kids pretty smart and I don’t see any reason why Aussie kids today should be any different. We were two generations ago.

One nice thing about pool water whose top few inches are hot is that it tends to take the edge off the colder water lower down.

(On a barely related note, even though my parents’ MBBS degrees were from Melbourne, and my maths and physics and CS degrees were from Sydney, my sister’s degree was from Queensland and my parents lived in Coraki then Nambour then Surfers then Palm Beach. So my higher-latitude origins notwithstanding, I’m not entirely unacquainted with these lower latitudes you speak of. You my friend are living in the lap of thermal luxury compared to Melbourne.)

……” Though some of the $30 or less instruments one can buy on Amazon etc. these days can enhance that experience, at least for those wondering
(a) how strong it is (ans: 324W/m2 or so averaged over the whole planet) and
(b) how much it varies between cloudy and clear days (ans: hugely!).”………

As you rightly say you can point your $30 instrument at anything you like.
The composition of the atmosphere as well as its temperature has huge changes.
So then when your instrument reads 324W/m2 how confident are you of its accuracy?

So then when your instrument reads 324W/m2 how confident are you of its accuracy?

Whoa, that question is orders of magnitude in quality above your usual comments, Bryan. It would be greatly appreciated if you could sustain that level in future.

I have a query in to the instrument’s manufacturer on the question of its accuracy for this purpose, relative to pyrgeometers designed for exactly this kind of measurement. Until they respond you have the moral high ground here, Bryan, good job.

(Always ready to apologize for any perceived irony on these blogs when I’m caught in flagrante delicto.)

……”I have a query in to the instrument’s manufacturer on the question of its accuracy for this purpose, relative to pyrgeometers designed for exactly this kind of measurement. “……..

It’s a welcome development that you have gained enough perspicacity from participating on Judith’s site to rise to the level of questioning the accuracy of instrumentation.

But the same question should be put to the manufacturers of the pyrgeometers.

How do the readings take account of the mixed composition and temperature of the atmosphere?
A further question would be: how exactly are the pyrgeometers calibrated against the International set of Fundamental units?

1. Some of the models use a fixed value for emissivity.
There is an offset control, and the Stefan–Boltzmann equation is supplied, with the operator left to make their own adjustments.
I consider the results from these to be of little use.

2. The more modern ones, particularly those which are used as reference instruments, are placed in a blackbody enclosure and calibrated in the range of expected use.
I would think that this type would be of use in measuring sources which we were sure were perfect black bodies.
Does the atmosphere qualify for this standard?

The international agreement on cross-checking with the reference instruments seems to be a voluntary arrangement among serious pyrgeometer users.
As far as I know this instrument has not been endorsed as being fully in compliance with bodies responsible for maintaining and developing the International set of Fundamental units and their measurement.

They’d better be using emissivity 1 (E100 on the MT-250) or their results would mean something different from what I’m talking about, which is the IR energy reaching the surface of the Earth from above. All these instruments use IR energy to heat a thermopile which returns the wattage received from a precisely calibrated area. The Stefan-Boltzmann formula is irrelevant when talking about watts of down radiation since it serves only to convert that radiation to temperature.

For instruments that read out temperature rather than watts, emissivity 1 is the only way to recover the watts it received. This is because what a lower emissivity setting accomplishes is to raise the temperature being displayed by the instrument in response to a given wattage, which is not at all what one wants here.

I would think that this type would be of use in measuring sources which we were sure were perfect black bodies.

Again you’re confusing power and temperature. These instruments measure power, not temperature. They use the black body assumption to convert power to temperature. Hence the correct way to read power from an instrument that reads out temperature is to realize how it derived temperature from power and simply invert its calculation. You then have precisely how much power it measured.

Even if the black body assumption is wildly wrong, even if the Stefan-Boltzmann constant is off by orders of magnitude, undoing the computation of temperature that results from using it will give you the right power—only the temperature on the readout was wildly wrong but that’s irrelevant after you’ve undone it.
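As a concrete sketch of that inversion, here is a minimal Python illustration. It assumes the readout was computed with emissivity 1, as discussed above; the function names and sample values are mine, not from any instrument manual:

```python
# Invert an IR thermometer's temperature readout back to radiant flux.
# Assumes the instrument applied the Stefan-Boltzmann law with emissivity 1,
# i.e. it displayed the T for which flux = sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_from_readout(temp_celsius):
    """Recover the power per unit area (W/m^2) the thermopile received."""
    t_kelvin = temp_celsius + 273.15
    return SIGMA * t_kelvin ** 4

def readout_from_flux(flux):
    """The inverse: the temperature (deg C) an emissivity-1 instrument shows."""
    return (flux / SIGMA) ** 0.25 - 273.15

# A clear-sky reading of -25 C corresponds to roughly 215 W/m^2 of down
# radiation; the Kiehl-Trenberth global mean of 324 W/m^2 corresponds to
# a readout of roughly 2 C.
```

The point being: even if the black-body assumption were wrong, the round trip through temperature and back is exact, so the recovered wattage is whatever the thermopile measured.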

As far as I know this instrument has not been endorsed as being fully in compliance with bodies responsible for maintaining and developing the International set of Fundamental units and their measurement.

As far as I know, the micrometers I pay $50 for to measure thickness of things have not been endorsed as being fully in compliance with bodies responsible for maintaining and developing the International set of Fundamental units and their measurement. Are you suggesting I’ve been ripped off?

When my IR thermometer tells me on a clear night like tonight, just now at 11:30 pm with just some light cirrus cloud off to one side, that the temperature of the sky is -25 C, and on a bitterly cold and totally clear night that it’s -39 C, and when it’s heavily overcast that it’s 10 C, and when this all fits extremely well with the physics of climate science, then the way I feel about this thermometer is the way I feel about mercury and alcohol thermometers measuring air temperature, and rulers measuring length, and scales measuring weight (for less than 0.2 kg I use this, which is accurate to 0.00001 kg, but for that accuracy you pay a premium).

I am very comfortable with measuring equipment, and am very sensitive to when it seems to be yielding inconsistent results. So much so in fact that when my MT-250 IR thermometer was telling me the sky was -10 C when I was expecting much lower I got suspicious and looked at its little heat collector. Turned out it had filled up with lint from my pocket, where the device had been living for weeks. When I blew out all the lint under high pressure it read what I’d been expecting.

So yes, equipment can deliver wrong results. You have to have a nose for when they’re doing so, and you have to understand all the ways your instruments can lie to you, which can be legion for the unwary, especially with complicated instruments.

But when you and your instruments understand each other, they can serve you well. As I feel reasonably confident my little MT-250 does. Though I really want to hear back from MicroTemp for confirmation of my confidence. I would be very disappointed if they told me they didn’t trust their own instrument.

The instruments seem to have changed in the course of the reply exchange.
I thought from your comment that you had bought a handheld pyrgeometer.
I now find that you are talking about an IR thermometer.
However, you might like to look at how a temperature measuring device reacts if the emissivity changes: http://www.pyrometer.com/Tech/emissivity.html

The genius of R W Wood is shown by the brevity of method needed to reach a robust conclusion.
This he established in this case;
1. The heating method inside a greenhouse (glasshouse) is by stopping convection.
2. The radiative effects of CO2 and H2O near the Earth’s surface are so small they can be ignored for most practical purposes.

…….”namely preventing them from freezing to −18 °C. Don’t underestimate the value of the back radiation to life on Earth, without which it would be Snowball Earth.”…..

There is nearly always (even at night) more radiation of every frequency leaving the Earth’s surface than entering it from back radiation.
Also, the radiative effect of the 1% or so of the atmosphere that’s involved has been shown by R W Wood and others to be so small as to be ignored for all practical purposes.

There is nearly always (even at night) more radiation of every frequency leaving the Earth’s surface than entering it from back radiation.

Source?

Also, the radiative effect of the 1% or so of the atmosphere that’s involved has been shown by R W Wood and others to be so small as to be ignored for all practical purposes.

What on earth are you talking about, Bryan? Wood wrote his 1.5-page paper in 1909. First, he said he didn’t pretend to have looked into this in any detail. Second, in 1909 no one had a clue about the sorts of things we know today.

Quoting Wood on climate is like quoting Jesus on how to raise kids today. If we haven’t learned anything about raising kids since Jesus then what on earth was the human race doing all that time? Trimming its toenails?

Well, I have one advantage over you, Bryan. Since I don’t believe anyone on either side of this disagreement, I’ve looked into it personally. So far I have found not a shred of data to support what Wood was claiming. I would therefore greatly appreciate guidance as to how I could possibly fudge my experiments to get anything remotely like what Wood claimed.

There is no way Wood could have observed what he claimed. It is impossible theoretically, and when you measure it empirically, what Wood claims to have observed is far further from the theory than what is easily established by experiments anyone could perform, even you conceivably.

Have you considered trying to confirm Wood’s conclusions yourself? For that matter have you considered calculating how his experiment should have turned out? The calculations are well within the capabilities of a competent high school physics student. Where do you stand in that regard?

Wood’s expertise was in ultraviolet radiation, and in 1909, when he dashed off that little 1.5-page note, he had published nothing at all previously about atmospheric physics, unlike his contemporary the great astronomer Charles Greeley Abbot, who at that time was director of the Smithsonian Astrophysical Observatory and later served as Secretary of the Smithsonian Institution from 1929 to 1945.

Five months after Wood published his note, which gave no numbers at all about his experiment except his claim of 55 °C for both boxes, with the caveat that he hadn’t gone into this in any detail, Abbot did go into it in detail, with a great many numbers, in a follow-up paper in the same journal Wood published his little note in. Abbot expressed great puzzlement that Wood could have observed no difference between the two boxes. And no wonder, since it was theoretically impossible.

If you wrote a 1.5 page paper with a single number, 55 degrees, and submitted it to a journal, and the conclusion was in striking contradiction to what theory would have predicted, would you expect it to be accepted today?

G&T did a brief test and came to the same conclusion as Wood.
However, have you thought further about the paper you read only last week?
Especially as it comes from a source with no “spin” on the AGW debate.

The way I read the paper, it gives massive support for the conclusions of the famous Wood experiment.

Basically the project was to find if it made any sense to add Infra Red absorbers to polyethylene plastic for use in agricultural plastic greenhouses.

Polyethylene is IR-transparent, like the rock salt used in Wood’s experiment.

The addition of IR absorbers to the plastic made it equivalent to “glass”.

The results of the study show that (page 2):

…”IR blocking films may occasionally raise night temperatures” (by less than 1.5C) “the trend does not seem to be consistent over time”

More or less brief than Wood’s? Brevity was Wood’s problem. He didn’t even think to ask whether “temperature of the box” was well defined. He said the two boxes were within 1 degree of each other.

If instead of using just one thermometer he’d measured the temperature at the top and bottom of the glass window and the temperature at the bottom of the box, and if he’d been doing the experiment midsummer in California instead of midwinter in Maryland, he might have gotten something like 39, 54, 75. For the salt window he should have seen 36, 48, 74.

So if he’d simply sat his thermometer on the bottom of the box he’d have seen a 1 C difference.

But if it had occurred to him to spend an extra few minutes exploring the question of whether this 1 C difference obtained everywhere in the box, he would have been puzzled as to why there was such a huge difference elsewhere in the box.

Given that he did the experiment to prove his point that salt and glass would give the same temperature, he’d have been faced with a difficult decision: publish just what he measured at the bottom, or mention what happened elsewhere in the box?

I prefer to think that looking more deeply into the matter simply didn’t occur to him. Not out of any generosity on my part in attributing motives, but because that’s what he wrote himself at the end of his paper: “I do not pretend to have gone very deeply into this matter.”

A masterful understatement in view of the huge impact of his paper a century later.

David – Some links to measured back radiation are given by Judith Curry in her post on radiative transfer models. The measurements confirm values estimated by RTE modeling. Almost 100 percent of downwelling IR radiation is absorbed by ocean water (based on the IR absorptivity of water), and the consequences of mixing are apparent in the observation that the temperature of the skin layer, which absorbs the back radiation, is actually slightly cooler rather than warmer than the water directly underneath. (If I can retrieve a reference I just saw on this, I’ll link to it here).

Also, while I’m sure you already know this, I’ll point out that while loss of heat from the surface is due to evaporation as well as radiation, the laws of physics tell us that we can’t cool water to a lower temperature by heating it. In other words, surface evaporation can be increased only by making the surface hotter. In this case, the entire mixed layer is hotter than it would be without both SW and LW absorption, but the skin layer is not disproportionately hotter, which would be the case if the evaporation were mediated mainly by the LW component.

Finally, the air overlying the tropical oceans is almost 100 percent water-saturated, and so evaporative capacity is dictated primarily by the ability of winds or other forms of surface turbulence to blow away saturated air, and by the ability of radiative warming of the air to increase its capacity to retain water vapor via the Clausius-Clapeyron equation.
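The Clausius-Clapeyron effect mentioned here is easy to put rough numbers on. A sketch using the Magnus approximation for saturation vapour pressure (my choice of approximation, not something from the comment above):

```python
import math

def saturation_vapor_pressure(temp_celsius):
    """Magnus approximation to saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * temp_celsius / (243.12 + temp_celsius))

# Warming tropical marine air from 28 C to 29 C raises its capacity
# to hold water vapour by roughly 6 percent.
e28 = saturation_vapor_pressure(28.0)
e29 = saturation_vapor_pressure(29.0)
print(f"{100 * (e29 / e28 - 1):.1f}% more vapour capacity per degree")
```

That ~6–7 percent per degree at tropical temperatures is why even modest radiative warming of the overlying air matters for the evaporative budget.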

Another way of understanding the situation is based on the energy balance of the skin layer.

The layer is heated by LW and SW radiation and cooled by emission of LW radiation and by evaporation. In addition, there is convection and conduction in the water bringing heat from below, and some conduction to the air. The convection and conduction must be heating, not cooling, the skin, because SW penetrates deeper into the ocean. The additional heat absorbed by the deeper layers can escape the ocean only through the skin, and it can reach the skin only by convection and some conduction.

The skin cannot be much colder than the layers immediately below because that would lead to rapid mixing, but there is a small difference in this direction.

The net effect of LW radiation is not to warm but to cool the skin. This is also proof that LW is not linked in any particular way to evaporation. All the energy of SW heating is needed for the actual balance of energy flows, and that includes the part of the radiation that penetrates tens of metres or more into the ocean.
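That sign argument can be checked against the familiar Kiehl and Trenberth (1997) global-mean surface fluxes, used here purely as round illustrative values:

```python
# Global-mean surface energy flows (W/m^2), roughly the Kiehl & Trenberth
# (1997) values; signs are from the skin layer's point of view.
lw_down  = 324.0    # back radiation absorbed in the top millimetres
lw_up    = -390.0   # thermal emission from the skin
latent   = -78.0    # evaporation
sensible = -24.0    # conduction/convection to the air

net_lw = lw_down + lw_up             # -66 W/m^2: net LW *cools* the skin
deficit = net_lw + latent + sensible  # -168 W/m^2

# The deficit is closed by the ~168 W/m^2 of absorbed sunlight, most of it
# deposited below the skin and carried back up by convection and conduction.
```

So in these global means the skin runs a 168 W/m^2 deficit that only the SW absorbed below it can make good, which is Pekka's point in flux form.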

If Kiehl and Trenberth’s figure of 324 W/m2 of average back radiation from the sky to the surface is to be believed, then the corresponding temperature of the source of that radiation is sqrt(sqrt(324/5.67))*100 = 275 K = 2 °C. That’s not out of line with my measurements of the sky’s temperature using an infrared thermometer, which have ranged from 12 °C to −39 °C. (Right at this minute, 15 minutes to midnight in midwinter at 37 °N with a perfectly clear sky but grounds to suspect high humidity up there, it looks like −23 °C.)

So unless the ocean’s skin temperature averages below 2 °C I would have to agree with you (as I almost always do).

Fully half the energy of direct sunlight is in the infrared, which is relatively strongly absorbed by water. It is the blue light that passes through more easily and is scattered back and re-emitted to make the deep ocean look blue. So I can easily see the ocean skin being warmed by sunlight in addition to the warming effect of the back radiation.

The Knox and Douglass paper ended its data in 2008, and since that time OHC has risen globally. The rise isn’t as large as it was from the mid-70s to the early 2000s, but it is a rise.

The flattening over the past few years, on the other hand, is due to the sharp declines in the North Atlantic and South Pacific. And the North Atlantic had been responsible for a disproportionately large portion of the rise since the mid-70s, which hints at AMO/AMOC.

Additionally, there is little to no evidence of an anthropogenic signal in the NODC (Levitus et al.) OHC data. To see this one has to divide the Atlantic, Indian and Pacific ocean basins into tropical and extratropical subsets. The major upward shifts (which are the largest component of the long-term rise) in OHC coincide with strong or multiyear La Nina events, and with shifts in sea level pressure. The additional variability of the North Atlantic should result from AMOC.

Fred
I also have trouble believing that solar SW radiation isn’t the more significant source of energy although, like Baa, my disbelief is based merely on a practical perception rather than any particularly solid scientific foundation. I could therefore be easily swayed. Please could you link to the discussion/paper you referred to and I will attempt to overcome my uninformed skepticism? As an aside, regardless of this SW/LW discussion, energy still seems to be missing and therefore whilst you may be correct to address a popular misconception, it doesn’t seem to address the heart of the problem. Where do you think it’s gone? Regards, Rob

JC also stated that some portion of LW heats the seas, but I have seen no numbers on the ratio of SW/LW heating. While LW might manage to heat oceans to some degree, it has not been established here that it is significant compared to SW.

The first two values add energy to the ocean; the last three take it away.

There is not much reason to doubt that these numbers are reasonably close to the truth. The net flow, which is given as 1.3 W/m^2 for the oceans (0.0 W/m^2 for the land areas and 0.9 combined), may leave more room for speculation, but the bigger numbers need not change much to modify this difference drastically.

Pekka, the LW back radiation coming to the oceans seems an extraordinary amount compared to the SW radiation arriving at the surface from the sun and the amount of LW radiation emitted from the ocean. I know it is quoted in the paper you cite, but (pardon my lack of knowledge) has it actually been measured?

Rob,
The LW radiation from the ocean is the best known of all. Ocean water is an almost perfect black body for IR. Thus this is known precisely from the Stefan-Boltzmann law, or in more detail from Planck’s law.

The back radiation from the atmosphere including clouds is measured at some locations, but I do not know the details.

The argument is whether the second item on your list actually heats the ocean. Any incoming LWR would vaporize the skin of the ocean. Maybe I am missing something? (I appreciate the fact that the net LWR cools, but I am focusing on incoming LWR only. What does it do to the ocean? It probably vaporizes the skin and does not heat the ocean to any depth. That’s what I think.)

The aggregate error bars on this estimate are likely to exceed the residual 1.3 W/m^2. For example, sensible heat is estimated at 12 W/m^2. This suggests anywhere between 11.5 and 12.5 W/m^2 on numerical accuracy alone, even before we consider estimating accuracy. The net flow could easily be negative or positive.

My point, based directly on what Trenberth, Fasullo and Kiehl have written, is that the aggregate error is many times larger than the residual. They believe for other reasons that the residual is likely to be close to 0.9 for the global average and 1.3 for the oceans. Then they use this estimate as a constraint to calculate the other numbers better than they could otherwise.

This study does not give any support for the numbers 0.9 or 1.3, it just uses these numbers and tells where they should appear in the balance.

Actually, they tell us that they would rather get a number like 7 than 0.9 if they did the analysis the other way round. This 7 is the number Trenberth refers to as “surely wrong”. Other earlier estimates have resulted in even larger discrepancies.

The empirical data and its analysis of energy fluxes are not accurate and reliable enough to help in determining the residual.

It is wrong to say that longwave heats the ocean as net longwave has a cooling effect. Otherwise it is equivalent to saying reflected shortwave cools the ocean. It doesn’t make sense to talk about components individually in terms of understanding the flow of energy.

When measurements disagree, it is tempting to speculate in the direction that supports one’s preconceived views, but that is a treacherous approach to arriving at an accurate understanding. In this case, the principal disagreement is between the CERES observational data indicating that a positive (warming) imbalance has been increasing, and various sets of observational ocean data (ARGO, sea level, etc.) that fail to account quantitatively for the putative excess of stored heat. My own speculation will be quite tentative.

First, there is no a priori reason why all the measurements might not be contributing to the discrepancy. The CERES data are known to lack very high absolute accuracy, but their stability over time is good, and so it is likely that the TOA difference between incoming and outgoing flux is changing in a warming direction. In other words, I suggest a need to account for at least some of the calculated excess, without concluding that the measurements provide an exact accounting. I should mention parenthetically here that Roy Spencer emphasizes a downward trend in outgoing shortwave (SW) radiation, while Kevin Trenberth focuses on the more recent reduction in the longwave component. My guess is that the reduced SW flux is attributable at least in part to recent reductions in solar irradiance, but other factors, including cloud changes, may also have contributed.

Assuming stored heat as yet unaccounted for, we know that the ARGO float process, although improving, is still undergoing technological improvements and remains subject to sampling limitations. Nevertheless, the ARGO data do show some indication of a recent cessation in ocean heat content (OHC) increase, with some interannual dips as well. The most serious limitation of the ARGO data is the restriction to the upper ocean – mainly down only to 700 meters, although this is changing. When one evaluates OHC changes from another perspective – steric sea level (the expansion of water as a consequence of heating) – recent data are consistent mainly with a flattening of the OHC trend rather than a substantial reversal. The 2009 paper by Leuliette and Miller provides data up to 2008. It appears as though a shallow steric rise continued until about the end of 2006, which saw a slight decline. The authors speculate that the decline might have reflected the 2007-2008 La Nina, in which case the 2008-2009 El Nino might have restored the rise in OHC (and the current La Nina may be reversing it once again). Unfortunately, we don’t have adequate post-2008 data to assess these possibilities.

I tend to concur with Trenberth’s conjecture that much of the unidentified heat may have been transferred to the deep ocean, inaccessible to direct measurements. Trenberth is appropriately uncertain whether this offers an adequate accounting, and Spencer has claimed that heat cannot mysteriously descend to the deep ocean without leaving an OHC signal at shallower depths. The claim is valid, but two points are worth noting. First, the upper ocean OHC signal will be diminished to the extent that transfer to the deep ocean occurs faster than has generally been estimated – i.e., the distribution between upper and deeper layers may result in less heat remaining in the upper layers within a limited (intradecadal) timeframe than often assumed. A recent J. Climate paper by Purkey and Johnson lends credence to this possibility and implies that a larger fraction than suspected of the global energy budget resides at great depths (the link I cited is to a preprint, unfortunately without figures, because the J. Climate paper is behind a paywall).

The second point, mentioned by Trenberth in his 2009 article, is that the coefficient of thermal expansion is highly pressure sensitive, and so heat at great depth mediates a substantially smaller increase in water volume than heat in the upper ocean.
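To illustrate the size of that second effect, here is a back-of-envelope sketch. The expansion coefficients are rough, illustrative values for warm upper-ocean water versus cold deep water (my assumptions, not numbers from Trenberth or from Purkey and Johnson):

```python
# Steric sea-level rise from a given amount of heat depends on where in the
# water column that heat ends up: dh = alpha * Q / (rho * cp) per unit area,
# where alpha is the thermal expansion coefficient of seawater there.
RHO = 1025.0  # seawater density, kg/m^3
CP = 3990.0   # specific heat of seawater, J/(kg K)

def steric_rise_mm(heat_joules_per_m2, alpha):
    """Sea-level rise (mm) from heat Q (J/m^2) absorbed where expansion is alpha (1/K)."""
    return alpha * heat_joules_per_m2 / (RHO * CP) * 1000.0

# One year of a 1 W/m^2 imbalance stored in the ocean:
q_year = 1.0 * 365.25 * 86400  # ~3.16e7 J/m^2

warm_upper = steric_rise_mm(q_year, 2.5e-4)  # rough alpha for warm upper ocean
cold_deep  = steric_rise_mm(q_year, 1.2e-4)  # rough alpha for cold deep water

# With these illustrative coefficients, the same heat raises sea level roughly
# twice as much if it stays in the warm upper ocean (~2 mm) as it does if it
# is mixed into the cold deep ocean (~1 mm).
```

The point is only the ratio: heat hidden at depth produces a noticeably smaller steric signal than the same heat stored near the surface, which is exactly why the sea-level budget alone cannot rule out deep storage.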

The consequence of both of the above points is that OHC, steric sea level, and the CERES imbalance may be closer to reconciliation than first appears. If one acknowledges the possibility that every one of the measurements is also subject to some degree of error, without asserting that any one is completely invalid, the discrepancy may turn out to be less of a mystery than claimed.

Alex – Your comment on estimates of global heat storage (GHC) is reminiscent of Kevin Trenberth’s lament about the disparity between the apparent increase in GHC derivable from CERES measurements and the absence of a total accounting in terms of temperature change and observable OHC data. One can only speculate, and mine is that the CERES observational data are probably reasonably accurate, and so the “missing heat” is somewhere, most likely in the deep ocean inaccessible to accurate measurements.
On the other hand, I think it important to recognize the uncertainties regarding OHC measurements. The Argo data are invaluable, but still plagued by technical issues, sampling inadequacies, and a restriction mainly to the upper 700 meters (although this is changing).
That said, I believe that current evidence suggests that OHC is continuing its long term upward trend, despite some Argo data challenging this conclusion. Over the long term, possibly the best metrics for OHC changes entail changes in sea level, which can be measured more accurately than heat content directly at our current level of technology. Without doubt, sea level has been rising for many decades, and probably at an increasing rate, although with periodic bumps and dips. Part of the rise reflects melting of land ice (the “eustatic” rise), which itself appears to have accelerated over the past century in response to a warming climate. However, it is difficult to ascertain the temporal relationships between warming and ice melting, because once a temperature imbalance exists, ice can continue melting despite a lack of further temperature changes (albeit at a declining rate).
A more reliable component to the sea level metric is therefore the “steric” sea level rise, reflecting the expansion of sea water as a function of temperature. This has its own problems. For one, steric sea levels are typically calculated by subtracting the estimated eustatic component from total sea level rise, and are subject to uncertainties regarding the extent of ice melting. Second, the thermal coefficient of expansion of sea water is not a constant, but varies with pressure, temperature, and salinity. Nevertheless, over the long term, steric sea level rise can be interpreted as a good indicator of increased OHC.
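The subtraction described above also propagates uncertainty into the steric estimate. A minimal sketch of that bookkeeping, using hypothetical illustrative rates (not measured values), assuming independent errors that combine in quadrature:

```python
import math

# Hypothetical illustrative numbers (not measured values): total sea level
# rise and the estimated eustatic (ice-melt) contribution, in mm/yr, each
# with a 1-sigma uncertainty.
total_rate, total_err = 3.1, 0.4        # total rise, e.g. from altimetry
eustatic_rate, eustatic_err = 1.8, 0.5  # estimated ice-melt contribution

# Steric rise is obtained by subtraction...
steric_rate = total_rate - eustatic_rate

# ...and, if the errors are independent, its uncertainty adds in quadrature,
# so the residual is often the most uncertain of the three quantities.
steric_err = math.hypot(total_err, eustatic_err)

print(f"steric = {steric_rate:.1f} \u00b1 {steric_err:.1f} mm/yr")
```

The point of the sketch is only that the steric term inherits the errors of both inputs, which is why uncertainty in the ice-melt estimate matters so much here.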
Both total and steric sea levels have been rising over the decades – again with periodic ups and downs. A good source for both is at Sea Level – visit both the “time series” and “steric” pages for relevant data.
Note that the steric data on that page end at 2003, but it’s clear that total sea level has continued its upward trend up to the present. Has the recent rise been due exclusively to ice melting, reflecting a marked acceleration of this phenomenon, or is a steric component still present, confirming a continued increase in OHC? I’m unaware of data for the past two years, but a 2009 paper by Leuliette and Miller provides data up to 2008. It appears as though a shallow steric rise continued until about the end of 2006, followed by a slight decline.
The authors speculate that the decline might have reflected the 2007-2008 La Nina, but I find it easier to attribute it to the 2006-2007 El Nino (probably slightly out of phase), given that the latter entails a transfer of heat from the ocean into the atmosphere. If this is correct, we might expect new data to show resumption of a steric increase subsequent to the 2007-2008 La Nina, a later decline with the 2008 El Nino, and a rise again after the recent shift between El Nino and La Nina conditions, but that is conjectural. These interannual variations aside, the data are most consistent with a continuation of a long term increase in OHC until the present. Whether the steric changes adequately explain the total changes in GHC estimated from observational data remains to be seen.

Oops – I inadvertently copied part of a reply to Alex Harvey at the end of my intended comments here. Judy – if you’re monitoring this, could you strike out everything in the comment from “Alex” on downward. There’s nothing in it that’s totally irrelevant, but it ends up being repetitive.

Fred,
I think you misunderstand Trenberth’s comment. You seem to think he is talking about the disparity between CERES and OHC. Not so. I don’t see that he mentions OHC at all. He mentions weather, surface air temperature. Trenberth’s email is claiming the radiative imbalance from CERES would lead to warm weather. Meanwhile they are suffering from record cold temperatures.

Yes, the oceans are a huge heat sink but a radiative imbalance the size CERES is suggesting would not be swallowed up by the oceans in a day. One would expect record warmth for that time of year, not record cold.

Yes, I saw your comment, and I saw the comments at Pielke Pere’s for 4/19/10 in the thread entitled ‘Further Feedback from Kevin Trenberth and Feedback from Josh Willis on the UCAR press release’. It is one thread after Judy’s link in the post above. I prefer Josh’s and Roger’s interpretation to your own.
=================

It seems that everyone agrees we don’t know enough to make decisions. The travesty is that the world was expected to make decisions, and climatologists knew damn good and well that we didn’t have the information to make those decisions.
===============

Fred, you are handwaving in support of Kevin’s idea of deep transport of the missing heat. Look, we don’t have good enough data about radiation fluxes. We don’t have good enough data about albedo changes. What we do have is good enough data for the transport of heat in the upper ocean layers, and that data is not supporting Kevin’s, and your, idea.
=================

Kim – Before replying to your latest comment, I’ll mention that above, I speculated that perhaps the slight recent decline in OHC might have reversed since the beginning of 2008. That turns out to be the case – curiously, the only actual graph I’ve seen is one in WUWT, which I hadn’t seen when I wrote my comment.

As to El Nino and La Nina, the former exposes more heat to the atmosphere than the latter, and so might be expected to lose OHC, while La Nina might be expected to gain it. However, this becomes more complicated when one factors in positive feedbacks from the warmed atmosphere during an El Nino – these might then restore OHC after a lag to levels higher than those at the beginning of the phenomenon. If one also considers regional differences, the effects become even more complex. I believe excessive speculation about the time course, magnitude, and sign of OHC changes in response to ENSO phenomena is unwarranted. This is one case where the models can help to generate testable hypotheses, but GCMs still don’t handle ENSO as well as they do longer term phenomena.

I quote you above, Fred: ‘The authors speculate that the decline might have reflected the 2007-2008 La Nina, in which case the 2008-2009 El Nino might have restored the rise in OHC (and the current La Nina may be reversing it again).’

But what about the fresh water reservoirs that we humans have created in the last couple of decades? Not to mention the really big one in China? More static water to store energy in, and less fresh water adding to the seas, so the rise may be less than if these rivers were never dammed?

As to ocean heat accumulation, Science of Doom has a very good series. Essentially net heat transfer as I understand it is always from the sun to the ocean to the atmosphere. There is some LW heating at the ocean skin but the result is reducing/slowing heat transfer from the ocean to the atmosphere. Not a net heat gain from atmospheric LW to the ocean.

That is my understanding. If the effect of LW surface heating is to effectively reduce thermal conductivity from the ocean to atmosphere, then that would act to increase the effectiveness of solar variations (not increase GHG warming).

ivpo – Thanks for the Tisdale reference. I’m intrigued that it tends to confirm my speculation that El Ninos, at least in some regions, are initially accompanied by an OHC decline, with OHC subsequently restored.

Regarding solar and longwave heat transfer to the ocean, they both transfer thermal energy to the ocean, with a larger contribution from the LW back radiation than from the solar radiation. The net flux is from ocean to atmosphere, but the ocean is still warmed to a higher temperature than it would have exhibited without the SW and LW input. See the exchanges upthread for more details.

Fred,
The coefficient of thermal expansion of water increases with pressure. Thus heating of water at higher pressure increases its influence on sea level rise, not the other way round. This is, however, reversed by another factor: the temperature dependence of the coefficient of thermal expansion. There is practically no expansion when the heating occurs at the temperatures of the deep ocean. For pure water at low pressure the coefficient even changes sign below 4 C, but at high pressure, and for sea water even at low pressure, the coefficient just approaches 0.

To conclude: Heating deep ocean seawater of temperature of less than 5 C or close to that temperature leads to very little expansion in spite of the strengthening effect of higher pressure.
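Pekka’s point can be put in rough numbers. As a sketch – the expansion coefficients below are representative round values I have assumed for illustration, not measured ones; an exact calculation would use the full seawater equation of state (TEOS-10) – the steric rise produced by a given heat input scales as alpha / (rho * cp):

```python
# Order-of-magnitude illustration: the same heat added per square metre of
# a water column expands the column by alpha * Q / (rho * cp) metres.
# All values are assumed, representative round numbers.
Q = 1.0e9                  # heat added per m^2 of column, J/m^2 (illustrative)
rho, cp = 1025.0, 3990.0   # seawater density (kg/m^3), heat capacity (J/kg/K)

alpha_surface = 2.5e-4     # per K: warm (~20 C) near-surface water
alpha_deep    = 1.3e-4     # per K: cold (~1-2 C) water at depth; pressure
                           # raises it well above the near-zero value cold
                           # water would have at the surface

for label, alpha in [("surface", alpha_surface), ("deep", alpha_deep)]:
    rise = alpha * Q / (rho * cp)          # metres of column expansion
    print(f"{label}: {rise * 1000:.1f} mm of steric rise")
```

With these assumed coefficients the deep layer produces roughly half the steric rise per joule that the warm surface layer does – small, as Pekka says, but not zero, because pressure keeps the coefficient from vanishing.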

Hmm… The use of steric sea level rise as a proxy for changes in OHC seems a bit suspect to my mind, with too many uncertainties about the quantity of melted land ice to make it a realistic measure of changing OHC, notwithstanding further issues about measuring thermal expansion at different depths and salinity levels.

Pekka – Thanks for the correction. At great depths, water is known to expand very little upon warming. However this is due, as you point out, to the low temperature and not the high pressure (which tends to increase the expansion coefficient).

Fred you do a good job of describing the technical difficulties associated with ARGO but you have to remember all these things are relative. In an ideal world the perfect measuring system would exist but it doesn’t. You have to remember what came before ARGO. Data was compiled from multiple different measurement methods which are both temporally and spatially incomplete. ARGO was specifically designed to provide us with this sort of metric (global ocean heat content) while these multiple other methods never were. ARGO exists because what came before was poor.

What worries me about those who point out the possible flaws with ARGO data is that, in all likelihood, the pre-ARGO data is much poorer at giving us an accurate estimate of OHC than ARGO. Willis, one of the world experts on ARGO, seems to think the data is unlikely to go through any more major re-adjustments, suggesting OHC estimates are unlikely to change.

So when approaching this data should we be more sceptical about the pre-ARGO data or the ARGO data? Which estimates should we have greater belief in? If you start looking at the problem with no preconceived notions about what the data should tell us, it seems fairly clear to me that we should have greater faith in the ARGO data estimate even with all its possible flaws.

In an imperfect world nobody can argue that ARGO is flawless, but it seems to represent a huge improvement on what came before. You should be saving your criticism for the pre-ARGO data and wondering why Trenberth’s energy balance data fits so well with this and not with the better quality data.

This is an interesting issue and interrelates with other interesting issues, measurements uncertainties, possible existence of a radiative imbalance and shifts between warm and cool climate regimes. Earlier you discussed the Tsonis2007 and Swanson2009 papers. I just learned David Douglass published a response to the Swanson2009 paper titled “Topology of Earth’s climate indices and phase-locked states” at

In this paper, Douglass discusses ocean heat content and climate shifts and top of the atmosphere measurements of a radiative imbalance.

What I would really like to see discussed is the reliability of the datasets. For example, Spencer likes the AMSR-E satellite for sea surface temps but he does not like the other satellites. I believe he says they are unreliable because of drift, but my memory is faulty. Douglass and others trust the Argo data, and no warming trend is seen since 2003. Lyman et al did not trust the Argo data in the May 2010 paper showing robust warming of the oceans. I think, but am not certain, they do not trust Argo because it disagrees with other measurement data (possibly the satellite data Spencer doesn’t trust). At any rate, someone needs to figure out what data is trustworthy and what is not.

You can’t say Lyman 2010 don’t trust the ARGO data; from memory Willis is a co-author, and he’s Mr ARGO. You’ll have to have another go at that argument. But I accept your point that everybody is going to favour the data set that supports their pet theory. As an outsider, the quality of the data is one of the more difficult aspects to grasp.

Hmmm,
I think I know where the missing heat is. It can be found in the calculations of a global heat budget that was based on several wide assumptions which may not be true. Trenberth took a swing and missed. There are multiple lines of evidence that indicate the rise in OHC has stopped or reversed. Cooling oceans means no hidden heat. The Deep Ocean Heat Theory is not consistent with observed evidence.

A portion of the potential energy is locked up in regional systems, the rest exited long ago into space. The oddity is the notion that it resides Earth-bound over time when the system clearly indicates that it doesn’t.

Dr Curry, thanks for starting this thread. I’m curious to see how it evolves. My question is for any who want to respond. I’m not a climate scientist, so maybe I’m way wrong, but I’ll try anyway. If nothing else, I might learn something.

Trenberth uses CERES TOA data and other sources to estimate atmospheric forcings shown in Fig 4, with an estimated net imbalance of about 0.9 W/m2, which of course he attributes to AGHG’s. The error bars are pretty large, naturally, and place the imbalance anywhere between 0.4 and 1.4 W/m2. Summing all his negative forcing error bars in the negative direction would eliminate all positive forcings for a net effect of zero imbalance.

He also adds the statement that a “1% increase in cloud cover could increase reflection of solar radiation by 0.8 W m2, enough to offset global warming from greenhouse gases.” Really? Only 1%? So if cloud coverage increases as a response to increased surface temps, then all of AGW could be negated by a mere 1% increase in cloud albedo? And doesn’t Dessler conclude that clouds have basically no net effect on the energy budget? I don’t get it.
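The 0.8 W/m2 figure itself is easy to sanity-check. A back-of-envelope sketch, assuming (my round numbers, not Trenberth’s) a global-mean shortwave cloud radiative effect of about −50 W/m2 produced by roughly 60% mean cloud cover, and assuming reflection scales linearly with cover:

```python
# Back-of-envelope check of the "1% more cloud ~ 0.8 W/m^2" figure.
# Assumed illustrative inputs, not quantities from the paper:
sw_cloud_effect = -50.0   # W/m^2, approx. global-mean SW reflection by clouds
mean_cloud_cover = 60.0   # percent, approx. global-mean cloud cover

# If reflection scales linearly with cover, each extra 1% of cover reflects:
per_percent = abs(sw_cloud_effect) / mean_cloud_cover
print(f"~{per_percent:.1f} W/m^2 per 1% cloud cover")  # ~0.8 W/m^2
```

So the quoted sensitivity is just the mean cloud reflection divided by the mean cover; the hard part, as the rest of this exchange shows, is knowing whether cover is actually changing by anything like 1%.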

My main question though is, if AGW is right, shouldn’t there be a significant correlation between the change in imbalance at TOA and change in energy absorption of AGHG’s?

This should be producible with data we have (no models needed). We should know how much energy is absorbed as a result of AGHG’s and CERES apparently provides fairly accurate TOA measurements. It’s the change-over-time relationship I’d like to see.

If CO2 is ‘the Earth’s thermostat’ and AGHG’s are the only explanation for the change in global mean temperature, shouldn’t the trend lines be almost the same?

Greg – The CERES data as well as other observational data indicate that cloud cover is probably declining slightly rather than rising (a point made by Spencer in one of the linked references above), and so this would most likely amplify rather than diminish warming effects of GHGs (although the type of clouds and their altitudes also determine the balance between warming and cooling). Regarding your other points, the CERES data are not accurate enough for instantaneous comparisons of incoming vs outgoing radiation, but their “drift” is small enough so that they identify the direction of changes fairly well – currently, these appear to be going in a warming direction, although as you can see from the discussion above, this issue remains contentious. Finally, there is no claim anywhere that GHGs are the only factors affecting climate. Others include anthropogenic aerosols, volcanoes, and internal climate variations such as El Nino and La Nina.

And my point regarding cloud cover above isn’t whether there is more or less, but rather that according to Trenberth cloud effect on the energy budget is HUGE but it is also HUGELY variable. So there’s simply no way for us to determine how it is responding to AGHG forcings given such gross variability. A 1% change in cloud cover cancelling all anthropogenic forcings means that basically on any given day there is AGW or there isn’t.

Regarding transfer of heat from atmosphere to deep ocean, I fail to see how energy is transferred through the upper ocean without corresponding temperature increases in SST’s. And yet this GISS (http://data.giss.nasa.gov/gistemp/2008/Fig2b.gif) graph shows SST anomalies going down since 1998. How does heat get teleported from the atmosphere to the deep ocean?

And I had hoped that CERES data would be more useful. My thought is that by isolating TOA imbalance measured over time and then isolating and comparing to aggregate AGHG heat absorption, the question, “Are AGHG’s heating the planet?” could be answered without needing to account for the chaotic forcing/feedback mechanisms in the lower atmosphere.

I’m reaching the conclusion now after my two year journey to understand the AGW argument that if it is happening, there’s currently no way for us to know or by how much given the incomplete/inaccurate observational record. All funding for modeling should be diverted to getting better observational data. I toss any paper that has findings based on model data.

I believe that there will be increasing resistance to any AGW-based public policy so long as empirical support of the theory is this weak. The more I delve into the underlying science of AGW, the more I’m convinced that believing it is a matter of faith. And that this faith has so permeated the science that any analysis of noisy data will ALWAYS result in an AGW finding. It’s like seeing Jesus in the clouds; he’s there if one’s determined to see him.

Greg – Briefly (because some of this is already addressed elsewhere in this thread):

1. Most cloud data show either little change or slight reduction in total cloud cover over recent decades. However, both models and observational data you cite indicate some increase in high cloud cover. Low clouds tend to cool, and high clouds tend to warm, and it is the latter phenomenon that is included in model assessments of cloud feedback as positive.

2. SST data are variable, but have trended upwards over the years, with many ups and downs due to El Nino/La Nina events. Recent SST anomalies were positive, due in part to the 2008-2009 El Nino, probably superimposed on a multidecadal trend due to GHG forcing. A La Nina is now in progress, and so SST anomalies are again declining. SST has very little to do with ocean heat content, and so I would ask you to review the data I and others have offered on the latter – it too exhibits a multidecadal upward trend punctuated by bumps and dips, at least in the upper oceans. The deep ocean is probably more stable in accumulating stored heat.

Another clarification – short term SST variations are often uncorrelated with changes in ocean heat content. Over the long haul, SST anomalies and ocean heat content anomalies have both risen – as might be expected. The recent SST variations have been dictated more by ENSO (El Nino/La Nina) events than by changes in the total heat content of the oceans. The latter must be assessed independently.

The Trenberth paper shows *total* cloud cover increasing by about 10% since 1988; not just high level clouds.

Here’s a paper that offers a good explanation of high altitude cirrus clouds (http://www.atmos.ucla.edu/~liougst/Group_Papers/Liou_Yearbook_2005.pdf). From the paper, cirrus clouds might have a net positive or negative forcing depending on how ice forms within the cloud. But more importantly, it essentially says we have *no* idea what net radiative effect they have. But of course the models have to assume a net positive forcing (nudge, nudge, wink, wink).

So per Trenberth, a 1% increase in cloud cover could negate all of AGW and cloud cover varies that much daily. Any AGW signal will be *lost* in the noise of this variable alone.

Greg – The Liou chapter you referenced, which is excellent although slightly dated, discusses both cirrus cloud forcing and feedback. As the chapter notes, feedback is considered positive, although at the time of the writing, less observational data was available for confirmation. Readers should look at the section entitled “Cirrus and Greenhouse Warming” for details. As you note, quantitation is still imprecise. In one of your other links, the increase in high clouds was shown as accompanied by a commensurate reduction in low (cooling) clouds, which is also consistent with net positive cloud feedback.

Since then, data from Dessler and others have provided observational evidence for positive cloud feedback.

Regarding the deep ocean storage, the contribution to sea level rise is what would be expected, given that at the relevant depths seawater expands only minimally on warming; the small steric signal therefore corresponds to a quantity of stored heat that is far from negligible. The quantity stored on a decadal basis is compatible with a substantial response to a TOA imbalance when the requirement is considered that the heat moves downward gradually over multiple decades from below the 700 meter level where most sampling ends to the sub-4000 meter level. There is a vast quantity of water in between, and so the total deep ocean storage appears to be substantial. You are correct in implying that we don’t yet know whether that storage fully accounts for the heat Trenberth refers to, but it is also true that the 0.9 W/m^2 imbalance represents a warming influence that would be expected to operate over many decades. Trenberth is concerned as to whether it is being reflected in heat storage or warming at anticipated rates, and not whether it would be completely eliminated over the course of a single decade.
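The scale of the argument can be made concrete. As a rough sketch, with round illustrative values of my own (Earth surface area, ocean area, layer thickness), suppose a 0.9 W/m^2 imbalance were sustained for a decade and routed entirely into the 700–4000 m layer:

```python
# How much would a 0.9 W/m^2 global-mean imbalance, sustained for a decade
# and deposited entirely in the 700-4000 m ocean layer, warm that layer?
# All inputs are round, illustrative values.
imbalance = 0.9                       # W/m^2, global mean
earth_area = 5.1e14                   # m^2, Earth's surface
seconds_per_decade = 10 * 3.156e7     # ~3.16e7 s per year

heat = imbalance * earth_area * seconds_per_decade   # joules per decade
print(f"heat accumulated: {heat:.2e} J")             # ~1.4e23 J

# Mass of the 700-4000 m layer: ocean area ~3.6e14 m^2, ~3300 m thick
layer_mass = 3.6e14 * 3300 * 1030     # kg (density ~1030 kg/m^3)
cp = 3990                             # J/(kg K), seawater heat capacity
dT = heat / (layer_mass * cp)
print(f"mean layer warming: {dT:.3f} K")   # a few hundredths of a kelvin
```

A few hundredths of a kelvin spread through that layer is well below what the sparse deep sampling can resolve, which is why heat can plausibly “hide” there on decadal timescales.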

Here is the link again that you cited previously on cloud data, which in general, show that the proportion of high clouds has been increasing, and in some estimates, both low clouds and total clouds have been declining, with most of the decline in the low cloud fraction –Cloud Trends

Regarding the Purkey and Johnson paper, you quoted only a portion of their statement. Here is the full statement on deep ocean storage:
“the rate of abyssal (below 4000 m) global ocean heat content change in the 1990s and 2000s is equivalent to a heat flux of 0.027 (±0.009) W m–2 applied over the entire surface of the Earth. Deep (1000–4000 m) warming south of the Sub-Antarctic Front of the Antarctic Circumpolar Current adds 0.068 (±0.062) W m–2. ”

This does not add up to 0.9 W/m^2, but that figure is the current estimated imbalance, and the imbalance was less during the 1990’s when some of the storage was occurring. If Trenberth is correct that the imbalance has grown significantly, current storage may be occurring at a higher rate than recorded in the paper. It seems reasonable at this point merely to conclude that deep ocean storage accounts for some of the heat not found elsewhere, and reserve judgment about how full an accounting it provides at the current rate of storage.
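The comparison in the preceding paragraph amounts to a two-term sum with uncertainties. A minimal sketch, taking the two storage terms as quoted from Purkey and Johnson and assuming their errors are independent (so they combine in quadrature):

```python
import math

# The two deep-ocean heat storage terms quoted from Purkey and Johnson,
# expressed as W/m^2 over the entire surface of the Earth.
abyssal, abyssal_err = 0.027, 0.009   # below 4000 m, globally
deep, deep_err = 0.068, 0.062         # 1000-4000 m south of the SAF

total = abyssal + deep
# Assuming independent errors, combine them in quadrature.
total_err = math.hypot(abyssal_err, deep_err)

print(f"deep-ocean storage: {total:.3f} \u00b1 {total_err:.3f} W/m^2")
# ~0.1 W/m^2, i.e. roughly a tenth of the ~0.9 W/m^2 imbalance discussed here
print(f"fraction of 0.9 W/m^2: {total / 0.9:.0%}")
```

Which is the arithmetic behind the conclusion above: the measured deep-ocean terms account for some, but on their own nowhere near all, of the estimated imbalance.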

Greg,
The estimate of 0.9 W/m^2 is based on climate models. The more detailed paper of Trenberth, Fasullo and Kiehl says:

There is a TOA imbalance of 6.4 W m^-2 from CERES data and this is outside of the realm of current estimates of global imbalances that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 W m^-2 by Hansen et al. (2005) and is supported by estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005).

Their own best estimate is essentially taken from the value of Hansen et al. and used as a constraint in calculating the least accurately known components of the energy balance.

IIRC, tracking of cloud cover is patchy and with huge error bars. Given that a miss by 1% could cancel all putative GHG heat buildup, how can anyone even make a straight-faced guess at what GHGs are doing? Talk about rearranging the deck chairs on the Titanic! (The Titanic being the AGW hypothesis).

There’s an interesting subfield of mathematics that looks at Fractal Dimension.

The short of it is, fractal curves connecting points A and B contained within a finite area may be infinitely long, but some are longer (more infinite) than others.

Apply the same idea to currents and wind patterns. Some of these patterns are well known, like the Jet Stream for example, and well known for their usual location, altitude and cyclic state changes. By necessity, these changes in location, altitude, speed, and overall length store or release energy. It takes more energy to be the Jet Stream at a higher altitude (if I recall correctly), more to be the Jet Stream at more northerly latitudes, more to be the Jet Stream over mountains, more to be the Jet Stream with more humid air, against more turbulent neighboring systems, against a higher pressure gradient neighborhood, more to be a Jet Stream with a kinked flow pattern than a straight one, more to be a wider stream (I think?), more to be a faster one.

We don’t have so far as I’ve seen any precision mapping of all of these features of all of the various cycles and flow patterns, their ‘structure’ over the decades in question.

The amount of energy that can hide in small structural changes is immense, even if not literally infinite, and all that the casual observer will note is that the weather seems more changeable.

If we’re just speculating about things we believe are there but haven’t seen.

You folks study climate but I study you, more specifically your reasoning and the logic of the debate. This is a great thread for observing uncertainty. Now if I can just figure out how to measure that. The first no-feedback sensitivity thread was also beautiful. Anyone claiming that the science is settled should be asked to read these threads.

If this discussion is of the “travesty” of the “missing heat” mentioned by Kevin Trenberth then I think a starting point should be what Kevin Trenberth meant. He used the word “travesty” in this context in a ‘climategate’ email, and his meaning is clear from the email which was as follows:
*************

Well I have my own article on where the heck is global warming? We are asking that here in Boulder where we have broken records the past two days for the coldest days on record. We had 4 inches of snow. The high the last 2 days was below 30F and the normal is 69F, and it smashed the previous records for these days by 10F. The low was about 18F and also a record low, well below the previous record low. This is January weather (see the Rockies baseball playoff game was canceled on saturday and then played last night in below freezing weather).
Trenberth, K. E., 2009: An imperative for climate change planning: tracking Earth’s global energy. Current Opinion in Environmental Sustainability, 1, 19-27, doi:10.1016/j.cosust.2009.06.001.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.

So, the “travesty” was “that we can’t account for the lack of warming at the moment”, and Trenberth assumes and asserts that this is because “Our observing system is inadequate”. In other words, the “travesty” he asserts is that the data must be wrong (and, therefore, he deduces that “Our observing system is inadequate”).

However, the claim that “Our observing system is inadequate” because it disconfirms a theory is an assertion that the theory decrees what the data must show.

I disagree with Trenberth on this because – to me – it is a travesty when the scientific method is abandoned, and the scientific method says that the theory must be rejected when it fails to explain the data unless and until the data is shown to be wrong.

The data may be wrong but that has yet to be shown. And the data may be right (e.g. because the data can be explained by alterations to clouds as several have mentioned above).

Simply, there is no “missing heat”. There is only a hypothesis that is disconfirmed by the available data and uncertainty concerning the data.

In this circumstance, it is pointless to discuss the subject prior to obtaining additional data that confirms the hypothesis. Until then, any scientist would say there is nothing to discuss because the hypothesis is disconfirmed by the available data. And no information that confirms the hypothesis has been evinced; indeed, some recently published data (e.g. on ARGO data published by Knox & Douglass) supports the disconfirmation of the hypothesis.

Richard,
Revisiting Trenberth’s email in this context is a good idea. Thank you. Here is my paraphrase of Trenberth.

“Everyone is asking “where is global warming?” We have had dang cold weather here and I feel like we are being mocked. The CERES data measuring heat imbalance at the top of the atmosphere says we should be getting hotter and hotter on a daily basis, but it’s not happening. The CERES data must be trash. There is a lot of talk about the PDO being decadal, but I don’t buy it. I think it is really ENSO at work and we know that is not decadal.”

I used fewer words, but I think that is the gist. I learned a great deal from this revisit, because I thought Trenberth was referring to ocean heat content data. Thank you, Richard!

“We are asking that here in Boulder where we have broken records the past two days for the coldest days on record. We had 4 inches of snow. The high the last 2 days was below 30F and the normal is 69F, and it smashed the previous records for these days by 10F. The low was about 18F and also a record low, well below the previous record low. This is January weather (see the Rockies baseball playoff game was canceled on saturday and then played last night in below freezing weather).”

But he clearly then changes the subject from the local weather to consideration of the global (n.b. global) “lack of warming” when he writes;

“The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.”

CERES data pertains to global (n.b. global) lack of warming and not the “January weather ” in Boulder.

Frankly, I cannot understand how anybody can spin his words to mean other than what they say, and I certainly cannot interpret them as you say you do.

It is quite easy, Richard. In spite of always saying “it’s weather, not climate,” the warmists fall into the same thought patterns whenever there are hurricanes or any other bad news about weather. All bad weather is the fault of global warming.

The CERES data shows a strong radiative imbalance at the top of atmosphere globally, which would certainly not allow record breaking cold weather in Boulder. Trenberth is not referring to ocean heat content at all. I had just assumed so because Hansen and friends always point to the ocean and say “there’s heat in the pipeline.”

It is obvious Trenberth is saying “if a radiative imbalance existed anything like the CERES data indicates, we would be having record heat today and last week, not record cold.” And he would be exactly right. A radiative imbalance of that size would almost certainly cause record breaking surface temperatures all over the planet.

Good post. However, it seems to me that the important issue here is not the disconnect between AGW theory and the ocean heat content data but Trenberth’s reaction to it, which is that “the data are surely wrong”. How does he arrive at this conclusion? Clearly he’s convinced himself that AGW is real, which means that any data which suggests that it isn’t must be by definition incorrect. This hardly qualifies as objective scientific reasoning.

And it goes a lot farther than Trenberth. AGW research is in fact permeated by the concept that any data that don’t fit the theory must be wrong, which makes it acceptable to “correct” them until they do fit the theory. The only major temperature data set that doesn’t get massaged in this fashion is the MSU upper stratosphere record, which is the only one which in its unadjusted form behaves the way AGW says it should. In every other case the massaging results in a closer fit to what AGW says should have happened. (Even the sea level records get “corrected”. The result, unsurprisingly, is about twice as much global sea level rise since 1950 as the raw tide gauge records show.)

To me the travesty isn’t that Trenberth can’t find his “missing heat”, if indeed there ever was any. The travesty is that he and his colleagues appear to see nothing wrong with tweaking the data to fit the theory.

Roger,
There is not much doubt about the conclusion that the data is in error, because it tells us that the earth should be warming 6 times faster than the IPCC expectation. We know that the earth is not warming so fast, and the only explanation is that the data is in error. It is not about saving the model, but about an obvious inconsistency in the observations. Many parts of the data are known to be inaccurate for other reasons as well. Thus the discrepancy is not a big surprise.

The conclusion is that we cannot get either support or refutation of the AGW calculations from data that is so inaccurate. The data helps in verifying the overall understanding of the radiative fluxes, but not at an accuracy level required for testing the proposed AGW models.

Please see my reply to Roger Andrews and my original comment that said;

“However, the claim that “Our observing system is inadequate” because it disconfirms a theory is an assertion that the theory decrees what the data must show.

I disagree with Trenberth on this because – to me – it is a travesty when the scientific method is abandoned, and the scientific method says that the theory must be rejected when it fails to explain the data unless and until the data is shown to be wrong.

The data may be wrong but that has yet to be shown. And the data may be right (e.g. because the data can be explained by alterations to clouds as several have mentioned above).

Simply, there is no “missing heat”. There is only a hypothesis that is disconfirmed by the available data and uncertainty concerning the data.”

I stand by every word of that. Prove the data which disconfirms the hypothesis is wrong (not merely that it may be wrong) or reject the hypothesis until the data is shown to be wrong. Otherwise, accept that adoption of the hypothesis has no basis in science.

In other words, the empirical data is evidence of the validity of the hypothesis, but the hypothesis cannot indicate anything about the empirical data.

Therefore, when the empirical data indicates that a hypothesis is wrong then the hypothesis is disproved by the data. Lack of accuracy, precision and/or reliability of the data may prevent the data from completely disproving the hypothesis, but the disproof exists.

“The data is not in contradiction with some sophisticated and complicated model, but it is in contradiction with the conservation of energy or some very conservative estimates of possible energy flows”.

Rubbish!
The data only contradicts those “estimates of possible energy flows” that utilise assumptions (e.g. of cloud behaviours). The data disproves those assumptions.

Thank you for answering my question. However, your answer is another question, and I interpret this to mean that you do not understand the issue. So, I will try to explain it to you.

All data has limitations provided by its accuracy, precision and/or reliability. And not all data is pertinent to an issue.

So, it may be true that a shoe fits a left foot but that is not pertinent to whether it is – or is not – waterproof. And an assertion that the shoe is waterproof only has meaning if one knows the conditions (e.g. pressure and temperature) that the assertion assumes.

Hence, all data is assumed to be correct but the degree to which it is correct has to be assessed on the basis of its accuracy, precision and/or reliability. Data with unknown accuracy, precision and/or reliability has to be assumed to be completely unreliable so should be ignored.

In the case of the “missing heat”, the pertinent data has assessed accuracy, precision and reliability so should not be ignored. However, it can only provide an indication (n.b. not a certain proof) because its accuracy and precision are both inadequate to provide such a proof.

No!
An “outlier” may be the most important datum in a data set because it indicates something which was not considered prior to the collection of the data.

However, “outliers” induced by experimental or measurement error certainly do occur. If the purpose of an analysis is to obtain e.g. gross effects from a data set, then a statistical analysis can be conducted on the bulk data.

But, in the absence of clear information that an “outlier” is a result of some artifact then the datum cannot be ignored in any proper analysis of the data.

Despite that, erroneous rejection of assumed “outliers” does happen. For example, this basic error of rejecting assumed “outliers” is one of the problems with ice-core analyses which reject ‘high’ values of CO2 on the assumption that they are biogenic contamination.

Scientists are not allowed to choose data without objective reasons, but scientists should also avoid drawing conclusions based on possible outliers. When one or few data points differ from other data so much that the results change significantly depending on their inclusion, the scientist should study the situation further and present their conclusions with reservations until the issue has been resolved.

I fully agree with and support your statement as presented, but I would add that the few possible outliers should be investigated to discover what they indicate. Perhaps they indicate ‘faults’, but unless those faults are demonstrated to exist the data stands.

There is far, far too much data ‘homogenisation’ in climate science that is based on assumptions and not evidence.

I don’t understand where the hypothesis is in Trenberth’s work. The ‘missing heat’ strikes me as an observation of a hole in our understanding. It is a call for a hypothesis, not a hypothesis itself. The problem comes when the missing heat is deemed to be something real, an explanation for what is going on in the real world. Trenberth’s initial stabs at explaining the causes of that hole in our understanding seem weak and incomplete. Surely the hypothesis is going to come from further observations that identify the real cause of the hole in our understanding.

Perhaps I can take a stab at teaching you how your (implied) thinking is “backassward” (as my mother used to say).

Early in my career, I was responsible for forecasting various data series for a telephone company. Every month, we would receive the actual recorded data for a variety of data series. We would “test” the reasonableness of the data by cross comparison (we knew that some data series were well correlated with others) and also statistically. If a datum was outside 2 standard deviations of the mean or trend, we would speculate that the datum was an outlier.

Once we had identified a “suspicious” datum, we would FIRST verify that the measurement and/or recording system was accurate. If indeed the measurement and recording system was as reliable as every other month, we then had a problem. The problem was to identify the cause of this observation. We would develop and test a number of different hypotheses. We used the data (including the new datum) to support or reject each hypothesis. NEVER did we use a hypothesis to reject or accept the new datum.
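The screening step we used can be sketched roughly as follows; the numbers are made up, and the 2-standard-deviation threshold was simply our working rule of thumb:

```python
import statistics

def flag_suspects(series, k=2.0):
    """Flag values more than k standard deviations from the series mean.

    A flagged datum is a candidate for investigation of the measurement
    and recording chain -- NEVER something to be discarded outright.
    """
    mean = statistics.mean(series)
    sd = statistics.stdev(series)
    return [(i, x) for i, x in enumerate(series) if abs(x - mean) > k * sd]

# Illustrative monthly series: only the 140 gets flagged for investigation.
print(flag_suspects([100, 102, 98, 101, 99, 140]))  # -> [(5, 140)]
```

Whether the flagged value turns out to be a recording fault or the most interesting observation of the year is decided by investigation, not by the flag.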

So accurately measured and recorded data are “meaningful” such that they can be used to formulate and test hypotheses (is that the plural form?). One does NOT need a hypothesis to give any meaning to data. Data just are what they are and “outliers” are just as meaningful data as non-outliers.

As stated by someone else here, we found that we learned far more about the underlying causes and systems which produced the observations because of “outliers” than we did from any other observations.

Once again, hypotheses do not give any meaning to data. Data give meaning to a hypothesis.

Richard,
Maybe I was thinking of the data from a different point of view. My point of view started from considering the CERES data after it was interpreted as an energy flux. Accepting this energy flux as data, the conclusion is that the earth must be warming rapidly, as the net energy flux is six or seven times larger than in the IPCC estimates. In my interpretation all the steps required to get this number are part of the experiment.

The other possibility is to say that only values given directly by the instruments are experimental data and all calculations interpreting the data are theory as they certainly can be interpreted to be. Then the error may well be in the models used in converting instrumental data to energy flux. Then it may tell that we cannot use CERES to measure this flux accurately, because the required models are not good enough.

I am not expert enough to even guess where in the determination of the energy flux the error is. Wherever in the chain it is, it tells about the problems in this empirical determination, not about the estimate of the real overall energy flux for which we have limits from general principles.

Thank you for that. You seem to have found a point on which we can agree. You say;

“I am not expert enough to even guess where in the determination of the energy flux the error is. Wherever in the chain it is, it tells about the problems in this empirical determination, not about the estimate of the real overall energy flux for which we have limits from general principles.”

OK. I can accept that (at least conditionally). But it is a massive jump from that to an assertion that the data is wrong.

I hope you would agree with me that the data disconfirms the analysis but the analysis is still plausible because the accuracy of the data is not sufficient to completely refute the hypothesis.

Richard,
Another way of formulating my point of view is to say that the totality of empirical observations, and the models used in processing them to observe energy fluxes, presently involves an inconsistency of about 6 W/m^2. Therefore the empirical data cannot confirm or disprove anything dependent on better accuracy.
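A back-of-envelope calculation shows why a sustained 6 W/m^2 imbalance is so hard to reconcile with observations. All the constants below are round-number approximations, and the assumption that essentially all the heat goes into the top 700 m of ocean is deliberately crude:

```python
# Rough scale check, not a precise calculation. All constants are
# round-number approximations.
EARTH_AREA = 5.1e14        # m^2
OCEAN_AREA = 3.6e14        # m^2
SECONDS_PER_YEAR = 3.15e7
RHO = 1025.0               # seawater density, kg/m^3
CP = 3990.0                # seawater specific heat, J/(kg K)

def upper_ocean_warming(imbalance_w_m2, depth_m=700.0):
    """K/year of upper-ocean warming implied by a global-mean radiative
    imbalance, if all the heat went into the top `depth_m` of ocean."""
    joules_per_year = imbalance_w_m2 * EARTH_AREA * SECONDS_PER_YEAR
    heat_capacity = OCEAN_AREA * depth_m * RHO * CP  # joules per kelvin
    return joules_per_year / heat_capacity

print(upper_ocean_warming(6.0))   # ~0.09 K/yr -- far above anything observed
print(upper_ocean_warming(0.9))   # ~0.014 K/yr -- roughly the IPCC-scale figure
```

On these rough numbers, a 6 W/m^2 imbalance would warm the upper ocean by nearly a tenth of a degree per year, which is the inconsistency being described.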

The data indicates what it indicates. The uncertainties of the data limit the certainty that can be accepted for the indication. The data tells about the hypothesis, but the hypothesis tells nothing about the data.

So, I think we have to agree to disagree and remember our different views.

Thank you for your reply. However, I’m not sure I understand you when you say that “There is not much doubt about the conclusion that the data is in error, because it tells that earth should be warming 6 times faster than the IPCC expectation.” Could you please tell me what data you are referring to here?

FYI, a good example of the “theory trumps the data” mentality is to be found in the 2006 U.S. Climate Change Science Program Synthesis and Assessment Report (http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-frontmatter.pdf), which states as follows:

“Previously reported discrepancies between the amount of warming near the surface and higher in the atmosphere have been used to challenge the reliability of climate models and the reality of human-induced global warming. Specifically, surface data showed substantial global-average warming, while early versions of satellite and radiosonde data showed little or no warming above the surface. This significant discrepancy no longer exists because errors in the satellite and radiosonde data have been identified and corrected.”

The point is the wording of the statement. The authors were convinced of “the reality of human-induced global warming”, so they were bound to conclude that any data that didn’t support this “reality” were wrong whether they were or not.

As to whether the lower troposphere records did in fact contain “genuine” errors, all I can say is that when I compared the unadjusted MSU and radiosonde records some years ago I found good agreement between them, and when two independent data sets agree with each other it’s normally assumed that there is nothing seriously wrong with either of them. Certainly there’s no objective basis for applying any “corrections”.

As to whether error must always be assumed when dealing with observations, well, it’s always a good idea to bear the possibility in mind, but making the blanket assumption that all observations that conflict with the theory must be erroneous is what has gotten us into the mess we are in.

Now that radiosonde, satellite, and surface temperature records all agree with each other, to large extent, we can state that the claims of manipulation (or worse) applied to the various analyses of surface station data are not defensible, correct?

I still don’t see how one can make sense of the data without a hypothesis. A mere set of numbers doesn’t *mean* anything.

You demonstrate your complete ignorance of the scientific method by asking this question.

Of course the “6” is “legitimate”. It is a measurement result. Its difference from the other measurement values may result from an infinite number of possibilities; some – but only some – of those possibilities are methodological errors. If investigation fails to find such errors, then investigation of the difference of the “6” from the remainder of the data set is an indicated requirement.

What a scientist cannot do (if he/she is to be a scientist) is to call the “6” an “outlier” and to ignore it.

This blog is not the proper place for you to seek an elementary understanding of science. So, please refrain from further posts on this blog until you have completed a high school science course. Your ignorant interruptions are disrupting discussion on this blog.

The “6” cannot be understood as anything until a hypothesis is formulated, as Michael pointed out. Until one has a hypothesis, the “6” is neither correct nor incorrect – it simply is.

For some reason, you, Richard, seem to believe that the data is all one needs. That’s wrong – and hypotheses are what informs us as to the value of the data. You seem to want to impose an artificial division between data and hypothesis – it ain’t there.

That’s quite wrong. A simple example will suffice. The raw data for temperature stations are a mess of different instruments, changing instruments and different observing practices. For instance, if you have a station that took min/max temps at 7AM from 1850 to 1940, and then switched to an observation time of midnight, you WILL induce a bias if you fail to correct for that change. That bias can be estimated and corrected through an adjustment. Further, if you take a station that used a LIG thermometer from 1850 to 1979 and then switch it to an MMTS instrument, you will on average introduce yet another bias into the record. This bias can also be estimated, and the raw data can be adjusted accordingly. Further, if you take a station that is at 500m altitude and then move it down to sea level, you will also introduce a bias into the raw data. This bias too can be estimated and corrected for.
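The simplest version of such a step adjustment, assuming the change date is known from metadata, can be sketched as follows. Note this toy version estimates the offset from the station’s own record; real homogenization estimates it against neighboring stations, so this is only a sketch of the idea:

```python
def adjust_step_bias(series, change_index, window=120):
    """Estimate a step bias at a known change point (e.g. a documented
    instrument swap) as the difference between the mean of up to `window`
    points after the change and the mean of up to `window` points before
    it, then subtract that offset from the later segment.
    """
    before = series[max(0, change_index - window):change_index]
    after = series[change_index:change_index + window]
    offset = sum(after) / len(after) - sum(before) / len(before)
    return series[:change_index] + [x - offset for x in series[change_index:]]

# A record with a documented +2 degree jump at index 5 is flattened:
print(adjust_step_bias([10.0] * 5 + [12.0] * 5, 5, window=5))
```

The design point is that the adjustment removes only the estimated step; using a neighbor-based reference series instead of the station’s own record is what keeps genuine climate signal from being removed along with the bias.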

There is no such thing as ‘raw’ data. All data recorded from an instrument brings with it certain assumptions. When those assumptions are violated, you have choices about what to do with the data.

Steven,
Many people are too dogmatic about the scientific method. They do not accept that real science is continuously facing problems that cannot be solved according to dogma, but require honest use of common sense and understanding that comes from long experience.

Scientists must be very careful in manipulating data, but they must also be very careful in accepting data without manipulation.

When the problems are different each time, no dogmatic rules lead to results as good and as objective as those a real-world scientist can produce.

First you state that “The raw data for temperature stations are a mess of different instruments, changing instruments and different observing practices”, with the clear implication being that you think the raw data are too heavily biased to be used without applying bias corrections. This isn’t true. A simple and robust test for biases is to compare different records in the same area, with the idea being that if they track each other within reasonable limits they can be considered substantially bias-free. My reviews of the raw temperature records (and I’ve reviewed thousands of them) show that the majority – I don’t have any specific numbers, but I’m guessing 70-80% – pass this test. It is in fact perfectly possible to put together a global data base of raw surface station records that show no signs of bias and which can be used without adjustment. I know it is because I’ve done it.
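A crude version of the comparison test described above might look like the following; the tolerance and the simple half-period split are arbitrary choices for illustration, not the actual procedure used:

```python
def neighbor_difference_test(station_a, station_b, tolerance=0.5):
    """Check whether two nearby station records track each other.

    Takes the difference series of the two records and requires that its
    mean not drift by more than `tolerance` (degrees) between the first
    and second halves of the period. A drifting difference suggests an
    inhomogeneity in one of the records; a stable difference suggests
    both can be treated as substantially bias-free.
    """
    diffs = [a - b for a, b in zip(station_a, station_b)]
    half = len(diffs) // 2
    first = sum(diffs[:half]) / half
    second = sum(diffs[half:]) / (len(diffs) - half)
    return abs(second - first) <= tolerance

# Two records offset by a constant 1 degree pass; a mid-record 2-degree
# step in one of them fails.
print(neighbor_difference_test([15.0, 15.2, 14.9, 15.1], [14.0, 14.2, 13.9, 14.1]))
print(neighbor_difference_test([15.0] * 6, [14.0, 14.0, 14.0, 12.0, 12.0, 12.0]))
```

The logic is the same as comparing plots by eye over thousands of records: a constant offset between neighbors is harmless, a drift is not.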

Second, you assume that whatever biases may exist in the raw records can be “estimated and corrected for”. This isn’t true either. Metadata-based attempts to correct out the impacts of station moves, instrumentation, reading time changes etc. have invariably introduced more distortions and biases than were there to begin with. Examples include the USHCN corrections to the US records, the Australian BOM corrections to the Australian records and the (recently-discredited) NIWA corrections to the New Zealand records, all of which generate warming gradients that aren’t visible in the raw data.

As I have noted in other posts, the a priori assumption that all raw climate data are biased and that we can “correct” them to achieve a more convenient result has had a major and strongly negative impact on the quality of the time series we use.

If I go out and measure a number of ants from a colony of a certain species, I may find that of 100 ants, 99 fall in the range 1.5 – 2.0 mm, but one is 5 mm long.

So far all I have is data, and no hypothesis. So now I start hypothesising. Two reasonable hypotheses are that a). the larger ant wasn’t of the same species or didn’t come from the same colony; b). Some ants, whilst having identical morphology, are bigger because they act as soldiers in the event of attack by sizeable predators.

It’s only now that I test the hypotheses. This time I go out and collect 1000 ants, making sure that they come only from the actual interior of the colony. I now find that there are 12 bigger specimens between 5 and 6 mm long, with a mean length of 5.3 mm.

Next, I release an ant-eating predator near the colony and observe that a good number of the larger ants emerge and attack it. Looks like hypothesis b. has been to some degree confirmed by evidence.

Lest this example seem too contrived, I would remind people that when new or unfamiliar species are encountered, it’s standard practice to measure things like length for comparison against published species descriptions, and that’s one way that new species can be identified.

So in this case we didn’t have an outlier that indicated anything was wrong with the data, nor did we start with a hypothesis. On the contrary, the outlier wasn’t an outlier, and the anomaly helped me postulate a testable hypothesis.

Had I started with a hypothesis, e.g. that all ants of a given species must necessarily be close in length, and rejected the exceptional one as an outlier (assuming it was of a different species that had just been there by accident), this would be an example of hypothesis being presumed to have greater weight than data, and that “anomalous” data should *a priori* be treated as an error.

Data very frequently precedes the formation of hypotheses. I suppose a famous example is the Mpemba effect, observed by a Tanzanian schoolboy, that warm liquids can freeze more quickly in a fridge than cool ones. After the usual dissing of the data, the phenomenon was found to be real, requiring the formulation and testing of hypotheses to explain it.

D64
Maybe you are confusing hypotheses with questions?
e.g. Data comes in saying that 1000 people died in road accidents. This data is useful for a myriad of applications without any hypothesis.
If the question were asked “what is causing the accidents?”, one could proceed to analyse the data with or without a hypothesis. In fact, having a hypothesis may even introduce confirmation bias.

Derecho64 | January 8, 2011 at 8:56 pm |
Let’s say you measure something with the following numbers as a result:

1.0, 1.1, 1.0, 1.2, 1.15, 1.12, 6.0.

Is the “6″ legitimate? How does one explain if it’s known that there is no error?

=========================

In these circumstances it strikes me you’d develop multiple hypotheses, with 6 as an outlier and as real, then wait for more data to come in to see which is correct, as well as designing more experiments to test each hypothesis.

The problem with Trenberth is that so far in his speculations about ‘missing heat’ there is one possible hypothesis that he’s failed to suggest. That is that his energy imbalance estimates are wrong and the OHC data is fine. In fact in the present circumstances the OHC data is a test for his earlier energy imbalance estimates. They fail the test, in these circumstances every option should be investigated.

CERES products are not ‘data’ or observations in the sense of the word I think you intend. They are the result of many, many processing steps. Those steps involve theory; yes, in fact they involve RTEs. This is one of those interesting cases where if you want to accept the CERES product as “fact” you are logically committed to accepting all the theory that gets applied to render the final product, namely the RTEs.

In truth the notion that one has “data” over here and “theory” over there is itself a dogma. The raw data from a sensor is voltages. The theory that gets applied to these voltages to render other units of measure transforms that “raw” data into a theory laden entity.

If you merely google CERES and look through all the processing steps and the actual algorithms used to create the “data” product, you’ll see that it’s not at all like measuring ants.

Steven – That’s a fascinating link. Thanks for the information. Your overall point is well taken, but I believe that if you look at the processing algorithm, the TOA fluxes (responsible for computing imbalances) are mainly derived from instrument measurements and averaging processes, but not from RTEs – at least, that’s the way I interpret the diagram. The RTEs appear to relate primarily to surface fluxes. In that sense, the TOA data are less dependent on theories of radiative transfer. Instrument calibration and errors as well as sampling issues of course remain a concern with measurements of this type.

“The only major temperature data set that doesn’t get massaged in this fashion is the MSU upper stratosphere record, which is the only one which in its unadjusted form behaves the way AGW says it should.”

But sadly it must be wrong. We’ve already received the wisdom of a couple of the world’s foremost climate modellers that the models are correct, there is absolutely no need to do any validation or verification of them and that nobody without a PhD in Radiative Physics is even qualified to have an opinion about the quality of the models.

So Joe Sixpack and I are in a bit of a fix. How can we reconcile the irreconcilable? It is by definition impossible that climate scientists can ever be mistaken. And yet the real world is not behaving as it should. What a conundrum…….

Perhaps we haven’t been building our Temples to Gaia tall enough?

Suggest that in UK we make a contribution by making a vast erection about 1500 feet tall somewhere like Orkney and sticking blades on it. Then, using the Cargo Cultists as our exemplars, if we all set fire to everything in sight, the heat will come back and all will be well with the world.

Equally when its all cold and frosty and the wind isn’t blowing, if we all blow out simultaneously at the vast erection, we can make the blades turn and light a small bulb … which will show that Mother Gaia is smiling once more upon us.

Indeed.
If you put ice in your drink on a hot day, your drink gets cold.
The heat previously in your drink doesn’t ‘hide’ around somewhere in your drink, only to come back with a vengeance once the ice is melted – it’s gone.

There is NO missing heat. What we are “missing” is adequate knowledge of a very complex system. Unfortunately, we have stupidity in abundance and too many “bright” ideas, and too many jackals ripping everything to shreds because they want a “piece” of the $$$$Pie$$$$, and some have said we have too little time and thereby raised the stakes of finding answers “immediately” to save the World from the great Unknown. It’s NOT the heat that’s missing. It’s NOT at all the heat that’s missing. We have the heat we have, no more, no less. We’re missing something, but it ain’t heat.

I first came across the problem with the “hidden heat” when I read “Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model”, Smith et al., Science, 10 August 2007, Vol. 317, pp. 796–799.

What had happened was that the peak temperature in the El Nino year of 1998 had not been followed by ever-increasing global temperatures. In fact temperatures had remained steady. Some explanation seemed to be required. So Smith et al tweaked their model, and hindcast the HAD/CRU data for several years. Then, with this “validated” model, they forecast what would happen for the next decade. This was done in 2005.

What the results showed was that temperatures would not rise until after 2009. But then, whatever natural factors were masking the CAGW, the warming would return with a vengeance, and the original rate of rise would resume. In other words, there was an implicit assumption that for several years the heat from CAGW would hide somewhere, and then reappear. My criticism of this paper at the time was that the physics of how this hiding was to take place was not discussed.

What the forecast included was that 2014 would have a temperature anomaly above the 2004 level of 0.30 +/-0.21 C, and also that in half the years following 2009 the temperature anomaly would be greater than that recorded in 1998. We can now start to see how likely these forecasts are to be true. 2010 will not be above 1998, so 2011 should be. It has become the custom of the UK Met. Office to issue a forecast for the year when they have all the data from the previous year. This occurs in the third week in January. It remains to be seen whether they will have a forecast for 2011. If they do, will it follow Smith et al and forecast a temperature above 1998? Or will it be more realistic, and take note of the current La Nina, which is forecast to persist well into 2011?

In any event, whether the predictions of Smith et al come to fruition will be somewhat of a test of whether the heat was, in fact, hiding.

Swanson shows the Smith et al trend on his shift graph. This is why I said the warmth of 2009 and 2010 appears to be a bit of a problem for the “shift”. Too brief to be a deal breaker, but marriage in trouble on honeymoon.

For sake of argument, let us assume your understanding of the Smith et al. predictions is correct. In that case, the anticipated effects are:
1. 2014 would have a temperature anomaly above the 2004 level of 0.30 +/-0.21 C;
2. in half the years following 2009 (i.e. two or more of the years from 2010 to 2014), the temperature anomaly would be greater than that recorded in 1998.

Well, we do not have long to wait, but the prediction is not difficult to achieve because of the error band of +/-0.21 C.

If the global temperature stays at its 2004 level then, given that error band, both predictions could be said to have been achieved within the errors even if the temperature were lower than in 2004 in all subsequent years before 2015.

So, the predictions are meaningless.

However, the “missing heat” is missing. Nobody has found it anywhere, and I have checked that it is not down the back of my sofa.

The global annual average change in ocean heat content, in Joules, over a year is a robust diagnostic for determining the radiative imbalance of the climate system (i.e. at the top of the atmosphere or tropopause level). I discussed this in the papers:

It is much easier to diagnose this heating/cooling rate from a time- and mass-weighted integrated quantity than from fluxes at the top of the atmosphere (or tropopause level).

There are issues, of course, if the data is of poor quality. However, since 2004, the data are considered robust enough (certainly in the levels from 700m to the surface) to diagnose these heating rates.
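The diagnostic itself is simple arithmetic: divide the year’s change in ocean heat content by the Earth’s surface area and the seconds in a year. The numbers below are round illustrative figures, not measured values:

```python
EARTH_AREA = 5.1e14       # m^2, rough
SECONDS_PER_YEAR = 3.15e7

def implied_imbalance(delta_ohc_joules_per_year):
    """Global-mean radiative imbalance (W/m^2) accounted for by a given
    annual change in ocean heat content, ignoring the comparatively
    small non-ocean heat reservoirs (atmosphere, land, ice)."""
    return delta_ohc_joules_per_year / (EARTH_AREA * SECONDS_PER_YEAR)

# e.g. an illustrative accumulation of ~1.4e22 J/yr corresponds to ~0.9 W/m^2:
print(implied_imbalance(1.4e22))
```

Because the conversion is this direct, an accurately measured annual ocean heat content change constrains the imbalance far more tightly than flux measurements at the top of the atmosphere.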

Heat could also be transferred deeper into the ocean. If this is true, it would likely be horizontally and vertically mixed such that its re-emergence to levels above 700m would likely be slow. Moreover, it is difficult to see how this heat could have been transferred to depths below 700m without being seen in the 700m-to-surface layer.

As a final comment, updated Argo analyses (and the global annual average heating in Joules) was promised for this Fall by Josh Willis. So far this data analysis has not been forthcoming. It is amazing that a climate metric as important as ocean heat content changes is not available in near real-time.

Perhaps the warming rates will continue to be well below those forecast by the IPCC models (as shown in the Knox and Douglass, 2010 paper – http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf). If so, this would be a clear example of testing those models and showing them to be failing in predictive skill with respect to this climate metric. Alternatively, heating could have resumed since 2008, in which case the models could not be rejected.

In either case, the testing of the models against the ocean heat is a more robust evaluation than using the global annual average surface temperature trend with all of its uncertainties and systematic biases; e.g. see

JCH I still struggle to understand the pre-occupation with the quality of ARGO data when it clearly represents a step change in data acquisition compared with what came before. The real question is why we should trust the spatially and temporally poor pre-ARGO data. I don’t argue for throwing away the pre-ARGO data, just for putting it into context.
Willis did state on Roger’s website that any correction is unlikely to be large, so it looks like we’re probably stuck with the same conclusions from the ARGO data.
If wait-and-see is the best possible response at the moment, then it’s worth remembering that Trenberth failed to do this in identifying ARGO as the likely source of the ‘missing heat’ problem.

Thanks, Dr. Pielke, for the references. I recall the Klotzbach et al. reference with interest, because it was a departure from the standard speculation about surface/troposphere differences. It will be interesting to see how this plays out with future land- and ocean-based temperature measurements.

Regarding your statement above: “Heat could also be transferred deeper into the ocean. If this is true, it would likely be horizontally and vertically mixed such that its re-emergence to levels above 700m would likely be slow. Moreover, it is difficult to see how this heat could have been transferred to depths below 700m without being seen in the 700m-to-surface layer.”

Because the data are uncertain, one can’t draw firm conclusions, but I believe the CERES data, the Leuliette/Miller steric sea level data, and the Purkey/Johnson deep ocean data (all referenced in my long comment upthread) combine to make storage in the deep ocean the most plausible explanation for at least some of the heat not accounted for elsewhere. The Purkey/Johnson data, in particular, not only imply that the deep ocean has stored more heat than we had previously envisaged, but that heat can be transferred downward from the upper ocean faster than assumed. Faster transit through the upper layer will reduce the signal at that level.

Deep ocean storage of some of Trenberth’s extra heat is admittedly speculative, but would seem a better fit to the observations of both the CERES imbalance and the Purkey/Johnson deep ocean data than an interpretation that concludes that no extra heat has accumulated within the climate system.

Roger, I’d like to thank you as well. Your comment that “the global annual average change in the oceans in Joules over a year is a robust diagnostic” is so true. Climate science has been primarily focused on air temperature (out of necessity), which is very difficult to measure accurately. On top of that, the heat capacity of the oceans is orders of magnitude greater than that of the atmosphere, so that’s where most of the heat is. And we can measure it directly at varying depths and with reasonably evenly spaced probes. IMO, Josh Willis has perhaps the most important job in climate science. I hope he’s very good at it. It’s a travesty that there is still so little attention and so few resources applied to it.

Moreover, it is difficult to see how this heat could have been transferred to depths below 700m without being seen in the 700m to surface layer.

Easy. If the volume of the ocean below 700m is a million times that above, then the answer to your question is obvious: 99.9999% of it is going into the deep ocean, so why would the remaining 0.0001% have any discernible effect on the temperature of the top 700m?

Obviously the ratio is not that large, I just exaggerated it to show that without a concrete number for the ratio, no conclusion of the kind you’re claiming can be drawn.
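The point can be made concrete with assumed numbers. A minimal sketch (all figures illustrative: a 0.9 W/m^2 imbalance over the ocean surface, a 3700 m mean ocean depth, and heat split between layers strictly in proportion to volume):

```python
# If an assumed 0.9 W/m^2 imbalance (per m^2 of ocean surface) were split
# between the 0-700 m layer and the deep ocean in proportion to volume,
# how fast would the upper layer warm?  All figures are rough illustrations.
RHO, CP = 1025.0, 4000.0          # seawater density (kg/m^3), specific heat (J/kg/K)
MEAN_DEPTH = 3700.0               # approximate mean ocean depth (m)
UPPER = 700.0                     # upper-layer thickness (m)
FLUX = 0.9                        # assumed W per m^2 of ocean surface

upper_share = UPPER / MEAN_DEPTH                    # volume share of 0-700 m (~19%)
joules_per_m2_yr = FLUX * upper_share * 3.156e7     # J/m^2/yr retained up top
dT_per_yr = joules_per_m2_yr / (RHO * CP * UPPER)   # upper-layer warming rate, K/yr

print(f"upper-layer share {upper_share:.2f}, warming {dT_per_yr*1e3:.1f} mK/yr")
```

With the real ratio, the upper layer still keeps roughly a fifth of the heat, so a purely volume-proportional split would leave a small but non-zero signal above 700 m.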

“2010 was Australia’s wettest year since 2000 and the third-wettest year on record (records commence in 1900).” This was predicted years ago by Kristen Byrnes (Ponder the Maunder), the 15-year-old genius slayer of Al Gore, the AGW True Believers’ king with no clothes. It was predicted by Byrnes at a time when the suffering of many during a severe drought in Australia was used by the Left to sow superstition and fear about global warming being the cause of the drought.

Byrnes’ prediction was based on the same science that enabled a high school student to see through the lies and deceptions of arguably one of the most powerful men on Earth. And, she had the courage to challenge the winner of a Nobel prize and also confront the superstition and ignorance of the adults around her.

Teach your parents well, Kristen. And now, more Australians than ever understand the truth. Global warming alarmism has never been anything more than a hoax based on homemade mathematics. And, the hoax has been a major tactical weapon in the politics of fear that has been waged by the Left against Western civilization for years.

I think it might be useful to go back to square one and read the official explanation given to the public as to why the Argo System was implemented in the first place and what its purpose is. The rationale given on the Argo home page lays this out clearly, placing great emphasis on the need for Argo-supplied data to improve the climate models. The rationale assumes–at least it seems to me–the predetermined conclusion that global warming is accelerating and that it is dangerous. I will leave it to others to discuss whether the facts stated in this rationale reflect objective reality.

“We are increasingly concerned about global change and its regional impacts. Sea level is rising at an accelerating rate of 3 mm/year, Arctic sea ice cover is shrinking and high latitude areas are warming rapidly. Extreme weather events cause loss of life and enormous burdens on the insurance industry. Globally, 8 of the 10 warmest years since 1860, when instrumental records began, were in the past decade.

“These effects are caused by a mixture of long-term climate change and natural variability. Their impacts are in some cases beneficial (lengthened growing seasons, opening of Arctic shipping routes) and in others adverse (increased coastal flooding, severe droughts, more extreme and frequent heat waves and weather events such as severe tropical cyclones).

“Understanding (and eventually predicting) changes in both the atmosphere and ocean are needed to guide international actions, to optimize governments’ policies and to shape industrial strategies. To make those predictions we need improved models of climate and of the entire earth system (including socio-economic factors).

“Lack of sustained observations of the atmosphere, oceans and land have hindered the development and validation of climate models. An example comes from a recent analysis which concluded that the currents transporting heat northwards in the Atlantic and influencing western European climate had weakened by 30% in the past decade. This result had to be based on just five research measurements spread over 40 years. Was this change part of a trend that might lead to a major change in the Atlantic circulation, or due to natural variability that will reverse in the future, or is it an artifact of the limited observations?

“In 1999, to combat this lack of data, an innovative step was taken by scientists to greatly improve the collection of observations inside the ocean through increased sampling of old and new quantities and increased coverage in terms of time and area.”

Thanks for this post and thread. It has given a lot of leads for thinking about modeling energy balances at the TOA (and the measurement issues involved). So much to do so little time.

I must say I’d be less concerned about adjusting things to force a balance (as per Trenberth and others) and more about using the balance to help identify the uncertainty in any model. The former rather than the latter is however a practice that becomes inevitable in complex GCMs (aka parametrization based on best expert knowledge, rather than statistical inference) and perhaps this explains the mindset.

Could deep layers warm without intermediate layers warming? To answer this you must consider that upwelling occurs mainly in a few small areas, such as off the coast of South America. Similarly, deep water creation occurs, for example, where cold and salty Arctic water sinks in the North Atlantic. If the sinking deep water is warmer than usual, then heat is being stored. To detect this, these specific sinking zones would need to be monitored, and this is not being done. So, “hiding” heat is not impossible, BUT there is no evidence for it either.
Also, re Trenberth’s travesty: it is not merely that CERES says we should be warming more than models predict, but, importantly, far more than the real world is clearly warming (models or not). The source of the error, as others have discussed, is not clear, but I don’t think the world really warmed that much with the heat hiding (though, as I just said, I can’t prove it).

This missing heat issue is obviously one of the most important issues right now to get solved or to study in more depth. For if it turns out that the missing heat is not there, then neither is any substantial AGW; but if, as Trenberth and others suspect, it is in the deeper ocean, as some recent research indicates, then that additional heat is yet more conclusive proof that AGW is occurring. I sent a query to Dr. Trenberth about his take on the Knox & Douglass paper, and he graciously took the time to give me this response:

“I have now read the paper and I dismiss it entirely. The authors do not describe what data they use. Argo data have undergone several major revisions. It also is varying in time in amount and coverage, and some floats were “bad” and some had calibration problems (the surface pressure was recorded as negative, indicating depth problems).
They also do not use the Lyman et al results, or our commentary on it:

They end up with a statement about their opinion. Well I will say emphatically that their opinion is wrong and we have evidence that it is so. This sort of paper should not have been published, and really it hasn’t been because this “journal” has no credibility. It is clear what the biases are of these authors.

Looking at the figure in the paper also reveals a clear problem: The values at the end are higher than any others, yet they have a downward trend. Clearly any “trend” they get depends critically on how they get it and it is highly dependent on the time period. By taking a 12 month running mean they discount the last 6 months.
_______

Too late Kim.
Funnily enough, whenever a set of data doesn’t agree with the AGW hypothesis, that data “must surely be wrong.”
But data from a handful of rotting tree stumps are so accurate as to indicate global temperatures to within tenths of degrees. Now that data they defend to the death (and still do in the case of the hockey stick).

That’s why I will bet London to a brick that some time soon a paper will come out declaring all the missing heat and then some (it’s worse than we thought) is deep, deep down in the abyss where it’ll be impossible for sceptics to refute.

The quality of the journal is a worry but doesn’t necessarily make the study illegitimate. To be honest I don’t see how Knox & Douglass is that far away from Trenberth. They both seem to recognize the ‘missing heat’; it’s just the way they interpret that result that differs.

Knox & Douglass’s bias/opinion is to take the ARGO data at face value to suggest the world isn’t warming as fast as predicted under CAGW.

Trenberth’s bias/opinion is to assume that the world really is warming and the error lies in the ARGO data.

I really don’t see how Trenberth is any more or less speculative than the others in this respect. The problem would be Trenberth’s opinion makes it into the most prestigious science journal on the planet while Knox & Douglass’ doesn’t.

I would at least like to get a list of the peers who reviewed the Knox & Douglass paper. The fact that their paper appeared in a “pay to publish” journal isn’t in itself a reason to dismiss their paper, but the quality of the peer review and the question as to why they wouldn’t publish in a more well-established journal (this is the first volume of this journal) does at least raise an eyebrow, I would think…

Information needs to be assessed on its merit and not on the basis of who provided it and/or how and where it was provided.

The Wright brothers published their seminal work on aeronautics in a journal on bee-keeping. The importance of that work is demonstrated by the existence of the Boeing 747 and not where it was published.

The journal’s policy is reviews from 3-5 authors who are referenced in the paper; plenty of skeptics were referenced (including Pielke), and maybe Willis was included. If you’re calling for an open review policy I’m willing to give it a go and support you on that.

You’re taking a rather harsh attitude; surely the work can be assessed on its merits. There are many authors who are forced to publish in such journals, including those from developing countries. It’s a sad fact that something like the country where the work is done can weigh heavily on the ability to publish. As I said, the journal itself doesn’t make the work illegitimate.

Looking at the figure in the paper also reveals a clear problem: The values at the end are higher than any others, yet they have a downward trend. Clearly any “trend” they get depends critically on how they get it and it is highly dependent on the time period. By taking a 12 month running mean they discount the last 6 months.

This criticism of the Knox & Douglass paper is justified, as this is the result the authors want to show most visibly. They perform, however, three further trend calculations, including the only one that makes any sense for this data. The only sensible one is their fourth method, using separate 12-month averages. This method gives a trend that is still slightly negative (the coefficient is 27% of their first value), but not to a statistically significant degree.

I wonder what one should conclude from the fact that they present, in addition to the correct method, one wrong method, which is emphasized most strongly, and two methods which are statistically pretty worthless due to excessive random noise in the monthly data.
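The effect of the averaging choice can be illustrated with synthetic data. A sketch (purely invented numbers, not the Knox & Douglass series) comparing a fit to raw monthly values, to a centered 12-month running mean, and to separate 12-month averages:

```python
import numpy as np

# Illustration of how the averaging method affects a short-record trend.
# Synthetic monthly series, 72 months (a 2003-2008-length record): noise
# around zero, with elevated values in the final six months.
rng = np.random.default_rng(0)
t = np.arange(72) / 12.0                    # time in years
y = rng.normal(0.0, 1.0, 72)
y[-6:] += 3.0                               # high values at the record's end

def ols_slope(x, v):
    """Least-squares slope of v against x."""
    return np.polyfit(x, v, 1)[0]

# (a) fit to the raw monthly values
slope_monthly = ols_slope(t, y)

# (b) fit to a 12-month running mean: only 61 valid points remain, so the
#     smoothed series effectively stops half a year before the record ends
run = np.convolve(y, np.ones(12) / 12, mode="valid")
slope_running = ols_slope(t[: len(run)] + 0.5, run)

# (c) fit to six separate 12-month averages: the last average carries the
#     elevated final months at full weight
annual = y.reshape(6, 12).mean(axis=1)
slope_annual = ols_slope(np.arange(6) + 0.5, annual)

print(slope_monthly, slope_running, slope_annual)
```

The point of the sketch is structural, not numerical: the running-mean series loses half a year at each end, so whatever happens in the final months is diluted, while separate 12-month averages retain it at full weight.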

Good point.
However.
The poles can hardly claim “heat” as their strong point. “Heat” of any amount worth consideration happens at the equator.
Therefore, any water sinking at the poles can hardly increase the T of the greater ocean.

I’m not going to argue with you on that point. It was stated earlier that research hasn’t really been done to answer that question.
That would be my point. Trenberth’s suggestion is little more than an educated guess about where the heat is. It could also be described as clutching at straws. Who really knows?

In 2005 James Hansen, Josh Willis, and Gavin Schmidt of NASA coauthored a significant article (in collaboration with twelve other scientists), on the “Earth’s Energy Imbalance: Confirmation and Implications” (Science, 3 June 2005, 1431-35). This paper affirmed the critical role of ocean heat as a robust metric for AGW. “Confirmation of the planetary energy imbalance,” they maintained, “can be obtained by measuring the heat content of the ocean, which must be the principal reservoir for excess energy” (1432).

Their expectation was that the earth’s climate system would continue accumulating heat more or less monotonically. Now that heat accumulation has stopped (and perhaps even reversed), the tables have turned. The same criteria used to support their hypothesis are now being used to falsify it. When all is said and done, if the climate system is not accumulating heat, the hypothesis is invalid or fundamentally inadequate.

Bill – Since the time of your guest blog almost two years ago, important evidence has emerged supporting, although not proving, the conclusion that substantial heat previously unaccounted for is being stored in the deep ocean. In addition, the latest ARGO data show a resumption of OHC rise in the upper ocean. The papers by Purkey and Johnson on deep ocean storage, and by Leuliette and Miller on steric sea level are pertinent. You may wish to read through some of the extensive commentary earlier in this thread for details.

It now appears that heat is transferred more rapidly and in greater quantity to the deep ocean than previously assumed, and so earlier values suggesting the minor importance of this transfer on intradecadal timescales must be revised in light of the new data. In addition, the steric sea level data are compatible with substantial deep ocean storage, because at those depths, additional heat results in almost no thermal expansion of seawater (in your blog post, you may have overlooked this principle).

Uncertainties remain, but the explanation that best accounts for all the data is storage of at least some extra heat at great ocean depths. Whether that provides a full accounting remains to be seen.

Fred, thanks for your comments. I am glad OHC has finally received more careful scrutiny as of late.

As you suggested above, the existence of deep ocean heat is still speculative. Indeed, the evidence is conflicting at best. E.g., “Recent energy balance of Earth” (R. S. Knox and D. H. Douglass) mentioned above.

The burden of proof is on those who espouse CO2 driven global anthropogenic warming. Consequently the hypothesis has not yet been validated, nor has it been falsified. Unfortunately proponents of the hypothesis have not proposed a testable criterion for falsification.

Just for the record: I do believe that humans are modifying climate in a variety of ways. But CO2 is just one of many forcings.

Judith, you raise the idea of [sudden] climate shifts. I haven’t seen any suggestion that one occurred in the early 2000s; of course, Trenberth’s missing heat could be the first observation. Are there any other metrics suggesting such a shift?

I agree it would be a suitable explanation for Trenberth’s ‘missing heat’ but is it anything but highly speculative?

Yes, Virginia, there is an explanation for why hot ocean water rises, and it’s not a ‘travesty.’ We have, in fact, two converging explanations that help us understand this natural phenomenon that plays a part in global warming and cooling.

First is the concept of a ‘torque,’ and the second is the dynamic power of ‘swirling vortices’ that scientists are trying to model mathematically.

Both of these phenomena relate to the roles of the atmosphere, the oceans, the Earth’s `molten outer core’ and formation of Earth’s magnetic field on climate change.

Adriano Mazzarella (2008) has criticized the GCM modelers’ reductionist approach because he realized that it fails to account for many of the factors that should be part of a more robust holistic approach to global warming and cooling.

Among these many other factors are changes in ‘atmospheric circulation which, like a torque,’ can cause ‘the Earth’s rotation to decelerate which, in turn, causes a decrease in sea temperature.’ This circulation/rotation coupling is itself just one part, but an important part, of a larger process that might be described as a single ‘Earth’s rotation/sea temperature’ unit.

Looking at the second explanation, it was recently reported that UCSB researchers (results to be published in the journal Physical Review Letters) ‘filled the laboratory cylinders with water, and heated the water from below and cooled it from above,’ to better understand the dynamics of atmospheric circulation and the ‘swirling natural phenomena’ observed in nature.

As applied to Earth science, these researchers hope that it won’t be long before it can be conclusively shown that Trenberth is never going to find the global warming that he is looking for in the deep recesses of the ocean.

The reason it won’t be found is simple: it’s not there. No matter how much AGW True Believers may wish otherwise, global cooling is not proof of global warming.

Soon, the mathematics of the UCSB researchers will help reveal that, given differences in temperature — for example, in ocean temperature — in a real world where the Earth rotates on its axis, with warm water at the bottom of an ocean and colder water on top, the cold water will sink. The difference in temperature from top to bottom is itself a ‘causal factor’ that drives the flow downward.

I think we all knew this already as a simple process of convection. But, let’s hope that a sensible mathematical representation will make the process more accessible and hopefully will also make the government science authoritarians stop acting like persecutors of Galileo.

If a value for the global radiative imbalance is around 0.9 W/m^2 (Trenberth, Hansen and probably others) and, say, 90% of it is going into the oceans, which make up about 70% of the surface, we have around:

0.9 * 0.9 / 0.7 ≈ 1.16 W/m^2 of ocean

Given that the ocean has a surface area of ~360,000,000,000,000 m^2, that is a total flux of about 0.42 PW (420,000,000,000,000 W).

These are big numbers, but for some perspective the Atlantic MOC flux is about 1.2 PW, so we are talking about 1/3 of the heat provided by the Gulf Stream.

It is also equivalent to taking about 6 Sv (6,000,000 m^3/sec) of tropical lower water at 18°C down into the abyss and mixing it with bottom water at 2°C. For comparison, the part of the Gulf Stream that descends to form the western boundary current at depth is around 14 Sv.
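For what it's worth, the arithmetic above checks out. A sketch reproducing it with the same assumed inputs (0.9 W/m^2 imbalance, 90% into the ocean, 3.6e14 m^2 of ocean, and round figures for seawater density and heat capacity):

```python
# Rough check of the back-of-envelope numbers above.  All inputs assumed.
imbalance = 0.9                 # W per m^2 of Earth surface (assumed)
ocean_fraction = 0.7            # ocean share of Earth's surface
into_ocean = 0.9                # assumed fraction of the imbalance stored in the ocean
ocean_area = 3.6e14             # m^2

flux_per_m2 = imbalance * into_ocean / ocean_fraction   # W per m^2 of ocean
total_W = flux_per_m2 * ocean_area                      # total flux, W

# Equivalent volume transport: cooling water from 18 C to 2 C releases
# rho * cp * dT joules per m^3; divide the total flux by that for m^3/s.
rho, cp, dT = 1025.0, 4000.0, 16.0
sverdrups = total_W / (rho * cp * dT) / 1e6             # 1 Sv = 1e6 m^3/s

print(f"{flux_per_m2:.2f} W/m^2, {total_W/1e15:.2f} PW, {sverdrups:.1f} Sv")
```

This lands on the same ~1.16 W/m^2, ~0.42 PW, and ~6 Sv quoted in the comment.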

Given that the increase in OHC (0-700m) is very low, something of the order of the above must be taking place.

Also, as the energy books were, it seems, in balance just a few years ago (~2003), this mechanism must have commenced since then.

Now there are people who would say that this is a bit of stretch. That somehow the heat equivalent of about 1/3 of the Gulf Stream is sinking into the abyss without trace in a fashion that wasn’t occurring 7 years ago.

It is one of those glorious moments when either I have got it totally wrong or someone else has. The total heat budget (what gets absorbed, transported and dumped back out of the oceans) is about 2 PW, and apparently 20% of this has defied buoyancy and plunged into the deep blue under.

Now I do not know a whole lot about the oceans, but I do know that they tend to reject heat. They take down cold polar waters to make bottom water, which rises almost everywhere towards the surface at a rate of around 3-4 m/yr. Heat diffusing downward (actually some turbulent form of mixing) does so against the up escalator that is the rising ocean, hence tending to keep the surface much warmer than the abyss. Heat due to the warming since ~1910 is almost certainly descending through the 700 m level, but it almost certainly was 10 years ago too. It may be 0.1 W/m^2 or even 0.2 W/m^2 (figures in the Hansen-approved ballpark), but I really do not think it is going to be 1/3 of a Gulf Stream’s worth.
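The "up escalator" picture is the classic advective-diffusive balance, and its scale depth is easy to sketch. Assumed illustrative values: upwelling of 4 m/yr (as above) and an effective vertical diffusivity of 1e-4 m^2/s, a commonly quoted abyssal-average figure (the real value varies enormously with location):

```python
# The "up escalator" is the classic advective-diffusive balance,
# w * dT/dz = kappa * d2T/dz2, whose exponential solution has e-folding
# scale depth d = kappa / w.  Both inputs are assumed round figures.
kappa = 1e-4                  # m^2/s, assumed effective vertical diffusivity
w = 4.0 / 3.156e7             # upwelling: 4 m/yr converted to m/s
scale_depth = kappa / w       # m
print(f"thermocline scale depth ~ {scale_depth:.0f} m")
```

With these inputs the scale depth comes out of order several hundred meters, which is at least consistent with the observed depth of the main thermocline.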

Now, how big does a discrepancy have to get before one has to say: hold on a minute, please give answers in units of Gulf Streams?

One more thing: where are all the oceanographers when you need them? I need one now to tell me that I have got it all horribly wrong.

Areal coverage: good or very good.
Angular resolution: good
Temporal coverage: poor
Angular coverage: very poor

By temporal coverage I mean that they may know what the tropics are like at 1:30 PM & 1:30 AM (Aqua) or 10 AM & 10 PM (Terra) but are clueless about times more than 1 hr away from these. The problem is not so bad towards the poles, but it is still significant in my view.
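The aliasing worry here can be sketched with a toy diurnal cycle. Two samples 12 hours apart average out a pure 24-hour harmonic exactly, but any 12-hour (semidiurnal) component passes straight through into the "daily mean." All numbers below are invented for illustration:

```python
import math

# Toy cycle: a 24-h harmonic plus a 12-h harmonic (arbitrary units).
# Two samples 12 h apart (e.g. Aqua's 1:30 AM / 1:30 PM overpasses) cancel
# the 24-h component exactly but see the same phase of the 12-h component,
# which aliases straight into the estimated daily mean.
def cycle(hour):
    return math.cos(2 * math.pi * (hour - 15.0) / 24.0) \
         + 0.3 * math.cos(2 * math.pi * (hour - 15.0) / 12.0)

true_mean = sum(cycle(h / 10.0) for h in range(240)) / 240.0   # dense sampling
aqua_mean = 0.5 * (cycle(1.5) + cycle(13.5))                   # two overpasses
print(f"true daily mean {true_mean:+.3f}, 1:30 AM/PM estimate {aqua_mean:+.3f}")
```

The dense average is zero, while the two-overpass estimate inherits the full semidiurnal term; the bias depends entirely on the (unknown) phase of the cycle at each location.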

By angular coverage I mean that the sensors have a narrow field of view (good resolution) which is extended to make a wider synthetic aperture by scanning. I believe the normal operating mode is XT (cross track), but this still only views angles close to the zenith. It seems that the sensors do not scan limb to limb, so I cannot see how they can deal with variable anisotropy; i.e., the zenith-angle view for LW is affected by GHGs like water vapour (which varies with the weather) to a degree and in a fashion that is not the same as the limb view.

Now I really don’t know how it is all meant to pan out and give a result that is equivalent to the global energy balance. From what I have read I can neither see that it can, nor that it was ever designed to do so. I would have expected many more satellites with limb to limb apertures for that job.

Now I really don’t know how it is all meant to pan out and give a result that is equivalent to the global energy balance. From what I have read I can neither see that it can, nor that it was ever designed to do so.

Alex, isn’t that sort of what you were told at RC, or am I remembering it wrong?

The trouble is we need people to comment who have specific expertise. In this case oceanographers and remote sensing experts. Did I forget to mention modellers.

One minute I am led to think that we have specific and astonishingly accurate energy balance data from satellites; then Pekka points out that is not the case: the value comes from a Hansen paper which in turn relies on the OHC (1993-2003?). So what we are saying is that the OHC trend from 2004-2010 doesn’t agree with the 1993-2003 trend, which we know already, so I cannot see how the CERES data is informative, as it is pegged to the old OHC trend. So because Hansen’s model agreed with the old trend we seem to be inferring that the new trend must conform to Hansen’s model, as must the CERES data. So to that degree it is a straight fight between the model and the OHC. Did I forget to mention the SSTs? We seem to be all at sea. :)

One thing that I should like to see is the AOGCM OHC data (it is not available at Climate Explorer and I have not seen it elsewhere). Does it agree with OHC up to the end of the 20th-century model runs? Is it commented on in AR4? I have not seen it. There is some chance that some of the models have very thermally heavy oceans that bury oodles of heat. But I don’t know.

With the exception of the 2001 and 2003 wiggles in the OHC (or just take 1993 and 2010 as endpoints, thereby extending the Hansen paper), you get about half the heat flux they are looking for, ~0.5 W/m^2. Why is it considered so unlikely that this is the actual flux rate? Where is the contradictory evidence? I don’t know.
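The endpoint estimate is a one-line conversion. A sketch with an assumed round figure of 1.4e23 J for the upper-ocean heat gain over 1993-2010 (of the order of published estimates; substitute your preferred dataset):

```python
# Converting an OHC change between two endpoints into an equivalent
# radiative imbalance.  The 1.4e23 J figure is an assumed round number
# for the 0-700 m OHC rise over 1993-2010, used purely for illustration.
delta_ohc = 1.4e23              # J, assumed upper-ocean heat gain
years = 17.0                    # 1993 to 2010
earth_area = 5.1e14             # m^2, total Earth surface
seconds = years * 3.156e7

flux = delta_ohc / seconds / earth_area
print(f"equivalent imbalance ~ {flux:.2f} W/m^2 of Earth surface")
```

With that assumed heat gain, the endpoint method indeed gives roughly 0.5 W/m^2, about half the Trenberth/Hansen figure.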

So, since it is reasonably well established that the heat is missing, isn’t the interesting scientific question now, “What are the changes to the model(s) needed to explain the empirical observations?”

A system of weather stations continuously gathering temperature, velocity, composition, pressure and radiative data every few meters along the surface and from below sea level to the lower ionosphere every few seconds, to provide sufficient inputs to build such a model?

The best explanations we have of all things we see in the world share with fiction that they are fiction, stories we make up to give things sense that satisfies our need for narrative.

There isn’t enough computing power in the world, nor are there enough measuring platforms, nor is the mathematics advanced enough, nor the understanding of the best experts intricate enough, to trace the energy in the biosphere through every phase of its passage into and out of every buffer, store, transition and vector.

There are a great many things models are good for, and possibly changes to the models might help theorists form a better picture than they now have, but to fully explain the empirical observations, is, I think, not within the power of models for the simple reason of the amount of Chaos in the system.

There are several assumptions in each of the models; e.g. aerosol forcing changes and magnitudes, cloud covers and types, etc. A small change to one of these assumptions would alter the thermal input to OHC indicated by the models.

And that is why it is a travesty to assume the empirical data is wrong when it disconfirms the models. In any real science a model is only correct to the degree that it is supported by the available empirical data.

Arm waving about the inadequacy of the data (and the OHC data is inadequate as is much other climate data) does not change the fact that the models are only correct to the degree that they are supported by the data.

Orkneygal. It will usually be possible to change the fudge factors of ANY model so as to make that model correctly hindcast any set of data that is known. This will NEVER make a model which is fundamentally wrong, correct. You cannot make a silk purse out of a sow’s ear.

Except that you don’t have “empirical” “observations”. You have samples of instrument outputs that are made according to certain theoretical assumptions, readings that are then processed using algorithms with certain theoretical assumptions, and then data “products” that are summarized using certain statistical procedures that depend upon assumptions. This is not measuring ants.

I’m suggesting that many people who talk about “data” disconfirming “models” have an idealistic view of what data is. Specifically in the case of CERES data, which is highly processed and interwoven with physical theory. For example, any sensor data from space that takes clouds into account is dependent on the same physical theories that play a part in every GCM (for example, radiative transfer equations). So one cannot consistently accept data (from CERES) which relies on this physics and at the same time question that physics when it is embodied in a GCM.

WRT Argos. I wouldn’t render a judgment without looking at it, but a brief review of the design documents (all I care to do, and need to do, to make my point) will show you several things.
1. The “data” produced is the result of the application of physical theory. Consequently, it is not “raw” data, but its accuracy depends upon taking certain physical theory as true.
2. The data collection procedure (the sampling methodology) is informed by circulation models. That is, in order to plan the number of floats and verify that the collection system would be able to provide good samples, the team employed the same kind of circulation models that are used in climate models.

The point being this: the idea that there is a “model” over here and “data” over there, and that somehow the process of comparing them is a simple, straightforward go/no-go decision, is an idealization that bears no relation to the facts of the case. It is a “theory” of the scientific method that is not supported by the very “facts” of the case. We may dream that science operates this way, but it does not in fact operate this way.

WRT Trenberth, all we know is that “models of climate” and “models of data” are not in alignment. That’s all we know. We may suspect the climate models because they are more complex, but we cannot rule out substantial errors in the models of data (what you call observations). Neither can we rule out errors in both. In fact, we cannot rule out errors even if they agree! For example, looking at one particular issue in a GCM, folks found that the model bias in one direction was perfectly offset by an observation error in the other direction. Both were wrong, but they agreed. Ha. So all you ever have is a tentative agreement between the two or a tentative disagreement. You want certainty? Sorry, not happening.

Yup. I wonder how many of these guys actually take the time to study the data flow diagrams and design documents for these measurement systems. And if they even realize how model/theory dependent “observations” are.

As per Mosh,
Dr. Tobis, please re-read Mosh’s screed above. If you really want to get into the communication biz, study this example.
Precise, scientific, no spin. From this I, your maybe audience, have now learnt a bit more about the science and have moved my understanding of CC.

No wailing or moshing of teeth about the catastrophe if we don’t do something, anything. If nothing else, the majority of the people who read these blogs are somewhat proficient in understanding science generalities.

You assert that there is no direct measurement, but only models that provide the CERES data and all other climatological data sets. But a scientific model is a representation of a hypothesis or theory, which is an understanding of information (i.e. data). So, in the absence of information there cannot be a scientific model: there can only be an imagined effect.

Therefore, according to your argument, the AGW hypothesis is not science but is science fiction.

However, the AGW hypothesis is a scientific idea because there is real data and the CERES data is some of it.

Fred Moolten refuted your assertion that the CERES data is merely a model above, and I agreed with his refutation saying;

“As you say, the TOA flux data derive their precision and accuracy errors from the measurement method. In this sense, they are “theory free”.

The issues of measurement method assessment are precisely analogous to those of “measuring ants”. Indeed, they are the same but a different ‘ruler’ is used.”

And I strongly suspect that if the CERES data did not disconfirm – but, instead, were to confirm – the AGW hypothesis then you would not be trying to dismiss it as merely another model.

You misunderstand entirely. If you were to read my comments on the web since 2007 you would find that I make the same consistent point WRT “raw” data and the “disconfirmation” of “theory”. So, if you suspect that I would change my opinion about CERES or ARGOS data, you are wrong. And worse, your suspicion was based on NO EVIDENCE that you even looked at my past postings.

The notion that CERES outputs are “data” or observations is an idealization. Here is what you have. You have a sensor that collects EM and outputs a voltage. That voltage is then processed. The processing depends upon and relies upon theory: unproven theory (all theory is unproven). So when the “data” from CERES (voltages transformed into other units of measure by physical theory) is compared to the output of a climate model (other data transformed by physical theory into data products), you are actually comparing the outputs of two models. Whether they agree or disagree, I would make the same point. When they disagree you do not know WHY they disagree. You may suppose that the data from CERES is more sound (and thus accept its underlying physical theory); you may suppose that the climate model data is more sound and suspect the CERES data. The MERE FACT of disagreement does not on its face tell you which is wrong. All it tells you, on its face, is that one or the other or BOTH should be investigated further. You might choose to accept CERES without even looking at it, trusting the work of others. You might choose to suspect the more complex and less well tested climate models. But logically, the mere fact of disagreement doesn’t tell you where the flaw is. In short, you cannot be certain the CERES data is correct and you cannot be certain that the climate model is wrong. Now, it gets even more interesting, because the CERES data is processed with algorithms (RTE) that some people want to deny.

So, for example: when you measure ants with a ruler, there is a physical theory attached to your measurement data, namely that a ruler has a physical property called length and that this length is invariant in the reference frame in which it is used. Nobody wants to waste time retesting that theory; we just accept the ruler “data” as “observation”. With CERES there is a much more elaborate physical theory governing the final output; you only have to read through the data-flow algorithm to see that. The same goes for Argo, although the algorithms there are less elaborate. The point is this: a model doesn’t simply “fail” if it is disconfirmed by “data”, and neither is it simply confirmed if it happens to agree with “data” (recall that I pointed out a case where the data were wrong and the model was wrong BUT they agreed).
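The point that instrument “data” are themselves model outputs can be made concrete with a toy sketch. Nothing below is real CERES processing; the linear radiometric model and its gain/offset constants are invented purely to illustrate that even a simple calibration step already embodies physical theory:

```python
# Toy example only: invented gain/offset, NOT real CERES calibration.
def counts_to_radiance(counts, gain=0.25, offset=1.2):
    """A linear radiometric model: raw sensor counts -> 'radiance'.

    Choosing a *linear* form, and choosing these constants, are both
    theory-laden decisions; change them and the downstream "data" change too.
    """
    return gain * counts + offset

raw_counts = [100, 200, 400]
# The "observations" handed to any model comparison are already model outputs:
radiances = [counts_to_radiance(c) for c in raw_counts]
```

A disagreement between `radiances` and a climate model output could, in principle, point to a flaw in either the model or the assumed `gain`/`offset` theory; the mismatch alone does not say which.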

As I said, this is a philosophical position, not a position taken up merely for convenience.

“According to the Duhem-Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton’s Law of Gravitation in our solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led, not to the rejection of Newton’s Law, but rather to the rejection of the hypothesis that there are only seven planets in our solar system. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, etc.

One consequence of the Duhem-Quine thesis is that any theory can be made compatible with any empirical observation by the addition of suitable ad hoc hypotheses.”

Since the physics the models are based upon was supposed to rest on empirical data, and you are now claiming there is no empirical data, I have to conclude that it is ALL a fantasy. Thank you for disproving AGW/ACC/ACD/whatever yet again!!

Your long screed says nothing of importance, except for its errors, which amount to an attack on the scientific method; e.g. these statements:

“A model doesnt simply “fail” if it is disconfirmed by “data”.”

YES IT DOES.

” The notion that CERES outputs are “data” or observations is an idealization. Here is what you have. You have a sensor that collects EM and outputs a voltage. …”

NONSENSE!
It is no more (indeed, somewhat less) of an “idealisation” of a measurement than determining the physical dimensions of a virus from a secondary-electron image obtained with a scanning transmission electron microscope (STEM).

Your entire post is sophistry as is demonstrated by its concluding statement; viz.

“One consequence of the Duhem-Quine thesis is that any theory can be made compatible with any empirical observation by the addition of suitable ad hoc hypotheses.”

ANYTHING CAN BE ARM-WAVED AWAY BY “ADDITION OF SUITABLE AD HOC HYPOTHESES”.

The AGW-hypothesis is disproved by empirical investigation.
1.
At altitude the ‘hot spot’ is missing.
2.
In the oceans the ‘missing heat’ is not found.
3.
In the temperature record for 15 years the global temperature rise has been missing.

And in addition to those scientific disproofs, the funds which fuel the AGW ‘gravy train’ will soon be missing because political support for action on AGW was missing at the Copenhagen and Cancun conferences.

The AGW-hypothesis is a ‘dead parrot’, you can delay its ‘falling over’ by nailing its feet to its perch with sophistry, but it is an ‘ex-parrot’. Live with it.

Richard,
Some empirical observations are very relevant concerning our understanding of the strength and attribution of the warming. Some other observations tell about something else.

The discussion between Roger Pielke Sr. and Fred Moolten in this chain shows clearly that the problems in deciding whether warming follows the mainstream views or not are on the level of a few tenths of W/m^2. Both alternatives are in strong contradiction with the value 6.4 W/m^2 deduced from the CERES data. The accuracy of the actual CERES measurements is, with 97% certainty, about 0.3 and 0.1 W/m^2 for the two satellites according to the CERES group. Thus the explanation has to be found somewhere else.

Based on the knowledge that I have been able to find, the most likely explanation for the discrepancy between the number 6.4 and the well-supported common understanding that the real net flux cannot exceed 1 W/m^2 (and may be significantly less) lies in the steps leading from the actual measurements to the value 6.4 W/m^2. These steps are based on various models of the atmosphere. I think these models should not be classified as climate models, since their task is to describe certain features of the actual situation at the time of the observations in more detail than climate models do, but they are certainly part of the family of atmospheric models. Thus the discrepancy is evidence of specific weaknesses in the understanding of the atmosphere and in modeling it.

A funny old thing about gravity is how it creates dissipative structures through the redistribution of mass; e.g. Fedorov:

On interannual time scales, the perturbation available potential energy E is anticorrelated with sea surface temperatures in the eastern tropical Pacific so that negative values of E correspond to El Niño conditions, and positive values correspond to La Niña conditions (Fig. 1). This correlation is related to changes in the slope of the thermocline associated with El Niño and La Niña. When the thermocline slope increases (as during La Niña; Fig. 1a), the warmer and lighter water is replaced by colder and hence heavier water, thus raising the center of mass of the system and increasing its gravitational potential energy.

There is an important difference between the programs converting the sensor voltages to temperature/pressure/whatever numbers and the GCMs — the “measurement” programs are more like engineering models, which are verified 97 ways from Sunday. (The resulting numbers are, of course, then handed over to the gentle ministrations of GISS, NASA, and CRU, who fold, spindle, and mutilate them until they show It’s Even Worse Than We Thought.)

Likewise modern measuring hardware is designed wherever possible to do self-checks and “confess” if it finds anything wrong — for example, the MSUs on at least one series of satellites recalibrate themselves every 24 hours by pointing their sensor into deep space and checking that they get the correct 3K background reading.

The GCMs, on the other hand, are heavily “parametrized” (i.e. loaded with armwaving fudge factors), based on unsupported assumptions, and have never been in any sense formally validated.

So while your basic point is true, it overlooks a qualitative distinction rather like asserting moral equivalence between Bernie Madoff and shoplifting a doughnut.

Boiled down, this reduces to the rather unremarkable statement that observation depends upon theory (in the extreme it becomes “how do you know that what you see is what’s happening?”).

The issue about whether to use GCM over more directly observed data lies in which is likely to give you greater accuracy (and you allude to this). Observations (and let’s include GCMs) that don’t allow for quantification of uncertainty are of much less use than those that do. As I said further down in this thread a bit of time spent just measuring the missing heat and its uncertainty is still required.

I noticed that the Trenberth 2009 paper you referenced still uses a slightly modified variant of the original Kiehl & Trenberth 1997 (K&T97) drawing. The original figure description suggests some of the numbers present could be off by as much as 20%. It appears there are at least two numbers there off by over 20% in the simple-calculations category. The more recent paper has modified those numbers by a few percent and also applied another significant figure of claimed accuracy to some of the net numbers at the boundaries. The numbers I’m referring to are surface and atmospheric/cloud albedo.

It seems that Trenberth is working under the assumption that his two numbers from the figure are actually correct, causing him to assume a similar magnitude effect for cloud albedo and IR cloud blocking. It can be shown in K&T97 style that the cloud albedo is underestimated while the surface albedo is overestimated, leading to a result that is in serious error.

In K&T97, 62% cloud cover is assumed, with most of the clouds taken to be optically thick (emissivity = 1, or for a small fraction, emissivity = 0.6). Roughly speaking, one winds up with the equivalent of about 60% coverage of optically thick clouds, assuming K&T97 did not treat the full 62% as optically thick. Under this cover, no surface-reflected light escapes and no direct incoming solar reaches the ground, as it is all diffuse.

Albedo can be calculated for gross averages by taking the ocean and land fractions for the surface and combining the surface with the cloud/atmosphere. These must contain the cloud-fraction weighting, and that seems to be where the original K&T97 error originated. The diagram shows a correct total, but it is composed of two incorrect numbers for surface and cloud albedo reflected power. The 2009 drawing is closer in value but still off, with cloud/atmosphere albedo reflecting 79 W/m^2 and the surface reflecting 23 W/m^2. As albedos, the numbers provided by K&T are something like 0.08 and 0.22 for surface and cloud/atmosphere. The surface under clear skies has an albedo of 0.08, and its contribution weighted for clear/cloudy skies comes out closer to 0.038, leaving clouds/atmosphere to contribute 0.26, assuming the total albedo is only 0.30. Taking average incoming power as 342 W/m^2, this becomes 13 W/m^2 of surface-reflected power and 89 W/m^2 of cloud/atmosphere-reflected power. Assuming cloud/atmosphere is all clouds should get us to about 10% accuracy. We then have roughly 89 W/m^2 of reflected power per unit area of additional cloud cover.

Comparing this to the cloud IR blocking is a bit different. Using 288.2 K (the 1976 standard atmosphere mean surface temperature) and the Stefan-Boltzmann law with emissivity 1.0, quite reasonable in the deep IR, we get 391 W/m^2 of outgoing power radiating from the surface. After subtracting the 102 W/m^2 of reflected (albedo) power, we are left with 240 W/m^2 absorbed by the Earth and atmosphere. The difference yields 151 W/m^2, which must be the blocking by the atmosphere: GHGs plus clouds. Note that this blocking is the average over clear and cloudy skies and is the difference between what the surface radiates and what escapes to space. Using a clear-sky result of 120 W/m^2 for the total GHG blocking contribution, one is left with 31 W/m^2 of blocking due to our 62% cloud cover. Converting to 100% cover gives the blocking effect of clouds, about 50 W/m^2.

So what we have is 50 W/m^2 of IR blocking and 89 W/m^2 of albedo reflection. If we subtract 10 W/m^2 for atmospheric (non-cloud) albedo, the latter becomes 79 W/m^2, for a net difference of about 29 W/m^2 of cooling for each additional square meter of total cloud cover.

One can plot incoming and outgoing power against cloud fraction. The curves intersect at 62% and diverge into an imbalance as one moves away from that point.
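The back-of-envelope chain above is easy to mislay in prose, so here is a minimal sketch reproducing it. All inputs (342 W/m^2 mean insolation, the 0.038/0.26 effective albedos, 120 W/m^2 clear-sky GHG blocking, 10 W/m^2 non-cloud atmospheric albedo) are the comment’s assumed round numbers, not measured values:

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S = 342.0           # assumed mean incoming solar power, W/m^2
CLOUD_FRACTION = 0.62

# Albedo side: cloud-weighted effective albedos from the comment.
surface_reflected = 0.038 * S                # ~13 W/m^2
cloud_atm_reflected = 0.26 * S               # ~89 W/m^2
absorbed = S - surface_reflected - cloud_atm_reflected   # ~240 W/m^2

# IR side: surface emission (emissivity 1 at 288.2 K) vs. what escapes.
surface_emission = SIGMA * 288.2**4          # ~391 W/m^2
total_blocking = surface_emission - absorbed # ~151 W/m^2 (GHGs + clouds)

# Subtract assumed clear-sky GHG blocking, then scale 62% cover to 100%:
cloud_blocking = (total_blocking - 120.0) / CLOUD_FRACTION  # ~50 W/m^2

# Net effect of extra cloud cover: albedo cooling minus IR blocking.
net_cooling = (cloud_atm_reflected - 10.0) - cloud_blocking  # ~29 W/m^2
```

The script lands on the same ~29 W/m^2 net cooling per unit of added cloud cover that the comment quotes, given its inputs.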

Seeing as you’ve been reasonably polite in an earlier response, I will reply here, and where it won’t be squashed up through indentation.

1. I don’t think it’s true that one can’t say anything about data without a hypothesis. In the Mpemba example, one could say that the data showed warm liquids froze quicker than cool ones. Mpemba himself did not know why, he merely noticed the phenomenon, but that in and of itself was highly interesting and significant.

In the ant example, one can say at least that 99% of the ants lay within the range 1.5-2.0 mm. This says a lot depending on the prior knowledge of the observer. There are things that insects of this size can and cannot do. They may be able to rest on the surface meniscus of water without sinking, for example. But they may well be drowned by a raindrop. They are likely to have a very high physical strength/size ratio. There could be huge numbers of them for a small biomass … all sorts of things could be usefully inferred without raising any new hypotheses.

2. I don’t think it’s true that without a hypothesis, data are just gobbledygook. What’s gobbledygook about data that samples the number of galaxies in small sectors of the sky with the aim of estimating the number of galaxies in the whole universe?

Some of the bread and butter of science is basically just stamp-collecting to get a feel for size or scale or for fitting in existing classification schemes.

I don’t know why you have this fixation on the primacy of the hypothesis in all circumstances, or on the idea that data are inherently meaningless.

If you live in a town where unattached members of the opposite sex are very scarce, the competition is fierce, and you are looking for a spouse, is it meaningless to be informed that in the next town, the opposite sex outnumber yours by two to one?

Data isn’t just contextless numbers. If it is, then, yes, it has no intrinsic meaning except perhaps to mathematicians of number theory. But scientists don’t go out to collect numbers like that. There’s usually an embedded intrinsic meaning or significance involved; and additionally, sometimes data is rather more qualitative than quantitative – an old joke is that the plural of anecdote is anecdata. Talk to physicians about that one…

I never said that hypotheses have primacy – but that numbers are just numbers without meaning *until* a hypothesis is attached to them. We cannot say anything about a number until we understand what it is, and that requires a hypothesis.

Our paper “Recent Energy Balance of the Earth”, the subject of this thread, has generated considerable discussion. We address the issues raised by Trenberth.

In a post by “R. Gates | January 8, 2011 at 5:40 pm there is a reply by Dr. Trenberth about his take on the Knox & Douglass paper:
“I have now read the paper and I dismiss it entirely. The authors do not describe what data they use. Argo data have undergone several major revisions. It also is varying in time in amount and coverage, and some floats were bad and some had calibration problems (the surface pressure was recorded as negative, indicating depth problems). They also do not use the Lyman et al results, or our commentary on it: Trenberth, K. E., 2010: The ocean is warming, isn’t it? Nature, 465, 304. [PDF].
They end up with a statement about their opinion. Well I will say emphatically that their opinion is wrong and we have evidence that it is so. This sort of paper should not have been published, and really it hasn’t been because this journal has no credibility. It is clear what the biases are of these authors.

Looking at the figure in the paper also reveals a clear problem: The values at the end are higher than any others yet they have a downward trend. Clearly any trend they get depends critically on how they get it and is highly dependent on the time period. By taking a 12 month running mean they discount the last 6 months.”

Our response:

We take Trenberth at his word that he has read our paper. However, he does not appear to understand it. We take up all of his critical points.

[a] We describe exactly the data we use. It comes from J. Willis.

[b] Willis is the acknowledged expert on Argo data and provides the scientific world with “official” OHC estimates. He attests to the robustness of the data. As recently as September 21, which post-dates the submission of our paper, he states in an email to Roger Pielke, Sr. “… In fact, corrections of the Argo pressure data may result in a small but significant systematic change in the early years of that curve. However, from 2005 on, the answer will not change much. So, yes it is now possible to test the 5-year warming rate from Argo. …” [The Willis statement is abstracted from an email exchange published on Roger Pielke Sr.’s web site with Willis’ permission.]

[c] We were aware of the Lyman et al. paper and Trenberth’s comments. In fact, our paper not only mentioned the Lyman paper (our ref 1); it was written to show that their estimate of the global warming trend was misleading, as they averaged the data across an event that they described as a “flattening” that occurred in 2001-2002. That event is almost certainly an abrupt climate phase transition previously reported in other studies [Tsonis et al., GRL 34, L13705 (2007); Douglass and Knox, Phys. Lett. A 373, 3296 (2009)]. The conclusions of the Lyman paper also relied heavily on theoretical estimates of FTOA by Trenberth et al. See the next point.

[d] What we said was “In our opinion, the missing energy problem is caused by a serious overestimate by TF of FTOA, which, they state, is most accurately determined by modeling.” This is based upon the following statement by Trenberth, Fasullo, and Kiehl [“Earth’s global energy budget” Bull Amer Meteorol Soc 90, 311-323 (2009), page 313]

“… The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ±0.15 W m-2 by Hansen et al. (2005)”.

The later Trenberth papers then use this “probable best” source as the basis of their conclusions about large energy imbalance and “missing energy.” Thus “missing energy” is inferred from models. Had the results been based on the observational CERES data, large error bars would have prevented any such conclusions.

[e] In regard to the last 6 months of data, our method 1 which uses a 12 month symmetric running mean does in fact use the last 12 months.

Dr Curry does not seem to be online at the moment, but I am sure she would want somebody on her blog to thank you for posting your comments. There is a real mixture of people that contribute here but I’m sure that they all appreciate it when somebody close to the grindstone takes the time to air their views and contribute to the debate. Thanks again, and we hope to hear more from you. Regards, RobB

David,
Concerning the use of the last 6 months of data: the way you determine the trend in the figure gives the last 5 months and the first 5 months less weight than the other months. This is a particularly serious error, because of the large difference between the first and second halves of every year.

Your fourth method is the only sensible simple way of determining the trend in the data. Using only one month of each year leads to too large random components over a period of only 6 years.
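The endpoint-weighting point above can be illustrated in a few lines: in a 12-month running mean over a finite record, a month near an edge enters far fewer averaging windows than an interior month, so a trend drawn through the smoothed curve underweights the record’s ends. This is a generic illustration (72 months, as in a 2003-2008 record), not the Knox & Douglass code:

```python
N = 72        # six years of monthly data, e.g. 2003-2008
WINDOW = 12   # 12-month running mean

# Number of 12-month windows that include month i (0-based index),
# assuming N >= 2 * WINDOW so the simple min() formula applies:
counts = [min(i + 1, N - i, WINDOW) for i in range(N)]

# Edge months are heavily underweighted relative to interior months:
# the first and last months enter only 1 window each, month index 5
# enters 6 windows, while any interior month enters all 12.
```

Summing `counts` recovers `(N - WINDOW + 1) * WINDOW`, every window’s twelve contributions; the deficit is concentrated entirely in the first and last several months, which is Pekka’s objection.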

Thanks for that excellent reply. I would like to (and intend to) invite Dr. Trenberth here to directly respond to your comments. It seems we have a huge opportunity here to make this blog exactly what Dr. Curry intended it to be. I would hope that Dr. Trenberth would see the value of such…

As a follow-up, I sent the following email to Dr. Trenberth (who had sent me his original response to the Knox and Douglass paper):
__________
Dr. Trenberth,

I wanted to call to your attention the fact that Robert Knox and David Douglass have responded to your criticism of their paper. They have responded on Dr. Judith Curry’s blog, Climate etc. http://www.judithcurry.com

As a layman, I am of course not aware of the professional dynamics that may exist between you and other professionals such as Dr. Curry, nor am I aware of how you may feel about her creating a blog to discuss issues of climate science etc., but I do know that there are many educated non-professionals such as myself who are very interested in the topic of climate and who would love to see an open discussion of the issues between the very top researchers in the field. This is, in essence, the vision Dr. Curry has for her blog. In the spirit of this I would encourage you to respond directly on Dr. Curry’s blog to Knox & Douglass’ rebuttal to your criticism of their paper.

David Douglass, in disputing both Trenberth’s criticism of the Knox and Douglass paper and Trenberth’s assessment of heat added to the climate system, states: “[d] What we said was ‘In our opinion, the missing energy problem is caused by a serious overestimate by TF of FTOA, which, they state, is most accurately determined by modeling.’ This is based upon the following statement by Trenberth, Fasullo, and Kiehl [“Earth’s global energy budget,” Bull. Amer. Meteorol. Soc. 90, 311-323 (2009), page 313]: ‘… The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ±0.15 W m-2 by Hansen et al. (2005)’.

The later Trenberth papers then use this “probable best” source as the basis of their conclusions about large energy imbalance and “missing energy.” Thus “missing energy” is inferred from models. Had the results been based on the observational CERES data, large error bars would have prevented any such conclusions.”

Although the statement that the Trenberth imbalance estimate is based on models is largely correct, the implications of Douglass’s comment strike me as misleading. What Trenberth did was to start with raw ERBE/CERES data showing substantial imbalances, and then adjust the imbalances downward to correspond better to model data, while keeping the adjustments within uncertainty ranges allowed by the data. It is correct that the error bars for the raw data are large, but the data almost uniformly show a significant imbalance in the warming direction. If unadjusted, the putative “missing heat” would be greater rather than less than the value cited by Trenberth. For additional details, see Fasullo and Trenberth J. Climate

Although not directly relevant to the above point, Trenberth has also pointed out that recent CERES data show an increasing imbalance, implying an increasing net heat flow into the system. While the absolute CERES measurements are vulnerable to acknowledged inaccuracies, CERES trends are less vulnerable. It appears therefore that identifying the source(s) of heat storage remains an imperative. I’ve provided links upthread to the Purkey/Johnson paper indicating that a partial answer lies in deep ocean warming. A full accounting remains to be determined.

One imbalance is the difference between the net energy flux calculated using CERES and what is thought to be maximally possible. CERES measurements indicate an apparent net flux of 6.4 W/m^2 according to Trenberth, Fasullo and Kiehl, while anything much larger than 1 W/m^2 appears to contradict severely the observed warming. As the figure 6.4 is known to be inaccurate, owing to the complexity of processing the data, this discrepancy is not a big surprise, but any step toward improving the accuracy would be very welcome.

The second imbalance remains when the net flux is assumed to be only 0.9 W/m^2: this is the difficulty of finding out where even this smaller heat flux is going.

The first imbalance must be resolved by improved empirical data and by improved models for processing that data. The second imbalance lies at the heart of the models of the atmosphere and oceans; it can be resolved either by figuring out where the heat has gone, or by determining that the net flux was after all smaller than 0.9 W/m^2.

Exactly; however, my point is that the 0.9 W/m^2 is an artifact of the models, not of empirical observation, so unless you only wish to demonstrate that the models aren’t working, it has little use as an assumption. (The very fact that there are these large discrepancies should warn us that the models are potentially unreliable witnesses on this score.)

The answer lies in observations (and I suspect that disaggregation into sea and atmosphere is premature until the higher-level system is made consistent, but I need to give this more thought).

The problem is that TFK etc. have simply chosen a number (no matter how well intentioned) that biases everything from that point on, while putting aside useful information about the uncertainty of the systems being investigated.

In some senses this is akin to dealing with the question “when did you stop beating your wife” – the false assumption behind the question leads one into the kind of complexity that is filling this thread.

Well, D64, I have nothing more to say to you, because you have just demonstrated yet again that your stubborn obtuseness knows no bounds. So I leave you with a message from King Arthur to the Black Knight:

It is quite thorough and not comprehensively addressed by Knox & Douglass. This result stated by the Lyman paper is worthy of mention:

” Accounting for multiple sources of uncertainty, a composite of several OHCA curves using different XBT bias corrections still yields a statistically significant linear warming trend for 1993–2008 of 0.64 W m-2 (calculated for the Earth’s entire surface area), with a 90-per-cent confidence interval of 0.53–0.75 W m-2.”

Is that because the two papers are looking at different timescales? I couldn’t access the Lyman paper (paywall) but read the abstract. Lyman refers to a trend between 1993 and 2008. Knox and Douglass are talking about more *recent* trends, and their abstract reads as follows:

“A recently published estimate of Earth’s global warming trend is 0.63 ± 0.28 W/m2, as calculated from ocean heat content anomaly data spanning 1993-2008. This value is not representative of the recent (2003-2008) warming/cooling rate because of a “flattening” that occurred around 2001-2002. Using only 2003-2008 data from Argo floats, we find by four different algorithms that the recent trend ranges from –0.010 to –0.161 W/m2 with a typical error bar of ±0.2 W/m2”

So, in essence, Knox & Douglass are just concerned about the recent flattening? If this is the case, then the Lyman et al. 2010 paper and their paper are addressing two different issues, as no one is disputing the recent flattening; the additional heat (0.53–0.75 W/m^2) added to the oceans from 1993-2008 is still there, and the recent “flattening” may just serve as a base for the next leg up. Additionally, it would be interesting to look for correlations between this flattening and other climate cycles, such as the rather quiet sun we’ve had during the flattening period, or the onset of the cool phase of the PDO, etc.

Thank you for bringing the Lyman et al. paper to my attention, because I hadn’t realized exactly how bad the OHC data were until I read it.

Lyman’s Figure 1 compares seven published estimates of OHC increase since 1993 that were constructed using the same raw data but different “mapping techniques” and “bias corrections”. According to the highest estimate OHC has increased by about 18 x 10^22 joules since 1993. According to the lowest it has increased by only about 6 x 10^22 joules. In other words, estimates of the amount of heat added to the ocean since 1993 vary by up to a factor of three depending on who “maps” and “corrects” the records.

Lyman’s Figure 2 re-plots some of the estimates using the same mapping techniques and climatologies to make them more directly comparable. Now the estimates mostly agree on about 13×10^22 joules of net OHC increase between 1993 and 2006, but between 1993 and 1999 they show anywhere from zero to 12×10^22 joules of net OHC increase and between 1999 and 2009 anywhere from 12 back down to zero.

The question raised by these results isn’t whether we can make a “robust” estimate of OHC increase since 1993, but whether we can make any meaningful quantitative estimates of OHC increase at all over this period.
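When comparing these joule figures with the W/m^2 trends quoted elsewhere in the thread, the unit conversion is worth making explicit. A sketch using the Earth-entire-surface convention of Lyman et al.; the 13 x 10^22 J over 13 years input is the round number from the comment above, not a new estimate:

```python
EARTH_AREA = 5.1e14        # m^2, Earth's total surface area
SECONDS_PER_YEAR = 3.156e7

def ohc_to_flux(delta_ohc_in_1e22_joules, years):
    """Average heating rate (W/m^2 over the whole Earth's surface)
    implied by an ocean heat content change over a given period."""
    joules = delta_ohc_in_1e22_joules * 1e22
    return joules / (EARTH_AREA * years * SECONDS_PER_YEAR)

# ~13 x 10^22 J gained between 1993 and 2006 works out to roughly
# 0.6 W/m^2, of the same order as Lyman's quoted 0.64 W/m^2 trend.
flux = ohc_to_flux(13.0, 13.0)
```

The same function shows how sensitive the implied flux is to the choice of mapping: the highest and lowest estimates in Lyman’s Figure 1 (18 vs. 6 x 10^22 J since 1993) differ by the same factor of three in W/m^2 terms.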

The OHC record is much longer than the Argo record alone. According to the Knox and Douglass paper, the Argo data (global coverage since about 2003) are trustworthy and show no warming trend. This is in agreement with the AMSR-E satellite data trusted by Roy Spencer.

You can always argue about trends by choosing appropriate dates, and they do argue, of course! I think the difficulty some people have with the Knox and Douglass findings is that they could call into question the recent radiative-forcing modelling at the TOA. Hence Trenberth’s objections. I hope your invitation is accepted because, as a fellow layman, I find this absolutely fascinating. RobB

Rob – Please see my comment above in response to Douglass. The imbalance in a warming direction estimated by the models is smaller rather than larger than the imbalance derived from the raw observational data. Substantial uncertainties remain, but they are difficult to reconcile with a conclusion that little or no imbalance exists. Future data may change this assessment, but at this point, it’s reasonable to assume the task of identifying where the extra heat is being added. The deep ocean remains a candidate for at least part of the answer.

Yes, Fred. I think I do understand your point. In effect, the CERES data at 6.4 W/m^2 seem to indicate warming that is most unlikely, and so I agree that they can be discounted. However, even the 0.9 W/m^2 modelled radiative imbalance suggests that heat is missing if you believe the Argo data. At the end of the day, until we can improve CERES and confirm the Argo data, we just don’t know whether heat is missing or whether it was never there in the first place. A travesty indeed. As a rather cynical aside, I will watch for revisions of data very carefully. Regards, Rob

Then it would be a loss for everyone. I think the kind of open dialog on important climate issues (and other issues) that Dr. Curry has encouraged here is the future.

I don’t know the dynamics of the relationship between Dr. Trenberth and Dr. Curry, but I would hope that she could encourage him as well. There is no better court of public discourse on these issues than you’ll find here, and perhaps on WUWT. I think back to last year when Dr. Walt Meier posted on that site, and even though he and Anthony disagreed, the discourse was incredibly helpful.

actually, i know trenberth and fasullo both very well (i was on Fasullo’s Ph.D. committee). trenberth is an insanely busy guy. i wonder if we can coax fasullo into participating; i will give it a shot.

He is leaving for travel to Europe until Jan. 19 (Bern ISSE 9-14; Grenoble ECRA 15-18) and will not be posting on this site but he is going to be responding in full in a future paper, showing what “rubbish” the Knox & Douglass paper is.

It wouldn’t be surprising: he seems to think the Argo data are rubbish, so any work based on them is going to suffer in his eyes. Add the fact that the authors are outside the consensus, and…

I guess the question is whether the criticism will be based on more evidence than was used to criticise the Argo data, which seems to be little more than a hunch.

Trenberth seems incapable of questioning his own energy-imbalance calculations. It’s always somebody else’s work that’s at fault. This isn’t a trait specific to him, or even to climate science. Some of the nicest human beings I know turn into attack monsters when their science comes under question.

HR,
If you read the paper by Trenberth, Fasullo and Kiehl, you will notice that they question their own energy-balance figures quite a lot. They state clearly that the figures are in many ways uncertain and that there are unresolved imbalances in the data.

My interpretation of the whole paper is that they want to create the best estimate for the main components of the energy balance, both for ocean areas and for continental areas. The result is summarized in the well-known picture, and that is essentially all they end with. There is no suggestion that the results should be interpreted as anything more precise, or as giving any evidence on the net energy flux. They just summarize the present knowledge as best they can.

Very many discussions on this site have shown that even an uncertain and inaccurate but still reasonable overview of the different contributing processes is very useful, and that is what Trenberth provides with this work, as an update of earlier similar calculations.

I am not a climate scientist. I am also not an idiot. It seems to me that if the current state of the quality of the instruments and methods used to gather climate data are insufficient, then the junk used in the past was even worse. How would we know that the earth’s total heat content has increased significantly, as a result of man caused increased CO2, or for any other reason?

Can some climate scientist explain why we should believe that this year is the second warmest on record, by some few hundredths of a degree of something or other, when one of the most famed of your well-educated brethren has said that the current measurement technology is so deficient that it is “a travesty”?

I humbly submit my summation of this discussion, with a few quotes and (my comments):

$ Defenses of the Dogma (you know who said what)

$ The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (The punch line is correct. And so too our understanding of climate is inadequate)

$ The conclusion is that we cannot get either support or refutation of the AGW calculations from data that is so inaccurate. The data helps in verifying the overall understanding of the radiative fluxes, but not at an accuracy level required for testing the proposed AGW models. (The data helps verify the hypothesis, but it is not good enough to test it. Huh?)

$ The data is not in contradiction with some sophisticated and complicated model, but it is in contradiction with the conservation of energy or some very conservative estimates of possible energy flows. (Models good, hypothesis good, data bad.)

$ How does one make sense of data without a hypothesis? (No comment necessary.)

$ Many people are too dogmatic about the scientific method. They do not accept that real science is continuously facing problems that cannot be solved according to dogma, but require honest use of common sense and understanding that comes from long experience. (Scientific method is dogma; too constraining. We got problems with our own dogma that necessitate substituting our experience for the scientific method)

$ When the problems are different each time, no dogmatic rules lead to the best and most objective results that a real-world scientist can produce. (Experience of famed scientists trying to save the world trumps scientific-method dogma)

$ Deep ocean storage of some of Trenberth’s extra heat is admittedly speculative, but would seem a better fit to the observations of both the CERES imbalance and the Purkey/Johnson deep ocean data than an interpretation that concludes that no extra heat has accumulated within the climate system. (We can make speculation work. We have experience.)

$ It now appears that heat is transferred more rapidly and in greater quantity to the deep ocean than previously assumed, and so earlier values suggesting the minor importance of this transfer on intradecadal timescales must be revised in light of the new data. In addition, the steric sea level data are compatible with substantial deep ocean storage, because at those depths additional heat results in almost no thermal expansion of seawater. In your blog post, you may have overlooked this principle.

Uncertainties remain, but the explanation that best accounts for all the data is storage of at least some extra heat at great ocean depths. Whether that provides a full accounting remains to be seen. (We often discover previous assumptions were wrong when we need to plug a hole in our dogma.)

* Consistent with Common Sense (* it’s a snowflake)

* Heat could also be transferred deeper into the ocean. If this is true, it would likely be horizontally and vertically mixed such that its re-emergence to levels above 700m would likely be slow. Moreover, it is difficult to see how this heat could have been transferred to depths below 700m without being seen in the 700m-to-surface layer. (No problem. It was transported down there through “the pipeline.”)

* Since the heat that is ‘missing’ is found only in the models and not in the natural world, then the models must be wrong. (Heretic.)

In Conclusion:

So, we should be worried about this missing heat buried in the vast expanse of the frigid oceans’ depths? My guess is a lot more heat can go missing and be stored down there in the cold abyss and we would still be none the wiser. Maybe that’s why the earth didn’t burn up when the concentration of CO2 in the atmosphere was in the thousands of ppm. Yeah, that’s it! The heat goes through the pipeline, gets stored down there, and when there is an ice age, the heat comes back out and melts the ice. Intelligent design? Makes sense if you believe that the heat got down there without being detected up here.

England was still under ice 2000 years ago, while the deserts of Egypt were green due to the receding ice.
This is why the northern part of our hemisphere was poorly populated in a society that did a great deal of roaming.

It is not the heat that is missing, it is the predictions about it that are lacking.
Who do we believe, our eyes (data) or the predictions of people who have made a living selling scary, inaccurate stories?

Hunter, the “data” really isn’t data. It’s highly processed and theory-laden; it’s a “model” of data. Let me ask you: do you accept the CERES “data” and all the physical theory required to produce the final numbers?

I think you miss the point. You and others seem to have this notion that there is a hard bright line between “models” and “observations”. My very specific point is this: there is no bright line between the two. One cannot say with certainty MERELY ON INSPECTION of a difference between “models” and “data” that the “data” is right and the model is wrong. NOR can one merely toss out data that doesn’t fit a model, even if that model is useful and “true” in many regards. Some people like to think that when the “data” doesn’t “fit” the model, the model is automatically and logically thrown on the trash heap. That’s factually untrue. What actually happens depends upon a whole host of pragmatic details. Sometimes the data gets another “scrubbing”, sometimes the model gets new equations added to it; adjustments are made all over the place as people try to figure out what exactly is going on. So, if you dropped a rock in a vacuum and noted that it fell twice as fast as physical laws predicted, you would not throw out the model we have for gravitation. You’d probably examine your data collection, you’d repeat the experiment; you’d do many things before you gave up a working theory.

The trouble with climate science data (say OHC or TOA flux) is that you don’t get to go back and redo the data collection. So, quite naturally, when data and theory come into conflict (they are always in conflict) you will find some people looking to change the model while others look at the data, while others look at both. That’s practically what happens. To suggest that we should always and forever “trust data” over models, OR trust models over data, is an unscientific dogma. What history shows us factually is that sometimes models are wrong, sometimes data is wrong, and sometimes they both are wrong. That’s a balanced, factual view of what actually has happened in the history and advance of science.

Unmitigated rubbish! When data is found to be wrong, it’s almost always by comparing it to more data, not by any models. At least in fields apart from climate science that’s the case, and frankly I expect the same happens in climate science too, despite efforts by some people outside the modeling groups to distort that process.

Models absolutely rely on data for validation. If you have to change the data to fit the model, it’s just cheating and is totally invalid. Models are literally fudged to favour actual data, whether right or wrong, logical or illogical, even if it goes against theories x, y and z, because that’s what real life is telling you. That’s what parameterisation means – fudging the equations to simulate reality. And the reality in question is the available data – however imperfect.

Your post is even more misleading because you presume there is a snowball’s chance in hell of the model ever being correct. As a modeler I can tell you that even attempting to model earth’s climate with such gross assumptions, overly large grid elements and timesteps, and far too many independent variables is an absolute fool’s errand. All you can do is compare some results of this model run with that model run and declare it less unrealistic than before. The few times I’ve read comments from climate modelers on the net, they were always worried about how poorly their model fit the data, not the other way around.

Have you ever even done any modeling anywhere at any time or are you just making things up like too many people?

Dear Richard Courtney and Steven Mosher, you both have valid points. No one denies in principle that data can demolish hypotheses. However, Steve makes a very valid point that datasets and data processing can contain errors, so unless you are fully (or rather sufficiently) confident in the data, you may not be able to completely discard the hypothesis. I’m reminded of Dr. Wunsch’s comment on determining sea level: “At best, the determination and attribution of global mean sea level change lies at the very edge of knowledge and technology”. The issues over the accuracy of ocean temperature measurements have been discussed over the past 23 years at least. One recent example is http://atm-phys.nies.go.jp/~ism/pub/ProjD/doc/Ishii-Kimoto-2009.pdf .
They explain the “discovery” of bias in the XBT units, and different bias in different units. However, they also fully understand that some significant number of measurements were taken with “XBT-unknown”, and so they make estimates. In the end, they acknowledge that “the XBT depth bias correction affects the interannual to interdecadal variations of ocean heat content (OHC) as well as long-term averages of ocean temperature. However, it is hard to verify these changes directly by other oceanographic observations, the spatio-temporal coverage of which is not sufficient for such an investigation in general”.

By the way, I note that the warming trend largely results from the “cooling” in the 70’s as a result of the “correction”. Perhaps so. At any rate, in my view it is difficult to place much confidence in the ocean heat data prior to 2004; the confidence since then is greater, and the data doesn’t show the warming predicted by the calculated radiative imbalance.

To give another example, people who are debating sea level talk as if TOPEX/Poseidon measurements can really measure global mean sea level to tenths of a mm (annually). Anyone reading the detailed technical literature on the processing and assumptions in calculating the “sea level” might feel the error bars are larger than usually indicated (see http://www.mdpi.org/sensors/papers/s6030131.pdf ).

You make a valid point with which I agree when you write:
“Steve makes a very valid point that datasets and data processing can contain errors, so unless you are fully (or rather sufficiently) confident in the data, you may not be able to completely discard the hypothesis.”

But if you read my postings on this matter you will see that I hold to the post-enlightenment view that empirical data provides indications. Those indications may disconfirm a hypothesis but lack sufficient precision and/or accuracy to disprove the hypothesis.

It is a denial of the scientific method to ignore empirical data merely because it does not confirm a hypothesis. And that denial is compounded by assertions that the data must be wrong when it fails to support the hypothesis.

“It is a denial of the scientific method to ignore empirical data merely because it does not confirm a hypothesis. And that denial is compounded by assertions that the data must be wrong when it fails to support the hypothesis.”

Nowhere have I suggested that one merely ignore the data. Nowhere have I suggested that the data must be wrong. What I’ve pointed out is that when the “data” and the hypothesis are in conflict, you have three logical paths to follow: question the data, question the hypothesis, or question both.

For example, if I handed you observations that indicated that F=m^2a as opposed to F=ma, you would rightly say, “check your data, Steve.” In that case you would suspect the data and not the theory. Why? The same logical structure – data says X, theory says not X – exists.
In that case you pick the theory, because you take the theory as “proven” or “settled” or “useful” or central.
It’s more likely that I got the data wrong.
Plus it’s a lot less work for me to double-check my data than for all of Newtonian mechanics to be scrapped.

Now when the theory is just emerging (climate science, let’s say) and the measurement systems are only recently deployed, when the theory and the data don’t match, there is work to do on all sides of the question. So it’s simplistic to merely say “the data prove the theory wrong; that’s the method.” It’s not the method. It’s factually not. Factually, people look at both the data and the theory for an explanation of why they don’t match. That’s a characteristic of emerging science.

Analyses of OHC using ARGO data, if I understand Dr. Pielke’s language, show the real world OHC at the end of 2008 is reasonably close to the GISS model’s “hunch”, based upon the Theory of AGW, of what OHC would be in 2008.

So from the perspective of those modelers, is there any missing heat as of 2008?

The 1993 to 2008 value is close to the Hansen prediction despite the flattening of the heating of the upper ocean reported in the Lyman et al 2010 paper since 2003 [if we use Jim Hansen’s expected radiative imbalance at the end of the 1990s of 0.85 Watts per meter squared and use 80% of that to represent the upper ocean heat content change, his prediction of the heating rate of the upper ocean is 0.68 Watts per meter squared. This is within the uncertainty of the Lyman et al analysis]. – Dr. Pielke

But you have to bear in mind that the pre-Argo data wasn’t worth much. Argo was set up because nobody trusted what was there before. It was supposed to show warming and prove the hypothesis once and for all, and everyone was very surprised when it didn’t. That was when they looked for errors and found some slight errors, but not enough to show warming. Of course, if the error had been on the warming side they wouldn’t have bothered to look for errors at all: we all know that, whether we say so or not!

Because the “missing heat” concept was proposed by Trenberth, I revisited his article advancing this concept at Trenberth 2009

Although Trenberth asks “Where is the heat that would reconcile observational data with an estimated 0.9 W/m^2 TOA imbalance?”, I asked the reverse question – “What TOA imbalance would be required for consistency with the observational data?”

Based on his Table 1, it appears that an imbalance of about 0.6 W/m^2 would be consistent, and perhaps slightly greater if one factors in the Purkey and Johnson data demonstrating an unexpectedly high rate of deep ocean warming. In that sense, the modeled and observed data are not extraordinarily discrepant even without invoking missing heat. A greater discrepancy remains between observed data on heat storage and the CERES TOA flux measurements, which indicate a greater warming imbalance than the models do. That issue is still clearly unresolved, and may require technological improvements in the accuracy of TOA flux measurements.

“In that sense, the modeled and observed data are not extraordinarily discrepant even without invoking missing heat.”

The difference between 0.6 and 0.9 W/m^2 seems extraordinarily discrepant to me. Aren’t there a lot of m^2 up there at TOA? Isn’t 0.9 usually 50% greater than 0.6? If your broker told you that your retirement account contained $600,000 instead of the $900,000 that you knew it to contain, would you not consider that to be extraordinarily discrepant?
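For scale, that 0.3 W/m^2 gap can be put in energy terms by multiplying over the Earth’s surface for a year. This is simple back-of-envelope arithmetic, not from any of the papers under discussion; the surface area and seconds-per-year constants are round approximations.

```python
# Back-of-envelope: what a 0.3 W/m^2 TOA discrepancy amounts to per year, globally.
EARTH_AREA_M2 = 5.1e14      # Earth's total surface area, m^2 (approximate)
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

discrepancy_wm2 = 0.9 - 0.6  # gap between the two imbalance estimates being compared
joules_per_year = discrepancy_wm2 * EARTH_AREA_M2 * SECONDS_PER_YEAR
print(f"{joules_per_year:.2e} J/yr")  # on the order of 5e21 J per year
```

That is roughly 0.5×10^22 J per year, which is why a persistent few-tenths W/m^2 disagreement is not a rounding error at the scale of ocean heat content budgets.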

“The Willis et al. measured heat storage of 0.62 W/m2 refers to the decadal mean for the upper 750 m of the ocean. Our simulated 1993-2003 heat storage rate was 0.6 W/m2 in the upper 750 m of the ocean. The decadal mean planetary energy imbalance, 0.75 W/m2, includes heat storage in the deeper ocean and energy used to melt ice and warm the air and land. 0.85 W/m2 is the imbalance at the end of the decade.”

A critically important research question that needs to be examined is what evidence exists for the continuation of this heating rate since 2003? It appears that the model prediction for this heating rate and the observations are not in agreement in recent years.

I agree that evidence for continuation of the cited heating rate, based on decadal means, should be examined. I agree as well with the disparity you note relative to ARGO float data in recent years (italics mine). Whether “years” are the most informative assessment interval remains an important question, particularly when the repository of most ocean heat – the deep ocean – is not sampled.

The Hansen source you cite also states:“We note the absence of ENSO variability in our coarse resolution ocean model and Willis et al. note that a 10-year change in the tropics is badly aliased by ENSO variability. Given also the large unforced variability of the distribution of ocean heat storage among our 5 model runs, there is no expectation that simulated geographical patterns of heat storage should match in detail those of observations.”

The last several years have witnessed major ENSO events. For all the above reasons, I believe these sources of variability preclude the use of intradecadal or even shorter term (year to year) changes in OHC sampled only in the upper oceans as criteria for judging mean rates of ocean heat storage, particularly in light of recent evidence for greater deep ocean storage rates than previously anticipated.

Apparently, the latest ARGO data reveal a resumption of upper ocean OHC increases. If this continues, perhaps concerns about a lack of ocean heat storage will be resolved. On the other hand, if this rise is temporary, and is followed by flat or declining OHC not merely in the aftermath of the current La Nina, but for more than a few years beyond that, we will certainly need to reassess our estimates of perceived TOA imbalances.

You keep pressing the deep ocean heat aspects very hard, and in your final paragraph above you said: “………particularly in light of recent evidence for greater deep ocean storage rates than previously anticipated.”

What is that evidence? Has the heat been detected? Surely, it’s more a case of the heat having gone missing, and therefore your assumption is that it must have gone deep? In my view, you press your point a little more strongly than the evidence suggests. Regards, Rob

Rob – Some of the heat storage not previously accounted for has been described in the Purkey and Johnson paper, which identifies repositories in the deep Southern ocean and basins fed by it (the Arctic ocean and Nordic seas were not included in their analysis). Their data refer mainly to the 1990s (with a few samplings later) – a time when the radiative imbalance was probably smaller than today – and it’s conceivable, although not proven, that subsequent storage rates have been greater. In response to Rob Starkey’s comment below, this deep ocean storage will represent changes reflecting radiative imbalances extending back millennia, but strongly dominated by recent decades, and so it is likely that warming attributable to GHGs has played a significant role.

As I pointed out above, the Purkey and Johnson data do not completely resolve the discrepancy between OHC data and either CERES data (showing a large radiative imbalance) or model estimates (showing a smaller imbalance of 0.9 W/m^2 but still more than accounted for). It may be that even the 0.9 modeled figure is too large, although perhaps not by much. Alternatively, there may still be heat not yet accounted for, possibly because of inaccuracies in OHC measurements.

Fred- I will read the paper, but I do not see how they would have been able to determine that higher temps would have been caused by human caused CO2 vs. other potential causes. I can suggest other potential causes as I am sure you can. Let me read the paper, before writing further since they may have covered my concerns.

The paper doesn’t address the cause of heating. My point was that deep ocean heat accumulation reflects surface heating extending far back in time, but with a correspondingly smaller contribution the further back one goes. It is likely that some of the storage reflects increases in solar irradiance from the early 20th century, but by now, that contribution will have been very much flattened in its time trend, whereas recent surface heating will not have come nearly as close to equilibrium and will mediate a steeper trend. The cause of warming in recent decades is a separate topic, and has been covered in this blog in a number of posts on radiative transfer, attribution, feedbacks, etc. It’s too large a topic to incorporate into these few comments.

I read the paper and they seem to have compiled pretty good temperature data, but you are correct: they did not indicate anything showing that the cause of the temperature change was increased CO2. It would seem that geothermal warming could easily be an alternate cause. We have been seeing the shift of the magnetic poles, and this clearly indicates a significant change in the earth’s core, which would seem to indicate the potential for geothermal changes.

Fred
The paper does indeed suggest some warming of the deep ocean, but it is a tiny amount and there are a lot of ifs and buts, including the geographical coverage of the study data and the extent of the error bars. The following quotes are pertinent to my continuing doubts:

“To gain more precise estimates of the deep ocean’s contribution to sea level and global energy budgets, and to understand better how the deep and abyssal warming signals spread from the Southern Ocean around the globe, higher spatial and temporal resolution sampling of the deep ocean is required. The basin space-scale and decadal time-scale resolution of the data used here could be aliased by smaller spatial scales and shorter temporal scales. Furthermore, the propagation of the signal can only be conjectured, not confirmed, with the present observing system.”

I think this is a bit too small a signal to draw firm conclusions from, although I accept it seems to suggest that warming in the deep ocean is physically possible. That said, I’m not sure that the horizontal transfer of heat is fully understood or indeed mapped, and therefore there is still a question about whether the deep is warming or whether energy is moving around laterally. We are talking about a change of around 0.002 degC, after all!!
Rob

Fred Moolten says:
I believe these sources of variability preclude the use of intradecadal or even shorter term (year to year) changes in OHC sampled only in the upper oceans as criteria for judging mean rates of ocean heat storage, particularly in light of recent evidence for greater deep ocean storage rates than previously anticipated.

Fred,
one of the reasons deep ocean accumulation of heat was disregarded previously is that it raises the question of how much more solar derived heat is stored down there than ‘extra’ heat from any enhanced greenhouse effect.

The AGW hypothesis is caught between the devil and the deep blue sea here. If heat stored in the deep ocean takes time to come back out, then the slight reduction in solar cycle amplitudes since the ’50’s no longer provides a rationale for excluding heightened solar activity during the mid-late C20th as a possible main cause for ‘global warming’. Such as it was/is.

http://www.argo.ucsd.edu/FrArgo_data_and.html
Looking at the Argo data, the variation by year, and by longitude and latitude (hemispheric or in smaller boxes), I wonder what the “meaning” of a global mean is. If the huge temporal and spatial variations aren’t first explained, then we cannot understand what the variation on a planetary basis over several years signifies.

Whether or not heat is being accumulated in the ocean below 700m is an important research issue. However, even if it is (and the evidence is that the magnitude of this heating is small), there is still a discrepancy since 2004 between the models and the Argo data at depths shallower than 700m. Moreover, heat stored below 700m would be relatively slow to reappear above the thermocline and in the atmosphere, and thus to affect the rest of the climate system.

The issue is also not the spatial patterning of the heating, which is a much more difficult modeling challenge, but the global average heating rates. Jim Hansen is quite clear on his estimate of the global annual averaged heating rates we should expect in the upper oceans. He wrote “Our simulated 1993-2003 heat storage rate was 0.6 W/m2 in the upper 750 m of the ocean.” The 2004 to 2014 rate should be at least as large IF the IPCC global climate models are accurate with respect to this climate metric.

You also wrote “the latest ARGO data reveal a resumption of upper ocean OHC increases”. What is your source for this information? I agree with you that if the heating rate returns to the values of the 1990s, this issue of “missing heat” will disappear. Alternatively, if this heating does not restart, or is muted compared with the past, the IPCC models should be rejected as useful predictors of global warming.

JCH and plazaeme – Thanks!
Regarding – “The Knox and Douglass paper ended its data in 2008, and since that time, OHC has risen globally. The rise isn’t major, as it had been from the mid-70s to the early 2000s, but it is a rise.”

What is the magnitude and statistical significance of this recent trend? Where is original source for this finding? If there is a recent positive trend, what is its magnitude in Watts per meter squared?

Hi Roger. I wrote what you quoted, “The Knox and Douglass paper ended its data in 2008, and since that time, OHC has risen globally. The rise isn’t major, as it had been from the mid-70s to the early 2000s, but it is a rise.” Refer to my January 8, 2011 at 4:44 am comment.

Sorry, Bob. I must have misunderstood what I found. Is it a combined data set based on XBT and ARGO? I assumed it was XBT because I didn’t see any reference to ARGO. The quote that I found when I followed your link to the graphs showing the recent changes said:

“As described in their explanation of ocean heat content (OHC) data changes, the changes result from “data additions and data quality control,” from a switch in base climatology, and from revised Expendable Bathythermograph (XBT) bias calculations.”

Rob: Figure S2 in Levitus et al 2009 shows that MBT use ended in 2001 and that XBT continued after then, so I would conclude they use a combination of XBT and ARGO to date. I’m surprised that I find no mention of the fixed platform sensors (also make measurements at depth) that are part of the TAO, TRITON, and PIRATA projects in the tropics. They’re included in the GODAR database, and Levitus is Project Director of GODAR.

Strictly speaking, 0.6 degrees per century is statistically meaningless too. In more enlightened times we might have considered it just a random walk. However, the important thing to remember is that ocean heat content was not supposed to pause at all; it was supposed, according to all expectations, to just continue to rise. No point coming back and saying we expected that to happen, because nobody did. And the huge uncertainty of the pre-2003 data is such that you really shouldn’t be using it for anything. Buckets, engine intakes, sharp unexplained step changes? Come on!

In Bob Tisdale’s post he noted, in regard to the NODC OHC data: “… there is a minor rise in the OHC trend since 2003, but it’s significantly less than the long-term trend.”

In our paper “Recent Energy Balance of Earth” we also considered the NOAA/NODC OHC data and calculated the slope of the “minor rise” since 2003 from the annual data. We said

“For 2003 to 2009, one calculates FOHC = 0.009±0.129 W/sqm. Although this slope is not negative, it is well within the error bars produced above [for the Argo data] and far below the Lyman et al. 1993-2008 value.”

Checking the NODC web site, there are no new annual data after 2009 (as of Jan. 11, 2011), although the values for each year listed have changed slightly. Based upon the new values, the new slope is FOHC = 0.043±0.140 W/sqm – slightly different from before.
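The FOHC figures quoted above are, as the paper describes, linear-trend fits to the annual OHC values, converted from a heating rate in J/yr into W/m^2 over the Earth’s surface. A minimal sketch of that conversion follows; the annual values here are invented for illustration (they are not the NODC numbers), the constants are round approximations, and the quoted ± uncertainties would come from the regression’s standard error, which is not reproduced here.

```python
import numpy as np

EARTH_AREA_M2 = 5.1e14      # Earth's total surface area, m^2 (approximate)
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

def ohc_trend_wm2(years, ohc_1e22_joules):
    """OLS slope of annual global OHC (in units of 10^22 J), expressed as a
    heating rate in W/m^2 averaged over the Earth's surface."""
    slope_1e22_per_yr = np.polyfit(years, ohc_1e22_joules, 1)[0]
    return slope_1e22_per_yr * 1e22 / (SECONDS_PER_YEAR * EARTH_AREA_M2)

# Hypothetical annual OHC values, for illustration only (units of 10^22 J):
years = np.arange(2003, 2010)
ohc = np.array([10.0, 10.02, 9.98, 10.01, 10.03, 10.0, 10.04])
print(round(ohc_trend_wm2(years, ohc), 3), "W/m^2")
```

A nearly flat series like the one above yields a slope of a few hundredths of a W/m^2 or less, which is the sense in which a value like 0.043 W/sqm sits far below a ~0.6 W/m^2 expectation.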

I wonder if Trenberth will deal with this issue the way Santer did: by ensuring all the datasets were “corrected”, then using huge, inappropriate error bars, using the wrong stats, talking about models being “consistent with” the data, and then cherry-picking the years.

I just received this message from John Fasullo, who is coauthor on some of the papers being discussed:

Judy,

Please feel free to post this correspondence. Also, I’m happy to discuss the science further if you’d like…

****

Judy,

My quick take on the work of Knox and Douglass 2011 (KD11) is that they seem to conflate two issues: one being our ability to close the energy budget on interannual timescales and one being the mean imbalance. Trenberth and Fasullo 2010 was not intended as a commentary on the mean imbalance (others have done that) and does not rely on any particular value of the mean imbalance to make its point. Rather, the study focuses on our inability to explain the stasis in warming in 2008/9 as it relates to interannual variability in the net TOA flux. KD seem to focus on the former issue and thus perhaps it is really Hansen et al 2005 and subsequent literature with whom they should take issue.

In regards to the points we were making in TF2010, I think the KD11 results underscore the fact that indeed there is a problem. This problem stems from a contradiction between ocean heat content (OHC) and CERES estimates of the variability in the net TOA flux, which have been corroborated by subsequent CERES releases (EBAF ed2.5). This disagreement does not rely on models or conjecture and is based on the stability of the CERES instruments, which is quite good, rather than their absolute calibration, which is relatively poor. This is an important point that KD11 seem to miss. Rather, it is heartening to see that KD11 corroborate the contradiction that we identify.

Nonetheless, this is not to say that KD11 is without major flaws. It is clear they have failed to address the dominant sources of uncertainty in the Argo estimates relating to sampling, calibration, and an evolving observing system. This has been a problem in the OHC community for some time and recent work (Palmer’s white paper) has highlighted nicely the fact that the various OHC estimates that exist often do not even fall within each other’s error bars. In this sense, the ARGO estimates do not yet represent a credible commentary on the Hansen et al 2005 value of the mean imbalance in my view. Moreover, KD11 fail to consider the holistic nature of our observing systems, involving not only OHC but mass, sea level, temperature, forcings and energy. These things must all jibe in the end and I think OHC estimates generally are likely to be our Achilles heel. Perhaps I am biased, as a member of the CERES science team, but it is my sense that the capability of our TOA observing system in regards to sampling, calibration, and stability is very likely to exceed that of the ocean systems at this point, at least in regards to interannual variability. Perhaps you remember ocean cooling circa-2006 and the major spurious features of decadal variability in earlier OHC estimates?

In any case, I welcome further discussion. This issue is without question an interesting problem and no doubt will be the subject of debate and progress for some time to come.

Re the “missing heat”. Trenberth’s hypothesis is that global warming has occurred, and since the global temperature has not risen as much in the last decade as in previous decades, there must be heat hidden somewhere. And so the global heat balance is computed to prove the hypothesis right. But one could pose another hypothesis: there is no global warming, perhaps only natural variability, and therefore there is no “hidden heat”. And one can make the same computations of heat balance to test this second hypothesis. But I wonder, given the errors and biases in the data sets, whether either hypothesis is distinguishable. That, to me, is the question. If you believe that global warming is “unequivocal”, then I guess there is no alternative hypothesis to test.

I cannot say much about the K&D paper, and perhaps the profiler data is a problem. But is it any more of a problem than the satellite data used for the computation of the energy balance? Of course, Trenberth could be right and K&D completely wrong.

Reply to Bob Tisdale – Thank you for the follow up.
In the figure from http://bobtisdale.blogspot.com/2010/10/update-and-changes-to-nodc-ocean-heat.html, there is no statistically significant warming (or cooling) since 2004 (or 2003) in either the original or adjusted 0-700m ocean heat content data. There is warming prior to this time period which is in close agreement with the GISS model results. The jump in the heat just before the heating became nearly flat appears overstated, as Josh Willis has indicated on my weblog.
From this figure, it appears a large amount of heating must occur over the next few years to bring the observations back in agreement with the models.
There is also an advantage with respect to heat measurements of the ocean. We do not need to assess long term linear trends. We just need an accurate measurement of the upper ocean heat content at any given time. Once the annual cycle of the global ocean heat content is subtracted out, this provides the global warming or cooling at any given time. I provide a definition of global warming and cooling using this information in my post http://pielkeclimatesci.wordpress.com/2011/01/11/the-terms-global-warming-and-climate-change-what-do-they-mean/.
I also appreciate the constructive engagement by John Fasullo on Judy’s weblog. His comment
“….the capability of our TOA observing system in regards to sampling, calibration, and stability is very likely to exceed that of the ocean systems at this point, at least in regards to interannual variability”
is a remarkable assumption. This requires us to accept that measuring small differences in large energy fluxes is a more robust diagnostic of the global radiative imbalance than a mass-time weighted measurement of heat content changes. As an analog, it is easier to measure the heating rate of a pot of water by determining its accumulation of Joules over time than by measuring the heat flux in from the burner and the heat flux lost from the water.
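Pielke’s pot-of-water point can be illustrated with a quick sketch (the flux uncertainties below are made-up round numbers for illustration, not CERES specifications): independent errors in two large fluxes add in quadrature, so their small difference carries a larger uncertainty than either measurement alone.

```python
import math

# Illustration of the pot-of-water analogy: the uncertainty of a small
# difference between two large, independently measured fluxes adds in
# quadrature, while an integrated heat-content measurement carries only
# its own single uncertainty.

def flux_difference_uncertainty(sigma_in, sigma_out):
    """1-sigma uncertainty of (F_in - F_out) for independent errors, in W/m^2."""
    return math.hypot(sigma_in, sigma_out)

# Assumed illustrative numbers: each ~340 W/m^2 flux known to ~2 W/m^2
sigma_diff = flux_difference_uncertainty(2.0, 2.0)
print(round(sigma_diff, 2))
```

With an assumed ~2 W/m^2 error on each flux, their difference is uncertain to ~2.8 W/m^2, which would swamp a ~1 W/m^2 imbalance; the integrated heat-content approach avoids the differencing entirely.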
I do urge John to do more than take a “[a] quick take on the work of Knox and Douglass 2011 (KD11)”.

Roger – I address this set of questions to you because John Fasullo is not actively participating at the moment (although he may), but it involves points you both make.

1. It appears that the accuracy of CERES data in quantifying an imbalance at any given time is poor, but John and others have asserted that CERES data are a more reliable guide to changes over time (his “stability” argument). Do you agree? If the data now show a growing imbalance since before 2003, isn’t this more consistent with a significantly positive imbalance now rather than a negative imbalance earlier – at a time when OHC appeared to be rising fairly steeply? (The reverse question is one I would put to John – given uncertainties in OHC data that were greater in the past, how confident are we about the earlier OHC rises?)

2. To what extent is Argo data variability an instrumentation problem, a sampling problem, and/or a problem of intrinsic climate variability? How much does ENSO contribute to the variability, and if it contributes significantly, is this because ENSO events alter OHC overall or because they change the distribution between OHC above and below 700 meters (even though these events mainly operate at shallower levels)? Even above 700 meters, how much is sampling variation affected by rapidly changing conditions, such that sampling that would be highly representative under steady state conditions becomes less so when individual ocean regions are adjusting to a departure from previous states?

My sense from all the uncertainties is that most likely, our current radiative imbalance is more than trivially positive, but we don’t know by how much, and so we don’t know how much extra heat, if any, to search for. I also have the sense that both data uncertainties and intrinsic variability make it hazardous to extrapolate from interannual variations to longer term trends. An imbalance estimated from models of about 0.9 W/m^2 is a mean value, but is it not true that at any given moment, that mean value may represent the average of imbalances that vary not only in magnitude but also in sign, at some instances representing a net energy loss to space, according to CERES and other TOA flux measurements?

1. The CERES data provide important information on the climate system. However, to conclude that changes in flux measurements over time are more accurate than a time-space integrated measurement (the OHC) is counter to observational science. Integrated measures are more accurate in terms of physical quantities such as heat and changes of heat over time.

There is another way to ask this question also. What is the uncertainty in Joules of the annual global average heat content, given the issues you bring up? Taking the upper magnitude of the heating rate since 2003 (or 2004), what is the diagnosed radiative imbalance?
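As a rough illustration of the diagnosis Pielke asks for (the heat gain and averaging period below are assumed for the example, not observed values), converting an ocean heat gain in Joules into a global-mean radiative imbalance in W/m^2 is straightforward:

```python
# A minimal sketch of the Joules-to-W/m^2 conversion: an assumed upper-ocean
# heat gain, spread over the Earth's surface area and the averaging period.
# The 1e22 J / 5 yr figure below is illustrative, not a measured result.

EARTH_AREA_M2 = 5.1e14      # total surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

def diagnosed_imbalance(joules, years):
    """Global-mean radiative imbalance (W/m^2) implied by a heat gain."""
    return joules / (EARTH_AREA_M2 * years * SECONDS_PER_YEAR)

print(round(diagnosed_imbalance(1e22, 5), 3))   # ~0.12 W/m^2 for this example
```

The useful feature of this metric is visible in the arithmetic: the diagnosis needs only the heat content at the endpoints, not a trend fit through noisy intermediate years.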

2. ENSO (and other regional ocean-atmospheric features) are part of the climate system and are, of course, involved in global warming and cooling. If these climate features alter the heating/cooling on time scales that are short enough to affect the spatial representativeness of the Argo (and other ancillary) data, it is an issue. However, over the time period since 2003 (2004) this would seem a long enough time to integrate in order to obtain a robust diagnosis of heat changes. Of course, the ocean specialists can tell us otherwise, but up to the present they have not refuted papers such as

What I recommend in moving forward is to adopt upper ocean heat content changes as the primary metric to diagnose and monitor long term global warming. If we can agree on a time period (10 year averages perhaps), it seems your concerns would be resolved.

Roger -Thanks. I concur with your view that OHC changes are an optimal metric, and that probably upper ocean OHC changes are a good representation if observed long enough – provided of course that those measurements can be relied on.

It seems to me that the problem of resolving disparities between different observational data sources is not limited to comparisons between TOA and ocean data, but also resides within ocean measurements themselves. I wonder what your view is regarding the following:

The U. Colo sea level data show a consistent rise during 1993-2010, at about 3.1 mm/yr, with relatively small variations in slope: Sea Levels 1993-2010

The same source lists steric sea level rise between 1993 and 2003 at about 1.7 mm/yr –Steric 1993-2003

How should we interpret this? Was the 1.7 mm/yr decline in the steric sea level slope during the latter interval almost exactly compensated by a 1.7 mm/yr increase in eustatic rise due to an acceleration in ice melting, and if so, why would that happen?

To me, that seems unlikely, and my inclination is to “split the difference” and suggest that perhaps sea level rise has not been as constant as portrayed, and OHC changes not as variable. Again, though, I think this reinforces the need to refrain from overinterpreting short term changes.

On reviewing their paper, I notice that the Leuliette and Miller figure 1 shows total sea level rise for the 2004-2008 interval to be less than 3.1 mm/yr – probably closer to 2.7 mm/yr. This would still require a significant increase in the eustatic rise to offset the decline in steric sea level.

(note – in my above comment, the link to their paper is the one entitled “Steric 2004-2008” – clicking on their name links to the U. Colo data instead)

Thanks, HR. Instead of asking “where’s the missing heat?”, maybe we should be asking “where’s the missing water?”, given the difficulty of reconciling total sea level rise with the sum of its components.

I agree with you, of course, that the global average sea level trend is another very important climate metric. It is harder to explain its different contributions, however, as contrasted with upper ocean heat content. In fact, as an added complication for sea level attribution, the effect of changes in the global average level of the bottom of the oceans on sea level needs further exploration.

When you look at the much longer term data (500 M year level… Hallam curve) isn’t it true that the earth is currently near to the long term sea level lows? If you also agree that this is true, why isn’t it logical to expect sea levels to rise independent of any human actions based upon long term trends?

No, this is not my field of expertise, so I apologize if it seems like a basic question; I am just an engineer.

Rob – Neither anthropogenic nor non-anthropogenic trends can occur without a physical mechanism. Long term climate trends in the past have reflected changes in solar irradiance, volcanism, the location of the continents, changes in the geometry of the Earth’s orbit relative to the sun, and in some cases, changes in atmospheric methane or CO2, possibly triggered by some of the afore-mentioned factors or by changing biological patterns on the Earth’s surface.

The same principle applies to sea level. We live in relatively cold times compared with some much earlier intervals, and so sea levels will tend to remain low in the absence of a mechanism for their increase, reflecting the fact that cold water occupies less volume than warm water, and the fact that land ice holds more water than in warmer times. Still, we are in an interglacial period rather than the depth of an Ice Age, and so sea levels today are estimated to be about 125 meters higher than during the Last Glacial Maximum of about 20,000 years ago.

Some rises in sea level have occurred since the 1700’s for reasons independent of human activity (as far as we know). However, the increases accelerated in the past century and even more in recent decades as the oceans have warmed and land ice has melted and entered the oceans – the rise has averaged about 3 mm/yr in recent decades.

One additional factor of importance relates specifically to your question as to why sea level was probably higher in some earlier times when climates were colder. The answer appears to be a progressive sinking of the sea floor as a result of changes in the Earth’s crust – Ocean Basin Dynamics

We appreciate John Fasullo’s comment and we welcome the chance to clarify in our minds several of the issues raised by the substantial recent literature on energy balance, to which John and Dr. Trenberth have contributed much.

In TF it is stated that FTOA, the net inward energy flux at the top of the atmosphere, exceeds FOHC, the rate of change of ocean heat content per unit area, by 1.2 W/m2, raising the question “where exactly does the energy go?” In a steady state, and because of conservation of energy, FOHC and FTOA should be equal to one another except for a small geothermal component Fgeo. For Earth in energy balance, FOHC should be zero and FTOA nearly zero (actually –Fgeo).
We summarize some points in TF.
(a) The rate at which energy is stored in Earth’s climate system may be well approximated by FOHC (Pielke, Physics Today 2008); TF show a plot of this quantity, with values ranging from 0.5 W/m2 (year 2000) to 0.3 W/m2 (late 2009). No error bars are given.
(b) For the rate of energy input, TF claim a “post-2000” radiative imbalance at the top of the atmosphere FTOA = 0.9 ± 0.5 W/m2. They show a “heavily smoothed” FTOA curve with values ranging from 0.5 W/m2 (year 2000) to 1.5 W/m2 (late 2009), passing through the value 0.9 in late 2004.
(c) Accordingly, FTOA – FOHC (the claimed rate of production of missing energy) ranges from 0.0 to 1.0 W/m2 over the 2000-10 decade, rising sharply at 2005, according to the TF plot.

The provenance of the TF value FTOA = 0.9 ± 0.5 W/m2 is now explored. The top-of-atmosphere value that they consider “probably [the] most accurately determined” is that of climate models given in H, which is used to scale the measured values to an “acceptable but imposed” 0.9 W/m2. This adjustment is described on page 313 of TFK. While the modeled value according to H has an error bar of ±0.15 W/m2, TF quote the error as ±0.50 W/m2, as derived from their extensive analyses (FT, TFK) of CERES data. These analyses involve obtaining the difference between numbers whose magnitudes are hundreds of W/m2 and whose uncertainties are of the order of several W/m2. One must know the uncertainties in the uncertainties to evaluate the accuracy of the FTOA. We note that in the development of the “imposed” 0.9 estimate in TFK many empirical adjustments were made. For example, one adjustment is made in the longwave component having an “upper error bound” of 1.5 W/m2, in reducing an original quoted 6.4 W/m2 imbalance from CERES data to the imposed value 0.9. The uncertainties in FTOA CERES values alone are estimated to be 2σ = 4.2 W/m2 [N. Loeb et al., J. Climate 22 (2009) 748].
Considering that these calculations on which the TF analysis is based involve an explicit matching to the estimate of H, one cannot possibly regard FTOA = 0.9 W/m2 as a purely empirical result without assigning error bars so large that the “missing energy” is lost in them. This is the basis of our opinion that missing energy is most likely an artifact due to the acceptance, or imposition, of the modeled value FTOA = 0.9 W/m2.
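The scale of the problem K&D describe can be checked in a couple of lines, using the 2σ = 4.2 W/m^2 CERES net-flux uncertainty they cite from Loeb et al. (2009):

```python
# Quick check of the K&D point: with a 2-sigma CERES net-flux uncertainty of
# 4.2 W/m^2 (Loeb et al. 2009, as cited above), an imposed 0.9 W/m^2
# imbalance sits well inside the instrument's absolute error bar.
two_sigma = 4.2   # W/m^2, CERES absolute calibration uncertainty (2-sigma)
f_toa = 0.9       # W/m^2, the "imposed" TOA imbalance

print(f_toa < two_sigma)                   # the imbalance is smaller than 2-sigma
print(round(f_toa / (two_sigma / 2), 2))   # ~0.43 sigma from zero
```

This is the absolute-calibration side of the argument only; Fasullo’s reply above rests instead on the instrument’s stability over time, which is a different (and smaller) error budget.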

It is pointless to talk about “missing heat” and go around constructing energy budgets at this stage of our knowledge. Two reasons:

As noted above by John Fasullo, there are too many uncertainties in the existing data sources.

Heat is not the only form of energy involved in the climate system. Given there is no systematic measurement at all of the other (very significant) forms of energy involved (eg the mechanical energy involved in ocean circulation, chemical energy involved in ocean chemistry changes, latent energy in phase changes of water) we cannot tell if heat is being converted to other forms of energy or vice versa.

I’m still of the view that the solution lies in getting some boundary conditions right (which, if chosen well, eliminate a lot of these problems). I’d still go for the TOA and treat the globe plus atmosphere as a black box, although I see others would go for the ocean surface and treat the atmosphere as the black box.

This specific discussion represents, to me, the crux of the warmist/skeptic problem: uncertainty in data. Consider:

1. The solar insolation, as given, is 341.5 W/m2, averaged over the entire globe. Actually, during the course of the year, it varies by 6% as a result of the Earth’s orbital eccentricity, so it is really 341.5 +/- 10.25 W/m2 at any given time. The solar insolation also varies approximately 0.07% over the “longer” term, or about +/- 1.12 W/m2.

2. The albedo of the Earth is given as 0.296, although that changes, and how much that changes over the year, the decade or longer from changes in simple cloud type or area/volume, is not well known, but assumed to be constant over the multi-year level. Note the word, “assumed”. The seasonal variability in albedo is estimated at 15-20%, or about +/- 8.84 W/m2 . The multi-year 1984 – 2004 variability experienced has been estimated at +/- 5.5 W/m2. Also, changes in the absorption of the atmosphere due to soot, aerosols and dust is significant enough to show volcanic eruptions and give the warmists the excuse for 1940-1965 cooling from pollution, since 1965 so wisely corrected by the green movement to show us the “background” warming event going on.

Despite these complexities, the IPCC breakdown of the insolation is (as near as I can determine) as follows:

a) reflection from clouds, land and snow/ice: 101.0 W/m2,

b) refraction through the atmosphere: 1.75 W/m2,

c) absorption by clear air: 68.75 W/m2,

d) absorption by land and water: 170.0 W/m2, of which

i) 70.9% comes from oceans (including sea ice) or 120.5 W/m2, and
ii) 29.1%, or 49.5 W/m2, from land (including snow, land ice or glaciers, and freshwater bodies)
(on an area basis – the albedos of the different types of land/sea surfaces are significantly different but not pertinent for this post.)

So, we are living in a world in which, through the year, the sun’s power changes by 20.5 +/- 1.1 W/m2. Since 70% of this, on average, is absorbed by the Earth, this means that the heating power changes over the year by 14.4 W/m2. Earth’s reflectivity changes by 17.7 W/m2 (with a claimed 6.8 W/m2 gain for 1984-2000 and 5.5 W/m2 loss for the 1984-2000 period, which we won’t discuss here). Refraction at 3.5 W/m2 may be assumed to be stable within (?) a few percent, so we can ignore that. But absorption (clear air), at 68.8 W/m2, needs only a 1.5% change due to aerosols or water vapour to give you 1.02 W/m2, which is significant because the humidity of the world’s atmosphere does not remain constant – according to the IPCC, no less. (Let’s assume a 0.75% variability, or 0.51 W/m2, for absorption changes, just for argument’s sake.)

All the heating of the Earth comes from the approximately 238.8 W/m2 that the atmosphere and sea/land absorb from the Sun. What is the variation on that 238.8? On any given day, +/- 16.5 W/m2 (half the total annual variation). Without measurement uncertainty! What is the uncertainty of 238.8 W/m2 from simple measurements? Remember, uncertainties in measurements ADD. Another 1.0% in measurement uncertainty is 2.4 W/m2. So our warming power is 238.8 +/- 16.5 (+/- 2.4) W/m2, or +/- 7.9% on any day over the course of 15+ years. But Trenberth and others claim that they can work out heating values for the earth over this multi-year time frame to an accuracy AND PRECISION of less than 1.0 W/m2, or 0.42%. He is saying that, despite the huge known variations in the variables, the end result stays the same to 0.42%, or less than 1.0 W/m2.

On basic principles it is insane to think that we know the energy flow of the Earth over a multi-year basis to a 0.85 +/- 0.15 W/m2 level of accuracy and precision. In the non-academic world you would be dismissed as dangerous for believing in your own work to an unreasonable extent. They would slide pizzas under the door to encourage you to come up with interesting things, but they would take away your company credit card.

And this is where the warmist-skeptic debate is: arguing over the alleged presence of gnats on the head of a pin. Actually, computer MODELLED (imagined) boots on the feet of gnats on the head of a pin. The temperature data at a planetary scale is in the same fundamental position.

The oceans are dynamic entities; they don’t just suck up heat like a theoretical mass in a formula might suggest, they share it with all they touch. The “missing” heat energy is contained, embedded, in the clouds that once were water. “Recent analysis of Argo data in relation to the historical record show an increase in salinity in evaporative mid-latitude regions and a freshening at high latitudes and tropical convergence zones. This pattern may imply an increase in the global hydrological cycle by several percent” – ARGO. These bigger, better clouds increase the earth’s overall albedo and are the agent of cooling that is probably the earth’s most sensitive thermostat.

Further, the oceans are melting ice in the Arctic, Greenland and Antarctica, and, if I remember my cocktail mixology correctly, the melting ice should be cooling the oceans [ice melting to liquid also embeds energy within each molecule’s structure]. Then there’s the salinity-density issue: the salinity is dropping because of all the fresh water melting into the seas, and the ocean is expanding much faster because of meltwater than because of thermal expansion.

It’s a wonder that the ocean is heating at all. We’re in a time of low solar output, and only a couple of decades ago climate science thought we were headed slowly toward the end of the interglacial period [cooling]. Why aren’t we now? Why are the glaciers not advancing instead of retreating [as they in fact are, all over the world]? It must be because there’s more heat energy in the system than there is naturally forced cooling. Now, that energy doesn’t get here on the wings of angels or aliens; it isn’t coming from solar output, as that’s been falling for the last 60 years; but it is there, and we can all see the effects. Ice has no political agenda, the flora and fauna aren’t moving because they’re bored, and the clouds (I’ve looked at clouds from both sides now, as Joni Mitchell says) care none for politics.

The earth has heated up [far beyond what it is now] many times in the half-million-year ice core data. Every time, Gaia’s feedback loops contain its climb and reverse it; they will this time too, sooner or later. It might be a bit more difficult this time because we rogue primates have become an atmospheric carbon source. During these huge climate swings, untold biological-ecological change happens too. Exactly what change is uncertain, but change it will be. That’s where the precautionary principle enters the calculus.

Where’s the missing heat? Like the banker explained in Capra’s ‘It’s A Wonderful Life’, the money [heat in our case] isn’t ‘in’ the bank it’s embedded in the world around us.

How does deep ocean heat content increase without it being detected by SST and the upper 300m? The upper 300m OHC dropped rapidly during the 2010 El Nino, while at the same time SST had risen and SAT followed. How can the heat drop down to the deeper ocean when the rising SAT (amplified in MSU) during El Nino is clearly a result of the oceans releasing heat?

Over the long term average, I think you are correct in concluding that changes in deep ocean heat will be in the same direction as changes in upper ocean mixed layer heat, but during transitional intervals, the two may diverge. Heat flows from warmer to cooler regions. The upper ocean is warmer than both the air and the deep ocean, and so heat will leave it in both directions, and precise quantitation is necessary to determine whether the observed changes demonstrate the appropriate balancing with heat inflow from the sun and atmosphere. In my view, current measurement uncertainties make it difficult to do this for intervals of only a few years.

Some of the issues mentioned in recent posts (here) are discussed in our earlier paper, Douglass and Knox, “Ocean heat content and Earth’s radiation balance,” Physics Letters A 373, 3296 (2009). In this paper we look at FTOA as implied by FOHC over a 50-year period and identify some proposed climate shifts.

bob – would you agree that the “proposed climate shifts” could/should be described as “potential changes”, and that the probability of these changes is not necessarily higher than that of a number of other alternatives?

Rob, I think not. In my post “proposed” simply referred to the fact that we used the name “climate shifts” in connection with changes in the sign of d(OHC)/dt… because some of these changes occurred at points where other phenomena had been used to infer “climate shifts.”

An update of interest is found in a new GRL paper by Kopp and Lean indicating that solar irradiance measurements should be corrected downward by about 4.6 W/m^2 due to previous instrument errors. After dividing by 4 and subtracting albedo, this reduces estimated absorbed solar radiation by about 0.8 W/m^2, and would tend to bring CERES data more in line with other data, including model estimates. The paper is at Kopp and Lean

No. The CERES observation-derived imbalances were significantly greater than 0.8 W/m^2, and even with the reduction, they will still exceed the modeled imbalance of 0.9 W/m^2. You could ask the question, “doesn’t the reduction also reduce the modeled imbalance?” The answer is “probably not much if at all”. The model assumes a given solar irradiance to be the one based on the previous measurements and then computes outgoing flux (OLR) on that basis. If its input is changed to involve a lower irradiance, it will yield a lower OLR output, and so the imbalance won’t be significantly affected.

Curious that Trenberth had a total of 0.8 W/m^2 for his imbalance, averaged over the whole surface. If one removes the albedo fraction, ~0.30, from the most likely value of the averaged difference between SORCE and the standard value, one gets 0.8 W/m^2.
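The arithmetic in this comment can be written out explicitly (the 0.30 albedo is the value assumed in the comment, and the 4.6 W/m^2 is the Kopp & Lean correction cited above):

```python
# The ~0.8 W/m^2 figure: the Kopp & Lean TSI downward revision, divided by 4
# (sphere-to-disc geometry) and reduced by the reflected (albedo) fraction.
TSI_CORRECTION = 4.6   # W/m^2, downward revision of total solar irradiance
ALBEDO = 0.30          # planetary albedo assumed in the comment above

absorbed_change = (TSI_CORRECTION / 4.0) * (1.0 - ALBEDO)
print(round(absorbed_change, 3))   # ~0.805 W/m^2
```

The division by 4 comes from the ratio of the Earth’s cross-sectional disc (which intercepts sunlight) to its full spherical surface (over which the flux is averaged).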

Now the question arises as to the nature of the subtracted value in your model. OLR means outgoing longwave radiation, which is based upon the longwave IR being emitted from the surface and the atmosphere. This depends upon temperature, not upon the incoming solar power or the value of that number.

While I’m not sure just what CERES has for an imbalance or how it is derived, I’m curious why Trenberth chose to claim only 0.8 W/m^2 for the imbalance if CERES was able to determine more than that.
Please elaborate

Re: the paper-I am reading it and have done further reading but could not get a clear picture on a point made:

“The construction of a composite irradiance record suitable for climate change applications requires that measurements made by individual radiometers be adjusted to align their different absolute scales. This is crucial because inaccuracies in individual measurements can be greater than 0.1%, exceeding estimated long-term solar variations.”

any idea why these adjustments were necessary when the instruments would seem to have better accuracy than that? The statement that “individual measurement inaccuracies can be greater than 0.1%”, without any explanation as to why, is kind of scary. Very interesting information, however… still reading

Rob – I’m out of my depth evaluating the technology. From the text, it appears that the inaccuracies were those of the previous radiometers. It is still not clear how much more accurate the new technology will prove to be.

It is a bit incidental to the debate here, but it does highlight some issues, particularly where the CERES (Terra) flux gradients are localised. It seems that during the period of the rapid decrease in SWR (reflected shortwave radiation), the oceans and land diverged markedly, with an increase in SWR over the tropical oceans (a tendency to cool) and a much bigger per-area decrease in SWR over the land and extra-tropics (a tendency to warm), during the period 2000-2005. This heating pattern was largely realised, giving the marked divergence in land and ocean temperature trends. Obviously we have no baseline to this data, but I view the fluctuations as short term, superimposed on a long term warming trend but of much greater amplitude. That is, the period started with anomalous short term rapid warming that was perhaps the reversal of a short period of rapid cooling, which would be in agreement with the temperature records.

Now the divergence might inform us that over the ocean the net effect was the reverse of this, and that the oceans failed to warm much in terms of retained heat. Not that the full-depth OHC did not increase, but that the heating trend was lessening rather than increasing (which is compatible with the OHC (0-700m), under an assumption of a continued small but steady downward flux across the 700m boundary).

From what I have seen of more recent CERES data, it seems that the rapid change in recorded SWR occurred largely during the 2000-2003 period. During that period the oceans went from warming (OHC and SST) to levelling off, whereas the land temperatures warmed rapidly (2000-2002), followed by a lessening warming trend through to 2006. Throughout the period the land-ocean temperature anomalies diverged more or less steadily.

Given that temperatures (land and ocean) dropped significantly 1998-2000, followed by OHC 1999-2001, we have evidence for a change from rapid cooling to rapid warming, of which the CERES (Terra) was only available to record the rapid warming.

So the question is: where should we look for the missing heat?

It is appreciated that on climatic timescales the heat is overwhelmingly taken up in the oceans, but we are looking at a short period fluctuation, so we can tentatively look elsewhere: not in the oceans, over which SWR was increasing (a cooling tendency), but over the land, where SWR was decreasing and the rate of temperature increase was greatest and most sustained.

Now it is a stretch to put a lot of heat into the land over an extended period, but that is not necessarily the case if its actual surface temperature (not the 2m level) changes rapidly over a short period. Unfortunately we do not have a global land surface/sub-surface temperature record, but we know that the land warmed, and we know from CERES that this corresponded with lessening SWR (more sunlight), which could give rise to the surface warming more, perhaps much more, than the 2m level. Also we would need to examine changes in vegetative land cover, as that can have a substantive effect on “surface” temperatures by many degrees C. This brings into play possibilities for rapid transport of heat downwards via percolation of precipitated water. There are many more small effects, from tundra melt to the way surface roughness increases the short term thermal uptake in a way that cannot be sustained into climate-scale periods.

I think that we might be unwise to dismiss the possibilities for a short term uptake, by land sub-surface and hydrological processes, of a substantive proportion of any genuinely missing heat. It might also seem wise to look for heat in places that have surfaces that warmed (post 2003) rather than in places that haven’t.

Trying to determine OHC from SLR without doing a proper attribution of all contributing factors is not convincing, yet seems to be generally accepted. Where are the other components, such as deforestation, dam construction, and ground water mining? The hand waving done in AR4, that the balance of these contributing factors is probably negative, is unacceptable without studies that support it. Since AR4, a study on ground water mining has been completed.

Another issue I have with some of the comments here regards a reported recent acceleration in SLR. The evidence for this is poorly supported and to the best of my knowledge is only supported by comparing altimetry measurements to tide gauge studies. These do not measure the same thing. The studies that I have seen that use tide gauges, the only long term record we have, show no acceleration such as this one

Dad was a hurricane forecaster in Grady Norton’s day. He instilled in me a respect for their order of magnitude.

Given that a hurricane removes something like 5*10^19 joules per day from the ocean (http://www.aoml.noaa.gov/hrd/tcfaq/D7.html), I too am curious as to hurricanes’ purpose in Mother Nature’s scheme of things.
200 hurricane days in three years is not much of a stretch and it’s all the energy you’re missing.
Ten a year at a week apiece… Have I misread that NOAA page?
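The NOAA figure cited above can be checked against the scale of the claimed imbalance (the 0.9 W/m^2 benchmark and the 3-year window are assumptions for the comparison, not measured values):

```python
# Comparing 200 hurricane days (at the NOAA FAQ's ~5e19 J/day figure cited
# above) with the heat accumulated by an assumed 0.9 W/m^2 global imbalance
# sustained over 3 years.
J_PER_HURRICANE_DAY = 5e19
EARTH_AREA_M2 = 5.1e14
SECONDS_PER_YEAR = 3.156e7

hurricane_total = J_PER_HURRICANE_DAY * 200                    # ~1e22 J
imbalance_total = 0.9 * EARTH_AREA_M2 * 3 * SECONDS_PER_YEAR   # ~4.3e22 J
print(f"{hurricane_total:.1e} {imbalance_total:.1e}")
```

So 200 hurricane days is indeed ~1e22 J, though an imbalance of 0.9 W/m^2 over 3 years amounts to roughly 4e22 J; and note that most hurricane heat extraction redistributes energy within the climate system rather than removing it to space.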

Looks to me like hurricanes are Mother Nature’s safety valve that blows excess heat right around the insulating layer of GHG, exhausting to the troposphere. From there I assume it’s an easy hop to space.

I’m an old boiler guy, so the safety valve analogy is natural to me. Others might prefer the analogy of fire sprinklers.

With regard to the “missing heat”, it is to Trenberth’s credit that he was prepared to state the obvious.

The information contained in the work mentioned below has been available for 15 years, but nobody in the scientific community was willing to read it. Among the other physics contained in the work, a much-abbreviated abstract holds that the daily rotation and orbit of the Earth, relative to its distance from the Moon and Sun (and to the positions of the great planets), result in gravitationally induced thermal increases and decreases.
Believing that the work has value to humanity, I take the liberty of including below the message sent to Professor Judith Curry.

Dear Professor: the following email was sent to your address but could not be delivered. Except for the attachment, this is most of the text of the message.

With regard to things scientific, I have had a lifelong (65-year) love affair with physics and have produced a 168-page paper titled Matter and Associated Mysteries, available (and unnoticed) in paperback form at Lulu.com. The work attempts to define the fundamental dynamic nature of gravity and gravitation in general. Using this concept of gravity to explain the two Pioneer anomalies, the work gives an instant-by-instant conceptual description of the gravitational effect that increasing distance has on the particles forming the mass of the spacecraft: there would be an increase in mass at the expense of velocity, and there would be cooling in excess of that resulting from the diminishing radiation from the Sun.
The excess volcanic activity on Jupiter’s moon Io, now believed to result only from “gravitational squeezing”, results from a reversal of the phenomenon acting on the Pioneer spacecraft. If the spacecraft were approaching a massive body such as Jupiter or the Sun, they would be subject to heating from decreasing mass, like the internal heating of comets during their unhindered acceleration toward the Sun. The gravitational heating and cooling of the Earth’s matter strictly obeys the same phenomenon that I believe affected the Pioneer spacecraft, and its effect on the Earth’s atmosphere should therefore be investigated.
Regarding the slingshot effect in excess of that expected from Newtonian gravitation: that excess arises because the gravitational balance of planets is in proportion to their velocity.
Page 155 also describes an inexpensive and simple experiment that can conclusively prove or disprove the gravitational effect, and therefore the value of the work to humanity.

Yes, Kevin Trenberth. About a year ago we exchanged views which turned out to be incompatible; when he told me to read the IPCC report on the Arctic, I quit. But here he is again with his energy question, throwing out a number of possibilities to explain where the energy disappeared in 2008. If he had bothered to read my book he would not be confused about it now, and since then I have brought out a second edition from which he could learn more than the first contained.

One of his energy-quest papers, which appeared in Science (16 April 2010), showed a global net energy budget where everything was under control until 2004, when energy started to disappear. According to the graph in his paper, by 2009 eighty percent of that global energy was missing. Something happened in 2004 that set the energy loss in motion, but he has no idea what it could be. I don’t know what natural phenomenon could cause such a huge energy loss. But reading the paper again I found this statement in it: “Since 2004, ~3000 Argo floats have provided regular temperature soundings of the upper 2000 m of the ocean, giving new confidence in the ocean heat content assignment -…”

So what do you know: new equipment comes on line, and energy does a disappearing act. Had I been the reviewer I would certainly have sent him back to check and calibrate the equipment until the discrepancy was resolved, but I guess papers by big shots just get waved through.