New CERES Data and Ocean Heat Content

We have gotten three more years of data for the CERES dataset, which is good; more data is always welcome. However, one of the sad things about the CERES dataset is that we can’t use it for net top-of-atmosphere (TOA) radiation trends. Net TOA radiation is what comes in (downwelling solar) minus what goes out (upwelling longwave and reflected solar). The difference between the two is the energy that is being stored, primarily in the ocean.

The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).

So, the CERES folks have gone for second best. They have adjusted the CERES imbalance to match the Levitus ocean heat content (OHC) data. And not just any interpretation of the Levitus data. They used the 0.85 W/m2 imbalance from James Hansen’s 2004 “smoking gun” paper. Now to me, starting by assuming that there is a major imbalance in the system seems odd. In any case, since the adjustment is arbitrary, the CERES trends in net TOA radiation are arbitrary as well. Having said that, here’s a comparison of what the Levitus ocean heat content (OHC) data says, with what the CERES data says.

Figure 1. CERES and Levitus ocean heat content data compared. The CERES data was arbitrarily set to an average imbalance of +0.85 W/m2 (warming).

I must admit, I don’t understand the logic behind setting the imbalance to +0.85 W/m2. If you were going to set it to something, why not set it to the actual trend over the period of the CERES data? My guess is that it was decided early on, say in 2006, when the trend was much closer to +0.85 W/m2 and people still believed James Hansen. In any case, the way they’ve set it doesn’t tell us much. Let’s see what else we can learn from the two datasets. First let’s take a look at the full Levitus dataset, and its associated error estimates.

I gotta say, I’m simply not buying those errors. Why would the error in 2005 be the same as the error in 1955?

In any case, we’re interested in the period during which the CERES and the Levitus datasets overlap, which is March 2000 to February 2013. To compare the two, we can adjust the CERES trend to match the Levitus data. Figure 3 shows that relationship. I’ve included the error data (light black lines).

Figure 3. Ocean heat content, with the trend of the CERES data re-adjusted to match the Levitus data. Light black lines show standard error of the Levitus data.

Now, I’m sure that you all can see the problems. In the CERES data, the change from quarter to quarter is always quite small. And this makes sense. The ocean has huge thermal mass. But according to the Levitus data, in a single quarter the ocean takes huge jumps. These lead to excursions that are much larger than the error bars.

To visualize this, we can plot up the quarter-to-quarter changes in ocean heat content. Figure 4 shows that relationship.

Figure 4. Quarterly changes in the ocean heat content. Note that this shows the quarterly change in OHC, so the units are different from those in Figures 1 and 3. Standard errors of the quarterly change are larger than those of the quarterly data, because two errors are involved in the distance between the two points.

As Figure 4 highlights, the disagreements between the Levitus and the CERES data are profound. For some 60% of the Levitus data, the error bars do not intersect the CERES data …

Conclusions? Well, my first conclusion is that I put much more stock in the CERES data than I do in the Levitus data. This is because of the very tight grouping of the CERES data in Figures 3 and 4. Here are the boxplots of the data shown in Figure 4:

Figure 5. Boxplots of the quarter-to-quarter differences of the Levitus and CERES datasets.

Remember that the tight grouping of the CERES data is the net of three different datasets—solar, reflected solar, and longwave. If you can get that tight a group from three datasets, it indicates that even though their accuracy is not all that hot, their precision is quite good. It is for that reason that I put much more weight on the CERES data than the Levitus data.

And as a result, all that this does is reinforce my previous statements about the error bars of the Levitus data. I’ve held that they are way too small … and both Figures 3 & 4 show that the error bars should be at least twice as large.

Next, the CERES data doesn’t vary a lot from a straight line. In particular, it doesn’t show the change in trend between the early and the later part of the Levitus record.

Finally, the CERES data provides a very precise measurement of the quarterly changes in OHC. Not only is their overall variation quite small, but they are highly autocorrelated. In no case are they greater than 0.5e+22 joules.

So for me, until the Levitus quarter-to-quarter changes get down to well under 1e+22 joules, I’m not going to put a whole lot of weight on the Levitus data.
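As a back-of-envelope check on these magnitudes (my own arithmetic, not from the CERES documentation), here is how a constant TOA imbalance in W/m2 converts to a quarterly change in system energy content:

```python
# Back-of-envelope conversion (my own arithmetic): a constant TOA
# imbalance in W/m2, integrated over the Earth's surface for one
# quarter, expressed in joules.

EARTH_SURFACE_M2 = 5.11e14               # total surface area of the Earth
SECONDS_PER_QUARTER = 365.25 / 4 * 86400

def quarterly_joules(imbalance_w_m2):
    """Energy accumulated in one quarter by a constant TOA imbalance."""
    return imbalance_w_m2 * EARTH_SURFACE_M2 * SECONDS_PER_QUARTER

print(quarterly_joules(0.85))   # ~0.34e22 J, within the CERES quarterly range
print(quarterly_joules(5.0))    # ~2.0e22 J, far outside it
```

On these numbers, the assumed +0.85 W/m2 imbalance corresponds to about 0.34e22 joules per quarter, comfortably inside the tight CERES grouping, while the raw ~5 W/m2 discrepancy would be about 2e22 joules per quarter.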

101 thoughts on “New CERES Data and Ocean Heat Content”

A simple point, made clearly. Nice one Willis.
The big lie in all this is uncertainty, as Curry and others have been saying for a long time.
Here they are wilfully missing out the sampling error and just using the measurement error. The early error estimations that you provide here are farcical. How anyone can be stupid enough to publish something like this amazes me.
This is the old micro-kelvin accuracy BS all over again. You can measure one spot sample very accurately, but that does not tell you how accurately that sample reflects the world ocean.

What strikes me about fig 3 is just how stable the satellite-derived heat content is. Despite the compilation of three different datasets, which could add all sorts of noise and confounding variables, the heat content shows very little variation.
In contrast, the Levitus data clearly is strongly influenced by SST. We see the post-El Nino drop and the 2003 heat surge. In contrast, the rad data shows these fluctuations are completely eradicated; indeed there is a slight opposite effect.
You know it’s almost as if the climate system had a strong negative feedback or even an industrial “PID” controller ensuring stable heat content.

The other notable feature, accepting the hypothesis of a steady systematic error in CERES is that there is basically not the slightest correlation between the two datasets at any scale (apart from the artificial matching of the overall slope).
What is the correlation coeff of these two series?

As Stephen Richards says, if the CERES and the Levitus data sets are both demonstrably wrong or inaccurate using them together will only make any conclusions even more suspect, not improve understanding. Willis’ point about CERES adjustment demonstrates the fundamental flaw in almost every ‘climate’ measurement, as so succinctly labelled in the Climategate readme file, FUDGE factors.

If there is to be some expectation of correlation there is at least one missing factor from the equation. Latent heat. OHC is _the_ major reservoir but its not the only one.
The heat energy required to evaporate a mass of water is enough to raise its temp by about 50 deg C. Then there’s the ice/water phase change.
The TOA values reflect the net sum, OHC does not.

If there is to be some expectation of correlation there is at least one missing factor from the equation. Latent heat. OHC is _the_ major reservoir but its not the only one.
The heat energy required to evaporate a mass of water is enough to raise its temp by about 50 deg C. Then there’s the ice/water phase change.
The TOA values reflect the net sum, OHC does not.

Thanks, Greg. While everything you say is true, it doesn’t affect the data very much for two reasons. First, the size of the energy stored as latent heat doesn’t change much over the year. This is because at the same time that ice is forming in the Arctic, it is thawing in the Antarctic, and vice versa. As a result, the net storage as ice is only the difference between these two terms.
Global sea ice area, for example, only varies by about 6 million km^2 over the year. The amount being thawed and frozen is the thin stuff, and averages only about a metre thick. Let’s be generous, call it 2 metres thick.
That gives us 1.2e+13 cubic metres of ice melted/frozen annually, which is about 1.1e+13 tonnes. To melt this takes ~ 330 megajoules per tonne. This gives us a total of 0.4e+22 joules, or a swing of 0.2e+22 joules per quarter. This is small in the current context.
The second reason this doesn’t affect the data is that this annual swing of 0.4e+22 joules is the regular swing due to ice. As a result, the regular recurring part of it would be removed with the rest of the seasonal variations, and all that would remain would be the residual annual variations in ice volume.
Net result? Latent heat is a small effect.
Best regards,
w.
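For anyone who wants to check the arithmetic, here is the same estimate as a sketch, with my own assumed constants for ice density and latent heat of fusion:

```python
# Sketch of the estimate above, with my assumed constants for ice
# density and latent heat of fusion.

AREA_SWING_M2 = 6e6 * 1e6       # 6 million km^2 annual swing, in m^2
THICKNESS_M = 2.0               # the "generous" 2 m average thickness
ICE_DENSITY_T_PER_M3 = 0.92     # tonnes per cubic metre (assumed)
FUSION_MJ_PER_TONNE = 334.0     # latent heat of fusion (assumed)

volume_m3 = AREA_SWING_M2 * THICKNESS_M             # 1.2e13 m^3
mass_t = volume_m3 * ICE_DENSITY_T_PER_M3           # ~1.1e13 tonnes
energy_j = mass_t * FUSION_MJ_PER_TONNE * 1e6       # ~0.4e22 joules per year
print(energy_j)
```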

The other notable feature, accepting the hypothesis of a steady systematic error in CERES is that there is basically not the slightest correlation between the two datasets at any scale (apart from the artificial matching of the overall slope).
What is the correlation coeff of these two series?

Good question, Greg. The correlation Levitus—CERES is 0.68 … which sounds impressive until you know that the correlation of the Levitus data with a straight line is 0.75.
w.
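To illustrate why that comparison matters (synthetic data, not the actual Levitus or CERES series), two series that share nothing but a common upward trend can still show a "high" correlation:

```python
import numpy as np

# Synthetic illustration (not the actual Levitus/CERES series): two
# series that share nothing but an upward trend still correlate well,
# which is why r = 0.68 is unimpressive when a plain straight line
# already gives r = 0.75.

rng = np.random.default_rng(0)
t = np.arange(52, dtype=float)               # 52 quarters of overlap
series_a = t + 6 * rng.standard_normal(52)   # trend plus noise
series_b = t + 6 * rng.standard_normal(52)   # same trend, independent noise
line = t                                     # a plain straight line

r_ab = np.corrcoef(series_a, series_b)[0, 1]
r_a_line = np.corrcoef(series_a, line)[0, 1]
print(r_ab, r_a_line)    # both well above zero despite the independent noise
```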

In any stable system at thermal equilibrium there CANNOT be any difference between heat in and heat out. Whilst the earth cannot in any way be at thermal equilibrium it has to be compared to space where all the long wave ends up.
If we were at thermal equilibrium there would be no weather but there would be climate.

The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).

Sorry if I appear a bit slow on this, but can you tell me: does the raw CERES data imbalance (~5 W/m2) have a trend, or is it simply a noisy constant average?
If it’s the latter is it, then, the case that the trendless data has been replaced by the “Levitus trend”. Figure 1 does suggest this but I wanted to be sure.
Thanks

Ocean heat is irrelevant. If the atmosphere is not warming, then there is no AGW relevance to any ocean heat data. The oceans warm or cool by 3 means : 1) geological (volcanoes, earth heat etc), 2) radiation from the sun and 3) convection from the atmosphere.
The only one that is relevant to the AGW debate is convection from the atmosphere. BUT if the atmosphere isn’t warming (for 17 years now), then we know that AGW can not be influencing ocean heat. Air gains and loses heat much faster than water, so the air MUST heat first before it can convect any heat into the ocean. If the atmosphere was warming and the ocean was warming, then you could link the two, but if the atmosphere is not warming then this is nothing more than another AGW lie.

ok for the calculation for ice, but what about the same kind of calculation for water vapour in the atmosphere?
and regarding the latent heat, we should look at the amount of water, ice and vapour in the climate system as a whole, including clouds, rain, rivers, lakes, (biomass??), aquifers, and so on…
i guess it doesn’t change the whole thing much…

It is really disappointing to see just how much climate monitoring satellites need to be adjusted to produce meaningful results.
I mean, there are weather satellites that seem to be able do what they were designed for, GPS satellites are amazing, etc. Instruments in space fail regularly of course and there is always some processing required for the basic data.
But the two dozen or so earth observing satellites never seem to produce a raw product that works despite the $billions spent on launching them and operating them. They always end up adjusting the data to match whatever the climate models/other theories expect the data to be. At the end of the day, that means all of the data cannot be relied on. Does one rely on the adjustments made to the sea level satellite data, for example?
The individual components of CERES, however, can still be tracked over time I assume. SW in, SW out etc. If they are out by 5 W/m2 on balance, how does that 5 W/m2 change over time.

Willis, some time ago I did a test for autocorrelation of yearly running trends in the OHC data. It works with Durbin-Watson http://en.wikipedia.org/wiki/Durbin–Watson_statistic . In my eyes the “d” is a term for the thermal inertia of the system you look at. It should not differ too much over time. You can calculate the “d” of running trends of the landtemps or the sst and you’ll find different values of course… the trends of landtemps have a greater “d” because the residuals to the trend are more independent than those of the sst. The OHC data behave very strangely: the “d” changes very much over time. This seems not very likely; that’s why the data could be suspicious.
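For reference, the Durbin-Watson statistic is simple to compute. This is my own minimal sketch, not the commenter's code: d near 2 means independent residuals, d near 0 strong positive autocorrelation, d near 4 strong negative autocorrelation.

```python
# Minimal Durbin-Watson statistic (my own implementation): the ratio of
# the sum of squared successive differences of the residuals to the sum
# of squared residuals.

def durbin_watson(residuals):
    num = sum((e1 - e0) ** 2 for e0, e1 in zip(residuals, residuals[1:]))
    den = sum(e ** 2 for e in residuals)
    return num / den

print(durbin_watson([1.0, 1.0, 1.0, 1.0]))    # 0.0, perfectly persistent residuals
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0, strongly alternating residuals
```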

5 W/m2 is a lot of missing heat. Wikipedia has total global photosynthesis at 130 TW (I wonder how accurate this estimate is?). The radius of Earth is 6378 km, for a surface area of 5.11 × 10^14 m2. So photosynthesis would only account for 0.25 W/m2?
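Checking that arithmetic (my calculation, using the figures in the comment):

```python
# Checking the arithmetic in the comment above (my calculation).
import math

POWER_W = 130e12                         # 130 TW global photosynthesis
RADIUS_M = 6.378e6                       # Earth radius in metres

area_m2 = 4 * math.pi * RADIUS_M ** 2    # ~5.11e14 m^2
flux = POWER_W / area_m2
print(flux)                              # ~0.25 W/m2, as the comment finds
```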

Sampling bias (spatiotemporal pattern nonrandom) & error estimates based on false assumptions.
Bill Illis (January 5, 2014 at 4:39 am) asks a good question: “If they are out by 5 W/m2 on balance, how does that 5 W/m2 change over time.”
Also some worthwhile questions about water vaporization/condensation (not to be confused with freeze/thaw).

‘I must admit, I don’t understand the logic …’ The first mistake is to think this change was made on logical grounds. The next mistake is to forget the use of this data; its main value is political rather than scientific, so you can understand why they changed it in the way they did.

Willis, as a couple of commenters have already noticed, Greg may have a bigger point than ice caps. The Heat of Vaporization is much greater than the Heat of Fusion for water. Could part of the imbalance mismatch be simply changes to global humidity?

Bill Illis says:
January 5, 2014 at 4:39 am
The individual components of CERES, however, can still be tracked over time I assume. SW in, SW out etc. If they are out by 5 W/m2 on balance, how does that 5 W/m2 change over time.

That’s pretty much what I was asking in this post

John Finn says:
January 5, 2014 at 4:10 am

If the 5 W/m2 imbalance is trendless then there is no justification for introducing a trend. Willis’ Fig 1 suggests an increase in CERES imbalance up to about 2003 but an essentially flat imbalance after that. This pattern is not dissimilar to the surface temperature record.

Willis Eschenbach: “Global sea ice area, for example, only varies by about 6 million km^2 over the year. The amount being thawed and frozen is the thin stuff, and averages only about a metre thick. Let’s be generous, call it 2 metres thick.”
Could you make the assumption behind this a little more explicit? It seems that you’re assuming that the freezing and thawing occurs only on the ice that appears and disappears, not on the ice that from above seems to remain permanently. So if dA is the area change and h is the thickness, you conclude that the volume change is h * dA.
But isn’t it possible that some freezing and thawing occurs beneath the “permanent” ice? Suppose, for example, (obviously, contrary to fact) all the ice took the form of a single cone, whose base is what we see from above. Then the volume change would not merely be proportional to the product of h and dA but instead proportional to the product of dA and the square root of the entire ice area, including that which is “permanent.” That would be a considerably different quantity.

As we are dealing with differences between large numbers, it is good to keep in mind what Prof. Walter Lewin of MIT says, “Any measurement that you make without knowledge of its uncertainty is meaningless”. Listen to this first lecture by Prof. Lewin at: http://ocw.mit.edu/courses/physics/8-01-physics-i-classical-mechanics-fall-1999/video-lectures/lecture-1/
His quote starts at about 4:40 min into the video.
Differences between measured values double the uncertainty, so that also has to be taken into consideration. As the measurement of ocean temperature is also uncertain, using this value to establish what the net radiative imbalance at the TOA is, is incredibly naïve.
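As a sketch of that point (my own formula, standard error propagation): for independent errors the uncertainty of a difference grows by the root-sum-of-squares, a factor of √2 for equal errors; doubling is the worst case, when equal errors are fully correlated in opposite directions.

```python
import math

# Standard error propagation (my sketch): the uncertainty of the
# difference of two independent measurements adds in quadrature.
# Doubling is the worst case, for equal fully anti-correlated errors.

def diff_error(s1, s2):
    """Standard error of (x1 - x2) for independent errors s1, s2."""
    return math.sqrt(s1 ** 2 + s2 ** 2)

print(diff_error(1.0, 1.0))   # ~1.414, i.e. sqrt(2) times a single error
```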

Willis says:
I must admit, I don’t understand the logic behind setting the imbalance to +0.85 W/m2.
————————————————
The logic is Hansen’s model is within the margin of error of the measurements. It’s not ruled out, so they use it because his model predicts DOOOOOM!
Unfortunately, the margin of error is so large for those CERES TOA measurements, they don’t rule out many predictions. Even global cooling is supported.

Willis, many thanks for your two CERES TOA net flux imbalance/ OHC posts. Can I ask which particular CERES dataset you are using, and where the 0.85 W/m2 adjustment you refer to is documented?
The EBAF-TOA CMIP5 Data at http://ceres.larc.nasa.gov/cmip5_data.php has 5 variables, from 3 of which the net TOA imbalance can be derived. The related documentation, at http://ceres.larc.nasa.gov/documents/cmip5-data/Tech-Note_CERES-EBAF-TOA_L3B_Ed2-7.pdf , says that the SW and LW fluxes in EBAF Ed2.7 are adjusted to give a net TOA flux of 0.58 W/m2 over July 2005-June 2010, not 0.85 W/m2 (which was used in EBAF Ed1.0 and 2.5).
I think the adjustment to 0.58 W/m2 is a constant offset applied at all times, not a value that changes over time. And the last 13 years of EBAF Ed2.7 global all-sky annual TOA imbalance data have a negligible trend. CERES data is, as you know, meant to be fairly stable over time even though its absolute accuracy is poor.
Whether 0.58 W/m2 is a reasonable imbalance estimate is debatable, but it looks rather more realistic than 0.85 W/m2 to me.
I agree with you about the OHC data fluctuations probably being largely noise.

@ mwhite says:
January 5, 2014 at 2:10 am
“Climate Change: Challenges and Solutions. A FREE online course from the University of Exeter”
This looks like FREE propaganda, to me. Take the fungus teaser for example.
From the article (link below). This looks like just more hand-waving about an unproven threat from “climate change.” So, if you want to be brain-washed, take the course by all means.
“Fungal disease threat seen increasing
Fungal diseases are a major threat not just to wild plants and animals, but to us.
A new Nature paper shows we’re already heading for huge fungal damage to vital crops and ecosystems over the coming decades. If we don’t do more to stop these diseases’ spread, their impact could be devastating.
Fungi already destroy at least 125 million tonnes a year of rice, wheat, maize and potatoes and soybeans, worth $60 billion. Researchers estimate that in 2009-10, this lost food could have fed some 8.5 per cent of the world’s people. And this is just the result of persistent low-level infection; simultaneous epidemics in several major crops could mean billions starve.” http://www.enn.com/climate/article/44265

Willis says:
“Now to me, starting by assuming that there is a major imbalance in the system seems odd.”
I submit it’s far worse than that–it’s criminal–because there are a lot of policy decisions that are based on that fallacy.
Our (so-called) Masters are forcing their agenda on us through lies, lies, and more lies.

Paul Vaughan: “Also some worthwhile questions about water vaporization/condensation (not to be confused with freeze/thaw)”.
That was my major point to Willis, though he seems to have read it as just referring to ice; I did specifically mention the latent heat of evaporation.
Willis, two things. First, there has been a long-term change of ice volume, though we don’t really have any decent estimation of its size.
Second, my main point was about evaporation. There is a net change in atmospheric water content on inter-annual time scales, and this can be seen reflected in LOD. (LOD is also a variable energy reservoir).
If El Nino dominated periods have more tropical (or extra-tropical) cloud, then there is an energy transfer to the atmosphere. Conversely, when it cools, precipitation gives up the latent heat (which will show up in outgoing IR) and there is a reduction in angular momentum.
This may go some way to explaining why CERES shows little variability and may be evidence of strong negative feedbacks being present.

Looking at the longer “official” OHC record, there is some relation to the global temperature record as one should expect. Surely there is a significant lag in heating and cooling of the ocean 0-700. I note despite no warming for 17 years, the inflection to a lower slope in OHC is around 2003, suggesting a ~ 5 year lag. Perhaps a truer correction could be estimated by reducing the OHC record slope overall so that it comes out flat after 2003. We also know there is a positive component to the slope attributable to warming proponent analysis – we always have a range of choices from data variation to select from in plotting these things – the correction chosen based on Levitus is a good example. http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content55-07.png
Also the perhaps silly question: can there be any other forms the outgoing energy can take, or is there some sort of leakage in the coverage? I’m sure physicists have already dismissed such possibilities. And, why would there be no obvious variation resulting from the annual perihelion/aphelion in the orbit.

Look to the volume of ice for your proxy of the net energy balance. The overall energy content of the oceans cannot change, as it is set by the surface pressure on it. The energy losses of the Earth are set by deep space and do not change. Only the energy input from the Sun can change. We are getting a lesson on that even now. pg

Exeter:
About the course
The course is aimed at the level of students entering university, …
….
Requirements
No previous experience or qualifications required.
====
Entry requirements for “climate change” studies at Exeter do not seem to be very high. If that is typical it may go a long way to explaining a lot of the work we see getting published in this field.
( Small green ‘Gaia” prayer mat will be supplied to all applicants to the course ).

Nic Lewis says:
January 5, 2014 at 7:50 am
“http://ceres.larc.nasa.gov/documents/cmip5-data/Tech-Note_CERES-EBAF-TOA_L3B_Ed2-7.pdf , says that the SW and LW fluxes in EBAF Ed2.7 are adjusted to give a net TOA flux of 0.58 W/m2 … not 0.85 …”
Nic, this would go some way toward flattening the slope after the inflection downwards at 2003, as I suggested in my comments above:
Gary Pearse says:
January 5, 2014 at 9:27 am

The most important info from the Hockey Schtick is missing: the launch of the RAVAN precision satellite, which will give the answer…
until now: assumptions…
“Satellite will launch in 2015 to measure Earth’s radiation budget for the first time
A satellite scheduled to launch in 2015 will ‘measure the absolute imbalance in the Earth’s radiation budget for the first time, giving scienti……’”

Joe Chang says: January 5, 2014 at 5:01 am
“5 W/m2 is a lot of missing heat. Wikipedia has total global photosynthesis at 130 TW (I wonder how accurate this estimate is?). The radius of Earth is 6378 km, for a surface area of 5.11 × 10^14 m2. So photosynthesis would only account for 0.25 W/m2?”
Joe, you might want to check out this paper: http://academic.research.microsoft.com/Publication/6170526/global-mapping-of-terrestrial-primary-productivity-and-light-use-efficiency-with-a-process-based-model . In the abstract, it says ” Gross photosynthetic production (GPP), net primary production (NPP), carbon storage, absorption of photosynthetically active radiation (APAR), and light-use efficiency (LUE) were addressed. Assuming an equilibrium state under the present environmental conditions, Sim-CYCLE estimated the annual global GPP and NPP as 124.7 and 60.4 Pg C yr-1, respectively. Based on the estimated APAR of 191.3 × 1021 J, the annual average biospheric LUEs for GPP and NPP were calculated as 0.652 and 0.315 g C MJ-1, respectively.”
I have seen higher gross numbers, GPP, up to 160 Pg C yr-1 in a study that looked at oxygen isotopes and compared them. With these numbers you are looking at .315 x 191.3 x 10^21 joules, which is 6 x 10^22 joules per year.

The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).

Sorry if I appear a bit slow on this, but can you tell me: does the raw CERES data imbalance (~5 W/m2) have a trend, or is it simply a noisy constant average?

As long as there is an imbalance between the measurements of incoming and outgoing energy, this creates a trend in the net energy storage. As a result, a 5 W/m2 imbalance gives a huge, unimaginably large trend.
w.
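To put a number on "unimaginably large" (my own arithmetic, not from the post): integrating a constant 5 W/m2 over the Earth's surface for the roughly 13-year CERES record gives:

```python
# My own arithmetic: cumulative energy implied by a constant 5 W/m2
# imbalance over the ~13-year CERES record. A constant offset in the
# flux becomes a linear, and enormous, trend in stored energy.

EARTH_SURFACE_M2 = 5.11e14      # total surface area of the Earth, m^2
SECONDS_PER_YEAR = 3.156e7
YEARS = 13

stored_j = 5.0 * EARTH_SURFACE_M2 * SECONDS_PER_YEAR * YEARS
print(stored_j)                 # ~1.0e24 joules accumulated
```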

Willis, as a couple of commenters have already noticed, Greg may have a bigger point than ice caps. The Heat of Vaporization is much greater than the Heat of Fusion for water. Could part of the imbalance mismatch be simply changes to global humidity?

Thanks, Charles. A couple of comments. First, what we see in the Figures above is the annual anomaly after the annual cycle has been removed. The normal seasonal cycles don’t show up at all.
Second, whatever warms the ocean also melts the ice and evaporates the water. As a result, the melting and evaporation will add to any OHC swings.
And that means that the melting and the evaporation is already included in the CERES data above … and thus, in the immortal words of Jim Hansen, it’s worse than we thought …
w.

Willis Eschenbach: “Global sea ice area, for example, only varies by about 6 million km^2 over the year. The amount being thawed and frozen is the thin stuff, and averages only about a metre thick. Let’s be generous, call it 2 metres thick.”
Could you make the assumption behind this a little more explicit? It seems that you’re assuming that the freezing and thawing occurs only on the ice that appears and disappears, not on the ice that from above seems to remain permanently. So if dA is the area change and h is the thickness, you conclude that the volume change is h * dA.

I don’t know any way to get better numbers. You are right that more will melt and freeze than I estimated.
The main point, however, is that the overwhelming majority of the ice signal is cyclical, and so it will be removed because we are only looking at the non-seasonal (anomaly) signal.
w.

thanks REJ. 130 TW working out to 0.25 W/m2 seemed low to me because somewhere it was cited that photosynthesis efficiency was 3-6%. If global PS is only 0.25 W/m2 versus average sunlight at 340 W/m2 (1361 / 4, area of circle vs surface area of sphere), that would mean that only 1.2-2.5% of the surface of the earth was engaged. Land is 30%, but some PS also occurs in the ocean/water.
The value of 60×10^22 j/yr is 1900 TW, which is a huge 3.7 W/m2, requiring PS on 18-36% of the earth’s surface, which seems high to me. Anyways, should photosynthesis be included in the accounting for the difference between inbound and outbound radiation? Or is life not important?
I did say 5 W/m2 is a lot. Considering the 340 W/m2 inbound, that’s 1.5%, which is perhaps not bad considering the complexity in making this measurement.
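Those fractions check out (my arithmetic, re-using the commenter's own figures):

```python
# My arithmetic, re-using the commenter's own figures.

avg_sunlight = 1361 / 4                 # ~340 W/m2 averaged over the sphere
ps_flux = 0.25                          # W/m2 implied by the 130 TW estimate

frac_at_6pct = ps_flux / (avg_sunlight * 0.06)   # surface fraction at 6% efficiency
frac_at_3pct = ps_flux / (avg_sunlight * 0.03)   # surface fraction at 3% efficiency
imbalance_frac = 5.0 / avg_sunlight              # the 5 W/m2 error as a fraction

print(frac_at_6pct)     # ~1.2% of the surface
print(frac_at_3pct)     # ~2.4% of the surface
print(imbalance_frac)   # ~1.5% of incoming
```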

http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
In 2000, global ice area was around the 1979-2008 average used as the base in that plot. At the beginning of 2013 it was down nearly 1.5 × 10^6 km^2, about 1/4 of your ballpark annual swing calcs, i.e. 0.1e+22 joules.
Clearly there has been substantial loss in thickness in areas that are still covered year round, so that needs scaling up by some factor. The effect would be of the right order of magnitude to be visible in Figure 3.
I know there have been a number of papers from GRACE and other satellite estimations but they had massive uncertainty figures even when they were being honest about it. However, it looks big enough to be counted.
Since we seem to have regained as much ice in the last 12 months, it could account for the uptick at the end.

Thanks, Nic. The 0.85 W/m2 number comes from an actual area-weighted average of the full EBAF Ed2.7 dataset. Let me check the period you mention …
Nope. The average of the toa_net_all dataset from July 2005 to June 2010 is 0.84. Go figure …
w.

Joe Chang, you jumped a decimal point on me. It was 6 not 60. However, thinking about the numbers, I believe that most estimates given for productivity are net and not gross. So, this estimate at 60 PgC y-1 is low by a factor of two. The real number should be greater than 10^23 joules per year. These are large numbers, you should read the paper and see how they are generated.

One thing that I think could produce a systematic imbalance in CERES is surface reflection at low incidence angle. This particularly affects the polar regions on open water, and even more so on the stiller melt pond water.
Satellite measurements are essentially downward looking but have to be protected against getting a direct flash when their orbit puts them facing the sun as they come over the pole, so they have a shutter which protects the radiometer in this position.
Most of the time they will not be measuring surface reflected solar because they are not in the right position and when they are they flip the shutter and don’t get any data.
We are all familiar with sun reflected off water and at low incidence it can be a large proportion of the light which is reflected.
If we are talking about 5/1361 that’s only 0.36% of full incoming flux.
This specular reflectivity is never taken into account in “albedo” figures. Neither is it accounted for in PIOMAS and other ice models, as far as I have been able to ascertain.
It will only be a very small fraction of the exposed surface which is concerned by this but then 0.36% is “a very small fraction”.

Well darn, there goes my first guess as to where the problem is coming from.
So we are left with: CERES has systematically underestimated the reflected solar radiation by about 5 W/m^2, systematically underestimated the outgoing longwave by about 5 W/m^2, or it’s a shared underestimate and some comes from too little reflected solar, some from too little outgoing longwave.

http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
In 2000, global ice area was around the 1979-2008 average used as the base in that plot. At the beginning of 2013 it was down nearly 1.5 × 10^6 km^2, about 1/4 of your ballpark annual swing calcs, i.e. 0.1e+22 joules.
Clearly there has been substantial loss in thickness in areas that are still covered year round, so that needs scaling up by some factor. The effect would be of the right order of magnitude to be visible in Figure 3.
I know there have been a number of papers from GRACE and other satellite estimations but they had massive uncertainty figures even when they were being honest about it. However, it looks big enough to be counted.
Since we seem to have regained as much ice in the last 12 months, it could account for the uptick at the end.

Greg and others, let me thank you for your speculations. To focus them, however, let me remind everyone that whatever climate phenomena you come up with to explain the large quarterly changes in the Levitus data, you also need to explain why these phenomena have not affected the CERES data.
I do like your idea about the changes in the global sea ice area, however. Because these changes do NOT seem to show up in the CERES data, this could provide further evidence that the temperature of the earth is thermoregulated.
For folks’ information, here’s what we really have to explain: the changes (and the lack of changes) in the CERES data.
Figure S1. Decomposition of CERES system energy content. Since this contains all changes in system energy and not just ocean heat content (OHC), I have named it accordingly. Note the range bars at the right side of each panel, which show the relative sizes of the residuals at the various scales.
The panel of interest is panel three, “Trend”. This shows the variations in the total system energy from all causes.
w.

Willis writes:
“all that this does is reinforce my previous statements about the error bars of the Levitus data. I’ve held that they are way too small … and both Figures 3 & 4 show that the error bars should be at least twice as large.”
How large do you think Figures 3 and 4 suggest the error bars should be? If these are one-standard-deviation error bars on the ocean heat content, then crudely we would expect 68% of CERES points within one standard deviation, 95% within two standard deviations, and 99.7% within three standard deviations.
Of course, if there are errors on the CERES data (there surely are, but I’m not sure how large) or other sources of heat storage that others have pointed to, then we would expect somewhat more disagreement than that.
If there are some outliers greater than three standard deviations, that probably just reflects some lack of normality.
Overall, I don’t think Figures 3 & 4 make a strong case that the errors are underestimated. But to back that up I would have to do more quantitative calculations rather than just eyeballing.
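Those expected coverage fractions follow directly from the normal distribution and are easy to compute (a quick sketch; the actual CERES residual counts would of course have to come from the data itself):

```python
import math

def coverage(k):
    """Fraction of a normal distribution lying within k standard deviations."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} sigma: {coverage(k):.3f}")
# within 1 sigma: 0.683
# within 2 sigma: 0.954
# within 3 sigma: 0.997
```

With roughly 160 monthly CERES points, one would then expect about 50 of them outside one standard deviation, seven or eight outside two, and fewer than one outside three.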

Willis, I have been thinking about your data and I think that you have uncovered a case of deception. Why do I say that? If you are a scientist or engineer and your hardware is not getting good data, you want to fix it in the worst way. If the data were not good, there would have been a team of technical specialists, engineers, and scientists working the problem or proposing a new instrument. What does this mean? It means that they understand what is happening and want to keep it from the public. The data has been faked to cover something that they feel would not be to their advantage if it were widely known.

Here is a simple experiment. You are trying to cut a board that must be exactly 8′ 7-13/64″ long for a shelf in your closet. The only, and I mean ONLY, thing you have to make any measurements with is an old hardware store yard stick. The smallest readable graduations are 1/4 inch. You have no string, rope, building square, framing triangle, etc., just the yard stick. I don’t care how many times you measure the board and how many times you average each measurement or use “RMS averaging”, you will never get the correct length. Eventually you may cut a board and it may fit the shelf, but it probably is not the exact length.
Data available on the internet about the ARGO buoys provides information about their accuracy. To achieve that accuracy an instrument must be calibrated to an NBS-traceable standard, normally in an environmentally controlled area. The facility I worked with had NBS-traceable monitors with alarms on several walls and a double-door-lock entrance. Readings of that accuracy would, in turn, be obtainable only under laboratory conditions (an environmentally controlled area, including at least temperature, humidity, and pressure conditions equal to the conditions of calibration, +/- a few degrees). The ARGO probes are, from my understanding, subject to about a 50 degree F change from bottom to top of travel, and from the Seabird technical references you can find that you will get about a 1% error from a temperature change of 50 degrees C.
Now explain how they compensate/correct for the fact that different buoys will have different surface and lower sample point temperatures.
With that fixed, explain how they correct for the fact that some buoys may take longer ascending/descending than others (on purpose or otherwise) and this will cause different errors (equipment inside the buoy warms/cools faster/slower), and the errors will be different ascending than descending.
1. The fact that you have 3000 buoys does not mean that you have 3000 samples of the same temperature that can be averaged using the RMS accuracy rules (square root of the sum of the squares; in the industry we usually said RMS). In my study and review of the use of RMS averaging in temperature measurement during the development of the Instrument Society of America (ISA) standard on this topic, we concluded that all measurements MUST be of the exact same entity at the exact same time under the exact same conditions.
2. The fact that you add up 3000 surface-level temperatures (e.g., numbers between 30 and 90 degrees F), divide by 3000, and get a number out to 3 or more decimal points does not mean that the result is the temperature to within anything better than +/- 1.5 degrees C (or 0.25%; in fact you must use the WORST accuracy to be accurate). PERIOD. The RMS accuracy rules for averaging samples do not apply. IT DOES NOT WORK THAT WAY.
3. Also, the accuracy of essentially every instrument I have ever worked with, including precision laboratory standard instruments, when expressed as a percentage, is a percent of the MAXIMUM reading for the range selected. (It is a common misconception that a figure like 0.01% means percent of the reading; that is not normally the case, even for instruments selling for more than $10,000. Read the fine print on the accuracy specifications.)
All of this tells me that their error bands are MUCH larger. I will allow that the “trend” is shown, assuming that the trend is not caused by other environmental factors affecting the measuring equipment. E.g., are there more buoys in an area that is warming and fewer in an area that is not?
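The distinction at issue here can be illustrated with a toy simulation (every number below is made up for illustration; none are ARGO specifications). Averaging many readings does beat down the purely random part of the error by roughly 1/√N, but a systematic calibration bias shared by the instruments survives the averaging untouched:

```python
import random

random.seed(1)

N = 3000           # number of buoys, as in the comment above
true_temp = 15.0   # hypothetical true mean temperature, deg C
bias = 0.5         # shared systematic calibration error, deg C (assumed)
sigma = 1.5        # per-reading random error, deg C (assumed)

readings = [true_temp + bias + random.gauss(0, sigma) for _ in range(N)]
mean = sum(readings) / N

# The random error of the mean shrinks to ~ sigma / sqrt(N) ~ 0.03 deg C,
# but the 0.5 deg C shared bias remains in the average in full.
print(f"mean = {mean:.2f} deg C (true value {true_temp})")
```

So averaging helps against noise but says nothing about shared systematic error, which is precisely the kind of error the yard-stick analogy describes.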

Willis Eschenbach says:
January 5, 2014 at 10:05 am
John Finn says:
January 5, 2014 at 4:10 am
As long as there is an imbalance between the measurements of incoming and outgoing energy, this creates a trend in the net energy storage. As a result a 5 W/m2 imbalance gives a huge, unimaginably large trend.
w.
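For scale, the size of the trend implied by that imbalance is easy to work out (a back-of-envelope sketch using a round figure for the Earth’s surface area):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_AREA = 5.1e14     # m^2, approximate surface area of the Earth

imbalance = 5.0         # W/m^2, the raw CERES offset discussed in the post
joules_per_year = imbalance * EARTH_AREA * SECONDS_PER_YEAR

print(f"{joules_per_year:.1e} J per year")   # ~8e22 J per year
```

That is around 8×10^22 joules per year, an order of magnitude more than the observed rate of ocean heat uptake, which is why the raw offset cannot be physical.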

Willis, my fault entirely. I read your post just after getting up this morning (UK time). I don’t think I was fully awake. You are, of course, correct. A constant or near constant TOA imbalance will produce a trend in net energy storage. I realised my mistake when I took a closer look at your Fig 1.
Thanks for your reply.

The Levitus data: more regionalism? Is the data not distributed evenly enough, so that different groupings result in different mathematically derived averages that drive the end result? A Yamal problem, whereby one or two strong “regional” datapoints dominate the computation?
This would be determinable by colour-coding the initial data that generates the curves, or by a frequency breakdown of the data coded by region.

response to a comment upstream: No, the air does not need to heat first before the oceans warm. SW infrared zooms right through dry air, bypassing molecular components such as CO2. It can be reflected, for sure (by clouds and other forms of atmospheric water), but the air can be cold as hell and yet the ocean will warm from SW IR.

Willis, you say:
“The average of the toa_net_all dataset from July 2005 to June 2010 is 0.84. Go figure …”
Well, that’s weird. I downloaded the monthly 03/2000 – 06/2013 TOA Net Flux – All-Sky global data for CERES_EBAF-TOA_Ed2.7 and took the mean of months 65 to 124. It is neither 0.58 nor 0.84, but 0.623. Beats me…
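One thing worth double-checking in a disagreement like this is the indexing itself. A 1-based month counter on a series starting 03/2000 can be converted to calendar dates as follows (a small sketch of the index arithmetic only; the data handling is not shown):

```python
from datetime import date

def month_index_to_date(i, start=date(2000, 3, 1)):
    """Map a 1-based month index (month 1 = March 2000, the start of the
    CERES EBAF series) to a (year, month) pair."""
    months = (start.year * 12 + start.month - 1) + (i - 1)
    return months // 12, months % 12 + 1

print(month_index_to_date(65))    # (2005, 7): July 2005
print(month_index_to_date(124))   # (2010, 6): June 2010
```

So months 65 to 124 do span July 2005 to June 2010; if two people slicing that window get different means, the difference lies in the data version or the averaging (e.g. area weighting), not the indexing.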

Doug, there may be some truth in that, but the main difference between OHC and CERES is that OHC is just one part (albeit the largest part) of what CERES is measuring. Or, to be more accurate, what Willis is calling CERES here, which I’m assuming is the cumulative integral of the CERES TOA radiation budget.
What is labelled “trend” in the latest graph, which I think is a detrended, low-pass filtered (SVD?) component of it, seems to resemble changes in ice area, which is a (questionable) proxy for ice volume, itself an energy term via the latent heat of fusion.
As ice is freezing, it’s dumping latent heat back into the system and this is visible in the TOA budget.

@Willis main post: The problem is that according to the raw, unadjusted CERES data, there’s an average net TOA radiation imbalance of ~ 5 W/m2 … and that amount of imbalance would have fried the planet long ago. That means that there is some kind of systematic error between the three datasets (solar, reflected solar, and longwave).
Systematic error. I think it is best to remember where the “CERES dataset” comes from. CERES data comes from a device on sun-synchronous satellites (TERRA, AURA, AQUA), which are in 720 km, 99-minute orbits. TERRA only sees the earth at solar 10 am to 11 am (depending on latitude), making its equatorial pass at 10:30 am. AURA and AQUA are in the same orbital-plane train, 8 minutes apart, making sunlit equatorial passes at 1:30 pm. But that does not cover the entire CERES dataset.
In the CERES dataset, the 12 pm, 1 pm, 2 pm, 3 pm, etc. coverage comes from low-resolution geosynchronous MODIS data from GOES satellites, which is converted (SOMEHOW!!) into CERES data using the high-res CERES data from the 10:30 am and 1:30 pm passes. The CERES dataset is really GOES data recalibrated against CERES data from about 10:30 am and about 1:30 pm.
Just how well the recalibrated GOES solar 3:00 pm data would match real CERES data from a solar 3:00 pm orbit is at present an unanswerable question, because there is no solar 3 pm CERES data collected.

So, the CERES folks have gone for second best. They have adjusted the CERES imbalance to match the Levitus ocean heat content (OHC) data. And not just any interpretation of the Levitus data. They used the 0.85 W/m2 imbalance from James Hansen’s 2004 “smoking gun” paper.

A poorly understood adjustment for the systematic error within the insufficiently calibrated GOES data embedded in 85% of the CERES dataset is the only smoking gun worth remembering.
More from my Oct 10, ’13 post

I am quite curious what the GOES data looks like before and after the CERES calibration.
How much does the calibration change day to day? Month to Month?
How much does it change from 10:30am to 1:30pm? from 40N to 20N to 40S?
You see, if it is a rock steady calibration, and we can use GOES reliably for 8:30 am, 3:30 pm and 5:30 pm estimates of cloud emissions…. why do we need the CERES instrumentation at all?

Frankly, I am not surprised that the raw CERES dataset, which is GOES hourly data calibrated by CERES at only two times of day, has a systematic error in heat (either plus or minus). What I am surprised to find is the bald-faced adjustment of the data to justify a claim of “missing heat”.
There is no “missing heat”. There is only the inability to measure the heat flux with the precision needed. What IS missing is honesty.

With regard to Willis’s “trend” chart above, a breakdown by reflected solar and outgoing longwave would be appreciated. It looks to me like there may be some ENSO fluctuations in there, and also it might make sense to compare to atmospheric temps. Have you looked at UAH daily LT over the same period?

Earth is not a barren rock with no atmosphere. Incoming solar energy creates wind (a simplification); that equates to electromagnetic energy being converted to mechanical energy. You also have photosynthesis. I wouldn’t expect the balance of incoming and outgoing energy to equal zero: some of the solar energy is used in ways other than heat storage in the system.

The central claim of the radiative greenhouse hypothesis is that adding radiative gases to the atmosphere will reduce the planet’s radiative cooling ability. The critical flaw here is believing that temperatures for moving fluids in a gravity field can be derived from SB equations alone.
If global warming was physically possible, a TOA radiation imbalance as radiative gas concentration was increasing would be the expected signature, however the CERES data is not fit for purpose. The error margin is simply too great to allow any conclusion given the small signal expected. The problem is similar to trying to extract a global temperature trend from surface stations. No amount of adjustment, homogenisation or “correction” will make inadequate data fit for purpose.
However, while new remote sensing instruments could answer the question, AGW can be disproved far more simply.
The radiative greenhouse hypothesis relies on the mis-application of SB equations to moving fluids in a gravity field. Climate scientists incorrectly claim a figure of -18C for surface Tav in the absence of radiative gases, then use down-welling LWIR to add 33C to arrive at the observed 15C surface Tav.
To understand why AGW is a physical impossibility you only need to model the planet correctly. Land, ocean and atmosphere need to be treated as separate bodies, and the ocean and atmosphere need to be treated as moving fluids in a gravity field. This is important because for moving fluids in a gravity field SB equations alone cannot determine their temperature profile. The relative height of energy entry and exit, fluid resistance and conductivity are critical to the pattern of non-radiative energy transports and the temperature profile within the fluid.
But even building a complex model of the planet using CFD is not needed to disprove the radiative greenhouse effect. A few cheap empirical experiments are all that are required.
1. Does the two-shell radiative model work for surfaces that are not moving fluids?
http://i44.tinypic.com/2n0q72w.jpg
http://i43.tinypic.com/33dwg2g.jpg
http://i43.tinypic.com/2wrlris.jpg
The answer is yes.
2. Will this work for moving fluids in a gravity field?
http://i48.tinypic.com/124fry8.jpg
The answer is no.
3. Does incident LWIR heat or slow the cooling rate of the oceans?
http://i42.tinypic.com/2h6rsoz.jpg
The answer is no.
Climate scientists have tried to solve for “how cold would the surface be without radiative gases?”. What they should have asked is –
4. How hot would the surface get without an atmosphere?
http://i42.tinypic.com/315nbdl.jpg
Unlike other experiments I have posted in the past, I have not built this one. The need for near instantaneous and precise temperature control of the inflowing dry N2 and the liquid N2 cryo cooling are a significant cost barrier and “dark money” and “big oil dollars” seem only to exist in the minds of AGW believers.
Without an atmosphere our oceans would boil into space. Experiment 4 shows what would happen if a force field stopped that. The oceans would still be heated below the surface by SW and cooled at the surface by out going LWIR. But they can no longer cool by evaporation or conduction. How hot would they get? SB equations won’t provide the answer. SW heating is not an average flux, it is intermittent and occurs at depth. Speed of convection and fluid conduction become a factor.
I have conducted similar experiments with water samples and sunlight, however these suffered from conductive losses and were still exposed to DWLWIR. Temperatures easily exceeded 70C.
How hot or cold would the oceans really get without an atmosphere?
Will they freeze like the AGW doom mongers claim?
Or would it be boiled whale time?
(answers may be posted on the back of a Turney’s Turkeys commemorative postcard)
Why is this question important?
The atmosphere cools the oceans.
Radiative gases cool the atmosphere.
Adding radiative gases to the atmosphere will not reduce the atmosphere’s radiative cooling ability.
AGW is a physical impossibility.

Latent heat is nearly completely ignored by K&T in their ridiculous energy-exchange graphic in AR4: some 78 W/m2 paired with evapotranspiration, a biological process. The latent heat of evaporation of water is the reason why rainforests are cooler than deserts (predictions using the theory of the GHE would claim the opposite).

I have a question which I hope Willis or somebody else can answer. We all hear regularly about how ocean warming is *bound* to cause sea level rise. I have read that ~52% by volume of the ocean is below a depth of 2000 metres. I also recall reading that this deep ocean water is extremely cold on average, with a temperature of 3 – 4°C. From high school physics I vaguely recall that water is at its densest at about 4°C. So if deep ocean water is, let’s say, 3.5°C, then surely heating it a little bit (ARGO data says 0.065°C) will cause it to contract, not expand, and sea levels to fall, not rise?
I would be grateful if somebody more knowledgeable than I could explain or clarify this.

KRJ Pietersen
What an excellent question.
There is a very useful graph here which shows the effect of water temperature on density. http://www1.lsbu.ac.uk/water/explan2.html
As far as I know there is very little difference in density between, say, 4C and 20C, which would cover most of the ocean’s layers. What effect that would have on the warm surface layer in the tropics I will leave for others to answer.
tonyb

climatereason says: January 6, 2014 at 5:52 am
“KRJ Pietersen, What an excellent question.
There is a very useful graph here which shows the effect of water temperature on density. http://www1.lsbu.ac.uk/water/explan2.html”
Your graph is for fresh water; salt water has different properties near freezing. Salt water does not expand at 3-4 degrees C; rather, it starts losing the sodium chloride heat of hydration, about 4 kJ per mole of sodium chloride. Only after all the sodium-chloride-to-water bonds are broken will the water freeze. It is this heat of hydration that keeps most of the deep ocean at 3-4 degrees C.

In response to KRJ Pietersen
That would be the case if the ocean were fresh water. However, it is saline and does not exhibit the same characteristics. The lower the temperature, the greater the density; increased temperature results in reduced density, hence expansion.

This is a little bit off topic, but relevant to the climate debate and also to your skills, Willis.
I know that questioning the GHG effect is not welcome on this site. As a chemistry graduate I know all about the IR activity of carbon dioxide and its characteristic absorption bands. However, there are a number of assumptions involved in the journey from atmospheric absorption of longwave (blackbody) radiation by CO2 to how effective this is in causing the atmosphere to warm up.
I’m referring for example to the question of energy transfer by collision between excited CO2 molecules and non-IR active nitrogen. Another question concerns how sure we are about the values assigned to theoretical earth temperatures with and without GHG and the contribution of CO2.
Your discussion above suggests to me that the Levitus data is too variable to be of any use and the imbalance indicated by the CERES data rather renders that useless too. We think there is an offset, but we don’t know why and we don’t know its true value. I do not know to what extent these datasets are used to underpin “the science” and alleged proof of warming.
I have a suspicion that much of global warming science involves uncertainties, assumptions, guesswork and “settled science” that would not survive objective close scrutiny of the kind illustrated above.
At a time when the CET record shows that temperatures today are similar to those of 23 years ago, we really do need to reconsider the GHG assumptions and calculations and the credibility of the numbers we use to calibrate the magnitude of the effect. A step by step critical review of this important topic is long overdue.

You guys are forgetting that the deep ocean is also under enormous pressure. This will behave differently from water in the ambient environment.
At any rate, you’d need to integrate the effects over the whole of the ocean from top to bottom. I haven’t seen an explicit calculation considering all factors, but the result of such calculations is, apparently, some small overall expansion.
Which is more of a “so what” than “no, that’s wrong!”
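The sign and rough size of that steric effect can be sketched with a single representative number (the real calculation integrates an expansion coefficient that varies with temperature, salinity, and pressure, so this is only an illustration):

```python
# Illustrative round values only; not a real equation of state.
alpha = 1.5e-4         # 1/K, representative thermal expansion coefficient
layer_depth = 2000.0   # m, depth of the warming layer
dT = 0.065             # K, the deep-ocean warming figure quoted upthread

rise = alpha * layer_depth * dT      # m of sea-level rise from this layer
print(f"steric rise ~ {rise * 1000:.0f} mm")   # ~20 mm
```

And the point others made stands: because the expansion coefficient of seawater remains positive right down to the freezing point (unlike fresh water), deep-ocean warming expands the water rather than contracting it.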

Doug, there may be some truth in that, but the main difference between OHC and CERES is that OHC is just one part (albeit the largest part) of what CERES is measuring. Or, to be more accurate, what Willis is calling CERES here, which I’m assuming is the cumulative integral of the CERES TOA radiation budget.
What is labelled “trend” in the latest graph, which I think is a detrended, low-pass filtered (SVD?) component of it, seems to resemble changes in ice area, which is a (questionable) proxy for ice volume, itself an energy term via the latent heat of fusion.
As ice is freezing, it’s dumping latent heat back into the system and this is visible in the TOA budget.

The residuals (data minus seasonal swings) are what Levitus is looking at in their OHC data.
The residuals from the energy necessary to melt the ice are quite small. I calculate them at about 0.1e+22 joules per month
And as a result, this is not visible in the TOA budget.
w.
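The order of magnitude of that figure can be reproduced with round numbers (the ice thickness and area change here are assumptions for illustration, not measurements):

```python
area_change = 1.5e12   # m^2, i.e. ~1.5 million km^2 of ice, per the thread
thickness = 1.5        # m, assumed mean thickness of the lost ice
rho_ice = 917.0        # kg/m^3, density of ice
L_fusion = 3.34e5      # J/kg, latent heat of fusion

energy = area_change * thickness * rho_ice * L_fusion
print(f"{energy:.1e} J")   # ~7e20 J, i.e. ~0.07e+22 J
```

That lands near the ~0.1e+22 J figure above, and it is indeed small compared with the quarterly swings in the Levitus data.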

Latent heat is nearly completely ignored by K&T in their ridiculous energy-exchange graphic in AR4: some 78 W/m2 paired with evapotranspiration, a biological process. The latent heat of evaporation of water is the reason why rainforests are cooler than deserts (predictions using the theory of the GHE would claim the opposite).

Say what? Latent heat gets lots of attention in the K/T budget. The paper is here. They estimate evapotranspiration by noting that what goes up must come down, so evaporation must equal precipitation.
Here’s their discussion of their calculations and their likely errors:

Global precipitation should equal global evaporation for a long-term average, and estimates are likely more reliable of the former. However, there is considerable uncertainty in precipitation over both the oceans and land (Trenberth et al. 2007b; Schlosser and Houser 2007). The latter is mainly due to wind effects, undercatch and sampling, while the former is due to shortcomings in remote sensing. GPCP values are considered most reliable (Trenberth et al. 2007b) and for 2000 to 2004 the global mean is 2.63 mm/day, equivalent to 76.2 W m-2 latent heat flux. For the same period, global CMAP values are similar at 2.66 mm/day, but values are smaller than GPCP from 30° to 90° latitude and larger from 30°S to 30°N. If the CMAP extratropical values are mixed with GPCP tropical values, and vice versa, the global result ranges from 2.5 to 2.8 mm/day. In addition, new results from CloudSat (e.g., Stephens and Haynes 2007) may help improve measurements, with prospects mainly for increases in precipitation owing to under-sampling low warm clouds. Consequently the GPCP values are considered to likely be low. In view of the energy imbalance at the surface and the above discussion, we somewhat arbitrarily increase the GPCP values by 5%, in order to accommodate likely revisions from CloudSat studies and to bring them closer to CMAP in the tropics and subtropics. Hence the global value assigned is 80.0 W m-2 (2.76 mm/day).
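The mm/day to W/m2 conversion in that passage is straightforward to verify: 1 mm/day of precipitation over 1 m^2 is 1 kg/day of condensed water, each kilogram having released its latent heat of vaporization on the way up (taking L ≈ 2.5 MJ/kg, which is evidently the value the authors used):

```python
L_VAP = 2.5e6            # J/kg, latent heat of vaporization of water
SECONDS_PER_DAY = 86400.0

def latent_flux(mm_per_day):
    """Latent heat flux (W/m^2) implied by a precipitation rate in mm/day."""
    return mm_per_day * L_VAP / SECONDS_PER_DAY

print(f"{latent_flux(2.63):.1f} W/m^2")   # 76.1, vs the quoted 76.2
print(f"{latent_flux(2.76):.1f} W/m^2")   # 79.9, vs the quoted 80.0
```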

I have a question which I hope Willis or somebody else can answer. We all hear regularly about how ocean warming is *bound* to cause sea level rise. I have read that ~52% by volume of the ocean is below a depth of 2000 metres. I also recall reading that this deep ocean water is extremely cold on average, with a temperature of 3 – 4°C. From high school physics I vaguely recall that water is at its densest at about 4°C. So if deep ocean water is, let’s say, 3.5°C, then surely heating it a little bit (ARGO data says 0.065°C) will cause it to contract, not expand, and sea levels to fall, not rise?
I would be grateful if somebody more knowledgeable than I could explain or clarify this.

The variation of sea level with ocean temperature is called the “steric” sea level. It’s pretty well understood. You might start with Anne Cazenave’s work here, or google “steric sea level”.
w.
PS—As others said, in the ocean, sea water continues to contract right up until freezing.

Willis Eschenbach says:
January 6, 2014 at 10:17 am
Thank you to Willis and others for taking the time to answer my question. It’s much appreciated.
As a follow up, has anybody looked into the contribution of so-called ‘primary water’, namely water produced by chemical reactions deep underground, to rising sea levels? I know that the prevailing view is that there is a fixed amount of water on Earth, but I have read that under certain conditions the rocks themselves can produce water. For example, super deep boreholes such as Kola in Russia have come up against phenomenal problems from water at levels at which no water is supposed to exist:
“And if the non-existence of an entire layer of the Earth’s crust is not surprising enough, the cracks of the rock many kilometers below the surface were found to be saturated with water. As free water is not supposed to exist at such great depths, researchers believe the water consists of hydrogen and oxygen atoms that have been squeezed out of the surrounding rock by the enormous pressure and retained below the surface due to a layer of impermeable rock above.”
http://www.atlasobscura.com/places/kola-superdeep-borehole
Regarding primary water, about the best review of the topic is here (please don’t be put off by the less-than-user-friendly way the article appears):
http://merlib.org/node/5063
It acknowledges the contributions of Adolf Erik Nordenskjold, nominated for the Nobel Prize for his work, Frank Wigglesworth Clarke, Armand Gautier, Josiah Edward Spurr and especially Stephan Reiss.
I have never seen ‘primary water’ mentioned on WUWT (I am a regular reader) or much anywhere else. Is it nonsense? To my mind (I am not a trained scientist) I can imagine rocks producing water very naturally given hydrogen, oxygen and great pressure.
What comments do others have?

@Willis: “Say what? Latent heat gets lots of attention in the K/T budget. The paper is here. They estimate evapotranspiration by noting that what goes up must come down, so evaporation must equal precipitation…”
So true, but not for everything.
So true, but not for everything.
Water (in the ocean) becomes water vapor (in the atmosphere) by gathering up from the ocean the latent heat of evaporation (590-690 cal per gram), which is readily available at the high-KE end of the molecular energy distribution, thereby cooling the ocean surface (only the hotter molecules evaporate).
This water-vapor-laden air then rises, being lighter than the drier air, until at high altitude it encounters air that is much colder and drier, which proceeds to suck the (latent) heat out of the water vapor until it is cool enough to condense on any available nuclei to form water droplets, or, if cold enough, give up another 80 cal per gram and become ice crystals.
So the latent heat energy comes from the ocean surface layer (film, if you like) and is deposited at some higher altitude, from where it can be lost to space by further convection and, ultimately, thermal radiation.
The water droplets or ice crystals (snow) are no longer in possession of the latent heat, so any subsequent precipitation does NOT return the latent heat energy to the surface.
The latent heat does not heat the upper atmosphere (raise its temperature). The heat flow is from the cooling water-vapor-laden air to the even colder upper air, which is colder because it is losing heat to still higher air. The temperature of everything keeps falling as it expands into the emptiness of the upper atmosphere at higher altitudes.

CERES data pertains entirely to radiative fluxes at TOA, i.e., largely to ATMOSPHERIC inputs and emissions. OHC data, which also has huge uncertainties due to lack of uniform spatial coverage, pertains to STORED thermal energy entirely below the atmosphere. While there is coupling between the two, the physical differences are intrinsic and should not be expected to produce comparable empirical features, such as quarterly variations. Much of the speculation here about the differences is geophysically misguided.

Hi Willis,
It’s nice to see you doing this work – especially since I am about to pick up the CERES data for something. It is a great data check. I am still hoping that you can reconcile your values with Nic Lewis’s.
However, my main comment is that according to my not very accurate eyeball, you appear to have picked up Levitus 0-700m data. Is that correct? If so, then I think you need also to benchmark your “no adjustment to CERES trend” data against the quarterly data for 0-2000m OHC data over recent years. According to most OHC analyses there is still ongoing heat gain below 700m. Conventional wisdom is that the OHC data should then be somewhere around 93% of the net energy gained from integrating the radiative flux imbalance.

To add to Paul_K, Willis – this is a great post. Scrolling through your comments back and forth with Nick, is it as simple as averaging the imbalance over global surface area (0.58) vs global ocean surface area (0.85)? I’ve wondered about the huge changes in quarterly OHC – especially in the 700-2000m range – and just assumed that’s why they still do pentadal averages regardless of what the error bars say.
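The area-ratio conjecture is quick to check (the 0.71 ocean fraction is the usual round figure):

```python
OCEAN_FRACTION = 0.71   # fraction of Earth's surface covered by ocean

global_flux = 0.58                         # W/m^2 averaged over the whole globe
ocean_flux = global_flux / OCEAN_FRACTION  # same energy spread over ocean only

print(f"{ocean_flux:.2f} W/m^2")   # ~0.82: close to, but not exactly, 0.85
```

So the area conversion gets most of the way from 0.58 to 0.85 but not all of it; the remainder presumably reflects the deliberate choice of Hansen’s 0.85 W/m^2 figure discussed in the post.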

CERES data pertains entirely to radiative fluxes at TOA, i.e., largely to ATMOSPHERIC inputs and emissions. OHC data, which also has huge uncertainties due to lack of uniform spatial coverage, pertains to STORED thermal energy entirely below the atmosphere. While there is coupling between the two, the physical differences are intrinsic and should not be expected to produce comparable empirical features, such as quarterly variations. Much of the speculation here about the differences is geophysically misguided.

I agree with the first two sentences. However, variations in the thermal energy stored in the oceans have to be compensated for by transfers to other parts of the earth system, or to space, for conservation of energy. So where does this energy go, if it is not lost to (or gained from) space? The atmosphere and cryosphere variabilities do not compensate; e.g., the latent heat of arctic sea ice changes at a rate about 4 orders of magnitude below the quarterly fluctuations in the OHC data. The energy content of the dry atmosphere varies at a rate about 3 orders of magnitude less than the OHC data. There are contributions from land ice, antarctic sea ice, thermal heat storage in the continents, latent heat of evaporation/condensation, atmospheric kinetic energy, biomass, etc., but nothing close to having the storage capacity required to offset the reported ocean changes. So if these losses and gains don’t show up in the CERES data, where are they?

bill_c:
Thermal energy STORED at significant depths in the ocean doesn’t have to go anywhere that currents don’t take it. Unfortunately, the known currents often take it to totally unmonitored locations, leaving the seasonal imprint of the very sparsely monitored ones. There’s no compelling physical reason why the truly global CERES data should manifest similar variability.

1sky1,
In other words you agree with me – heat transport to unmonitored locations, as you say – is your “answer” to my speculation that the observed fluctuations are unrealistically large. I’m ok with that.
