I’ve been investigating one of my favorite datasets over the last few days, the CERES satellite-based top-of-atmosphere (TOA) radiation dataset. In particular, I’ve taken month-by-month global and hemispheric averages of the data. The dataset consists of observations of three variables: downwelling solar radiation, upwelling longwave (infrared) radiation, and upwelling shortwave radiation (reflected sunlight). From these I derive a further quantity, the TOA imbalance, calculated as downwelling solar minus upwelling (reflected) solar minus upwelling longwave. That gives a fascinating look at the overall radiation picture.
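As a sketch, the imbalance calculation is just flux bookkeeping, month by month. The numbers below are illustrative placeholders of roughly the right size, not actual CERES values:

```python
# Net TOA imbalance = downwelling solar - reflected solar - upwelling longwave.
# All flux values below are made-up monthly means (W/m^2), for illustration only.
solar_in = [341.2, 341.5, 341.9]       # downwelling solar
solar_reflected = [99.8, 100.1, 99.5]  # upwelling shortwave (reflected sunlight)
longwave_out = [239.9, 240.2, 240.6]   # upwelling longwave (infrared)

imbalance = [s - r - lw for s, r, lw in zip(solar_in, solar_reflected, longwave_out)]
print([round(x, 1) for x in imbalance])  # → [1.5, 1.2, 1.8]
```

A positive value means more energy entering the system at the top of the atmosphere than leaving it.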

I got to thinking about this because of a curious claim in a recent paper published in Nature Climate Change entitled “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods” ([see pdf link below]). I did love the whole concept of “model-based evidence”, but that wasn’t what caught my eye. It was this statement (emphasis mine):

“[…] Here we analyse twenty-first-century climate-model simulations that maintain a consistent radiative imbalance at the top-of-atmosphere of about 1 W m−2 as observed for the past decade”

Anyhow, here’s some news regarding that claim of a consistent TOA imbalance, from the CERES satellite dataset:

[…]

As you can see, they have a couple of big problems with their claims of a consistent 1 W/m² imbalance over the last decade.

First, it is contradicted by the very data that they claim establishes it. There is nothing “consistent” about what is shown by the Levitus data, unless you take a long-term average.

The second problem is with the Levitus data itself … where is the energy coming from or going to? While the CERES TOA imbalance is not accurate in an absolute sense, it is very precise, and it would certainly register a fluctuation of the magnitude shown in the Levitus data. If that much energy were actually entering or leaving the ocean, the CERES satellite would surely have picked it up … so where is it?

I’ve discussed what I see as unrealistic error bars in the Levitus data here. My current comparison of Levitus with the CERES data does nothing to change my previous conclusion—the precision of the Levitus data is greatly overestimated.

Finally, the idea that we have sufficiently accurate, precise, and complete observations to determine the TOA imbalance to be, say, 0.85 watts per square metre is … well, I’ll call it premature and mathematically optimistic. We simply do not have the data to determine the Earth’s energy balance to an accuracy of ± one watt per square metre, either from the ocean or from the satellites.
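To see why ± one watt per square metre is such a tall order, consider simple error propagation. If the three measured flux channels carry independent absolute-calibration uncertainties (the values below are illustrative assumptions, not official CERES specifications), the uncertainty in their difference adds in quadrature:

```python
import math

# Assumed 1-sigma absolute-calibration uncertainties (W/m^2), illustrative only:
u_solar = 1.0      # downwelling solar
u_shortwave = 2.0  # upwelling shortwave
u_longwave = 2.0   # upwelling longwave

# Uncertainty of (solar - shortwave - longwave), treating the errors as independent:
u_imbalance = math.sqrt(u_solar**2 + u_shortwave**2 + u_longwave**2)
print(round(u_imbalance, 1))  # → 3.0
```

Even with modest per-channel uncertainties, the absolute uncertainty of the derived imbalance lands at several watts per square metre, well above the claimed signal.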

The observed imbalance is not shown in Figure 15b. IPCC AR5 Chapter 10 reports a theoretical ERF (“effective radiative forcing”) of 2.3 W/m² as the current value for that report. AR5 Chapter 2 reports the observed imbalance as 0.6 W/m², which reconciles with the Hansen et al. (2011) observations above but not with the theoretical ERF in AR5.

“Stratospheric aerosol forcing to 2014 uses the data set of Sato et al. (1993) as updated at http://www.columbia.edu/~mhs119/StratAer/. Future years have constant aerosol optical depth 0.0052 yielding effective forcing -0.12 Wm(-2), implemented by using fixed 1997 aerosol data. Tropospheric aerosol growth is assumed to slow smoothly, leveling out at -2 Wm(-2) in 2100. Future solar forcing is assumed to have an 11-year cycle with amplitude 0.25 Wm(-2). Net forcing exceeds 5 Wm(-2) by the end of the 21st century, about 3 times the current forcing (Fig. S16).”

This is all theoretical; the solar assumption (a constant repeating cycle) was already wrong at publication. Solar activity has declined in SC 24 and is predicted to continue declining for the coming decades. SC 24 resembles SC 5, NOT SC 23.
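For concreteness, here is a sketch of the constant-cycle solar assumption quoted above: an 11-year sinusoid with 0.25 W/m² amplitude, repeating unchanged forever. Reading “amplitude” as the peak deviation from the cycle mean is my assumption, as is the choice of start year:

```python
import math

def solar_forcing_anomaly(year, start_year=2014.0, amplitude=0.25, period=11.0):
    """Assumed future solar forcing anomaly (W/m^2) relative to the cycle mean:
    a fixed 11-year sinusoid, per the quoted supplementary text."""
    return amplitude * math.sin(2.0 * math.pi * (year - start_year) / period)

# Quarter-cycle later (2.75 yr), the anomaly peaks at the full +0.25 W/m^2:
print(round(solar_forcing_anomaly(2016.75), 3))  # → 0.25
```

The point of the criticism above is that no such fixed sinusoid captures a multi-decadal decline in solar activity; by construction, the assumed forcing never trends anywhere.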

They say “Net forcing [ERF] exceeds 5 Wm(-2) by the end of the 21st century”, but they fail to address the problem that the current ERF (2.3 W/m²) already exceeds the observed imbalance (0.6 W/m²) by 1.7 W/m².