provides a valuable summary of global warming, as utilized by the IPCC and others. The text we will use to discuss global warming will be based on the equations below.

“According to the radiative-convective equilibrium concept, the equation for determining global average surface temperature of the planet is
dH/dt = f − λT′
where
H = ∫₀^zb ρ Cp T dz
is the heat content of the land-ocean-atmosphere system with ρ the density, Cp the specific heat, T the temperature, and zb the depth to which the heating penetrates. [These equations] describe the change in the heat content where f is the radiative forcing at the tropopause, T′ is the change in surface temperature in response to a change in heat content, and λ is the climate feedback parameter (Schneider and Dickinson, 1974), also known as the climate sensitivity parameter, which denotes the rate at which the climate system returns the added forcing to space as infrared radiation or as reflected solar radiation (by changes in clouds, ice and snow, etc.). In essence, λ accounts for how feedbacks modify the surface temperature response to the forcing. In principle, T′ should account for changes in the temperature of the surface and the troposphere, and since the lapse rate is assumed to be known or is assumed to be a function of surface temperature, T′ can be approximated by the surface temperature.”
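The behavior of these relations can be made concrete with a minimal zero-dimensional sketch. The code below integrates dH/dt = f − λT′ for a single well-mixed slab with heat capacity per unit area C = ρ Cp zb; every parameter value is an assumption chosen only for illustration (a 100 m mixed layer, a constant ~2×CO2-sized forcing, a round-number λ), not a measurement:

```python
# Zero-dimensional energy-balance sketch of the quoted relations.
# All parameter values are illustrative assumptions.

RHO = 1025.0       # seawater density, kg/m^3
CP = 3990.0        # specific heat of seawater, J/(kg K)
ZB = 100.0         # assumed mixed-layer depth, m
F = 3.7            # assumed constant forcing, W/m^2 (~2xCO2)
LAM = 1.2          # assumed climate feedback parameter, W/(m^2 K)

C = RHO * CP * ZB          # heat capacity per unit area, J/(m^2 K)
DT = 86400.0 * 30          # time step: ~1 month, in seconds

t_prime = 0.0              # surface temperature change T', K
for month in range(12 * 500):      # integrate for ~500 years
    dH_dt = F - LAM * t_prime      # net heating rate, W/m^2
    t_prime += dH_dt * DT / C      # dT' = (dH/dt) * dt / C

print(round(t_prime, 2))           # -> 3.08, near equilibrium
print(round(F / LAM, 2))           # -> 3.08, the analytic value f/lambda
```

The point of the sketch is the structure of the feedback: T′ relaxes toward the equilibrium f/λ on a timescale C/λ, so the equilibrium warming depends only on the ratio of forcing to feedback, while the heat capacity sets how long the adjustment takes.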

The use of T′ to diagnose global warming [which is dH/dt] clearly involves estimates of f and λ, which necessarily introduces complexity into obtaining dH/dt. Moreover, as we have documented in our papers, e.g.

there remain major problems with the accuracy of obtaining a global average value of T’. Among the unresolved problems are:

1. There is no one value of T′. The estimates of this quantity are constructed from sea and land measurements of temperature anomalies and their absolute values across the globe. The actual loss of heat to space by radiation is proportional to T to the fourth power, so a warm anomaly produces a greater long-wave flux to space in the tropics than at higher latitudes in winter [e.g. Liljegren, 2008];

2. Neglecting concurrent trends and anomalies in absolute humidity biases the dry-bulb temperature estimates of T′, since it is really moist surface air enthalpy that should be measured [e.g. Davey et al 2006];

3. The use of land surface minimum temperatures introduces a bias, since they respond in an amplified manner to changes in heating and cooling higher in the boundary layer. This has been a warm bias in recent years [e.g. see Klotzbach et al, 2009];

4. Land observation sites have often been poorly located and, in the USA, have introduced a warm bias. The homogenization of poorly and well-sited locations smears this warm bias into the homogenized data analysis [a paper in preparation which will be submitted within the next two weeks on this research].
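Two of the points above lend themselves to a short numerical sketch: the T⁴ nonlinearity in item 1, and the moist-enthalpy point in item 2. All the temperatures and humidities below are assumed round numbers chosen only to illustrate the effects, and the moist enthalpy uses the common first-order approximation h ≈ Cp·T + Lv·q:

```python
# Sketch (with assumed, not observed, values) of two points above:
# (1) by the T^4 law, the same +1 K anomaly changes outgoing blackbody
#     flux more in the warm tropics than at cold winter high latitudes;
# (2) two air samples with the same dry-bulb temperature differ in
#     moist enthalpy h ~ Cp*T + Lv*q when their humidities differ.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
CP_AIR = 1005.0    # specific heat of dry air, J/(kg K)
LV = 2.5e6         # latent heat of vaporization, J/kg

def flux_change(t_kelvin, anomaly=1.0):
    """Extra blackbody emission from a +anomaly K change, W/m^2."""
    return SIGMA * ((t_kelvin + anomaly) ** 4 - t_kelvin ** 4)

def moist_enthalpy(t_celsius, q):
    """Approximate moist enthalpy, J/kg (q = specific humidity, kg/kg)."""
    return CP_AIR * t_celsius + LV * q

# Point 1: same +1 K anomaly, different flux change to space.
tropics = flux_change(300.0)        # ~6.2 W/m^2 per +1 K
winter_polar = flux_change(250.0)   # ~3.6 W/m^2 per +1 K

# Point 2: same dry-bulb temperature, different heat content.
humid = moist_enthalpy(25.0, 0.018)
dry = moist_enthalpy(25.0, 0.005)

print(round(tropics, 1), round(winter_polar, 1))  # -> 6.2 3.6
print(round(humid - dry))   # -> 32500 J/kg invisible to dry-bulb T
```

The first pair of numbers shows why a global average of temperature anomalies is not a proxy for radiative loss; the second shows how much heat content a dry-bulb average simply cannot see.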

There is an obvious solution to these problems with respect to diagnosing global warming and cooling. We should adopt a finite-difference measurement of dH/dt [e.g. over monthly intervals].
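As a sketch of what that finite-difference diagnostic looks like in practice, the snippet below converts a change in upper-ocean heat content over one month into an equivalent global-average imbalance in W/m². The ΔOHC value is a made-up illustration, not an observation:

```python
# Finite-difference dH/dt from upper-ocean heat content, expressed per
# unit of Earth's surface area. The delta-OHC input is illustrative.

EARTH_SURFACE_AREA = 5.1e14    # m^2
SECONDS_PER_MONTH = 86400.0 * 30

def imbalance_w_per_m2(delta_ohc_joules, dt_seconds=SECONDS_PER_MONTH):
    """Monthly finite-difference dH/dt, in W/m^2 of Earth's surface."""
    return delta_ohc_joules / (dt_seconds * EARTH_SURFACE_AREA)

delta_ohc = 1.0e21             # assumed monthly OHC change, J
print(round(imbalance_w_per_m2(delta_ohc), 2))   # -> 0.76 W/m^2
```

Unlike T′, this quantity needs no estimate of f or λ: it is a direct bookkeeping of heat accumulation over the interval.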

There has been quite a bit of discussion in the posts on the accuracy of monitoring dH/dt from upper ocean heat content. The first requirement, however, is to agree that this is now the preferred metric for measuring global warming and cooling, replacing T′ [or, at least, used in addition to that metric].

My recommendation for the report of the meeting held September 7-9, 2010, in Exeter, United Kingdom, titled

is to focus on improved temperature data sets for regional studies (in which the issues we have raised in items #1 to #4, among others, need to be addressed), while accepting that the time to use T′ as the primary metric to diagnose global warming and cooling has passed.

It is my opinion that they should recommend we move to the monitoring and reporting of dH/dt [in real time] to policymakers and the public using upper ocean heat content.

Follow up – November 23, 2010:

My recommendations, as far as I know, remain ignored (rather than accepted or refuted) by NCDC, GISS, and CRU.