Wednesday, 15 April 2015

Why raw temperatures show too little global warming

In the last few months I have written several posts on why raw temperature observations may show too little global warming. Let's put it all in perspective.

People who have followed the climate "debate" have probably heard of two potential reasons why raw data might show too much global warming: urbanization and poor siting. These are the two non-climatic changes that mitigation sceptics promote, claiming they are responsible for a large part of the warming seen in the global mean temperature records.

If you only know of biases producing a trend that is artificially too strong, it may come as a surprise that the raw measurements actually have too small a trend and that removing non-climatic changes increases the trend. For example, in the Global Historical Climate Network (GHCNv3) of NOAA, the land temperature change since 1880 is increased by about 0.2°C by the homogenization method that removes non-climatic changes. See figure below.
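To make the idea of removing non-climatic changes concrete, here is a toy sketch of the core step behind statistical homogenization: comparing a candidate station to a neighbour and looking for a sudden shift in the difference series. This is a deliberately minimal illustration, not the actual GHCN/NOAA pairwise homogenization algorithm; the scoring function and the example data are my own assumptions.

```python
import numpy as np

def find_breakpoint(candidate, reference):
    """Locate the single most likely mean shift in the difference series
    between a candidate station and a neighbouring reference series.
    A toy version of the breakpoint tests used in homogenization."""
    diff = np.asarray(candidate) - np.asarray(reference)
    n = len(diff)
    best_k, best_score = None, 0.0
    for k in range(2, n - 2):  # try every possible break position
        left, right = diff[:k], diff[k:]
        # difference of segment means, weighted for segment lengths
        score = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
        if score > best_score:
            best_k, best_score = k, score
    shift = diff[best_k:].mean() - diff[:best_k].mean()
    return best_k, shift

# toy data: a station that artificially cools by 0.5 °C after year 50,
# e.g. because of a relocation (hypothetical numbers)
rng = np.random.default_rng(0)
neighbour = rng.normal(0.0, 0.1, 100)
station = neighbour + rng.normal(0.0, 0.1, 100)
station[50:] -= 0.5  # the non-climatic change
k, shift = find_breakpoint(station, neighbour)
```

Because the regional climate signal is shared by both stations, it cancels in the difference series, and the non-climatic jump stands out clearly; the detected shift can then be subtracted from the later part of the record.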

The global mean temperature estimates from the Global Historical Climate Network (GHCNv3) of NOAA, USA. The red curve shows the global average temperature in the raw data. The blue curve is the global mean temperature after removing non-climatic changes. (Figure by Zeke Hausfather.)

The adjustments are not always that "large". The Berkeley Earth group makes much smaller adjustments; their global mean temperature is shown below. However, as Zeke Hausfather notes in the comments below, even the curve for which the method did not explicitly detect breakpoints is partially homogenized, because the method penalises stations whose trend differs strongly from that of their neighbours. After removal of non-climatic changes, Berkeley Earth arrives at a climatic trend similar to that seen in GHCNv3.

The global mean temperature estimates from the Berkeley Earth project (previously known as BEST), USA. The blue curve is computed without using their method to detect breakpoints; the red curve shows the temperature after adjusting for non-climatic changes. (Figure by Steven Mosher.)
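The downweighting of locally divergent trends mentioned above can be sketched in a few lines. This is only an illustration of the general idea, similar in spirit but not in detail to what Berkeley Earth actually does; the weighting function and the station slopes below are my own invented assumptions.

```python
import numpy as np

def trend(series):
    """Least-squares slope of a yearly series (degrees per year)."""
    x = np.arange(len(series))
    return np.polyfit(x, series, 1)[0]

def downweighted_mean_trend(stations):
    """Toy illustration: stations whose trend deviates strongly from
    the median trend of the group get less weight, so one inhomogeneous
    station cannot drag the regional trend far off."""
    trends = np.array([trend(s) for s in stations])
    spread = np.median(np.abs(trends - np.median(trends))) + 1e-9
    # weight falls off with the deviation from the median trend
    weights = 1.0 / (1.0 + ((trends - np.median(trends)) / spread) ** 2)
    return np.sum(weights * trends) / np.sum(weights)

# toy example: nine stations warming at about 0.01 °C/yr, one with a
# spurious cooling trend (hypothetical numbers)
x = np.arange(100)
slopes = [0.008, 0.009, 0.010, 0.011, 0.012,
          0.009, 0.010, 0.011, 0.010, -0.05]
stations = [s * x for s in slopes]
regional = downweighted_mean_trend(stations)
```

The plain average of these slopes is 0.004 °C/yr, dragged down by the single outlier; the downweighted estimate stays close to the 0.01 °C/yr shared by the other nine stations. In this sense a spatial averaging method can homogenize implicitly, even without detecting any breakpoints.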

Let's go over the reasons why the temperature trend may show too little warming.

Transition to automatic weather stations

Currently we are in a transition to automatic weather stations. This can produce large changes in either direction in the networks in which they are introduced. What the net global effect is, is not clear at the moment.

Irrigation

Irrigation on average decreases the 2m-temperature by about 1°C. At the same time, irrigation has spread enormously during the last century. People preferentially live in irrigated areas and weather stations serve agriculture, so weather stations may be more likely to be erected in irrigated areas than elsewhere. In that case irrigation would produce a spurious cooling trend. For suburban stations, increased watering of gardens could also produce a spurious cooling trend.

It is understandable that in the past the focus was on urbanization as a non-climatic change that could make the warming in the climate records too strong. Then the focus was on whether climate change was happening (detection). To make a strong case, science had to show that even the minimum climatic trend was too large to be due to chance.

Now that we know that the Earth is warming, we no longer need just a minimum estimate of the temperature trend, but the best estimate: a realistic assessment of models and impacts requires it. Thus we need to understand the reasons why raw records may show too little warming and quantify these effects.

Just because the mitigation sceptics are talking nonsense about the temperature record does not mean that there are no real issues with the data, nor that statistical homogenization can remove trend errors sufficiently well. This is a strange blind spot in climate science. As Neville Nicholls, one of the heroes of the homogenization community, writes:

When this work began 25 years or more ago, not even our scientist colleagues were very interested. At the first seminar I presented about our attempts to identify the biases in Australian weather data, one colleague told me I was wasting my time. He reckoned that the raw weather data were sufficiently accurate for any possible use people might make of them.

One wonders how this colleague knew this without studying it.

The reasons for a cooling bias have been studied much too little. At this time we cannot tell how important each reason is. Any one of them is potentially large enough to explain the 0.2°C per century trend bias found in GHCNv3, especially in the light of the large range of possible values, a range that we often cannot even estimate at the moment. In fact, all the above-mentioned reasons together could explain a much larger trend bias, which could dramatically change our assessment of the progress of global warming.

The fact is that we cannot quantify the various cooling biases at the moment and it is a travesty that we can't.

2 comments:

Saying that Berkeley's adjustments are smaller is somewhat misleading; some of the difference is accounted for by the fact that Berkeley's spatial fields are constructed in a manner that downweights the impact of locally-divergent trends, which is itself a form of homogenization not shown in the figure. If that step were excluded, the effect of pairwise homogenization would likely be larger and more comparable to NOAA's PHA (as is the case in the U.S.).