CRUTEM

DRAFT

Reconstructing CRUTEM

Introduction

In December 2009, the Met Office posted code and a dataset for reconstructing the global land temperature anomaly series produced by the Met Office Hadley Centre and the Climatic Research Unit (CRU). The dataset was a subset of the full CRUTEM station data. The released subset was composed of data known to be public by virtue of coming from weather stations that are part of the World Meteorological Organization (WMO) Regional Basic Climatological Network (RBCN).

Reconstructing CRUTEM with the Met CRU data

The Met Office CRU code and data install and run on any system with Perl available. Download the data from the link for All.zip and unzip the datasets. Download the two Perl scripts. Run the scripts to create a gridded text file and a yearly global temperature anomaly series.

The CRU code uses metadata in the station files to determine the latitude and longitude of each station. In addition, a ‘normal’ (the mean for each month over the base period 1961-1990) has been calculated in advance and included in the metadata. Similarly, a ‘standard deviation’ for each month over the period 1951-1990 has been calculated in advance and included in the metadata.
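As a sketch of how such normals are used, the following computes 1961-1990 monthly normals and converts monthly means into anomalies. The data layout here is illustrative only, not the actual CRU station file format:

```python
# Sketch: compute 1961-1990 monthly normals and convert monthly means
# to anomalies. The dict-based layout is illustrative, not the CRU
# station file format.

def monthly_normals(records, start=1961, end=1990):
    """records: dict year -> list of 12 monthly means (None = missing).
    Returns a list of 12 normals over the base period."""
    normals = []
    for m in range(12):
        vals = [records[y][m] for y in records
                if start <= y <= end and records[y][m] is not None]
        normals.append(sum(vals) / len(vals) if vals else None)
    return normals

def anomalies(records, normals):
    """Subtract the monthly normal from each monthly mean."""
    return {y: [None if (v is None or n is None) else round(v - n, 2)
                for v, n in zip(vals, normals)]
            for y, vals in records.items()}
```

In the real pipeline the normals are precomputed and shipped in the station metadata, so the scripts only need the subtraction step.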

Reconstructing CRUTEM with the GHCN RAW data (1243 stations)

Next, we compare the CRU subset with the GHCN raw data set. The GHCN raw data provides the monthly mean for individual weather stations with few modifications; there is an attempt to fill missing data (see FILNET). The temperature data for each station is pulled from the publicly available v2.mean file, and station metadata is extracted from the v2.temperature.inv file. The normals and standard deviations are recalculated (both to a 1961-1990 baseline), and all of this information is combined into station files conforming to the CRU station file format. Then the CRU station gridder and global average scripts are run on the GHCN RAW data set for the 1243 common weather stations. (The adjusted comparison below follows the same steps using the v2.mean_adj file, with the separate duplicate records averaged into a single series.)
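The v2 files are fixed-width text. A sketch of parsing them and averaging a station's duplicate records into a single series follows; the column layout assumed here (11-character station id, 1-digit duplicate number, 4-digit year, twelve 5-character monthly values in tenths of a degree C, with -9999 marking missing months) is my reading of the GHCN v2 documentation, so verify against the readme before relying on it:

```python
# Sketch: parse GHCN v2.mean-style fixed-width lines and average the
# duplicate records for each station/year into a single series.
# Assumed layout (per the GHCN v2 readme): cols 0-10 station id,
# col 11 duplicate number, cols 12-15 year, then twelve 5-character
# monthly means in tenths of a degree C; -9999 means missing.

def parse_v2_line(line):
    station = line[0:11]
    dup = line[11]
    year = int(line[12:16])
    months = []
    for m in range(12):
        raw = int(line[16 + 5 * m : 21 + 5 * m])
        months.append(None if raw == -9999 else raw / 10.0)
    return station, dup, year, months

def average_duplicates(lines):
    """For each (station, year, month), average the values reported by
    the different duplicate records; months no duplicate reports stay None."""
    merged = {}  # (station, year) -> list of 12 value lists
    for line in lines:
        station, _dup, year, months = parse_v2_line(line)
        slot = merged.setdefault((station, year), [[] for _ in range(12)])
        for m, v in enumerate(months):
            if v is not None:
                slot[m].append(v)
    return {key: [round(sum(vs) / len(vs), 1) if vs else None
                  for vs in slots]
            for key, slots in merged.items()}
```

Simple per-month averaging is only one way to merge duplicates; other reconstructions weight or splice the duplicate series instead.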

Reconstructing CRUTEM with the GHCN ADJ data (1243 stations)

Next, we compare the CRU subset with the GHCN adjusted data set. Of the 1741 stations in the released CRU subset, 1243 are also available in the GHCN data set. The GHCN adjusted data is a data set with corrections for discontinuities caused by such things as changes in record-keeping methodology and station moves. These homogeneity adjustments are described in the paper listed below.

The temperature data for each station is pulled from the publicly available v2.mean_adj file, the separate duplicate records are averaged into a single series, and metadata is extracted from the v2.temperature.inv file. Again, the normals and standard deviations are recalculated (both to a 1961-1990 baseline). The CRU station gridder and global average scripts are then run on the GHCN ADJ data set for the 1243 common weather stations.
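The grid-and-average step common to all three runs can be sketched as follows. The 5x5 degree boxes and cosine-of-latitude weighting are the usual CRUTEM-style approach; the released Perl scripts may differ in detail, so treat this as an illustration of the idea rather than a transcription of their logic:

```python
import math

# Sketch of the grid-and-average step: station anomalies are binned
# into 5x5 degree boxes, each box takes the mean of its stations, and
# the global average weights each occupied box by the cosine of its
# central latitude (boxes near the poles cover less area). The actual
# CRU scripts may differ in detail.

def grid_anomalies(stations, box=5.0):
    """stations: list of (lat, lon, anomaly) tuples. Returns a dict
    (lat_index, lon_index) -> mean anomaly of stations in that box."""
    boxes = {}
    for lat, lon, anom in stations:
        key = (math.floor(lat / box), math.floor(lon / box))
        boxes.setdefault(key, []).append(anom)
    return {k: sum(v) / len(v) for k, v in boxes.items()}

def global_average(boxes, box=5.0):
    """Cosine-of-latitude weighted mean over the occupied grid boxes."""
    num = den = 0.0
    for (lat_i, _lon_i), anom in boxes.items():
        central_lat = (lat_i + 0.5) * box
        w = math.cos(math.radians(central_lat))
        num += w * anom
        den += w
    return num / den
```

Gridding first, then averaging, keeps a dense cluster of stations from dominating the global series: the cluster collapses into one box before the area weighting is applied.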
References:

The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age. -H.P. Lovecraft