Has the CRUTEM4 Data been fiddled with?

It is apparent from the data that CRUTEM4 temperature adjustments have, in part, been made with reference only to the earlier CRUTEM3 data rather than to raw temperature data. Further, the adjustments depend on the month of the data, and they are applied to blocks of 20 or 30 consecutive years.

In the case of Adelaide (946720), for 30 years from 1857, CRUTEM4:

· Lowers all January temperatures by 1.4°C

· Lowers all February temperatures by 0.9°C

· Lowers all March temperatures by 1.7°C

· Lowers April to December temperatures by 0.5 to 1.1°C.

Thereafter, there are no adjustments until 2000, when a smattering of adjustments appear, mostly raising the temperature.

There are many examples of this practice. The total effect of all the differences between CRUTEM4 and CRUTEM3, where there is corresponding data, is to accentuate the warming trend by lowering pre-1995 temperatures by 0.1 to 0.2°C and raising post-1995 temperatures by a similar amount.

This does not mean that the overall effect of all CRUTEM4 updates will induce a change of the same magnitude in the HADCRUT4 anomaly, as no account has been taken of deleted and added stations, or of the number of stations in the database relative to the number which display differences as analysed here.

I believe that CRUTEM4 is seriously flawed, due to the apparent selective mechanical adjustments to blocks of station temperatures where the criteria for adjustment are a function of the month name. Until this is adequately explained, CRUTEM4 should be withdrawn.

Background

In March 2012 the Climatic Research Unit (CRU) and the Met Office Hadley Centre released the land temperature dataset CRUTEM4, along with the station data from which it was constructed. A cursory inspection of the new dataset revealed some irregularities in Australian data. In particular, there were puzzling differences between CRUTEM3 and CRUTEM4 data.

A program was written to compare the two complete sets of CRUTEM data and highlight the differences. It compares the two sets to report:

· Stations in CRUTEM3 which do not appear in CRUTEM4. That is, they have been dropped in the construction of CRUTEM4.

· Stations in CRUTEM4 which do not appear in CRUTEM3. That is, new stations.

· Stations which appear in both sets and which have an arithmetical difference in any month of any year. The whole 12 months are reported for these. All missing data (reported as -99 in CRUTEM data) was converted to zero. This means that some data is lost if a valid temperature appears in one set with a matching -99 in the other dataset.

To exclude minor differences due to issues (like roundoff or precision) which could arise in comparing two datasets derived in different ways, a threshold of 0.22 was applied. That is, years are selected only if they contain differences whose absolute value exceeds 0.22°C.
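The comparison just described can be sketched in a few lines. This is a minimal sketch, not the workbook's actual code: parsing of the raw station files is omitted, the input is assumed to be dicts keyed by (station, year) holding 12 monthly values, and the station IDs and temperatures below are illustrative, not real CRUTEM data.

```python
THRESHOLD = 0.22  # ignore sub-threshold differences (roundoff, precision)
MISSING = -99.0   # CRUTEM missing-data marker

def compare(set3, set4, threshold=THRESHOLD):
    """Report stations dropped, stations added, and changed station-years."""
    stations3 = {station for station, _ in set3}
    stations4 = {station for station, _ in set4}
    dropped = stations3 - stations4          # in CRUTEM3 but not CRUTEM4
    added = stations4 - stations3            # new in CRUTEM4
    changed = {}                             # (station, year) -> 12 differences
    for key in set3.keys() & set4.keys():
        diffs = []
        for t3, t4 in zip(set3[key], set4[key]):
            # missing data (-99) is converted to zero, as in the report
            t3 = 0.0 if t3 == MISSING else t3
            t4 = 0.0 if t4 == MISSING else t4
            diffs.append(round(t4 - t3, 2))
        if any(abs(d) > threshold for d in diffs):
            changed[key] = diffs
    return dropped, added, changed

# Illustrative inputs: one common station-year, one dropped, one added.
set3 = {("946720", 1857): [22.6, 28.3, 19.7] + [0.0] * 9,
        ("999001", 1900): [10.0] * 12}
set4 = {("946720", 1857): [21.2, 27.4, 18.0] + [0.0] * 9,
        ("999002", 1900): [11.0] * 12}
dropped, added, changed = compare(set3, set4)
```

With these inputs, `compare` flags station 999001 as dropped, 999002 as added, and the common Adelaide station-year as changed.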

Some simple statistics about the two databases include:

1. Station/years of data: CRUTEM3 – 399,303; CRUTEM4 – 466,246.

2. Number of -99 (missing) months: CRUTEM3 – 674,993; CRUTEM4 – 568,606.

3. Number of CRUTEM3 stations dropped from CRUTEM4: 286.

4. Number of stations added to CRUTEM4: 738.

5. Matching station years with at least one month differing: 104,296.

6. Total stations in CRUTEM3: 5,113.

7. Total stations in CRUTEM4: 5,565.

Results

Old temperature data has been adjusted

In the following examples, only a snapshot of small parts of the data is presented to illustrate the point. Positive differences imply that CRUTEM4 is higher than CRUTEM3.

Adjustments of this magnitude can be seen through most of the CRUTEM4 database, but especially in Europe and adjacent areas. With data from such early years being so sparse, it is difficult to see both why such adjustments have been made and the basis on which they were made. They were clearly made with reference solely to CRUTEM3 data, and not to original data. There seems little value in adjusting such early data, unless the purpose is to lower early temperatures.

There are strange repeating adjustments

Strange adjustments have been systematically applied to CRUTEM3 data to create CRUTEM4 data. For example, for a period of 22 years, from 1951 to 1972, Station 915540 (Vanuatu) has the following set of adjustments applied (CRUTEM4 minus CRUTEM3, January to December):

+0.4 +0.4 +0.4 +1.0 +0.8 +0.6 +0.8 +1.2 +1.0 +0.8 +0.8 +0.6

With the original CRUTEM3 data showing

1951 26.5 26.3 26.5 26.0 25.2 24.4 24.2 24.1 24.2 25.1 25.6 26.1

1952 26.8 26.6 26.3 25.6 24.8 23.9 23.0 22.6 22.8 23.8 24.5 24.8

1953 25.3 25.4 25.1 24.3 23.7 22.6 22.5 23.1 23.1 24.2 25.0 24.4

1954 25.2 25.1 24.8 24.2 23.6 23.3 23.3 22.2 23.5 23.7 23.9 24.8

1955 24.6 25.2 24.1 24.1 23.9 22.6 23.1 22.3 23.7 24.0 24.5 24.7

1956 24.9 25.6 24.8 24.3 23.9 23.7 22.4 22.5 24.1 24.4 24.4 25.7

1957 25.7 25.7 25.1 25.0 23.1 21.6 21.7 23.2 23.1 23.5 24.3 25.0

1958 25.6 26.3 25.7 24.9 23.9 23.6 21.5 22.0 23.9 23.8 24.1 25.4

1959 25.3 25.7 25.4 24.6 23.7 23.1 23.4 22.8 23.3 24.2 25.3 25.5

1960 24.9 24.8 24.9 24.4 24.1 23.6 22.6 23.3 24.0 23.8 24.7 24.4

1961 26.0 26.6 26.0 25.0 24.9 24.3 23.9 23.7 24.3 24.5 25.1 25.7

1962 26.1 25.8 25.2 24.9 24.7 23.8 23.3 24.1 23.9 24.4 24.7 25.3

1963 25.6 25.9 25.6 25.0 23.7 24.4 23.0 23.8 23.7 23.6 24.4 25.3

1964 25.8 26.5 26.2 25.3 24.5 24.4 23.2 24.0 23.9 24.5 25.5 25.4

1965 25.5 25.8 25.5 24.6 23.5 23.1 22.5 21.9 22.8 23.1 24.1 25.4

1966 25.8 25.9 25.8 25.1 23.5 23.4 22.4 23.0 23.3 23.8 24.3 24.5

1967 25.4 26.0 25.6 24.6 24.7 24.0 22.9 23.5 23.8 24.1 24.2 25.5

1968 25.5 26.0 25.4 24.5 24.0 23.5 22.9 22.6 23.2 24.1 24.5 25.3

1969 25.6 25.9 26.1 25.6 25.0 24.4 23.3 23.6 23.2 24.1 24.7 25.3

1970 25.9 26.0 26.5 25.1 24.4 23.8 23.6 24.2 24.0 24.5 24.5 25.5

1971 25.3 25.7 25.2 24.8 24.1 24.1 23.2 23.9 23.9 24.1 24.5 24.8

1972 25.3 25.6 25.2 24.8 25.1 23.7 22.4 21.7 23.1 24.0 25.2 25.5

And the original CRUTEM4 data showing

1951 26.9 26.7 26.9 27.0 26.0 25.0 25.0 25.3 25.2 25.9 26.4 26.7

1952 27.2 27.0 26.7 26.6 25.6 24.5 23.8 23.8 23.8 24.6 25.3 25.4

1953 25.7 25.8 25.5 25.3 24.5 23.2 23.3 24.3 24.1 25.0 25.8 25.0

1954 25.6 25.5 25.2 25.2 24.4 23.9 24.1 23.4 24.5 24.5 24.7 25.4

1955 25.0 25.6 24.5 25.1 24.7 23.2 23.9 23.5 24.7 24.8 25.3 25.3

1956 25.3 26.0 25.2 25.3 24.7 24.3 23.2 23.7 25.1 25.2 25.2 26.3

1957 26.1 26.1 25.5 26.0 23.9 22.2 22.5 24.4 24.1 24.3 25.1 25.6

1958 26.0 26.7 26.1 25.9 24.7 24.2 22.3 23.2 24.9 24.6 24.9 26.0

1959 25.7 26.1 25.8 25.6 24.5 23.7 24.2 24.0 24.3 25.0 26.1 26.1

1960 25.3 25.2 25.3 25.4 24.9 24.2 23.4 24.5 25.0 24.6 25.5 25.0

1961 26.4 27.0 26.4 26.0 25.7 24.9 24.7 24.9 25.3 25.3 25.9 26.3

1962 26.5 26.2 25.6 25.9 25.5 24.4 24.1 25.3 24.9 25.2 25.5 25.9

1963 26.0 26.3 26.0 26.0 24.5 25.0 23.8 25.0 24.7 24.4 25.2 25.9

1964 26.2 26.9 26.6 26.3 25.3 25.0 24.0 25.2 24.9 25.3 26.3 26.0

1965 25.9 26.2 25.9 25.6 24.3 23.7 23.3 23.1 23.8 23.9 24.9 26.0

1966 26.2 26.3 26.2 26.1 24.3 24.0 23.2 24.2 24.3 24.6 25.1 25.1

1967 25.8 26.4 26.0 25.6 25.5 24.6 23.7 24.7 24.8 24.9 25.0 26.1

1968 25.9 26.4 25.8 25.5 24.8 24.1 23.7 23.8 24.2 24.9 25.3 25.9

1969 26.0 26.3 26.5 26.6 25.8 25.0 24.1 24.8 24.2 24.9 25.5 25.9

1970 26.3 26.4 26.9 26.1 25.2 24.4 24.4 25.4 25.0 25.3 25.3 26.1

1971 25.7 26.1 25.6 25.8 24.9 24.7 24.0 25.1 24.9 24.9 25.3 25.4

1972 25.7 26.0 25.6 25.8 25.9 24.3 23.2 22.9 24.1 24.8 26.0 26.1

Similar adjustments can be seen in many other stations. The above Vanuatu data comes from a tropical area where temperature varies by only 2 to 4°C over the year. In this situation, an adjustment of 0.4 to 1.2°C seems extreme. Furthermore, there are no adjustments after 1972. But the same type of adjustment appears at the more temperate station 946720 (Adelaide, Australia). For the 30-year period 1857 to 1886 the following adjustment has been applied to CRUTEM3 to create CRUTEM4 (CRUTEM4 minus CRUTEM3, January to December):

-1.4 -0.9 -1.7 -1.1 -0.9 -0.6 -0.4 -0.8 -0.5 -0.7 -0.9 -0.8

The two datasets show:

CRUTEM3

1857 22.6 28.3 19.7 17.5 13.0 11.6 11.8 12.2 14.2 15.4 18.4 23.4

1858 26.4 24.0 22.3 18.3 13.5 12.0 10.3 11.7 12.6 15.8 21.1 22.5

1859 23.8 22.4 20.3 17.1 13.2 10.9 10.7 12.4 13.3 17.7 19.7 22.9

1860 26.1 23.8 22.4 17.0 14.4 12.5 11.7 13.4 15.5 17.0 19.9 22.5

1861 23.2 22.3 23.2 19.1 14.2 13.7 10.6 11.4 14.3 17.7 20.0 19.8

1862 25.4 22.4 23.0 17.0 14.9 11.9 13.2 13.1 15.3 18.6 21.4 23.6

1863 23.5 24.0 22.3 20.7 15.8 13.3 11.7 12.2 13.5 16.3 19.0 21.9

1864 23.0 21.9 21.1 18.1 15.3 11.7 11.5 11.7 15.4 16.1 20.6 21.0

1865 21.6 21.7 21.3 19.5 13.6 11.8 10.6 12.7 14.6 17.1 21.7 20.8

1866 23.9 25.2 21.2 19.0 15.8 12.6 11.6 12.6 13.8 16.6 17.8 22.1

1867 24.1 24.3 20.7 18.2 15.6 14.0 11.5 12.6 13.2 16.2 18.9 20.1

1868 20.6 22.9 23.4 17.8 15.8 12.1 10.4 12.1 14.6 17.9 20.2 22.0

1869 22.1 22.6 21.3 17.4 13.4 12.4 11.3 12.8 11.8 16.4 20.4 21.7

1870 23.4 25.3 21.7 18.7 13.9 12.7 10.8 11.5 12.9 17.2 18.0 21.9

1871 23.0 23.8 20.3 18.6 15.4 13.4 11.6 13.4 14.8 16.6 18.6 24.0

1872 25.9 23.3 22.4 17.1 13.5 12.6 11.1 9.9 14.1 16.2 21.0 20.0

1873 23.9 22.6 19.9 16.7 14.9 11.8 10.7 12.6 14.2 18.3 17.3 23.6

1874 24.0 21.7 19.8 19.4 14.0 11.7 9.8 11.5 12.1 17.6 17.9 21.9

1875 23.6 22.9 20.9 18.1 13.2 11.9 10.6 12.2 13.8 16.6 18.2 19.3

1876 22.8 22.1 23.8 16.5 13.3 10.8 10.1 11.5 13.5 16.1 18.6 23.4

1877 23.1 24.7 20.3 17.7 14.2 11.7 11.2 13.6 12.4 16.1 17.2 20.6

1878 25.6 23.0 21.6 18.3 14.1 9.9 11.7 12.9 14.3 17.5 20.1 21.4

1879 24.4 23.9 20.9 18.6 12.4 11.6 10.1 12.1 13.2 16.2 18.6 21.1

1880 25.6 26.4 21.5 17.5 13.7 12.1 10.4 12.9 13.7 15.4 17.8 21.9

1881 23.2 21.7 21.4 17.4 14.9 10.6 10.6 11.8 13.9 15.3 18.6 21.3

1882 22.9 24.0 22.6 17.4 15.1 10.6 9.6 11.2 14.0 16.9 20.8 21.6

1883 23.6 21.9 20.9 18.4 13.2 13.1 10.8 11.3 12.4 15.3 19.6 20.9

1884 21.2 23.6 22.1 17.1 13.9 12.2 9.8 13.3 14.1 15.5 18.9 19.5

1885 21.6 21.6 18.9 17.2 15.9 10.8 10.7 12.8 14.0 18.0 19.2 23.2

1886 24.4 20.6 20.4 17.3 14.0 11.6 11.6 12.4 16.1 14.7 19.9 21.9

CRUTEM4

1857 21.2 27.4 18.0 16.4 12.1 11.0 11.4 11.4 13.7 14.7 17.5 22.6

1858 25.0 23.1 20.6 17.2 12.6 11.4 9.9 10.9 12.1 15.1 20.2 21.7

1859 22.4 21.5 18.6 16.0 12.3 10.3 10.3 11.6 12.8 17.0 18.8 22.1

1860 24.7 22.9 20.7 15.9 13.5 11.9 11.3 12.6 15.0 16.3 19.0 21.7

1861 21.8 21.4 21.5 18.0 13.3 13.1 10.2 10.6 13.8 17.0 19.1 19.0

1862 24.0 21.5 21.3 15.9 14.0 11.3 12.8 12.3 14.8 17.9 20.5 22.8

1863 22.1 23.1 20.6 19.6 14.9 12.7 11.3 11.4 13.0 15.6 18.1 21.1

1864 21.6 21.0 19.4 17.0 14.4 11.1 11.1 10.9 14.9 15.4 19.7 20.2

1865 20.2 20.8 19.6 18.4 12.7 11.2 10.2 11.9 14.1 16.4 20.8 20.0

1866 22.5 24.3 19.5 17.9 14.9 12.0 11.2 11.8 13.3 15.9 16.9 21.3

1867 22.7 23.4 19.0 17.1 14.7 13.4 11.1 11.8 12.7 15.5 18.0 19.3

1868 19.2 22.0 21.7 16.7 14.9 11.5 10.0 11.3 14.1 17.2 19.3 21.2

1869 20.7 21.7 19.6 16.3 12.5 11.8 10.9 12.0 11.3 15.7 19.5 20.9

1870 22.0 24.4 20.0 17.6 13.0 12.1 10.4 10.7 12.4 16.5 17.1 21.1

1871 21.6 22.9 18.6 17.5 14.5 12.8 11.2 12.6 14.3 15.9 17.7 23.2

1872 24.5 22.4 20.7 16.0 12.6 12.0 10.7 9.1 13.6 15.5 20.1 19.2

1873 22.5 21.7 18.2 15.6 14.0 11.2 10.3 11.8 13.7 17.6 16.4 22.8

1874 22.6 20.8 18.1 18.3 13.1 11.1 9.4 10.7 11.6 16.9 17.0 21.1

1875 22.2 22.0 19.2 17.0 12.3 11.3 10.2 11.4 13.3 15.9 17.3 18.5

1876 21.4 21.2 22.1 15.4 12.4 10.2 9.7 10.7 13.0 15.4 17.7 22.6

1877 21.7 23.8 18.6 16.6 13.3 11.1 10.8 12.8 11.9 15.4 16.3 19.8

1878 24.2 22.1 19.9 17.2 13.2 9.3 11.3 12.1 13.8 16.8 19.2 20.6

1879 23.0 23.0 19.2 17.5 11.5 11.0 9.7 11.3 12.7 15.5 17.7 20.3

1880 24.2 25.5 19.8 16.4 12.8 11.5 10.0 12.1 13.2 14.7 16.9 21.1

1881 21.8 20.8 19.7 16.3 14.0 10.0 10.2 11.0 13.4 14.6 17.7 20.5

1882 21.5 23.1 20.9 16.3 14.2 10.0 9.2 10.4 13.5 16.2 19.9 20.8

1883 22.2 21.0 19.2 17.3 12.3 12.5 10.4 10.5 11.9 14.6 18.7 20.1

1884 19.8 22.7 20.4 16.0 13.0 11.6 9.4 12.5 13.6 14.8 18.0 18.7

1885 20.2 20.7 17.2 16.1 15.0 10.2 10.3 12.0 13.5 17.3 18.3 22.4

1886 23.0 19.7 18.7 16.2 13.1 11.0 11.2 11.6 15.6 14.0 19.0 21.1
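The repeating offsets can be verified mechanically from the tables above: subtract the CRUTEM3 row from the CRUTEM4 row for each year and check that the same 12-element vector recurs. A minimal sketch, using the 1857 and 1858 Adelaide (946720) rows reproduced from the tables above:

```python
# Two years copied from the Adelaide tables, January to December.
crutem3 = {
    1857: [22.6, 28.3, 19.7, 17.5, 13.0, 11.6, 11.8, 12.2, 14.2, 15.4, 18.4, 23.4],
    1858: [26.4, 24.0, 22.3, 18.3, 13.5, 12.0, 10.3, 11.7, 12.6, 15.8, 21.1, 22.5],
}
crutem4 = {
    1857: [21.2, 27.4, 18.0, 16.4, 12.1, 11.0, 11.4, 11.4, 13.7, 14.7, 17.5, 22.6],
    1858: [25.0, 23.1, 20.6, 17.2, 12.6, 11.4, 9.9, 10.9, 12.1, 15.1, 20.2, 21.7],
}

def adjustment_vector(year):
    """CRUTEM4 minus CRUTEM3 for each month of the given year."""
    return [round(t4 - t3, 1) for t3, t4 in zip(crutem3[year], crutem4[year])]

vectors = {year: adjustment_vector(year) for year in crutem3}
# The identical 12-element vector appears in both years: the signature of a
# block adjustment keyed on month name rather than on the underlying data.
```

Running this over all 30 years of the tables gives the same vector every time.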

The updates accentuate the global warming argument

There is a pronounced tendency to flex the graph of global temperature vs time in a way which accentuates warming in recent years. However, the dominant effect is to lower temperatures prior to 1995.

The differences between CRUTEM3 and CRUTEM4, with zero tolerance, were consolidated into annual temperature differences and plotted against time, along with the count of the stations contributing at each point.

The adjustments indicate that in CRUTEM4:

· A comparatively small number of stations have been cooled by anywhere up to 0.4°C between about 1820 and 1900.

· A substantial number of stations have been cooled by about 0.1°C between about 1910 and 1995.

· Between about 100 and 400 stations have been warmed by up to about 0.2°C since about 1995.

This does not translate into the same change in the anomaly vs time graph, because it is not the whole dataset, but simply the changed data between CRUTEM3 and CRUTEM4. Nor does it take into account the effect of dropped and added stations. But it does indicate the direction of the change.
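The consolidation step can be sketched as follows, assuming the per-station monthly differences (CRUTEM4 minus CRUTEM3) are already held in a dict. The station IDs and values here are illustrative, not real CRUTEM figures.

```python
from collections import defaultdict

def annual_differences(changed):
    """Collapse (station, year) -> 12 monthly differences into
    year -> (mean difference, number of contributing stations)."""
    by_year = defaultdict(list)    # year -> all monthly differences
    stations = defaultdict(set)    # year -> contributing station ids
    for (station, year), diffs in changed.items():
        by_year[year].extend(diffs)
        stations[year].add(station)
    return {year: (round(sum(d) / len(d), 2), len(stations[year]))
            for year, d in by_year.items()}

# Illustrative inputs: two stations cooled in 1857, one warmed in 1996.
changed = {("A", 1857): [-1.4] * 12,
           ("B", 1857): [-0.6] * 12,
           ("A", 1996): [0.2] * 12}
result = annual_differences(changed)
```

With these inputs, 1857 averages -1.0 over two stations and 1996 averages +0.2 over one, mirroring the cool-early, warm-late pattern described above.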

Nitpicking Differences

The nature of the CRUTEM database obviously makes it difficult to manage. Data comes from many sources, and it is unreasonable to expect Hadley or UEA personnel to understand the geography of all the data they receive, so errors will appear in the database. Some positional errors can be dismissed on the grounds that CRUTEM is directed at anomalies. This means that if a station’s position is wrongly recorded, then provided the error is contained within the same 5° × 5° gridcell, the anomaly is unaffected.
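The gridcell argument is easy to test in code. This is a sketch of the idea only, not CRUTEM's actual gridding routine, and the coordinates are illustrative:

```python
import math

def gridcell(lat, lon):
    """Index of the 5-degree x 5-degree cell containing (lat, lon)."""
    return (math.floor(lat / 5.0), math.floor(lon / 5.0))

# A half-degree position error that stays inside the cell leaves the
# gridcell index unchanged, so the anomaly is unaffected...
same = gridcell(-34.9, 138.6) == gridcell(-34.4, 138.6)
# ...while an error that crosses a cell boundary moves the station into
# a different gridcell, where it does affect the gridded anomaly.
moved = gridcell(-34.9, 138.6) == gridcell(-36.0, 138.6)
```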

However, it is difficult to see how Station number 237070, name listed as “Unknown, Russia”, with a Lat/Long of -99.9/-999.9 (ie unknown), could escape being found by Quality Control, while still feeding temperatures to the gridding/anomaly calculation. Station 288020 is similarly identified, and also supplies data.

“Normals” are, by convention, calculated over 30 years from 1961 to 1990. While this appears to be observed, it is quite common to calculate Standard Deviations over a different period, commonly 1941 to 1990.
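For reference, a minimal sketch of how a 1961-1990 normal and the resulting anomaly are formed, assuming a simple year-to-monthly-values series; the data here is invented for illustration:

```python
import statistics

def monthly_normals(series, start=1961, end=1990):
    """Per-month means over the normals window (default 1961-1990)."""
    window = [series[y] for y in range(start, end + 1) if y in series]
    return [statistics.mean(year[m] for year in window) for m in range(12)]

def anomaly(series, year, month, normals):
    """Observation minus that month's normal."""
    return series[year][month] - normals[month]

# Illustrative series: every January is 20.0 except 1995, which is 21.5.
series = {y: [20.0] * 12 for y in range(1941, 2001)}
series[1995] = [21.5] + [20.0] * 11
normals = monthly_normals(series)
jan_1995_anomaly = anomaly(series, 1995, 0, normals)
```

A standard deviation computed over 1941-1990 instead would simply pass a different window to the same kind of function, which is why the mismatch of periods is easy to introduce and hard to spot.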

Extreme differences occur when the same station number is used for two different locations. For example, in CRUTEM3, Station 840270 is the high-altitude Tulcan El Rosal in Ecuador. But in CRUTEM4, station 840270 is Esmeraldas Tachina, also in Ecuador, but several hundred km distant from its CRUTEM3 namesake, and at almost sea level. This appears to be a CRUTEM3 error, probably difficult to find once made.

Sydney Airport (947670) is still badly identified. The data runs from 1859 to the present, but only data after 1990 comes from Sydney Airport. The earlier data comes from Sydney Observatory, 10km away.

Effect of added and deleted stations

No attempt is made in this report to assess the effect of the 286 station deletions and the 738 additional stations, except to observe that most of the additions are high latitude NH stations.

Discussion

When updating a database such as CRUTEM, I would expect the steps to be roughly

· Tidy up the precursor database, which should be mostly stable data that has had years of scrutiny, and which would require limited correction or additions.

· Delete data which is considered poor quality.

· Add new data, edited and homogenised.

· Run a check as has been done in this report, looking at differences which might suggest irregularities.

But Hadley/CRU do not appear to have done this. They have added and deleted stations, but it seems strange that very early data – 18th and 19th century data – should be added, especially when much of it is sparse, of perhaps questionable quality, and not germane to the current temperature/time/CO2 discussion.

But the biggest problem with the new CRUTEM4 database is the frequent practice of systematic, repetitive temperature modifications on blocks of station data. What reason could there be for making the following set of adjustments on Adelaide data for every year from 1857 to 1886?

The perpetrators of this change have clearly not gone back to original raw data and re-appraised their original homogenisation processes. They have simply taken every year of CRUTEM3 data (with its possible homogenisation adjustments), from 1857 to 1886, and

· Deducted 1.4°C if the month name was January,

· Deducted 0.9°C if February,

· Deducted 1.7°C if March,

· And so on.

It is difficult to conceive what the justification could be for these changes. There are many instances of such changes in CRUTEM4, covering periods of 5 to 30 years, in both ancient and recent data. The adjustment vector differs in each case, but the magnitudes of the individual elements are equally large, and their signs may vary.

The temperature differences, where there are corresponding data in each of the datasets, work to promote the impression of a temperature surge in the 1990s.

It has done this by lowering temperatures before about 1995, and raising them thereafter.

The effect of this on the final gridded anomaly data has yet to be calculated. But it will be less than depicted above. The above graph shows the difference in the data induced by changes in data which appear in both CRUTEM3 and CRUTEM4. It does not include the effect of

· Dropped stations

· Added stations

· Stations which record a valid temperature in one set, but a Null (-99) in the other.

There are other relatively minor issues regarding data quality. I am surprised that neither CRU quality control nor users of the data have picked up these errors.

The CRUTEM4 database appears even more like one which has been assembled by amateurs with little concept of accuracy or integrity. If you were of a suspicious nature, you might get the impression that the changes are targeted at accentuating the Warmist message.

Until the matters raised in this report have been adequately explained, CRUTEM4, and therefore HADCRUT4, must be regarded as seriously flawed.

Acknowledgements

Thank you Warwick Hughes (http://www.warwickhughes.com/blog/) for encouraging me to write this report, and for helpful suggestions on the construction of the report.

Data

· The CRUTEM3 and CRUTEM4 station data for all examples mentioned in this report.

· The output of a full CRUTEM comparison run, with some analysis of the output. It will not open under XL97, which is limited to 64K rows, since this workbook holds over 400,000 rows.

This is about 26 MB compressed.

Programs

· An Excel 2010 Workbook. The one that produces the output used in this report. It is a Visual Basic driven Excel workbook. It extracts stations missing from each dataset, and the differences. An untested XL97 version is included. It will not run the full CRUTEM database, as XL97 is restricted to 64K rows, and CRUTEM generates over 100,000. But it is possible to split the two sets of input into smaller parcels. Australia/New Zealand generated only about 650 rows of output.

3 May: Australian: Fiona Gruber: Fighting ‘catastrophilia’ with wit
RICHARD Bean greets me in the foyer of London’s National Theatre. He’s a strapping man with the air of a pugilist. He has tight grey curls on a battering-ram head and a bluff northern manner to go with it.
He’s friendly, but you wouldn’t want to pick a fight with him. I discover this when I start on the subject of global warming. It’s apposite because we’re here to discuss his play The Heretic, opening at the Melbourne Theatre Company this month…
Every climate model has “failed laughably”, he says, and these are the models that are the whole basis for global warming alarmism. The scientists who push their gloomy predictions are politically motivated, he claims, and the politicians are too ignorant to understand the arguments.
“There’s one single bachelor of science in the House of Commons. They don’t understand a word of it and I bet your government is much the same.”
http://www.theaustralian.com.au/arts/fighting-catastrophilia-with-wit/story-e6frg8n6-1226345167184

All climate data sets will be routinely adjusted to lower historic temperatures and raise current ones. There will be no exceptions.
This is basic ‘Climate Science’ 101.
I believe your study is just the tip of the iceberg in exposing deliberate scientific malpractice in the creation of CRUTEM4.

What is so head-shakingly galling is the utter shamelessness of these data-fiddling cultists. They know they are being scrutinised. They clearly couldn’t care less. When challenged they will, as usual, just deny it. Post Normal Science. Shameless charlatans.

Adjustments on their own are not a problem. But there needs to be clarity on what was done, good scientific support for why it was done, and on top of that the ability to recall the raw unadjusted data. With that missing, there is a very real problem. And this does seem to be a ‘problem’ of ‘climate science’.

The change in CRUTEM4 in comparison with CRUTEM3 is that we have more of the Arctic represented, and thus more Arctic amplification of warming.
However, this Arctic amplification should then also be seen around 1930-45, but it’s just not really there to be seen: http://hidethedecline.eu/media/ADJ/11.13.gif
So yes, CRUTEM4 is not surprisingly yet another freely painted dataset, and thank you for bringing it up!

Well done in tackling this. I daresay we shall need a lot more such investigation before these temperature compilations may be trusted. The Climategate 1 documents revealed people who had neither the moral nor the technical competence to be entrusted with such compilations. In the absence of any prospect of a major audit by competent and uncompromised authorities, work such as yours is invaluable.

Forget these massive databases. We need a “few good men”, maybe 50-100 unimpeachable rural locations with long records but still operating, spread around the globe. This would produce a much more reliable and trustworthy record, and it would be much harder for CRU, Hansen etc to apply their “tricks” to the data without being caught.
Adding vast quantities of unreliable data to good data and then trying to correct the errors is scientific idiocy. Sydney Airport is included? What a joke!

What, exactly, is the justification for adjusting historic data? Do they have the original instrumentation and methodology to hand to know that they were over-reading during Victorian times?
The “fact” that adjustment directions are virtually 100% predictable is something of a smoking gun.

Hey, by the time CRUTEM20 comes out, the 19th century should be featuring subarctic temperatures for the whole temperate world. They’ll need to rewrite history that the Civil War was fought in the snow all year long. And Washington did not cross the Delaware in boats, they walked across the ice! It’s no longer good enough to “flatline the whole Holocene”, it needs to be driven 6 degrees under (zero) as well. Also, thermometers will need to be rezeroed to show more degrees.

I know Lerwick is down, but even so … why the **** are they adjusting it at all?
Lerwick is on Shetland. For those who may not know, it is an island some 100 miles away from the mainland. So, I doubt there was another station within 100 miles. On what possible basis can they change historic data?
Why would I change this? What legitimate reason could I have to do this?
Answer: unless I had a thermometer which I knew to be more accurate and which I could compare directly with this station (which seems incredibly unlikely on Shetland) I can’t think of any reason on earth to change this.
Could any kind of averaging come into play? One might be able to argue (after extensive field tests to confirm the thesis) that historic data was biased in some way and that an average was more accurate. This however would be something added to the global average figure, and not to individual stations.
The only possible even vaguely credible explanation (and it still stinks to high heaven unless it is openly discussed) is that stations were adjusted in different classes depending on the exact instrumentation being used. E.g. one might possibly argue that mercury thermometers … I can’t think of a reason why … but if one had a plausible rationale, that they wouldn’t read the right temperature. OK, perhaps. But each station would need to be documented, and my impression is that there just isn’t the historic evidence to go about such a wholesale modification.
…. this isn’t science, it’s not engineering, it’s quackery!

braddles says:
” We need a “few good men”, maybe 50-100 unimpeachable rural locations with long records but still operating, spread around the globe. ”
In central and western Europe there are just a handful of such rural stations not placed on a coast and not placed on mountains (both show ocean trends).
Here’s what happens for Europe temperatures when examining the few: http://hidethedecline.eu/pages/ruti/europe/western-europe-rural-temperature-trend.php
In general we need more data, and that’s why my “RUTI” has to compromise on “rural” data by using more urban data, although not the metropolises:
RUTI is world wide, but here for Europe (I use RAW GHCN V2, RAW Nordklim and RAW NACD version 1!), plus data from original writings, which are presently coming in fast, so I have to update soon.
Overview of RUTI results for Europe: http://hidethedecline.eu/pages/ruti/europe.php
I’m aware of all the arguments against this project, but I would appreciate more help and backup instead, since there is no alternative if we are to be honest.

This may not be the best place for this question, but it is sort of relevant. I don’t really understand why all the adjustments to temperature series, but presumably there are good reasons.
Something I have amused myself with is to look at the greater than 10 year trends in temp data using Wolfram Alpha. I don’t know what data this uses, or whether it’s raw or adjusted. However, most places I do this for show next to no trend in either direction.
That makes me wonder what would happen if we looked at raw unadjusted data over time. Has that been done? Is that what BEST did? Without adjustment for various effects we may not get a true representation of actual temps, but any trend would surely be obvious and relevant. Wouldn’t it?

It looks like the most reliable historical temperature information we now have is the anecdotal evidence from personal diaries and records, ships logs and media reports.
Everything else would appear to be worthless.
If the evidence is about to destroy one’s case then corrupt the evidence.

As I understand it, the creators of the above ‘know’ what the right answer is, with absolute certainty.
Since the data doesn’t show this right answer, it is clearly wrong, and must therefore be adjusted so that it does.
As the ‘global mean temp’ is a concept, and has no physical meaning or relevance to anything, I suppose they can do whatever they like to construct it?

Block adjustments only make sense to me if there is some known change to the measurement method – if, say, Adelaide had a poorly set up station from 1857 to 1886, then we might be able to estimate a set of offsets to apply to a block of data. And it’s not so unreasonable to suppose that those offsets might be calculated month by month, based on averages for those months.
One thing that springs to mind is that Adelaide has unusually low humidity compared both to the rest of Australia’s populated areas and Northern / Western Europe, so there is conceivably some humidity-related adjustment that has been calculated.
But of course this is groping for explanations. I don’t see how you can get away with publishing this sort of data set without also giving at least some specific explanation for each adjustment made (something at the level of “Compensation for factor X, calculated monthly according to x = y * z”). And when all those unexplained adjustments just happen to nudge the line more towards a stick used for playing hockey, it all smells a bit.

Ok, so adjustments have been made.
But what is the stated rationale for such adjustments? Presumably not even the team will make all kinds of adjustments without some sort of reason?
If anyone knows, or can point to some source, that would be appreciated.

The CRU doesn’t even need to be hacked to conclude that data manipulation without sound scientific and/or statistical arguments is one of their main trademarks! This is just more of the same that came out during the climategate hack and it’s sad that nothing has changed and the responsible people are still running the “catastrophic climate show”….

There can be no rational reason to adjust very old historical temperatures.
If we cannot time travel to calibrate the OLD against the NEW, then we CANNOT adjust the old data at all.
Yet clearly, the old data WAS adjusted, on a massive scale.
Won’t someone arrest these lying sacks of manure already, and have them time-travel their way out of jail…

I understand that CRUTEM4 still uses GHCN V2 data. When they move to V3.1, even more unexplained adjustments will appear.
What is often forgotten is that much of the “raw” data CRU (and GHCN and GISS) use is in fact already adjusted at source, for non climatic influences, by the national met offices. (And where they are not, I would question whether their quality is robust enough to warrant inclusion in a global database)

“The perpetrators of this change have clearly not gone back to original raw data and re-appraised their original homogenisation processes. “
I don’t think the CRUTEM3 data for Adelaide was homogenized, in the period you have highlighted. It seems to be identical, to GHCN unadjusted. It’s possible that it should have been, and that is the reason for the change. That would probably account for the monthly pattern.

If you adjust data you are not going to be able to predict/project future weather/climate.
Good data does not necessarily mean good predictions, but surely bad data will render faulty predictions/projections/whatever-you-want-to-call-them.

Are the warmista going to appear and tell us the logical reasons for these obvious adjustments?
Somehow, I doubt it……
As I’ve mentioned many times, does anyone actually have the ‘raw’ data? (From Jones’ testimony, it seems not?) Does anyone have a record of the adjustments since the first prepared data sets? (Again, IIRC, from Jones’ testimony, I think not!) So, in a nutshell, this data is not ‘as recorded’ data anymore; it is essentially worthless to use for any meaningful interpretation, IMHO.

Thank you for your hard and professional work Ed.
As a meteorologist I was always a bit distrustful of climate data adjustments, even in the days before AGW raised its ugly head. Datasets can need adjustment, but the adjustments can’t always be in the one time-related direction – yet they are. Amazing stuff.
Thanks again

After reading enough comments I’m kind of wondering if there exists a complete dataset of just the raw temperatures anywhere in this world, or has it basically been destroyed through these multilayered, multisite adjustments to the data itself?
If it does, who holds it? Is it public and available?
For instance, I have downloaded and processed both of the BEST datasets, about 13-17 gigs each unpacked, which are supposed to be “RAW”, just to find out that even their “RAW” data has already gone through a whole series of manipulations and, worst of all, been detrended. Where’s the beef?

“It is difficult to conceive w­­­­­­hat the justification could be for these changes.”
Not at all. It would be charitable to call it inept programming and abysmal quality control. However, the very purpose of these people is to produce a dataset to be used by the world. It cannot be that they are so useless at their primary task. Therefore, it is no accident: this is deliberate.
It is possible that the intent is to provide useful figures to Rio+20 supporting a hockey stick in the hope that the morass of figures with changed station numbers would hold reviewers at bay for a few months.
How very sad that an accredited academic institution should be involved.

In the book published in 1913 by the Commonwealth Bureau of Meteorology, “Climate and Weather of Australia” by H.A. Hunt, Griffith Taylor and E.T. Quayle, there is at page 11 a table of mean monthly temperature and rainfall for all the Australian capital cities, including Adelaide – I assume this is a 1913 snapshot across Australia. There was a printable copy online (from the University of California) but the original link
(http://www.archive.org/bookreader/print..php?id+climateweatherof00huntrich&server=1a331414) no longer works.
It contains a wealth of contemporary weather information, including hottest and coldest recorded temperatures, plus details of droughts and floods up to 1913, and 59 weather maps on 75 printed pages. There are copies held in libraries, including at the BOM Melbourne. An interesting synopsis of Australian weather history. Sorry the link won’t work. Maybe someone else might be able to post a copy of the relevant page from an electronic image.

Frank Lansner: Overview of RUTI results for Europe: http://hidethedecline.eu/pages/ruti/europe.php
I’m aware of all the arguments against this project, but I would appreciate more help and backup.
Frank, I’m interested, but I don’t see the point of another index which shouts at me “don’t trust me, I’m written by someone who is biased”. I know it sounds pedantic, because no warmist will ever refer to it, but a long time ago … I had this dream of finding a temperature index I could trust. Not adjusted up, nor adjusted down, just the best and honest appraisal of the temperature record.
At the very least you have to draw a very clear distinction between what is and what is not your own view. That means a neutral web name; otherwise it screams “bias”.

Great post!
But I have to disagree on a minor point: given the importance of the dataset to the notion of climate change, and the availability of money to study such things, I do not think it at ALL unreasonable to expect a well-documented and understood dataset. Just my view.
Think of squadrons of graduate students in math, statistics, meteorology and geography who could be employed to ensure raw data is pristine and well-handled. You did this work in your spare time for free…
I’m getting tired of this….

If the fate of the world hangs upon civilization’s response to the purported AGW, one would think that changes to the yardstick used to measure such warming would go through scientific debate and peer review. When an organization makes such changes without even discussing why they were made, then I can’t help but believe that they’re doing it to make their fraudulent arguments look stronger.
I smell a rat…

Ian Bryce
The Australian BOM has done something similar with their high quality data.
With the raw data, the trend line from the 1880’s to now for Echuca for the maximums shows a straight line, with the raw data showing peaks on the odd solar cycles, and for the El Nino’s.
The minimums show a trend line of falling temperatures.
However, the BOM high quality data has been adjusted to show global warming in recent years.
When I have presented the graphs of the raw data to the BOM for their observations, they do not respond to my emails.

Thing is though that these adjustments will come back to haunt them – they may have adjusted recent years up, but that means that the next few years will be cooler when compared to those recent years – assuming they don’t keep adjusting the new figures up :0. If they do then they will start to diverge from the satellite data in a way that will become apparent to even the dullest hack.

Satellite and weather balloon records show that the atmosphere is not warming as fast as the surface, which directly contradicts the predictions of CO2-induced warming. If CO2 is the cause of the warming, then the atmosphere must warm first and the surface follow. This is not what is happening.
Adjusting past surface temperatures gives the appearance of increased surface warming, but it also increases the difference between surface warming and atmospheric warming, further strengthening the argument that the surface warming cannot be a result of CO2.
CO2 warming must warm the atmosphere first, and this warming must be greater than the surface warming for CO2 to heat the surface. This is climate science 101 and every climate model predicts this. Clearly CRUTEM4 shows that it is the surface that is leading the temperature increase, contrary to all predictions for CO2-induced warming.
150 years ago humans used 4% of the land surface. Today we use 40%. During this same time there has been an increase in surface temperatures that is significantly greater than the increase in atmospheric temperatures.
This clearly points to land use, not CO2 as the cause of the increase in surface temperatures. People are replacing forests and jungles with pastures and cities. The thermometers are recording the effect on temperatures, which is large.
At the same time we are adding CO2 to the atmosphere. The satellites and weather balloon are recording the effect on temperature, which is small.
CRUTEM4 directly contradicts CO2 as the cause of the warming, because it shows that the surface temperatures are increasing faster than the atmospheric temperatures and thus cannot be a result of increased CO2, because there is no known mechanism by which CO2 can have this effect.

Any, ANY, Weather Station Temperature database that uses anything other than the actual Raw, RAW temperature data should be invalidated. Thus, automatically invalidating any research, studies, or claims of AGW that use said database of non-Raw data.
From everything we have witnessed nobody, no group has the expertise to be biasing temperature records. If a climatologist or GW scientists needs to bias the Raw temperature data, then let them do so, but make them show their biasing method and justify that method.
Global Warming = Smoke & Mirrors

Ed Thurstan,
Your contribution is invaluable. Thank you.
Your findings of fact can help form the basis for an answer to my question, “Why would UEA/CRU consciously choose to have its products be produced in such a non-explained and non-professional way?”
I think the reason they do so is necessarily connected to why they vehemently fight FOIA requests even to this day after a decade of consistently fighting FOIA requests.
John

The researchers who adjust data seem to have lost the gold standard research principle that raw data is the data and that error bars are the proposed problems with the raw data. More clearly, the demonstrated extent of adjustments should be used to determine error bars, not to adjust the raw data average. The anomaly average from unadjusted raw data should be shown as is, with problems in the data calculated into error bars. For AGW purposes, error bars will then provide the suggestion that it “could be” warming if indeed reasonable, researched, and fully explained adjustments would be to raise the anomaly in recent years and decrease it in historical years for reasons x, y, and z. Now that would be refreshing transparent work.
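Pamela’s principle can be sketched in code. A minimal illustration (Python, with entirely invented numbers; the anomaly values and adjustment magnitudes below are hypothetical stand-ins, not CRUTEM data): leave the raw mean alone, and fold the documented adjustment magnitudes into the error bar instead.

```python
# Toy illustration (hypothetical numbers): treat documented adjustment
# magnitudes as uncertainty, not as corrections applied to the raw mean.
raw_anomalies = [0.12, -0.05, 0.30, 0.22, -0.10]  # raw station anomalies, degC
adjustments   = [1.4, 0.9, 1.7, 0.5, 1.1]         # proposed adjustment sizes, degC

n = len(raw_anomalies)
mean_raw = sum(raw_anomalies) / n

# Error bar: per-reading measurement noise combined in quadrature with the
# spread implied by the proposed (but unapplied) adjustments.
instrument_err = 0.2  # assumed per-reading error, degC
adj_err = (sum(a * a for a in adjustments) / n) ** 0.5 / n ** 0.5
total_err = (instrument_err ** 2 / n + adj_err ** 2) ** 0.5

print(f"raw mean anomaly: {mean_raw:+.2f} degC")
print(f"error bar:        +/-{total_err:.2f} degC")
```

On these made-up numbers the error bar (about ±0.54 °C) swamps the mean anomaly (about +0.10 °C), which is exactly the point: when the proposed adjustments are larger than the signal, honest error bars make any tenths-of-a-degree claim visibly fragile.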

The arbitrary adjustments to the Adelaide data of almost one degree on average appear indefensible.
The magnitude of the adjustment swamps that of the signal these people are attempting to conjure from the data.
You can call this a lot of things. But you sure can’t call it science.

The data sets were obviously in need of adjustment to properly reflect the findings of the latest models. Continued revision will be required to eliminate erroneous past observations that would not have been possible at the preindustrial CO2 levels of the time. This will be continuously improved as our computing capability and modelling sophistication improve with greater government resources. /sarc off

I have to say that these climate “scientists” have a bare-faced cheek. Either that or they should be congratulated on their ability to teach the Victorians a lesson about AGW. Oh, and I can guarantee that the silence from politicians and warmists will be deafening!

Frank Lansner says:
May 3, 2012 at 1:48 am
This was similar to something Courtillot did. He was refused data by the UK MO and CRU, so he went to all the stations he could, one at a time, gathered them together, drew the graph and came up with … the same as you!! Now there’s a thing.

– – – – – – –
My answer – CRUTEM4 data has never been formally audited by independent critical panels. Until it has, it is not in any sense a finished scientific product. It is an unfinished product with the prima facie problems shown by Ed Thurstan, which make it reasonably suspect regarding ‘fiddled data’.
While advocating audits by independent critical panels, I think the activities of individual independent thinkers like Ed Thurstan are just as important in the processes of science as formal audits.
I vote for Ed Thurstan to be on a formal independent and critical panel auditing CRUTEM4 Data. : )
John

“Sydney Airport (947670) is still badly identified. The data runs from 1859 to the present, but only data after 1990 comes from Sydney Airport. The earlier data comes from Sydney Observatory, 10km away.”
Sydney Airport 2009:
Passenger movements 33,451,383
Airfreight movements in tonnes 471,000
Aircraft movements 302,907
Sydney was founded in 1788. Population in 1859 was rather small, now it is well over 4 million.

Great post by Ed Thurstan. Thanks!
Funny how, after so much devious effort to hide the decline of temperatures, HADCRUT4 still shows a very slight cooling since 2002 after warming some 0.4°C since 1980. Note that HADCRUT4 now only covers up to 2010.92.
Forget the lying thermometer databases; they have been corrupted into compost.

Here’s the basic 6 point plan if you want to run society….
1. Claim that some natural variation is caused by sin (of some sort)
2. Demonstrate the validity of your assertion through convenient observations (black death, drought, climate fluctuations, etc.)
3. Keep all the records yourself
4. Institute no-sin policies (that do not apply to you)
5. Collect taxes to fund your organization
6. Wait for your cult to be overthrown in 1,000-2,000 years or so
Worked for the Mayans, etc. Why does everyone keep spoiling things for the Climatologists?
[“run” society, or “ruin” society? Robt]

This is brilliant! It is a question I have asked after looking at the continual adjustments that change previously changed data. Is there a circular adjustment process going on? Looks like there is.
Any attempt to recorrect earlier temperatures will be a disaster, as a multiplier effect has happened. The changes will ripple through.
Perhaps this is the way to kill CAGW: focus on 1930 to 1940, bring those temps into reality, and there will be more than a bulk shift in the post 1940 data.

Pamela,
Your point is extremely important. If error bars were used to represent the uncertainty instead of “correction”, I would imagine that the magnitude of the error bars would be so large as to highlight the absurdity of the exercise. It would make any conjecture about tenths of a degree warming or cooling appear as ridiculous as it truly is.
[quote:]
Pamela Gray says:
May 3, 2012 at 6:55 am
The researchers who adjust data seem to have lost the gold standard research principle that raw data is the data and that error bars are the proposed problems with the raw data. More clearly, the demonstrated extent of adjustments should be used to determine error bars, not to adjust the raw data average. The anomaly average from unadjusted raw data should be shown as is, with problems in the data calculated into error bars. For AGW purposes, error bars will then provide the suggestion that it “could be” warming if indeed reasonable, researched, and fully explained adjustments would be to raise the anomaly in recent years and decrease it in historical years for reasons x, y, and z. Now that would be refreshing transparent work.
[/quote:]

While I might be willing to concede that it might be possible for Rumpelstiltskin to weave bad data into good temperature estimates, I doubt that even he could weave non-existent data into good temperature estimates.
Data, once adjusted, is no longer data, but rather something else, which I typically refer to as “undata”. The entire global temperature record, in all of its source-variant and time-variant versions, is constructed from this “undata”. I would suggest that it is therefore “unreal”, or perhaps “surreal”.

Scottish Sceptic
What is the alternative? To collect raw data and then, when possible, present it in a transparent manner that will make HadCRUT and their CRUTEM4 look like the Iron Curtain.
At this point the database for RUTI is growing and one day it will kick ass, just wait and see.
I just encourage everyone to send me raw temperature data from old writings or databases.
– And then it’s really fun to work with 😉
K.R. Frank

They describe what they did here: http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
I think in the case of pre-1910 Australia, the thermometers were not in Stevenson screens. There is a graph of adjustments made for this that shows both positive and negative adjustments, but there are just so few, or maybe even NO, adjustments that are positive. They seem to want people to believe that there are adjustments both ways, but that is very hard to believe. The month thing is because monthly averages are used as a guide for uncertainty, and earlier years were very uncertain.
A fine example of how ridiculous these adjustments are is the Alice Springs record, which has a large crossover period between the earlier site/thermometer type and the current record, and they match for this period. But by the magic of their models, the record is cooled in the earlier period, even the crossover section, which now doesn’t agree with the current period, while the current period is left as is. The warming there is just an artifact of the adjustments made, nothing to do with thermometer readings.

Alpha Tango says:
May 3, 2012 at 6:09 am
Thing is though that these adjustments will come back to haunt them – they may have adjusted recent years up, but that means that the next few years will be cooler when compared to those recent years – assuming they don’t keep adjusting the new figures up :0. If they do then they will start to diverge from the satellite data in a way that will become apparent to even the dullest hack.
_____________________________
When you have several nasty winters and very cool summers and call it “The Warmest Years Evah” People are going to get really ticked.

Pamela Gray says:
May 3, 2012 at 6:55 am
The researchers who adjust data seem to have lost the gold standard research principle that raw data is the data and that error bars are the proposed problems with the raw data. More clearly, the demonstrated extent of adjustments should be used to determine error bars, not to adjust the raw data average….
_____________________________
Pamela, you really highlighted the entire problem with the temperature sets and CAGW. This deserves “Quote of the Week” at the very least or a prominent place on this website permanently.
The raw data is the data, and any problems should be illustrated with error bars and notes. Under no circumstances should raw data EVER be “adjusted” without notes detailing the specific reason for the adjustment. Even when a thermometer or other piece of equipment is shown to be out of calibration during a routine calibration check after being calibrated and put into service, there is no way to know when the drift occurred, so adjustment of the ERROR BARS and a note is the correct method of dealing with the problem.
I think the idea is important enough to deserve a permanent spot in the header.
An illustration of how the data SHOULD be presented is in this graph. (Gray area is error.)

Congratulations to Ed Thurstan.
On reflection, the story Ed uncovered has the potential to be much, much bigger than ClimateGate.
We are possibly looking at the wholesale and intentional falsification of the record by a major public agency. That’s big. Really big.
I only hope there is some organization with significant resources willing and able to pick up the investigation and push for a full accounting of these “adjustments”.
Ironically, the undoing of “global warming” will be sunshine — the very best form of disinfectant!

Although not referring to global warming specifically, Stephen Hawking, in his book “The Grand Design”, has a couple of very insightful comments regarding human nature. I think they sum up what I see as the main differences between alarmists and skeptics. By blaming CO2, we are blaming ourselves, and in kind have created a religion around it.
“In ancient times it was natural to ascribe the violent acts of nature to a pantheon of mischievous or malevolent deities. Calamities were often taken as a sign that we had somehow offended the gods. . . The human capacity for guilt is such that people can always find ways to blame themselves”. – Stephen Hawking
“. . . scientists are always impressed when new and stunning predictions prove correct. On the other hand, when a model is found lacking, a common reaction is to say the experiment was wrong. If that doesn’t prove to be the case, people still often don’t abandon the model but instead attempt to save it through modifications. Although physicists are indeed tenacious in their attempts to rescue theories they admire, the tendency to modify a theory fades to the degree that the alterations become artificial or cumbersome, and therefore “inelegant.” If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model.” – Stephen Hawking

ZT says:
May 3, 2012 at 8:05 am
Here’s the basic 6 point plan if you want to run society….

– – – – – –
ZT,
That is a keeper. Thanks.
I think there may be a point ‘0’ (zero) preceding your 1st point. I suggest it is:
Point 0. – Locate people in government, academia and the media who are ideologically gullible enough to believe in the concept of original sin; they will be necessary to voluntarily advocate your plans to control society.
: )
John

KenB says:
May 3, 2012 at 4:40 am
In the book published in 1913 by the Commonwealth Bureau of Meteorology, “Climate and Weather of Australia” by H.A. Hunt, Griffith Taylor and E.T. Quayle, there is at page 11 a mean monthly temperature and rainfall of all the Australian capitals (cities) including Adelaide . . .
Printed copy available from http://books.google.com/books?id=avCvNQAACAAJ&dq=editions:NYPL33433090738521

It seems to me that they and GISS are trying to get rid of the 20th century bumps that don’t align with CO2. They cool the clearly non-CO2-induced rise in the 30s while quashing the cooling of the 70s. In the future, the r-squared values for alignment to CO2 concentrations will improve. Like with Mann’s Hockey Stick. It wasn’t so much about showing that we’re warm now, it was about obliterating variation in the past that they can’t explain.

Pamela Gray says:
May 3, 2012 at 6:55 am
The researchers who adjust data seem to have lost the gold standard research principle that raw data is the data and that error bars are the proposed problems with the raw data. More clearly, the demonstrated extent of adjustments should be used to determine error bars, not to adjust the raw data average….

Exactly, which is why in another thread I said the error bars that are assigned are laughable. Yet even error bars on the raw data would be erroneous, because the system has never been calibrated, nor are the major uncertainties qualitatively defined, e.g. UHI for one thing. Oh sure, Parker/Jones et al. claim UHI is accounted for, but that is pure bunk.
Were the errors for the mass of thermometers used during the 90s ever resolved or properly accounted for? I haven’t seen anything to support that.

Alpha Tango says:
May 3, 2012 at 6:09 am
Thing is though that these adjustments will come back to haunt them – they may have adjusted recent years up, but that means that the next few years will be cooler when compared to those recent years – assuming they don’t keep adjusting the new figures up :0. If they do then they will start to diverge from the satellite data in a way that will become apparent to even the dullest hack.

Perhaps their adjustments are aimed only at affecting publications and irrevocable decisions in the near future.

What is notable here is not just the cooling of earlier temps and the bumping up of recent ones. There is a general reduction in the long-term variations, i.e. removal (reduction) of the natural cycles.
Note the pre-1940 warming gets reduced, the post-war cooling gets warmed, etc., the only exception being the recent warming, which gets boosted.
Similar play seems to be happening in the sea surface data: http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
HadSST3 is the other half of HadCRUT4.
As noted in the article on HadSST3, these adjustments are entirely speculative and sometimes actually involve rewriting parts of the temperature record that do not fit the preconceived ideas.

PS, also note how the significant late 19th century cooling gets warmed up.
This is a real trend that is seen in a number of different proxies such as 0-50m ocean temps and tide gauge readings, but does not fit the CO2 propaganda.

@John Whitman says:
Point 0. – Locate people in government, academia and the media who are ideologically gullible enough to believe in the concept of original sin; they will be necessary to voluntarily advocate your plans to control society.
Good point – It appears the Mayans, Aztecs, Druids, [SNIP: A swipe too far. We don’t really want to derail this thread, do we? -REP], Climatologists, Scientologists, etc. successively stumble on the fact that locating gullible, scared, vaguely guilty, compliant, ‘flocks’ is easier than one might suspect. Opportunistically inclined broadcasters, educators, and politicians needing gullible scared voters and audiences cannot help but participate in the frauds of the day.

“MattE says:
May 3, 2012 at 9:27 am (Edit)
It seems to me that they and GISS are trying to get rid of the 20th century bumps that don’t align with CO2. They cool the clearly non-CO2-induced rise in the 30s while quashing the cooling of the 70s.
##########################
As we collect more data the 30s WILL COOL.
The reason is pretty simple. In past datasets such as GHCN monthly, the spatial distribution of stations is skewed to the northern hemisphere. You know that complaint people have about the amount of coverage (too few stations)? Well, it does have an effect during certain times. The 30s
is one of those times. As we add more data from the SH during the 30s you will see the global average come down. Why, because of polar amplification. The trend tends to be higher in the NH than the SH. during the 30s the sample is skewed to the NH. As we add stations ( remember the argument that we need more stations ) from other archives as we add more SH data the 30s will come down a bit.
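Mosher’s sampling argument can be illustrated with a back-of-envelope sketch (Python; the hemispheric anomaly values are invented for illustration, not real 1930s numbers): if the NH ran warmer than the SH and most stations sat in the NH, the sampled “global” mean overshoots the true area-weighted mean, and adding SH stations pulls it back down.

```python
# Invented numbers: NH-amplified warmth vs a cooler SH in a 1930s-style decade.
nh_anomaly = 0.4   # degC
sh_anomaly = 0.1   # degC

# True global mean: the hemispheres have equal area, so weight them 50/50.
true_global = 0.5 * nh_anomaly + 0.5 * sh_anomaly

# NH-skewed sample: suppose 90% of stations are in the NH.
skewed_global = 0.9 * nh_anomaly + 0.1 * sh_anomaly

# Adding SH stations shifts the effective weights back toward 50/50,
# lowering the estimated 1930s average without any adjustment "plot".
print(f"area-weighted global mean: {true_global:.2f} degC")
print(f"NH-skewed sample mean:     {skewed_global:.2f} degC")
```

With these made-up inputs the 90/10 sample reads 0.37 °C against a true 0.25 °C, so filling in SH coverage mechanically cools the sampled decade.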

The major adjustment that needs to be made throughout the data is for the urban heat island temperature increase, which is not done.
One can argue as much as one wants; the urban heat phenomenon is real. Think of the locations where we have thermometers: in cities which grew, and a growing city has growing urban heat. It is logical that if one combines 2 or more such UHI islands, the resulting one will have a slightly higher urban heat island.
Now Steven Mosher and others will say that Berkeley proved that UHI is not creating any trend, that they even found a bit of cooling in the cities lately. But why?
Simply because the whole database is contaminated with urban heat.
There is practically no location in the database that is not influenced in one way or another by UHI. And why did they find a difference showing less growth in cities?
Because growth has stalled in many cities. Birmingham, Leeds, Frankfurt, Vienna, Budapest, name those cities, are no longer growing as they did between 1800, 1900 and 1950.
Exactly what Berkeley found, cities not warming as much as other locations since the 1950s, reflects the fact that demographically human settlements stop growing once a country reaches a certain level of energy comfort and civilization. UHI for American, Russian and European cities is indeed stationary or growing more slowly, since the growth of those cities stopped, some in the 1950s, some in the 1960s, some later.
So what Berkeley found is the UHI signal on the trend in the database, exactly the opposite of what they say. And this UHI growth should be removed.
Once that is removed, we bring terrestrial measurements closer to satellite measurements.
Of course no warmista wants to do it, as it will cut half or more of the temperature increase. This is why they all try to hide the UHI effect on temperature in every way. And this is why they rely only on locally measured data, which they can adjust how they please, and not on satellite data, which cannot be adjusted so easily.
And this is another reason why the Earth’s temperature is no longer rising “lately” as it did before: the population is no longer growing as it did before, so the UHI effect on the trend is getting less significant.

Artificial adjustments to the temperature database will tend to reach a plateau. Temperatures beyond that will flatten and not increase. This is in effect what has happened to the temperature data over the past 12 years. CO2 goes up but the temperature remains the same. The only way to suggest an urgency to act is to lower the earlier numbers and show a continued upward slope of the temperature curve. Those in control of the temperature database cannot artificially adjust the current numbers higher, but they can continue to adjust the earlier numbers lower. It seems to me the newer the database, the less credible the accuracy of the values. It looks like a new FOI needs to be filed right away while the gun is still smoking.

Mosher: “As we add more data from the SH during the 30s you will see the global average come down. Why, because of polar amplification. The trend tends to be higher in the NH than the SH. during the 30s the sample is skewed to the NH. As we add stations ( remember the argument that we need more stations ) from other archives as we add more SH data the 30s will come down a bit.”
So, we will not expect hemispherical trends to change in a specific direction with the addition of new stations?

Niels A Nielsen says:
May 3, 2012 at 1:18 pm (Edit)
Mosher: “As we add more data from the SH during the 30s you will see the global average come down. Why, because of polar amplification. The trend tends to be higher in the NH than the SH. during the 30s the sample is skewed to the NH. As we add stations ( remember the argument that we need more stations ) from other archives as we add more SH data the 30s will come down a bit.”
So, we will not expect hemispherical trends to change in a specific direction with the addition of new stations?
#############
How did you get that from what I wrote? You can get changes in the hemispheres as you add stations. It depends on where the stations are added and when they are added.
In general, if you look at the distribution of places that are missing measurements, the expectation would be: as you add stations, the past will generally cool and the present will warm. Generally. Of course the devil is in the details. But notions that there is some kind of “plot” to cool the past and warm the present do not hold up. My experience is that whenever I add new data (GHCN daily, GSOD, colonial records), the general result is the same. If the curves move at all, they tend to move cooler in the past and warmer in the present. Generally.
Of course none of these minor changes means anything. There was a LIA; the planet is warming. Changes in the global temperature index are EXPECTED when you add more data. We clamored to add more data. Now we have more data. Guess what? The answer changes. Slightly. The idea that you can look at changes in a series and DEDUCE a conspiracy isn’t “skeptical” thinking at all. The deep irony here is that many of us asked for code and data. We asked for more stations. Why? Because we were skeptical. Now that we have more data, some people run off and make all sorts of hyperbolic alarmist claims. “There’s a plot!”
Not very skeptical. Kinda funny. Kinda depressing.
Lesson: be careful what you wish for.

Lars P
“Now Steven Mosher and others will say that Berkeley proved that UHI is not creating any trend – they found even a bit cooling in the cities lately – but why?
Simply because all the database is contaminated with urban heat.
######################
1. Berkeley proved no such thing.
2. The last study Zeke, Nick Stokes and I did suggested a UHI trend from 1979-2010.
That trend, about .04C per decade, is consistent with the handful of regional studies of UHI, which all show a trend bias of .03C to .125C per decade over the same period. It is REGIONALLY variable. One UHI does not fit all. In the SH, UHI is much smaller. In China, Japan and Korea, building practice drives it higher.
The argument that the whole database is contaminated is lacking one thing: evidence.
Here is a good question:
Are you familiar with CRN, the network that Anthony approves?
Would you call those stations rural? Or do you disagree with Anthony?
Think hard, it might be a trick question.

Steven Mosher, you yourself, as a professional scientist and a logical person, already believe in conspiracy – conspiracy demands only ONE man or woman who, in a weak moment, decides that “the story” is more important than the whole truth… What are the odds that it is only one person? You would have to say, “Unlikely. People are imperfect.”
Either all men and women in science have no weak moments (HA!), or conspiracy theory is simply human nature and happens in ALL sciences, always to the detriment of knowledge and advancement…
If all science were under the white hot spotlight like climatology, a great many scientists would be disgraced, especially considering most sciences lack the vast and mostly unquestioning support of most major media…

Stephen Rasey says:
May 3, 2012 at 3:17 pm (Edit)
Re: Steven Mosher at 1:51 pm: think hard, it might be a trick question
Yes, indeed. It is a trick question. For Steven Mosher’s idea of “very rural” is hardly what I think of as very rural: See CA “Berkley ‘Very Rural’ Data Dec 20, 2011, 12:01 pm to Jan 1, 2012 12:36pm
#############
Typical response.
My question is: do you accept Anthony’s endorsement of CRN as good stations?
For me, I do the following:
1. People suggest (Pielke, for example) that land use changes are important. So I look at land use using standard metrics: a) nightlights, b) percentage of land that is concrete, c) percentage of land that is “built”. Is the land inside the administrative boundaries? Outside? Distance from the urban center, airport or no airport, you name it.
2. Try to combine all metrics.
3. Use a pattern-matching approach: you say CRN is rural, I’ll go find sites that match the rural characteristics.
4. Use stations that others point to as rural. Hey, we know there is UHI because somebody compared urban to rural, so I use those rural stations as a “rule” for what counts as rural.
5. Use population.
When I have those metrics I do sensitivity analysis.
Example: Hansen says if nightlights is less than 30, it’s rural. I don’t do that.
I ask:
What’s the difference between rural and urban IF we define rural as nightlights = 0, nightlights = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10…?
I do the same thing for any and every metric that people can throw at me.
BUT, it has to be an objective metric. Measurable. Not your sister’s opinion.
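The threshold sweep Mosher describes might look something like this (Python; the station metadata and trends below are randomly generated stand-ins with a small urban warm bias deliberately built in, so only the shape of the analysis is meaningful, not the numbers):

```python
import random

random.seed(1)

# Hypothetical stations: (nightlights brightness 0-60, trend in degC/decade).
# Synthetic data with a small built-in urban warm bias for illustration.
stations = []
for _ in range(500):
    nightlights = random.randint(0, 60)
    trend = 0.15 + 0.001 * nightlights + random.gauss(0, 0.02)
    stations.append((nightlights, trend))

# Sensitivity analysis: instead of fixing the "rural" cutoff at one value
# (e.g. nightlights < 30), sweep it and watch how the urban-minus-rural
# trend difference responds.
for cutoff in (0, 5, 10, 20, 30):
    rural = [t for nl, t in stations if nl <= cutoff]
    urban = [t for nl, t in stations if nl > cutoff]
    if rural and urban:
        diff = sum(urban) / len(urban) - sum(rural) / len(rural)
        print(f"cutoff {cutoff:2d}: urban minus rural trend {diff:+.3f} degC/decade")
```

The point of the sweep is robustness: a conclusion that survives every plausible cutoff is real; one that only appears at a single threshold is an artifact of that choice.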


Mosher: “And you see what I am saying GENERALLY. GENERALLY, on average, over a number of different datasets, when you add data the general effect will be to lower the trend in the 30s.”
That’s not what you said above: “As we add more data from the SH during the 30s you will see the global average come down. Why, because of polar amplification. The trend tends to be higher in the NH than the SH. during the 30s the sample is skewed to the NH. As we add stations ( remember the argument that we need more stations ) from other archives as we add more SH data the 30s will come down a bit.” You are talking about the _global average_ – not the trend. You say that the effect of polar amplification from adding stations should be to lower the the global _average_ in the 30’s compared to the present. Which makes sense to me. But that is not what the hadcrut4 shows.
You also said above “As we collect more data the 30 WILL COOL.” and “In general if you look at the distribution of places that are missing measurements, the expectation would be: as you add stations the past will generally cool and the present will warm.”
Surely, you are not going to tell me that by “the past” you meant the 30’s and by “the present” the 40’s 🙂
My point is that you don’t see the 30’s cooling in hadcrut4 compared to hadcrut3 as you claimed above.

cartoonasaur says:
May 3, 2012 at 2:52 pm (Edit)
Steven Mosher, you yourself, as a professional scientist and a logical person, already believe in conspiracy – conspiracy demands only ONE man or woman who, in a weak moment, decides that “the story” is more important than the whole truth…
##############
if you want to redefine the word conspiracy to mean purple, then I suppose I believe in purple.
I will put it simply. There is no evidence whatsoever that any one person consciously and with intent changed temperature data to give the result they wanted.
If you have the evidence, and understand what would count as evidence of this conscious intent then let me know.
The closest case would be the blip in SSTs

Mosher,
Can you show the trend of the dropped stations versus the newly added ones?
I haven’t seen this analysis done yet.
BEST is supposed to have included all of them for land temperatures. Is this true?

NHills says:
May 3, 2012 at 6:21 am
Regarding “Climate and weather of Australia by H.A. Hunt and Griffith Taylor and E.T. Quayle”, it looks like you can get it here: http://openlibrary.org/books/OL7221939M/The_climate_and_weather_of_Australia
pdf, epub, read online.
Thanks, yes that is the same perforated copy (University of California). I had printed off many of the pages while doing research on historical data/writings relating to floods and droughts in Australia, and within that report is a great deal of information on early documented reports before the Australian Bureau of Meteorology was formally set up. The historical hydrological information is also “interesting” in view of the ‘modern’ C.S.I.R.O./B.O.M. propaganda that came out supporting Flannery’s claim that we would see decreasing rainfall patterns in Australia (now flooding of course) instead of confirming the weather variability that we know applies to Australia.
In the more complete copy that you linked, it is interesting when you flip through to the end: the library lending record shows it was then held at UC Berkeley and appears to have been last accessed in the 1960s. I wonder if the copy is still in that library. (Mosher and Nick Stokes should be able to confirm!)
There is a wealth of information within that booklet for more detailed examination by those studying the historical Australian climate record. I commend the work being done by some dedicated Australians to try and undo the vandalism to our Australian climate history, especially those who have been successful in digging out old photographs of Stevenson screens in situ in the 1800s, confirming their use.
Also useful is the actual B.O.M. (Australia) 100-year official history “The Weather Watchers” by David Day, first published in 2007 (ISBN 9780522852752), and its neat graphical depictions of each year’s weather variability maps for the years 1900 to 2005. I commend the title as it fits neatly to this subject – who is now watching over our precious weather watching record. Thanks Anthony
for your dedication, and to those Australians who really care about integrity in climate science.

I also studied differences between CRUTEM4 and 3 and agree with most of what Ed writes. The most obvious difference is that many new stations have been added, while many others have been dropped. There are now 5549 stations in the set compared to 5097 in CRUTEM3. 628 new stations have been added while 176 stations have been discarded. All the new stations are in far northern latitudes around the Arctic. Many stations in the US have been dropped.
Anyone who looks into this data should also be aware that many stations have changed numbering! This can cause huge confusion when comparing station by station. A list of all the renumbering can be found at http://clivebest.com/data/changed.txt. Details of where the stations are located are at http://clivebest.com/blog/?p=3493 and the next post.

Clivebest
Thank you for that note. I missed your original post, and I did not think of the possibility you raise.
I have just compared the two databases for exact matches on Lat/Long. I get 122. You got 124.
This means that those 122/124 stations should get aliases so they will be treated as matching stations. I don’t think it changes the tenor of my argument much, if at all, when I include the effect of deletions/replacements. But I will nut out how to do it over the next few days, and figure out what other implications it has before I introduce aliases.
Thank you.
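The exact lat/long matching step described above can be sketched as a dictionary intersection. The station records here are invented placeholders, not real CRUTEM entries:

```python
# id -> (lat, lon); invented placeholder records, not real CRUTEM entries
crutem3 = {946720: (-34.9, 138.6), 123456: (51.5, -0.1)}
crutem4 = {946721: (-34.9, 138.6), 123456: (51.5, -0.1)}

# Invert each inventory to coordinate -> id, then intersect on coordinates.
loc3 = {coord: sid for sid, coord in crutem3.items()}
loc4 = {coord: sid for sid, coord in crutem4.items()}

# Stations at identical coordinates but with different IDs get an alias.
aliases = {loc3[c]: loc4[c] for c in loc3.keys() & loc4.keys() if loc3[c] != loc4[c]}
print(aliases)   # {946720: 946721}
```

The alias table can then be applied to one ID column before the station-by-station comparison, so renumbered stations are no longer miscounted as a deletion plus an addition.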

@Mosher: The argument that the whole database is contaminated is lacking one thing: evidence.
…
My question is do you accept Anthony’s endorsement of CRN as good stations?
Here are some of the things Anthony wrote about CRN on April 7, 2012. (You provided no link, so I attempted to find something…)

The goal of [Anthony’s] project is to provide a publicly accessible one-on-one live comparison of temperatures between GHCN and other hourly reporting stations from the older surface network, to the new Climate Reference Network (CRN). The impetus was the heat wave in Texas last year, where I noticed that while there were a number of record setting high temperatures, many of them were higher than temperatures seen in the CRN. This suggested to me that UHI and siting effects play a role in elevating such temperatures. [evidence?] ….
Basically, the CRN is NCDC’s response to their realizations of problems in the existing climate observing network, something that I’ve long since identified in my own surfacestations.org work,….
…. Given the advanced way it is measured, there’s no need to adjust the CRN station data whatsoever
Overall I’m pleased with that CRN project and the USHCN modernization, and I endorse it. But little of the data from it is finding its way into the public realm, and I aim to change that.
…
The first job was to arrange for and to program data ingestion. Initially it looked like the project was designated to be done with an Internet based FTP fetching, which can be fraught with problems related to network delays, timeouts, server load, etc. Fortunately it was discovered that the entire CRN data set was delivered on an hourly basis via one of NOAA’s satellite feeds…..
….
REPLY: Yes they are UHI free, I’ve visited a few around the country and they live up to that claim – Anthony

I note that there are only 4 instances of the word ‘rural’ in the thread, none by Anthony. I also note that there is a comparative handful, at present only 114, of these CRN stations.
I also note the following from NCDC on the need for CRN:

On the other hand, to ensure credibility of future climate change assessments, it is necessary for the scientific community to acknowledge that a crisis exists in the quality of our long-term observing systems – a crisis to which it must respond.

I accept Anthony’s endorsement of the CRN project as an improvement of USHCN modernization. I even accept that the few stations that he visited are absent of UHI — today. How well that quality is maintained over time is anyone’s guess. That Anthony is undertaking a privately financed, but sorely needed, project to disseminate the data to the public is not one of CRN’s strong points.
Ok. I accept Anthony’s “endorsement”. CRN is good. It is an improvement. What does that say about the crisis of non-CRN data that comprises the bulk of the whole database used in analytical work?

Lars P. says:
May 3, 2012 at 12:55 pm
/////////////////
I find your comments to be on the right track.
The only important data is ocean temperature, but this is deficient due in no small part to woefully inadequate coverage and a short time scale.
All land based temperature measurements should be ditched. The various data series are now too contaminated to shed any useful light on anything, and due to loss of original raw data it would appear that they are now beyond repair. BEST should have gone back to raw data and only used data that they were 100% certain was raw and had not already been the subject of adjustments. Whenever there has been an equipment change, siting change etc., the record should stop. There should be no attempt to make it a continuous record by making some adjustment to supposedly account for the change that took place. The state of the various data sets is such that we are today essentially merely reviewing trends induced by adjustments rather than trends which arise truly from the data.
We should only be looking at satellite data. This shows no warming these past 30 or so years, just a step change around the super El Nino of 1998. This data strongly suggests that there has been no CO2 induced warming during the satellite period.
This raises a number of interesting questions, such as: why has the heat released around 1998 not dissipated? Is it because so-called GHGs have ‘trapped’ or delayed its dissipation, or is there some other explanation as to why temperatures remain high? What conditions need to be met for this heat to dissipate, over what length of time can we expect the dissipation to take place, and to what base level will temperatures return? Unfortunately, these questions are difficult to answer due to lack of knowledge and understanding of the many natural processes involved.
O/T: the Daily Mail is running an article on a new report suggesting that Greenland glacier melt is taking place at a far slower pace than previously thought, and as a consequence sea level rise due to Greenland glacier melt has been vastly over-estimated.

Reading the article prompted me to download the data and do some digging. Rather than trust the data, I generated my own header files by grepping the data files. My V3 header file matched Hadley’s. My V4 header file did not match Hadley’s. A quick check showed that the Hadley V4 header file has several extra stations in Poland that do not show up in the Hadley V4 data file, let alone the Hadley V3 data file. Wattsupwiththat???
The ID numbers are 121050 121500 121950 122500 122400 125200 125300 125500 125850 125950 126250 126950
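A minimal sketch of the header/data cross-check described in that comment. The file layouts assumed here (header lines beginning with the station ID; data files marking each station with a `Number=` line) are illustrative guesses, not CRUTEM’s documented format:

```python
def ids_in_header(lines):
    """Station IDs from header-style lines beginning with the ID (assumed layout)."""
    return {int(line.split()[0]) for line in lines if line.strip()}

def ids_in_data(lines):
    """Station IDs from data-style lines of the form 'Number=<id>' (assumed layout)."""
    return {int(line.split("=")[1]) for line in lines if line.startswith("Number=")}

# Invented sample lines, not real CRUTEM records:
header = ["121050 52.2 20.9 WARSZAWA", "946720 -34.9 138.6 ADELAIDE"]
data = ["Number=946720", "1857 18.4 17.9"]

# IDs listed in the header but with no corresponding data block at all:
orphans = ids_in_header(header) - ids_in_data(data)
print(sorted(orphans))   # [121050]
```

The reverse difference (`ids_in_data(...) - ids_in_header(...)`) would catch the opposite inconsistency, data blocks with no header entry.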

Re: Contamination & evidence.
Thank you, Ed & Warwick, for this large computational exercise and code. My contribution is smaller, but demands similar answers. Here is a slight variation of a post of some data I put on Jo Nova’s site earlier today, ref Australia, Darwin.
You guys who are arguing need to look at evidence. Back to basics. You are talking about a temperature rise.
The BoM is talking about a temperature rise. They have released a new data set of temperatures for about 100 stations, called Acorn. It’s online. Then, you can go online to a different part of the BoM home page and get another version.
I’ll show you both. Then I’ll ask you which version I should use in this example (Darwin): http://www.geoffstuff.com/Darwin3preExcel2010.xls
There are a couple of pages of data here, so you can DIY. However, if you look at the graphs on sheet 1, one of the products shows a warming Darwin, the other a cooling Darwin. If this exercise were studied and repeated all over the world, we might be able to conclude that most of the warming is manufactured by fiddling with numbers. I’ve done it for more Australian stations, with similar conclusions.
If you are expert enough to comment on comparisons, then you should be expert enough to tell us which of the graphs, both being handed out today by our BoM, should be used in constructing comparisons. If you can’t do that, then you have failed to use due diligence in your science.

As someone who has been a committed sceptic for the last 10 years, a supporter of Anthony and all the other heroes (and I consider myself a modest one, putting my arse on the line, along with Andrew Bolt down under), and given the climate of persecution on this matter, I am curious about this abrupt response to a post of mine yesterday:
“[snip OT . . read Tips & Notes for what you are looking for . . kbmod]”
I was simply asking what had happened to Goddard’s blog. Maybe there is some politically correct line that I have crossed.
[REPLY: It was Off-topic, but there is nothing wrong with Steve Goddard or his blog. There was a disturbance not long ago that you can read about here. -REP]

I was simply wondering what this was about: http://www.real-science.com/ [it seems the blog has been the subject of a conflict and the current postings seem to be part of that, it is apparent that Mr. Goddard is not in control at the moment. . . I apologize for my rather curt response to you earlier and I never intended any discourtesy . . kbmod]

Steven Mosher says:
May 3, 2012 at 3:47 pm
Why has the extra data from the SH not been available before? Many decades of SH data don’t just appear out of nowhere. For long records they would have been available for ages. I don’t see any difference between this and cherry picking the data and moving the chairs around the room, to show more warming to counter the amount that didn’t occur. With that many stations available (thousands) it is very easy to cherry pick 500+ of them every decade or so for confirmation bias.

Matt G says:
May 4, 2012 at 9:55 am (Edit)
Steven Mosher says:
May 3, 2012 at 3:47 pm
Why has the extra data from the SH not been available before? Many decades of SH data don’t just appear out of nowhere. For long records they would have been available for ages. I don’t see any difference between this and cherry picking the data and moving the chairs around the room, to show more warming to counter the amount that didn’t occur. With that many stations available (thousands) it is very easy to cherry pick 500+ of them every decade or so for confirmation bias.
#####################
Why has the extra data not been available before?
Some of it has been available but nobody saw the need to actually use it.
Let’s see if I can explain. GHCN Monthly was a project started long ago. They set out to look at all the records they could and select a subset. After the project was complete there was no real effort put into collecting more data FOR THAT COLLECTION. Over time people stopped giving reports to the folks who run that data. Efforts concentrated on ‘homogenization’.
CRU takes about 98% of its data from GHCN; 2% it got from other sources. There was no effort to get more data because nobody saw the need. Plus, it’s really boring work and you cannot write a good science paper based on adding a few hundred stations here and there.
To some extent, with Climategate, people saw the opportunity to finally do this work and assemble the data that has always been there.
Meanwhile other data collections continue to be made: GHCN daily, GSOD, the colonial records. They have been there but nobody wanted to undertake the effort to slog through it all.
My experience: I finished slogging through GHCN daily. It starts with over 80,000 individual files. I worked on it for months. In the end, guess what? The answer is the same. Try to publish that as a paper. Here is another experience: Environment Canada, 7-8K sites, a couple of months of work.
A brutal problem, because they don’t have their data on an FTP site. Ever scrape 7000 web pages?
Result? The answer doesn’t change.
GSOD data. Thousands of sites. Answer? The same.
When I say the answer doesn’t change I mean in any scientifically interesting way. A bump here, a valley there, but it’s still warming. Little changes in the bumps and valleys are not scientifically interesting to Nature or Science. No scientist would waste his time on it.
As we sit here there are millions of boxes of data that have not been digitized. The science and the statistics tell us that the answer in the unopened boxes will NOT be significantly different from the answer calculated from the sample we have. Here is what I know: give me ANY latitude, longitude and altitude and I can tell you the temperature within x.y C (wait for the publication). The information in the unopened boxes will tighten that range, but long term means will not change in any scientifically interesting way. That means doing a bunch of work, and it is hard to publish a paper that confirms what we already know.
Skeptics (me included) thought more data might change the answer. It didn’t. Sorry.
This is part of the reason people like McIntyre and me suggested that the temperature series be taken over by an independent agency. An agency that doesn’t have to publish science papers. An agency that compiles all the data into one official series. Some steps have been taken in that direction.
There are two efforts aimed at this. One is Berkeley Earth and the other is being headed up by Peter Thorne. Both of these groups don’t have the luxury of not publishing.
As for cherry picking series: that is SUPER EASY to test. Let’s be objective.
Hypothesis: CRU cherry picked series.
How do I test that? Well, easy peasy.
A. I duplicate CRU’s method and pick different stations.
B. I create a better method and USE ALL THE DATA.
A fair test, it seems to me.
What is the answer? The answer you get from using CRU’s method (which I emulated and verified) with different data is the same. And the answer you get from a better method (least squares or kriging) and ALL THE DATA… is the same.
Conclusion: either CRU did not cherry pick, or if they did it doesn’t change the answer.
Of course at the micro level, if you compare one needle in the haystack to another needle in the haystack, you will find differences. But in the end the answer doesn’t change. Finding out that the 1930s were .04C cooler than thought ISN’T INTERESTING. Adding Arctic stations and finding that the current estimate goes up by .05C ISN’T INTERESTING, especially when you consider what the error bounds were TO BEGIN WITH. The changes are not interesting, not scientifically relevant. CO2 still causes warming. The interesting question is HOW MUCH?
Studying the temperature record cannot answer that question; in fact we know it can’t. By itself the temperature record can tell us precious little about sensitivity. Sensitivity is underdetermined by the short temperature record. At best you can get an estimate of the TRANSIENT climate response out of the series, but even there you have big error bars.
The record is scientifically boring. Minor changes in it are a little less boring.

The adjustment trend also shows in a comparison to HadSST3: http://img850.imageshack.us/img850/2079/hadcrut4vshadsst3.png
The sea surface temperatures have slightly lower variance, but does it make sense that the averages diverge by 0.1 K? Before ~1985 the difference is pretty random and close to zero, but towards the end it is consistently positive.

Steven Mosher says:
May 4, 2012 at 1:24 pm
Thank you for your response; I can’t disagree with the majority of what you have responded with. My only small concern is that although the long term trend may hardly change whatever general set is used, including all the data, the short term periods or individual years do chop and change, and can make different years cooler or warmer than others. Hence, the last several years were made warmer in HadCRUT4 compared with HadCRUT3.
Air temperature alone can’t tell us about sensitivity, whether short or medium term. Other factors, and how they behave during these time frames, have to be considered to be able to find a reasonable sensitivity. The longer the period goes on, the more the sensitivity is exposed, so whether it contributes a tiny or a large impact will become known. Anyway, I agree with Richard Lindzen that sensitivity is low. The change in global cloud albedo is having a large effect on temperatures relative to the increase over the last century.

Matt.
Understand what the definition of sensitivity is:
What is the change in temperature (C), given a change in Forcing (Watts)
so if the sun goes up by 1 watt, how much does the temperature go up?
Here is a simple analogy. You are sitting in your car. You slam the pedal down and apply 100 horsepower. What is the velocity you will see from this forcing?
Well, it is time dependent, and it depends on your tires, on the headwind, on the track, on your power train losses. Lots of things. Now, hold all those variables constant, apply 100 horsepower and let the car get to a final top speed, where it is no longer increasing.
Let’s say it’s 105 mph. That’s the response we talk about when we talk about equilibrium climate response. When the sun goes up by one watt, how does temperature respond?
Well, we don’t look at the very next day. The very next day could be cloudy. We have to look over a long time scale. We can’t control all the other variables, and there are feedbacks. Some act on a short scale, some act on a long scale. Back to the car: suppose I look at the car after 10 seconds. There I’m looking at the transient response. First I have to overcome inertia and that takes time. Maybe I spin the tires, crap. Anyway, the transient response will be related to the equilibrium response but it’s not exact.
With the climate you have a system with huge inertia. The transient response is much smaller than the response after decades or centuries. Looking at the 160 years of land surface data you can get a sense of the transient response. Sometimes the tires spin. Sometimes you hit a head wind. It takes centuries to reach the equilibrium response.
Lindzen? If you look at the data he looked at, you can only get the transient. Same thing with volcanoes. Transient.
I find it odd that people have such certainty over sensitivity when it comes to Lindzen’s results.
I wouldn’t discount him out of hand, but his results are far from certain.
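The transient-vs-equilibrium distinction in the car analogy above is often illustrated with a one-box energy-balance model, C dT/dt = F - lambda*T. A sketch with illustrative round-number parameters, not tuned estimates:

```python
import math

F = 3.7           # forcing, W/m^2 (roughly a CO2 doubling)
lam = 1.2         # net feedback parameter, W/m^2 per K (illustrative)
C = 100.0         # effective heat capacity, W*yr/m^2 per K (deep-ocean inertia)

T_eq = F / lam    # equilibrium response, K
tau = C / lam     # e-folding time, years

def T(t):
    """Temperature response t years after the forcing is switched on."""
    return T_eq * (1 - math.exp(-t / tau))

for t in (10, 50, 200):
    print(f"after {t:3d} yr: {T(t):.2f} K of the {T_eq:.2f} K equilibrium response")
```

With a large heat capacity the 10-year response is only a fraction of the equilibrium value, which is the point about short records underdetermining sensitivity: the same transient is compatible with many (C, lambda) pairs.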

Steven – taking your analogy for climate sensitivity to a 1 watt increase in solar energy further: we know that 3.5 billion years ago the Earth had liquid oceans while the solar radiation was about 80 watts less than today. The sun gradually brightened by ~0.02 watts per million years. The Earth’s temperature has changed very little since then; in fact it seems to have cooled – otherwise we would not be here able to discuss it. How can it then even be possible for water feedbacks to be positive? Recent geological evidence rules out CO2 or methane GHG as the cause. Water must therefore act to stabilise temperatures on Earth in the long term. Feedbacks must be negative, around -2 W/m2/K, to explain this. So we can expect AGW < 1 degC – not really a big deal.

Steven Mosher says:
May 4, 2012 at 2:47 pm
This link below also supports why air temperature alone is no good for deducing sensitivity, and is also a good description of the situation. We have to distinguish between regular ocean cycles like ENSO that result from regularly changing cloud levels in the tropics. The feedback regarding CO2’s effect on clouds has to be established, and this can’t be achieved by air temperature alone: http://www.sciencebits.com/OnClimateSensitivity
Whether it would take centuries to respond is very unlikely; the extra CO2 molecules only absorb a tiny amount of energy and this happens almost immediately. There has to be an almost immediate response, taking other factors into account, or it is not there to be accounted for. Finally, the step up in global temperatures after each significant El Niño shows the majority of the warming can’t be caused by CO2, unless CO2 caused the El Niños.

Matt, you fundamentally misunderstand the warming mechanism of CO2. It has NOTHING to do with the absorbing of heat or the heat capacity of CO2.
If the atmosphere had no absorbing GHG gases, the earth would radiate from the surface.
But we have gases which absorb, reflect, and retransmit longwave radiation. As a result the earth radiates from about 5-6 km. This is called the effective radiating altitude.
Because the earth has a lapse rate (higher is colder), that means the earth radiates to space from a colder place than the surface. Physically, since colder bodies lose energy via radiation more slowly than hotter bodies do, the earth radiates more slowly than it would if it radiated from the surface. That means the surface is warmer than it would be otherwise. It cools less rapidly than it would with no GHG atmosphere. The silvered liner of a thermos doesn’t “trap” heat by its heat capacity; it DELAYS the loss of energy via radiation. Your coffee is not WARMED by the reflection of radiation; it loses heat less rapidly.
When we add CO2 to the atmosphere we raise the level at which the earth radiates to space.
So, over time, that altitude increases and the earth radiates to space more slowly. That means over time the surface cools less rapidly. This effect is small in day to day terms, small in monthly, yearly and even decadal terms. Yes, the earth continues its big swings with El Niño and other internally driven variations, but over long periods of time the increased opacity of the atmosphere results in a raising of the ERL. A higher ERL means we radiate from a colder place in the atmosphere. That means slower losses of energy to space. That means a surface that cools less rapidly. It’s got NOTHING to do with the heat capacity of CO2.
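The arithmetic behind the radiating-altitude argument can be checked with textbook round numbers (a ~6.5 K/km lapse rate, a 288 K surface, Stefan-Boltzmann emission); the specific ERL heights below are illustrative choices, not measured values:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
T_SURFACE = 288.0    # mean surface temperature, K
LAPSE = 6.5          # lapse rate, K per km

def radiating_temp(erl_km):
    """Temperature at the effective radiating level, assuming a constant lapse rate."""
    return T_SURFACE - LAPSE * erl_km

for erl in (5.0, 5.15):   # a ~150 m rise in the ERL
    T = radiating_temp(erl)
    print(f"ERL {erl:.2f} km: T = {T:.2f} K, emission = {SIGMA * T**4:.1f} W/m^2")
```

With these round numbers, raising the ERL by ~150 m cuts the outgoing emission by a few W/m^2, the same order as the canonical forcing quoted for a CO2 doubling, which is why a seemingly small altitude shift matters.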

I don’t know where else to ask this. I watched a documentary a couple of days ago on the reason the Maya civilisation collapsed – Dick Gill spent 20 years exploring it and shows it was drought. The Mayan area has no natural lakes, rivers or underground water; it relies completely on water collected during the summer rainy season, and around 800 AD this failed. During the telling of it he said that normally the rains come because of a particular high pressure system which more or less stays put, somewhere in the Atlantic I think, but that this moved considerably further south than it normally does, which altered the climate by making it colder in the north, which in turn didn’t bring the rains up into the area. All this to ask: is this the mechanism which triggers the El Niños? If so, what moves the high pressure system? http://topdocumentaryfilms.com/ancient-apocalypse-maya-collapse/
The graphic and that explanation are towards the end of the docu; sorry, can’t say exactly, but around forty minutes in.

Steven Mosher says:
May 4, 2012 at 6:13 pm
I didn’t explain it well enough.
The CO2 molecule absorbs energy then releases energy, absorbs energy then releases energy (a continuous cycle). If it can’t be absorbed it is reflected or retransmitted. While it is doing this it delays the energy loss to the upper atmosphere. One CO2 molecule can’t delay more energy than its heat capacity allows at that frequency at any one moment.

Matt, you still don’t get it. The effect is caused by increasing the opacity of the atmosphere.
At a given concentration of CO2 the earth will radiate to space from a given altitude. That altitude is called the ERL.
Above this altitude the concentration of CO2 and other gases is such that the radiation escapes to space. Call this concentration X.
When you add CO2 to the atmosphere, the altitude at which concentration X occurs increases. That means the earth radiates from a HIGHER altitude. For example: at 280 ppm the altitude from which the earth radiates is, say, 5.5 km. Above 5.5 km the concentration is such that the radiation escapes to space; call that concentration X. When you add more CO2, the altitude at which X occurs is higher. The earth radiates from a higher, colder place. In short, the amount of CO2 above the ERL is constant. It’s the amount that allows radiation to escape freely to space. Raise the total CO2 in the atmosphere, and the ALTITUDE at which the ERL occurs goes up.
That means the rate of energy loss decreases. The surface cools less rapidly as a result.
You are confusing yourself.
Start here: http://www.aos.wisc.edu/~aos121br/radn/radn/sld012.htm
Then graduate to here: http://geosci.uchicago.edu/~rtp1/papers/PhysTodayRT2011.pdf

While Crutem4 is now on woodfortrees, there is a slight problem: it only goes to the end of 2010. Mathematicians may wish to improve on my crude analysis, but for what it is worth, here is what I did. I took the slope of Crutem3 from September 2001 to December 2010. Then I found the slope from September 2001 to March 2012. The drop for the additional 15 months was 0.0083. The slope of Crutem4 from September 2001 to December 2010 was 0.0083. So if I am allowed to assume that when Crutem4 is completely updated there would be a similar drop, there will be NO temperature change in land for the past 10 years and 7 months. Granted, it is not over 15 years like Hadcrut3, Hadsst2 and RSS. But it is long enough to adopt a bit of a wait-and-see attitude before spending billions that may not be needed. http://www.woodfortrees.org/plot/crutem4vgl/from:2001.66/plot/crutem4vgl/from:2001.66/trend/plot/crutem3vgl/from:2001.66/plot/crutem3vgl/from:2001.66/to:2011/trend/plot/crutem3vgl/from:2001.66/trend
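The slope comparison in that comment is an ordinary least-squares trend fitted over two windows of a monthly series. A sketch with synthetic anomalies standing in for the CRUTEM data (the real numbers would come from the woodfortrees series linked above):

```python
import numpy as np

# Synthetic monthly anomalies, Sep 2001 .. Mar 2012, standing in for real data.
rng = np.random.default_rng(1)
months = np.arange(2001 + 8 / 12, 2012.25, 1 / 12)
anoms = 0.4 + 0.001 * (months - months[0]) + rng.normal(0.0, 0.1, months.size)

def trend(t, y):
    """OLS slope in degC per year (first coefficient of a degree-1 fit)."""
    return np.polyfit(t, y, 1)[0]

full = trend(months, anoms)
to_2010 = trend(months[months < 2011], anoms[months < 2011])
print(f"slope to Dec 2010: {to_2010:+.4f} C/yr, full series: {full:+.4f} C/yr")
```

Comparing the two slopes, and the drop between them, is exactly the exercise described; whether that drop carries over from Crutem3 to an updated Crutem4 is the assumption the comment flags.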

Mosher, I thought you said you meant the _trend_ in the 30’s would go down in Hadcrut4. Now you go back to repeating that the decade was cooler than thought. But the average anomaly is slightly (about 0.015C) higher in the 30’s in hadcrut4 compared to hadcrut3. It’s not a big deal; I’m just puzzled you keep ignoring that.

“Adjustments of this magnitude can be seen through most of the CRUTEM4 database, but especially in Europe and adjacent areas. With data from such early years being so sparse, it is difficult to see both why such adjustments have been made, and the basis on which they were made”
“There are strange repeating adjustments”
Ed, you are certainly pointing at big flaws in CRU’s methodology to prepare the data for analysis. To my understanding, CRU further blows their own credibility with the new version.
Great report!

P. Solar says:
May 3, 2012 at 10:43 am
“What is notable here is not just cooling of earlier temps and bumping up or recent ones. There is a general reduction in the long term variations, ie removal (reduction) of the natural cycles. ”
Exactly – to better fit models – not the other way around …

Steven Mosher says:
May 3, 2012 at 1:51 pm
“1. Berkeley Proved no such thing.”
—————————————————————————–
Steven, other people understand things differently. Here from Tamino’s blog at the time:
berkeley-team-says-global-warming-not-due-to-urban-heating:
” the results from the Berkeley team have confirmed that the other main global temperature estimates (NASA GISS, NOAA/NCDC, and HadCRU) got it right, and that station siting/urban heat island effects are not responsible for any of the observed temperature increase. The real reason all these analyses (including Berkeley’s) show temperature rise is: the globe is warming.”
Berkeley themselves do not go so far, but leave room for this interpretation:
“We observe the opposite of an urban heating effect over the period 1950 to 2010, with a
slope of -0.19 ± 0.19 °C/100yr. This is not statistically consistent with prior estimates, but it
does verify that the effect is very small,”
——————————————————————————
Steven Mosher says:
2. The last study Zeke, nick stokes and I did, suggested a UHI trend from 1979-2010
That trend, about .04C per decade, is consistent with the handful of regional
studies of UHI which all show trend bias of .03C to .125C per decade over the same
period. It is REGIONALLY variable. One UHI does not fit all. In the SH, UHI is much
smaller. In china and japan and Korean building practice drives it higher.
The argument that the whole database is contaminated is lacking one thing: evidence.”
——————————————————————————
As I posted above, the UHI trend differs per region but also per period of time, depending strongly on the time of urbanisation – the time when the city grows – which you seem to ignore.
I have not seen the study you mention, but I am not surprised by the relatively small UHI effect in the period 1979-2010.
You fail to address the influence of demographic development, which is in my view very relevant.
You do not specify where it is “regionally variable”, but let me make a guess: it is more relevant at high latitudes. And as we do not have many cities in the south at high latitudes, it is more relevant for the northern hemisphere high latitudes. If this is true, it is again an argument for my hypothesis that the UHI effect during growth plays a relevant role in the measured data from North American, European, and Russian cities in the 19th and early 20th century.
You say that the evidence is lacking – but the simple existence of UHI is itself first evidence for it. To deny any effect of UHI during growth would require a UHI effect that appears instantly and then stays constant forever. Is this your hypothesis of how UHI works?
Furthermore you ignore my argument above: the fact that including cities shows a slower/different trend after the 1950s is an important hint. The fact that demographic changes also occurred after this point – especially in the northern regions where the UHI effect is more relevant (the other hypothesis) – again points towards it. No further city growth, less warming.

The simple, direct answer to the headline question is: Yes.
All climate data has been “fiddled with”. The more important questions appear to be: why, how and to what purpose.
“Enquiring minds want to know.” FOIA might eventually allow them to know, years from now.

Which data set should be used when analyzing these predictions versus reality? I was given the impression that CRUTEM3 was not good because it did not cover the polar regions well. Presumably, we now have the cream of the crop with both CRUTEM4 and BEST. Is one better than the other? I checked out the year 1996 with both and there are HUGE differences! For example, with BEST, January 1996 was 0.282 and August was 1.095, for a rise of 0.813 between January and August of the same year. However, with CRUTEM4, January 1996 was 0.208 and August was 0.220, for a rise of only 0.012 between January and August of that same year. The net difference is 0.801 C, which is supposedly the total warming since 1750. See http://www.woodfortrees.org/plot/best/from:1996/to:1997/plot/crutem4vgl/from:1996/to:1997
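The arithmetic in that comparison can be checked directly from the four anomaly values quoted above (a minimal sketch; the numbers are simply those cited from woodfortrees):

```python
# Anomaly values (°C) quoted above for January and August 1996.
best_jan, best_aug = 0.282, 1.095        # BEST land
crutem4_jan, crutem4_aug = 0.208, 0.220  # CRUTEM4

best_rise = best_aug - best_jan              # within-year rise per BEST
crutem4_rise = crutem4_aug - crutem4_jan     # within-year rise per CRUTEM4

# How far apart the two datasets are over the same eight months.
disagreement = best_rise - crutem4_rise
print(round(best_rise, 3), round(crutem4_rise, 3), round(disagreement, 3))
# → 0.813 0.012 0.801
```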

Werner,
Which dataset should be used when comparing predictions versus reality?
1. Understand what the prediction actually is. Typically people predict a decadal trend, fully understanding that monthly and yearly figures are going to be noisy.
2. There is no need to pick ONE. Standard practice would be to compare the prediction (.2C per decade) to ALL observation datasets. Note the results and proceed accordingly.
Monthly temps are noisy. That is why you look at longer periods.
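The point about noisy monthly figures versus a decadal trend can be illustrated with synthetic data (a sketch only – the 0.2 C/decade figure is the predicted trend quoted above, and the noise level of 0.2 C is an assumed value for illustration):

```python
import random

# Illustrative sketch only (synthetic data, not a real temperature series):
# a 0.2 C/decade trend buried in monthly noise of assumed size 0.2 C.
random.seed(0)
months = 360                                   # 30 years of monthly values
years = [1980 + m / 12 for m in range(months)]
true_trend = 0.02                              # C per year = 0.2 C per decade
temps = [true_trend * (t - 1980) + random.gauss(0, 0.2) for t in years]

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# A 1-year window gives an unstable slope; 30 years recovers ~0.2 C/decade.
print("1-year trend (C/decade): %.2f" % (ols_slope(years[:12], temps[:12]) * 10))
print("30-year trend (C/decade): %.2f" % (ols_slope(years, temps) * 10))
```

Over a single year the fitted slope is dominated by the noise; over the full 30 years it converges on the underlying trend, which is why trend comparisons are made over decades rather than months.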

Steven Mosher says:
May 5, 2012 at 2:36 pm
“Monthly temps are noisy. That is why you look at longer periods.”
Thank you. However, my point was not so much about the monthly noise but rather the huge difference between two land data sets for the same months.

clivebest says:
May 6, 2012 at 6:43 am
“Stephen – taking your analogy for climate sensitivity to a 1 watt increase in solar energy further: we know that 3.5 billion years ago the Earth had liquid oceans while the solar radiation was about 80 watts less than today. The sun gradually brightened by ~0.02 watts/million years. The Earth’s temperature has changed very little since then; in fact it seems to have cooled – otherwise we would not be here, able to discuss it.”
clivebest, what people do not take into account here is that the earth has lost a quarter of its water. This gradual loss of water compensates for the increased brightness of the sun. The oceans define the earth’s temperature. See http://sciencenordic.com/earth-has-lost-quarter-its-water
There are several attractors where the earth with its oceans reaches its thermal equilibrium.
These are defined by specific behaviours of the oceans. The first is the oceans’ high capacity to absorb heat throughout a three-dimensional volume (gradually, down to 200 metres in depth) while losing heat only at the surface.
The second is evaporation, which removes a great deal of heat and prevents the surface from reaching higher temperatures – the ocean will not radiate as much heat as a rock would.
Then there is the ice at the surface, which creates an insulating sheet over the ocean in the parts that stay dark for too long.
Then of course come the clouds: if there is too much evaporation, clouds form and shield the ocean from further heat intake; add heat transfer through enthalpy, and so on.
Only after all these come the “greenhouse gases”, with heat transfer through radiation, where carbon dioxide is a small player.
With the earth losing some of its water, it can capture and redistribute less energy, as the continents warm much more under the sun and radiate the heat away – radiation increases with T**4.
This is why, globally, the climate is getting slowly colder even though the sun has brightened.
Of course, how the currents circulate the stored heat also plays a major role.
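The T**4 point is the Stefan-Boltzmann law for blackbody emission; a quick numerical sketch (the two temperatures here are illustrative values, not from the comment):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_k):
    """Blackbody emission per unit area (W/m^2) at absolute temperature temp_k."""
    return SIGMA * temp_k ** 4

# Because emission scales as T**4, a modest warming of a dry surface
# produces a disproportionately large increase in radiated heat.
p_cool, p_warm = radiated_power(288.0), radiated_power(298.0)
print(p_warm / p_cool)  # ~1.146: a ~3.5% temperature rise boosts emission ~14.6%
```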

Steven Mosher says:
May 5, 2012 at 12:28 am
I was discussing the atomic stage before even considering how it affects the opacity of the atmosphere. I do know about the ERL, and longwave radiation can be measured above the top of the atmosphere to see if there has been any change in it.
Outgoing longwave radiation has not changed over recent decades, indicating that the ERL has not risen. See http://www1.ncdc.noaa.gov/pub/data/cmb/teleconnections/olr-s-pg.gif
Another piece of evidence supporting low sensitivity.

@Lars.P
Interesting. The study supports water as being the Earth’s thermostat, with the main mechanism being changing albedo. Too hot: more evaporation and clouds, raising albedo. Too cold: less cloud and lower albedo. It appears the earth was mostly covered in oceans 4 billion years ago and is now 70% water. The loss of water through methanogenesis implies 50 to 500 times more methane then, but still far below the levels of either methane or CO2 needed for a greenhouse explanation. However, water vapour content in the high atmosphere could be another thermostat at play. Either way, it is a remarkable fact that the Earth’s temperature has remained so constant, and the only constant factor has been a dominant water surface.

Who could help?
The big change is in the lowered temps from 1814 until 1895, producing a steeply increasing line…
HadCRUT sets the global warming century figure for 1900-2000 at 0.74 C. Now, who can tell the global warming figure for the preceding century, 1800-1900? Would it be 0.74 C as well, or something more or less? Help appreciated…
JS

What is also interesting is that they admit the 20-year step increase of the 60-year Jovian cycle, 0.4 C over 1815-1835… This step increase was always missing, and now one can clearly see this 0.4 C increase… something good in the fiddling, after all…
JS
