Comments on: Met Office Archives Data and Code
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/
by Steve McIntyre

By: Some Data Released in UK – NearWalden
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-218084
Wed, 27 Jan 2010 01:18:06 +0000
[…] Steve McIntyre reports that "the UK Met Office has released a large tranche of station data, together with code". […]
By: Andrew Bennett
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-213031
Mon, 28 Dec 2009 20:20:21 +0000
Your focus may well be the heavy adjustments at Darwin, which are absolutely not supported by the online BoM data. The more recent of the latter are nicely displayed on the BoM site.

For the Darwin time series, see under 'high quality site networks'. The slider on the site-specific page allows you to select the trend ("T").

No doubt, the homogenized global products that are so much under discussion have serious problems.

I was responding to an earlier comment of Steve's, in the same thread, that for a quick 'return' one should focus on selected sites rather than slog through an entire archive. My concern is the difficulty of assessing data quality in old records from sites with unknown or turbulent histories. We should select sites that are rural and have long histories of civil order, such as

Looks like about a third of the stations are rural, mostly in N. America and Siberia.

By: alantrer
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-212933
Mon, 28 Dec 2009 05:34:00 +0000
The appropriate solution depends on the intended requirement. A "script" these days has a broader meaning: it can be as simple as parsing data into a spreadsheet, or as elaborate as plotting graphs for every station in a geographic region on a map.

I have seen a call for "papers" offering balanced and reasoned analysis. For that, a MediaWiki is appropriate.

I have also seen many calls for an open-source alternative to both the global temperature calculations and climate modelling. For that, a SourceForge-style project is appropriate.

If a script is a one-off tool to achieve a simple data manipulation, then a wiki is sufficient: rich in metadata, with a discussion of the process.

If the objective is data management and code development then a software management site is more appropriate.

Personally I believe both are needed. And by moniker alone, climateaudit would seem an appropriate trailhead.

The resources seem available and motivated. They just require an organizing force.

Following the link opens a web page where I've recorded links to Google Maps showing the locations of the CRU stations specified in the data release.

Scanning through this I've learned that about a third of the stations are rural. There are large areas, such as much of Europe, South America, India, and much of the Middle East, where there are no rural stations to speak of. Most of the rural stations seem to be concentrated in the other areas, such as North America and Siberia.

Since the locations are given only to the nearest tenth of a degree (about 11 km in latitude), one can only get a general impression of a station's position unless the station name provides something specific.
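As a minimal sketch of how such a page of map links might be generated, assume a hypothetical comma-separated listing stations.txt with one "id,name,lat,lon" line per station (the file name and layout are illustrative assumptions, not the actual format of the release):

    # Sketch: print a Google Maps link for each station in a hypothetical
    # "id,name,lat,lon" listing; the coordinates carry only 0.1-degree precision.

    def maps_link(lat: float, lon: float) -> str:
        """Google Maps URL centred on the given coordinates."""
        return f"https://maps.google.com/?q={lat:.1f},{lon:.1f}"

    with open("stations.txt", encoding="utf-8") as f:
        for row in f:
            sid, name, lat, lon = (field.strip() for field in row.split(","))
            print(f"{sid} {name}: {maps_link(float(lat), float(lon))}")

Because of the 0.1° rounding, each pin can be several kilometres from the true site, which is exactly the "general impression" limitation noted above.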

By: Alexej Buergin
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-212815
Sun, 27 Dec 2009 20:25:49 +0000
Is data that has "fair credibility" not a better indication of climate than data from another place a continent away?
The focus is not on the measurements, but on the adjustments.
By: ChrisZ
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-212745
Sun, 27 Dec 2009 11:48:20 +0000

All these processes are subject to errors which cannot currently be calculated, but their total must amount to several degrees Celsius. […]
It is unlikely that [the original data sheets] would provide useful information.

I respectfully disagree. In the absence of the actual algorithms used for all the processing you describe, access to the original data sheets would allow us to reverse-engineer that processing and check whether the various "corrections" and embellishments are sensible and, more importantly, whether on average they push long-term trends in a particular direction. I trust it is not to be put down to selective reporting that the few spot-checks done on single stations so far have shown a steeper upward slope in the processed version whenever changes were made relative to the raw data. What we need now is as much raw, or at least less-processed, material as can be recovered, so that a large number of such station-by-station comparisons can be made and a well-founded statement given about the degree of bias introduced into the available dataset by the corrective processing.
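As a minimal sketch of one such station-by-station comparison, assume hypothetical raw and adjusted series already aligned on common years; the ordinary least-squares trend and all names here are illustrative assumptions, not anyone's actual method:

    import numpy as np

    def trend_per_decade(years: np.ndarray, temps: np.ndarray) -> float:
        """Least-squares linear trend in degrees C per decade."""
        slope, _intercept = np.polyfit(years, temps, 1)
        return slope * 10.0

    def adjustment_bias(years, raw, adjusted) -> float:
        """Trend of the adjusted series minus trend of the raw series."""
        return trend_per_decade(years, adjusted) - trend_per_decade(years, raw)

    # Illustrative use with made-up numbers: a positive result means the
    # adjustments steepened the warming trend at this station.
    years = np.arange(1900, 1981)
    noise = np.random.default_rng(0).normal(0.0, 0.3, years.size)
    raw = 15.0 + 0.005 * (years - 1900) + noise
    adjusted = raw + 0.004 * (years - 1900)  # hypothetical adjustment
    print(f"bias: {adjustment_bias(years, raw, adjusted):+.3f} C/decade")

Averaging that difference over many stations is the kind of well-founded bias estimate asked for above.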

By: Vincent
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-212714
Sat, 26 Dec 2009 23:10:19 +0000
These are certainly not "raw data". Few of you seem to understand what was actually measured. In most cases it was the maximum and minimum temperatures, taken once a day at various times. The maximum was usually on a different calendar day from the minimum. The readings were usually entered on data sheets, which are the true "raw data".

All the figures now supplied are the result of multiple processing steps. First, the average of the max and min is taken, sometimes from the same day and sometimes from different days. The results are then averaged over the week, the month, and the year, after the elimination of any readings judged unbelievable (depending on what you believe), the "estimation" of missing sequences, and corrections for time-of-measurement bias and for changes of site, instrument, observer, administration, and urbanization, all lumped together under the heading "homogenization".
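As a minimal sketch of just the first of those steps, under the simplifying assumption that each day's max and min are paired on the same calendar day (which, as noted, is often not true of the real records):

    from statistics import mean

    def daily_means(tmax: list[float], tmin: list[float]) -> list[float]:
        """Daily mean as the midpoint of max and min, the conventional first step."""
        return [(hi + lo) / 2.0 for hi, lo in zip(tmax, tmin)]

    # Illustrative use with made-up readings for a few days of one month;
    # the monthly mean is then the average of the daily means.
    tmax = [21.0, 22.5, 20.0, 23.1]
    tmin = [11.0, 12.0, 10.5, 12.4]
    print(f"monthly mean: {mean(daily_means(tmax, tmin)):.2f} C")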

All these processes are subject to errors which cannot currently be calculated, but whose total must amount to several degrees Celsius, and that uncertainty should be attached to all of the figures supplied. It means the figures are incapable of indicating an upward or downward trend unless it exceeds several degrees.

The original data sheets are probably unobtainable. Even if they were, they would be impossible to process. And even if that were possible, it is unlikely that they would provide useful information.

and check out York Post Office (WA) and Bathurst Gaol (NSW). From my reading of the history, those data have fair credibility in the early years. On a simple inspection, the monthly mean maximum temperatures at the two sites also seem to lack trends.

By: oldgifford
https://climateaudit.org/2009/12/22/met-office-archives-data-and-code/#comment-212683
Sat, 26 Dec 2009 15:58:35 +0000
First, how I converted the CRU downloads to XLS:
Moved all the files into one directory as .txt files.
You can download this as a zip file.

Renamed all the files to .xls; you then get the data in one column.
Used Data → Text to Columns, delimited by space, to split it into columns.
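For anyone who would rather script the conversion than rename files, here is a minimal sketch in Python; the stations/*.txt path, the six header lines to skip, and the whitespace-delimited layout are all assumptions to be checked against the actual release:

    import glob
    import pandas as pd

    # Sketch: read each whitespace-delimited station file and save it as CSV
    # (which Excel opens directly), instead of renaming .txt to .xls.
    for path in glob.glob("stations/*.txt"):
        df = pd.read_csv(path, sep=r"\s+", skiprows=6, header=None)
        df.to_csv(path.replace(".txt", ".csv"), index=False, header=False)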

I would appreciate your comments on the following anomaly.

I took the Met Office station data for Oxford. They give tmax and tmin.
For each year I added the 12 monthly values and divided by 12 to get the average for the year.

I then calculated (tmax − tmin)/2 + tmin, which is algebraically the same as (tmax + tmin)/2, to get the average temperature for each year.

I took the CRU data for Oxford, which only cover 1900–1980.
[I wonder why just this limited period, when the station data are readily available on the Met Office site up to 2009.]
Again, I added the months together and divided by 12 to get each year's average, then compared the CRU average with the Met Office average.
There are only minor differences until the last three years, when the difference jumps from a maximum of around 0.05 °C to 0.5 °C.
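As a minimal sketch of that comparison, assume the two sources have been loaded into hypothetical pandas DataFrames with one row per month: met with columns year, tmax, tmin, and cru with columns year, temp (all names and layouts are assumptions, not the actual file formats):

    import pandas as pd

    def annual_midpoint(met: pd.DataFrame) -> pd.Series:
        """Annual mean as the midpoint of the yearly-averaged tmax and tmin."""
        by_year = met.groupby("year")[["tmax", "tmin"]].mean()
        return (by_year["tmax"] + by_year["tmin"]) / 2.0

    def annual_from_monthly(cru: pd.DataFrame) -> pd.Series:
        """Annual mean as the average of the 12 monthly means."""
        return cru.groupby("year")["temp"].mean()

    def differences(met: pd.DataFrame, cru: pd.DataFrame) -> pd.Series:
        """CRU annual mean minus Met Office annual mean on common years."""
        return (annual_from_monthly(cru) - annual_midpoint(met)).dropna()

Printing differences(met, cru).tail() would show whether the jump in the final three years reproduces.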

I’ve asked the Met Office about the anomaly but still waiting for a reply.