Dr Marohasy has analysed the raw data from dozens of locations across Australia and matched it against the new data used by BOM, which shows temperatures progressively warming.

In many cases, Dr Marohasy said, temperature trends had changed from slight cooling to dramatic warming over 100 years.

BOM has rejected Dr Marohasy’s claims and said the agency had used world’s best practice and a peer-reviewed process to modify the physical temperature records that had been recorded at weather stations across the country.

It said data from a selection of weather stations underwent a process known as “homogenisation” to correct for anomalies. It was “very unlikely” that data homogenisation impacted on the empirical outlooks.

In a statement to The Weekend Australian, BOM said the bulk of the scientific literature did not support the view that data homogenisation resulted in “diminished physical veracity in any particular climate data set”.

Historical data was homogenised to account for a wide range of non-climate related influences such as the type of instrument used, choice of calibration or enclosure and where it was located.

“All of these elements are subject to change over a period of 100 years, and such non-climate related changes need to be accounted for in the data for reliable analysis and monitoring of trends,” BOM said.

Account is also taken of temperature recordings from nearby stations. The bureau said it took “a great deal of care with the climate record, and understands the importance of scientific integrity”.

Dr Marohasy said she had found examples where there had been no change in instrumentation or siting and no inconsistency with nearby stations but there had been a dramatic change in temperature trend towards warming after homogenisation.

She said that at Amberley in Queensland, homogenisation had resulted in a change in the temperature trend from one of cooling to dramatic warming.

She calculated homogenisation had changed a cooling trend in the minimum temperature of 1C per century at Amberley into a warming trend of 2.5C. This was despite there being no change in location or instrumentation.

Congratulations to The Australian again for taking the hard road and reporting controversial, hot, documented problems that few in the Australian media dare to investigate.

How accurate are our national climate datasets when some adjustments turn entire long, stable records from cooling trends to warming ones (or vice versa)? Do the headlines of “hottest ever record” (reported to a tenth of a degree) mean much if thermometer data sometimes needs to be dramatically changed 60 years after being recorded?

One of the most extreme examples is a thermometer station in Amberley, Queensland, where a cooling trend in minima of 1C per century has been homogenized into a warming trend of 2.5C per century. This is a station at an air force base that has no recorded move since 1941, nor a change in instrumentation. It is a well-maintained site near a perimeter fence, yet the homogenisation process produces a remarkable transformation of the original records, and rather begs the question of how accurately we know Australian trends at all when the thermometers are seemingly so bad at recording the real temperature of an area.

When raging floodwaters swept through Brisbane in January 2011 they submerged a much-loved red Corvette sports car in the basement car park of a unit in the riverside suburb of St Lucia.

On the scale of the billions of dollars worth of damage done to the nation’s third largest city in the man-made flood, the loss of a sports car may not seem like much.

But the loss has been the catalyst for an escalating row that raises questions about the competence and integrity of Australia’s premier weather agency, the Bureau of Meteorology, stretching well beyond the summer storms.

It goes to the heart of the climate change debate — in particular, whether computer models are better than real data and whether temperature records are being manipulated in a bid to make each year hotter than the last.

With farmer parents, researcher Jennifer Marohasy says she has always had a fascination with rainfall and drought-flood cycles. So, in a show of solidarity with her husband and his sodden Corvette, Marohasy began researching the temperature records noted in historic logs that date back through the Federation drought of the late 19th century.

Specifically, she was keen to try forecasting Brisbane floods using historical data and the latest statistical modelling techniques.

Marohasy’s research has put her in dispute with BoM over a paper she published with John Abbot at Central Queensland University in the journal Atmospheric Research concerning the best data to use for rainfall forecasting. (She is a biologist and a sceptic of the thesis that human activity is bringing about global warming.) BoM challenged the findings of the Marohasy-Abbot paper, but the international journal rejected the BoM rebuttal, which had been prepared by some of the bureau’s top scientists.

EARLIER this year Tim Flannery said “the pause” in global warming was a myth, leading medical scientists called for stronger action on climate change, and the Australian Bureau of Meteorology declared 2013 the hottest year on record. All of this was reported without any discussion of the actual temperature data. It has been assumed that there is basically one temperature series and that it’s genuine.

But I’m hoping that after today, with both a feature (page 20) and a news piece (page 9) in The Weekend Australian, things have changed forever.

I’m hoping that next time Professor Flannery is interviewed he will be asked by journalists which data series he is relying on: the actual recorded temperatures, or the homogenized, remodeled series. Because as many skeptics have known for a long time, and as Graham Lloyd reports today for News Ltd, for any one site across this wide brown land, Australia, while the raw data may show a pause, or even cooling, the truncated and homogenized data often show dramatic warming.

When I first sent Graham Lloyd some examples of the remodeling of the temperature series I think he may have been somewhat skeptical. I know he forwarded this information to the Bureau for comment, including three charts showing the homogenization of the minimum temperature series for Amberley.

Mr Lloyd is the Environment Editor for The Australian newspaper and he may have been concerned I got the numbers wrong. He sought comment and clarification from the Bureau, not just for Amberley but also for my numbers pertaining to Rutherglen and Bourke.

I understand that by way of response to Mr Lloyd, the Bureau has not disputed these calculations.

This is significant. The Bureau now admits that it changes the temperature series, and quite dramatically, through the process of homogenization.

I repeat the Bureau has not disputed the figures. The Bureau admits that the data is remodeled.

What the Bureau has done, however, is try to justify the changes. In particular, for Amberley the Bureau is claiming to Mr Lloyd that there is very little available documentation for Amberley before 1990 and that information before this time may be “classified”: as in top secret. That’s right, there is apparently a reason for jumping up the minimum temperatures for Amberley, but the Bureau just can’t provide Mr Lloyd with the supporting metadata at this point in time.

The Australian Bureau of Meteorology has been caught red-handed manipulating temperature data to show “global warming” where none actually exists.

At Amberley, Queensland, for example, the data at a weather station showing 1 degree Celsius cooling per century was “homogenized” (adjusted) by the Bureau so that it instead showed a 2.5 degrees warming per century.

At Rutherglen, Victoria, a cooling trend of -0.35 degrees C per century was magically transformed at the stroke of an Australian meteorologist’s pen into a warming trend of 1.73 degrees C per century.

Last year, the Australian Bureau of Meteorology made headlines in the liberal media by claiming that 2013 was Australia’s hottest year on record. This prompted Australia’s alarmist-in-chief Tim Flannery – an English literature graduate who later went on to earn his scientific credentials with a PhD in palaeontology, digging up ancient kangaroo bones – to observe that global warming in Australia was “like climate change on steroids.”

But we now know, thanks to research by Australian scientist Jennifer Marohasy, that the hysteria this story generated was based on fabrications and lies.

It’s the news you’ve been waiting years to hear! Finally we find out the exact details of why the BOM changed two of their best long-term sites from cooling trends to warming trends. Massive, inexplicable adjustments like these have been discussed on blogs for years. But it was only when Graham Lloyd advised the BOM he would be reporting on this that they finally found time to write three paragraphs on specific stations.

Who knew it would be so hard to get answers? We put in a Senate request for an audit of the BOM datasets in 2011. Ken Stewart, Geoff Sherrington, Des Moore, Bill Johnston, and Jennifer Marohasy have also separately been asking the BOM for details about adjustments at specific BOM sites. (I bet Warwick Hughes has too.) The BOM has ignored or circumvented all these requests, refusing to explain in detail why individual stations were adjusted.

The two provocative articles Lloyd put together last week were ‘Heat is on over weather bureau’ and ‘Bureau of Meteorology altering climate figures’, which I covered here. This is the power of the press at its best. The absence of articles like these is why I have said the media IS the problem — as long as the media ignores the BOM’s failure to supply its full methods and reasons, the BOM mostly gets away with it. It’s an excellent development that The Australian is starting to hold the BOM to account. (No sign of curiosity or investigation at the ABC and Fairfax, who are happy to parrot BOM press releases unquestioned like sacred scripts.)

‘Who’s going to be sacked for making-up global warming at Rutherglen?’

By Jennifer Marohasy on August 27, 2014

HEADS need to start rolling at the Australian Bureau of Meteorology. The senior management have tried to cover up serious tampering with the temperatures at an experimental farm near Rutherglen in Victoria. Retired scientist Dr Bill Johnston used to run experiments there. He, and many others, can vouch for the fact that the weather station at Rutherglen, providing data to the Bureau of Meteorology since November 1912, has never been moved.

Senior management at the Bureau are claiming the weather station could have been moved in 1966 and/or 1974 and that this could be a justification for artificially dropping the temperatures by 1.8 degree Celsius back in 1913.

Surely it’s time for heads to roll!

[See Rutherglen graph]

Some background: Near Rutherglen, a small town in a wine-growing region of NE Victoria, temperatures have been measured at a research station since November 1912. There are no documented site moves. An automatic weather station was installed on 29th January 1998.

Temperatures measured at the weather station form part of the ACORN-SAT network, so the information from this station is checked for discontinuities before inclusion into the official record that is used to calculate temperature trends for Victoria, Australia, and also the United Nation’s Intergovernmental Panel on Climate Change (IPCC).

The unhomogenized/raw mean annual minimum temperature trend for Rutherglen for the 100-year period from January 1913 through to December 2013 shows a slight cooling trend of 0.35 degree C per 100 years. After homogenization there is a warming trend of 1.73 degree C per 100 years. This warming trend is essentially achieved by progressively dropping down the temperatures from 1973 back through to 1913. For the year of 1913 the difference between the raw temperature and the ACORN-SAT temperature is a massive 1.8 degree C.
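
For anyone who wants to reproduce numbers like these, a trend in “degrees C per 100 years” is nothing more exotic than an ordinary least-squares slope over the annual means, scaled by 100. A minimal sketch; the file and column names are hypothetical stand-ins for wherever the raw annual minima are kept:

```python
# Minimal sketch: a "degrees C per century" trend is an OLS slope over
# annual means, scaled by 100. File/column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("rutherglen_annual_tmin.csv").dropna(subset=["tmin"])
slope_per_year = np.polyfit(df["year"], df["tmin"], 1)[0]
print(f"trend: {slope_per_year * 100:+.2f} degrees C per century")
```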

There is absolutely no justification for doing this.

This cooling of past temperatures is a new trick* that the mainstream climate science community has endorsed over recent years to ensure next year is always hotter than last year – at least for Australia.

There is an extensive literature that provides reasons why homogenization is sometimes necessary, for example to create continuous records when weather stations move locations within the same general area (e.g. from a post office to an airport). But the way the method has been implemented at Rutherglen is not consistent with the original principle, which is that changes should only be made to correct for non-climatic factors.

In the case of Rutherglen the Bureau has just let the algorithms keep jumping down the temperatures from 1973. To repeat: the biggest change between the raw and the new values is in 1913, when the temperature has been jumped down a massive 1.8 degree C.

In doing this homogenization a warming trend is created when none previously existed.

The Bureau has tried to justify all of this to Graham Lloyd at The Australian newspaper by stating that there must have been a site move, flagging the years 1966 and 1974. But the biggest adjustment was made in 1913! In fact, as Bill Johnston explains in today’s newspaper [see below], the site has never moved.

Surely someone should be sacked for this blatant corruption of what was a perfectly good temperature record.

THE official catalogue of weather stations contradicts the Bureau of Meteorology’s explanation that the relocation of the thermometer in the Victorian winegrowing district of Rutherglen has turned a cooling into a warming trend.

THE Bureau of Meteorology appears to have invented a move in the official thermometer site at the Rutherglen weather station in Victoria to help justify a dramatic revision of historic temperatures to produce a warming trend.

The hot questions for the Australian Bureau of Meteorology (BOM) mount up. Rutherglen was one of the temperature recording stations that was subject to large somewhat mysterious adjustments which turned a slight cooling trend into a strongly warming one. Yet the official notes showed that the site did not move and was a continuous record. On paper, Rutherglen appeared to be ideal — a rare long rural temperature record where measurements had come from the same place since 1913.

The original cooling trend of –0.35C was transformed into a +1.73C warming after “homogenisation” by the BOM. To justify that, the BOM claims there may have been an unrecorded shift, one “consistent” with the old station starting further up the slope before it moved down to the hollow.

Today retired scientist Bill Johnston got in touch with Jennifer Marohasy, with me and with Graham Lloyd of The Australian to say that he worked at times at Rutherglen and the official thermometer had not moved. It was always placed where it is now at the bottom of the hollow. That information has already made it into print in The Australian.

David Karoly knew he had to defend the BOM with regard to the hot questions about adjustments to Amberley, Bourke, and Rutherglen data. What he didn’t have were photos of historic equipment, maps of thermometer sites, or quotes from people who took observations. Instead he wielded the magic wand of “peer review” — whereupon questions asked in English are rendered invalid if they are printed in a newspaper instead of a trade magazine.

Prof David Karoly, Climate Professional, called people who ask for explanations “poorly informed amateurs” [hotlink, paywall, see below]. In response, we Poorly Informed Climate Amateurs wonder what it takes to get Climate Professionals to inform us. Instead of hiding behind ‘peer review’, vague complex methods, and the glow of their academic aura, couldn’t the professionals act professionally and explain exactly what they did to the data?

[…]

The articles by Graham Lloyd on Jennifer Marohasy’s analysis are generating debate.

CONCERNS about the accuracy of the Bureau of Meteorology’s historical data are being raised by “poorly informed amateurs”, one of Australia’s leading climate scientists has said. David Karoly of Melbourne University’s School of Earth Sciences said claims BOM had introduced a warming trend by homogenising historical temperature data should be submitted for peer review.

‘Rewriting the History of Bourke: Part 2, Adjusting Maximum Temperatures Both Down and Up, and Then Changing Them Altogether’

By Jennifer Marohasy on April 6, 2014

“Anyone who doesn’t take truth seriously in small matters cannot be trusted in large ones either.” Albert Einstein.

[…] In a report entitled ‘Techniques involved in developing the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset’ (CAWCR Technical Report No. 049), Blair Trewin explains that up to 40 neighbouring weather stations can be used for detecting inhomogeneities and up to 10 can be used for adjustments. What this means is that temperatures, ever so diligently recorded in the olden days at Bourke by the postmaster, can be changed on the basis that it wasn’t so hot at a nearby station, which may in fact be many hundreds of kilometres away, even in a different climate zone.

Consider the recorded versus adjusted values for January 1939 (Table 1). The recorded values have been changed. And every time the postmaster recorded 40 degrees, Dr Trewin has seen fit to change this value to 39.1 degrees Celsius. Why?


Hello, Soviet-style weather service? On January 3, 1909, an extremely hot 51.7C (125F) was recorded at Bourke. It’s possibly the hottest ever temperature recorded in a Stevenson Screen in Australia, but the BOM has removed it as a clerical error. There are legitimate questions about the accuracy of records made so long ago — standards were different. But there are very legitimate questions about the BOM’s treatment of this historic data. The BOM has also removed the 40 years of weather recorded before 1910, which includes some very hot times. Now we find out the handwritten original notes from 62 years of the mid-20th century were supposed to be dumped in 1996 as well. Luckily, these historic documents were saved from the dustbin and quietly kept in private hands instead.

[…] I went and checked not only the old newspapers but also the book in the national archive, because, guess what? The Bureau of Meteorology is claiming it was all a clerical error. They have scratched this record made on 3rd January 1909 from the official record for Bourke, which means it’s also scratched from the NSW and national temperature record.

Yep. It never happened. No heatwave back in 1909.

They have also wiped the heatwave of January 1896. This was probably the hottest January on record, not just for Bourke, but Australia-wide. Yet according to the rules dictated by the Bureau, if it was recorded before 1910, it doesn’t count.

Once upon a time — before the Great Politicization of Climate Science — CSIRO was able to analyze trends from 1880 to 1910. In 1953 CSIRO scientists were making a case that large parts of Australia had been hotter in the 1880s and around the turn of last century.

THE Bureau of Meteorology has been forced to publish details of all changes made to historic temperature records as part of its homogenisation process to establish the nation’s climate change trend. Publication of the reasons for all data adjustments was a key recommendation of the bureau’s independent peer review panel, which approved the bureau’s ACORN-SAT methodology.

IT reflects poorly on key members of Australia’s climate science establishment that tribal loyalty is more important than genuine inquiry. Openness, not ad hominem histrionics, was always the answer to lingering concerns about what happened to some of the nation’s temperature records under the Bureau of Meteorology’s process of homogenisation.

The Pairwise Homogenization Algorithm [hotlink #1 – see below] was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.

The algorithm (whose code can be downloaded here [hotlink]) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally. If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.

To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to their surrounding neighbors. It calculates difference series between each station and its neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but none of the surrounding stations. These breakpoints can take the form of both abrupt step-changes and gradual trend inhomogeneities that move a station’s record further away from its neighbors.
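
To make the idea concrete, here is a toy sketch of the difference-series logic in Python. It is not the PHA’s actual code (that is the Fortran at the hotlink above); the step-size test and the voting threshold are my own simplifications:

```python
# Toy sketch of the pairwise idea (not the actual PHA code): a step is
# attributed to the target only if it shows up in the target's difference
# series against most of its neighbours.
import numpy as np

def step_at(series, c):
    """Size of a step at index c: mean after minus mean before."""
    return series[c:].mean() - series[:c].mean()

def target_has_breakpoint(target, neighbours, c, threshold=0.5):
    shifts = [step_at(target - nbr, c) for nbr in neighbours]
    votes = sum(abs(s) > threshold for s in shifts)
    return votes >= 0.8 * len(neighbours)   # most neighbours must agree

rng = np.random.default_rng(0)
t = np.arange(120)
neighbours = [0.01 * t + rng.normal(0, 0.3, t.size) for _ in range(5)]
target = 0.01 * t + rng.normal(0, 0.3, t.size)
target[60:] += 1.0                          # artificial shift at t=60
print(target_has_breakpoint(target, neighbours, 60))   # True
```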

NOAA/National Climatic Data Center, Asheville, North Carolina
(Manuscript received 2 October 2007, in final form 2 September 2008)

Page 4 of the PDF:

a. Selection of neighbors and formulation of difference series

Next, time series of differences D_t are formed between all target–neighbor monthly temperature series. To illustrate this, take two monthly series X_t and Y_t, that is, a target and one of its correlated neighbors.

This seems to be the 95th Percentile Matching (PM-95) method that BOM uses for ACORN-SAT except it’s not an X to Y neighbour comparison as in BOM’s method.

2. An Fmax test statistic

We start with the simple two-phase linear regression scheme for a climatic series {X_t} considered by Solow (1987), Easterling and Peterson (1995), and Vincent (1998; among others). This model can be written in the form:

X_t = μ1 + α1·t + ε_t for t ≤ c,
X_t = μ2 + α2·t + ε_t for t > c,   (2.1)

where {ε_t} is mean zero independent random error with a constant variance.

The model in (2.1) is viewed as a classic simple linear regression that allows for two phases. This allows for both step- (μ1 ≠ μ2) and trend- (α1 ≠ α2) type changepoints. Specifically, the time c is called a changepoint in (2.1) if μ1 ≠ μ2 and/or α1 ≠ α2. In most cases, there will be a discontinuity in the mean series values at the changepoint time c, but this need not always be so (Fig. 10 in section 5 gives a quadratic-based example where the changepoint represents more of a slowing of the rate of increase than a discontinuity).
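
In code, the scan behind such a test is short: for each candidate changepoint c, compare the residual sum of squares of one straight-line fit against two, via the usual F ratio, and keep the maximum over c. A minimal sketch under my own simplifications (plain OLS on each segment), not the authors’ implementation:

```python
# Minimal sketch of an Fmax changepoint scan (illustrative only): one OLS
# line under the null, two lines under the alternative, max F over c.
import numpy as np

def sse(x, y):
    """Residual sum of squares of an OLS straight-line fit."""
    return np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)

def fmax(series, min_seg=10):
    n = len(series)
    t = np.arange(n, dtype=float)
    sse0 = sse(t, series)                      # null: one phase, 2 params
    best = 0.0
    for c in range(min_seg, n - min_seg):      # candidate changepoints
        ssea = sse(t[:c], series[:c]) + sse(t[c:], series[c:])  # 4 params
        best = max(best, ((sse0 - ssea) / 2) / (ssea / (n - 4)))
    return best

rng = np.random.default_rng(1)
x = rng.normal(0, 0.3, 100)
x[50:] += 1.0                                  # step change at t=50
print(f"Fmax = {fmax(x):.1f}")                 # large relative to no-break noise
```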

# # #

Fmax test statistic series lengths (n) range from 10 (e.g. 10 months, RS93 k=0.4) to 5000 in Table 1, but I can’t see any recommendation for the length n in respect of temperature series. Nothing is said about n in equation 2.2, for example. What happens when a break (c) occurs at month 5 of n = 100 months, for example? Isn’t this just effectively n = 10?

One of the regions that has contributed to GISS’ “hottest ever year” is South America, particularly Brazil, Paraguay and the northern part of Argentina. In reality, much of this is fabricated, as they have no stations anywhere near much of this area, as NOAA show below.

Nevertheless, there does appear to be a warm patch covering Paraguay and its close environs. However, when we look more closely, we find things are not quite as they seem.

[See data coverage graph]

There are just three genuinely rural stations in Paraguay that are currently operating – Puerto Casado, Mariscal and San Juan. They all show a clear and steady upward trend since the 1950s, with 2014 at the top, for instance at Puerto Casado: [graph]

It could not be more clearcut, could it? However, it all looks a bit too convenient, so I thought I would check out the raw data (which is only available up to 2011 on the GISS site, so the last three years cannot be compared). Lo and behold! [graph]

As we so often see, the past has been cooled.

GHCN show the extent to which they have adjusted temperatures: the best part of 2 degrees centigrade. [graphs]

Of course, there may be a genuine problem with Puerto Casado’s record, except that we see exactly the same thing happening at the other two Paraguayan sites. [Raw – Adjusted gif comparisons]

So we find that a large chunk of Gavin’s hottest year is centred around a large chunk of South America, where there is little actual data, and where the data that does exist has been adjusted out of all relation to reality.

From the above post, Paul Homewood’s recent temperature record posts can be accessed:

Recent Posts
How GHCN Keep Rewriting Reykjavik History
Cooling The Past In Bolivia
Cooling The Past In San Diego
Greene Hypocrisy
Higher Snowfalls Due to Change In Measurement, Not Global Warming.
Temperature Adjustments Around The World
Shub Niggurath On The Paraguayan Adjustments

It is often claimed that these adjustments are needed to “correct” errors in the historic record, or compensate for station moves. All of which makes the adjustment at Valentia Observatory even more nonsensical [old vs new].

Valentia Observatory, situated in SW Ireland, is regarded as one of the highest quality meteorological stations in the world, located at the same site since 1892, well away from any urban or other non-climatic biases. The Irish Met Office says this:

“Since the setting up of the Irish Meteorological Service, the work programme of the Observatory has greatly expanded and it has always been equipped with the most technologically advanced equipment and instrumentation. The Observatory is well known and very highly regarded by the scientific community. As well as fulfilling its national and international role within Met Éireann it is involved in many projects with other scientific bodies both in Ireland and abroad.”

If we cannot get accurate temperature trends in Valentia, we cannot get them anywhere. Yet the GHCN algorithm decides that the actual temperatures measured there do not look right, and lops 0.4C off temperatures before 1967.

Worse still, the algorithm uses a bunch of hopelessly unreliable urban sites, as far away as Paris, to decide upon the “correct temperature”, as Ronan Connolly illustrated.

[…] We saw previously how the temperature history for Paraguay, and a large slice of the surrounding region, had been altered as a result of temperature adjustments, which had significantly reduced historic temperatures and changed a cooling trend into a warming one.

I can now confirm that similar “cooling the past” adjustments have been carried out in the Arctic region, and that the scale and geographic range of these is breathtaking. Nearly every current station from Greenland, in the west, to the heart of Siberia (87E), in the east, has been altered in this way. The effect has been to remove a large part of the 1940s spike, and as a consequence to remove much of the drop in temperatures during the subsequent cold decades.

Yesterday I showed how 100% of US warming since 1990 is due to NCDC filling in fake temperatures for missing data. The actual measured data shows no warming.

See gif: Measured vs No Underlying Data

See graph: Percent Of USHCN Final Temperatures Which Are “Estimated” [over 50%]

The next step is to look at the correlation between how much infilling is being done and the divergence between estimated and measured temperatures.

See graph

There is a very good correlation between infilling and inflated temperatures. The more fake data they generate, the larger the divergence between estimated and measured temperatures. This is likely due to loss of data at rural stations, which are now being doubly contaminated by gridded and infilled urban temperatures.
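
The check itself is a few lines; a sketch, with a hypothetical file layout standing in for the real USHCN raw and final series:

```python
# Sketch: correlate the fraction of infilled monthly values per year with
# the divergence between final and raw annual means. The CSV layout here
# is hypothetical; adapt it to the actual USHCN file format.
import numpy as np
import pandas as pd

df = pd.read_csv("ushcn_annual.csv")   # columns: year, raw, final, pct_estimated
divergence = df["final"] - df["raw"]
r = np.corrcoef(df["pct_estimated"], divergence)[0, 1]
print(f"correlation(infilling, final - raw) = {r:.2f}")
```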

In case you thought widespread temperature adjustments were confined to the Arctic and South America, think again. Apparently, New Zealand has caught Paraguayan fever!

There are five stations currently operational under GHCN in New Zealand and surrounding islands. It will come as no great surprise now to learn that GHCN warming adjustments have been added to every single one. (Full set of graphs below).

In all cases other than Hokitika, the adjustment has been made in the mid-1970s.

This adjustment has been triggered by a drop in temperatures in 1976, as we can see with Gisborne, below. (The algorithm did not spot that temperatures recovered to previous levels two years later!)

See graph: Raw Data, Gisborne Aero

Was this temperature drop due to some local, non-climatic factor at Gisborne? Apparently not, because the same drop occurred at all eleven of the other NZ stations operating at that time.

Below is a comparison of the unadjusted annual temperatures for 1975 and 1976.

As a result of the adjustment, Gisborne’s temperatures for 1974 and earlier have been lowered by 0.7C. Similar sized adjustments seem to have been made at the other stations.

As the algorithm cannot have arrived at the adjustment by comparing NZ stations with each other, it must have used stations further away, presumably in Australia.

But can we really compare the two? Once again, the evidence points strongly to the adjustments being incorrect: a reaction to a genuine drop in temperature.

It is often claimed that, overall, temperature adjustments up and down largely cancel each other out. But, while we keep coming across warming adjustments that are questionable, I don’t see cooling ones similarly criticised. Maybe most of these are justifiable.

If this is the case, and many of the warming ones are not, then the overall effect would be much greater than suggested.

On the other hand, if many cooling adjustments are also incorrect, it does not inspire much confidence in the process.

At 1963 the cumulative adjustment is 0.7
At 1968 the cumulative adjustment is 0.6
At 1972 the cumulative adjustment is 0.5
At 1975 the cumulative adjustment is 0.4
At 1980 the cumulative adjustment is 0.3
At 1982 the cumulative adjustment is 0.2
At 1986 the cumulative adjustment is 0.1
At 2001 the cumulative adjustment is 0.1
At 2002 the cumulative adjustment is 0.0
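
Read as a lookup table, a schedule like this is mechanical to apply. A sketch; note that treating each listed year as the start of its band, and subtracting the adjustment from earlier raw values, are both my assumptions about the convention:

```python
# Sketch: apply a stepwise cumulative-adjustment schedule to raw annual
# values. Band boundaries and the sign convention (subtract from the
# past) are assumptions, not documented facts about this series.
schedule = [(1963, 0.7), (1968, 0.6), (1972, 0.5), (1975, 0.4),
            (1980, 0.3), (1982, 0.2), (1986, 0.1), (2001, 0.1), (2002, 0.0)]

def cumulative_adjustment(year):
    adj = schedule[0][1]              # years before 1963 carry the full 0.7
    for start, value in schedule:
        if year >= start:
            adj = value
    return adj

raw = {1960: 10.3, 1979: 10.5, 2005: 10.8}
adjusted = {yr: round(t - cumulative_adjustment(yr), 1) for yr, t in raw.items()}
print(adjusted)                       # {1960: 9.6, 1979: 10.1, 2005: 10.8}
```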

Using the excellent web platform provided by NASA GISS it is possible to access GHCN v2 and GHCN v3 records, compare charts and download the data. It does not take long to find v3 records that appear totally different to v2, and I wanted to investigate this further. At this point I was advised that the way homogenisation works is to adjust records in such a way that warming added at one station is compensated by cooling added to another. This didn’t sound remotely scientific to me, but I clicked on Alice Springs in the middle of Australia, recovered 30 v2 and v3 records within a 1000 km radius, and set about a systematic comparison of the two. The results are described in detail below.

In summary I found that while individual stations are subject to large and what often appear to be arbitrary and robotic adjustments in v3, the average outcome across all 30 stations is effectively zero. At the regional level, homogenisation does not appear to be responsible for adding warming in Australia. But the thing that truly astonished me was the fact that the mean temperature trend for these 30 stations, 1880 to 2011, was a completely flat line. There has been no recorded warming across a very large portion of the Australian continent.
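
The comparison behind that summary needs nothing more sophisticated than a per-station trend difference, averaged across the network. A sketch, assuming a hypothetical file layout with one column of annual means per station:

```python
# Sketch: for each station, compare the linear trend in GHCN v2 vs v3,
# then average the change across the network. File layout is hypothetical:
# rows are years, one column of annual means per station.
import numpy as np
import pandas as pd

v2 = pd.read_csv("alice_region_v2.csv", index_col="year")
v3 = pd.read_csv("alice_region_v3.csv", index_col="year")

def trend(col):
    """OLS slope in degrees C per century, ignoring missing years."""
    s = col.dropna()
    return np.polyfit(s.index, s.values, 1)[0] * 100

delta = v3.apply(trend) - v2.apply(trend)   # warming added by v3, per station
print(delta.round(2))
print(f"network mean effect: {delta.mean():+.2f} C per century")
```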

# In Alice Springs the raw record is flat and has no sign of warming. In the adjusted record, homogenisation has added warming by significantly cooling the past. Five other stations inside the 1000 km ring have similarly long and similarly flat records – Boulia, Cloncurry, Farina, Burketown and Donors Hill. There can be no conceivable reason to presume that the flat raw Alice Springs record is somehow false and in need of adjustment.

# Six records show a significant mid-1970s cooling of about 3˚C (Alice Springs, Barrow Creek, Brunette Downs, Camooweal, Boulia and Windorah) that, owing to its consistency, appears to be a real signal. Homogenisation has tended to remove this real temperature history.

# The average raw temperature record for all 30 stations is completely flat from 1906 (no area weighting applied). There has been no measurable warming across the greater part of Australia. The main discontinuity in the record, pre-1906, arises from there being only 3 operating stations that do not provide representative cover.

# Homogenisation appears to have added warming or cooling to records where neither existed. Homogenisation may also have removed real climate signal.

# I find zero warming over such a large part of the Australian continent to be a surprise result, one that is consistent with Roger Andrews’ observation of little to no warming in the southern hemisphere, an observation that still requires more rigorous testing.

euanmearns | March 17, 2015 at 12:33

By way of a little further background: this post is a bit rough around the edges, in part because it is a huge amount of work to clean the v3 data, where large numbers of records are deleted and many are “created”. I was also feeling my way, trying to make sense of how to treat the results. I have since moved on to look at southern Africa and I hope those results will also be posted here. Excluding urban records that show warming trends, southern Africa looks like central Australia.

One thing I want to try and nail is how the likes of BEST manage to create warming from temperature records that are flat. I ventured on to Real Climate a few weeks ago and was told repeatedly that what GHCN and GISS were doing must be correct since BEST shows the same trends.

I have completed analysis of southern South America and Antarctica that has yet to be published. All this pretty well confirms Roger Andrews’ observation that there is little warming in the southern hemisphere, which I find a real puzzle.

‘New paper finds a large warming bias in Northern Hemisphere temperatures from ‘non-valid’ station data’

The Hockey Shtick, May 28, 2015

A new paper published in the Journal of Atmospheric and Solar-Terrestrial Physics finds that the quality of Northern Hemisphere temperature data has significantly & monotonically decreased since the year 1969, and that the continued use of ‘non-valid’ weather stations in calculating Northern Hemisphere average temperatures has created a ‘positive bias’ and “overestimation of temperatures after including non-valid stations.”

The paper appears to affirm a number of criticisms of skeptics that station losses, fabricated/infilled data, and positively-biased ‘adjustments’ to temperature data have created a positive skew to the data and overestimation of warming during the 20th and 21st centuries.

Graphs from the paper below show that use of both valid and ‘non-valid’ station data results in a mean annual Northern Hemisphere temperature over 1C warmer at the end of the record in 2013 as compared to use of ‘valid’ weather station data exclusively.

[snip]

Extraction of the global absolute temperature for Northern Hemisphere using a set of 6190 meteorological stations from 1800 to 2013

Demetris T. Christopoulos

Highlights

• Introduce the concept of a valid station and use for computations.
• Define indices for data quality and seasonal bias and use for data evaluation.
• Compute averages for mean and five point summary plus standard deviations.
• Indicate a monotonically decreasing data quality after the year 1969.
• Observe an overestimation of temperature after including non-valid stations.

In most parts of Iceland, last year was the coldest since 2000, in marked contrast to 2014.

The Iceland Temperature Series below is built up from the seven following sites, which have long-running, high-quality temperature records back to 1931 and earlier. They also present a reasonable geographic distribution. It is also important to note that the temperature data has been carefully homogenised over the years by the Iceland Met Office, to adjust for station moves, equipment changes etc. (For more detail, see here).

The warm years of the 1930s and 40s, and the much colder ones that followed, clearly correlate with the AMO cycle. Although the warmth has been more persistent in the last decade, only one year, 2014, has been warmer than those earlier years.

There is no evidence to suggest that temperature trends will increase in the next few years, and much to suggest that Iceland will suffer from a return of a very cold climate once the AMO turns cold again.

Below is plotted all of the currently active individual USHCN stations in Michigan. There is only one station which shows net warming since the 1950s (Mt. Pleasant) and that one appears to have a discontinuity at 1995 causing the problem.

The NOAA hockey stick in Michigan is completely fake. It has no basis in reality.

The rate of change of near surface land air temperature as estimated in the Berkeley “BEST” dataset is very similar in form to the rate of change in the sea surface temperature record, except that it shows twice the rate of change.

Sea water has a specific heat capacity about 4 times that of rock. This means that, for the same change in thermal energy (for example from incoming solar radiation), a given mass of rock will change in temperature four times more than the same mass of water.

Since soil is, in general, a mix of fine particles of rock and organic material with a significant water content, the two temperature records are consistent with the notion of treating land as ‘moist rock’. This also partly explains the much larger temperature swings in desert regions: the temperature of dry sand will change four times faster than ocean water and be twice as volatile as non-desert land regions.
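
The arithmetic behind that factor takes one line: for the same absorbed energy Q and the same mass m, ΔT = Q/(m·c), so the temperature responses scale inversely with specific heat. A quick check with textbook values (published figures for rock span a range, so the “4 times” above is a round number):

```python
# Quick check of the heat-capacity argument: for equal absorbed energy
# and equal mass, dT = Q / (m * c), so dT scales as 1/c.
Q = 1.0e6            # J, same energy into each (arbitrary)
m = 100.0            # kg, same mass of each
c_water = 4186.0     # J/(kg K)
c_rock = 1000.0      # J/(kg K); published values span roughly 790-1000

dT_water = Q / (m * c_water)
dT_rock = Q / (m * c_rock)
print(f"dT_rock / dT_water = {dT_rock / dT_water:.1f}")  # ~4-5x, per the text
```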

This also underlines why it is inappropriate to average land and sea temperatures, as is done in several recognised global temperature records such as HadCRUT4 (a bastard mix of HadSST3 and CRUTem4) as well as GISS-LOTI and the new BEST land and sea averages.

It is a classic case of ‘apples and oranges’. If you take the average of an apple and an orange, the answer is a fruit salad. It is not a useful quantity for physics-based calculations such as the earth energy budget and the impact of radiative “forcings”.

The difference in heat capacity will skew the data in favour of the land air temperatures, which vary more rapidly, and will thus give an erroneous basis for making energy-based calculations.

In his comment to How much Estimation is too much Estimation?, Anthony Watts suggested I create a scatter plot showing station distribution with latitude/longitude. It turned out not to be the ordeal I thought it might be, so I have posted some of the results in this thread. I started with 1885 and created a plot every 20 years, ending in 2005. I deliberately ended with 2005 because this is the final year in the GHCN record prior to the US station die-off of 2006.

Every dot on a plot represents a station, not a scribal record. Stations may be comprised of multiple records. A blue dot represents a station with an annual average that was fully calculated from existing monthly averages. A red dot represents a station that had missing monthly averages for that year, so the annual average had to be estimated. Stations that had insufficient data to estimate an annual average are not shown.

In the case where multiple scribal records exist for a station in the given year, I assigned a blue dot if all records were fully calculated from existing averages, a red dot if at least one record was estimated, and no dot if none of the records could produce an estimate. I believe this errs in the direction of assigning more blue dots than is deserved. Hansen’s bias method mathematically forces estimation to occur during the period of scribal record overlap.
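
Expressed as code, the dot-assignment rule above reduces to a small classifier. A sketch of one reading of that rule (the record structure is assumed):

```python
# One reading of the dot-assignment rule: blue if every usable scribal
# record was fully calculated, red if any was estimated, no dot otherwise.
def station_dot(records):
    """records: list of 'calculated' | 'estimated' | 'none', one per scribal record."""
    usable = [r for r in records if r != "none"]
    if not usable:
        return None        # no dot: no annual average could be produced
    if all(r == "calculated" for r in usable):
        return "blue"      # fully calculated from existing monthly averages
    return "red"           # at least one record had to be estimated

print(station_dot(["calculated", "calculated"]))   # blue
print(station_dot(["calculated", "estimated"]))    # red
print(station_dot(["none"]))                       # None
```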

The first plot shows coverage in 1885, five years into the GHCN record.

[see chart]

1905 shows improved coverage across the continental US, Japan and parts of Australia. A few stations have appeared in Africa.

[see chart]

1925 shows increased density in the western US, southern Canada, and the coast of Australia.

[see chart]

At the end of WWII, not a lot of change is noticeable other than improved coverage in Africa and South America as well as central China and Siberia.

[see chart]

In 1965 we see considerable increases in China, parts of Europe, Turkey, Africa and South America.

[see chart]

A decline in quality seems to be apparent in 1985, as many more stations show as red, indicating their averages are estimated due to missing monthly data.

[see chart]

A huge drop in stations is visible in the 2005 plot, notably Australia, China, and Canada. 2005 was the warmest year in over a century. Not surprising, as the Earth hadn’t seen station coverage like that in over a century.

[see chart]

The final plot illustrates the world-wide station coverage used to tell us “2006 Was Earth’s Fifth Warmest Year”.

It’s like watching the lights go out over the West. Sinan Unur has mapped the surface stations into a beautiful animation. His is 4 minutes long and spans 1701 to 2010. I’ve taken some of his snapshots and strung them into a 10-second animation.

You can see as development spreads across the world that more and more places are reporting temperatures. It’s obvious how well documented temperatures were (once) in the US. The decay of the system in the last 20 years is stark.

The Mears and Wentz paper was rejected by the first journal where it was submitted.

Spencer:

“The paper is for MT, not LT…but I think we can assume that changes in one will be reflected in the other when Mears completes their analysis.

From what little we have looked at so far, it appears that they did not correct for spurious warming in NOAA-14 MSU relative to NOAA-15 AMSU…see their Fig. 7c. They just leave it in.

Since this spurious warming is near the middle of the whole time period, this shifts the second half of the satellite record warmer when NOAA-14 MSU (the last in the MSU series) is handed off to NOAA-15 AMSU (the first in the AMSU series).

Why do we think NOAA-14 MSU is at fault?

1) AMSU is supposed to have a “Cadillac” calibration design (that’s the term a NASA engineer, Jim Shiue, used when describing to me the AMSU design, which he was involved in).
2) NOAA-14 MSU requires a large correction for the calibrated TB increasing with instrument temperature as the satellite drifts into a different orbit. The NOAA-15 AMSU requires no such correction…and it wasn’t drifting during the period in question anyway.

So, it looks like they decided to force good data to match bad data. Sound familiar?

Now that John Christy and I have had a little more time to digest the new paper by Carl Mears and Frank Wentz (“Sensitivity of satellite derived tropospheric temperature trends to the diurnal cycle adjustment”, paywalled here), our conclusion has remained mostly the same as originally stated in Anthony Watts’ post.

While the title of their article implies that their new diurnal drift adjustment to the satellite data has caused the large increase in the global warming trend, it is actually their inclusion of what the evidence will suggest is a spurious warming (calibration drift) in the NOAA-14 MSU instrument that leads to most (maybe 2/3) of the change. I will provide more details of why we believe that satellite is to blame, below.

Also, we provide new radiosonde validation results, supporting the UAH v6 data over the new RSS v4 data.

[…]

Conclusion

The evidence suggests that the new RSS v4 MT dataset has spurious warming due to a lack of correction for calibration drift in the NOAA-14 MSU instrument. Somewhat smaller increases in their warming trend are due to their use of a climate model for diurnal drift adjustment, compared to our use of an empirical approach that relies upon observed diurnal drift from the satellite data themselves. While the difference in diurnal drift correction methodology is a more legitimate point of contention, in the final analysis independent validation with radiosonde data and most reanalysis datasets suggests better agreement with the UAH product than with the RSS product.

‘Can Both GISS and HadCRUT4 be Correct? (Now Includes April and May Data)’

justthefactswuwt / June 16, 2016

Guest Post by Werner Brozek, Excerpted from Professor Robert Brown of Duke University, Conclusion by Walter Dnes and Edited by Just The Facts:

[see graph: HadCRUT4 vs GISTEMP 1960 – 1979]

[…]

The two anomalies match up almost perfectly from the right hand edge to the present. They do not match up well from 1920 to 1960, except for a brief stretch of four years or so in early World War II, but for most of this interval they maintain a fairly constant, and identical, slope to their (offset) linear trend! They match up better (too well!), with again a very similar linear trend but yet another offset, across the range from 1880 to 1920. But across the range from 1960 to 1979, ouch! That’s gotta hurt. Across 20 years, HadCRUT4 cools the Earth by around 0.08 C, while GISS warms it by around 0.07 C.

[…]

Finally, there is the ongoing problem with using anomalies in the first place rather than computing global average temperatures. Somewhere in there, one has to perform a subtraction. The number you subtract is in some sense arbitrary, but any particular number you subtract comes with an error estimate of its own. And here is the rub:

The place where the two global anomalies develop their irreducible split is square inside the mutually overlapping part of their reference periods!

That is, the one place they most need to be in agreement, at least in the sense that they reproduce the same linear trends (that is, the same anomalies), is the very place where they most greatly differ. Indeed, their agreement is suspiciously good, as far as linear trend is concerned, everywhere else: in particular in the most recent present, where one has to presume that the anomaly is most accurately being computed, and the most remote past, where one expects to get very different linear trends but instead gets almost identical ones!
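
To make the subtraction point concrete: an anomaly series is the data minus a single reference-period mean, so any error in that one baseline number offsets every anomaly in the series by the same amount. A sketch with made-up values:

```python
# Sketch: anomalies are the series minus its mean over a reference period;
# an error in that one baseline number shifts every anomaly equally.
import pandas as pd

def anomalies(series, ref_start, ref_end):
    baseline = series.loc[ref_start:ref_end].mean()   # the subtracted number
    return series - baseline

temps = pd.Series({1958: 13.9, 1965: 13.8, 1970: 14.0, 1975: 13.9, 2015: 14.8})
print(anomalies(temps, 1961, 1990))   # all values shift together if the baseline moves
```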

Schmidt predicts a 1.3 C anomaly for 2016 (plus or minus) but his graph uses a “Pre-industrial baseline (1880-1889)”. This is NOT the GISTEMP LOTI baseline.

Schmidt’s graph has 2015 at 1.1 C. The GISS LOTI data sheet has 0.87 C for 2015. Schmidt’s 2016 prediction is 0.2 C higher than 2015. So from the LOTI data sheets, 0.87 + 0.2 = 1.07 C for 2016. This is only a fraction below the YTD mean (1.095), with 6 months of data still to come in and La Nina cooling.

A 0.54 C drop in 4 months, February to June. Only another 0.23 C drop in 6 months and the 2016 mean would equal the 2015 mean (0.87). ENSO-neutral is about 0.65, and conditions will cross from ENSO positive to negative this year, i.e. pass through neutral at some point before the end of the year.

In the top graph (scary) he uses a 1980-2015 baseline, which I assume (he doesn’t say) is in respect of each specific month rather than all months (i.e. a different baseline for each month, I think). The July anomaly is over 2 C (scary).

But in the next LOTI map the anomaly, from normal 1951-1980 all months mean baseline, is only 0.83 C (yawn).

Neither Cox nor Roberts addressed the IPCC’s primary climate change criteria. (I did at #17 – 10 “likes”; 19 “likes” at #12.1.2 re the CO2 disconnect.) I doubt either of them has even the vaguest idea.

But in terms of temperature series, a useful resource to call up in the future. Way down at #30.1.1 I placed a comment re GISS “adjustments” to Gisborne Aero NZ. That comment, and others nearby, cross-links to Judith Curry’s Climate Etc, Paul Homewood’s notalotofpeopleknowthat, Euan Mearns’ Energy Matters, climate4you, and back to ‘Temperature Records’ here at CCG from Paul Homewood’s post. Further down are other GISS and BEST “adjustment” case studies, e.g. Puerto Casado. Zeke Hausfather did a runner on that one.

GISTEMP is rubbish and provably so but at least it’s a devil we know. And the data after the El Nino peak isn’t on Schmidt’s “long-term trend” anymore despite his frantic Tweeting. I think Schmidt’s on a hiding to nothing over the next 2, 3, 4 years.

‘Rock star-scientist Brian Cox confused on more than global temperatures’

By Jennifer Marohasy – posted Thursday, 18 August 2016

[…. 3 pages….click “ALL” at bottom……]

Cox may not care too much for facts. He is not only a celebrity scientist, but also a rock star. Just the other day I was watching a YouTube video of him playing keyboard as the lead-singer of the band screamed, “We don’t need a reason”.

There was once a clear distinction between science – that was about reason and evidence – and art that could venture into the make-believe including through the re-interpretation of facts. This line is increasingly blurred in climate science where data is now routinely remodeled to make it more consistent with global warming theory.

For example, I’m currently working on a 61-page exposé of the situation at Rutherglen. Since November 1912, air temperatures have been measured at an agricultural research station near Rutherglen in northern Victoria, Australia. The data is of high quality; therefore there is no scientific reason to apply adjustments in order to calculate temperature trends and extremes. Mean annual temperatures oscillate between 15.8°C and 13.4°C. The hottest years are 1914 and 2007; there is no overall warming trend. The hottest summer was 1938-39, when Victoria experienced the Black Friday bushfire disaster. This 1938-39 summer was 3°C hotter than the average maximum summer temperature at Rutherglen for the entire period, December 1912 to February 2016. Minimum annual temperatures also show significant inter-annual variability.

In short, this temperature data, like most of the temperature series from the 112 sites used by the Australian Bureau of Meteorology to concoct the historical temperature record, does not accord with global warming theory.

So, adjustments are made by the Australian Bureau of Meteorology to these temperature series before they are incorporated into the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT); and also the UK Met Office’s HadCRUT dataset, which informs IPCC deliberations.

The temperature spike in 1938-39 is erroneously identified as a statistical error, and all temperatures before 1938 are adjusted down by 0.62°C. The most significant change is to the temperature minima, with all temperatures before 1974 and before 1966 adjusted down by 0.61°C and 0.72°C, respectively. For the year 1913, there is a 1.3°C difference between the annual raw minimum value as measured at Rutherglen and the remodelled value.

The net effect of the remodelling is to create statistically significant warming of 0.7 °C in the ACORN-SAT mean temperature series for Rutherglen, in general agreement with anthropogenic global warming theory.

NASA applies a very similar technique to the thousands of stations used to produce the chart that Cox held up on Monday night during the Q&A program. I discussed these changes back in 2014 with Gavin Schmidt, who oversees the production of these charts at NASA. I was specifically complaining about how they remodel the data for Amberley, a military base near where I live in Queensland.

Back in 2014, the unadjusted mean annual maximum temperatures for Amberley – since recordings were first made in 1941 – showed temperatures trending up from a low of about 25.5°C in 1950 to a peak of almost 28.5°C in 2002. The minimum temperature series for Amberley showed cooling from about 1970. Of course this does not accord with anthropogenic global warming theory. To quote Karl Braganza from the Bureau, as published by the online magazine The Conversation: “Patterns of temperature change that are uniquely associated with the enhanced greenhouse effect, and which have been observed in the real world include… Greater warming in winter compared with summer… Greater warming of night time temperatures than daytime temperatures”.

The Bureau has “corrected” this inconvenient truth at Amberley by jumping up the minimum temperatures twice through the homogenization process: once around 1980 and then again around 1996, to achieve a combined temperature increase of over 1.5°C.

This is obviously a very large step-change, remembering that the entire temperature increase associated with global warming over the 20th century is generally considered to be in the order of 0.9°C.

According to various peer-reviewed papers, and technical reports, homogenization as practiced in climate science is a technique that enables non-climatic factors to be eliminated from temperature series – by making various adjustments.

It is often done when there is a site change (for example from a post office to an airport), or an equipment change (from a Glaisher stand to a Stevenson screen). But at Amberley neither of these criteria applies. The temperatures have been recorded at the same well-maintained site within the perimeter of the air force base since 1941. Through the homogenization process the Bureau has changed what was a cooling trend in the minimum temperature of 1.0°C per century into a warming trend of 2.5°C per century.

Homogenization – the temperature adjusting done by the Bureau – has not resulted in some small change to the temperatures as measured at Amberley, but rather in a change in the temperature trend from one of cooling to dramatic warming, as was also done to the series for Rutherglen.

NASA’s Goddard Institute for Space Studies (GISS) based in New York also applies a jump-up to the Amberley series in 1980, and makes other changes, so that the annual average temperature for Amberley increases from 1941 to 2012 by about 2°C.

The new Director of GISS, Gavin Schmidt, explained to me on Twitter back in 2014 that: “@jennmarohasy There is an inhomogenity detected (~1980) and based on continuity w/nearby stations it is corrected. #notrocketscience”.

When I sought clarification regarding what was meant by “nearby” stations I was provided with a link to a list of 310 localities used by climate scientists at Berkeley when homogenizing the Amberley data.

The inclusion of Berkeley scientists was perhaps to make the point that all the key institutions working on temperature series (the Australian Bureau, NASA, and also scientists at Berkeley) appreciated the need to adjust-up the temperatures at Amberley. So, rock star scientists can claim an absolute consensus?

But these 310 “nearby” stations stretch to a radius of 974 kilometres and include Frederick Reef in the Coral Sea, Quilpie post office and even Bourke post office. Considering the unadjusted data for the six nearest stations with long and continuous records (old Brisbane aero, Cape Moreton Lighthouse, Gayndah post office, Bundaberg post office, Miles post office and Yamba pilot station), the Bureau’s jump-up for Amberley creates an increase in the official temperature trend of 0.75°C per century.

Temperatures at old Brisbane aero, the closest of these stations, also show a long-term cooling trend. Indeed, perhaps the cooling at Amberley is real. Why not consider this, particularly in the absence of real physical evidence to the contrary? In the Twitter conversation with Schmidt I suggested it was nonsense to use temperature data from radically different climatic zones to homogenize Amberley, and repeated my original question asking why it was necessary to change the original temperature record in the first place. Schmidt replied: “@jennmarohasy Your question is ill-posed. No-one changed the trend directly. Instead procedures correct for a detected jump around ~1980.”

If Twitter was around at the time George Orwell was writing the dystopian fiction Nineteen Eighty-Four, I wonder whether he might have borrowed some text from Schmidt’s tweets, particularly when words like, “procedures correct” refer to mathematical algorithms reaching out to “nearby” locations that are across the Coral Sea and beyond the Great Dividing Range to change what was a mild cooling-trend, into dramatic warming, for an otherwise perfectly politically-incorrect temperature series.

Apart from the estimate being merely “based on” data (i.e. not actual data), and that the 1998 El Nino relativity has been erased, there is a further eye teaser concerning the LOWESS smoothing (red line) at the end point 2015. In other words, the red line guides the eye but not the brain.

2004 to 2011 is the “hiatus/pause” in the red line. After 2011 the red line hikes up abruptly due to the El Nino spike. 2013 (0.66) was ENSO-neutral.

GISS appear to be assuming, given Gavin Schmidt’s 2016 prediction and reasoning (see Tweets), that the red line will continue upwards to a new record high (anomaly 0.87+) as per 1998 in the graph rather than return to neutral (anomaly 0.66 ish) and a continued “hiatus”.

I’m guessing, given the monthly data trajectory, that instead of continuing upwards what we will see in a couple of years is just a spike in the red line smoothing, but time will tell.

Basically, what I’m getting at is that the LOWESS smoothing (red line) of the annual mean datapoints 2011–2015 is no indication of the series trajectory from 2015 onwards. But GISS and their fan base like Cox, and his fans (“fanbois” – Delingpole), are convinced the red line is, actually, the data trajectory.
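
The endpoint behaviour is easy to demonstrate: a locally weighted fit has no data to the right of the final point, so a terminal spike drags the end of the curve upward. A sketch with statsmodels and made-up numbers, illustrative only:

```python
# Sketch: LOWESS endpoints are fit one-sided, so a spike in the final
# years drags the end of the smooth upward. Made-up series, not GISS data.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

years = np.arange(1996, 2016)
anom = np.full(years.size, 0.66)    # flat "hiatus" at an ENSO-neutral level
anom[-2:] = [0.75, 0.87]            # El Nino-style spike in the last two years

smooth = lowess(anom, years, frac=0.4, return_sorted=False)
print(f"smoothed 2013: {smooth[-3]:.2f}   smoothed 2015: {smooth[-1]:.2f}")
```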

This series progression will be make or break between now and about 2018 for Schmidt and GISS given their posturing. We could see some “adjustments” of course. But then they still have the satellites to contend with.

As I reported earlier, NOAA shows Angola as having its hottest month ever in August. [see chart]

Amazing, since NOAA doesn’t actually have any thermometer readings in Angola – which is almost twice the size of Texas. [see chart]

The RSS TLT August anomaly for central Angola was 0.53, slightly above average and nowhere near a record. There has been almost no trend in Angola temperatures since the start of records in 1978. [see chart]

NOAA and NASA are defrauding the American people and the world with their junk science, which is used as the basis for policy.