Why we should rally against homogenization, and I don’t mean of milk

I was invited to speak at the Liberal Democrats Conference in Sydney last Sunday. I began by explaining that while the delegates may have thought homogenization was a term used exclusively for milk, it is also a technical term used in climate science, with an altogether different meaning: it allows scientists to remodel historical temperature data so it is closer to the heart’s desire.

These few words – closer to the heart’s desire – have been borrowed from that famous poem the Rubaiyat of Omar Khayyam. The stanza reads:

Ah, Love! could thou and I with Fate conspire
To grasp this sorry Scheme of Things entire!
Would not we shatter it to bits-and then
Re-mould it nearer to the Heart’s Desire!

Omar used the word “remould”; climate scientists say they are improving the data.

The end result is the same: something has been changed.

The scientific revolution rejected supernatural causes to explain natural phenomena, rejected appeals to authority, and rejected revelation, in favor of empirical evidence. Today, the biggest threat to science is from the sophisticated remodeling of data, known in climate science as homogenization.

An analogy can be made between the remodeling of scientific data, which is now common in a variety of disciplines from conservation biology to climate science, and “fitting up” people the police know to be guilty, but for whom they can’t muster enough forensic evidence for a conviction. This is also a form of “noble cause corruption”.

Why is it a form of corruption? Because we expect the criminal justice system to be fair, to be based on legitimate evidence. Science should also be about facts, and evidence.

Let’s consider a real world example of homogenization, specifically the maximum temperature record for Darwin.

There are very few continuous surface temperature records from northern Australia that extend beyond 60 years. The Australian Bureau of Meteorology, the Hadley Centre (UK Met Office), the Goddard Institute for Space Studies (NASA GISS), and other institutions concerned with the calculation of global temperature trends join temperatures recorded at the Darwin post office from 1882 until January 1942 with temperatures from the Darwin airport recorded from February 1941 to the present, and then make adjustments. There is no temperature record at the post office after January 1942 because the Darwin post office was bombed in Japanese air raids.

Temperatures were recorded at the Darwin post office from 1882, but Figure 1 shows them only from 1895, the first full year of recordings in a Stevenson screen. The earlier years, when measurements were not made in recognized official standard equipment, are excluded.

The temperatures as recorded from the mercury thermometer in a Stevenson screen at the post office from 1895 to 1942 show statistically significant cooling of almost 2 degrees Celsius per century.
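A trend of this kind is just the slope of an ordinary least-squares fit, scaled to degrees per century. A minimal sketch of the calculation follows; the series below is synthetic, cooling at exactly 0.02 deg C per year, and is not the actual Darwin record.

```python
# A minimal sketch: estimate a trend in degrees Celsius per century from an
# annual series by ordinary least squares, using invented numbers.

def trend_per_century(years, temps):
    """OLS slope of temps against years, scaled to deg C per century."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    cov = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    var = sum((y - mean_y) ** 2 for y in years)
    return (cov / var) * 100.0  # per year -> per century

years = list(range(1895, 1943))
temps = [32.0 - 0.02 * (y - 1895) for y in years]
print(round(trend_per_century(years, temps), 2))  # -2.0
```

On real data one would also want a significance test on the slope, but the scaling to a per-century figure is the same.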

This clearly does not accord with the theory of anthropogenic global warming. And indeed these raw observational values are not incorporated into official temperature time series used to calculate national and global temperature trends. First, the data are homogenized.

In particular, the Bureau of Meteorology truncates the record so it starts in 1910 rather than 1895. It then lowers the temperatures recorded at the post office by 0.18 degrees Celsius for all values before February 1941, and by a massive 1.12 degrees Celsius for all values before January 1937. This has the effect of creating a warming trend, as shown by the red line in the following chart, Figure 2.

The homogenized series for Darwin, which is incorporated into official data sets, shows warming of 1.3 degrees Celsius per century, which accords much better with global warming theory.
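The mechanics of such step adjustments are simple to illustrate. The sketch below applies the stated offsets (0.18 and 1.12 degrees Celsius) to an invented, perfectly flat series; treating the two offsets as cumulative is my reading of the description, and the input data are synthetic.

```python
# Sketch of a step adjustment of the kind described: values before 1941 are
# lowered by 0.18 deg C, and values before 1937 by a further 1.12 deg C
# (whether the steps are cumulative is an assumption). The flat input series
# is invented; the point is that the steps alone manufacture apparent warming.

def step_adjust(years, temps):
    adjusted = []
    for y, t in zip(years, temps):
        if y < 1941:
            t -= 0.18
        if y < 1937:
            t -= 1.12
        adjusted.append(t)
    return adjusted

years = list(range(1910, 2015))
flat = [32.0] * len(years)
adj = step_adjust(years, flat)
print(round(adj[0], 2), adj[-1])  # 30.7 32.0
```

Even with no change in the underlying measurements, the adjusted series now rises by 1.3 deg C across the steps.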

Is this drop-down justified? Let’s consider some other series from northern Australia.

There are very few long continuous temperature records for northern Australia. The records for Broome, Derby, Wyndham and Halls Creek in Western Australia, and for Richmond, Burketown and Palmerville in Queensland are the only continuous high quality series that extend from 1898 to at least 1941. This includes the period of the adjustments to the Darwin post office data. These towns are marked in red on the map of northern Australia, Figure 3.

To be clear, I have compared the Darwin post office record against the series for these locations because each was recorded at a single site, with proper equipment (a mercury thermometer in a Stevenson screen), and the individual series show no discontinuities when appropriate statistical tests are applied.
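The article does not say which statistical tests were applied. As one common illustration only, a step change at a candidate break year can be screened for with a two-sample (Welch) t-statistic; the series below is invented.

```python
import math

def step_t_stat(temps, break_idx):
    """Welch t-statistic comparing means before and after break_idx."""
    a, b = temps[:break_idx], temps[break_idx:]
    def mean_var(x):
        m = sum(x) / len(x)
        return m, sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# An invented series with no step: the statistic is (near) zero.
smooth = [30.0 + 0.05 * (-1) ** i for i in range(40)]
print(round(abs(step_t_stat(smooth, 20)), 2))  # 0.0
```

A large absolute value at some break year would flag a possible discontinuity; a formal analysis would compare it against the t-distribution and scan all candidate years.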

Considering the series from Western Australia, Figure 4, it is apparent that there is much inter-annual variation. This is typical of observational temperature data from around the world. While annual temperatures fluctuate from year to year, there is considerable synchrony between locations. Indeed, there are synchronous peaks in 1900, 1906, 1915, 1928, 1932 and 1936. Note that temperatures at Wyndham, Derby and Halls Creek all dip together in 1939, while temperatures at Darwin decline from 1936, Figure 4.

Temperatures at Darwin are less synchronous with temperatures from northern Queensland, Figure 5. In particular the peaks in the Darwin temperature series appear to lag the Queensland series.

There is considerable synchrony between the Queensland locations, with Burketown, Palmerville and Richmond all showing spikes in 1915. Of course this was an El Niño drought year, and the year the Murray River ran dry in south-eastern Australia. Interestingly, the Queensland series all show a significant drop-down from 1939, Figure 5.

None of the temperature series, either from Western Australia or Queensland, show the expected global warming from 1898 to 1941.

Only one of these northern locations has a continuous temperature record from 1898 to the present, that location is Richmond in Queensland. The maximum temperature series for Richmond, which is shown by the green line in Figure 6, doesn’t actually look anything like the homogenized series for Darwin, which is shown by the red line.

The series for Richmond, which represents temperatures for the one location recorded by a mercury thermometer in a Stevenson screen, is actually remarkably similar to the un-homogenized series for Darwin, Figure 7.

What I’ve discovered, after looking at hundreds of such raw time series from eastern Australia, is that they tend to follow a similar pattern: there is almost always cooling in the first part of the record and then warming at least from 1960. Yet whenever I look at the homogenized official records for these same locations they all show statistically significant warming from the beginning of the record.

Let’s now consider the Bureau’s justifications for the homogenization of Darwin.

The first thing that the Bureau does to the Darwin temperature series is discard all the data recorded before 1st January 1910, on the basis that it might not be reliable. Yet we know that a mercury thermometer in a Stevenson screen was the primary instrument used to record temperatures at the Darwin post office from March 1894, and that there was no site move or equipment change for the period to 1910. Yet these 15 whole years of valuable data, from 1895 to 1910, are simply discarded.

Then the Bureau lowers the maximum temperature series by 0.18 degrees Celsius for all recorded observations from February 1941 back to January 1910, and then again by 1.12 degrees Celsius for all data before January 1937, back to January 1910. So the adjusted, homogenized series has a statistically significant warming trend of 1.3 degrees Celsius per century for the period from 1910 to 2014.

This largest drop-down of 1.12 degrees Celsius is justified on the basis that the site became “overshadowed by trees, especially after 1937”. This is what is written in the official catalogue. If shading from trees were the cause of the cooling, it is curious that the adjustment is made for the period before the site ‘deteriorated’ because of shading. This is not rational: if the problem occurred from 1937, the correction should apply to that period.

Furthermore, photographic evidence from the Northern Territory State Library does not support the hypothesis that the site became progressively more shaded, Figure 8.

In particular, the trees behind the Stevenson screen along the front of the post office building are tallest in the first photograph taken in 1890. In the middle 1930 photograph, the trees immediately in front of the post office appear to have been removed. In the 1940 photograph (third), the trees have apparently regrown in front of the post office and there appears to be some shrubbery where the modified Greenwich thermometer stand was positioned in 1890.

What all of this ignores, is the cyclone that hit Darwin on 10th March 1937. According to the Northern Standard newspaper reporting immediately after the cyclone, “it raged and tore to such vicious purpose that hardly a home or business in Darwin did not suffer some damage… Telephone wires and electric mains were torn down by falling trees and flying sheets of iron, windmills were turned inside out, garden plants and trees were ruined, roads and tracks were obstructed by huge trees…”.

Is it possible that rather than the cooling being due to ‘shading’ from 1937, there was actually less vegetation following the cyclone? In fact, the drop in maximum temperatures is likely to have been due to removal of trees and shrubbery by cyclonic winds. It is more likely that vegetation which had previously screened the post office from the prevailing dry-season south easterlies that have a trajectory over Darwin Harbor, was removed by the cyclone.

While shading can create cooling at a site, a similar effect can be achieved through the removal of wind breaks. A study of modifications to orchard climates in New Zealand has shown that shelter could increase the maximum temperature by 1°C for a 10-metre-high shelter belt.

In conclusion, the homogenization of the record at Darwin is by no means unusual. I used the example of Rutherglen, in north-eastern Victoria, in my request late last year to the Auditor-General of Australia for a performance audit of the procedures and the validity of the methodology used by the Bureau of Meteorology.

At Rutherglen a cooling trend of 0.35 degrees Celsius per century in the minimum temperature series is homogenized into warming of 1.73 degrees Celsius per century.

In support of this request to the Auditor-General a colleague, Tom Quirk, showed how the raw record for Dubbo in NSW is changed from cooling of 0.16 into warming of 2.42 degrees Celsius per century through homogenization. Brisbane in Queensland is changed from cooling of 0.68 to warming of 2.25. Warming at Carnarvon in Western Australia is increased from 0.18 degrees per century to 2.02, and so the list goes on.

At an online thread at The Australian newspaper’s website last Monday – following a terrific article by Maurice Newman further supporting my call for an audit of the Bureau of Meteorology – I noticed the following comment:

“Don’t you love the word homogenise? When I was working in the dairy industry we used to have a homogeniser. This was a device for forcing the fat back into the milk. What it did was use a very high pressure to compress and punish the fat until it became part of the milk. No fat was allowed to remain on the top of the milk it all had to be to same consistency… Force the data under pressure to conform to what is required. Torture the data if necessary until it complies…”

Clearly the Bureau uses force to remodel historical temperature data so it’s closer to what they desire.

This is not science. But given the Bureau’s monopoly, and the extent of the political support they received from both Labor and the Coalition, they can effectively do whatever they want.

Western elites are beset by the fear of a coming environmental apocalypse, and climate scientists at the Bureau of Meteorology have undertaken industrial scale homogenization of the historical temperature data to support this phobia.

The late Professor Bob Carter wrote in 2003:

“To the extent that it is possible for any human endeavor to be so, science is value-free. Science is a way of attempting to understand the world in which we live from a rational point of view, based on observation, experiment and tested theory.”

“The alternative to a scientific approach”, according to Prof Carter, “is one based on superstition, phobia, religion or politics.”

What the Bureau does to the data from Darwin could even be described as a form of art, based on a mix of desire and phobia. It is not science, and we need to rally against it, against this homogenization.

One of the important facts exposed here is that continuous, uninterrupted, high quality (Stevenson Screen) temperature data sets extending back before 1910 exist around Australia. There are 7 such data sets listed here.

The BOM has almost got all Australians believing that no useable temperature data exists before 1910, so they removed all the data before 1910 from the national temperature record.

This has three important consequences:

1/ the major Centenary Drought of 1895-1903 and its correspondingly high average temperatures are removed from the record. The Centenary Drought can be easily seen in 3 of the records here. This was important in backing the Flannery deception of the recent drought being the “worst drought in history”.

2/ Based on raw temperatures, it is likely that the average annual Centenary Drought temperatures were hotter than today in most parts of Australia. This is totally unacceptable to the Global Warming movement.

3/ the early temperatures from 1910 onwards can be heavily adjusted downwards with little fear of contradiction. The whole homogenisation process requires temperatures from 1910 onwards to be dropped by 1-2 degC from their measured values to create the warming trend. This is more easily accomplished with the well documented Centenary Drought figures removed.

Whatever position you lean towards in this debate, the measurement of air temperature is a difficult task.

Temperature sensors tell you what the temperature of the sensor is, be that a glass bulb full of mercury, a thermocouple or a thermistor. Apart from the problems of calibrating the sensor (no measuring device is inherently accurate), there is the problem of how to get the sensor to be at the temperature of the surrounding air.

The errors in this are much greater than the temperature trends under consideration here.

Whether you think there is anthropogenic global warming or not is not the issue. Post-processing data that are so fundamentally unsound for so many reasons is extremely dubious practice, when we consider the tiny trends we are trying to detect, even if you exclude the vested interests.

Why? Some of the energy (heat) transfer between the sensor and its surroundings is by convection, some by conduction (say from the stem of the thermometer and whatever it is attached to), and some by radiation. In an ideal world the sensor would be very small, have a reflective surface, not be attached to anything (to prevent conduction of heat, though this is often impractical to avoid), and be actively ventilated at a constant rate.

Radiant and conducted heat are problems because they are not necessarily closely correlated with the air temperature of the wider environment. The source of radiant heat is of course the surroundings of the sensor, which is why the standard is a white-painted wooden Stevenson screen. But the temperature of the screen (which is determined by the radiant heat impinging upon and radiating from it, and by ventilation of the screen by wind) depends on the nature of the paint and its condition, the thickness of the paint (the number of coats applied), the thermal properties of the wood and so on.

The point is, there are inevitably going to be variations between screens, and within a screen over time. And apart from the screen, there is the shading by trees, the colour and temperature of objects nearby, the properties of the nearby ground, etc. etc.

Back to the sensor. A small body with a reflective surface will have a different radiant heat exchange (in and out) with its surroundings than a larger body with an absorptive surface. A small reflective sensor surrounded by moving air can be expected to read lower than a large sensor surrounded by still air if:
• the two sensors are located within the same general environment, which is typically the screen,
• the screen is warmed by the sun, so that the screen is warmer than the surrounding air.

I cannot cite a higher authority because I went through this exercise about 40 years ago; it’s basic physics that the well ventilated small object will be more influenced by the actual air temperature and less by the incoming radiant energy from the warm wooden box, than the large sensor that is surrounded by a thick boundary layer of still air and the same warm wooden box. The boundary layer of still air around the small ventilated sensor is very thin (because of the air rushing past). So the heat transfer to and from the sensor will be more influenced by conduction and convection across the thin boundary layer between the air and the sensor, relative to the radiant heat exchange taking place at the same time. For the larger unventilated sensor, the balance between conduction, convection and radiation is tipped more towards radiation because the relatively still air within the screen leaves a comparatively thick boundary layer of absolutely still (insulating) air around the sensor. No absolute numbers here, just the basic principles of heat transfer. On top of this, for ideal measurements and trend analysis the screen itself should also be ventilated at a constant rate and generally managed the same as the sensors themselves, but in the 19th century all this was a step too far and so we have the comparatively arbitrary convention of the naturally ventilated Stevenson screen.
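The balance described above can be put in rough algebraic form: at equilibrium the convective exchange with the air and a linearized radiative exchange with the screen cancel, so the sensor settles at a weighted average of the two temperatures. The coefficients and temperatures below are illustrative assumptions, not measurements.

```python
# Linearized energy balance: h_conv*(t_air - T) + h_rad*(t_screen - T) = 0,
# so the equilibrium sensor temperature T is a weighted average of air and
# screen temperatures. All numbers here are illustrative assumptions.

def sensor_equilibrium(t_air, t_screen, h_conv, h_rad):
    return (h_conv * t_air + h_rad * t_screen) / (h_conv + h_rad)

t_air, t_screen = 30.0, 33.0  # a sun-warmed screen, warmer than the air

# Small, well-ventilated sensor: convection dominates, reads near air temp.
print(round(sensor_equilibrium(t_air, t_screen, h_conv=50.0, h_rad=5.0), 2))  # 30.27
# Large, unventilated sensor: radiation weighs more, reads warmer.
print(round(sensor_equilibrium(t_air, t_screen, h_conv=10.0, h_rad=5.0), 2))  # 31.0
```

No absolute numbers are claimed here either; the sketch simply shows the direction of the effect — stronger ventilation pulls the reading toward the true air temperature.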

There are other factors to consider, but this post is too long already. In a couple of nutshells:
• Anyone who thinks they are measuring the absolute temperature of air 5 feet above the earth’s surface with an error of less than one degree is probably fooling themselves, unless they have designed their sensor and its surrounds, and the surrounds of the surrounds, very carefully. Better than ±0.5 degree: very, very unlikely.
• The factors that affect repeatability – which is what these discussions about homogenising data are mostly about – are mostly the same as the accuracy factors, but these factors are so numerous, complex, variable over time, and mostly unrecorded, I wonder about the reliability of any surface temperature record. The use of long term records using similar instruments (remembering that no two mercury in glass thermometers are the same) is the best anyone can do. But the whole exercise is like measuring the diameter of a pin with several wooden primary school rulers.

Didn’t the climategate emails from our BoM chief (Jones?) clearly reveal that their tactic for dealing with any questioners about temp measurements was to “snow them with all the raw readings so they can’t make any sense of our reports”?

So as soon as someone takes the time and effort to try and make sense of the raw measurements (eg pre-1910), the BoM excludes them from the official reporting?

Over at Judith Curry’s blog Zeke Hausfather admits that they have adjusted US warming up by 100% compared to raw data. Here’s his quote:
“Fixes to errors in temperature data have effectively doubled the amount of U.S. warming over the past century compared to the raw temperature records.”
So how does that compare to the Watts stations used in their study? Dr Spencer said that the Watts study reduced US temps by 50%. That’s a turnaround of 150% between the two data sets.

Mathematician Mike Jonas is unimpressed with Zeke’s post as well. Here is his summary at Climate Etc.:

Mike Jonas | February 10, 2016 at 12:36 am | Reply

This is weird. The CRN temperature anomaly spends the vast majority of the time well inside the -2 to 2 deg C range. The CHN(raw)-CRN and CHN(adj)-CRN differences frequently exceed +2 or −2 deg C. IOW just the difference between the two frequently exceeds the normal range! I find it hard to accept the assertion that “adjustments are effective at finding and removing problems in the historical network”.

I’m also unhappy with the concentration on trends. Given that temperatures can easily vary by 10 degrees C between summer and winter months at most locations, it is surely inappropriate to use trends over barely more than a decade to assess whether temperature adjustments have been reasonable, when those trends are typically of a fraction of a degree C over the whole period. It seems entirely reasonable to suppose that on that timescale two locations 50, 100 or 150 miles apart could easily have opposite trends.

Maybe it is just fanciful to suppose that meaningful trends can be seen in any of this data.

So let me get this straight … Jennifer is using claimed changes to – and excisions from – the temperature records for northern Australia to observe that

“The [non-‘homogenised’] temperatures as recorded from the mercury thermometer in a Stevenson screen at the post office from 1895 to 1942 show statistically significant cooling of almost 2 degrees Celsius per century. This clearly does not accord with the theory of anthropogenic global warming.”

To me it seems a rather ‘long bow’ to draw … from northern Australia to the entire globe. And I also wonder at how Jennifer is clearly at odds with the ‘homogenisation’ of our early records, while her article does not – in my opinion – establish that such ‘homogenisation’ was faulty.

In my opinion, it does not help her argument to note that Rutherglen, Dubbo etc. were also ‘homogenised’. If there is error in BoM’s modifications, then it is not sufficient, in my opinion, to point to the differences between the modifications and the original values as if that were some proof of error.

“From the earliest days of the penal colony the journals of the First Fleet officers remarked upon the weird, often violent climatic changes that made survival in the antipodes such a fraught, contingent affair.

There were no records to tell them of the long run of unusually mild years that left the harbour explored by Captain Cook such a verdant green landscape.

The colonists did not know of el Nino or the Southern Oscillation Index.

They could see for themselves, however, the fearful thunderstorms that boiled up over the southern horizon and fell upon the little settlement, uprooting trees, smashing down flimsy huts and killing unprotected livestock with cannonades of lightning.

Lieutenant William Dawes, an engineer and surveyor of the Royal Marines, was soon put to work recording the colony’s weather at the little observatory he had built to observe the passage of a comet.

Even though the colonists’ understanding of the hypercomplex interplay of all the variables feeding into their weather and climate was unsophisticated, they understood the importance of recording them.

“The Bureau is also publishing fact sheets on all individual ACORN-SAT locations, including adjustment history. It anticipates these will be available by the end of 2017. Six sites have been published to date including:

Amberley
Deniliquin
Mackay
Orbost
Rutherglen
Thargominda ”

The following are links to some of the fact sheets that are already available for particularly contentious sites.

I realize that the bureau are being very tardy in addressing the issues raised by yourself and others. As you had the ear of the Coalition Environment Committee last October, did you suggest, to accelerate the process, that either staff of the BOM be reassigned from their present duties, or new staff, or possibly a new auditing department of the BOM be set up for this purpose?

Either the BOM staff in Queensland that have been recently retrenched or even the laid off CSIRO climate scientists could be re-employed after being re-educated appropriately.

I am sure the IPA would approve of an expansion of the BOM bureaucracy and the necessary resources to examine in detail your claims re Darwin and the other six stations in question. By the time the end of 2017 has arrived you may, with your colleagues, have another couple of stations to keep them busy for the next couple of years. The taxpayers will be getting full value for their dollar.

To avoid all this effort and expense, I think it might be wise to examine the issue in a more holistic manner rather than conducting forensics on individual sites.

The question is how sensitive the overall trend is to data from an individual station or a small group of stations. For example, if the temperature data for the seven questionable sites were expunged from the record, how much difference would it make to the trend?

Would it lower the trend value sufficiently to eliminate any possibility that Australia has noticeably warmed since .. whenever?

In summary, Ken found that the raw data for Tmin gave a trend of 0.7 degrees per century while the Acorn data show a trend of 1.03 degrees per century (a 47% increase). Ken has collated the Tmax data likewise, which showed a much more moderate increase due to homogenization (13%).

Interestingly these values are much lower than the very latest UAH v6 beta 5 value (from 1979 till the present) of 1.5 degrees per century, which fortuitously is down from the precariously warm 2.4 degrees per century of UAH v6 beta 4.

It would be a fascinating exercise, if Ken could be requested to redo his calculations and eliminate the above six stations plus Darwin (maybe throw in Bourke for free) to see how much this affects the Acorn trend in both Tmin and Tmax.

Intuitively I suspect it may matter little (which may depend on how you define little) as we are looking at 7 or 8 stations out of 103 but I would be happy to be shown to be wrong.

In the meantime and in the interest of saving a lot of unnecessary hooha, I think everyone involved should have a nice cold bath and cool down.

No spangled drongo, clearly the staffing levels at the BoM need to be massively increased to audit the temperature data from the sites Jennifer has issues with. They are obviously taking too much time to adequately address her concerns.

I think I need to reiterate once again, but this time with feeling, that rather than being bogged down with the minutiae of particular sites, one should look at the big picture. Personally I really don’t know who to believe with regard to Rutherglen, Deniliquin etc. but frankly, I don’t give a damn.

A standard practice in any scientific discipline is to examine the sensitivity of a process (such as averaging) to uncertainties or errors in the input.

If you have identified these significant errors then you must investigate what effect this has on the results. If 8 of 104 sites are in error then remove them and average the results.

You might find Australia has cooled overall, which would be a significant result; or it might have dropped the trend by 7%; or it may have raised the trend values by 50% (which would make it comparable to the UAH satellite data for the current version).

Alternatively you could include in the average all the suspect sites but with zero trends or trends from the raw data and look at how much this changes things. This is an undergraduate or even high school exercise if you have the data at hand. See Ken Stewart.
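The sensitivity exercise suggested above is trivial to sketch: compute the network-mean trend with and without the suspect stations. The station names and trend values below are invented for illustration; they are not real ACORN-SAT figures.

```python
# Network-mean trend with and without a set of suspect stations. All station
# names and trend values (deg C per century) are invented for illustration.

def mean_trend(trends, exclude=()):
    kept = [t for name, t in trends.items() if name not in exclude]
    return sum(kept) / len(kept)

trends = {f"station_{i}": 1.0 for i in range(8)}
trends["darwin"] = 2.0
trends["rutherglen"] = 2.0

full = mean_trend(trends)
reduced = mean_trend(trends, exclude={"darwin", "rutherglen"})
print(round(full, 2), round(reduced, 2))  # 1.2 1.0
```

On these made-up numbers, removing the two suspect stations lowers the mean trend from 1.2 to 1.0 deg C per century; with the real data at hand the exercise is just as simple.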

Without doing the above exercise, you are wasting yours and everybody else’s time. I guess yours is at the IPA’s expense, but as the resources of the BOM are dwindling, do you really want to make the taxpayer chase you and your colleagues’ obsessions down every rabbit hole?

I think the BOM might have different priorities. After general forecasting, marine forecasting, cyclone tracking etc. etc., your enquiries may be at the end of a long queue. Hopefully by the time you arrive at the start of the queue you may have done the extremely obvious exercise I suggested above and come to an appropriate conclusion.

As for outrage, Jennifer, please reserve it for when it’s really due. The UAH satellite data, if you had any outrage left, would send you into apoplexy. The major problem is that not only is it heavily homogenized, but the homogenization changes massively from version to version (even from one beta to the next; see my comment on Feb 9 above).

I seem to recall you made the following statement in October of last year which is pertinent.

jennifer October 23, 2015 at 4:34 pm #

Miker,

“I appreciate you taking the time to draw these issues to my attention; particularly the extent of homogenization of the UAH satellite data. I have avoided using this data because I know it has been homogenized, but I was unsure of the extent”

To further elaborate on the extent in the Australian context.

For the Australian UAH data, going from v5.6 of a year or two ago through to the current version 6 beta 5, the trend has changed twice by over 50%, firstly from 0.16 to 0.24 and then back to 0.15 degrees per decade. Looking at the data for every month and for each version, the vast majority of temperatures, if not all, are changed, even between successive beta versions!

Now these changes are simply due to changes in the algorithm used, as the original raw satellite data is identical.

As they say about a spinning roulette wheel, “around and around she goes, where she lands nobody knows”.

What is even more amusing is Roy Spencer’s statement, in his description of the ad hoc changes introduced to his latest beta: “We also find that the resulting LT trends over the U.S. and Australia are in better agreement with other sources of data”. He hasn’t specified what the other sources are, but it may mean he tried to bring the trend value down from the ridiculously high value of 0.24 to something that more closely approximates the much lower Acorn trend value.

So assuming this is true (it is hard to know because of the vagueness of his description) then he is possibly homogenizing his Australian satellite data against the Acorn data. What a joke.

Just an interesting take on Perth’s recent record-equalling heatwave of four days. According to ACORN, Perth had six consecutive days of +40C in 1933. Since ACORN is the ‘go-to’ standard of the BoM, wouldn’t this have been the record? Ah, homogenization!

I think the point about Jennifer’s article is that whilst these are only a few of the 103, they are valuable pointers to the way BoM is undertaking its work, and if it isn’t trustworthy for these six, then how can we believe what they are doing with the whole database? I would like to believe the large amount of money that is taken from me by the government each year is actually being spent wisely, and not on poor quality science or in pursuit of a political agenda.

Do these two homogenize each data point by hand? If so the solution is easy: just get rid of these two no-hopers and assign the task to someone more trustworthy. Someone in the finance or economics area would be most suitable since, as we all know, these are the most trustworthy professionals there are, outside of used-car salesmen and lawyers.

I know of someone well credentialed who well understands all the intricacies of the scientific method. My suggestion is Maurice Newman. I think he may be looking for full-time work at the moment. Currently he seems to be on extended leave, just utilizing his extensive knowledge of climate science to provide commentary for the Murdoch press on a part-time basis.

As an economist (economics is commonly referred to as the “dismal science” for excellent reason), his talents were utilized to advise Tony Abbott on the economy. He was a stunning success in this role, as evidenced by the state of the economy at the time, before being told to get on his bike by Malcolm.

Other persons who could fill the role, dependent on availability, are Alan Jones and Andrew Bolt, as they claim to know climate science better than anyone at the BoM.

All suitable choices, but Jennifer, maybe you would like the gig? Maybe get spangled drongo to help out.

By the way, for some unknown reason Watts did not give the regional distribution of the stations he included versus the same information for those he excluded. Trends always vary from region to region, so the absence of this information really does little to instill any confidence in his work.

Good for you Jen.
It does indeed appear to be a form of ‘noble cause corruption’.
I feel sad for the genuine, well trained people who work hard to do their job.
Unfortunately, this branch of science along with ‘environmental science’ is suffering some major reputational damage amongst the real people who live and work in the real weather/climate/environment.
BoM is actually getting worse at seasonal forecasting… not better.
Why is that?

Yes, I agree a balanced diet is essential, and I hope likewise you are reading widely. I tend to read, and sometimes comment on, sites that have a diverse range of opinions on climate matters.

My favourites are of course this one; Ken’s Kingdom, where I have had some engaging exchanges with Ken Stewart; Roy Spencer’s and Judith Curry’s blogs; and, on the other side of the fence, Moyhu, Tamino and …and Then There’s Physics.

Do you have your favourite haunts other than this site? Maybe you could provide some comments on these other sites.

As for other forms of media, I used to subscribe to both Fairfax and the Australian. Despite the Australian’s journalism being superior to Fairfax’s in a lot of respects, I found the religious crusades by the cultural warriors a bit too tedious, so I have dropped it.

We also have Foxtel, so for humour, I tend to watch either the Comedy Channel or the shock jocks on Fox News.

Yes Spangled.
I did watch them tippy toe around the elephant.
A couple of us went to find out what on earth was going on there in early January.
If you go here: https://www.facebook.com/women4livingbasin/
And scroll down to Jan 29th, you will find the article we wrote about it.
The other ‘Elephant’ that is not in this article, was that a lot of this disaster is because our ‘authorities’ took too much notice of BoM’s seasonal forecasting in 2013.
In Autumn 2013, BoM forecast an 85% probability for a ‘wetter than average’ Winter/ Spring 2013.
That was partly the reason why entities like the MDBA, CEWH and OEH thought it would be a great idea to splash and trash all that stored water onto an already swollen system.
Instead of being ‘wetter than average’, Spring 2013 was the start of a very severe drought.
A similar problem occurred in our valley but we were way less impacted than the communities out there.
Their environment has been totally degraded and quite clearly sacrificed for some other environment somewhere downstream… and they’re suffering from a man-made drought.
We left that area and travelled downstream, past Lake Victoria and then on to the Lower Lakes in SA.
They were completely full.

From what I saw I think the acronym MDBP is a total misnomer.
It should be MFDDP…..The Murray Flood Darling Drought Plan…..because that’s exactly what they’re doing!
And BoM is most definitely one of the entities that should be called to account for it!
The Water Act 2007 gifted BoM a legislative monopoly over reporting on water resources.
And BTW….I would happily rally against the homogenisation of milk too!
It tasted much better back then 🙂

Over at Climate Audit, Steve McIntyre has recently obtained more data on some of the past studies that claimed to show less warming for the Medieval Warm Period. Using his maths and stats skills, he has been able to find much higher dendro temperatures for the Medieval Warm Period when compared to recent warming.
Some of his graphs show the results compared to the earlier studies, and Willis Eschenbach’s comments, backed up by maths expert Nic Lewis, endorse Steve’s results. Here’s Willis’s comment:

“Steve, I’m overjoyed to see your continued deconstruction and disambiguation of the tree ring data.

I was saddened but not at all surprised by this statement:

Although the article in consideration was published more than a decade ago, the analysis in today’s article was impossible until relatively recently, because coauthor Luckman withheld the relevant data for over a decade.

The tragedy of modern climate science is that this kind of scandalous scientific malfeasance goes not only entirely unpunished, but is completely ignored by the overwhelming majority of mainstream climate scientists. Gotta say, it angrifies my blood. People tell me all the time that I’m too passionate about this kind of scientifically criminal behavior.

The taxpaying public funds these charming fellows to go adventure in the wild and secure important scientific data, and they treat it as if it were their own. Disgraceful and shameful. Me too passionate? I say that people are not passionate enough about this underhanded, deceptive withholding of scientific information.

Except Canadians, I have it on good authority that they never sweat unless there’s emotional attachment. But for the rest … where is the richly deserved outrage?

But I digress. Good stuff, and fascinating. I gotta learn more about “random effects statistical techniques”, where might I start my further education?

My best to you,

w.” End of quote.

Some of these people seem to be extreme cherry pickers and con merchants, certainly not serious scientists.

Great article. “Lying for dollars” could be a nice title describing what has been done with tax payer money.
First they attack you, then they try to ignore you, then they try to distract you, then they try to buy you off, then they do what is right.
Hang tough.
This pernicious social mania needs to be killed off.

Science in schools has been dumbed down long enough that we shouldn’t expect much response from the community at large, at least the younger gens. I’m waiting for the nation’s Chief Scientist to make a comment. Ian Chubb lost his opportunity, but how about the incumbent? Waiting … waiting … …

“Even watts from denial central”
Jesus… can’t you people just try to engage normally without holocaust allusions… are your arguments that thin…
Sarcasm and rhetorical…
By the hysterical gibberish… I could ask if you are off the grid and got rid of the car…
But we both know you’re just another keyboard klimate warrior… pretending… to care…

Sorry, I didn’t introduce the dreaded h-word, but by your use you seem to have invoked Godwin’s law. However, I do apologize if you took personal offence at the use of the term ‘denial’ in isolation. I will now refer to the Watts site as (non-holocaust) denial central.

Yes, I have been known to use rhetorical devices and sarcasm; in fact, I know all the tricks: dramatic irony, metaphor, pathos, puns, parody, litotes, hyperbole and even satire. On that last note, beware of a hedgehog or an echidna called Spiny Norman (see http://www.urbandictionary.com/define.php?term=spiny+norman). Spiny is a committed Christian and is deeply offended by your use of the Lord’s name in vain.

No, Drapetomania, I am not off the grid, but I do have solar panels and enjoy watching the incoming photons on a sunny Melbourne day at a 66c/kWh feed-in tariff. Unfortunately where I work has little public transport, so I am forced to use the car, but I do save a bit using a diesel, and the wife drives a Prius C that averages 4 litres/100 km around town. Terrible, isn’t it?

Correspondingly I assume you drive a vehicle like this to the local milk bar. Is this yours for sale?

I have travelled long and difficult fields of reality to find the Hairy-nosed Bunyip, animal of speculation and legend, in a cotton paddock just out of Wee Waa. It was a UNSW climate scientist looking for relevance and a fat grant. Never mind.

Miker, I think you have let through to the keeper the key points that Jen is making. Data is manipulated to suit a meme. Now you may say no it’s not, but why is it that data in the past is invariably adjusted down, rather than a 50/50 split of ups and downs? Why do they leave out data that does not suit the warming, i.e. why do they remove the pre-1910 data? And why, where there is other continuous data, does it show a similar trend to the pre-adjusted Darwin data?

You also might like to ponder exactly what the temperature anomaly can actually show us?

“a representative lower-limit uncertainty of ±0.46 C was found for any global annual surface air temperature anomaly. This ±0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C, and represents a lower limit of calibration uncertainty for climate models and for any prospective physically justifiable proxy reconstruction of paleo-temperature. The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.” http://eae.sagepub.com/content/21/8/969.abstract

Like yourself, I was puzzled for quite a while about why more adjustments are made up than down. After thinking about it and doing a bit of reading, I think I may have an understanding of the reason for the disparity. It seems it is simply the result of the movement of stations at various times over the past century, from UHI-affected sites, such as the centres of towns (typically the post office), to more rural sites, often adjacent to airports or other less UHI-affected locations. You can imagine a number of reasons why this would be a sensible thing to do.

As well as avoiding the effects of increasing UHI over time, due to the encroachment of buildings, roads etc., there are other good reasons why it would be sensible to move a station to a more remote location. In some cases it may be to reduce interaction with the local populace, traffic etc., and in some cases it may even be related to land values. The latter is probably a minor consideration.

Comparison of the two sites, before and after the move, will more often than not show a cooling, simply as a result of moving away from urban heat environments. The BoM, and similar bodies in other countries, are interested in trying to determine whether temperature changes are exogenous (due to natural variation, or anthropogenically driven, depending on your viewpoint) rather than simply being due to the move.

I know it may cause an outbreak of cognitive dissonance amongst the usual readers of this blog, as, ironically, the increase in trends as a result of homogenization is likely to be due to the BoM’s compensation for UHI. The homogenization is an attempt to compare apples with apples and not with some other fruit (especially cherries).
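The station-move mechanism described above can be sketched in a few lines of code. To be clear, this is a toy illustration under my own assumptions — the function, the station numbers and the simple difference-of-means step estimate are all invented for the example; the BoM’s actual ACORN-SAT method (percentile matching against multiple neighbours) is far more elaborate:

```python
# Toy homogenization step: a station moves from an urban (UHI-affected)
# site to a rural one, introducing a step change. Estimate the step by
# comparing candidate-minus-neighbour differences before and after the
# move, then shift the pre-move segment onto the new site's baseline.

def adjust_for_move(candidate, neighbour, move_index, window=5):
    """Return the candidate series adjusted at a documented station move.

    candidate, neighbour: lists of annual mean temperatures (same years);
    the neighbour is assumed homogeneous over the comparison window.
    move_index: index of the first year recorded at the new site.
    """
    before = [c - n for c, n in zip(candidate[move_index - window:move_index],
                                    neighbour[move_index - window:move_index])]
    after = [c - n for c, n in zip(candidate[move_index:move_index + window],
                                   neighbour[move_index:move_index + window])]
    step = sum(after) / len(after) - sum(before) / len(before)

    # Apply the (usually negative) step to the earlier, urban segment.
    return [t + step for t in candidate[:move_index]] + candidate[move_index:]

# Invented example: the old town site reads about 1.0 C above the
# neighbour, the new rural site about 0.1 C above, so the step is
# roughly -0.9 C and the early record is cooled accordingly.
urban = [21.0, 21.1, 21.0, 21.2, 21.1]        # town (post office) years
rural = [20.2, 20.3, 20.2, 20.4, 20.5]        # airport years, after the move
neighbour = [20.0, 20.1, 20.0, 20.2, 20.1,
             20.1, 20.2, 20.1, 20.3, 20.4]
adjusted = adjust_for_move(urban + rural, neighbour, move_index=5)
```

Note the effect: cooling the early (urban) years steepens the adjusted warming trend relative to the raw series, which is exactly the direction of adjustment being complained about.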

I think my conjecture could easily be refuted if one could demonstrate that the number of sites where a station has been moved from a remote site to the centre of town is comparable to the number moved away from the centres of towns.

Finally, I am still waiting for Jennifer, or one of the others who comment here, to attempt to quantify the significance of the anomalous sites they have apparently identified upon the overall Australian ACORN trend value. For those interested in the answer, to help you along the way, you might examine Ken Stewart’s data, which I refer to in my comment of February 10 above.

I know I am beginning to sound like a scratched record, but without this information these discussions, including my own contributions, are full of useless hot air. The only effect would be to create localized UHIs that even the skills of the BoM could not successfully homogenize away.