Preliminary comments on Hausfather et al 2013

There’s a new paper out today by Hausfather et al, highlighted at RealClimate, titled Quantifying the Effect of Urbanization on U.S. Historical Climatology Network Temperature Records and published (in press) in JGR Atmospheres.

I recommend everyone go have a look at it and share your thoughts here.

I myself have only skimmed it, as I’m just waking up here in California, and I plan to have a detailed look at it later when I get into the office. But, since the Twittersphere is already demanding my head on a plate, and would soon move on to “I’m ignoring it” if they didn’t have instant gratification, I thought I’d make a few quick observations about how some people are reading something into this paper that isn’t there.

1. The paper is about UHI and homogenization techniques to remove what they perceive as UHI influences, using the Menne pairwise method with some enhancements from satellite metadata.

2. They don’t mention station siting in the paper at all, and they don’t reference Fall et al., Pielke’s, or Christy’s papers on siting issues. So claims that this paper somehow “destroys” that work are rooted in a failure to understand that UHI and siting are separate issues.

3. My claims are about station siting biases, which are a different mechanism at a different scale than UHI. Hausfather et al 2013 doesn’t address siting biases at all; in fact, as we showed in the draft paper Watts et al 2012, homogenization takes the well sited stations and adjusts them to be closer to the poorly sited stations, essentially eliminating good data by mixing it with bad. To visualize homogenization, imagine bowls of water with different levels of clarity due to silt: mix the clear water with the muddy water and you end up with a mix that isn’t pure anymore. That leaves data of questionable purity.

4. In the siting issue, you can have a well sited station (Class1 best sited) in the middle of a UHI bubble and a poorly sited (Class5 worst sited) station in the middle of rural America. We’ve seen both in our surfacestations survey. Simply claiming that homogenization fixes this is an oversimplification not rooted in the physics of heat sink effects.

5. As we pointed out in the Watts et al 2012 draft paper, there are significant differences between good data at well sited stations and the homogenized/adjusted final result.

We are finishing up the work to deal with TOBs criticisms related to our draft, and I’m confident that we have an even stronger paper now on siting issues. Note that through time the rural and urban trends have become almost identical – always warming up the rural stations to match the urban stations. Here’s a figure from Hausfather et al 2013 illustrating this. Note also they have urban stations cooler in the past, something counterintuitive. (Note: John Nielsen-Gammon observes in an email that the urban stations appearing cooler in the past “is purely a result of choice of reference period.” He’s right. Like I said, these are my preliminary comments from a quick read. My thanks to him for pointing out this artifact. -Anthony)

I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data. Our approach in Watts et al is to locate the best stations, with the least bias and the fewest interruptions and use those as a metric (not unlike what NCDC did with the Climate Reference Network, designed specifically to sidestep the siting bias with clean state of the art stations). As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”

6. They do admit in Hausfather et al 2013 that there is no specific correction for creeping warming due to surface development. That’s a tough nut to crack, because it requires accurate long term metadata, something they don’t have. They make claims at century scales in the paper without supporting metadata at the same scale.

7. My first impression is that this paper doesn’t advance science all that much, but seems more like a “justification” paper in response to criticisms about techniques.

I’ll have more later once I have a chance to study it in detail. Your comments below are welcome too.

I will give my kudos now on transparency though, as they have made the paper publicly available (PDF here), something not everyone does.

106 thoughts on “Preliminary comments on Hausfather et al 2013”

Typo
“takes the well sited stations and adjusts them to be closer to the well sited stations,”
UHI appears mostly in the minimum temps, which occur around dawn, not so much in the max temps. Surely a more accurate temperature scale could be created by using only max temperatures.

Typo: “homogenization takes the WELL sited stations and adjusts them to be closer to the WELL sited stations” I assume you meant “homogenization takes the well sited stations and adjusts them to be closer to the POORLY sited stations”

First off, never let the twits stampede you.
I will not read this until I’m done with work, but cynicism rules: Hausfather’s methods seem to fit the definition of insanity, repeating the same actions while hoping for a different result.
Nice comparison, muddying the water.
I expect circular arguments, statements such as: no matter how we smear the station data about, we get the same results, therefore the station siting problems do not matter.

“…homogenization takes the well sited stations and adjusts them to be closer to the well sited stations…”
Did you mean “homogenization takes the well sited stations and adjusts them to be closer to the poorly sited stations”?

Please allow me to digress for a moment. I am facing the same exact problem with test results from other labs. We take the same product (in your case it is the weather) and measure it in pretty much the same manner as other labs. The measurements are then treated differently, so different labs give different ‘results’. The outputs from different methods do not match, though the method of making raw measurements is pretty much the same, because scales, particle counters, thermometers and gas analysers are pretty much the same.
The bias that produces different final answers starts the moment the calculations begin. I agree the whole thing boils down to making final claims that are, or are not, solidly rooted in the raw data available. Erroneous conceptual constructs are applied, and this affects what happens to the raw numbers.
It is frustrating in the following manner: We were asked to provide a ‘conversion’ so our results will be ‘comparable with the results from other labs’ who are using other data processing methods. Our position is, why would we do that when the other methods are known to be faulty, arbitrary, questionable, variable, even invalid etc?
Why not have a conversation about doing things correctly, then trying to process old raw data into new, correct results? That is exactly what you are proposing. Why convert newer correct results to ‘match’ old questionable ones? The paper above is basically a call to use a different protocol, with known issues, to process a larger set of contaminated raw data into a result that is more or less the same as older treatments which are broadly accepted have known issues.
My recommendation, Anthony, is to stick with the most correct method you have at your command, show how you do it, and present the results. It is also helpful to show that the ‘other methods’ give different results, and to show why (if possible): that when one takes a First Principles approach, the results will necessarily differ for X and Y reasons. It is more work and you should not have to do it. They should be willing to listen to arguments developed from First Principles, but reality impinges: many people are not capable of following a logical explanation – not nearly as many as we might suppose. I am not claiming perfidious bias (as many do); I am suggesting they are not competent.
The paper is indeed a ‘justification’ and its value is that it gives the occasional reader an alternative, well-referenced view of how some are approaching the issues, thereby giving you a chance to highlight differences. Observing this does not invalidate other, more correct methods such as, I believe, the one you are using. We should not complain when someone details the methods by which they arrived at what is a demonstrably incorrect final result. It is a bit like encouraging Weepy Bill to speak publicly as much as he can, because it lets people know what childish, uninformed, agenda-driven fanaticism looks like.
Additionally, it is important to keep siting quality as an issue separate from UHI and place the difference front and center. It is patently obvious the two are different issues and both must be resolved. Argumentative obfuscation by Hausfather does not correct methods or validate deficient conclusions. Press on. Someone has to do things properly, and if the Mennes and Hausfathers of the climate circle are not intellectually or methodologically up to the task, others must make the effort and don the Yellow Jersey. Congrats. It looks good on you.

Hi Anthony,
You are correct in asserting that this paper makes no real claims regarding station siting, but rather focuses more on meso-scale urbanization effects. To the extent that siting issues are urbanity-correlated they might influence the results, but they are not really the focus of the paper. We do mention them briefly in the introduction:
“To further complicate matters, changes associated with urbanization may have impacts that affect both the mesoscale (10²–10⁴ m) and the microscale (10⁰–10² m) signals. Small station moves (e.g., closer to nearby parking lots/buildings or to an area that favors cold air drainage) as well as local changes such as the growth of or removal of trees near the sensor may overwhelm any background UHI signal at the mesoscale [Boehm, 1998]. Notably, when stations are located in park-like settings within a city, the microclimate of the park can be isolated from the urban heat island “bubble” of surrounding built-up areas [Spronken-Smith and Oke, 1998; Peterson, 2003].”
The figure you cite from our paper does not show us cooling urban stations in the past. First of all, the baseline period used is 1961-1990, which forces agreement over that period (as we are dealing with anomalies, and comparing trends, the absolute offsets are somewhat irrelevant). Second, what is being shown is not urban stations and rural stations, but rather all stations using only rural stations to run the homogenization process, all stations using only urban stations to run the homogenization process, all stations using all stations for homogenization, and all stations with no homogenization (only TOBs adjustments).
This graph was created by our co-author Troy Masters (who blogs as Troy_CA). It’s an important part of our lengthy analysis of the possibility of urban “spreading” due to homogenization. We find that while using only urban stations to homogenize does increase the trend (suggesting that urban spreading is indeed a possible concern), the results of using all stations to homogenize are effectively identical to those of using only rural stations (which, by definition, cannot spread in any urban signal provided they are sufficiently rural, something that we try to ensure by examining multiple different urbanity proxies). Our supplementary information contains more figures examining the specific breakpoints detected and adjustments made by urban and rural stations separately during the pairwise homogenization runs.
We really tried to ensure that we were examining the homogenization process in a way that would avoid adjusting rural stations to be similar to urban stations. I hope folks will take the time to read that section of the paper in depth, as well as to look at the figures in the supplementary materials. Also, our data and code is up at the NCDC FTP site, and I’d encourage people to play around with it.
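The pairwise idea at the heart of this exchange can be illustrated with a toy sketch. To be clear, this is not the NCDC PHA code (which handles many neighbors, significance testing, and adjustment estimation); it only shows the core notion: a step change in a target-minus-neighbor difference series flags a candidate inhomogeneity at one of the two stations. All numbers below are made up for illustration.

```python
# Toy illustration of pairwise breakpoint detection (NOT the NCDC PHA):
# a step change in a target-minus-neighbor difference series suggests
# an inhomogeneity at one of the stations.
def difference_series(target, neighbor):
    return [t - n for t, n in zip(target, neighbor)]

def best_breakpoint(diff):
    """Return (index, shift) where splitting the difference series
    maximizes the change in mean between the two segments."""
    best = (None, 0.0)
    for i in range(2, len(diff) - 2):  # keep a couple of points in each segment
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(right - left) > abs(best[1]):
            best = (i, right - left)
    return best

# Neighbor is homogeneous; target picks up a +0.5 step at index 5
neighbor = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 9.9, 10.1, 10.0, 10.0]
target = [t + (0.5 if i >= 5 else 0.0) for i, t in enumerate(neighbor)]

idx, shift = best_breakpoint(difference_series(target, neighbor))
print(idx, round(shift, 2))  # detected break at index 5, shift ~0.5
```

The thread’s dispute is over what happens when the *neighbor* is itself biased: the same machinery would then “correct” a clean station toward the biased one, which is exactly the bleeding effect the paper’s rural-only and urban-only runs were designed to probe.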

@Zeke, thanks.
It would be a non issue except that some folks are seeing claims that aren’t there. Bob Ward and Scott Mandia for example. I hope that you’ll point this conflation out to them and others when unsupportable claims about siting are made in reference to this paper.
Also, you may not have seen my update in the body related to the figure before writing your comment.

I have an open question- with all of the analysis that is given to the CONUS data set, is it given more weight in the global data sets as an accurate sample of data or are the global data sets simply a spatial “average” from data around the globe?

There’s no way to get accurate conclusions from inaccurate data.
It reminds me of the lady that accidentally added salt to her tea instead of sugar then spent the rest of the morning adding different ingredients from her cupboard to negate the salt. As you can imagine, the situation just got worse and worse until she finally dumped the mess and started with a new cup of tea.
We should all go back to the beginning and do it right.

Thanks Anthony. Here is the section of our paper discussing the tests we did around the potential for homogenization “spreading” urban signal to rural stations. Note that the figures referenced are in the supplementary materials (on the NCDC FTP site): ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013-suppinfo/hausfather-etal2013-supplementary-figures.pdf
In all of the urbanity proxies and analysis methods, the differences between urban and rural station minimum temperature trends are smaller in the homogenized data than in the unhomogenized data, which suggests that homogenization can remove much and perhaps nearly all (since 1930) of the urban signal without requiring a specific UHI correction. However, the trends in rural station minimum temperatures are slightly higher in the homogenized minimum temperature data than in the TOB-only adjusted data. One possible reason for this is that the PHA is appropriately removing inhomogeneities caused by station moves or other changes to rural stations that have had a net negative impact on the CONUS average bias (e.g., many stations now classified as rural were less rural in the past because they moved from city centers to airports or wastewater treatment plants). Another possibility is that homogenization is causing nearby UHI-affected stations to “correct” some rural station series in a way that transfers some of the urban warming bias to the temperature records from rural stations. In such a case, a comparison of the homogenized data between rural and urban stations would then show a decreased difference between the two by removing the appearance of an urbanization bias without actually removing the bias itself.
To help determine the relative merits of these two explanations, the PHA was run separately allowing only rural-classified and only urban-classified Coop stations to be used as neighbors in calculating the PHA corrections for USHCN stations. In Figure 9, the spatially averaged U.S. minimum temperature anomalies for rural stations are shown for the four different data sets: the unhomogenized (TOB-adjusted only); the version 2 (all-Coop-adjusted; v2) data; the homogenized data set adjusted using only coop stations classified as rural; and the homogenized data set adjusted using only urban coop stations.
The large difference in the trends between the urban-only adjusted and the rural-only adjusted data sets suggests that when urban Coop station series are used exclusively as reference series for the USHCN, some of their urban-related biases can be transferred to USHCN station series during homogenization. However, the fact that the homogenized all-Coop-adjusted minimum temperatures are much closer to the rural-station-only adjustments than the urban-only adjustments suggests that the bleeding effect from the ISA-classified urban stations is likely small in the USHCN version 2 data set. This is presumably because there are a sufficient number of rural stations available for use as reference neighbors in the Coop network to allow for the identification and removal of UHI-related impacts on the USHCN temperature series. Furthermore, as the ISA classification shows the largest urban-rural difference in the TOB data, it is likely that greater differences between rural-station-only-adjusted and all-coop-adjusted series using stricter rural definitions result from fewer identified breakpoints because of less network coverage, and not UHI-related aliasing. Nevertheless, it is instructive to further examine the rural-only and urban-only adjustments to assess the consequences of using these two subsets of stations as neighbors in the PHA.
Figure S2 shows the cumulative impact of the adjustments using the rural-only and urban-only stations as neighbors to the USHCN. In this example, the impermeable surface extent was used to classify the stations. The cumulative impacts are shown separately for adjustments that are common between the two runs (i.e., adjustments that the PHA identified for the same stations and dates) versus those that are unique to the two separate urban-only and rural-only reference series runs. In the case of both the common and unique adjustments, the urban-only neighbor PHA run produces adjustments that are systematically larger (more positive) than the rural-only neighbor run. The magnitude of the resultant systematic bias for the adjustments common to both algorithm versions is shown in black. The reason for the systematic differences is probably that UHI trends or undetected positive step changes pervasive in the urban-only set of neighboring station series are being aliased onto the estimates of the necessary adjustments at USHCN stations. This aliasing from undetected urban biases becomes much more likely when all or most neighbors are characterized by such systematic errors.
Figure S3 provides a similar comparison of the rural-only neighbor PHA run and the all-Coop (v2) neighbor run. In this case, the adjustments that are common to both the rural-only and the all-Coop neighbor runs have cumulative impacts that are nearly identical. This is evidence that, in most cases, the Coop neighbors that surround USHCN stations are sufficiently “rural” to prevent a transfer of undetected urban bias from the neighbors to the USHCN station series during the homogenization procedure. In the case of the adjustments that are unique to the separate runs, the cumulative impacts suggest that the less dense rural-only neighbors are missing some of the negative biases that occurred during the 1930–1950 period, which highlights the disadvantage of using a less dense station network. In fact, the all-Coop neighbor v2 data set has about 30% more adjustments than the rural-only neighbor PHA run produces. Results using the other three station classification approaches are similar and are provided as Figures S3–S8.

@Zeke
I really appreciate your input and explanation of your intents. It is unfortunate that supercharged climate conversations are taking well-intentioned work and misrepresenting what it says, or intends.
However, that said, there is a pretty harsh reality bearing on this ‘temperature record’ business. Wild claims are being bandied about and while the poisoned atmosphere of (to me) crazy climate agendas continues, scientists have a beholden duty to try to present their work in full context. To me, it seems clear you are aware of and are trying to separate UHI from siting issues but I think Anthony’s point is well taken: how can you make a silk purse out of a sow’s ear? Your charts will show what you are able to from the ‘all data’ but if the quality of the station data is known to fall into standard buckets from 1 to 5, what is demonstrated by failing to exclude the low quality input?
In other words, you get an answer and you get charts, but what have we learned, even if you are only “dealing with anomalies”? Much of the raw data itself is anomalous. Whatever value your analysis has, it could be presented in a manner that disallows easy misrepresentation of the calculated results, which is, I fear, too easy at the moment.

The chart above does not show the true difference between urban and rural and especially how that difference is widening over time.
The lines in the graph are almost all one single line from 1961-1990. That is because the base period is 1961-1990, which is smack-dab in the middle of the data, so the difference over time is obscured to the maximum amount possible. The Red-Urban line is lowest at the beginning and highest at the end.
The base period should be changed to 1895-1924 and then we could actually see how much difference there is between Urban and Rural and how that has changed over time. Then all the calculations should be redone to present a true picture.
Using a 1961-1990 base period, Urban is 0.25C lower than Rural temps at the beginning of the data and 0.25C higher at the end. What is the average difference over the whole data period? Zero. All kinds of unusual statistical trends appear on their own when the data is set up this way.
I’ve made this point before but it doesn’t seem to sink in.
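The effect of the reference period described above can be checked with a few lines. The series below are purely synthetic (a made-up rural trend of +0.5C/century and urban trend of +1.0C/century, not data from the paper); the point is only that anomalies are (value minus baseline mean), so the baseline choice shifts each curve vertically without changing its trend:

```python
# Illustrative only: two synthetic linear series, urban warming faster.
years = list(range(1895, 2011))
rural = [0.005 * (y - 1895) for y in years]   # +0.5C/century (made up)
urban = [0.010 * (y - 1895) for y in years]   # +1.0C/century (made up)

def anomalies(series, years, start, end):
    """Anomalies relative to the mean over the baseline years [start, end]."""
    base = [v for v, y in zip(series, years) if start <= y <= end]
    mean = sum(base) / len(base)
    return [v - mean for v in series]

for start, end in [(1961, 1990), (1895, 1924)]:
    r = anomalies(rural, years, start, end)
    u = anomalies(urban, years, start, end)
    # urban-minus-rural gap at the start and end of the record
    print((start, end), round(u[0] - r[0], 3), round(u[-1] - r[-1], 3))
```

With the mid-record 1961-1990 baseline, the urban curve starts below the rural one (the “urban cooler in the past” artifact) and ends only modestly above it; with the early 1895-1924 baseline, the two start together and the full divergence appears at the end, as the comment argues.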

Too many assumptions in this report in regards to the homogenization justifications to my liking.
The one thing I can agree with is the notion that provided nothing else has changed around both a rural and urban station a trend is valid even if the temps are different.
The thing is as always nothing stays the same.
A rural station is still compromised if it now sits above a concrete pad (to avoid getting your shoes wet when checking) rather than grass. It is still classified as rural, though, just with urban features.
A station, according to this, is classed as urban if it sits within the limits of a centre of 1000 people or more. This is biased from the start; just because it is classified one way or the other does not make it so. A rural station under these criteria can have more UHI bias, if it is poorly sited, than an urban one in the middle of a large park.
What one could conclude from this is that, once again, we cannot homogenize any data, and that to really be true to ourselves in regards to absolute temps we should only look at temp data from stations where we know that nothing has changed that could influence the reading since the original siting (not many of them around anymore).
From what I have seen of the readings from such stations, the “trend” is vastly different from the homogenized line.
But this would now be classified as localized data with no relevance to the global trend.
However if they are true to the above statement that a trend is a trend no matter where the station is sited that should not be an issue.
It is the trend we are looking for not the absolutes.

‘I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data. Our approach in Watts et al is to locate the best stations, with the least bias and the fewest interruptions and use those as a metric (not unlike what NCDC did with the Climate Reference Network, designed specifically to sidestep the siting bias with clean state of the art stations). As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”’
I was going to write an extended comment on this but Crispin in Waterloo has done it.
No discussion of statistical methods can explain why some data points are devalued in the course of statistical analysis. The very fact that devaluing takes place renders the statistical work non-empirical. Mainstream climate science, apparently following the lead of Mann and the paleo crowd, has failed to understand that they must undertake the necessary empirical work to give a robust integrity to each of their data points. There is not one among these scientists who has instincts for the empirical.

“homogenization takes the well sited stations and adjusts them to be closer to the poorly sited stations, essentially eliminating good data by mixing it with bad.” This reminds me of my old Chemistry professor’s explanation of ‘entropy’: “If you mix a tablespoon of fine wine with a liter of sewage, you get sewage; if you mix a tablespoon of sewage with a liter of fine wine, you get sewage.”

outheback says:
“However if they are true to the above statement that a trend is a trend no matter where the station is sited that should not be an issue. It is the trend we are looking for not the absolutes.”
Such a trend on what is effectively urbanization is good for what exactly???
To evaluate the effect of CO2 upon global temperatures beyond the baseline natural warming since the Little Ice Age, we need absolutely clean data, i.e., numbers from long term pristine sites. If we only have 50 such land sites worldwide, so be it. That’s all we have to work with. Add that to the validated ocean measurements.

This sounds eerily reminiscent of the economic geniuses at Citi & Goldman Sachs talking about the safety of derivatives of mortgage-backed securities. The logic was similar: you package good and bad mortgages together and you end up with AA-rated securities, because you assume that only a small percentage will go bad. The problem is that when you mix bad mortgages with good (or bad data with good) you end up with bad derivatives, not good ones, because you can’t separate the bad from the good and you end up not trusting any of the derivatives.
I know I could have worded this better…

Bill Illis’ argument surely has to be correct, and certainly makes sense to a non-scientist like me.
You surely cannot use a part of the timescale you are studying as the baseline for comparison. In my world that’s usually called cheating.

“If you torture the data long enough, it will confess.”
Somehow in the field of climate “science” the concept of altering temp data based on a supposed understanding of bias in 100-year-old data has become acceptable. This is not acceptable in any other field that I have seen. NASA and NOAA apparently do it every day.
It has never made any sense. The raw data are what they are, whether from a liquid-in-glass thermometer, pulled up from a bucket of ocean water, the water intake of a sea-going ship, a thermocouple with a short cable, whatever. UHI is real, everyone knows this. Badly sited thermometers give inaccurate readings, everyone knows this. Cold air flows downhill, so if the thermometer is in the lowest point of land in a particular area, it will read cold at night, everyone who has taken a walk after sundown knows this.
Technically educated people will always be skeptical of adjustments to data. It is a lie to say that a particular day, week, month, year, or decade was warmer or cooler than another if the raw data has been adjusted. The raw data was taken for specific purposes and it was good enough for those who took it, deal with it.

No discussion of statistical methods can explain why some data points are devalued in the course of statistical analysis.

Depends on what you mean by “devalued”. If you mean “not treated equally”, there could be a good statistical reason for that. But if the equipment used to measure all data is statistically similar in behavior, no devaluation is justifiable.

Zeke Hausfather;
First of all, the baseline period used is 1961-1990, which forces agreement over that period (as we are dealing with anomalies, and comparing trends, the absolute offsets are somewhat irrelevant).
>>>>>>>>>>>>>
I repeat my question to you from another thread, which you have failed to answer. What is the justification for averaging anomalies from completely different baseline temperatures when these represent completely different flux levels? For example, an anomaly of 1 from a base temperature of -30 represents a change in flux of 2.89 W/m2. How do you justify averaging it with an anomaly of 1 from a baseline of +30, which would represent a change in flux of 6.34 W/m2?
The standings on this question so far are:
Zeke Hausfather: No Response
Steven Mosher: No Response
Joel Shore: No single metric is right or wrong
Robert G Brown: There is no justification
Richard S Courtney: There is no justification
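For what it’s worth, the arithmetic behind the question can be reproduced from the Stefan-Boltzmann law, F = σT⁴, treating the surface as an ideal blackbody (emissivity 1, an idealization). On that assumption the +30C figure checks out (≈6.35 W/m², matching the quoted 6.34 to rounding), while a +1 anomaly at −30C works out nearer 3.28 W/m²; the quoted 2.89 corresponds to a baseline of about −40C. The underlying point stands either way: the same 1-degree anomaly represents a very different flux change at different baseline temperatures.

```python
# Radiative flux change for a +1C anomaly at different baseline
# temperatures, via the Stefan-Boltzmann law F = sigma * T^4.
# Blackbody (emissivity = 1) idealization, purely to show the scaling.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change(base_celsius, anomaly=1.0):
    t = base_celsius + 273.15
    return SIGMA * ((t + anomaly) ** 4 - t ** 4)

print(round(flux_change(-30), 2))  # 3.28 W/m^2
print(round(flux_change(30), 2))   # 6.35 W/m^2
print(round(flux_change(-40), 2))  # 2.89 W/m^2
```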

We are finishing up the work to deal with TOBs criticisms related to our draft and I’m confident that we have an even stronger paper now on siting issues.
Anthony, are you hinting that your 2012 paper is being held up because of TOBs criticisms? Or is it that you have a follow-up paper that will combine Watts-2012 draft with additional stations where TOBs needs to be addressed?
In the video of Watts-etal-2012 during your Gore-a-thon-2012, I thought your solution of using only stations that had no need of TOB adjustment was not only a very proper treatment of the data, but also an essential one. The reduction in the number of stations available will increase the uncertainty in the trends, but we must see the analysis where TOBS adjustments are unnecessary before we apply a confounding TOBS adjustment with its necessary increase in uncertainty due to method.
BTW, in the Gore-a-thon-2012 category, I don’t see the last hour covering Watts 2012 listed. Do you have the video on WUWT?
Thanks for it all.

Hi Anthony
Keep on drumming, eventually they will have to march to the beat of science based research not dogma.
I have done some work on Australia’s temperature record keeping and plotted RAW temperature series for a number of stations… I then located the Acorn-Sat data and have been plotting it alongside the RAW… not only do they not take into account UHIE at city-sited stations (e.g. Observatory Hill, Sydney), their data bears no resemblance to the RAW data, particularly in the earlier years.
In addition, in the Richmond NSW series between 1958 and 2012, the Acorn-Sat data has 173 missing entries as compared to their own RAW data. The RAW data in the CSV I downloaded from the BOM site are all marked ‘Y’ for quality, yet they have been removed from the Acorn-Sat data?
The result of my plotting these together comes up with the following obvious adjustments… below are the recorded minimum temperatures for the first 10 years of each series.
Year  Acorn-SAT  RAW
1958    -4.20    -1.70
1959    -5.00    -2.80
1960    -5.00    -2.20
1961    -5.00    -2.20
1962    -4.00    -1.80
1963    -2.90    -0.60
1964    -3.90    -1.70
1965    -3.50    -1.30
1966    -5.30    -2.80
1967    -2.60    -0.60
It should be noted that in 1994 a new station came on line and it is from that point back to 1958 that the minimums have been adjusted… however, the maximums have also been adjusted, but to a far lesser extent.
Adding up the differences over the 1958-2012 period, we see the minimum aggregate changes totalling -45.7°C while the maximums aggregate to -2.3°C over the same period… it should further be noted that 26.2°C of the adjustments to the minimum temperatures were in the first 11 years.
The result of the obvious tinkering is that the average annual temperatures were adjusted by a total of -19.22°C over this same period, with -7.9°C in the first 11 of the 55 years.
It goes without saying that the charted SAT data starts off at a much lower point than the RAW data for both minimum and average temperatures, while the maximum, although still different, is relatively similar in comparison.
(-;]D~
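The year-by-year SAT-minus-RAW offsets in the listing above can be tallied in a few lines. The ten values are transcribed from the comment itself; anything beyond these ten years would need the full BOM/Acorn-Sat series, so this is only a check on the quoted figures:

```python
# Yearly minimum temperatures (degC) transcribed from the comment above:
# Acorn-Sat (adjusted) vs RAW, Richmond NSW, 1958-1967.
sat = [-4.2, -5.0, -5.0, -5.0, -4.0, -2.9, -3.9, -3.5, -5.3, -2.6]
raw = [-1.7, -2.8, -2.2, -2.2, -1.8, -0.6, -1.7, -1.3, -2.8, -0.6]

diffs = [round(s - r, 1) for s, r in zip(sat, raw)]
print(diffs)                 # per-year adjustment: each year lowered by 2.0-2.8C
print(round(sum(diffs), 1))  # net adjustment over these ten years: -23.7
```

That -23.7C tally over ten years is in the same ballpark as the commenter’s -26.2C over the first eleven, so the transcription appears consistent with the claimed aggregate.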

Slartibartfast says:
February 13, 2013 at 9:57 am
“But if the equipment used to measure all data is statistically similar in behavior, no devaluation is justifiable.”
There are at least two parts to every measurement: the object measured and the measuring instrument. My comment is about the former not the latter. Mainstream climate science refuses to do the empirical work to determine that the objects measured are actually comparable. Anthony gives them an empirical five-fold classification of the objects. They simply cannot bring themselves to deal with Anthony’s empirical work on the objects measured. They are not engaged in science.

Ed_B says: “To evaluate the effect of CO2 upon global temperatures beyond the baseline natural warming since the little ice age, we need absolutely clean data, ie, numbers which are from a long-term pristine site. If we only have 50 such land sites worldwide, so be it. That’s all we have to work with. Add that to the validated ocean measurements.”
Sorry, Ed, perfect data won’t prove causation, no matter what the correlation. Trying to tease a trend out of extremely noisy data and then saying it proves AGW exists is a fool’s errand.

The chart shows that the study used USHCN version 2, which is a “value-added” data set.
The v2 revision of the original USHCN raw data in 2009 had the effect of increasing the slope of the curve, moving it closer to the GISS homogenized data and producing artificial warming.
The paper merely studies whether the rural stations were as altered as the urban ones.

jorgekafkazar says:
February 13, 2013 at 10:50 am
“Sorry, Ed, perfect data won’t prove causation, no matter what the correlation. Trying to tease a trend out of extremely noisy data and then saying it proves AGW exists is a fool’s errand”
But a zilch trend at pristine sites, above the long-term rise and cyclical ups and downs since the Little Ice Age, leaves a big problem for those wanting to demonize CO2, does it not? I doubt CO2 has any effect at all that we can measure with 2-sigma statistical significance, let alone 3-sigma. Would you alter the world’s economy on 2 sigma? Not me; I would want better proof than that.

“In all of the urbanity proxies and analysis methods, the differences between urban and rural station minimum temperature trends are smaller in the homogenized data than in the unhomogenized data, which suggests that homogenization can remove much and perhaps nearly all (since 1930) of the urban signal without requiring a specific UHI correction.”
Of course the differences are smaller. It’s like pouring some white paint into the black paint, pouring some black paint into the white paint, and then acting as though it is a revelation that the differences between the black and the white paint are smaller after the “homogenization”. When will you give up the idea that homogenization is any kind of solution for UHI? Are you so enamored of letting the computer do it for you that you can’t see it’s an absurd idea right from the start?
Simply UHI-correct the urban and small-town stations based on pristine rural sites with the fewest changes in location, urbanization, and time of observation.
Look Zeke, the objective is to determine what is happening to the global climate. If you took the simple mean of the uncorrected records of the world’s 100 most pristine stations – well distributed – you would have a far more trustworthy idea of that objective than all of the nonsense that you are currently doing. But then, probably nobody would fund you or publish you for doing that, right?
I think we’ve been having the same “homogenization” discussion for at least 4 years. Give it up. It’s a bad idea – almost as bad as extrapolating arctic and antarctic shore station data across a thousand kilometers of solid ice.

Maybe Zeke could tweet to Ward and Mandia that they are misusing or misreading his results, just as they misuse or misread everything else? Then we could discuss Zeke’s paper without the distraction of their offstage caterwauling chorus…

Mike McMillan,
We examine fully homogenized USHCN data (using both v2 and the new v2.5 methods), data that has been homogenized using only rural stations (to avoid potentially aliasing in any urban signal), TOB-only adjusted data, and raw data. Figures 1 and 2 show the resulting trends in urban-rural differences for each series and each urbanity proxy (nightlights, GRUMP, ISA, and population growth) using both station-pairing and spatial-gridding approaches.

‘Thanks Anthony. Here is the section of our paper discussing the tests we did around the potential for homogenization “spreading” urban signal to rural stations.’
And just so you know, Anthony: as Zeke was working on this paper, your claim of “spreading” the urban bias through homogenization was at the front of his mind. We talked about it a number of times. That is why they looked at homogenizing with rural-only stations: to put that issue to bed. Their approach does that.
So, your criticism was noted. A procedure to ensure that “spreading” didn’t happen was used, and folks can see the result.
UHI is real, and by using rural stations to detect and correct it you can answer the “spreading” issue. That looks like an advance.

‘Look Zeke, the objective is to determine what is happening to the global climate. If you took the simple mean of the uncorrected records of the world’s 100 most pristine stations – well distributed – you would have a far more trustworthy idea of that objective than all of the nonsense that you are currently doing. But then, probably nobody would fund you or publish you for doing that, right?’
###
Did that. The answer is the same.

david
‘I repeat my question to you from another thread, which you have failed to answer. What is the justification for averaging anomalies from completely different baseline temperatures when these represent completely different flux levels? For example, an anomaly of 1 from a base temperature of -30 represents a change in flux of 2.89 w/m2. How do you justify averaging it with an anomaly of 1 from a baseline of +30, which would represent a change in flux of 6.34 w/m2?’
Actually, in Berkeley Earth we don’t use anomalies, so your question doesn’t apply.
We work in absolute temperature; no anomalies are averaged.
Thank you for playing.

steven mosher;
Actually, in Berkeley Earth we don’t use anomalies, so your question doesn’t apply.
We work in absolute temperature; no anomalies are averaged.
Thank you for playing.
>>>>>>>>>>>>>
1. My question may not apply to Berkeley Earth, but it most certainly applies to other temperature trends. Either you can justify their use of this or you can’t. Not to mention that my original question to you was posed on a thread in which you explained the value of anomalies. So the truth is that while I am playing, you are just avoiding the question.
2. Am I to understand that you are averaging temperature from completely different temperature regimes? If so, how do you justify averaging a temperature of +30 which would represent 477.9 w/m2 with one of -30 which would represent 167.1 w/m2? Are you of the opinion that averaging such disparate temperature ranges has any value in understanding the earth’s at surface energy balance?

As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”
Rutherford was known for his off-the-cuff disrespectful comments. That remark is actually pretty stupid: the results of statistical analyses are used to design the next experiments. For one nice example, consider the Green Revolution.

I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data.
The issue you are trying to address is this: aggregating statistically improves the estimate of the most important effects (here, changes across time) ** if ** the variation (across types of sites) is independent of the effects of interest. It is the obligation of the people using the statistical methods to show that such variation is in fact independent of the effects of interest. If you can’t do that, then you have to do what Watts et al (in preparation) have done, which is analyze the types of sites as different strata (at minimum estimating site-by-time interactions). This is discussed in nearly all textbooks that cover experiment design, hierarchical modeling, meta-analysis, multiple linear regression, repeated-measures analyses (including time series such as growth curves), and most of the other commonly used techniques.
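The stratified point can be illustrated with a toy calculation. Everything below is synthetic – the siting classes, the trends, and the station values are invented – but it shows how pooling strata with different trends yields a blended slope that per-stratum fits would expose:

```python
# Minimal sketch of the stratified idea: estimate a temperature trend
# separately within each siting class instead of pooling all stations.

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

decades = [0, 1, 2, 3, 4, 5]
well_sited   = [10.0 + 0.10 * d for d in decades]   # invented Class 1/2 series
poorly_sited = [10.0 + 0.30 * d for d in decades]   # invented Class 4/5 series

# Pooling hides the site-by-time interaction; stratifying exposes it.
pooled = slope(decades * 2, well_sited + poorly_sited)
print("well-sited trend:  ", round(slope(decades, well_sited), 2))   # 0.1
print("poorly sited trend:", round(slope(decades, poorly_sited), 2)) # 0.3
print("pooled trend:      ", round(pooled, 2))                       # 0.2
```

The pooled estimate of 0.2 deg/decade describes neither stratum; the stratified fits recover the two distinct trends, which is the site-by-time interaction the textbooks ask you to test for.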

david
“1. My question may not apply to Berkely Earth, but it most certainly applies to other temperature trends. Either you can justify their use of this or you can’t. Not to mention that my original question to you was posed on a thread in which you explained the value of anomalies. So, the truth is that while I am playing, you are just avoiding answering the question.
1. Your question DOES NOT apply. You should unfool yourself.
2. Anomalies do have a value; that is different from saying they are perfect.
3. They are used in other temperature series. The effect is to change the variance
during the overlap period. We’ve published on that aspect.
4. If the process of taking anomalies changed means or trends and you can show that, then a Nobel prize awaits you.
5. I don’t have time to answer every question. So write up your Nobel prize winner and do like Zeke did. Do like McIntyre. Do like Anthony. Publish.

Matthew R Marler says:
February 13, 2013 at 12:54 pm
As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”
You should understand, however, that one of Rutherford’s most important papers, based on the 1909 Geiger-Marsden experiment with alpha particles shot at a very thin gold foil held in a high vacuum, showed that (1) EXPERIMENTAL results override EVERY “theory” that even the most advanced atomic scientist of the time (J. J. Thomson) held,
and (2) EXPERIMENTAL results are NOT subject to “statistical treatment”.
If the experimental results were held to CAGW “standards”, he (Rutherford) should have thrown out every reflection of an alpha particle found at 90 degrees or more. By consensus, by conventional theory, by EVERY “scientific body”, by the conclusion of the top scientists in the world – despite Leif’s demands that solar energy discussion must begin with theory, not results – there could be NO reflection of alpha particles at the high angles infrequently observed, and thus the results had to be discarded “by statistics” …
Our modern atomic nuclear theory shows who was correct.

Doug Proctor says:
February 13, 2013 at 11:35 am
The Mauna Loa site is considered to represent the world vis-a-vis CO2 measurements. This paper looks to see if the Mauna Loa site could also represent the world vis-a-vis temperature readings.
What is VERY interesting is that the data from 1979 – 2009 show
1) at noontime a DROP in temperatures of -1.4C/100 years, while
2) at midnight a RISE in temperatures of +3.9C/100 years.
The average, then, is +1.25C/100 years. But this is an artefact of mathematics!
This might be a good example of David Hoffer’s argument about actual energy. The change at midnight clearly represents a smaller energy change per degree than the change at noon. What were the actual temperatures?

one of -30 which would represent 167.1 w/m2?
>>>>>>>>>>>>>
Well, that would be the w/m2 for -40. You really should check your math before pressing Post Comment, Mr. Hoffer.
(Figured I may as well call myself out before anyone else got to it)
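The blackbody arithmetic behind this exchange can be checked directly. This is a sketch of the idealized Stefan-Boltzmann calculation the flux figures appear to assume (a perfect blackbody, no emissivity factor or atmosphere), and it confirms the self-correction: the 167.1 w/m2 figure corresponds to -40, not -30.

```python
# Emitted flux sigma*T^4 at a few surface temperatures, and the flux
# change for a 1-degree anomaly at different baselines.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux(t_celsius):
    return SIGMA * (t_celsius + 273.15) ** 4

for t in (30, -30, -40):
    print(f"{t:+d} C -> {flux(t):6.1f} W/m^2")   # ~479, ~198, ~168

# The same 1-degree anomaly carries a different flux change at each baseline:
for t in (30, -30):
    print(f"dF for +1 C at {t:+d} C: {flux(t + 1) - flux(t):.2f} W/m^2")
```

A one-degree anomaly at a +30 baseline is worth about 6.35 W/m^2 but only about 3.28 W/m^2 at -30 (the 2.89 figure quoted earlier likewise corresponds to a -40 baseline), which is exactly the disparity the question is about.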

davidmhoffer wrote:
“Am I to understand that you are averaging temperature from completely different temperature regimes?”
Here’s a graph that demonstrates the need for regime definitions.
It shows the different slopes/trends of measurements taken at different times of day: http://www.boels069.nl/Climate/SlopePerHourDeBiltNL.pdf
The same is true when using daily Tmax, Tmin and T(max-min).

david:
‘2. Am I to understand that you are averaging temperature from completely different temperature regimes? If so, how do you justify averaging a temperature of +30 which would represent 477.9 w/m2 with one of -30 which would represent 167.1 w/m2? Are you of the opinion that averaging such disparate temperature ranges has any value in understanding the earth’s at surface energy balance?”
You need to read the papers. First, we are not at all interested in the energy balance. The method estimates the temperatures at unobserved locations. It does that by using information at observed locations. That says nothing about energy balance, and no claims about energy balance are made. The test of that procedure is simple:
A) Hold out a sample of stations.
B) estimate the temperature at all locations, using a sample of locations
C) compare your prediction with your hold out sample.
And, well, it works. Go figure.
The method does what it was designed to do: estimate temperature at unobserved locations using observed locations. The concept of ‘average’ temperature is somewhat confused, for the reasons you state. That is why I wouldn’t characterize ANY temperature series as an ‘average’ temperature. It is an index. It’s non-physical. It tells you nothing about energy balance and was never intended to. Nevertheless, the concept of ‘average’ temperature has a meaning.
When we say the LIA was colder, we are referring to something.
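The hold-out test described in steps A-C can be sketched with synthetic data. To be clear, this is not Berkeley Earth’s actual method (which is a kriging-style interpolation); the inverse-distance weighting, the smooth synthetic field, and the station counts below are all invented for illustration of the validation idea only.

```python
import random

random.seed(0)

# Synthetic "climate": a smooth gradient over a unit square.
def true_field(x, y):
    return 10 + 5 * x + 3 * y

stations = [(random.random(), random.random()) for _ in range(50)]
temps = [true_field(x, y) for x, y in stations]

# A) hold out a sample of stations
held_out, kept = stations[:10], stations[10:]
kept_temps = temps[10:]

# B) estimate temperature at unobserved points from the kept stations
def predict(px, py):
    """Inverse-distance-weighted estimate at an unobserved point."""
    num = den = 0.0
    for (x, y), t in zip(kept, kept_temps):
        w = 1.0 / ((px - x) ** 2 + (py - y) ** 2 + 1e-6)
        num += w * t
        den += w
    return num / den

# C) compare predictions against the held-out truth
errors = [abs(predict(x, y) - true_field(x, y)) for x, y in held_out]
print("mean hold-out error:", round(sum(errors) / len(errors), 3), "C")
```

The skill of any such interpolation is judged exactly this way: small hold-out error means the method predicts unobserved locations well, with no claim about energy balance implied.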

steven mosher;
4. If the process of taking anomalies changed means or trends and you can show that, then a nobel prize awaits you.
>>>>>>>>>>>>>
Haven’t got a clue whether it does or not. But if the purpose of tracking the data is to determine whether there is an energy imbalance at the earth’s surface, why wouldn’t you average and trend w/m2 at the earth’s surface? Demonstrating that the trend would be different would by no means earn me a Nobel prize, and I think you know that. Red herring. You have the raw data and access to the compute horsepower to do it. As does Zeke. As do many others. But you insist instead on using a proxy, and one that you ADMIT is imperfect. One that can be demonstrated, with artificial data, to produce a negative energy-balance trend for a positive temperature trend. Which I have done; no one has produced an error in my math, yet no Nobel prize nomination has arrived.
What should I put you down for in the list? Something like Joel Shore, it is an imperfect metric but let’s use it anyway?

Steven Mosher says:
February 13, 2013 at 1:12 pm
5. I don’t have time to answer every question. So write up your Nobel prize winner and do like Zeke did. Do like McIntyre. Do like Anthony. Publish.
Translation: Mosher doesn’t know how to answer the question and tries to obfuscate as a cover.
BTW, is there anything in this paper that attempts to factor in what Pielke and others have found about vertical mixing caused by man-made structures? Even a rural station could have structures well away from the actual thermometer that cause this mixing but still let the station qualify as rural. If this is ignored, then the paper is open to valid criticism.

@Doug Proctor
I picked my own quarrel with a different lot at Berkeley who are using the results of triplicate tests to get an ‘average’. Later, the average of the results of a different set of triplicates is generated. The two averages are then compared. They concluded that the test method is precise to the extent of the difference between the two sets of averages.
I cannot for the life of me see any difference between that (which is completely unscientific) and what is being done with these homogenisation and averaging routines. What is this paper about? It is a test of processing methods, is it not? Am I reading this correctly? It is a series of comparisons that tests whether one or another processing method is ‘valid’. The data set emerging from the process they are examining is fed into another averaging process. I found the several methods described in detail during the hullabaloo about the BEST pre-print nothing less than extraordinary.
The comment above about generating your own homogenised baseline and then using a different process to generate a comparison runs an interesting risk: at what point are you comparing artifices of the methods, and at what point are you comparing trends in the data? Zeke’s first comment is quite on the mark when he says, basically, ‘this is what we did and this is what we found when we did it’. Well, OK, that is a valid statement. It is what he did. But the question hangs large over the exercise: what has been shown by this effort?
There are three processes involved: the original process, the process used to test that process, and the analysis of what the results of the second process mean for the results of the first. If an artificially modified version of the raw data were fed into both processes 1 and 2, would analysis 3 be able to tell what kind of modification was made to the raw data? I am borrowing a page from S. McIntyre here. Basically the paper claims the answer is yes.
@davidmhoffer
You may enjoy this: I am reminded of yet another Berkeley group who have been averaging results too. They have constructed metrics which are ‘inverted’ and then produce a simple average. [An example is miles per gallon versus litres per 100 km – the latter, volume per distance, is the inverse metric of the former.] Inverted metrics should be averaged using a harmonic mean, not an arithmetic mean. The effect of using the incorrect averaging method is to bias the results always in one direction. One way this can appear in temperature averages is through using anomalies instead of absolute numbers, because inverting or re-expressing them and then averaging sometimes gives the wrong answer. It is vaguely like your energy and temperature example. When you change the denominator, the averaging method must also be changed appropriately.
Question for readers to come to grips with this:
Example 1
Rural stations increase in temperature at 0.1 deg per decade
Peri-urban stations increase in temperature at 0.2 deg per decade
Urban stations increase in temperature at 0.3 deg per decade
What is the average temperature rise of these three sets of stations, per decade? (assume equal weighting)
Example 2, derived from Example 1 by inverting the data
Scenario 1 is 10 decades per degree of temperature rise
Scenario 2 is 5 decades per degree of rise
Scenario 3 is 3.333 decades per degree of rise.
What is the average number of decades per degree of temperature rise? Invert your answer. Does it agree with the answer to Example 1?
Imagine you were trying to forecast how long it will take for the temperature to rise 2 degrees, or to double. Methinks there is madness in some methods.
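Working the two examples above confirms the point: averaging the inverted rates arithmetically gives the wrong answer, while the harmonic mean agrees with Example 1.

```python
# Arithmetic vs harmonic mean for rates and their inverses.
from statistics import mean, harmonic_mean

rates = [0.1, 0.2, 0.3]            # deg per decade (Example 1)
inverted = [1 / r for r in rates]  # decades per degree (Example 2)

print(mean(rates))                 # 0.2 deg/decade
print(1 / mean(inverted))          # arithmetic mean of inverses: ~0.164, wrong
print(1 / harmonic_mean(inverted)) # harmonic mean: 0.2, agrees with Example 1
```

The arithmetic mean of the inverted rates (6.11 decades per degree) inverts back to about 0.164 deg/decade, not 0.2, so a forecast of the time to reach 2 degrees built on the wrong mean would be biased in one direction, which is exactly the hazard described above.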

Anthony, I hope your paper gets published and gets lots of attention. The quality of surface station data (and the subsequent “homogenizations” and “adjustments”) is one of the sloppiest aspects of current Climate “Science.” Of course, the adjustments to ocean temperatures and troposphere/stratosphere temperatures are just as suspect. And the effects of soot on arctic temperatures should be seriously revisited.
If half the “scientists” studying this would spend time on data quality and the physics of radiative absorption, we’d get somewhere, and the alarmists would go away with tails between their legs.

Zeke, something I’ve wondered about (relating to homogenization processing): have you compared a locally calculated value, computed while excluding a known good station, against that station’s actual measurements, to see how your calculations compare?

“Notably, when stations are located in park-like settings within a city, the microclimate of the park can be isolated from the urban heat island “bubble” of surrounding built-up areas [Spronken-Smith and Oke, 1998; Peterson, 2003].”
Urban temperature measurements from parks are problematic in warmer and drier climates because parks are normally irrigated. Here in Perth the official site was moved in 1992 from opposite an irrigated park to an un-irrigated location and night time temperatures immediately rose 1.5C. The park was irrigated at night.
Urban-rural comparisons are moot because the implicit assumption that rural locations don’t have local anthropogenic influences is wrong. And the rural temperature measurement problem is compounded by the fact that temperature measurement is often done at agricultural research stations.
What we should be comparing is urban vs rural vs pristine locations, and as said above if we only have 50 pristine locations, then so be it. Although, I’d expect at least a few hundred.

Worth mentioning Ed Long’s work from a few years back which suggests that UHI correction is the wrong way around, especially for quality rural sites.
Also Roy Spencer’s work, which empirically shows the ‘urban’ heat island effect can kick in very significantly with as few as 20 people/km^2.

To me the elephant in the room is the reason why the surface temperature record diverges from the satellite record.
Anthony attempts to deal with the elephant by offering the explanation that degradation over time of station siting, creeping UHI, and similar slow but steady degradation of the temperature network have caused the land-based record to record warmer temperatures. There is experimental support for this. Poor station siting can clearly cause an instrument to record a spuriously high temperature. His preferred solution is to examine in detail the nature of each measurement site and to eliminate suspect data.
Others seem to see any suggested problem with the data as an invitation to adjust it (and hence produce a paper without leaving the office). Yet any adjustment method will result in the temperature record being contaminated with the biases and assumptions of those choosing the method of adjustment. When we look at the output we see serious unexplained anomalies. Adjustments supposedly made to eliminate UHI somehow result in still greater warming. They result in temperatures from pristine stations being changed, usually in the direction of showing much greater warming, with absolutely no attempt made to provide a physical justification for this. If the station is pristine why are you tampering with its data? And no – you cannot point somewhere into the mathematical complexities of your data mangling machine and say the reason is hidden in the mechanism. If your data mangling machine wants to mangle pristine data then your machine is broken because the data cannot be.
This approach seems so wrong and generates such strange results that many of us have lost all trust in the people doing this. And one of the main custodians of the US data record is being arrested in front of the White House right now, which also does not inspire confidence in the impartiality and scientific detachment of those doing the adjustment. In any case, all discussion about mangling methods to produce an even more dramatic record of steadily rising temperatures completely ignores the elephant in the room. At the end of it all you still must explain why the satellite record shows a much smaller rise.
While I have only skimmed this latest paper, it seems to me that all it does is show that the results of the various data mangling machines are insensitive to certain choices in the data massaging method chosen. This might be of interest to people who want to build data mangling machines. It is of little interest to those of us who are deeply suspicious of them. It offers no explanation of the paradoxes generated by these machines. It doesn’t explain why the massaging methods which are supposed to eliminate UHI make the temperature rise greater. It does not explain what in these machines is broken that leads them to mangle pristine data to show greater warming. And once again it ignores the elephant in the room.
At least Anthony has tried to talk to the elephant.

Bruce of Newcastle says:
February 13, 2013 at 2:55 pm
Also Roy Spencer’s work, which empirically shows the ‘urban’ heat island effect can kick in very significantly with as few as 20 people/km^2.
20 to 50 people/km2 is arable land in most parts of the world. Yesterday, I called this Rural Heat Island. The Spencer paper is a must read.

Zeke Hausfather says:
February 13, 2013 at 9:37 am
Bill Illis,
That graph does not show urban and rural temperatures. You want Figs. 3-6 in our paper for a good example of that.
—————-
Oh yeah, Figs 3-6 are clear.
Can you explain what exactly the caption “Time of obs min urban-rural differences 1895-2010” means?
And why does it show 0.4C of change in the difference between urban and rural from 1920 to 2000?
Why does the abstract describe this situation as “urbanization accounts for 14% to 21% of the rise in unadjusted minimum temperatures since 1895”?
And what exactly does that mean?
And when I say “exactly”, I mean something that describes the situation in degrees C, to a number. Like: 0.5C of the increase of 0.8C is caused by urbanization.
Now that would be a paper that is helpful to everyone.

Ian H. writes:
“When we look at the output we see serious unexplained anomalies. Adjustments supposedly made to eliminate UHI somehow result in still greater warming. They result in temperatures from pristine stations being changed, usually in the direction of showing much greater warming, with absolutely no attempt made to provide a physical justification for this. If the station is pristine why are you tampering with its data? And no – you cannot point somewhere into the mathematical complexities of your data mangling machine and say the reason is hidden in the mechanism. If your data mangling machine wants to mangle pristine data then your machine is broken because the data cannot be.”
Amen. And Amen to the entire post. The answer is that their concept of “pristine data” is a statistical concept that cannot be explained except by reference to their statistical efforts of the moment.
That raises the Big Question and the Big Picture. Why do they engage in statistical exercises that make reference to “pristine data” or to Anthony’s five-fold classification of measurement sites? Are they hoping that the reader will confuse the empirical concept of pristine data with their statistical concept of pristine data? No such confusion will occur at this site.

Steven Mosher says:
February 13, 2013 at 12:03 pm
“‘Look Zeke, the objective is to determine what is happening to the global climate. If you took the simple mean of the uncorrected records of the world’s 100 most pristine stations – well distributed – you would have a far more trustworthy idea of that objective than all of the nonsense that you are currently doing. But then, probably nobody would fund you or publish you for doing that, right?’
###
did that. the answer is the same.”
I think something is being missed here. This is a great idea. If you chose only the 100 most pristine sites in the world and kept track of their raw data over time, even though you are unlikely to get a good average of global temperature (if that is what is being attempted with all the adjustments), you would have a handle on a clean, useful trend. If CAGW is significant and long-term, it should show an incontrovertible signal, free of criticism that the data have been incorrectly manipulated. Let’s face it: if we are going into some serious long-term heating, there is no need for controversial homogenization corrections. To take an extreme example: if the sea is going to rise 20 metres, there is no need to make 0.3 mm annual adjustments for whatever reason. 19.97 metres is close enough!
I would be very interested in a proper critique of this idea.

Bill Illis,
I am assuming your first questions are about the bottom panel in Figure 9? If so, hopefully this helps:

“Can you explain what exactly the caption ‘Time of obs min urban-rural differences 1895-2010’ means?
And why does it show 0.4C of change in the difference between urban and rural from 1920 to 2000?”

The three lines on that chart show the difference between the grid-averaged minimum U.S. temperature using all CONUS stations for homogenization (USHCNv2) and the grid-averaged minimum U.S. temperature using the following three sets:
1) No homogenization (TOB only)
2) Station data homogenized using ONLY rural CONUS stations (rural neighbor)
3) Station data homogenized using ONLY urban CONUS stations (urban neighbor)
Obviously, the urban-only adjusted series shows contamination of the stations by urban neighbors; I think this is the 0.4 K change you are referring to. This is the very reason I was interested in the analysis in the first place: to see which urban stations might have contaminated rural stations during homogenization. However, the urban-adjusted-only dataset is obviously very different from the actual NCDC USHCNv2 dataset, as indicated by the large trend in that figure, which should be a good sign that the main (all-neighbor) dataset is not similarly contaminated.
If you look at that green line (v2.0 All Coop Neigh minus Rural Neigh), you see it does NOT have a substantial trend, suggesting that adjusting using rural-only neighbors produces similar results to homogenization using ALL neighbors – again indicating that we are not getting the “urban bleeding” when using all stations (as in USHCNv2) that many (myself included) were initially concerned about.
For more on this particular topic, I discuss it on my blog: http://troyca.wordpress.com/2013/02/13/our-paper-on-uhi-in-ushcn-is-now-published/

Thanks Carrick, I see you included the link while I was mid-post.
Bill Illis, to answer your follow-on comment:

Why does the abstract describe this situation as “urbanization accounts for 14% to 21% of the rise in unadjusted minimum temperatures since 1895”?
And what exactly does that mean?
And when I say “exactly”, I mean something that describes the situation in degrees C, to a number. Like: 0.5C of the increase of 0.8C is caused by urbanization.
Now that would be a paper that is helpful to everyone.

From table SI.1, you can see that the trend in T_Min from 1895-2010 in the “unadjusted” (TOB) all-station series is 0.074 C/decade. When using only rural stations, the trend in T_Min over the same period (again for TOB) is 0.060 to 0.064, depending on the urbanity proxy used. The difference is thus 14% to 21% of the all-station series. I would thus say that UHI contributes ~0.12C to ~0.16C of the ~0.85C rise in minimum U.S. temperatures in the TOB-only dataset.
Obviously, the conclusion of the paper is that the homogenization process removes most of this UHI influence. The reason the trend doesn’t decrease by this much after homogenization, when the UHI influence is removed, is that inhomogeneities identified by the PHA – which artificially deflated the trend by a similar amount – are also removed. This is why we investigated using rural-only neighbors: to see whether the PHA was really just spreading the UHI rather than actually removing it. Given that the PHA with rural-only neighbors *still* identifies the inhomogeneities and increases the trend, this led us to the conclusion that the corrections were warranted and not simply UHI spreading. As you recall from a while back, I had investigated the PHA with synthetic data, and as a first check determined whether it would artificially inflate the trend. It did not: http://troyca.wordpress.com/2011/01/14/testing-the-pha-with-synthetic-data-part-1/
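The percentage arithmetic here can be checked from the quoted trends. Note that the rounded values quoted in this comment reproduce roughly 14% to 19%; the abstract’s 21% upper bound presumably comes from the unrounded trends in the paper itself.

```python
# Reproducing the UHI share and the absolute contribution from the
# table SI.1 trends quoted above.
all_station = 0.074            # C/decade, TOB-only, all stations
rural_only = (0.060, 0.064)    # C/decade, range across urbanity proxies
decades = (2010 - 1895) / 10   # 11.5 decades

for r in rural_only:
    share = (all_station - r) / all_station   # fraction of rise due to UHI
    excess = (all_station - r) * decades      # degrees C over 1895-2010
    print(f"rural trend {r}: UHI share {share:.0%}, ~{excess:.2f} C")
```

The absolute numbers (about 0.12 to 0.16 C of the rise) match the ~0.12C to ~0.16C figure stated in the comment.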

Speaking of UHI, and further to James Sefton’s Australia comments above, it’s worth looking at the BoM’s December 2012 update on weather for Melbourne: http://www.bom.gov.au/climate/current/month/vic/archive/201212.melbourne.shtml
Located in the Central District at the head of Port Phillip Bay, Melbourne is Victoria’s State Capital. Here, overnight minimum temperatures were much warmer than those usually experienced and averaged 15.1°C (departure from normal 2.2°C). That the overnight temperatures in Melbourne are higher than those in most surrounding localities is a consequence of the city being under the influence of urbanisation (cities are usually warmer than their rural surroundings, especially at night, because of heat stored in bricks and concrete and trapped between close-packed buildings). Daytime maximum temperatures were much warmer than those usually experienced and averaged 25.7°C (departure from normal 1.5°C). Total rainfall for the month was 30 mm, this being less than that usually recorded (normal 59.3 mm, percentage of normal received 51%).
Some 20 kilometres northwest of the Melbourne city centre, and located in a somewhat rural setting, Melbourne Airport, is more typical of the suburban areas of Melbourne. Here, overnight minimum temperatures were slightly warmer than those usually experienced and averaged 12.5°C (departure from normal 0.5°C). Daytime maximum temperatures were much warmer than those usually experienced and averaged 26°C (departure from normal 1.6°C). Total rainfall for the month was 18.6 mm, this being much less than that usually recorded (normal 48.8 mm, percentage of normal received 38%).
OK, the BoM acknowledges that UHI affects Melbourne Regional Office temps, primarily minima which, if their airport comparison is the benchmark, adds as much as 2.6C.
Indeed, the December 2012 mean minima at nine weather stations surrounding Melbourne RO averages 12.3C, so it might be said that UHI exaggerates MRO December 2012 min by an average 2.8C.
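The two UHI differences quoted are simple subtractions; as a quick check (values as quoted above):

```python
# December 2012 mean minima (C), as quoted above.
melbourne_ro = 15.1   # Melbourne Regional Office (city centre)
airport      = 12.5   # Melbourne Airport, ~20 km northwest
rural_ring   = 12.3   # average of nine surrounding stations (per the comment)

print(round(melbourne_ro - airport, 1))     # 2.6
print(round(melbourne_ro - rural_ring, 1))  # 2.8
```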
Since they acknowledge UHI, it surely can be assumed they adjust down to compensate for it in their ACORN dataset – the homogenised temp records from a network of 112 stations since 1910 (sort of) that provide Australia’s feed into global temp indices. Melbourne RO is in the ACORN network.
If you look up Melbourne RO raw min temps via http://www.bom.gov.au/climate/data/ and BoM ACORN min temps via http://www.bom.gov.au/climate/change/acorn-sat/#tabs=1, you’ll find December 2012 adjustments thus:
1 Dec 17.6C adjusted to 17.6C
2 Dec 13.5C adjusted to 13.5C
3 Dec 14.2C adjusted to 14.2C
4 Dec 12C adjusted to 12C
5 Dec 12.2C adjusted to 12.2C
etc with no adjustment at all.
There have been no adjustments since 1998/99. So how is this explained? By looking at the adjustments for historic Melbourne RO raw vs ACORN minima records:
1910-29 adjusted up .6C
1930-59 up 1C
1960-69 up .6C
1970-89 up .4C
1990-99 up .2C
no adjustment since 98/99
Since there have been no Stevenson screen, instrument or location changes, the early records are presumably adjusted up to compensate for modern UHI, rather than the modern records being adjusted down, with the difference narrowing since about 1960.
ACORN adjustments reduce the 1910-2012 min increase at Melbourne RO from 1.8C to 1.2C. Melbourne is lucky compared to most other stations.
For example, Laverton RAAF 87031 from the BoM December 2012 comparison table linked above, which is the only ACORN site for comparison on their Melbourne monthly update page … 1946 (earliest year without days missing) Laverton raw mean min 8.8C. ACORN 1946 adjusted down to 8.1C. Laverton 2011 raw 10C. Laverton 2011 ACORN 10C.
Historic UHI adjustments are little more than guesswork and one of various reasons why ACORN is a mess.

Bill Illis,
The 0.4 C difference between urban and rural temps over the century doesn’t translate into an overall 0.4 C bias in the temperature record from all stations. In practice, the bias is about half of that, as about half of the stations are urban and half rural (the actual proportion will vary with the urbanity proxy used). As Troy mentions, you can find the trends from all stations and rural stations for various proxies, series, and time periods in table SI.1 in the supplementary information.
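A toy sketch of the dilution point above (the 50/50 urban fraction is the rough figure from the comment; the numbers are illustrative, not from the paper):

```python
# If only urban stations carry a UHI bias, the bias in the all-station
# average scales with the urban fraction of the network (toy model:
# rural stations contribute zero bias).
def network_bias(urban_bias, urban_fraction):
    return urban_bias * urban_fraction

# ~0.4 C urban-rural difference, ~half the stations urban -> ~0.2 C overall
print(network_bias(0.4, 0.5))  # 0.2
```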

In Australia there are many sites that could be described as pristine. I selected and culled to 44 sites and looked at trends in the last 35 years. The logic was that either Tmax or Tmin or Tmean would show a similar trend from place to place, theoretically related to GHG changes, over the period.
It is desirable to establish a baseline change that is as isolated from spurious effects as possible. I failed to find one. I failed to explain why slopes were all over the place.
I’ve posted this before, but nobody has yet explained it.
Its importance is that failure to obtain a consistent baseline trend in Australia also undermines attempts elsewhere in the world, until an explanation can be found. So, Zeke, you can fiddle with figures as much as you choose, but you can’t have them believed until you can explain this Australian anomaly. http://www.geoffstuff.com/Pristine_Summary_1972_to_2006.xls
Data are from the Australian Bureau of Meteorology as posted on their web sites. There is occasional infilling that would have negligible effect on the outcome. The use of linear least squares fit does not imply an endorsement that this is the best way to interpret the data. It is simply a help to guide the eye. The period was chosen from 1972 because there is a break point in much Australian data about 1970 and I wanted to be past that. They end in 2006 because that’s when my data ended.
The summary information is graphed at the bottom.

‘They make claims at century scales in the paper without supporting metadata at the same scale.’
Climate science has always operated at a standard below that expected of a student doing a science degree. Is it really too much to ask professionals to be at least as good as their own students?

I first started taking an active interest in the basis for “climate change” a couple of years ago. Knowing nothing of which sites might deal with this issue, or what their nature might be, I searched under Hansen, whom I’d heard of.
After finding a few sites, including this one, I firstly took an interest in the actual discussions around the scientific basis for this speculation, theory, or dogma, as you choose.
Whilst keeping an interested eye open, I no longer do that assiduously.
The reason for that starts at Hansen’s NASA site and his “explanation” as to what constitutes a legitimate methodology for establishing the actual temperatures of the earth in the first instance, and then the use of anomalies in preference to raw (adjusted) temperature data.
In a nutshell, from memory, he maintains that there is no such thing as the “real” temperature because any one measurement can only be taken at a specific point, and then goes on to illustrate this supposedly intractable metaphysical problem by citing the difference between a reading, for example at a height of 1 metre compared to say 10 metres. Let alone any measurement taken in a hollow or behind trees etc a little distance away.
And since it is impractical – even impossible – to take measurements across this range that he has manufactured, this is not how things must be done.
Having claimed to have established in this transcendent fashion that there is no legitimacy to an actual measurement, he then claims that the manner for establishing truth is to apply a methodology of his own devising to the very measurement that has no legitimacy.
And so to manufacture “true” data.
I actually couldn’t and still can’t believe what I read.
This is the most bogus thing I have ever encountered. It represents the complete defeat of intelligence. It reeks of deceit.
I can honestly say that my interest in this whole issue is not driven by curiosity it is driven by fear.
The fact that this being was not just considered and accepted as a scientist but as in effect the presiding authority on this issue has made me think that mankind simply has no hope.
Even those who are sceptical, or who give alternative interpretations, seem never to actually see this rudimentary exercise in either incomprehensible incompetence or primaeval fraud. Having since heard of his manoeuvrings in the 1988 Congress hearing, I know what I think it is.
Your efforts Anthony in attempting to actually verify what was being measured both disturbed me profoundly in that it is beyond comprehension that instruments used in testing were never themselves verified, and reassured me that there was at least some basic human intelligence being applied, somewhere.
I see, from some – only some – of the above comments, continuing efforts by you and possibly others, and above all the fact that the core question of what really constitutes the basic application of human intelligence is now being brought into focus, signs that the degradation of human capacities may soon end.
People do need to reduce all of this to such simple observations and evaluations.
It is not even necessary, in coming to a decision on whether CAGW is true or not, to even consider the science or what purports to be science. When someone, anyone, claims “I know this” and therefore “this will happen” about anything at all, and it doesn’t, then they were WRONG. That is, at the time of making such a claim THEY DID NOT KNOW WHAT THEY WERE TALKING ABOUT.
Any subsequent claims to knowledge must be judged in that light. That is, they didn’t know what they were talking about then, but claimed to, and now they are making another claim with the same level of conviction. What should I make of this revised claim – and this person?
When that person refuses to even acknowledge that they were wrong (“it’s worse than we thought”) then you are dealing with a person who is fundamentally dishonest. Intractably dishonest.
The scientific inquiry on this will go on. But it must exclude the apparent multitude of those who are simply not scientists regardless of their accreditation and ratification as such.
People such as @Crispin in Waterloo and @RACook PE1978 above are focusing on the guts not just of this issue but of the whole culture that has generated it.

If you adjust the rural temperature up toward the UHI reading and find that it looks fine, and it is somewhat in line with AGW, well, that’s what you were after anyway.
BUT is that not somehow a bit like committing fraud?
By all means, any adjustment to the data should be avoided in the first place; only then do you get the right answer.
The raw data is the only correct data to use. But that means no warming, so an adjustment is made, and, just as you do it, the adjustment goes upward so as to match the needed global warming.
In the real world, however, the temperature is what the raw data shows, so there is a difference. Man-made, of course.
The only way to go, the honest way, is to use the raw data. If you want to make a correction, the only honest direction is down: you must adjust the UHI stations down toward the rural ones.
So if rural is 15 and UHI is 18, adjusting the rural up gives (15+18+3)/2 = 18.
If you do nothing you get (15+18)/2 = 16.5, and that’s the correct number.
If you do the adjustment right, (15+18-3)/2 = 15.
You see the difference, and it does not look like a lot, but research has shown that UHI can be well over 5 degrees, and your result won’t be correct if you work with faulty data.
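The commenter’s three averaging scenarios can be sketched directly (the round numbers 15, 18, and a 3-degree UHI component are the commenter’s hypothetical values, not data from the paper):

```python
# Three ways of averaging a two-station network: rural = 15, urban = 18,
# with 3 degrees of the urban reading taken as UHI (hypothetical values).
rural, urban, uhi = 15.0, 18.0, 3.0

adjust_rural_up = (rural + uhi + urban) / 2    # raise rural toward urban
no_adjustment   = (rural + urban) / 2          # plain average of raw readings
remove_uhi      = (rural + (urban - uhi)) / 2  # correct urban down

print(adjust_rural_up, no_adjustment, remove_uhi)  # 18.0 16.5 15.0
```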
And then I read the paper over and see something I can’t place.
I can see some fraud in this paper too.
Somehow the raw data is higher than the adjusted data. That looks fine to you, but if it were correct you would get the same result: the corrected series must match the raw data, or differ only a little because you adjust out the UHI.
You used the wrong raw data, and I would like to know which one (Berkeley, I would say).
Now, to me it looks like you have taken the AGW temperature, altered it, and said: look, the temperature is all right, we smoothed out the UHI and still there is global warming!

troyca,
“Given that using the PHA with rural-only neighbors *still* identifies the inhomogeneities, and increases the trend, this led us to the conclusion that the corrections were warranted and not simply UHI spreading.”
A false conclusion, of course. Tell me, where does this systematic increase in trend come from? If you do not have a convincing answer to that question, I do not think there is much of a conclusion to draw from your paper. Sorry.

A quote from Press et al (1989) ‘Numerical Recipes’ in the introduction to the chapter on Statistical Description of Data, which somehow seems pertinent to ‘climate science’ in general:
“Data consists of numbers, of course. But these numbers are fed into the computer, not produced by it. These numbers are to be treated with considerable respect, never to be tampered with, nor subjected to a numerical process whose character you do not completely understand. You are well advised to acquire a reverence for data that is rather different from the ‘sporty’ attitude which is sometimes allowable, or even commendable, in other numerical tasks.”

The figures from the Supplementary data show what’s really happening here.
I want to just focus on the ISA Urban Rural classification and use the TOBs adjusted data (I still like my data raw but that is not presented in the supplemental other than in the chart).
ISA Rural Trend – TOBs -> Min temp 0.064C/decade -> Max temp 0.026C/decade
ISA Rural Full NCDC Adjusted 5.2i -> Min 0.070C/decade -> Max temp 0.060C/decade.
So the Rural stations are adjusted up 0.006C/decade in the Minimum temperatures and up 0.034C/decade in the Maximum temperatures.
So, the Rural temps are adjusted UP on average +0.23C from 1895 to 2010.
Why adjust the Rurals UP +0.23C ?
Applying the same math, we get Urbans adjusted DOWN -0.09C (hardly a UHI adjustment)
And ALL stations (and not including the TOBs adjustment remember) are adjusted UP +0.138C from 1895 to 2010.
I don’t think this is how it is usually described. (And the TOBs adjustment adds – well, no one really knows what it is anymore, but I’ve got it at about +0.28C – and then the adjustments from the truly raw data to the NCDC adjusted raw data are another +0.15C; add it all up if you want.) ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013-suppinfo/hausfather-etal2013-supplementary-figures.pdf
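The totals above follow from simple arithmetic on the quoted per-decade trends; as a sketch (11.5 decades spans 1895-2010):

```python
# Reproduce the back-of-envelope rural adjustment total quoted above.
# Trends (C/decade) as quoted from the supplementary information.
decades = (2010 - 1895) / 10  # 11.5 decades

rural_min_tob, rural_min_adj = 0.064, 0.070
rural_max_tob, rural_max_adj = 0.026, 0.060

# Mean-temperature adjustment is the average of the min and max changes.
mean_adj_per_decade = ((rural_min_adj - rural_min_tob)
                       + (rural_max_adj - rural_max_tob)) / 2
total_rural_adjustment = mean_adj_per_decade * decades
print(round(total_rural_adjustment, 2))  # 0.23
```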

Remember this,
Assuming nothing, I downloaded raw daily data for 282 out of 289 sites. (The other 7 sites either had id number discrepancies or were not online at GHCND.) From this, I calculated average monthly TMAX and TMIN temperatures for all the sites and then calculated 1961-1990 anomalies. I then calculated simple averages of the “raw” anomalies for the two networks BEFORE any jiggery-pokery. Even if all the subsequent adjustments are terrific, from a statistical point of view, it’s always a good idea to see what your data looks like at the start. http://climateaudit.org/2007/08/04/1859/

Gary Pearse says:
February 13, 2013 at 6:18 pm
I think something is being missed here. This is a great idea. If you chose the 100 most pristine sites in the world only, and kept track of their raw data over time, even though you are unlikely to have a good average of global temp (if that is what is being attempted with all the adjustments), you would have a handle on a clean useful trend.
———————————————
I could never figure out why there is a need to compare station pairs; an honest trend with raw data from pristine rural sites is all that’s needed. If you have a rural neighbour, just use that and junk the urban station.

By the way, this is from their paper, “for simplicity, non Urban stations are classified as rural”. As Roy Spencer found, a very significant amount of UHI effect can still be seen in smaller growing communities and suburbs, even when they are not classified as Urban. And some Urban stations can show little or no UHI change because the area was Urban before the thermometers were ever put there. As such, they only reflect the change in urbanization, not the total urbanization. And this means that the total UHI contamination is not reflected in the data anywhere.
This paper is designed to prove a foregone and desired result while avoiding any research that might actually yield some other result.

@jc
That was well considered. The claims as early as 1980 that we are headed for thermageddon because of CO2 were, as we now know, wild-assed-guesses (WAGs) based on a hunch. What ensued after 1988 was a relentless drive to prove the hunches were right (at any cost to ‘scientegrity’) and that drive is still floundering around seeking to legitimize the initial WAGs. Time and time again the reading of the bones precedes the location of a skeleton. Your comment was spot on.
The inveterate devotion to the line ‘the End is Nigh’ permeates the culture of alarmism as a confused humanity submits to yet another crop of quasi-scientists and meta-priests of the counter-culture. It is so easy to see through it is a surprise to me that it has gained so much momentum. I work in a field where I have been told in all seriousness ‘play along, a lot of money will come into your sector if you do’. Just like that – blue lies spoken baldly. And I am speaking of a Top Dog.
There is a moral crisis in the scientific community rooted in the unravellings, at some level, of the social order. The lunatics are in control of the asylum’s administration block and have a grip on the microphone of the PA system. It is going to make a very, very interesting documentary.

Pearse: “even though you are unlikely to have a good average of global temp(if that is what is being attempted with all the adjustments), you would have a handle on a clean useful trend.”
While any one station may have its own climate for a year, or even for several years, the mixing of the atmosphere will assure that no station has a climate that is independent of global trends. I’m just shooting from the hip here, but I seriously doubt that the 100 most pristine stations in the world would have a 100-year trend that varied by more than 0.1C from reality.
The whole Berkeley idea of taking fragments from anywhere and everywhere and statistically homogenizing them together, so that they can claim results based on 10,000 or whatever number of stations, seems totally idiotic to me. Beyond a hundred stations, the answer is going to be far more affected by the quality of the stations and the amount of data manipulation than by any gains from including those stations.

…..It is so easy to see through it is a surprise to me that it has gained so much momentum. ……
No, it is not difficult to see at all. The idea of man-made climate change caused by CO2 got off the ground in the UK because there was a near state of war between the PM and the government on one side and the miners’ union and its leader on the other.
The Hadley climate centre was created with the remit to find a reason to ban coal. It was near inevitable that it would succeed from there, given its usefulness as a reason to tax industrial nations to fund socialist programs in addition to the original purpose.

@ Crispin in Waterloo
Your “reading of the bones” observation just about sums up the levels of function shown and the primitive compulsions behind them. Succinct and accurate! Perhaps even those rummaging in the carcass of civilization will get it. Actually, too much to ask.
I can’t see that this is a surprise though. All of what we see is simply a playing out of themes and characteristics which came into sharp focus in the 1960’s, gained structural definition in the ’70’s and were applied, increasingly widely, through the 1980’s.
The good news is that it is now approaching terminal exhaustion having been largely static and immutable for a quarter of a century. The bad news is it is hard to see a societal wide basis for a renewal. How is anything worthwhile built from complete degradation?
Dishonesty – that is, a basic unwillingness to face reality and the always present limiting factors of that whether in science, politics, human relations, or anything else – is endemic. Convenience, expediency, gratification and self- validation dictate the tenor of just about everything.
The concept of values has been extinguished and has been replaced by the erection of self-interest masquerading as belief, which being completely self-contained is absolutely impervious to those things that don’t confirm it. A self-reinforcing ignorance and sense of moral purity in a vacuum.
“Climate Change” is the ultimate expression of one important strand of this. If this fails the edifice tumbles.
I am encouraged by the occurrence and fate of what seems to me a very comparable hysteria running from the mid-late 19th Century through to the 1920’s: Spiritualism. This had as its most devoted adherents precisely the same type of person: the newly released from toil who no longer had reason for a grounding in the realities of the material world but did not have a commensurate sophistication to translate their experiences back to that. They were pig ignorant.
This might have been more limited in scope and with less capacity for damage, but the nature of it seems to me to be the same. A big difference of course is that now it is not just a matter of parlour entertainments; there is also a great deal of power and money involved. And powerful influences that either see advantage in others being subsumed in a zombie culture or are themselves oblivious to what that is.
You referred to the unravellings of the social order, which as the above must make clear, I agree with, although I might refer to it as the underpinnings that make a social order possible – or even make possible a coherent, meaningful, and effective interaction with the world in every part.
Like in anything, the scientific community cannot stand apart, but it is possible with such a history of observable achievement, and the obligation of rigour that has to go with that, that the first signs, and strength, might come from within it. It’s certainly hard to see the consumer classes being able to pull anything out of this.

@jc
Thanks for the observations and thoughts. The consequence of extreme materialism is unabated self-interest, or rather, selfishness, and the phrase ‘the end justifies the means’ is usually not made to further the interests of others. It is one’s own narrow version of reality that allows such seeds to gain a root-hold in the fertile soil of personal ambition.
Climate science takes full advantage of the propensity to treat the well-schooled (as distinct from the educated) as para-priests who will interpret the Book of Life on behalf of the ignorant masses. The self-image of being ignorant is reinforced by laying on the BS about how, if you get a string of degrees, you are therefore educated and therefore knowledgeable and therefore wise. I was surprised when a senior manager friend of mine said that an MBA was really meaningless. He said it just means the person jumped through the hoops, and tells you nothing about their ability to think or perform.
Climate alarmists rush to any corner of the room where paper-rooted status attaches to pronouncements. It has been interesting for me to see the sterling work performed by non-specialists applying common sense to obfuscative and largely meaningless ‘work’ done primarily in support of a CO2-is-dangerous narrative. To date I have yet to see anything that shows CO2 is dangerous. Zilch. Its thermal effects in the atmosphere are not even detectable, for heaven’s sake. But the effect of a well-placed and shrill call to defend The Earth is definitely detectable, on our wallets.
The work in the paper above is, as far as I see, not technically defective in the sense that they are reporting what they observed when analysing the outputs of data massaging protocols. So a high horse awaits the work – technically correct. Moving the deck chairs on the Titanic away from the exits was also technically correct and wise. We could host a conference on how and where chairs should be moved away from exits on sinking ships. It seems to me, at least, that the time would have been better spent learning how to steer ships through fields of icebergs – not ever having to think how to displace and place deck chairs.
If we have a heavily contaminated US temperature set, there may be a passing usefulness in learning whether one or another method of ‘correcting’ it (guesstimating with best guesses in sets and contra-sets) works, but doesn’t it occur to people, whose limited working lives are all too evident in the phrase ‘three score and ten’, that it is pretty much a waste of talent?
That of course applies equally to those who are bending over backwards to defend the indefensible position that human emissions from burning fossil fuels are causing not only a rise in global temperatures but an increasing rate of rise. What a load of horse feathers. Any child who can read a thermometer soon knows it is untrue. My goodness, don’t people have better things to do with their limited time and self-acclaimed talents?
Professional climate science and particularly CAGW has become a moral exclusion zone. “Death to climate deniers”? They have the moral gravitas of witch-sniffers (ukubhula). Thank goodness for the few stalwarts who insist on finding and publishing comprehensive analytical works that meet the standards which obviously should apply universally. That they are bitterly opposed simply adds lustre to their diadems.

Zeke Hausfather says:
February 14, 2013 at 11:26 am
Bill Illis,
The majority of the positive adjustments to max temperatures by the PHA are due to the negative bias of around 0.4 C introduced when stations change from CRS to MMTS instruments.
You can see the effects rather clearly here: http://i81.photobucket.com/albums/j237/hausfath/MMTSCRSraw_zps436b0190.png
—————–
Your chart shows a bias in both Max and Min temperatures. There is almost no net change in the Avg from the MMTS.
And how did this inaccurate sensor get installed all over the US (and then with a wire that is too short to use properly)? Who at the NCDC/NOAA tested it?

Zeke: “The majority of the positive adjustments to max temperatures by the PHA are due to the negative bias of around 0.4 C introduced when stations change from CRS to MMTS instruments.”
Why are you compensating for a negative MMTS bias? Why don’t you compensate for a positive CRS bias? As the CRS instruments age, their peeling white paint has less albedo, introducing a positive bias. So compensate for the right bias. Don’t compensate for new instruments that were calibrated in the lab.
Oops, I forgot, the cardinal rule of climatology is that all corrections must be positive. And any negative corrections, like UHI obviously would be, must be made to magically disappear without actually being corrected for.
And another question: if you are going to “disappear” UHI using homogenization only, then why not disappear TOB using homogenization only? Why not “disappear” MMTS using homogenization only? With the things that give you positive corrections, you add them in separately. With the things that would give you negative corrections, you don’t compensate for them at all. You simply spread them out across all stations so that the difference between stations is not visible, but the bias is still there.
Multiple studies have already shown that there is a huge UHI factor. Differences between large urban areas and their surrounding landscape can be clearly seen on satellite images. As much as 7 to 10 degrees F of difference exists between some cities and their surrounding rural farm and wild areas. I can see between 2 and 12 F difference nearly every day driving my wife to work from the suburbs to the city and then coming back. Producing a result that shows that there is no UHI effect simply shows that you have no ability to do science.

Those who homogenize are great logicians. When it comes to explaining a sudden drop in temperatures, roofs heat a lot, UHI is large, evaporation cools very efficiently.
Miraculously, when you put a road next to a thermometer, when you drain near a station, when you build around the instrument – pffuit, nothing, no effect at all.

Hi Zeke – In your analysis, there are a number of issues that were not examined in your paper. We overviewed most of these in our paper
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229. http://pielkeclimatesci.wordpress.com/files/2009/10/r-321.pdf
Unfortunately, you ignored these issues in your paper (as has NCDC, in general). Anthony, in his preliminary comments, has already effectively summarized some of them.
In this comment on your paper, however, I just want to highlight one issue. That is, you and your co-authors have not assessed whether the trends in absolute humidity between rural and urban locations are identical. If they are, then this eliminates an important uncertainty regarding your conclusions. However, if they are different, then these trends, when used as part of the construction of a global (or USA) land average as a measure of global climate heat changes (“global warming”), will be misinterpreted. This issue is discussed in the papers
Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211. http://pielkeclimatesci.wordpress.com/files/2009/10/r-290.pdf
Davey, C.A., R.A. Pielke Sr., and K.P. Gallo, 2006: Differences between near-surface equivalent temperature and temperature trends for the eastern United States – Equivalent temperature as an alternative measure of heat content. Global and Planetary Change, 54, 19–32. http://pielkeclimatesci.wordpress.com/files/2009/10/r-268.pdf
Fall, S., N. Diffenbaugh, D. Niyogi, R.A. Pielke Sr., and G. Rochon, 2010: Temperature and equivalent temperature over the United States (1979 – 2005). Int. J. Climatol., DOI: 10.1002/joc.2094. http://pielkeclimatesci.wordpress.com/files/2010/02/r-346.pdf
Peterson, T. C., K. M. Willett, and P. W. Thorne (2011), Observed changes in surface atmospheric energy over land, Geophys. Res. Lett., 38, L16707, doi:10.1029/2011GL048442
As we show in Figure 11 of
Pielke, R.A. Sr., K. Wolter, O. Bliss, N. Doesken, and B. McNoldy, 2007: The July 2005 Denver heat wave: How unusual was it? Nat. Wea. Dig., 31, 24-35. http://pielkeclimatesci.wordpress.com/files/2012/01/r-313.pdf
the dry bulb temperature can be quite high, but the actual heat content of the air could be lower than earlier in the day when there is more water vapor in the air.
Thus, your paper has ignored the issue of the effect of concurrent trends in absolute humidity. It would be remarkable if the rural and urban locations had the same trends.

Multiple studies have already shown that there is a huge UHI factor. Differences between large urban areas and their surrounding landscape can be clearly seen on satellite images.

I have a free local TV weather app on my smartphone where you can select different weather stations to get the conditions there; you can see the temp change between stations at your fingertips. I’ve compared the two closest (less than ~5 miles away) to the Airport, and the Airport is 2-5 degrees warmer. I’m sure there are hundreds of versions of this free app, and mine (WKYC) will, I think, call up the weather all over the world.
Zeke, I’ve noticed you’ve looked at T-min, T-max, etc. I worked from the NCDC’s Global Summary of Days 120M+ record set to determine today’s T-rise and subtract tonight’s T-fall at each station, then averaged the T-diff from different stations across various areas. This rejects many of the sources of error that come with looking at T-min or T-max alone, because it’s an anomaly of measurements taken within 24 hrs of each other.
What I’ve found is that the annual daily average rise and fall is ~18F each; it varies some from year to year over the last 60+ years, but the difference between the various years is slight, with no trend as CO2 has increased.
Basically, with only slight variation, the night-time temp drops as much as the temp went up during the previous day.
Also, when you select very low humidity, minimal wind speeds and no rain over the 48-hr period, the swing can be 40F up and down. Clearly CO2 isn’t reducing the ability to cool at night.
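A minimal sketch of the rise-and-fall comparison described above, under simplifying assumptions (the helper function and sample values are hypothetical; real GSOD records need parsing and quality checks):

```python
# Sketch: for each station-day, compute the daytime rise (Tmax minus the
# morning Tmin) and the following night's fall (Tmax minus the next
# morning's Tmin), then average the difference across days.
def daily_rise_minus_fall(days):
    """days: list of (tmin_morning, tmax, tmin_next_morning) tuples, in F."""
    diffs = []
    for tmin0, tmax, tmin1 in days:
        rise = tmax - tmin0        # warming during the day
        fall = tmax - tmin1        # cooling during the following night
        diffs.append(rise - fall)  # > 0 means the day warmed more than it cooled
    return sum(diffs) / len(diffs)

# Toy example: two days where nights cool almost exactly as much as days warm.
print(daily_rise_minus_fall([(50, 68, 50.5), (50.5, 69, 50)]))
```

A near-zero average is what the comment describes: no net day-over-day accumulation of heat.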
You can review this by following the link in my name, where I have a handful of blogs on the topic.

Tilo Reber,
Actually, NCDC’s method assumes that current temperature readings are the most accurate ones, and applies adjustments relative to those. So the MMTS adjustments do effectively cool the past (since MMTS max temps read ~0.4 C lower than CRS).
MMTS adjustments are made automatically using homogenization. TOBs adjustments can be as well (you get around the same result, e.g. in the Berkeley approach or in Williams et al 2012). As far as I know, the separate manual TOBs adjustment is somewhat of a legacy approach, and will likely be removed in the future provided they are satisfied that automated pair-wise methods can be as effective.
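As a hedged illustration of “adjusting relative to current readings”: a toy step-change correction that shifts the earlier segment rather than the modern one. The series and the 0.4 C offset are illustrative only, not NCDC’s actual algorithm:

```python
# Toy illustration: when a station switches from CRS to MMTS (with MMTS max
# reading ~0.4 C lower), the *earlier* CRS segment is shifted down so it is
# homogeneous with the current instrument, rather than shifting the modern
# readings up. Values and offset are hypothetical.
def adjust_to_present(series, change_index, offset):
    """Shift everything before change_index down by offset (CRS minus MMTS)."""
    return [t - offset if i < change_index else t
            for i, t in enumerate(series)]

raw = [20.4, 20.5, 20.6, 20.1, 20.2]   # CRS for 3 values, then MMTS
print(adjust_to_present(raw, 3, 0.4))  # earlier CRS values cooled by 0.4
```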
Dr. Pielke,
Thanks for your comment. Do you by chance know of a spatially-dense network of stations that have wet bulb and humidity readings over long period of time for the CONUS region? We were somewhat limited in our analysis to drybulb temperatures by lack of readily available data over the last 50-100 years, more than anything else. There could certainly be some interesting follow-up work looking at the humidity question in more detail.

jc says:
February 14, 2013 at 8:52 am
“Like in anything, the scientific community cannot stand apart, but it is possible with such a history of observable achievement, and the obligation of rigour that has to go with that, that the first signs, and strength, might come from within it. Its certainly hard to see the consumer classes being able to pull anything out of this.”
Enjoyed your posts greatly. I see science as the last bastion of reason in a civilization undermined by postmodern thought – which is little more than Derrida’s literary theories.

Zeke,
“So the MMTS adjustments do effectively cool the past (since MMTS max temps read ~0.4 C lower than CRS).”
Less than 0.1 °C for Tavg, much of which has nothing to do with the instrument itself. Very little is allocated, especially in the case of the PHA run on rural stations only. No perturbation (given your assumptions) can be invoked to explain the negative jumps.
You still have a lot of work to do.

Hi Zeke – Regarding your question
“Thanks for your comment. Do you by chance know of a spatially-dense network of stations that have wet bulb and humidity readings over long period of time for the CONUS region? We were somewhat limited in our analysis to drybulb temperatures by lack of readily available data over the last 50-100 years, more than anything else. There could certainly be some interesting follow-up work looking at the humidity question in more detail.”
please see the data sources we used in the papers I listed.
Roger

@ Crispin in Waterloo
“The ends justify the means” can probably be taken as the definitive summation of the past 40 years. To those who register it, it is generally automatically taken to refer to “politics”. You are, however, dead right to draw the connection to individual values and mindset (which affects how things are seen, and even the capacity to see them) and behavior.
Such a mindset can only legitimize itself to someone who holds it by viewing those affected by any actions taken as an enemy. Even there it has no validity, since the means appropriate for dealing with an enemy differ in nature from those required for dealing with a member of the same cohort or society. Such an instinct is possessed only by those who cannot get what they want by means that are legitimate within the society they live in.
Within a society that is not openly at war internally, this is invariably expressed in deceit. Presenting things as other than they are, or omitting that which should not be omitted, with the intention of gaining advantage, has been completely normalised.
Thus to hide things rather than reveal them is standard. This is antithetical to any interaction in society happening in a manner that is of benefit to any but the deceiver. It is plainly evident in “Climate Science”. Reality can be hidden by distortion, withholding core information, or manipulation.
The most vivid illustration I have come across in this area of “climate science” is with the sea level data at Colorado. The claim that an adjustment is justified because “the land rose” is an extremity of deceit and degradation.
Sea level is just that: the level of the sea against the land. It is not sea depth, it is not sea volume, it is not sea area. It is not “what it would have been if something we claim has happened had not occurred”.
This claim is an assault on the capacity of anyone, including children, to have a grasp on reality. Every child has had an understanding of what sea level means. Now they don’t.
All of this is a direct and inevitable result of “the ends justify the means” being in fact not a “political” strategy or abstraction but an expression of primaeval self-interest. Just because it expresses itself collectively does not change that. Whatever vehicle carried it here is a distraction and camouflage in itself.
In a world of “the ends justifies the means” the publicly claimed end can never arrive; it is not actually the intention, and in any case can only create conditions that make any worthwhile end impossible.
There are only means and if these are not honest this is to the advantage of the intrinsically dishonest, and in a culture based on this self-interest, to the mediocre.
Behold “Climate Change”.
I think your comment on “accreditation mania” goes to a significant part of the problem as to how things play out not just in science but in all areas. It is not just the “devotees” relationship to those who hold paper, but the paper holders themselves, as you point out.
I suspect there is now a very large proportion of people, not just MBAs but those in science and elsewhere, who do not actually have a grasp of the fundamental nature of what they are involved in. Instead, they have been trained (point well taken about the distinction from education) to apply a process, and the process is itself the point.
That is, the process is not a tool or protocol in service of seeking a truth, but that the process is the truth. This is a technicians role. And a good technician will offer more than that in any case.
This is the mentality of a lawyer. The Law may legitimize itself as being based on justice, but no lawyer I’ve ever met thinks that what they do is about justice. It is a system of administration that refers only to its own internal processes which may or may not deliver what might be seen as justice.
For a lawyer any proposition can be promoted if there is advantage to it. And this is seen to be legitimate. It has no values, is ratified as properly executed entirely by its own processes not by reference to any reality external to them, and has only an incidental relationship with its foundational basis.
Your analogy to the Titanic is very apt. There is no point to arguing the toss about seating arrangements. It is a symptom of the problem that people do.
Although many of these points we have exchanged will be seen as extraneous to science by many posters on the site, to me, and it seems to you and others, that these are basic.
Contrary to the aphorism, you can build on sand if properly dealt with by human intelligence. But you can’t build on the primaeval swamp.

@ Theo Goodwin
Derrida is, as you identify, a touchstone in all this. I have to take issue with you, however (although I know it reflects convention), that what is described as post-modernism reflects something that can be called “thought”. Words are used and writing is employed to express it, so far as it is expressible, and this gives the impression that it is part of a body of human comprehension; but social niceties can only go so far!
It is more accurate to say that it is an absence of thought, in that thought must contain at least the potential for meaning. It is more akin to a psychological state whereby, all things being conditional, no apprehension of anything, including itself, can occur. This can be revelatory when first encountered, as a means of clearing the mind of any and all preconceptions, but it can only exist for a moment. It is just the intellectual equivalent of being struck dumb by something unforeseen and therefore not immediately absorbed. The fact that it is constituted and dissected as an actual position is self-defeating and exhibits the inherently fraudulent nature of the whole show.
I can’t say I’ve read Derrida since that implies that what appears as writing is intended to and can communicate something of meaning, and that the reader can discern something within the script to engage with, rather I can say I have exposed myself, to the degree I thought bearable, to his meanderings.
Derrida was simply a purveyor of gibberish. This gibberish did not come from nothing however and was not produced with no point in mind.
Such gibberish is a god-send to those for whom it is useful to be able to justify anything and to never be pinned down or held to account. Since it is delivered in such a manner as to say “this has a basis in intelligence”, with all the accoutrements of culture attached, then there must be some reasonable base for any claim, mustn’t there, even if it cannot be seen? So it’s YOUR problem if things don’t seem amenable to sense. This is just the technique of any con-man carried to an absolute degree. This is its achievement.
As such it suited the ignorance and self-evident existential worth of those liberated in the 1960’s and beyond from any real sense of responsibility and whose livelihood was derived from activities one or ten steps removed from the basis for the material wealth that enabled it.
What better than a source of justification for not having to exist in the straitjacket of values or concern for others, who of course all have their alternative conception of reality, and why should anyone cater to that?
And what better possible life could there be for Derrida himself, where any musings would do?
Groovy man.

@jc
The comment by Roger P Sr above shows that your comments have currency in this thread. One can easily misrepresent the whole truth by simply not mentioning critical elements that would change the whole conclusion, were they to be acknowledged.
In this case, the matter of the absolute humidity of the air, which is analogous to its heat-containing capacity, means that temperature is not the only consideration when checking on the validity of homogenisation routines. I have noticed that each time Willis E tries to discuss something involving heat transfer and energy, not just ‘temperature’, the comments from readers are about the most obtuse on WUWT. People simply do not understand the concept of enthalpy, or how measuring only one (temperature) of three essential metrics (temperature, heat capacity Cp, and density at the time) leads to meaningless statements about ‘climate’. The comment above that we are not talking about, nor need to talk about, heat when discussing temperature indicates that ‘even the elect’ may struggle with basic concepts of what it means to ‘warm the globe’.
Our senior science officer (a nuclear physicist) and I were talking about the pointlessness of a metric that has been used for some years. It involves taking a task-dependent quantity of energy and dividing it by a volume that is not necessarily related to the energy number. He agreed it was invalid and commented that ‘we could divide by the distance to Mars and get a number’. He is right – we will get ‘a number’, but that number does not have any meaning. Because the back-yard gardener deals with temperature in terms of ‘a number’, there is a lot of discussion about what are no more than ‘numbers’. The paper under discussion takes a very close look at how certain numbers are influenced when treated in a certain way, and concludes that the treatment has not affected the numbers in a way that matters. Well, OK, but as Roger Sr. (indirectly) points out, the exercise has no more value than dividing all the numbers by the distance to Mars and plotting the anomalies. It is not an error of commission; it is an error of omission.
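The enthalpy point made above can be made concrete with a standard first-order approximation: specific moist enthalpy h ≈ cp·T + Lv·q. This is a sketch under textbook constants (not anything from the paper under discussion), and the temperatures and mixing ratios below are invented for illustration.

```python
# Rough illustration of the enthalpy argument: two air parcels at the
# SAME thermometer reading carry very different amounts of heat if
# their humidity differs. Standard approximate constants; a sketch,
# not a rigorous meteorological calculation.

CP_DRY = 1005.0    # J/(kg K), specific heat of dry air
LV = 2.501e6       # J/kg, latent heat of vaporization of water

def moist_enthalpy(temp_c, mixing_ratio):
    """Approximate specific moist enthalpy (J/kg):
    h = cp*T + Lv*q, T in Celsius, q in kg water per kg dry air."""
    return CP_DRY * temp_c + LV * mixing_ratio

dry = moist_enthalpy(30.0, 0.005)    # 30 C, fairly dry air
humid = moist_enthalpy(30.0, 0.020)  # 30 C, humid air
print(humid - dry)  # same temperature, tens of kJ/kg more heat content
```

A drybulb-only comparison scores these two parcels as identical, which is exactly the omission being criticized.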
Re the value of schooling, training and education: nothing was so sobering about the worth of a PhD as when I started guiding candidates through the writing of papers reflecting intelligent, coherent thoughts on a subject that comes naturally to me (I guess). I was appalled. I have experienced getting someone out the door simply to vacate the space for someone else who might be worthier of the effort. Egad, I am unimpressed by worth of paper!
I used to be impressed by papers appearing in reviewed journals. The glory days were the 60’s in my memory. Science was King! Buck Rogers lived around the corner. Now, having been published and been a reviewer for papers and grants, sobriety once again places its cold hand on my warm throat. Thorough-going ignorance abounds. Was it ever thus?
It is very different now and we have, in large measure, climate science to ‘thank’ for it. The review process with respect to climate-related issues is broken. Anyone can see that. The greatest sins of omission in the modern era belong to climate-oriented publications. Simultaneously the rent-seeking sheep bleat their chant ever-louder across the Farm, “Four citations good! Two devastating counterpoints, bad!” That is noble-cause corruption without the chador. We deserve and can do much better than that.

I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data
Howsyerfather has been blogging at Julia’s for some time now and always does so with his global warming agenda up front. He’s a waste of space.

@Stephen
You are reinforcing the comments of others that there was an agenda behind the purported purpose of this technical review (which is what the paper is). I don’t have an opinion on this, but you might be right. They are not claiming that what they are doing gives a good estimate of temperature; that is kinda the point. Other people are claiming that their own processes produce usable and meaningful temperatures, but this paper just examines, in a certain manner, very narrow technical aspects, and as I indicated above, they are saying that if they paint it blue, it looks blue. Well, my reply is: so what? If what you are painting blue is not a valid temperature set, how will the blue paint help?

Zeke et al., congratulations on the paper. I think you have shown clearly that spatially gridded data fails to remove UHI from the trends and is not a viable method for determining trends without UHI pollution.
I see a couple of issues that I’d like to see addressed in regards to this though.
First, solving for the difference in trend between urban and nonurban stations does not determine how much UHI or LHI is impacting the trend. That equation can only solve for how much extra UHI warming the subset of urban sites has over nonurban sites. So instead of solving for Urban Trend minus UHI Trend equals Rural Trend, what you are actually solving for with your method is Urban Trend minus Urban UHI Trend equals Rural Trend plus Rural UHI Trend. The UHI trend among nonurban (which really aren’t rural) stations is still present in both sets of data. At best you have removed any surplus UHI that urban stations show over the UHI that your selection of possibly rural stations shows. Until you compare the urban trend with a trend for a set of rural sites that have both remained rural (avoiding UHI) and have adequate station siting (to avoid LHI), you cannot solve for the amount of UHI in the data. You can only determine how much worse UHI affects one subset of stations versus the other (not entirely urban) subset.
Your pairing methods would smear the warming from poorly sited and UHI-influenced stations that are classified as nonurban in your data set across the well-sited, UHI-free rural stations. For example, suppose you have 3 ‘rural’ stations to pair with your urban station: the well-sited, actually rural one shows a trend of 0, a poorly sited ranger station beside a parking lot has a trend of 2, a suburban airport station with both UHI and LHI has a trend of 4, and your urban station with UHI has a trend of 6. Your methodology would calculate the mean of the 3 rural stations (2), smearing the UHI and LHI pollution across the actually accurate station, and then the homogenization process would lower the urban station to a trend of 2, when the real regional warming trend is zero, because failing to eliminate the UHI/LHI station issues in the rural sites prior to homogenization bakes those errors into the trend. In order to get at the actual UHI impact, not just the surplus UHI, you must control your rural stations, only including trends from actual rural sites (not airports, wastewater treatment heat sinks, and suburbs) that have high station quality.
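The smearing example above works out numerically like this. This is a deliberately crude caricature (pull the urban station to the neighbor mean), not the actual pairwise homogenization algorithm, and the trend values are the commenter’s hypothetical ones.

```python
# Numerical version of the smearing example: three "rural" neighbors
# (trends 0, 2, 4, in arbitrary units) used to homogenize an urban
# station (trend 6). Crude caricature of homogenization, not PHA.

rural_trends = [0.0,  # well-sited, genuinely rural station
                2.0,  # poorly sited ranger station by a parking lot
                4.0]  # suburban airport with both UHI and LHI

urban_trend = 6.0

# Crude "homogenization": pull the urban trend to the neighbor mean.
neighbor_mean = sum(rural_trends) / len(rural_trends)
homogenized_urban = neighbor_mean

print(neighbor_mean, homogenized_urban)
# Both come out as 2.0, even though the best-sited station read 0.0:
# the bad neighbors' bias is now baked into the "regional" trend.
```

The adjusted urban trend lands at the neighbor mean, so the comparison can only ever detect the urban station’s surplus over the neighbors, never the bias the neighbors share.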
Second, I think the adjustment for station type is not correcting for what you think it is correcting for.
“So the MMTS adjustments do effectively cool the past (since MMTS max temps read ~0.4 C lower than CRS).”
This 0.4 C lower reading is based on what? Is it the difference between the two types of thermometers in a controlled environment, showing a 0.4 C difference under identical conditions? Or is it the difference in station measurements estimated after the switch from CRS to MMTS? I would like to see this addressed, because as I understand it, the switch from CRS to MMTS also typically involved moving the station, increasing local heat islands. The MMTS requires power and cabling, which resulted in stations that had been sited properly being moved alongside buildings so the cabling could reach the sensor. In addition, the surfacestations project shows many of these stations also had walkways constructed directly to the station for access. So does this 0.4 C difference appear both in controlled environments and in the station inhomogeneities? If we added a UHI/LHI error into the measurements, then it is not appropriate to adjust old temperatures down if the actual deployment of the MMTS sensors was not accompanied by a measured difference of 0.4 C.
Another issue with homogenization is: what effect does station quality have on homogenization’s ability to detect both step and trend variations? If a station is rated as accurate to 1.0 C, is homogenization more likely to detect a step increase there than at a station with 5.0 C accuracy? I would expect detecting both step and trend variations to be more difficult at the less accurate stations, so step increases at better stations may be homogenized out while the errors at poor-quality stations are not. There is also another issue with station quality in climate data: the error is treated as if it follows a standard bell-shaped distribution, so a station rated plus or minus 5.0 C is assumed just as likely to read 5.0 C low as 5.0 C high. This then washes out in the statistical processing, since you have enough data points to claim the errors cancel each other out. But is there any actual evidence that station errors are distributed normally? The causes of station error are strongly biased toward warming, so rather than a bell curve centered on 0 you may have a bell curve centered on +3.
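The point above about non-centered error distributions is easy to demonstrate. The +3 offset and the spread of 2 are the commenter’s hypothetical numbers, not measured values; the sketch simply shows that averaging many stations converges on whatever the errors are centered on, not on zero.

```python
# Sketch: if station errors are centered on zero they wash out in the
# average, but if siting biases skew them warm, averaging converges on
# the bias itself. Offsets here are hypothetical, not measured.

import random

random.seed(42)  # reproducible draws

def mean_error(n_stations, bias):
    """Average of n station errors drawn from N(bias, sd=2)."""
    errs = [random.gauss(bias, 2.0) for _ in range(n_stations)]
    return sum(errs) / n_stations

print(mean_error(10000, 0.0))  # near 0: symmetric errors cancel
print(mean_error(10000, 3.0))  # near 3: biased errors do NOT cancel
```

More stations shrink the scatter around the mean, but they can never remove a shared warm bias; that is the flaw in the “errors cancel out” assumption.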
The paper also claims that undocumented station moves between the 30s and 60s may have introduced a cooling bias to rural stations, making the UHI seem worse than it really was. Is there any measurement data to back up this assertion, i.e. that moving a station from downtown to heat sinks such as airports or wastewater treatment plants causes a cooling inhomogeneity? Or is this simply supposition to explain it away?
In addition, the TOBS adjustment assumes that rural stations read their thermometers later in the day than urban ones. Is there any actual evidence of that? Because knowing rural people, they tend to get to work earlier than urbanites, and I have a hard time buying that rural stations need more TOB adjustment than urban ones. It adds another bias into the stations that masks UHI effects. I know this is how it is typically done, but what is the hard evidence that this divergence is real and necessary, as opposed to merely convenient?
