Urban Heat Islands and U.S. Temperature Trends

The impact of urban heat islands (UHI) on temperature trends has long been a contentious area, with some studies finding no effect of urbanization on large-scale temperature trends and others finding large effects in certain regions. The issue has reached particular prominence on the blogs, with some claiming that the majority of the warming in the U.S. (or even the world) over the past century can be attributed to urbanization. We therefore set out to undertake a thorough examination of UHI in the conterminous United States (CONUS), examining multiple ‘urban’ proxies, different methods of analysis, and temperature series with differing degrees of homogenization and urban-specific corrections (e.g. the GISTEMP nightlight method; Hansen et al., 2010). The paper reporting our results has just been published in the Journal of Geophysical Research.

In our paper (Hausfather et al., 2013) (pdf, alt. site), we found that urban-correlated biases account for between 14 and 21% of the rise in unadjusted minimum temperatures since 1895, and 6 to 9% since 1960. Homogenization of the monthly temperature data via NCDC’s Pairwise Homogenization Algorithm (PHA) removes the majority of this apparent urban bias, especially over the last 50 to 80 years. Moreover, results from the PHA using all available station data and using only data from stations classified as rural are broadly consistent. This provides strong evidence that homogenization reduces the urban warming signal by actually eliminating an urban warming bias present in the raw data, rather than by simply forcing agreement between urban and rural station trends through a ‘spreading’ of the urban signal to series from nearby stations.
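To make the "percentage of the rise" arithmetic concrete, here is a minimal sketch, with invented numbers rather than the paper's data, of how the urban-correlated share of a trend can be computed by comparing an unadjusted series with a homogenized one:

```python
# Sketch: estimating what share of a warming trend is urban-correlated bias.
# All numbers here are synthetic illustrations, not the paper's results.

def ols_slope(years, temps):
    """Ordinary least-squares slope (deg C per year)."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1895, 2011))
# Hypothetical series: a 0.7 C/century background trend, plus a
# 0.15 C/century urban bias present only in the unadjusted data.
background = [0.007 * (y - 1895) for y in years]
unadjusted = [b + 0.0015 * (y - 1895) for y, b in zip(years, background)]

raw_trend = ols_slope(years, unadjusted)
homog_trend = ols_slope(years, background)  # stand-in for homogenized data
urban_share = (raw_trend - homog_trend) / raw_trend
print(f"urban-correlated share of the raw trend: {urban_share:.0%}")
```

With these invented slopes the urban share comes out around 18%, inside the 14-21% range quoted above, but that agreement is by construction of the example.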

Homogenization is a somewhat complex term for a conceptually simple idea. Climate variations tend not to be purely local so changes in temperatures over long time spans (longer than a month) will be highly spatially correlated. Any major changes over time in individual stations that are not reflected in nearby stations are likely due to local (rather than regional) effects such as station moves, instrument changes, time of observation changes, or even such things as a tree growing over the thermometer stand. By removing any artifacts of individual station records not shared with other stations in their region, we can get a more accurate estimate of regional climate changes.
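A toy version of this idea (not NCDC's actual PHA, which is considerably more sophisticated) is to look for a step change in the difference between a target station and a composite of its neighbors: shared regional climate cancels in the difference, so a local artifact stands out.

```python
# Conceptual sketch of pairwise breakpoint detection (not NCDC's PHA):
# an inhomogeneity shows up as a step in the target-minus-neighbors
# difference series, because regional climate is shared but the artifact is not.

def best_breakpoint(diff_series):
    """Return (index, step size) for the split of the difference series
    into two constant segments with minimum total squared error."""
    best = (None, 0.0, float("inf"))
    for k in range(2, len(diff_series) - 2):
        left, right = diff_series[:k], diff_series[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = sum((x - ml) ** 2 for x in left) + sum((x - mr) ** 2 for x in right)
        if sse < best[2]:
            best = (k, mr - ml, sse)
    return best[0], best[1]

# Target station picks up a ~+0.6 C shift (e.g. a station move) at index 6;
# the neighbor composite does not.
target =    [10.1, 10.0, 10.2, 9.9, 10.1, 10.0, 10.6, 10.7, 10.5, 10.6, 10.7, 10.6]
neighbors = [10.0, 10.1, 10.1, 10.0, 10.0, 10.1, 10.0, 10.1, 10.0, 10.1, 10.1, 10.0]
diff = [t - n for t, n in zip(target, neighbors)]
idx, step = best_breakpoint(diff)
print(idx, round(step, 2))  # breakpoint at index 6, step of roughly +0.57 C
```

Once the breakpoint is located, the pre-break segment can be adjusted by the step size so the station record is consistent with its neighbors.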

The conterminous United States (CONUS) has some of the most dense, publicly available digital surface temperature data in the world with over 7000 Cooperative Observer (Coop) stations reporting daily maximum and minimum temperature. This provides a unique resource to compare subsets of stations with various characteristics (e.g. urban form, sensor types, etc.) without suffering bias due to differing spatial coverage, a factor that often complicates global-scale studies of UHI. The Coop Program also maintains accurate station location data (roughly 30 meter accuracy), which allows for the accurate indexing of Coop stations against high-resolution spatial datasets that are useful for identifying urban and rural areas.

We use four datasets to classify stations as urban or rural, all with 1 km spatial resolution:

GRUMP – an urban boundary database using administrative borders and other factors (including nightlights) produced by Columbia University.

ISA – a satellite-derived estimate of impervious surface area (roads, rooftops, parking lots) as a percentage of each grid cell.

Nightlights – satellite measurements of night-time brightness, also used in the GISTEMP urban correction.

Population growth – 1930 to 2000 population growth data interpolated to kilometer resolution using U.S. Census data.

We also used two different methods to compare urban and rural stations: a station pairing method, where we looked at all possible permutations of urban and rural stations within 100 miles (160 km) of each other for each urban proxy, and a spatial gridding method, where we used a grid-based approach to calculate CONUS temperatures separately using only urban and only rural stations and compared the results. For the station pairing method, we imposed the additional restriction that both members of a pair must have the same instrument type, to avoid conflating actual urban-related warming with bias due to urban-correlated differences in the frequency of the transition from liquid-in-glass thermometers to electronic MMTS instruments.
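The pairing logic can be sketched as follows; the station coordinates, urban flags, and instrument labels here are invented for illustration, while the real analysis runs over the full Coop network:

```python
# Sketch of the station-pairing idea: enumerate all urban-rural pairs
# within 160 km that share the same instrument type.
from itertools import product
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# (name, lat, lon, urban?, instrument) -- hypothetical records
stations = [
    ("A", 40.0, -105.0, True,  "MMTS"),
    ("B", 40.5, -105.2, False, "MMTS"),
    ("C", 40.4, -104.8, False, "LiG"),   # excluded: different instrument
    ("D", 44.0, -100.0, False, "MMTS"),  # excluded: too far away
]
urban = [s for s in stations if s[3]]
rural = [s for s in stations if not s[3]]

pairs = [
    (u[0], r[0])
    for u, r in product(urban, rural)
    if u[4] == r[4]  # same instrument type
    and haversine_km(u[1], u[2], r[1], r[2]) <= 160.0
]
print(pairs)  # only ("A", "B") survives both filters
```

The trend difference within each surviving pair is then what gets averaged across all pairs for a given urban proxy.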

Finally, we examined six different versions of U.S. temperature data separately for both maximum and minimum temperatures:

Raw station data with no adjustments

Station data with only time-of-observation bias (TOBs) adjustments

Station data with both TOBs and full PHA homogenization

Station data with TOBs, full PHA homogenization, and GISTEMP satellite nightlight-based corrections

Station data with both TOBs and rural-only PHA homogenization

Station data with both TOBs and urban-only PHA homogenization

We created estimates of urban-rural differences for each of the four urban proxies, two analysis methods, six temperature datasets, and maximum and minimum temperatures, for a total of 96 different combinations.
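The combination count is just a Cartesian product over the four design dimensions. The short labels below are our own shorthand for the datasets and proxies described above, not the paper's exact identifiers:

```python
# Enumerating the 96 analysis combinations directly.
from itertools import product

proxies = ["GRUMP", "ISA", "nightlights", "population growth"]
methods = ["station pairing", "spatial gridding"]
datasets = ["raw", "TOB", "TOB+PHA", "TOB+PHA+nightlight",
            "TOB+rural-PHA", "TOB+urban-PHA"]
variables = ["Tmax", "Tmin"]

combos = list(product(proxies, methods, datasets, variables))
print(len(combos))  # 4 * 2 * 6 * 2 = 96
```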

As shown in Figure 2 from our paper, there are significant differences in the warming rate of urban and rural stations in the raw (and TOBs-adjusted) data that are largely eliminated by homogenization, even when that homogenization is limited to using only rural stations (to avoid the possibility of ‘spreading’ the urban signal).

This can also be seen in the figure below (from our paper’s supplementary information), which shows urban-rural differences over the 1895-2010 period using the spatial gridding method:

We conclude that homogenization does a good job of removing urban-correlated biases subsequent to 1930. Prior to that date, there are significantly fewer stations available in the network with which to detect breakpoints or localized trend biases, and homogenization does less well (though the newly released USHCN version 2.5 does substantially better than version 2.0). In general, there may be a need for additional urban-specific adjustments like those performed in NASA’s GISTEMP for areas and/or periods of time in which station density is sparse, but they appear largely unnecessary for the post-1930s CONUS data. The simple take-away is that while UHI and other urban-correlated biases are real (and can have a big effect), current methods of detecting and correcting localized breakpoints are generally effective in removing that bias. Blog claims that UHI explains any substantial fraction of the recent warming in the US are simply not supported by the data.
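For readers curious about the mechanics of the spatial gridding comparison described earlier, a stripped-down sketch might look like the following. The station anomalies are invented, and real analyses add refinements (area weighting, common anomaly periods) that are omitted here:

```python
# Sketch of the spatial gridding comparison: bin stations into lat/lon
# cells separately for urban and rural subsets, average within each cell,
# then average the cell means and difference the two CONUS-wide values.

def gridded_mean(stations, cell_deg=5.0):
    """Gridded mean: average stations within each cell, then average the
    cell means, so station-dense areas don't dominate the national value."""
    cells = {}
    for lat, lon, anomaly in stations:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells.setdefault(key, []).append(anomaly)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# (lat, lon, temperature anomaly); the urban list runs warmer by design
urban_stations = [(40.1, -105.1, 0.9), (40.2, -105.3, 1.1), (33.1, -97.0, 0.8)]
rural_stations = [(40.8, -106.2, 0.6), (33.5, -97.9, 0.4), (44.2, -100.5, 0.5)]

diff = gridded_mean(urban_stations) - gridded_mean(rural_stations)
print(round(diff, 2))  # urban-minus-rural gridded difference: 0.4
```

An urban-minus-rural difference that shrinks toward zero after homogenization is the signature discussed above.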

In case people are interested in playing around with our data or code and replicating our approach, all relevant materials to conduct the analysis (as well as versions of the code in both STATA and Java) are available on the NOAA NCDC FTP server. More detail on the specific methods used can be found in our paper, and our supplementary materials contain more detailed tests to ensure that homogenization was properly removing urban-correlated biases. This paper is also somewhat interesting in that it arose out of a blog post back in 2010, and represents a productive collaboration between a number of climate bloggers (Troy Masters, Ron Broberg, David Jones, and Zeke) and climate scientists at NCDC (Matt Menne and Claude Williams).

“Callendar grouped temperature data in England on a town by town basis based on population growth rates. In that way he effectively eliminated the urban heat island effect, which might have skewed his data.”

Unfortunately, it seems as if the article itself has gone behind a paywall; the Royal Met Society wants to whack you 24 pounds for a DVD of Callendar’s collected classic works. I don’t seem to have saved a copy of it, which I now regret. But the quote I lifted for my article on Callendar is material:

It is well known that temperatures, especially the night minimum, are a little higher near the centre of a large town than they are in the surrounding country districts; if, therefore, a large number of buildings have accumulated in the vicinity of a station during the period under consideration, the departures at that station would be influenced thereby and a rising trend would be expected.
To examine this point I have divided the observations into three classes, as follows:–
(i) First class exposures, small ocean islands or exposed land regions without a material accumulation of buildings.
(ii) Small towns, which have not materially increased in size.
(iii) Large towns, most of which have increased considerably during the last half century.

I’m not sure why you think we look at average temperature, as, as far as I recall, it is discussed nowhere in the paper. We primarily focus on minimum temperatures, though we also show results for the same analysis using maximum temperatures.

Your suggestion of looking at seasonal variations in UHI effects is a good one. It fell somewhat outside the scope of our work, however, as we were primarily looking at overall trend biases and their impact on multi-decadal temperature trends, as well as the extent to which they are captured and corrected in the course of homogenization. There could certainly be some follow-up work looking at seasonal effects; I might take a stab at it when I have some free time, or you could download our data and code and run the analysis yourself relatively easily.
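For anyone wanting a starting point on that follow-up, a minimal sketch of the seasonal grouping (with an invented monthly urban-minus-rural difference series) could look like:

```python
# Group a monthly urban-minus-rural difference series by meteorological
# season and compare the seasonal means. Monthly values are invented.
SEASONS = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
           6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def seasonal_means(monthly):
    """monthly: {month 1-12: urban-minus-rural difference} -> seasonal means."""
    groups = {}
    for month, diff in monthly.items():
        groups.setdefault(SEASONS[month], []).append(diff)
    return {s: sum(v) / len(v) for s, v in groups.items()}

# Hypothetical differences, largest in winter and smallest in summer
monthly = {1: 0.30, 2: 0.28, 3: 0.20, 4: 0.18, 5: 0.16, 6: 0.10,
           7: 0.08, 8: 0.09, 9: 0.15, 10: 0.18, 11: 0.22, 12: 0.29}
sm = seasonal_means(monthly)
print({s: round(m, 2) for s, m in sm.items()})
```

The same grouping applied to real paired-station differences would show whether the UHI signal has the seasonal structure the commenter suggests.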

My sequel to Warm Front, Heat Wave, is live at the Amazon Kindle store.

An assassin executes a U.S. presidential candidate.
To witness and eccentric reporter Sam Emory, it echoes a killing three years earlier that transformed his life.
Voters elect a new, environmentalist president, a colleague of the slain candidate.
But a powerful industrialist stops at nothing, including murder, to control world energy markets.
And this isn’t his first murder.
Emory and the president are the next in line.
HEAT WAVE
In the not-so-distant future, an assassin kills a U.S. presidential candidate seeking to fix a world ravaged by climate change, and Sam Emory uncovers a chain of murders with a megalomaniac industrialist at its core. The newly elected president vows to solve the climate crisis. Can Emory and his friends stop the assassin from striking again?
HEAT WAVE begins in Chena Hot Springs, Alaska, but the political intrigue and murder spread to Washington, D.C., and into the labyrinth of an Aspen, Colorado, energy research facility, where free-marketers manufacture chaos in the electrical grid, and where Emory confronts a terrifying secret from his past.

There was I thinking that an uncorrected UHI bias was the reason the Arctic was melting faster than anyone had predicted.

But seriously, I just read Mike Mann’s The Hockey Stick and the Climate Wars and that really brought home to me that all these various spurious attacks on science have only one goal: confusion.

The book also prompted me to look up what had happened with the investigation into allegations that the Wegman report, the only “independent” adverse finding against a climate scientist, was heavily plagiarised. It would seem that the only person who has actually been found guilty of misconduct after many investigations of climate scientists is a contrarian – even if it appears that the guilty finding is a minor wrist-slap for an offence that most universities would consider very serious if committed by a student.

My university has a large journalism school, and I remind them when I can that reporting science is not only the responsibility of scientists. Maybe journalists don’t grok calculus or principal component analysis, but they can spot the bleeding obvious: if climate science really were a big fat hoax, the fossil fuel industry has more than adequate resources to debunk it properly. So why all the character assassination, and throwing resources at cranks?

“… 2007-2011, 10 of those organizations, including the Heartland Institute and the Competitive Enterprise Institute, received a combined total of almost $16.3 million from ExxonMobil and three foundations supported by oil and gas companies, according to the Checks & Balances Project. In the same time period, the organizations were mentioned 1,010 times in articles about energy issues in 58 daily newspapers, as well as the Associated Press and Politico, but the media described their financial ties to the fossil fuels industry only 6 percent of the time.

“Most of the time (53 percent), news outlets used only an organization’s name—no more, no less. Occasionally, they would describe the organization’s ideology, with terms like “conservative” (17 percent) or “libertarian” (6 percent) and rarely by its location (3 percent) or function (e.g. “think tank” or “nonpartisan” group) (3 percent)….”

Titus,
Actually, independent of tax incentives, utility providers have a strong incentive to reduce at least the rate of growth in energy production, as it means they do not have to sink profits into costly, long-term projects such as new power plants (of whatever flavor). This means that they can continue to pay high dividends and maintain a high stock price as a defense against takeovers. Matter of fact, the only non-petroleum company I can think of that had “growth” as part of its long-term strategy was Enron. And that worked out so well.

> sink profits into costly, long-term projects such as new power plants

Those are treated differently, at least in the US: in most cases there’s an actual income gain to the utility from building new power plants (because they pass the cost on to the consumers), while efficiency improvements merely reduce their costs slightly. It’s not simple, and it’s vastly fiddled with by lobbying, rulemaking, and loopholing.

One of the great experiments in political science: how long can a government continue to function as the ratio of loopholes to sensible policy increases. Eventually the whole thing falls, riddled from the inside as tho’ by termites.

For starters, $16m over 4 yrs to 10 institutions is minuscule. And why shouldn’t they cover all the think tanks and their like of every persuasion? Seems good standard business tactics to me and covers all bases. Also helps to keep everybody honest. It’s tough out there.

Ray Ladbury @60 – I agree. Just don’t agree that they get a double hike in profits paid from my taxes.

Titus, you miss my fundamental point. They are doing this rather than funding a definitive study to debunk climate science. If they could debunk the science, why wouldn’t they? It would solve the entire problem for them. And they would only have to do it once, not spend $4 million a year plus whatever it costs to buy off politicians (a lot more than that). My issue is not that they fund this campaign, but that journalists who should be professional BS detectors get conned by this sort of campaign time and time again.

This is all so well known, it seems pointless to repeat here. I’ve posted a few articles on my blogs (e.g. here – note the link to the NY Times article on how the industry’s own scientists told them the basic science is sound way back in 1995).

Philip,
I do not think anyone could debunk the science at this point. There is simply too much uncertainty in the available data to make any definitive assertion as such, and to paraphrase Titus, $4M is pocket change.

If you want to point someone to a climate website, I would suggest a more scientifically-based site, rather than ones that admittedly use any method possible to debunk arguments. What was that about honesty? Seems to be quite similar to Titus’ statement about all sides of the debate.

Not sure what parallels you are drawing regarding Clive Palmer’s exploits and his statement.

Please allow me to digress a bit:
I see folks referencing the tobacco battle. In my opinion the ‘deniers’ (the name you appear to use) played a crucial role in successfully getting the dangers accepted by Joe Public. Joe was able to see that this was not one-sided, big government saying what people should do just because it said so. They were able to make a choice, and that’s huge in the adoption that follows. This was my fundamental point.

You are absolutely right to encourage those journalistic friends of yours not to be conned. Investigative reporting is after all what they are trained to do. They need to be called out and had best move on if they cannot.

Dan H. and Titus,
There really is no scientific debate about anthropogenic causation of the current warming trend. None. All of those making actual scientific arguments are on one side. On the other side–the Calvinball All Stars. “Anything but CO2” is not a scientific position.

PS for Titus — yes, there are also nut cases who always trust the government.
The great delaying tactic is to fund the nut cases, to hollow out the center where policies can be agreed on by people; Big Tobacco does that.

Ray Ladbury @72: I’m no scientist, but I was taught that part of science IS about trying to knock a theory down. If science is not doing that, then hats off to anybody who keeps up the SCIENTIFIC approach. It’s not the role of science to control policy. That’s not science. It’s just one input to the policy making process. Let’s hear more about ‘why catastrophic predictions of 15 years ago have failed’, ‘how little to no warming in recent years causes extreme events now that are caused by warming’, ‘why sea level continues to only rise at the same rate when acceleration was predicted’, etc. etc. Lots of Joes want to hear about this in language they understand. That’s what we expect from science. Over and over again if necessary. Never mind what policy makers make of it all.

Hank Roberts @73: Of course ‘delay’ is a tactic. My goodness, what’s new? Science should stick to science and leave the messy business of politics and business to the policy makers. Science is doing itself no favors.

Ray,
Not sure where you get the idea that there is no debate. I suggest a reading of the most recent scientific literature about causation of the recent warming trend. Scientific arguments are not limited to “one side,” as if there were different sides. There is more a continuum of thinking, with only the extremes engaging in your anti-science positions. Coincidentally, it is the extremists who do not participate in the scientific debate. Rather, they argue like a drunk in a barroom, offering no valid scientific basis for their arguments and refusing to listen to reason.

1) Yes, science ‘knocks down’ inadequate hypotheses. However, the process doesn’t continue ad infinitum in practice, as the time required to test hypotheses is not infinite. For example, no-one is re-running Galileo’s experiments on fall rates of heavier and lighter objects at this point. Hypothesis testing in the matter of greenhouse climate change has by and large moved on from the kinds of questions Ray was talking about. The kinds of questions that ARE being investigated, you can read about right here.

2) What predictions of 15 years ago, specifically?

3) It’s not entirely clear what you mean by ‘how little to no warming in recent years causes extreme events now that are caused by warming.’ However, perhaps this will help: while there has been comparatively little further ‘warming’ over the last decade or so, that period has continued to be extremely warm by the standards of recent history.

For example, the page here tabulates NCDC decadal mean anomalies. The 2000s were by far the warmest decade on record, with a mean anomaly of 0.531. By comparison, the 90s averaged 0.313; the 80s, 0.176; and everything before that is in negative anomalies. (Well, not quite: the 40s just barely nudged into positive territory.)
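For what it's worth, decadal mean anomalies of this kind are just averages of each decade's annual values; with invented annual anomalies, the computation is:

```python
# Form decadal mean anomalies from annual anomalies.
# The annual values below are invented stand-ins, not NCDC's figures.

def decadal_means(annual):
    """annual: {year: anomaly} -> {decade start: mean anomaly}."""
    decades = {}
    for year, anom in annual.items():
        decades.setdefault(year // 10 * 10, []).append(anom)
    return {d: sum(v) / len(v) for d, v in sorted(decades.items())}

# Hypothetical annual anomalies for two decades
annual = {y: 0.30 + 0.002 * (y - 1990) for y in range(1990, 2000)}
annual.update({y: 0.50 + 0.006 * (y - 2000) for y in range(2000, 2010)})
means = decadal_means(annual)
print({d: round(m, 3) for d, m in means.items()})
```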

So: the extreme events are driven by warm temperatures, not the rate of change of said temps. Make sense?

(Warning: it’s a lengthy post, and not all of it is relevant to sea level rise.)

4) “Science should stick to science and leave the messy business of politics and business to the policy makers.”

Well, for the most part ‘science’ has–for example, the IPCC has strictly avoided policy recommendations. However, individual scientists can and should, as citizens, appropriately share their expertise.

On the other hand, I’d have to question why it is that certain ‘policy wonks’–say, over at the Heartland Institute, for example–don’t stick to the messy business of politics and business and leave science to the scientists.

You said- “Lets hear more about ‘why catastrophic predictions of 15 years ago have failed’, ‘how little to no warming in recent years causes extreme events now that are caused by warming’, ‘why sea level continues to only rise at the same rate when accelaration was predicted’ Etc etc.”

These comments sound bogus to me, but please prove me wrong by providing some citations that would substantiate your claims. Steve

Really Dan H., I’d love, just love to hear more about some of “the most recent scientific literature about causation of the recent warming trend.” Because everything I’ve seen still says CO2 is a greenhouse gas.

By all means, Titus, you knock a theory down whenever it is possible. However, you knock it down with another scientific theory. “Anything but CO2” is not a scientific theory.

Now as to your other accusations, I will begin with a general observation: Why is it that denialists are incapable of citing specifics? I would think that you could come up with at least one study to back up your claim. Can you?

1) Which catastrophic predictions of 15 years ago? I don’t know of any in the mainstream scientific literature. Do you?

3) Little or no warming in 15 years. Hmm. Could you explain to me how you can have a situation where 9 of the last 10 years are in the top ten warmest years if it isn’t warming? Can you explain why you choose 1998–the biggest El Nino in memory as your starting point? Can you explain why even then, 4 of the 5 main temperature series show positive trends?

So, Titus, who is telling you these lies? Why do you consider them worth repeating without verifying them yourself? Ever wonder what else they might be lying to you about?

Titus, I suggest you argue with the ignorant if you wish to argue from a position of ignorance. More interesting to me is finding out things I didn’t know and adding to my knowledge, for which this is an excellent site.

Arguing that scientists should leave policy to people who really know stuff is such a screwed up starting point that I’m not going there. Suffice to say I’ve been involved in both politics and policy-making, and science is a far more rational starting point than either of the above.

Here’s something I hope RC will do an article on soon: Bedmap2. A few key points:

• The volume of ice in Antarctica is 4.6% greater than previously thought
• The mean bed depth of Antarctica, at 95 metres, is 60 m lower than estimated
• The volume of ice that is grounded with a bed below sea level is 23% greater than originally thought, meaning there is a larger volume of ice that is susceptible to rapid melting.
• The ice that rests just below sea level is vulnerable to warming from ocean currents
• The total potential contribution to global sea level rise from Antarctica is 58 metres, similar to previous estimates but a much more accurate measurement
• The new deepest point, under Byrd Glacier, is around 400 metres deeper than the previously identified deepest point

Silly question I am sure, but I am wondering whether there would be any effect on UHI temperature with extensive light pollution such as is the case in Las Vegas? How do the satellite nightlights work in this case?

I would expect a fairly large amount of UHI bias in places like Las Vegas, and in general nightlights were one of the more sensitive proxies we examined (e.g. areas designated urban by nightlights had higher trends than comparable rural locations). However, we found that the homogenization process is pretty effective in detecting and correcting UHI-driven trend biases, provided there are a sufficient number of surrounding stations that are not subject to the same bias at the same time.

Courtney, compared to what? Other cities with as many lights but less light pollution? Other cities with less electricity used per population size? Are you asking about using the brightness to estimate whether the area is urbanized? Or asking if extremely optically bright Las Vegas gets misinterpreted by the instruments in the satellite used to infer temperature? Or something else, just guessing …