Saturday, May 21, 2011

Let us continue our exploration of Fall, et al., the al. being played by Watts, Nielsen-Gammon, Jones, Niyogi, Christy and Pielke Sr. Credit where credit is due of course, but Eli thinks that after he drives his combine harvester through their carrot patch they may not be so happy.

This is probably a good time to roll out a comparison of the Menne et al. abstract and our corresponding results. Menne et al.'s statements are in italics, each followed by the corresponding finding from our paper. Some agreements, some disagreements. Not shown are additional results from our paper.

There is a mean bias associated with poor exposure sites relative to good exposure sites.

Confirmed.

This bias is consistent with previously documented changes associated with the widespread conversion to electronic sensors in the USHCN during the last 25 years.

The evolution of the bias shows a major contribution at the time of sensor conversion roughly consistent with but not entirely attributable to the sensor change, plus other bias changes over time.

Associated instrument changes have led to an artificial negative bias in maximum temperatures.

Siting differences and associated instrument changes have led to an artificial negative bias in maximum temperature trends (same finding, different interpretation).

Associated instrument changes have led to only a slight positive bias in minimum temperatures.

Siting differences and associated instrument changes have led to an artificial positive bias in minimum temperature trends, similar in magnitude to the negative bias in maximum temperature trends.

Adjustments applied to USHCN Version 2 data largely account for the impact of instrument and siting changes.

The adjustments for instrument and siting changes tend to reduce the impact by about half but do not eliminate it.

A small residual negative bias appears to remain in the adjusted maximum temperature series.

We find no evidence that the CONUS average temperature trends are inflated due to poor station siting.

Neither do we, but important questions remain regarding the effect of the adjustments and the different effects of siting and instruments that may bear on the CONUS average temperature trends.

So Eli, being a RTFR kinda bunny, asked where the data was, and John pointed. Many thanks, and Eli went and got and extracted the Excel file with the results. Now to be honest, Eli was not looking for what he found, but what he found has implications both for Fall et al. and elsewhere (tho not so much for GISTEMP). When Eli unzipped Final List.xls he sorted it by Watts Rank (1-5, with 5 being the worst stations) and by location: Rural, Suburban and Urban.

Then, thanks to Gatesian logic, the Rabett compared the fraction of stations in each Watts Rank by location, and the total count:

rank        1      2      3      4      5
rural      0.43   0.52   0.68   0.68   0.53
suburban   0.21   0.24   0.21   0.25   0.24
urban      0.29   0.21   0.10   0.07   0.16
Total        14     67    222    662     68
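For bunnies who want to replicate the tally, here is a minimal sketch of how the within-rank fractions could be computed. The column names ("Rank", "Location") and the toy data are assumptions standing in for the actual spreadsheet:

```python
import pandas as pd

# Hypothetical reconstruction of the tally above; the real column names
# in "Final List.xls" may differ.
def location_fractions(df):
    """Fraction of rural/suburban/urban stations within each Watts rank."""
    counts = pd.crosstab(df["Location"], df["Rank"])
    return counts / counts.sum(axis=0)  # normalize each rank column to 1

# Toy data standing in for the spreadsheet:
df = pd.DataFrame({
    "Rank": [1, 1, 2, 2, 2, 3, 3],
    "Location": ["Rural", "Urban", "Rural", "Rural", "Suburban",
                 "Urban", "Rural"],
})
fractions = location_fractions(df)
```

Each column of `fractions` sums to 1, so the entries are directly comparable across ranks even though the ranks have wildly different station counts.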

It was an ah choo moment, because clearly rural stations are relatively under-represented in categories 1 and 2, but relatively over-represented in the worst three rankings.

The implication of this is that Fall and Co. (and Menne) cannot and should not simply compare results from categories with each other, but should first look and see how the rural, suburban and urban distributions vary within categories, and indeed they do. Let the bunnies look at this for a couple of categories (it gets very Tamino-like), starting with Category 2

and Rabett Labs sees pretty much the same thing for the trends in Tmin and Tmax, with what looks like two classes of rural stations. This is equally clear in WR3. How about Watts Rank 4?

UPDATE: This was originally switched with Tmax for WR3; John N-G pointed this out. The asymmetry between the urban and rural remains, but the suburban is more like the rural.

It's a little harder, but there is a bump on the right-hand side of the rural distribution, which you can see more clearly in the trend for Tmin for WR4 between the urban and the rural.

UPDATE: Same as above, for WR4.

Tmean for WR3

Tmax for WR3

So, where does this leave us?

1. Fall, et al. fell off the carrot truck into the harvester because they did not correct for location bias, which is a hoot and a half given how Watts and Pielke have gone on for centuries about the UHI, the urban heat island effect, but this appears to be the RRE, the rural refrigerator effect.

2. If Eli compares the list of stations GISS uses with those used by Fall, et al., Fall appear to omit some urban and some airport stations, although he was too foul to look at what was in the USHCN and what was not.

3. Without the RRE stations, trends in Tmean, Tmax and Tmin appear to match pretty well within Watts Ranks and across them and for rural, suburban and urban stations (Eyeball Stats).

4. The RRE stations appear to have pretty damn close to zero trends in Tmean, Tmax and Tmin.

5. What differentiates the RRE stations is not clear to Eli. Probably requires digging deep into the metadata.

6. There is at least one paper in there. Please acknowledge Rabett Labs, E. Rabett Prop.

7. If you want a copy of the Excel spreadsheet, put a note in the comments.
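Point 3 above can be sketched roughly like this. The field names, the near-zero-trend threshold, and the toy numbers are all assumptions for illustration, not the actual Rabett Labs analysis:

```python
# Sketch: flag candidate "RRE" stations as rural stations whose Tmean
# trend is near zero, then compare per-rank mean trends with and
# without them. Threshold eps is an arbitrary illustrative choice.
def mean_trend_by_rank(stations, exclude_rre=False, eps=0.02):
    """stations: list of dicts with 'rank', 'location', 'tmean_trend'."""
    sums, counts = {}, {}
    for s in stations:
        if exclude_rre and s["location"] == "Rural" and abs(s["tmean_trend"]) < eps:
            continue  # drop near-zero-trend rural stations
        r = s["rank"]
        sums[r] = sums.get(r, 0.0) + s["tmean_trend"]
        counts[r] = counts.get(r, 0) + 1
    return {r: sums[r] / counts[r] for r in sums}

# Toy stations, not real data:
stations = [
    {"rank": 2, "location": "Rural", "tmean_trend": 0.00},
    {"rank": 2, "location": "Rural", "tmean_trend": 0.25},
    {"rank": 2, "location": "Urban", "tmean_trend": 0.30},
]
with_all = mean_trend_by_rank(stations)        # flat rural station included
without = mean_trend_by_rank(stations, True)   # flat rural station excluded
```

The point of the sketch is just that a handful of near-zero-trend rural stations drags the rank mean down, which is why within-rank composition matters before comparing ranks.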

UPDATE: John N-G points out that for determining a US wide trend proper area weighting has to be used. True enough, but to average something, it helps to average apples, not apples and pears.
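For reference, here is a minimal sketch of what proper area weighting can look like: average stations into grid cells first, then weight each cell's mean by the cosine of its latitude so equal angular cells count in proportion to their actual area. This is a generic illustration, not the weighting scheme used in Fall et al.:

```python
import math

def area_weighted_trend(cells):
    """cells: list of (lat_degrees, mean_trend_in_cell) pairs.

    Weight each grid cell by cos(latitude), since an equal-angle cell
    shrinks in area toward the poles.
    """
    num = sum(math.cos(math.radians(lat)) * t for lat, t in cells)
    den = sum(math.cos(math.radians(lat)) for lat, _ in cells)
    return num / den

# Two toy cells, not real data:
cells = [(30.0, 0.2), (45.0, 0.3)]
trend = area_weighted_trend(cells)
```

The apples-and-pears point stands either way: averaging within a cell still mixes distributions if rural and urban stations in that cell behave differently.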

23 comments:

That's why we posted the data, so that people could play around with it and see what they can come up with. Glad you're digging in.

There were so few good stations (CRN 1&2) that if you then try to compute national averages taking into account the geographical distribution of stations, there are just too few to be reliable if you subdivide the CRN 1&2 into three urbanization categories. The same applies to CRN 5, which has even fewer stations.

Indeed, one reviewer had a hard time believing that one could compute a reliable national trend with even the full set of CRN 5 stations, because the eastern two-thirds of the US was so sparsely sampled. To satisfy him or her, we added a trend analysis for the eastern two-thirds of the US to show that our overall results applied to that limited domain also, and pointed out that Vose and Menne (2004) found that such a network was just capable of distinguishing the differences in max and min trends that we found.

The DTR trend differences were so big that they were well beyond the sampling threshold, so perhaps even a very sparse network might be able to distinguish separate effects of urbanization and siting among the best and worst sited stations.

/snark Besides, everyone knows that the urbanization effect on trends is probably less than 0.0055C/decade (Jones et al. 1990, Parker 2004, Peterson & Owen 2005, Brohan et al. 2006). Using that number and your Gatesian table values, a back-of-the-envelope calculation suggests that the impact of urbanization on trend differences in the US should be only about 5%-20% of the significant trend differences we found. And, since urbanization ought to preferentially increase the minimum temperature trends, it would have caused us to underestimate the siting dependence on minimum temperature and diurnal temperature range. Thanks for shoring up our results. snark/

Seriously though, we are looking at urban/rural issues as part of our next paper. This first one was designed to be similar to Menne et al.

Based on NOAA's classification, there are only 113 urban stations in the USHCN network (228 suburban and 820 rural). Out of the 113 urban stations, 100 were surveyed by SurfaceStations: CRN1: 4 stations, CRN2: 14, CRN3: 23, CRN4: 47, CRN5: 12.

Moreover, urban stations are mostly found in two NCDC climate divisions (47% of the surveyed sites are in the Northeast and Southwest). Very different climate backgrounds indeed.

It is unlikely that at national scale, results obtained from such a sparse and small population (especially for best and worst sites) are reliable.

John, the two components in the rural stations just jump out at anyone looking at the data. They are present in all of the categories. The lower trend due to the stations with the Rural Refrigeration Effect (RRE) is what is driving the differences you and Menne see.

Since WR1 and WR2 are "over urbanized", comparisons between ranks without taking this into account are meaningless. By eyeball the RRE pretty well accounts for the difference between WR 1 and 2 and WR 3-5 trends. Something you should have picked up.

[wegman]As anyone can tell, the suburbanization of the Western world has driven depopulation of urban areas, a prime example of which is Houston, a former city squashed flat by lack of zoning. The effects of UHI on temperature trends have always been overestimated. Given that temperature is a measure of energy intensity, not amount, State Climatologists should have demographers on all their papers as coauthors.[/Wegman]

Eli - Good thing there aren't any USHCN stations within the Houston metropolitan area. (Actually, it IS a good thing.) I look forward to more RRE analysis, but beware of s-f's concern. I have difficulty believing they're driving the differences we see when we see differences of opposite sign (which, ultimately, is why we felt comfortable not pursuing the urbanization angle in the first paper).

M- Menne et al. showed that CRN validates USHCN's year-to-year variability quite well (or vice versa). For multi-decade trends, I suppose we'll need multiple decades. That would put us somewhere in the 2020s. It'll be great for the urban-rural effect by then too. Alas, it will never be useful for understanding the effects of the big MMTS instrument conversion; we're planning on using non-MMTS USHCN stations for that purpose while controlling for siting quality.

Without the RRE there is no significant difference between any of the Watts Rankings, so the whole ballgame becomes the rural stations on the left-hand (lower trend) side of the graphs. Fall et al.'s conclusions about WR effects come from the mismatch between the station distributions of the lower two ranks and the upper three, not from some quality difference in the stations. In other words it is an artifact because stations are not properly averaged. Fall et al. nowhere comment on this.

So your choice is

1. Fall et al. missed the carrot truck as it went by (two of the authors have posted here).
2. They saw it and swept it under the rug, in which case the bunnies want their carrots back.

Actually, you're doing a Loehle: averaging stations together without regard for their geographical distribution. That, and the sign thing, and the magnitude thing, still leave me wanting to see a more careful analysis. And then there's the station count thing: your Watts Rank 4 graphs only show about 240 out of 662 stations.

As the only person in the room sitting in the middle of these two points of view, I declare that I am right and that the UHI effect is both real and not as bad as all the alarmists say. There's no need to take any action, and by the way did I say that I was the most reasonable person here because I have nothing to gain personally.

John, you were right about the graphs for WR4; they have been updated. The difference between the urban and the rural remains. However, the point is that you can't simply average the rural and the urban together across the Watts Ranks, even with proper area weighting, because they have different distributions.

Being another do-it-yourself type, I also took a quick look at the Final List spreadsheet. As Eli suggests, there is indeed a statistically significant association between environs (rural/suburban/urban) and Watts rank (whether the original 1-5 scale, or combining 1 and 2, also 4 and 5, to get just three categories with more data in each).

But using the numbers I have, there also are statistically significant associations between Watts rank (both versions) and geographical area. For example, only 2% of E North Central and 3% of New England stations have a Watts rank of 1 or 2, compared with 12% in the Mountain or E South Central areas (US census divisions).

So at first glance there's some geographical structure, along both rural/urban and regional lines.
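The association test described above can be sketched with a plain chi-square statistic on an environs-by-rank contingency table, using only the standard library. The counts below are toy values, not the actual spreadsheet tallies:

```python
from collections import defaultdict

def chi_square(table):
    """Pearson chi-square statistic for a contingency table.

    table: dict of dicts, table[environ][rank] = observed count.
    """
    row_tot = {e: sum(r.values()) for e, r in table.items()}
    col_tot = defaultdict(float)
    for r in table.values():
        for k, v in r.items():
            col_tot[k] += v
    grand = sum(row_tot.values())
    stat = 0.0
    for e, r in table.items():
        for k in col_tot:
            expected = row_tot[e] * col_tot[k] / grand
            stat += (r.get(k, 0) - expected) ** 2 / expected
    return stat

# Toy 2x2 table (environs x rank), not the real data:
table = {"Rural": {1: 6, 2: 35}, "Urban": {1: 8, 2: 32}}
stat = chi_square(table)
```

The statistic is then compared against a chi-square critical value with (rows-1)*(cols-1) degrees of freedom; scipy.stats.chi2_contingency does the same job in one call if SciPy is available.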

In the file "Final List.xls" what are the last three columns? I'd guess they're linear regression trend rates for mean temp, max temp, min temp -- but over what time interval? 1979.0 to 2009.0? 1895.0 to 2009.0? In what units? Deg.F/yr? Deg.C/decade?

Shouldn't the trend in mean temp be at least *approximately* equal to the average of the trends in max and min temp? Why is it that for the very 1st station (11084), the mean trend (-0.096) doesn't even fall within the range of the max trend (-0.062) and the min trend (+0.087)?
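A quick arithmetic check of the inconsistency flagged here, using only the three values quoted above:

```python
# Trend values for station 11084 as quoted in the comment above:
tmax_trend, tmin_trend, tmean_trend = -0.062, 0.087, -0.096

# The mean trend should roughly equal the midpoint of max and min,
# and at minimum fall between them.
midpoint = (tmax_trend + tmin_trend) / 2
between = min(tmax_trend, tmin_trend) <= tmean_trend <= max(tmax_trend, tmin_trend)
```

The midpoint comes out to +0.0125, while the quoted mean trend is -0.096 and lies outside the max-min range entirely, which is what prompts the question.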

Tamino - That file isn't one that I generated or used, it must be preliminary analysis done by one of the other coauthors. I worked with the raw data (9641*), interpolated values from NARR (NARRtoHCN.tar.gz), and station ratings tables (RATINGS*.csv), using mostly hcn.monte.trends.py. I didn't include output files in the SI, thinking that the output was already shown in the paper.

Rabett Run
