Responses from Parker

CA reader and contributor Neal J. King got a bevy of questions answered related to the June 14th posting of Parker 2006: An Urban Myth? and they are posted here in order. Thanks to Neal and Dr. Parker for making this available to all for reading and consideration. – Anthony

Questions to Parker, Part 1:

Question: We have noticed that, of the 290 stations included in the analysis, the U.S. sites seem to be at airports. Can you estimate what proportion of these sites are at airports, and do you think there are any site considerations applicable to airports that would affect your results? In general, can you explain why you think that the sampling used in this study is representative for the question of estimating the impact of increasing UHI on a global-warming measure?

Response:

I did not have metadata for the sites. Many stations were selected from the GCOS Surface Network (GSN) (Peterson, T.C., Daan, H. and Jones, P.D., "Initial selection of a GCOS surface network." Bulletin of the American Meteorological Society, 78, 2145-2152 (1997)). So stations often satisfy the GSN criteria. You may be able to get GSN metadata from NOAA/NCDC or from http://gosic.org/ios/GCOS_main_page.htm. The reduction of heat-island effects during windy weather should be applicable to airports as well as to other sites, because of vertical as well as horizontal mixing. The global warming rate at the stations used in the analysis, using all days' data, is the same as that reported using all available stations by Jones, P.D. and A. Moberg, "Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001", Journal of Climate, 16, 206-223 (2003). I noted this in my paper. Therefore the set of stations I used is, as a whole, likely to be representative of the larger sets used by Jones and Moberg and other groups. If it had had more (less) urbanization trend overall than Jones and Moberg, it would likely have had more (less) warming. As it had the same amount of warming, it likely had about the same amount of urbanization trend, given that the stations were spread as widely worldwide as data availability allowed (all networks have sparser coverage in the tropics). As the windy-night trends equalled the calm-night trends, the urbanization trend must have been small.

Question to Parker, Part 2:

Question: In Appendix C of Parker (2005), you studied the correlation between the actual station wind speeds and the NCEP-NCAR reanalysis values for 26 stations. This is important, since one would ideally have used actual wind speeds at the same time as the temperature readings for all stations and measurements. These 26 stations seem to be higher-latitude stations. Would that give rise to any selection effects?

Response:
The choice of 26 stations was limited by availability of data. Reanalysis winds are likely to have been equally representative at other locations owing to the availability of pressure data to control the reanalysis; one exception being in mountainous terrain.

Questions to Parker, Part 3:

Question:

In Table 1 of Parker (2005), the windy trend exceeds the calm trend in 5 regions (Arctic, Europe, Asia, North America, Australasia), and the calm trend equals the windy trend in only the Tropics region. However, the global average shows that the calm trend equals the windy trend overall. According to information on 265 stations that you have provided separately to Steve McIntyre, there would seem to be 224 stations in the first collection, and 41 in the Tropics; I suppose there should be another 25 which were not included. Can you provide some insight into how this works out?
Response:

Some stations in the list were rejected from the regional and global analyses, as tabulated in Appendix B of my paper. Trends do not always combine in a simple linear manner when combining samples, because of nonlinearity in the least-squares process.
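Parker's point about trends not combining linearly can be seen in a small synthetic example (invented numbers, not Parker's data): two subsets can each have the same least-squares slope, yet the pooled fit gives a different slope, because pooling also fits the offset between the groups.

```python
import numpy as np

# Synthetic illustration: two subsets with identical internal slopes,
# but different spans and offsets, pool to a DIFFERENT slope.
t1 = np.arange(0, 10)            # subset A: early, short span
y1 = 0.10 * t1 + 1.0             # internal slope 0.10
t2 = np.arange(5, 30)            # subset B: longer span, lower offset
y2 = 0.10 * t2 - 0.5             # internal slope 0.10

slope_a = np.polyfit(t1, y1, 1)[0]
slope_b = np.polyfit(t2, y2, 1)[0]
slope_pooled = np.polyfit(np.concatenate([t1, t2]),
                          np.concatenate([y1, y2]), 1)[0]
print(slope_a, slope_b, slope_pooled)  # 0.10, 0.10, ~0.046
```

The pooled slope is pulled toward the between-group offset, so it need not lie anywhere near the subset slopes.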

Questions to Parker, Part 4:

Question:

In Table 1 of Parker (2005), for the North American region, the trends for "All" days exceed those for "Windy" and for "Calm". This seems a bit odd. Can you clarify that?

Response:

“Windy” is the windiest third of days, and “calm” the least windy third. “All” includes everything, including therefore the middle third. The differences are well within the error bars cited in the table. Also, trends do not always combine in a simple linear manner when combining samples, because of nonlinearity in the least-squares process.

Questions to Parker, Part 5:

Question:

Given that there can be no doubt that urbanization is going on, it is a surprise that, on a global level, there seems to be no visible growth in the UHI. Even if it is not comparable to the global-warming signal, one would expect to see something. Can you speculate as to why nothing seems to show up, at the global level?

Response:

The selections of stations made for GSN by Peterson, T.C., Daan, H. and Jones, P.D (1997), and for global monitoring and trend estimation by Jones and Moberg (2003) cited above were carefully made to avoid severe urban biases. I never challenged the reality of urban heat islands, and merely assert that the station selection has largely succeeded in avoiding locations with increasing urban effects.

Questions to Parker, Part 6:

Question:

What is the minimum global UHI trend that could be detected, using these methods?

Response:

From the standard errors in Table 1 of my J Climate paper, the calm-night trends and windy-night trends for the globe have 95% confidence limits (± 2 standard deviations) of 0.05 and 0.06 deg C per decade. So the difference between calm trends and windy trends for the globe can be estimated with 95% confidence within √(0.05² + 0.06²) ≈ 0.078 deg C per decade. If we then assume that nearly all of any urban effect will be concentrated in the calm nights, which were defined as the calmest third of nights, then overall urbanisation trends of about 0.03 deg C per decade (a bit more than a third of 0.078 deg C per decade) in minimum temperature should be readily detectable. If more conservatively we assume that not much more than half the urban warming effect is concentrated in the calm nights, with the rest in the intermediate-wind-strength nights, then urbanisation trends of about 0.05 deg C per decade in minimum temperature should be readily detectable. As urbanisation is felt in minimum temperatures much more than in maximum temperatures (which may even be reduced), an urbanisation trend of 0.025 deg C per decade in mean temperature should be detectable using the more conservative assumption. This is about 10 times smaller than the rates of global warming over land since the late 1970s reported in the IPCC 4th Assessment. The more conservative assumption allows for some stations to be affected by large heat islands which persist to some extent even in windy weather (Morris et al., J. Applied Meteorology, 40, 169-182 (2001)); but GSN stations will almost always be in smaller settlements than Melbourne, with smaller heat islands easily reduced by any wind, or with no heat island at all. None of the US stations used in my J Climate paper is in a city with a population approaching that of Melbourne (3.8 million).
[Note that Morris et al’s true heat island in windy weather is about 0.2 deg C weaker than apparent in their results, because the urban station is about 20m lower than the average of the other stations].
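The detectability arithmetic in this reply can be replayed directly; the only inputs are the 95% limits Parker quotes from Table 1 of his paper.

```python
import math

# Replaying the detection-threshold arithmetic in the reply above.
ci_calm, ci_windy = 0.05, 0.06            # 95% limits, deg C per decade
ci_diff = math.hypot(ci_calm, ci_windy)   # quadrature sum, ~0.078

# If ALL urban warming falls in the calmest third of nights, the
# calm-minus-windy trend difference is 3x the all-nights urban trend:
u_detect_all = ci_diff / 3                # ~0.026 deg C per decade

# If only about half of it falls in the calmest third, the factor is 1.5:
u_detect_half = ci_diff / 1.5             # ~0.052 deg C per decade

print(round(ci_diff, 3), round(u_detect_all, 3), round(u_detect_half, 3))
```

These reproduce the "about 0.03" and "about 0.05" deg C per decade thresholds stated in the reply.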

Questions to Parker, Part 7:

Question:

Your study is based upon the understanding that the difference between the calm and windy day/night measurements reflects the UHI and near-surface temperature inversions. An alternative view has been suggested in our discussion: that the windy day/night measurements reflect the influence of air from a broader region, which has a lesser UHI. This is a picture in which we have an urban hotspot surrounded by a larger suburban region, both of which are undergoing an increase in UHI, but the suburban UHI is always weaker. (This is one reason why the airport question comes up earlier, in Part 1.) In this view, the trend in the urban UHI would be hidden by the equal trend in the suburban UHI. Can you comment on the plausibility of this interpretation?

Response:

When it is windy the mixing is vertical as well as horizontal, so the urban temperature becomes more representative of the whole air-mass, especially because the generally faster moving air aloft can quickly cross whole cities including suburbs. Therefore the windy-night trends are still highly likely to be less affected by growing urban heat islands than the calm-night trends when the surface air is less connected to the air aloft which is also moving more slowly.

Questions for Parker, Part 8:

Question:

You have suggested that the calm-day minus windy-day signal is a proxy for degree of urbanization. If this turns out not to be the case, would that affect your broader conclusions? And if so, how?

Response:

I think it will be the case and give an example in my 2006 paper. See also my response to (7). See also A. J. Arnfield, International J. Climatology, 23, 1-26 (2003).

Questions for Parker, Part 9:

Question:

As you know, Roger A. Pielke Sr. has raised an issue with regard to your study: that it does not sufficiently take into account complications concerning the near-surface temperature inversion. (This is what I got out of it, anyway.) Would these issues be side-stepped by focusing attention on the Tmax measurements instead of the Tmin measurements?

Response:

No, because urban heat islands are less clear by day: sometimes there is even a cool island. See also A. J. Arnfield, International J. Climatology, 23, 1-26 (2003).

In Table 1 of Parker (2005), the windy trend exceeds the calm trend in 5 regions (Arctic, Europe, Asia, North America, Australasia), and the calm trend equals the windy trend in only the Tropics region. However, the global average shows that the calm trend equals the windy trend overall. According to information on 265 stations that you have provided separately to Steve McIntyre, there would seem to be 224 stations in the first collection, and 41 in the Tropics; I suppose there should be another 25 which were not included. Can you provide some insight into how this works out?

Response:

Some stations in the list were rejected from the regional and global analyses, as tabulated in Appendix B of my paper. Trends do not always combine in a simple linear manner when combining samples, because of nonlinearity in the least-squares process.

This is pretty impressive. Note that the trend for the combined data for North America exceeds both the windy and the calm trends. And, yes, in spite of the relative consistency of the windy trend being larger than the calm trend, the global values are equal! Although maximum likelihood (restricted or otherwise) depends completely on the specification of some particular underlying distribution, and there may be nonlinearity in the process, it makes no statistical or practical sense to get results that look like that.

As a practicing exploration field geologist, I find observation of local climate effects uppermost on the list of things to be familiar with.

Wind has to be one of the most unpredictable, ornery phenomena extant. The sudden appearance of willy-willies in the midst of calm air, on a flat plain, with surface temperatures in the high 40s (C), leads one to the heretical and, as I am a scientist, empirical conclusion that our theories about wind are a lot of hot air.

How is “windiness” measured? Was it peak wind for the day? Or was it some form of averaging?

Both would be, in effect, useless for determining whether "the wind" was blowing away UHI.

About the only way to do this would be to chart both temperature and wind on a continuous basis, and try to determine if there is some form of relationship between the two, for each city. Of course such charts would still be useless unless they also charted wind direction.

I was of this opinion before, and nothing in these answers causes me to change my opinion.
This “study” looks to me more like someone repeatedly torturing the data, until he found a methodology that gave him the results that he was looking for.

which shows, among other issues, that minimum temperature trends measured at one level near the surface are not representative of the trends at other levels. This problem is also summarized on Climate Science in a web posting:

“Why there is a Warm Bias in the Existing Analyses of the Global Average Surface Temperature”

RE: #1 – And then you have the case of California “urbmons” (that reference from a 1970s scifi book, in case anyone’s interested) which are horizontally extensive, spanning 70 to 100 miles of territory. Am I supposed to believe that even a 20 MPH wind blowing across 100 miles of urban land would really matter, all that much, in terms of the integral of specific heat over a volume encompassing the “urbmon’s” footprint and say, a couple of hundred feet up from it? Hmmmmm …..

Actually, I would question how they determined wind speed. My understanding is that height above ground clutter is extremely relevant. It is possible to have almost no surface wind while at altitudes of 50 to 100 feet above the ground clutter there is a pronounced wind blowing. The other issue is that vertically the winds flow in different directions.

I have to admit, it is funny when people start finding new ways to prove that something you can see every day is not actually happening.

I would assume that wind speeds are measured at some standard height. Wind speeds at a height of 10 feet, say, are strongly correlated to the wind speed at a height of 100 feet. Vertical wind speeds are on average zero. The important issues for UHI are I think the vertical turbulent mixing and horizontal advection. Vertical turbulent mixing is faster if the wind speed is higher.

One other thing which seems to escape those who are overtly working to discredit UHI …. and this is basic undergrad, upper division heat flow 101 …. If I have a network of heat sources (aka urban structures, infrastructure, items, etc.) then there is a certain production rate of heat from each source, totally independent of any boundary conditions or conditions intermediate between source and external boundaries. For the purposes of this exercise, the earth is an infinite half slab X. The atmosphere is a horizontally infinite gas of declining density with increasing z coordinate, bounded at its top by another infinite half slab (outer space). How you impart turbulence and laminar flow to the gas does nothing to change the boundary conditions such as the mean temperatures of the earth half slab and space half slab. It does not change the bulk thermal resistance, absorption spectra, etc. of the gas. In the grand scheme of things (think in terms of dimensions of hundreds of miles) the overall thermal gradient from each source to each slab is the same. This point seems to be one that AGW fanatics either truly do not comprehend (due to lack of applicable training or competence) or, on a more sinister bent, overtly seek to deny. The PDEs describing this do not lie. They are fundamental principles.

Sorry, when I wrote infinite half slab X I did not mean to confuse this X with x as in x,y,z. Danged Sigma training, it has rotted my brain. Big Xs, little xs, and then, the good old (x,y,z) from my earlier academic, pre Sigma days. ;)

Also, one other thing. When considering the impact of a network of anthropogenic heat sources, one must not only integrate over volume, one must also integrate over time. What is the f(x,y,z,t) which describes the mean additive term that a station located at x1,y1,z1 will experience due to a heat source located at x2,y2,z2? Consider also that the f(t) element describing said heat flow function may itself have had a mean value that has not been static. If that mean value has risen, then guess what folks? This is not rocket science. Anyone who has taken heat flow will immediately recognize this.

My thought on Parker came down to this: the thesis that the wind blows the UHI away is misleading. When the wind is strong enough it can cause vertical mixing; "strong enough" means the wind flow becomes turbulent. A laminar flow over the surface leaves an intact boundary layer. Now, Parker did not measure wind speed at the site (he did a correlation study on 26 sites). Further, he categorized wind speed as follows: windy = top third of the wind velocity profile. So if the top third of windy days saw > 3 meters/sec, then windy was defined as > 3 m/sec. If the grid was a windy grid and the top third was defined by wind greater than 6 meters/sec, then windy was > 6 m/sec for that grid. So the definition of windy varies.

In my mind the question is as follows.

1. For this site, what is the wind velocity that creates turbulent vertical mixing? (And this velocity is direction dependent.)

THAT is the condition where we expect UHI to be blown away… actually carried upward.

Windy (creating turbulence) is utterly site dependent. Some sites might get vertical mixing at low velocity from one direction, and require much higher velocity from other directions.

I wonder if Parker has ever looked at a tufted Wing in a wind tunnel?

You simply cannot tell if mixing is going on by looking at the velocity. And you cannot tell by looking at the top 1/3 of "windy" days.

To make matters worse, the effect is heat dependent. For example, by heating a surface you can reattach a separated flow.

For some sites it might never get windy enough to cause vertical mixing.

RE: #10 – In many upper-middle to upper latitude locations, consider the real situations where there is sufficient wind to guarantee turbulence. Generally speaking, such locations are not going to experience föhn / sirocco / chinook / Santa Ana conditions. So nix that one. That would leave two main ones. The most common would be the sorts of winds that accompany the passage of a cold front, and the ones which arise from a hemispheric pattern change, typically featuring a very cold high to one's north and a cold low to one's south. These are most common at such latitudes during the colder part of the year. When they occur, you can be certain that heating systems will be utilized at the maximal rate. Furthermore, at such times of the year, use of lighting is at peak values. Is it a mere coincidence that Parker overutilized upper-mid to upper latitude stations?

Turbulence is of relevance. If the wind speed is high and vertical mixing in the atmospheric boundary layer is strong, the temperature gradient will be small. If on the other hand the wind speed is low, vertical mixing is small and the temperature gradient can be large when the surface cools down during the night. As far as I understand, this has been observed.

Here is a random thought. When the wind velocity (at the grid) is high enough to initiate turbulence and vertical mixing at the site, what kind of speed and directionality do you see at the site? If you have mixing at the site, wouldn't you see a chaotic record of speed and direction at the site?

In his correlation study Parker looked at the correlation between wind speed at the grid and wind speed at 26 sites, basically to justify the use of grid-level winds.

But in a turbulent flow at the site, one doesn't expect it to be correlated with the grid.

SIMPLY: the wind blows down the wind tunnel at 100 mph, north to south. When you get a turbulent spot on the wing you DON'T see the same direction. You DON'T see the same velocity. Stuff gets mixed up.

My sense is that Parker thought only about horizontal mixing: moving air from one region to another.

RE: 10, I agree, the ground clutter is going to determine at what wind speed this is going to happen. (I only got into this when I was looking at wind energy) If you do not know what the surface terrain is then you will not know what wind speeds are relevant.

Which way does the wind blow over cities?
I would guess that there is a torus pattern over cities where air is drawn in, heats up, rises, spreads laterally and is then recycled. This would still mean that the air would be hotter than the surrounding countryside.
Why do they assume that cool air is coming in from outside the suburbs? Have you ever seen the brown/black fog over Paris or LA?

Yes Mark, that is my point. Parker thinks that high speed = mixing. Well, it depends on the surface characteristics (geometry), the surface heat, and the wind direction. Long ago in the Parker thread I asked Neal if he thought a 600 mph wind would mix the flow over a surface. But see my next post for some Parker fun.

There are a number of truly rural sites in the area of the airport that also collect wind speed. So, I have an idea in mind: what happens at truly rural sites on windy/calm nights?

But before tackling that question, I thought to have a look at Fresno Airport.

Parker uses this site. It has daily data from 1951 onwards. His method did NOT identify this site as UHI-impaired. Therefore: cross-check time.

Now, Parker focused on TMin, for good reason: most people think that UHI will warp TMin. Why? UHI heat, built up during the day and stored in the city's heat sinks, gets released at sundown, and TMin suffers. It's harder, but not impossible, to spike TMax. TMax is going to be a function of insolation, reflection, and perhaps waste heat. So, if you want to see the UHI signal, check the TMin channel. This seems dirt simple.

The other thing to notice here is that if TMin is impaired, then the diurnal range will be impaired. Very simply, UHI will narrow diurnal range: TMax-TMin will get smaller and have a smaller variance.

So, now comes a thought. Rather than measure TMin TREND in two conditions (windy/calm), let's just look at the distribution of TMax-TMin under wind velocity conditions. If wind blows the UHI away, then one should be able to regress wind speed versus diurnal range. This removes the issue of "specifying" windy as the top third of windiness.
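The proposed regression can be sketched as follows. The station record here is synthetic (the built-in 0.4 slope stands in for the hypothesis that windier days show a wider diurnal range), not Fresno data:

```python
import numpy as np

# Sketch: regress daily diurnal range (TMax - TMin) directly on daily
# wind speed, instead of splitting days into windy/calm thirds.
rng = np.random.default_rng(0)
wind = rng.gamma(2.0, 2.0, size=5000)                  # daily wind, m/s
dtr = 12.0 + 0.4 * wind + rng.normal(0.0, 2.0, 5000)   # diurnal range, deg C

slope, intercept = np.polyfit(wind, dtr, 1)
print(f"diurnal-range change per m/s of wind: {slope:.2f} deg C")
```

A significantly positive slope would be the "wind ventilates the nighttime heat island" signal; a near-zero slope would match Parker's null result.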

Wow, this is long. So, starting down this path I thought I should characterize Fresno Airport. What happens to diurnal range at this site (used by Parker), which has NOT seen any UHI increase since 1950?

So, I got the daily data: TMax-TMin day by day from 1951 to 2007, in centigrade. Values are TMax-TMin, averaged by decade (except the last 5 years).

So, what we see by looking at a single Parker site is impairment: a systematic NARROWING of diurnal range. A UHI signal. But his test missed this signal. Why?

I'm going to have a look at the sites within 30 miles of Fresno Airport and see how diurnal range varies. Some of these sites have wind velocity data. My sense is that if you look at DAILY diurnal range and regress against wind velocity and direction you will learn something. With a 20,000-day record and 8 bins of wind direction you have about 2,500 days per wind direction (uniform assumption), and then with 20 velocity bins we have about 125 samples per velocity/direction bin.
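Spelling out that bin-count arithmetic, under the comment's uniform-occupancy assumption:

```python
# Back-of-envelope bin counts for the proposed direction/speed breakdown.
days = 20000
dir_bins = 8
speed_bins = 20

per_direction = days / dir_bins          # days per wind-direction bin
per_cell = per_direction / speed_bins    # days per direction/speed cell
print(per_direction, per_cell)           # 2500.0 125.0
```

In practice wind directions are far from uniformly occupied, so some cells would be much thinner than 125 samples.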

RE: #13 – incorrect. The T(a) inside a building, for example, is what it is. The heat flow from all the stuff in a building is what it is. The temperature of space is what it is. The earth slab is essentially what it is. While I would agree that portions of the overall source – earth and source – space gradients may have different slopes due to wind, the overall gradient, from the source to the two sinks, is what it is.

There is of course a simple way to determine whether the airflow or wind is turbulent or laminar. In fluid mechanics and heat transfer, scientists utilize what is referred to as the Reynolds number, Re = ρvL/μ, where ρ is the fluid density (kg/m³), v is the mean fluid velocity (m/s), L is the characteristic length (m), and μ is the fluid viscosity (N·s/m²). It is well agreed that when the Reynolds number is less than 2300 the flow is laminar, and when the Reynolds number is greater than 2300 the flow is turbulent.

It should be remembered that turbulence is a time-dependent chaotic phenomenon. That said, with some simple estimates (length, for one) and some well-determined parameters for air (viscosity and density), the Reynolds number is easily determined. As an aside, it is believed that the Navier-Stokes equations model turbulence correctly, but they do not tell us whether the fluid is well mixed. However, with high Reynolds numbers, say greater than 10,000, it is agreed that the flow is not only turbulent but also well mixed.
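A rough estimate along these lines, using standard air properties; the 3 m/s speed and 10 m characteristic length are illustrative assumptions, not values from Parker:

```python
# Rough Reynolds-number estimate for near-surface airflow.
rho = 1.2       # air density, kg/m^3
mu = 1.8e-5     # dynamic viscosity of air, N*s/m^2
v = 3.0         # wind speed, m/s (assumed)
L = 10.0        # characteristic length, m (assumed)

Re = rho * v * L / mu
print(f"Re = {Re:.1e}")   # ~2.0e6, far above the 2300 threshold
```

At station scales almost any measurable wind puts Re far into the turbulent regime, which is why the "well mixed" question, not turbulence per se, is the interesting one.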

Yes, I'm pondering that now. Oh, by the way, my first pass at decade diurnal was a slight boo-boo. I'll post final results later when I can double-check everything (the same narrowing is there), but the narrowing is narrowing.

I was going to ask if anyone had the foggiest notion what the characteristic length is for a weather station site. I've read some documents about L for the surface of the earth or an open field.

Thomson, Dempsey et al (1987) developed a climatic database for the State of Illinois. This database was derived from weather station records in 23 locations in and near the state. Maps were developed showing areas of equal percent sunshine and wind speed. A table of average weekly high and low air temperatures also was produced. Using this new database, combined with a heat transfer model developed years earlier, several new applications could be made. In one application, pavement temperatures were computed with the heat transfer model and climatic data. A regression analysis was run to establish a relationship between pavement temperatures and Mean Monthly Air Temperatures (MMAT). This information could then be used for selection of the proper asphalt cement modulus value to be used in pavement design. The heat transfer model together with input from the climatic database produced the dependency of pavement temperatures on MMAT which compared well to published correlations by The Asphalt Institute. The new tools also were used to predict temperature profiles in PCCP for a given date, time, and location. The intensity of solar radiation (direct and diffuse) is dependent on diurnal cycles, the location of the sun in the sky and the incident angle between the surface and the sun's rays. The solar radiation results in direct and diffuse heat gain on the pavement through absorption of solar energy by the pavement. The convection heat flux is a function of fluid velocity and direction, and it is affected primarily by wind velocity and direction at the surface. As the convection heat transfer coefficient increases due to higher velocities and opportune wind directions, the convective heat flux also increases. Thus, at relatively high wind velocities a convective cooling of the surface occurs when the temperature of the wind is lower than the temperature of the pavement surface.
The direction of the heat transfer due to thermal and long-wave radiation is away from the pavement since deep sky temperatures typically are significantly lower than pavement surface temperatures.
The TMY weather files were derived from the National Solar Radiation Data Base (NSRDB), which was completed by the National Renewable Energy Laboratory (NREL). The NSRDB contains hourly values of measured or modeled solar radiation and meteorological data for 239 stations for the 30-year period from 1961 to 1990. The NSRDB accounts for any recent climate changes and provides more accurate values of solar radiation due to a better model for estimating values (more than 90 percent of the solar radiation data in both data bases are modeled), more measured data including direct normal radiation, improved instrument calibration methods, and rigorous procedures for assessing quality of data. The TMY weather files were created using similar procedures that were developed at Sandia National Laboratories by Hall et al (1978).

Starting from the results of a meteorological simulation of Los Angeles (Taha 1997), it was estimated (Pomerantz et al. 1997) that if the sunlight absorbed by all the pavements were reduced from 90% to 65%, the peak air temperature would decrease by about 0.6°C (1°F) (population-weighted and on a hot day in August).

the temperature of the air in a region is determined only by the surfaces within that region. Thus the region chosen must be large enough that effects at its edges are negligible. Eq. 3 is wrong for cities that are windy or near large bodies of water.

In this paper: 1°F for asphalt, 1.4°F for roofs.

From these sources it can be estimated that, on windless or near-windless days, for a semi-infinite slab the TMax increase would be expected to approach 1°F for a temperature sensor bounded by a road. If bounded by a road on one side and a roof on the other, the TMax increase would tend to approach 1.2°F.

It would be expected that microsite problems of this type would approach these estimated increases.

#21 Steven Mosher
But Parker identified a discontinuity in Fresno Tmax that he did not see in Tmin (which he suggests might be due to a site change). While I understand that Steve McIntyre has commented on this, wouldn't it be better to pick another "you think it's urban, Parker thinks it's unaffected" site for this analysis?

The Fresno airport is on the east side of town and was a little south of the center (1950-1970). It is now much farther south of the actual population center. The prevailing wind direction in the Central Valley is from the NW – that’s why all valley towns (Fresno, Madera, Merced, Modesto, Stockton) grow toward the north. The “urban” build up around the airport has been minimal in comparison to areas to the northwest. There has been building to the east of the airport since the mid ’70’s but given the prevailing wind direction, the impact would be negligible.

Most of the growth – say 70% – to the northwest has been SFR. For the most part the housing replaced vineyard and orchard.

I'm rather surprised that Parker would pick the airport over Cal State Fresno, which lies a few miles north. It's an ag school with a good-sized green belt to the north, and I'm sure that weather data has been collected there for at least fifty years (the campus moved to its present location in the early '50s).

Steven M,
Go to the NOAA MMS site and look up Fresno (Yosemite Int'l). Select the map tab and notice that the location of the observation station has moved five times since the mid-'40s, spread out over a 2-mile area. In addition, the equipment was probably changed (upgraded) at the time of the moves. The current location, between the north runway and taxiway, is an ASOS facility.

This discontinuity of observation location, if the records are correct, is fairly typical of the surface stations I have visited or looked up.

The majority of US stations try to take readings hourly. Fresno has 24 readings per day 97% of the time. I have the NCDC GSOD (Global Summary of the Day) database, which contains the mean of the 24 hourly readings and also the TMax and TMin for the day. The first chart shows the difference between the mean of the hourly readings and the average of TMin and TMax. For most stations the difference usually amounts to around 1/2 degree, with the min/max average consistently higher than the hourly mean. I also included the mean daily dew point. Wouldn't that be more representative of heat? Series mean lines are also drawn.
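Why the two "daily mean" conventions differ can be shown with a synthetic, asymmetric diurnal cycle (invented, not Fresno data): the min/max midpoint sits off the true hourly mean whenever the cycle is skewed.

```python
import numpy as np

# Compare the mean of 24 hourly readings with (TMax + TMin)/2 on an
# asymmetric diurnal cycle. The second harmonic skews the cycle, so
# the min/max midpoint is pulled away from the hourly mean.
hours = np.arange(24)
theta = (hours - 9) * np.pi / 12
temps = 15 + 8 * np.sin(theta) + 2 * np.cos(2 * theta)  # deg C

mean_hourly = temps.mean()                      # 15.0 here
mean_minmax = (temps.max() + temps.min()) / 2   # 13.0 here
print(round(mean_hourly, 2), round(mean_minmax, 2))
```

With this particular skew the min/max convention runs low; a cycle skewed the other way makes it run high, as the GSOD comparison above apparently shows.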

The chart below is the percentage time the min/max readings were taken from the hourly readings. The remainder of the time they are evidently using non-hourly readings for min/max. I think the inconsistency in how they record the min/max temperatures would make it difficult to relate to wind. Seems to be a hodge-podge.

After reading Parker's replies to your questions, Neal, I am no closer to resolving my apprehensions of data snooping that, while quite possibly unintentional, could conceivably get lost in the "selection processes for a reason(s)" that have been well documented here. Without a better established and understood a priori process for wind effects on Tmin as related to UHI, the selection processes become just that much more suspect as data snooping.

I more than ever think that the data has to be re-analyzed by breaking it down into its component parts, and additionally that one would have to at least attempt to get a direct measure of Tmin versus wind as a measure of UHI.
What does it mean to have most global regions showing that Tmin has a bigger trend under windier than less windy conditions? Would it mean that these regions have experienced an anti-UHI effect that needs to be, or has been, compensated for by adding degree trends to their temperature records? Based on the excerpted answer from Parker below, I would assume that one could choose areas that could be a priori selected as UHI and non-UHI sites and have temperature and wind measuring devices placed there to establish a direct relationship between UHI degree effects and the effects of wind velocity (and direction) on Tmin.

The selections of stations made for GSN by Peterson, T.C., Daan, H. and Jones, P.D (1997), and for global monitoring and trend estimation by Jones and Moberg (2003) cited above were carefully made to avoid severe urban biases. I never challenged the reality of urban heat islands, and merely assert that the station selection has largely succeeded in avoiding locations with increasing urban effects.

In my mind, the reason Parker gives here for not finding a substantial UHI effect is the best available hypothesis. Whether it can be demonstrated as true is another question.

Parker assumes that the upper 33% of windiness “blows away” UHI. I don’t think this is correct. Let us assume a “natural” function, F(x,y,z,t), which describes the pattern of isothermal surfaces that would be expected in a given space due solely to non-anthropogenic energy flows. Let us further assume an “anthropogenic energetics” function, G(x,y,z,t), which describes the additional terms contributed by anthropogenic energy dissipation and land use impacts. We would then end up with F + G as the situation describing UHI, or more properly, anthropogenic direct thermal impact, at any given x,y,z,t, assuming windless conditions. Under conditions of wind, there will be an additional tensor, or probably two of them (one for laminar flow and another for turbulent flow), which would describe the distortion effect the wind would have on F + G. We then would have (A)(B)(F + G), where A and B are said tensors. So, rather than blowing away UHI, the wind simply distorts its space-time functional output. The solution of F is non-trivial; the best current techniques might only be able to attempt to model F. As for determining G, it might be possible to approximate it by carefully combining modeling with careful and painstaking review of whatever high-quality records exist of places which have incurred major changes from rural to more urban over time. A and B may be more straightforward to model and verify. Again, quality of measurements is key for achieving this.
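A toy numeric sketch of this point (my own illustration, not from any paper): treat F as a flat background field, G as an urban “bump,” and the wind crudely as an operator that advects and smears the anomaly. The anomaly’s total heat content is unchanged; only its shape differs.

```python
# Toy 1-D illustration: wind distorts the anthropogenic term G
# rather than removing it. All values are made up for illustration.
import numpy as np

F = np.full(20, 15.0)          # "natural" background field (flat, for simplicity)
G = np.zeros(20)
G[8:12] = 3.0                  # anthropogenic heat anomaly (the urban "bump")

calm = F + G                   # windless case: F + G

# Crude stand-in for the distortion operators A, B: advect the anomaly
# one cell downwind, then smear it with a 5-cell moving average.
distorted_G = np.convolve(np.roll(G, 1), np.ones(5) / 5, mode="same")
windy = F + distorted_G

print(calm.sum() - F.sum())    # total anomaly under calm conditions
print(windy.sum() - F.sum())   # same total under wind: distorted, not removed
print(calm.max(), windy.max()) # but the peak is lowered and spread out
```

The station thermometer samples one point of this field, so whether “windy minus calm” looks like UHI removal depends entirely on where the station sits relative to the distorted anomaly.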

In any case, to summarize, I suspect that Parker has not measured UHI versus the absence of it. Instead, what he has measured is likely some artifact of the tensors A and B. Distortion as opposed to removal.

I would suggest either Sanger, Reedley, Dinuba, Selma or Fowler in lieu of Orange Cove. The ‘cove’ refers to a wind cove that protected the oranges. It’s a microclimate that doesn’t give a good fit. Sanger is probably the best fit. If you use the runways at the airport as a wind direction guide you’ll see that Sanger lies directly downwind.

There are rather large drainage basins immediately upwind of the Fresno airport which undoubtedly distort readings in April/May of wet years.

The Reynolds Number approach seems problematic to me. The “characteristic length” for flow around a weather station would be horribly difficult to determine, and it is not well accepted that any flow over Re = 2300 is turbulent: in pipes the 2300 breakpoint is traditional, but in boundary layers the onset of turbulent flow sometimes requires Re as high as 10^6.
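The sensitivity to the choice of characteristic length is easy to see from the definition Re = ρvL/μ. A minimal sketch, using standard air properties at roughly 20 °C (the velocity and lengths are illustrative choices, not measured values):

```python
# Re = rho * v * L / mu depends linearly on the characteristic length L,
# which is ill-defined for flow around a weather station.
RHO_AIR = 1.2      # kg/m^3, air density near sea level (approximate)
MU_AIR = 1.8e-5    # Pa*s, dynamic viscosity of air (approximate)

def reynolds(velocity_m_s: float, length_m: float) -> float:
    """Reynolds number: Re = rho * v * L / mu."""
    return RHO_AIR * velocity_m_s * length_m / MU_AIR

v = 3.0  # m/s, a light breeze
for L in (0.01, 0.1, 1.0, 10.0):   # plausible "characteristic lengths"
    print(f"L = {L:5.2f} m -> Re = {reynolds(v, L):.2e}")
```

For the same light breeze, picking L anywhere from a centimeter (a sensor element) to ten meters (the station enclosure and surroundings) moves Re from roughly 2×10^3 to 2×10^6, spanning the entire transitional range mentioned above.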

RE: #35
It’s likely the database I got my data from hasn’t been adjusted at all for TOBS or anything else, as the file updates are usually only a couple of days behind.
I had noticed your figures have a tighter range than mine. I don’t know why that should be, unless one of the databases has been adjusted. I automatically check for erroneous temperature entries and discard from analysis any rare day affected. Occasionally equipment problems might keep a date from even being entered in the database, so I do a data count before calculating.

I did notice that during the period from 1965 to 1971 they were only taking observations 8 times per day. That may be part of the reason my chart dropped to a tighter range during that period.

In my opinion, Parker glosses over many conditions that can affect UHI. Wind is taken into account; however, as many have mentioned, wind speed and direction can vary the effect. In my experience, cloud cover has a much greater effect on UHI than wind. The time of day clouds develop, the type of clouds, and the duration of cloud cover all have a huge effect. Cloudy days can decrease insolation; cloudy nights can reduce cooling.

An even greater effect on UHI comes from any precipitation. Parker, as far as I know, does not even mention precipitation in his paper. Even a brief rain shower can dramatically lower the heat content of buildings and pavement in urban settings. Likewise, dew and frost have an immediate effect.

I have read many posts on this thread regarding the causes of UHI. By far the largest source of UHI is heat stored in buildings and pavement. Roofs, masonry walls, and asphalt pavement store tremendous amounts of heat on sunny days. A black roof surface can easily reach a surface temperature of 170 F on a sunny day. Brick and concrete walls, especially ones facing west and south, release stored heat all night and are much, much hotter than surrounding surfaces.

I am a building envelope consultant and have conducted over a thousand infrared surveys of roofs and walls. These surveys are always done at night and I’ve had to sit for hours at times waiting for a building to cool enough so that my infrared imager could detect a temperature difference between the building components and background temperatures.

There is an excellent UHI study conducted for New York City that measures UHI and evaluates various mitigation techniques. It estimates that New York City’s UHI is 7 F. The study also gives an equation for adjusting near-surface temperatures adjacent to impervious surfaces (asphalt):

T(near-surface air) = 0.3 × surface temp + 0.7 × 2-meter temp

This shows the effect black asphalt surfaces can have on adjacent thermometers.
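The quoted equation, written as a function. The example inputs below are hypothetical, chosen only to illustrate the size of the effect:

```python
# The New York City near-surface adjustment quoted above.
def near_surface_air_temp(surface_temp_f: float, two_meter_temp_f: float) -> float:
    """T_near-surface = 0.3 * surface temp + 0.7 * 2-meter temp (degrees F)."""
    return 0.3 * surface_temp_f + 0.7 * two_meter_temp_f

# Hypothetical case: 140 F asphalt on a 90 F afternoon.
print(near_surface_air_temp(140.0, 90.0))  # about 105 F near the pavement
```

A thermometer sited over 140 F asphalt on a 90 F afternoon would thus read roughly 15 F above the 2-meter air temperature, which is exactly the kind of siting bias being discussed here.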

First off, I like to work bottom-up, so I picked Fresno as an example. It’s from Parker’s list. And yes, it has some issues.

I’ve been struggling with this Parker definition of Windy/Calm.

Parker defines “Windy” as the top 1/3 of wind speeds for the grid; “Calm” is the bottom 1/3.

He did a correlation of grid to station for 26 sites, to ascertain whether you can use grid data to “represent” the site. He thinks you can.

I don’t think it gets windy in Fresno.

What does windy mean? You feel cooler when the wind blows over your skin. Why? Because that boundary layer of air created by those little tiny hairs gets disturbed, gets mixed, and heat exchange happens.

Now, I AM HAIRY, so I don’t chill so easily. I have a nice fat boundary layer of air. So a smooth fellow will get chilly quicker than me. Put another way, I require higher wind velocities to mix the air around my body. You get goosebumps (increasing your boundary layer) before I do. I am hairy like an animal.

How hairy is Fresno? Simply put, what velocity of wind does it take to create turbulent vertical mixing and heat exchange? The top 1/3 of the wind speed distribution? The top 1/6? The top 1/20?

It’s site-specific. It’s wind-direction-specific. In Fresno, when the wind blows in a W or NW direction (from Death Valley, I suppose) it’s a heater box. These winds look to happen well over 70% of the time. (Oh, I think the runway at the airport might be aligned on the NW axis; something to check.)

First, a look at velocity: average wind speed, measured at Fresno State from 1982 to 2006, 2 meters above the surface.

Here are the months of the year with the fastest and slowest mean wind speeds, to compare with diurnal temperature. All the data is from the station itself, not gridded. If there were anything to be found, I’d expect it to show up here. I don’t see much.

What I did was bin the Wind Direction into two groups.
1. W and NW
2. All others.

Then I looked at Tmax and Tmin in these two groups. Basically, the W & NW winds could not blow UHI away, because they look like warmer winds. If the air temperature above the boundary layer is warmer than the boundary layer, I would think that would impair heat transfer to the sky, right?
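The direction binning described above can be sketched like this. The observations are hypothetical, and the W/NW sector boundaries (247.5° to 337.5°) are my own assumption about a reasonable compass window:

```python
# Group observations by whether the wind direction falls in the W/NW
# sector, then compare mean Tmin between the two groups.
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical (wind_direction_deg, tmin_f) pairs for illustration.
obs = [(280, 62), (300, 64), (310, 63), (90, 55), (180, 57), (45, 54)]

# W/NW sector assumed here as 247.5-337.5 degrees.
wnw  = [t for d, t in obs if 247.5 <= d < 337.5]
rest = [t for d, t in obs if not (247.5 <= d < 337.5)]

print(f"mean Tmin, W/NW winds:  {mean(wnw):.1f}")
print(f"mean Tmin, other winds: {mean(rest):.1f}")
```

If the W/NW bin runs systematically warmer, as in this made-up example, then high wind speed from that sector advects warm air rather than ventilating the heat island, which is the confound being argued here.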

I keep getting back to Parker’s statement that obtaining temperature measurements at sites without a UHI effect could explain why his trend indicators show little or no UHI effect at these stations. It comes down to the apparent judgment of almost all participants in this debate that a UHI effect exists and is probably quite significant, but the differences lie in whether and how these UHI effects have influenced temperature measurements at the “official” measuring sites.

Given that a significant and agreed-upon UHI effect exists, and there seem to be objective methods of determining where it would be most and least prevalent, why would not a rather simple study be available whereby contrasting UHI locations are measured for temperature and wind, in order to better test Parker’s hypothesis?

What I find most curious about this and other studies involving important global warming issues is that some rather indirect methods are used to measure trends, with little attention being paid to better understanding the underlying basic principles and processes involved. It is like, “Well, we did that and it confirms what we had already assumed, so let us move on (without taking a deeper look into the situation).”

What Parker has shown is that the Urban and Rural stations, as classified by the USHCN [?], show statistically similar trends. There are several possible explanations, the most obvious being the inference drawn by Parker (who, acting in good faith, assumed the data were OK). However, Parker’s results, as opposed to his conclusions, are exactly what one would expect if the U/R classifications were full of errors (i.e. random); based on Anthony Watts’s findings, this is not such a bad assumption.

It would be interesting to see what Parker’s method would reveal if it were applied to properly classified station data, i.e. data divided into Rural (substantially unaffected by local human activity during the period of record) and Urban (urbanizing, or subject to increasing levels of development during the period of record).

TAC, I think that you’ve misconstrued Parker here. Parker’s data set had global coverage (not just the USHCN). Also, he did not rely on distinctions between urban and rural made by others; he contrasted windy and calm days as classified by the NCEP 5×5 gridcell model.

Having said that, if the NCEP gridcell model resulted in a classification that was full of errors, one would expect negligible difference between the two results. IMO this is so fraught with possible problems that the IPCC can hardly conclude that anything is “very likely” or even “likely” based on this. “Possible,” perhaps, but not much above that.

Also, I think that the Peterson 2003 analysis has a knock-on effect for Parker 2005 and 2006. Parker considers his raw data more or less equivalent to the “raw” Peterson data which I graphed. Empirically, in the raw data there is a trend difference between major-city (and even a very broad “urban” classification) and Peterson-“rural.” So Parker should have picked this up. His failure to do so indicates that NCEP calm-windy is probably useless as a way to measure UHI trends.

Parker also leaned heavily on Oke’s canyon model of UHI, which affects nighttime minima, while there seems to be evidence that asphalt UHI is material under some urban conditions, and nighttime minima are not where this evidences itself most strongly.

As I have been noting, ad nauseam, anthropogenically caused and managed energy flux, along with anthropogenic modifications of surfaces, flora, and other environmental characteristics, form a specific term (or set of them) affecting the overall energy balance and the apparent surface and low-tropospheric temperature record. More proof:

The global warming rate at the stations used in the analysis, using all days’ data, is the same as that reported using all available stations by Jones, P.D. and A. Moberg

Suspicious mind that I have, I just have to wonder how much time Phil Jones et al. took to find met stations that could give a UHI of 0.05-0.06 C higher than their overall met station population gave. They couldn’t come up with a mix that showed a negative UHI, but they sure as hell couldn’t go with a delta as high as anecdotal evidence says it should be. They HAD to show a positive UHI, or all hell would have broken loose.

This may be the reason for eliminating the rural 75% of the met stations used: to keep the average UHI in any subsequent studies within a certain range, so that the average could be seen as somehow plausible.