Urban Heat Island Influence Inadequately Considered in Climate Research

The Intergovernmental Panel on Climate Change (IPCC) deliberately limited climate science to focus on CO2 and temperature. The United Nations Framework Convention on Climate Change (UNFCCC) directed it to consider only human causes of climate change. This was used to narrow the focus from the full set of variables that create the climate, eliminating major variables that cause climate change. A major example is the so-called greenhouse gases (GHG). Three of them account for almost 100% of the total; by volume, they are water vapor (H2O, 95%), carbon dioxide (CO2, 4%) and methane (CH4, 0.36%). There are no accurate measures of any of these regarding the amount actually in the atmosphere or the changes in input and output from natural sources over any period.

All agencies agree that water vapor is by far the largest and most important GHG, but it gets virtually no attention. I do not intend to argue about the various attempts to downplay its importance. They are all proof of how little we know, because each manipulator achieves different results. The IPCC admits humans add H2O to the atmosphere. However, they consider the amount so small relative to the atmospheric total that it is of no consequence in their calculations. The problem is that the effect of water vapor as a GHG is so large that even a 2% variation could probably explain a great deal of the effect of CO2, and indeed all the effect of human-produced CO2. Proving this is complicated by the fact that H2O and CO2 overlap significantly on the electromagnetic spectrum.

The second significant IPCC bias concerns temperature, and specifically global warming. The planet is named Earth but should be named Water. There is no life without it. Vladimir Köppen recognized its importance in climate. The first operation in his climate classification system is to identify those climates with insufficient rainfall to support plants.

The global temperature data is entirely inadequate to determine anything other than that the data is inadequate. It covers only 15% of the surface and less than 1% above the surface. The US temperature record is probably the best in coverage and instrumentation, yet the Watts surface station analysis found only 7.9% of stations with accuracy better than 1°C.

It is many times worse for precipitation data. Distribution is almost infinitely variable, with large differences occurring over a matter of meters. Measurement is problematic even when rain falls vertically, but wind makes vertical fall rare and makes the gauge the most difficult instrument to design at a weather station. That is for rainfall; accurate snowfall measurement is far more difficult.

On a global basis, the network is inadequate over vast areas. A 2006 study of monsoons in Africa concluded,

Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.

The Urban Heat Island Effect (UHIE) contaminates the surface temperature data. The first study to measure the UHIE was by Tom Chandler and detailed in the 1965 book The Climate of London. The work triggered heat island studies in many urban areas, including the ones I participated in for Winnipeg in the late 1960s.

Urban Heat Island profile (image from Lawrence Berkeley Labs): a simple schematic of an urban heat island dome.

After its existence was established, adjustments to many temperature records began. They are still made, but the adjustment is very imprecise. It is a major cause of the variations between the regional and global averages produced by different groups: you can influence the outcome you desire by choosing the amount of adjustment made. Urban areas are almost all growing, so, presumably, the adjustment must keep changing. This is problematic and, when combined with the paucity of weather stations, underscores the difficulty of establishing a global temperature.

The urban heat island influences precipitation through several mechanisms:

1. upward transport of the particles in the convective cell that is the urban heat island;

2. upward transport and cooling in the urban cell;

3. increased instability of adiabats as they travel over the outside of the urban dome.

In 1968, support for this urban influence on precipitation appeared in a report of the La Porte weather anomaly. The city of La Porte is in LaPorte County, Indiana. While plotting precipitation patterns in the region, Stanley Changnon noticed a significant increase (30 to 40%) in precipitation levels after 1925. He attributed the increase to the growth of the urban area of Chicago, and particularly the construction of steel mills and other heavy industries.

The La Porte claim engendered discussion and disagreements, notwithstanding Atkinson’s research in London. Years later the American Meteorological Society (AMS) reported that,

Earlier research has used ground-based instruments, including rain gauge networks, ground-based radar, or model simulations, to show that urban heat islands can impact local rainfall around cities like St. Louis, Chicago, Mexico City and Atlanta.

NASA resolved the disagreement in 2002 when Dr. J. Marshall Shepherd and colleagues published the results of a study using data from the Tropical Rainfall Measuring Mission (TRMM) satellite. They found that,

…mean monthly rainfall rates within 30-60 kilometers (18 to 36 miles) downwind of the cities were, on average, about 28 percent greater than the upwind region. In some cities, the downwind area exhibited increases as high as 51 percent.

I was unable to find any reference to adjustments to precipitation data based on these findings. The IPCC AR5 Physical Science Report appears to confirm the lack of adjustments. However, much of what they report appears to indicate the data is affected by the UHIE. In their general observations about precipitation they wrote,

Confidence in precipitation change averaged over global land areas is low prior to 1951 and medium afterwards because of insufficient data, particularly in the earlier part of the record (for an overview of observed and projected changes in the global water cycle see TFE.1). Further, when virtually all the land area is filled in using a reconstruction method, the resulting time series shows little change in land- based precipitation since 1901. NH mid-latitude land areas do show a likely overall increase in precipitation (medium confidence prior to 1951, but high confidence afterwards). For other latitudes area-averaged long-term positive or negative trends have low confidence (TFE.1, Figure 1). {2.5.1}

In their more detailed analysis, they wrote,

It is likely that since about 1950 the number of heavy precipitation events over land has increased in more regions than it has decreased. Confidence is highest for North America and Europe where there have been likely increases in either the frequency or intensity of heavy precipitation with some seasonal and regional variations. It is very likely that there have been trends towards heavier precipitation events in central North America.

The areas they identify are where the weather station network, though the best globally, is still inadequate. The NASA study also notes that,

By showing how space-borne platforms can be used to identify rainfall changes linked to cities and urban sprawl, the research may help land managers and engineers design better drainage systems, plan land-use, and identify the best areas for agriculture. Also, it highlights the need for scientists to account for impacts of urbanization when they design computer models that forecast the weather or predict regional climates.

There is some crude accommodation for the temperature impact of the urban heat island. All it does is create confusion, because the overall database is inadequate and the variations due to the effect are even more uncertain. In reality, a temperature error of even 2 or 3°C is of little consequence. However, 30 to 40% errors in precipitation are of great consequence for all the managers and engineers planning the list NASA identifies.

It is time to shut down the IPCC and its politically biased climate claims that focus on temperature and CO2 while ignoring or distorting far more important variables and factors.

One small change would take a slice of warming away: close USHCN stations reporting the temperature of an area surrounded by taxiways and parking aprons at large airports. Establish them in a more representative area for that city/metro area. An example is the Spokane Intl Airport (GEG). Completely surrounded by pavement, and not in the city of Spokane. GEG has much higher TMIN than the USCRN site (TRNW1) 17 miles south at Turnbull NWR, with similar longitude and elevation. Just an 8F difference today, but I have seen many summer mornings with as much as a 21F difference. (e.g. GEG at 55F, TRNW1 at 34F.)

So if that is true then the US sites which are much more accurate and numerous than the non US sites show less warming than the rest of the world. If CO2 is a well mixed gas that Mauna Loa says it is then how can that be Mr. Mosher? How can we have a 33% markup difference or 25% margin difference in the warming between US rural and the rest of the world (rural and urban)?

“So if that is true then the US sites which are much more accurate and numerous than the non US sites show less warming than the rest of the world. If CO2 is a well mixed gas that Mauna Loa says it is then how can that be Mr. Mosher? How can we have a 33% markup difference or 25% margin difference in the warming between US rural and the rest of the world (rural and urban)?”

1. Don't assume the US sites are any better. They have been through many moves, instrument changes and changes in methods of observation. You assume US sites are better. They are not.

2. CO2 is a well mixed gas, NOT perfectly uniform. In any case, CO2 and OTHER GHGs control the release of energy at the ERL (effective radiating level). There is no connection between the temperature you see at the surface and the release of radiation to space at altitude.
CO2 doesn't magically warm the surface uniformly. CO2 controls the escape of radiation to space. What happens at the surface locally is not controlled by CO2 at altitude. So, for example, you will see heat transfer at the surface poleward, making certain regions warmer and other regions cooler, but ON THE WHOLE, the average will be higher, and that average is driven by the rate of cooling to space at the ERL.

Global temperature increased from 1850 to 1880, decreased from 1880 to 1910, increased from 1910 to 1945, decreased from 1945 to 1975 and increased since 1975. Overall an increase of 0.75°C in all that time. This is due to CO2 (Really?) and people want me to change my life style because of it?

I don’t believe for a moment that temperature measurements from 1850 were:

1. considered part of a global network; they were local weather stations;
2. anywhere near numerous or widespread enough, even across the ‘developed’ world, far less anywhere else;
3. capable of being assessed to within 1°C never mind a fraction thereof;
4. recorded by scientists; many of them by tea boys (and cabin boys for SST’s).

Nor do I believe instrumentation was faithfully, accurately and reliably calibrated, nor Stevenson screens maintained to a common standard, nor that daily readings were reliably recorded, stored and transmitted.

Until, probably, the early 20th Century (the Cutty Sark was a working ship until 1922) SST’s were undoubtedly taken by illiterate cabin boys or deck hands chucking a bucket over the side, to no predetermined depth and sticking a thermometer in it, but more likely a finger.

I know for a fact at least one Stevenson screen, part of the official Scottish ‘network’, was regularly read by schoolchildren, as it was located (badly) in the grounds of my secondary school. When the Science teacher couldn’t be bothered/was sick/too busy or on a field trip, our usual MO was to trudge up to his office and make a guess at the temperature/precipitation/wind speed and log the result rather than go out in the rain or snow. (We were caught out once when someone recorded data a week in advance in the middle of summer and we were hit by a short-lived cold front.)

I also understand that up until very recently (if it has been addressed at all) there was no international standard for shipboard water-intake temperature measurement in terms of intake size or depth. I also understand Argo buoys aren’t cooperating with IPCC predictions, much like radiosondes and satellites.

Without citing every variable I can think of, I would suggest that both land and SST’s can only be relied upon over the last 50 years or so at best. Even satellites have been bedevilled with problems and, as Anthony Watts ably demonstrated, up until very recently no one really cared about the siting of temperature measurement stations. Even then, up until digital thermometers were introduced (which have also been bedevilled with calibration problems), they were still fraught with data acquisition issues.

Nor in your claim of warming of 0.75 (°C or °F?) do you include any margins of error which, judging by the above, I would expect to be ± 2.5°C at the very least.

Strangely enough, almost by way of confirmation, the last 20 or so years of relatively sophisticated temperature measurement have shown very little warming once El Niño and La Niña have been factored out. And even that’s a fudge as I neither believe the global consequences of these events were truly recognised until joined up climate forecasting was considered, despite them being identified in 1600 or so. They are part of the global climate system and are dismissed as weather by the IPCC only because insufficient historic data exists to include them in climate models.

So we are left with a huge amount of guesswork, scientifically described as homogenisation (other terms are available).

Alarmist scientists and predatory politicians and bureaucrats are determined to predicate mankind’s future on this decidedly questionable ‘evidence’, if it can be described thus.

does it ever occur to you that, even if the thermometer record is correct (highly unlikely imo, but there you go), it is quite possible that all that is being evidenced is that in 1850 there was a relative low (a ‘trough’) in global temperatures, as part of a surface temperature that fluctuates periodically, and recently there has been a relative high (a ‘crest’), and all that has been evidenced is a trough-to-crest ‘trend’. You have heard of (naturally occurring) ice ages, haven’t you? These fluctuations are well evidenced, and on a rotating planet, with an orbiting moon, in turn orbiting a star in concert with a bunch of other planets and asteroids and dust etc, there are a lot of potential, periodic mechanisms to cause such fluctuations.

You can get the same false trend with data that conforms to a pure sine wave, which by definition has zero trend. All you need to do is deliberately, dishonestly or gormlessly fiddle with the data period vis-a-vis the longer term fluctuation. Imagine a group of paleolithic people turning up at the beach for the first time at low tide. The rising tide might be characterised by the shamans and other power/rent-seeking shysters in the group as the gods being angry with you for pissing in the ocean, or whatever.
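The trough-to-crest point above can be checked numerically: fit an ordinary least-squares trend to a pure cycle (zero trend by construction) and the slope you get depends entirely on which part of the cycle you sample. This is only a sketch with a synthetic 60-year cycle of unit amplitude, not real temperature data.

```python
import math

def ols_slope(xs, ys):
    # ordinary least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# a pure 60-year cycle sampled monthly: trough at year 0, crest at year 30
t = [i / 12.0 for i in range(60 * 12)]
y = [-math.cos(2 * math.pi * ti / 60.0) for ti in t]

slope_full = ols_slope(t, y)              # whole cycle: essentially zero
slope_half = ols_slope(t[:360], y[:360])  # trough-to-crest half only: a "trend"

print(round(slope_full, 4))   # near zero
print(round(slope_half, 3))   # clearly positive
```

The half-cycle sample reports steady "warming" even though the underlying series has no trend at all; only the choice of window created it.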

As for the surface thermometer record being even remotely fit for purpose other than for local temperature measurement, UHI affected or not, give me a break. This has to be the very essence of junk science. Rather sadly, I guess, we simply have not had a ‘fit for purpose’ global temperature system until the satellites and balloon systems went up 30-odd years ago.

As for some ‘link’ with CO2 increase in the atmosphere, I reckon the amount of concrete and bitumen on the land surface has gone up orders of magnitude more than CO2 since 1850. When the ‘science’ eliminates that as a prospective driver of so-called ‘global’ warming (as distinct from a global temperature system with a UHI pollution defect) and properly and credibly filters it out (if it even can), I’ll start to listen. And don’t start me on the thermometer-bucket system at sea back in 1850.

I would also like to query the site distribution from 1850-1950.
For a large swath of the western US, water distribution was minimal, and AC was non-existent. This meant large areas were uninhabitable that now have cities. The same holds for the majority of the Arctic and Antarctic regions. They were uninhabitable, except by indigenous people and a few explorers, trappers, and miners, none of whom were being paid to faithfully record temperatures.
How can we possibly know what the change of temperature is to TENTHS of a degree per DECADE in areas that were not conducive to human life!!!
Not to mention the fact that we had wars, depressions, famine, plagues, and the introduction of life-altering technology that were much more important to people living during those times. If you lived in areas where water, food, and shelter were the difference between life and death, minute changes in air temperature from year to year were not making the priority list. And delicate temperature measurement instrumentation was not going to provide you with anything on your priority list.

Re your comment immediately below: Your 2. is B.S.; didn’t your hero Gavin Schmidt explain to us lunkheads it is the change (e.g. changes in anomalies) not absolute temperatures that are of concern? It doesn’t matter where or at what absolute temperatures you are measuring; it is only the anomalies!

“Re your comment immediately below: Your 2. is B.S.; didn’t your hero Gavin Schmidt explain to us lunkheads it is the change (e.g. changes in anomalies) not absolute temperatures that are of concern? It doesn’t matter where or at what absolute temperatures you are measuring; it is only the anomalies!

1. Gavin is not my hero. I guess you are new to these parts and don’t recall the
many battles Gavin and I had in the past over code and temperature.
2. Change in anomaly and change in absolute is exactly the same thing.
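Point 2 can be checked directly: subtracting a fixed baseline from a series shifts every value by the same constant, so the fitted trend is unchanged. A minimal sketch with a synthetic station series (the numbers are invented for illustration):

```python
def ols_slope(xs, ys):
    # ordinary least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

years = list(range(1951, 2011))
# synthetic absolute temperatures warming at 0.02 C per year
absolutes = [10.0 + 0.02 * (yr - 1951) for yr in years]

baseline = sum(absolutes[:30]) / 30            # 1951-1980 "normal"
anomalies = [t - baseline for t in absolutes]  # same series, shifted by a constant

print(round(ols_slope(years, absolutes), 6))   # 0.02
print(round(ols_slope(years, anomalies), 6))   # 0.02, identical
```

The two slopes agree to floating-point precision: anomalies change the level of the series, never its trend.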

I think you and Bob Tisdale fundamentally misunderstand what Gavin meant. However, this thread is about UHI. Regardless, I am responsible for MY arguments and MY positions and am not responsible for Gavin’s. Long-time Libertarian here, and I take offense at the kind of leftist thinking that makes me responsible for stuff Gavin says. I am here, Gavin’s not. I’m talking about UHI; you are changing the subject with your drive-by comments about Gavin.

“does it ever occur to you that, even if the thermometer record is correct ( highly unlikely imo but there you go), it is quite possible that all that is being evidenced is that in 1850 there was a relative low (a ‘trough’) in global temperatures as part of a surface temperature that fluctuates periodically and recently there has been a relative high (a ‘crest’) and all that has been evidenced is a trough to crest ‘trend. You have heard of (naturally ocurring) ice ages haven’t you? ”

Very good question, Sir! Here is how I got interested in the temperature record. Basic physics tells us that CO2 will have SOME effect on the temperature of the planet. But how much? How much is natural cycles and how much is human forced?
In the beginning I wasn’t prepared to answer this. Why? Well, to answer it I thought you probably needed a good record of temperature. What I saw didn’t make me happy. I worried about micro site, UHI and adjustments. So I started to demand code from NASA and later demanded code and data from CRU.

One other thing bothered me on the skeptic side. I saw skeptics arguing that they were certain it was natural cycles while at the same time attacking the very record that showed natural cycles. This struck me as odd. Again, the solution seemed to be to get everyone on all sides working with the best data that could be provided.

Today I think the best explanation for the rise in temperature is a combination of human and natural forces. That’s the best explanation, not the only one. I think there was an LIA and the temperature record shows this. I think there are some quasi-cycles (think AMO) that also play a role. I don’t think all the warming is due to UHI. I don’t think all the warming is due to micro site or adjustments.

For people like you who may not believe in AGW I want to provide the best, clearest, well documented record of temperatures so that you can go out and make better skeptical arguments.

Yes, the reasons they give for homogenizing the data should randomly warm or cool, and cancel each other out,
except urban heat island, which should warm.
Yet we get a strong cooling of the past.
What gives?

“Heeeeyyy…wait a second…who are you?
Complete sentences, points made in a cogent manner, questions answered?
What have you done with Mosher?”

For the most part I comment using my phone. Taking a taxi to the airport, stuck in the lounge, out to dinner. Moments stolen here and there.

When Charles took over moderating I told him I would set aside some time to actually sit at a computer and try to provide some useful comments to those folks who are still open minded and willing to discuss things. I still hit send too quickly, and I could probably pause and re-read and edit. In due course. I also promised him that I would try to set aside some time to do a few posts. I think there are good skeptical arguments that need to be made; however, those good arguments are drowned out by silly and stupid arguments.

These are all in the same 2.5 degree cell covering Stockholm. The reporting locations shift from inland to ocean and you can clearly see how the amplitude dampens because of this. Curious how you avoid calculating a steeper slope than actual.

“These are all in the same 2.5 degree cell covering Stockholm. The reporting locations shift from inland to ocean and you can clearly see how the amplitude dampens because of this. Curious how you avoid calculating a steeper slope than actual.”

Not sure you want to use 1951-1980 for a baseline. If you are working with a small area and a small collection of stations, there are a couple of methods you can use.
CAM, the common anomaly method: look at your data and pick the time period with the most stations. That’s a good baseline period. For every station you average the Jan/Feb/Mar etc. for the station and create a normal.
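As a sketch of the common anomaly method just described (station names, years, and temperatures here are invented for illustration):

```python
from collections import defaultdict

# toy records (station, year, month, temp C); invented for illustration
records = [
    ("A", 1961, 1, -3.0), ("A", 1961, 7, 17.5),
    ("A", 1962, 1, -2.5), ("A", 1962, 7, 18.0),
    ("B", 1961, 1, -1.0), ("B", 1961, 7, 16.0),
    ("B", 1962, 1, -0.5), ("B", 1962, 7, 16.5),
]
base_years = range(1961, 1963)  # pick the period where the most stations report

# 1. per-station monthly normals over the baseline period
by_station_month = defaultdict(list)
for stn, yr, mo, t in records:
    if yr in base_years:
        by_station_month[(stn, mo)].append(t)
normals = {k: sum(v) / len(v) for k, v in by_station_month.items()}

# 2. anomaly = observation minus that station's normal for that month
anomalies = [(stn, yr, mo, t - normals[(stn, mo)]) for stn, yr, mo, t in records]

# 3. cell value = average anomaly across stations for each year/month
cell = defaultdict(list)
for stn, yr, mo, a in anomalies:
    cell[(yr, mo)].append(a)
cell_avg = {k: sum(v) / len(v) for k, v in cell.items()}

print(round(cell_avg[(1962, 1)], 2))  # both stations sit 0.25 above their own Jan normal
```

Because each station is compared only to its own normal, the warm-biased station B and the cooler station A contribute the same anomaly despite their different absolute levels.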

Your problem, however, is not a baseline problem. The problem is that coastal locations tend to have dampened amplitude. So as your sample moves oceanward you will get the shifts you note.

In our model we implemented a factor for distance from coast. It doesn’t really improve anything (as much as I hoped), primarily because the effects are seasonal, and also tied to natural cycles, and they are different for different types of coasts. I would probably suggest kriging with drift for what you want to do. As long as you are dealing with one small geographic area, that approach should work.

Yes, I also see there is a problem aiming for a common baseline period for this cell. From what I had understood, when working with gridded information you first create a temperature series by averaging across the stations within a cell and then you calculate a baseline temp, with anomalies following that.
I think you agree that in my Stockholm example that method would render a pretty large error. The CAM method should be better as long as we strictly work with overlapping years. What that method masks, though, is UHI.

If we instead could stick with temperatures as long as possible, we would actually see how stations drift apart over time. It should be possible to align stations with an offset over the first overlapping years and then get a min/max range as the years progress. I have done this manually for the Stockholm case and the slope is about 50% smaller.

Don’t know if you read such old threads but I put it here for reference and as a Thank You.
The CAM method you propose works well. I can use it to verify my serialization (first overlapping years) method.
The average slope for the group of stations in a cell is almost identical for the cells I have tested so far. With CAM the slope is calculated on average anomalies, with serialization the slope is instead calculated on average adjusted temperature.

My theory is: By sticking to actual temperatures (as with serialization) as long as possible in the process I can also catch station drifts. Those drifts will indicate errors caused by thermometer/location change or UHI.
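A rough sketch of that serialization idea: align a new station to a reference station via an offset computed over the first overlapping years, then splice the adjusted temperatures (all values below are invented for illustration):

```python
# annual means for two hypothetical stations; "new" reads about 1.1 C warmer
ref = {1965: 5.1, 1966: 4.9, 1967: 5.0, 1968: 5.2, 1969: 5.1,
       1970: 5.3, 1971: 5.2, 1972: 5.4}
new = {1970: 6.4, 1971: 6.3, 1972: 6.5, 1973: 6.6, 1974: 6.7}

# offset = mean difference over the first overlapping years
overlap = sorted(set(ref) & set(new))[:3]
offset = sum(new[y] - ref[y] for y in overlap) / len(overlap)

# shift the new station onto the reference's level, then combine
adjusted = {y: t - offset for y, t in new.items()}
combined = {}
for y in sorted(set(ref) | set(adjusted)):
    vals = [d[y] for d in (ref, adjusted) if y in d]
    combined[y] = sum(vals) / len(vals)

print(round(offset, 3))          # about 1.1
print(round(combined[1974], 3))  # new station's 6.7 shifted down to about 5.6
```

Once the series are on a common level, any residual divergence between the two in later overlapping years is the drift signal (station move, instrument change, or UHI) that the commenter wants to catch.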

“Remove the “rural” stations close to urban areas and see what happens.”

Did that.

But I am interested.

It looks like you are basing this on the perception that UHI can advect over space. That is, UHI in city A can advect to the suburbs and beyond. A couple of questions:

A) how did you test this?
B) is there a sensitivity to wind speed and direction
C) what sized cities have this kind of effect?
D) why do we see cool zones in cities
E) How many kilometers away?
F) what is your expected Effect size
G) does it vary with season

Let’s just take question F.

Suppose we have a city of 500,000 people, and suppose
that on the worst days it has a UHI max of 1C. Now note that this is not every day.
You don’t see 1C of extra warming every day; only on windless, cloudless days do you
see this maximum figure. Anyway, assume 1C of excess warming in the city
of 500,000 people. If you move 10 km outside that city, will you see 1C?
How about 20 km?
What if the city is only 25,000 people?

In short, I need a testable hypothesis from you. You say remove rural stations
close to urban. We need to operationalize that.

How far? Is it a different distance for different sized cities? How did you pick those numbers? Anything to back it up? Any guidance? Or are you just hoping?

“Suppose we have a city of 500,000 people, and suppose
that on the worst days it has a UHI max of 1C. Now note that this is not every day.
You don’t see 1C of extra warming every day; only on windless, cloudless days do you
see this maximum figure.”

I look at this from a different angle. I live in a rural area outside a small town about 40 miles southeast of Tulsa, Oklahoma, and Tulsa’s temperature is always about two degrees F higher than my location (including the small town). That temperature difference is evident in all seasons, even though Tulsa is located northwest of me and should be cooler than my location.

In other words, the UHI effect shows up all year round when compared to the surrounding rural territory.

UHI is 3-4 degrees in the largest cities (6-10 degrees F hotter “in the city” in every daily weather report I hear cross-country compared to “the suburbs”).
It is 2-3 degrees warmer for every small and medium city, again compared to “the suburbs” around each city.
Add 1 degree to that to compare “the suburbs” to the “country”. Even in a series of little 5,000-person towns across north Michigan, each “town” had a distinct “fog-clear-fog” change as soon as you left the fields, entered town, then got back in the fields again. It was a still day that time; no wind = greater UHI, just as you would expect.

Steven Mosher, I live in Adelaide, Australia, a city of a bit more than 1 million, and they recently moved the weather bureau back to roughly where it was 30 years ago, as there were concerns over the ‘new’ location. When the two stations were operating together the difference was up to 2 degrees Celsius (as reported in the local paper). One was reporting record temperatures, and the other more or less normal summer temperatures. I don’t see how you can make a historical temperature record accurate enough to be useful when measurements from more or less the same place can differ by a couple of degrees.

Remove the “rural” stations close to urban areas and see what happens.

There’s far more to vetting station data than just removing stations within or close to urban areas. What is needed is an objective measure of coherence to ascertain that data are representative of regional, rather than just local, variation at the lowest frequencies. Sadly, none of the index makers seem equipped to perform such model-free vetting, relying instead upon various “homogenization” schemes that don’t reflect the actual (and highly variable) spectral structure of the data. Despite much pseudo-scientific bravado, BEST’s “scalpel” algorithm is the worst of the bunch in mangling, rather than detecting, the low-frequency signal components that determine the apparent “trend.” Small wonder that Mosher finds no substantive difference between urban and rural records.

Not if they also conform to station siting rules. I have already posted based on the WUWT station siting project analysis of 88% of USHCN stations. Contrary to BEST, UHI is real and the GISS corrections are not. Provably, using posted data.
Put otherwise, this comes from a software project that demonstrably muffed data ingestion (example: Rutherglen) and demonstrably muffed ‘regional expectations’ QC (example: station 166900 in fn 25 to essay When Data Isn’t in ebook Blowing Smoke).

The sites are rated 1-5. I would take the sites and collect the relevant satellite data for each of them, basically 30 meter ground cover data. Then look to see if there is any relationship between the satellite data and the rating given.

To be more specific

Take the sites and split them into 2 piles: training data and test data.
Then, using the training data, create a classifier that looks at the satellite data.
Then test the classifier on the held-out data.

I’ve already done this with the first round of siting ratings and it worked pretty well.
Basically it will allow you to automatically classify (give a CRN rating to) thousands of sites. Not perfect, of course, but as higher resolution data comes online the goal would be to use machine learning to classify sites from imagery.
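The train/test procedure described could look something like the sketch below, with synthetic data standing in for the satellite-derived land-cover features and CRN-style 1-5 ratings. The feature (impervious-cover fraction) and its assumed relationship to the ratings are inventions for illustration, and a simple nearest-centroid rule stands in for whatever classifier was actually used.

```python
import random

random.seed(0)

# synthetic (feature, rating) pairs standing in for per-site land-cover stats:
# feature = fraction of impervious cover near the site (hypothetical proxy)
def make_site(rating):
    # better-rated sites (1) are assumed to sit in less built-up pixels
    return (min(1.0, max(0.0, random.gauss(0.15 * rating, 0.05))), rating)

sites = [make_site(r) for r in (1, 2, 3, 4, 5) for _ in range(40)]
random.shuffle(sites)
train, test = sites[:150], sites[150:]  # hold out a quarter for testing

# nearest-centroid classifier: mean feature value per rating class
centroids = {}
for r in (1, 2, 3, 4, 5):
    vals = [f for f, rr in train if rr == r]
    centroids[r] = sum(vals) / len(vals)

def classify(f):
    # predict the rating whose centroid is closest to the feature value
    return min(centroids, key=lambda r: abs(f - centroids[r]))

hits = sum(classify(f) == r for f, r in test)
print(f"held-out accuracy: {hits}/{len(test)}")
```

The held-out accuracy is the check Mosher describes: if the classifier trained on one pile of rated sites predicts the ratings of the unseen pile well, the satellite features carry the siting information and can be used to rate stations that were never surveyed by hand.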

As I recall, “rural” was defined as a population of several hundred thousand. But it is not the population that necessarily tells the story; it is population density or a reflection of population density based on the volume of concrete and pavement, say runways at airports. In underdeveloped countries there may only be one or two long term thermometers in the entire country. All in highly populated “rural” cities.

“As I recall, “rural” was defined as a population of several hundred thousand. But it is not the population that necessarily tells the story; it is population density or a reflection of population density based on the volume of concrete and pavement, say runways at airports. In underdeveloped countries there may only be one or two long term thermometers in the entire country. All in highly populated “rural” cities.”

Good theories: all testable, and wrong.

Just some background. In order to study UHI you first need a definition of what urban is.

Let me quote Anthony
[Note: this is an AGU poster displayed at the annual meeting, available here as a PDF. I’ve converted it to plain text and images for your reading pleasure. I’m providing it without comment except to say that Steven Mosher has done a great deal of work in creating a very useful database that better defines rural and urban stations better than the metadata we have available now. – Anthony]

In that study we looked at population, nightlights, urban area and, yes, airports.

There are a few challenges in each of these areas. Let me explain.

1. Population: yes, density matters more than raw count, because density is really
a proxy for building height, which can really drive UHI (radiation canyons) and
surface roughness. Further, population datasets tend to assign people where
there are no people. They have population figures for administrative boundaries,
and those boundaries don’t line up with grid cells in GIS data, so they spread people
over large areas where there are no people. Newer datasets avoid that. In the latest
datasets they assign population to areas (at 30 meters) where satellite data
detects actual buildings. Pretty cool, but tons of work. The other issue with population
density is that there is no clear cutoff for what counts as rural or urban. If you
misidentify urban as rural, then you will be including higher temps in your rural bucket.

2. Nightlights. Nightlights are good for detecting lights; go figure. After looking at thousands of stations and the lights around them, it became clear that there were situations where you had buildings and no detected lights, and lights and no people!

3. Urban cover. For Berkeley Earth we used a dataset of urban land cover. This was MODIS 500 m data, and you needed 2 adjacent grid cells to count as urban. Folks like Willis were critical of this, as 1 sq km is a lot of urban cover. I moved on from MODIS (the data had other problems) and started working with 300 m data and finally 30 meter data. With 30 meter data you can even pick up roads. Here is an early example of what 30 meter data looks like: https://stevemosher.wordpress.com/2012/10/08/sample-suhi/
Now we have 30 meter data for the whole world, and we have population assigned to each of those grid squares. Facebook is even working on a finer detailed map, going down below 10 meters.

4. Airports. I’ve searched in vain for an airport effect. To start with, the existing metadata doesn’t always give you a clear picture of airports. Why? Well, sometimes they have a thermometer listed at an airport, but there is no airport there! Basically, they had the thermometer at an airport, they closed the airport, kept the name of the station the same, and moved the thermometer to a new location. And they also do the opposite: the thermometer is at an airport, but the name doesn’t indicate this. So you have to geolocate actual airports and see how far away the stations are. Airport locations used to be open data from the FAA, but after 9/11 that changed. So you have to use other sources. Luckily I found them, and they also include airport size, so you know even if it is only a dirt runway. In any case, you can’t find a consistent effect for airports. I don’t doubt you can find an isolated effect here or there. Gosh, one time I went and got minute-by-minute data from airports to see if you could spot aircraft taking off. Nope.

So the big question is how you define rural/urban. It’s not a 0/1 type of thing. And even within cities you can find NEGATIVE UHI; these are called cool islands.

Here is the thing that makes it really tough. If you have a rural site that is POORLY sited, then it could be warmer than an urban site in a cool island. This will confound your urban/rural comparison. Also, if you wrongly classify a rural site as an urban site, that will also confound your analysis.

1. Look at the total population within 20 km or so of a CRN site.
2. Look at the nightlights within 20 km of a CRN site.
3. Look at the urban area (ground cover) of a CRN site:
A) within 100 meters
B) within 1 km
C) within 5 km
D) within 20 km

That set of features gives you a filter that you can then run all the stations through: do they match CRN characteristics or not?
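That filter can be sketched in a few lines. This is only an illustration: the example stations, field names and thresholds here are invented, not the actual CRN screening code.

```python
# Sketch of the station filter described above: does a station's
# surroundings match the profile of a CRN site? Field names, example
# stations and thresholds are all invented for illustration.

def matches_crn_profile(station, max_pop_20km=100, max_lights_20km=5,
                        max_urban_frac=0.01):
    """True if population, nightlights and urban cover all look CRN-like."""
    if station["pop_20km"] > max_pop_20km:
        return False
    if station["lights_20km"] > max_lights_20km:
        return False
    return all(station[f"urban_frac_{r}"] <= max_urban_frac
               for r in ("100m", "1km", "5km", "20km"))

stations = [
    {"name": "A", "pop_20km": 40, "lights_20km": 0, "urban_frac_100m": 0.0,
     "urban_frac_1km": 0.0, "urban_frac_5km": 0.0, "urban_frac_20km": 0.005},
    {"name": "B", "pop_20km": 250_000, "lights_20km": 80, "urban_frac_100m": 0.4,
     "urban_frac_1km": 0.3, "urban_frac_5km": 0.2, "urban_frac_20km": 0.15},
]

rural_like = [s["name"] for s in stations if matches_crn_profile(s)]
print(rural_like)  # only station A matches the CRN profile
```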

You would be amazed how many do.

In any case, ya, I have looked at all the factors you suggested. I still look. You see, it should be EASY. The theory is that UHI is big, so it should be easy to find. It should be dead easy for any skeptic to show the effect on global temps and get an awesome publication. That’s what I originally wanted.

Just split your sites into rural and urban and compare. Right?
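The naive version of that test is only a few lines. A minimal sketch, with invented per-station trend numbers standing in for real regression slopes:

```python
# The naive rural/urban comparison in miniature: compute the mean warming
# trend in each bucket and difference them. Trend values are invented.
import statistics

rural_trends = [0.18, 0.21, 0.17, 0.20, 0.19]  # °C/decade, hypothetical
urban_trends = [0.19, 0.22, 0.18, 0.21, 0.20]  # °C/decade, hypothetical

diff = statistics.mean(urban_trends) - statistics.mean(rural_trends)
print(f"urban minus rural trend: {diff:.3f} °C/decade")
```

With these made-up numbers the buckets differ by only 0.01 °C/decade, which is the kind of null result the rest of this comment describes.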

So you won’t find a difference. Then what? Reject your theory? Nope. What most people do is reject the classification of rural. They say, “That’s not rural, look at nightlights.”
Oops, found nothing there. Then they say, look for regions where there are no lights within 20 km of the station. Test that theory. Nope, still busted. Then they suggest using urban area.
So you test that theory. Nope, still busted. So they object that 500 m data is not good enough. So you test with 300 m data. Theory still busted. Next suggestion: what about 30 meter data? The whole area needs to have no human building within 10 km or 20 km. OK, test that theory. Nope.
We STILL see warming. Heck, maybe there was an LIA! Then they suggest airports. Well, airports show up in 30 meter data, but hey, you test that theory. Answer: it is STILL warming, there was an LIA. Then they suggest unadjusted data. Well, you only work with unadjusted data. So ya, it is still warming, there was an LIA. Then they attack the precision or the use of anomalies. Jeez, the absolute temps are warmer and the anomalies are warmer. There was an LIA; it’s warmer now! Heck, I thought skeptics agreed we were coming out of an LIA! Then they attack the very notion of saying the average temperature in the past is lower than the average temperature now: you can’t average temperatures! Gosh, now they are attacking the existence of an LIA. Huh?

So, anyone who wants to suggest a new criterion for rural is welcome to.
I will try to do the test.
Then, when their hypothesis is busted, they can suggest something else. Never drop the theory. Feynman’s law.

Did you even look at Steve McIntyre’s post? Or did you just go on a typing blitz, mostly agreeing with what was stated, with some added posturing that basically said nothing?

Look at the graphs in the link I included. Can you not tell rural from urban? Can you not tell light urban from heavy urban?

UHI is real. It exists. And it is not adequately accounted for. NOAA uses the Tom Peterson study to justify not making any UHI corrections. GISS uses “lights” to adjust for UHI, but only by a maximum of 0.1 degrees.

Would you please include the BEST algorithm for UHI in your next posting, showing how it is accounted for in the adjusted temperature history presented by BEST. I would also like to see some UHI surveys in large-, medium- and small-density cities that are the basis for the BEST algorithms.

Here is Anthony:
“[Note: this is an AGU poster displayed at the annual meeting, available here as a PDF. I’ve converted it to plain text and images for your reading pleasure. I’m providing it without comment except to say that Steven Mosher has done a great deal of work in creating a very useful database that defines rural and urban stations better than the metadata we have available now. – Anthony]”

Do you think Anthony was lying? Is that what you have come to now?
You don’t even ask questions or study the past; you just read my name and say,
“it’s a lie.”

“There is also the problem with growing development around the so-called rural stations that hasn’t been accounted for.”

Really? Which rural stations did you look at? What actual data did you look at to determine this? How did you define rural? Seriously, now that we have 30 meter satellite data going back over the whole history of Landsat, we can actually see. Your input here would be really helpful.

So let’s get your theory straight: there are rural stations that I used that are no longer rural, or that have been encroached upon? Is that your testable theory?

We could test that, but I think you would keep your theory regardless of the outcome.

LOL, love it, Stephen. BSBB (BS Baffles Brains); doubtless you’ve heard of it. A predictable marketing ploy, from bad marketeers, invariably exposed in the fullness of time. The go-to term of BSBB is “prove me wrong”. You use it a lot, or variations thereof.

The only skill set he has is marketing (an extremely good skill set, I might add), which endows one with the ability to seem credible whilst not having the first clue what one is talking about. I was a marketer; the difference is I didn’t BS people into believing I knew the first thing about the nuts and bolts of the products and services I was promoting.

“The only skill set he has is marketing (an extremely good skill set I might add) which endows one with the ability to seem credible whilst not having the first clue what one is talking about.”

“Here is Anthony
‘[Note: this is an AGU poster displayed at the annual meeting, available here as a PDF. I’ve converted it to plain text and images for your reading pleasure. I’m providing it without comment except to say that Steven Mosher has done a great deal of work in creating a very useful database that better defines rural and urban stations better than the metadata we have available now. – Anthony]”

Steven,
Looking in detail at Australian sites, one finds that the more remote from people a site is, the worse the data quality is, in terms of missing days and possible outlier mistakes.
I tried to find a representative set of Australian sites from ‘pristine’ locations, to compare with ‘urban’ sites, to see if a systematic difference might quantify UHI. I found 40 candidate sites, with an excellent absence of signs of the hand of Man, but the data were shockingly bad.
The comparison failed because of poor data quality. I thought: what if you infill missing data using reasonable assumptions and then check again for systematic differences?
It failed.
Next I thought: what if you do the full Monty of homogenization and then look for differences? It failed.
Then I realized that I could have been looking at something similar to a BEST situation.
It also fails because the data quality of the pristine stations cannot be reasonably corrected to compare with the error-filled, corrected urban sites.
When you compound the uncertainties, you are left with too large an uncertainty to make a prudent inference. You cannot really say that rural sites warm as much as urban sites, because the perturbations on each are not the same. Apples with oranges again, and again a convenient lack of a proper process to assess and propagate the errors realistically.
(It remains the case that Australian data behave differently before and after the cessation of the liquid-in-glass (LIG) thermometer in the Stevenson screen about the late 1990s. Electronic thermometers have caused significant differences, or vice versa, but people are getting tired of checking the official data for such mistakes. They do not go away if you ignore them.)
Alternatively, Australian data might be different from the rest of the world. Geoff.

Steven,
Looking in detail at Australian sites, one finds that the more remote from people the site is, the worse the data quality is, in terms of missing days and outlier ?mistakes.

1. It would be good to tell us the data source.
2. It would be good to define how you measured “remoteness” from people.
Did you measure the distance to the nearest town? The nearest town of what size?
Say you picked towns of 10K people: is the answer different if you defined a town
at 5K people? Or 25K, or 50K? Remoteness from people is effectively uncheckable
unless you make a decision about what counts as a town. Or did you use the
surrounding population and look at missing data versus population? Lastly, how much
missing data? Was it just single missing months, or were there gaps in the data?
As for outliers, same question. The standard rate of outliers is pretty low to begin
with. Are we talking about going from .05% to .1%? A difference that makes no difference
makes no difference.

“I tried to find a representative set of Aust sites from ‘pristine’ locations, to compare with ‘urban’, to see if a systematic difference might quantify UHI. I found 40 candidate sites, excellent absence of signs of the hand of Man, but the data were shockingly bad.”

1. Again, data source?
2. Again, how was “pristine” defined? What metric? An objective metric that can be
applied to the rest of the world?
3. Bad? How defined? Did you set up criteria beforehand, or merely decide
after the fact what was bad?

“It failed because of poor data quality. I thought, what if you infill missing data using reasonable assumptions and then check again for systematic differences.
It failed.”
1. Never infill missing data and then try to do tests. The stats exist to handle missing
data if you know what you are doing; this preserves the uncertainty rather than
hiding it inside your infill.
2. What test did you use for “systematic differences”? There are standard, double-blind-tested
methods. Which did you use? If you made up your own method,
did you test it?
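Point 1 is easy to demonstrate: infilling gaps with the series mean manufactures artificial certainty. A toy example with invented monthly values:

```python
# Why infilling is dangerous: replacing gaps with the series mean adds fake
# observations with zero deviation, shrinking the apparent spread. Toy data.
import statistics

obs = [12.1, None, 11.8, 12.5, None, 12.9, 11.5, None, 12.2]
available = [x for x in obs if x is not None]

mean = statistics.mean(available)
infilled = [x if x is not None else mean for x in obs]

# The infilled series looks less uncertain than the data actually are.
print(statistics.stdev(available) > statistics.stdev(infilled))  # True
```

The infilled series has a strictly smaller standard deviation, which is exactly the hidden-uncertainty problem described above.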

“Next I thought, What if you do the full Monty of homogenization and then look for differences? It failed.”
1. Again, data source?
2. What homogenization did you use? Why?
3. How did it “fail”? It’s code; it runs.

“Then I realized that I could have been looking at a similarity to a BEST situation.
It also fails because the data quality of the pristine stations cannot be reasonably corrected to compare with the error-filled, corrected urban sites.”

That would not be a Berkeley situation. The study you probably want to look at for an example of how to do this is Zeke’s 2013 paper.

“When you compound the uncertainties you are left with too large an uncertainty to make a prudent inference. You cannot really say that rural sites warm as much as urban sites because the perturbations on each are not the same. Apples with oranges again, and again a convenient lack of a proper process to assess and propagate the errors realistically.”

What you meant to say is that you could not find the UHI signal you thought was real.

“(It remains the case that Australian data behave differently before and after the cessation of the LIG thermometer in the Stevenson screen about the late 1990s. Electronic thermometers have caused significant differences, or vice versa, but people are getting tired of checking the official data for such mistakes. They do not go away if you ignore them.)
Alternatively, Australian data might be different to rest of world. Geoff.”

I would not rule out those down under doing things upside down. In some places the best approach may be an expert-directed homogenization. Take CRU: they don’t do adjustments; they rely on the local experts to produce a vetted temperature series. This is a bottom-up approach. We do a top-down approach and, yes, just put the data through an apolitical meat grinder. The meat grinder adjusts the data to minimize disagreement, NOT to try to get every station correct. It attempts to decrease the error from a global perspective rather than produce the best record for each station from the bottom up.

Why do this? Why use an approach that works top-down in a data-driven way? Simple. Skeptics argued that data adjusters were cooking the books. Look at what you did. Look at all the choices you made. Look at all of the questions I had. See that? When a person is involved in making the choices (what counts as pristine, how to infill, what “fails”, how to adjust, etc.), then skeptics start to QUESTION THE PERSON. They want to know who did it, what his politics are, FOIA his emails, question every choice, etc., etc. Like we did with CRU: they said they made adjustments, and we wanted to know how.

At Berkeley we took the opposite approach. Not bottom-up, working every station by hand with humans looking at it, but top-down. The algorithm decides what fits and what doesn’t fit. If a station stands out from all its neighbors, the ALGORITHM says it has a lower quality. After giving it a quality score, the whole globe is recalculated, and this iterates until the error is minimized. Some stations go up, some go down. NONE are correct, but they are all less bad. It adds maybe 0.15°C to the warming record for the land.

The important thing is this: the top-down process REFUTES entirely the notion that GISS or NCDC cook the books to achieve a desired result. No human looks at those stations to tweak them in a desired direction. No human says “cool the past, warm the present.” The algorithm grinds the meat: dumbly, apolitically, unemotionally, and it gives the same GLOBAL answer as NCDC or NASA. For Australia, last I looked, we probably warmed the record less than the BOM did. At some point a bunch of people were working on checking our adjustments against CRU and the various expert-based approaches. Some countries we warm more, some we warm less. Globally the answers match. Locally they always differ by some amount.
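The iterate-until-the-error-is-minimized idea can be caricatured in a few lines. This is a toy sketch with one made-up field of stations, NOT the Berkeley Earth algorithm: stations far from the consensus lose weight, the field is recomputed, and the loop repeats until the weights settle.

```python
# Toy caricature of the top-down "meat grinder" described above: stations
# that disagree with the consensus of their neighbors get a lower quality
# weight, the field is recomputed, and this iterates until the weights
# settle. Numbers are invented; this is NOT the Berkeley Earth code.

temps = [12.0, 12.1, 11.9, 12.05, 15.0]  # last station is the odd one out
weights = [1.0] * len(temps)

for _ in range(50):
    consensus = sum(w * t for w, t in zip(weights, temps)) / sum(weights)
    # Quality score: stations far from the consensus get downweighted.
    new_weights = [1.0 / (1.0 + (t - consensus) ** 2) for t in temps]
    if max(abs(a - b) for a, b in zip(weights, new_weights)) < 1e-9:
        break
    weights = new_weights

# The outlier ends up with a small weight, and the consensus sits near 12.1,
# well below the plain unweighted mean of 12.61.
print(round(consensus, 2), round(weights[-1], 2))
```

No station is declared "correct"; the outlier is simply trusted less, which is the "less bad, not correct" behavior described above.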

From a technical standpoint I would probably prefer an expert-guided approach; the meat grinder approach DOES produce some oddball stations. However, our goal was NOT to create the standard station database. Our goal was to test ONE THING: the claim that climate scientists’ adjustments were FRAUDULENT. We wanted an approach where we could let an algorithm decide how to adjust and then see: did this approach give the same answer as the bottom-up approach used by others? I know one thing: my thumb, Zeke’s thumb, Robert’s thumb never got close to the scale. I suggested every extra test I could think of to make adjustments smaller. No differences. What I conclude from this is as follows:

1. We have no evidence from looking at NCDC code or GISS code that they have
created algorithms that bias the answer.
2. We have no evidence, no emails, no discussions, no data, that suggests they
monkey with adjustments to get a desired result.
3. Our own work matches their answer as a global record, and we know WE did not
mess with the algorithm to achieve a desired result.

Therefore, the belief that climate scientists fraudulently adjust data to get a desired result is NOT supported by any evidence we could find. The opposite, in fact.

However, it could be that we were all just lucky: we developed different algorithms and they magically give similar results. And it could be that these results are biased high by some unknown mechanism. We tested for that. We took pristine data (computer generated) and then had another party secretly add errors to it. We then set the algorithms loose to see if they could spot the errors and correct them. The algorithms could.
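That blind test can be sketched in miniature: generate clean data, secretly inject a step change, and see whether a simple station-minus-neighbor scan finds it. A toy illustration only, nothing like the real benchmarking suites:

```python
# Miniature version of the blind benchmark: generate clean station data,
# secretly inject a step error, and check that a simple station-minus-
# neighbor scan can locate it. Purely illustrative.
import random

random.seed(0)
n = 200
station = [0.01 * i + random.gauss(0, 0.1) for i in range(n)]
neighbor = [0.01 * i + random.gauss(0, 0.1) for i in range(n)]

corrupted = station[:]
break_at = 120                      # the "secretly added" station move
for i in range(break_at, n):
    corrupted[i] += 1.0

diff = [c - nb for c, nb in zip(corrupted, neighbor)]
# Scan for the largest jump between adjacent 10-point windows of the diff.
jumps = [abs(sum(diff[i:i + 10]) / 10 - sum(diff[i - 10:i]) / 10)
         for i in range(10, n - 10)]
found = jumps.index(max(jumps)) + 10

print(abs(found - break_at) <= 10)  # True: the injected break is recovered
```

The station-minus-neighbor difference cancels the shared climate signal, so the injected step stands out against small noise. That is the same intuition behind neighbor-based homogenization.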

Conclusions:

1. The scientists did not create adjustments to get a desired outcome. No evidence.
2. The algorithms do not have flaws which magically create warming out of nothing.
3. If you look at individual adjustments on thousands of adjusted stations, you MUST
find some that are done poorly. Statistics.
4. The field is still wide open for better algorithms.
5. It is warming; there was an LIA. Skeptics believe this, but they weirdly attack some of the best evidence we have supporting their belief.

PS: Who is now paying you to spend all this weekday time posting on an “obscure” climate blog, Mr. Mosher?

Stephen is now employed by Berkeley Earth, as far as I’m aware. Nothing wrong with that; everyone needs a job. But from the lukewarmer he previously was, he’s had to convert to a full-blown Berkeley supporter. Unsurprising, as they are paying his wages.

Personally, I have no problem with that, he’s using his marketing skills and being paid for them.

What I do object to is him presenting himself as some sort of scientific climate guru when he’s just a marketing bloke selling his employer’s product. Any self-respecting marketeer concentrates on his job, getting product to market, not presenting himself as an expert in the subject matter itself.

And if you look at Steven’s posts, most of them are a variation of the ‘prove me wrong’ tactic. We had Mars bars in the UK (a sweet caramel confection bar covered in chocolate) advertised as “Helps you work, rest and play”. Well, of course it did; it was pure sugar. The slogan was eventually banned because it promoted the consumption of sugar as a health-giving substance, which it clearly isn’t. But the ‘prove me wrong’ of “Helps you work, rest and play” was never defeated.

Clumsy, but if you look at almost any advert for anything, that’s the substance of their sales ploy.

That’s what Steven does. Not obvious unless you know something of sales and marketing.

Steven,
This Australian comparison of ‘pristine’ versus ‘urban’ sites was calculated in 2011. Here is part of my introduction:

“This pristine selection was culled from a larger set of 157 sites that were plausibly pristine, except that some had too much missing data, a cost of being isolated and little affected by the hand of man. All of the sites were first selected for me by Steven Mosher, perhaps for the Berkeley BEST project. The Mosher spreadsheet had about 588 sites pre-selected.
There are about 1,200 sites with temperature data on the BoM CD 2007 product.
………………………………..
Problem is, my copy of the work I sent to you was lost in a disc crash soon after 2011. It had the sites you selected, to which I added columns for distance to nearest sign of habitation, approx population of same at different census years, distance from the sea, altitude and perhaps some other factors now forgotten.

The simple outcome was that it was pointless to continue with the analysis because the data were unfit for purpose. I can understand this, because in some pristine places my colleagues and I were told local stories of Wild West events like pistol fights by drunks, some of whom, because of the tiny numbers of people there, were plausibly also the record-keepers. Yet today, some of these sites play a critical part in the BOM national climate maps, which would have great gaping holes in them if the data were not used. The latest addition to the ACORN-SAT database is Rabbit Flat, typical of that description.

Sorry about the science suffering. I do not like it any more than you do. But, although you tried hard, it remains that you cannot make a silk purse from a sow’s ear. Geoff

“Stephen is now employed by Berkeley Earth as far as I’m aware. Nothing wrong with that, everyone needs a job, but from the lukewarmer he previously was, he’s had to convert to a full blown Berkeley supporter. Unsurprising as they are paying his wages.”

I volunteered for Berkeley Earth from 2012-ish to 2013. From 2013 to 2015 I received a small stipend, which I disclosed. From 2016 on I volunteered. No pay. I like it better that way.

Seriously, I think you went to the Poptech school of bad investigative reporting.

If you doubt any of that, you can ask the moderator. You see, here is the funny thing:
it doesn’t make very much sense for you to speculate badly and wrongly about a person you want to discredit. As a marketer you should know that you need to throw mud that sticks. That would mean checking with other folks about whether or not I was currently paid by Berkeley. I’m not. Go ahead, ask the moderator.

“Problem is, my copy of the work I sent to you was lost in a disc crash soon after 2011. It had the sites you selected, to which I added columns for distance to nearest sign of habitation, approx population of same at different census years, distance from the sea, altitude and perhaps some other factors now forgotten.”

I am sorry to hear that. I too have suffered data losses over all these years, and when I moved abroad I had to leave all my stuff in storage, or with Charles.

That said, I can probably start over from scratch. If it was prior to 2011 then it would not have been Berkeley data.

Dividing observations into urban and rural is extremely inaccurate. If we can only have two divisions, we should make them ‘pristine’ and ‘contaminated’. I guarantee that we don’t have 15,000 pristine observation sites in the world.

It does not take a city to bias an observation site to the upside. A little more pavement nearby, some taller hedges or trees in the vicinity, a few more buildings than when the site was established will all produce a warm bias. Most sites that we label ‘rural’ are still contaminated by nearby growth and have a warm bias, even if the local population has not changed much, or remains small.

The assumption that UHIE is proportional to human population is convenient, since we have good population data, but a poor substitute for actual site surveys. ‘People’ do not cause the UHIE. It is pavement, buildings, exhaust, landscaping and so on that cause the warming bias. An abandoned town would have nearly as much UHIE as a populated one.

Mr. Mosher’s statement that the rural sites have as much warming as the urban sites is not strong evidence of global warming. It is just as likely, if not more so, that the rural sites are just as contaminated by nearby growth as the urban sites. Without a documented site history, it is impossible to say.

“Dividing observations into urban and rural is extremely inaccurate. If we can only have two divisions, we should make them ‘pristine’ and ‘contaminated’. I guarantee that we don’t have 15,000 pristine observation sites in the world.”

1. This is a typical goalpost move that people make. First they complain about urban,
and they point to studies of UHI in huge cities.
2. When we split sites according to population and find no UHI, they
ask for smaller populations.
3. When we go down to zero-population sites, they shift the argument again.
4. Lastly they ask for pristine sites; maybe they point to CRN as examples.
5. In the end they cannot define what they mean by pristine.
6. It’s true there are not 15,000 pristine sites. Thankfully, you only need a few dozen.

“It does not take a city to bias an observation site to the upside. A little more pavement nearby, some taller hedges or trees in the vicinity, a few more buildings than when the site was established will all produce a warm bias. Most sites that we label ‘rural’ are still contaminated by nearby growth and have a warm bias, even if the local population has not changed much, or remains small.”

1. We usually refer to contamination within a few hundred meters as microsite.
2. It could be that all the warming since 1850 is microsite and we are still in the LIA!
(Not.)
3. I asked for the microsite data over 6 years ago, but skeptics refused to share it.

“The assumption that UHIE is proportional to human population is convenient, since we have good population data, but a poor substitute for actual site surveys. ‘People’ do not cause the UHIE. It is pavement, buildings, exhaust, landscaping and so on that cause the warming bias. An abandoned town would have nearly as much UHIE as a populated one.”

1. Yes, I have tried to explain to skeptics that population is not a perfect proxy
for UHIE. For example, when Roy Spencer did a study of UHI using population,
I pointed out your points, and skeptics screamed at me!
2. At one point I was testing the abandoned-town hypothesis using midwestern
cities that had lost population. What you say is not strictly true, as any study
of weekend UHI will show you (see the Tokyo studies of UHI as a function of day of the week).

“Mr. Mosher’s statement that the rural sites have as much warming as the urban sites is not strong evidence of global warming. It is just as likely, if not more so, that the rural sites are just as contaminated by nearby growth as the urban sites. Without a documented site history, it is impossible to say.”

1. You would need to define what you mean by “nearby”. In studies of UHI advection,
the UHI signal from large cities falls off rather rapidly.
2. Did you just disappear the LIA?

I call bullshit. The only station I’m familiar with personally was moved, and there’s no record of it. This, even though the USHCN notes that moving a site means it is now a different temperature record, and “now” can no longer be compared to “then”.

“Does rural mean rural towns? UHI effect is shown in even small towns.”

I spent a year looking at the small-town issue. There is very little fundamental research on this problem; most of the research is on big cities. Torok did a tiny bit of work, and there are a couple of not very well documented studies. The key question is: how do you specify “small”?

Typically, when I select sites for rural, I would do something like the following:

1. Get population data: population data that is geolocated to satellite-sensed buildings.
2. Create buffers around the sites, like sites with zero population within 50 km,
25 km, 20 km, 10 km, etc.
3. Create a second tier of tests for the BUILT environment, i.e. 30 meter data for urban surface,
roads, buildings, etc. Create buffers for these as well.
4. Check the distance to the nearest named place with x000 people or more.
That gives you another check.
5. Check the distance to the nearest airport.

Then you can just use any criteria you like. You could say:

1. No people within 20 km.
2. No urban surface within 20 km.
3. 50 km distant from any place that has more than 1,000 people.
4. No airports within x km.
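Criterion 4 in that list is just a great-circle distance check. A sketch with invented coordinates:

```python
# Criterion 4 above as code: great-circle distance from a station to the
# nearest airport. All coordinates here are invented for illustration.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

airports = [(35.00, -110.00), (36.50, -111.20)]  # hypothetical airports
station = (35.05, -110.05)                       # hypothetical station

nearest_km = min(haversine_km(*station, *ap) for ap in airports)
passes = nearest_km > 20.0   # criterion: no airport within 20 km
print(round(nearest_km, 1), passes)  # about 7 km away, so this one fails
```

The other criteria work the same way, just with population or urban-surface grids inside the buffer instead of airport point locations.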

This is basically the test that Zeke and I did years ago. We found a small UHI signal in raw data. It worked great! It wasn’t a very meaningful signal, rather small, but we found it.

CAGW ignores anthropogenic H2O, as it is negligible compared to natural emissions.
Of course, the anthropogenic emission of everything is also negligible unless it causes feedback effects, such as more H2O.

The problem isn’t that they assume anthropogenic emissions of H2O are negligible compared with natural emissions. That’s correct.

The problem is that they assume the feedbacks are real when there is no tropical hotspot, and there was no runaway positive feedback to a second Venus after the first natural forest fire, before man evolved.

Global Warming Alarmists are certainly aware of UHI effect and use graphs of its effect on major urban areas to promote the idea that there actually is dangerous widespread Global Warming, by selecting the best examples of UHI. They also make use of ‘corrections’ to counter UHI growth which have the effect of cooling their past records to increase apparent warming in the face of evidence for none.

GEOENGINEERING is why water vapor gets no recognition as the chief greenhouse gas. The plan is to DIM the PLANET using chemtrails, as if that will cause global cooling; it will cause global warming. The greenhouse effect is a nighttime problem: clouds holding the day’s heat in during the night and not letting the heat dissipate into space. More clouds = more warming. So bad an idea it must be a LIE. It is probably a direct depopulation effort. A MAJOR CRIME AGAINST HUMANITY is global dimming. It must be rejected!!!

Where’s the empirically derived evidence that atmospheric CO2 causes the planet to warm? 40 years of research and not one convincing, credible scientific study demonstrating it. If there was, don’t you think every scientist in the world would be lining up behind it?

Where’s the evidence of CO2 having a direct, observable negative effect on anything when there’s ample evidence it has had a direct benefit to plant life?

Take your scientific hat off, Steven; it doesn’t fit. You are marketing: sell your clients’ products by all means, but don’t BS the world into believing you know what you’re talking about.

1. He eliminates all the data prior to 1895, FOR A REASON. Maybe I should do the
post on my analysis of his data-hiding tricks.
2. He hides the CO2 correlation below 300 ppm even though these values are in his code.
3. He calculates the US average wrong by doing a simple average.
4. He does a correlation on smoothed data. Bad monkey!!!
5. One of his readers tried the same thing for Australia: no match.
6. The NCDC code for adjustments has nothing about CO2 in it.
7. Since adjustments and CO2 both increase over time, OF COURSE they will be correlated.
8. Using the same USHCN data and the CORRECT way of doing area-weighted averaging,
I get different results.
9. IF the temperature is biased low, and if CO2 is a cause of warming, then necessarily adjustments will be correlated with CO2.
10. Different regions of the world have adjustments that GO THE OTHER WAY.
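Point 7 in that list is easy to demonstrate: two series that merely share an upward trend will correlate strongly with no causal link at all. A minimal sketch with invented numbers:

```python
# Two series that merely share an upward trend correlate strongly, causal
# link or not. The "adjustments" here are random noise plus a small drift.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(1)
years = list(range(60))
co2 = [280 + 1.5 * t for t in years]                         # rising CO2
adjust = [0.002 * t + random.gauss(0, 0.01) for t in years]  # unrelated drift

r = pearson(co2, adjust)
print(r > 0.8)  # True: high correlation from the shared trend alone
```

The two series were generated independently, yet the correlation is high purely because both trend upward. That is why a high adjustments-versus-CO2 correlation, by itself, proves nothing.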

“10. Different regions of the world have adjustments that GO THE OTHER WAY.”

I have asked you this question before, Mr Mosher, but you have never answered it.
Tell me about the NOAA global temperatures for 1995, 1997 and 1998, and how they have changed, as follows:
In 1998, the report for 1997 showed 62.45°F (16.92°C),
and it also stated that 1995 was 62.30°F (16.83°C).
In 1999, the report stated that the temperature for 1998 was higher than 1997.
Currently the value for 1997 is 58.13°F (14.53°C).

So can you explain to me how the latest calculations reduced the 1997 temperature by 4.32°F (2.39°C)?

Now don’t give me the baseline change that is on the NOAA website as the reason, as they made the mistake of posting the 1997 temperature as an actual temperature instead of an anomaly.

Mosher
**That chart is funny.**
And so are a lot of other charts.
You must have missed the other post where there was over 80 percent correlation between temperature adjustment and CO2.
**1. He eliminates all the data prior to 1895.**
You are deflecting the discussion. Most of the adjustments upward are for recent observations. Remember that most "warmers" only discuss very recent events; for example, "the Arctic is screaming": ice decline only since 1979. Would you say they "eliminated" the data prior to 1979? It was there in the IPCC 1990 report. Say what??

**6. The NCDC code for adjustments has nothing about CO2 in it.
7. Since adjustments and CO2 both increase over time, OF COURSE they will be correlated.**
Now that is FUNNY!!
SINCE ADJUSTMENTS INCREASE OVER TIME!!!!
That is the point the 98 percent correlation is making. It is no accident that the adjustments match the CO2 increase. You need a better excuse!!

The disagreement in terms of trend and monthly anomalies between the 2 main satellite lower troposphere temperature data producers, RSS and UAH, is far greater than the disagreement between all the surface data producers.

UHI is specified as a micro-climate parameter. It is a physical reality, so can be measured.
The question becomes “how much” ?
The image by LBL suggests an obvious way to quantify the scale and extent of the UHI.
A methodical placement of a series of thermometers positioned on a transect line, or on an areal grid. Maybe one thermometer centered in every 10-square-meter block. A city with a surface area of 100,000,000 square meters would only need 10,000,000 thermometers. Sounds like a proposal.
Then those 1E7 thermometers just need to be monitored continuously for the rest of time.
Say, 10 times per minute. That's 60x24x365 minutes per year, times 10 readings per minute, or about 5.3E6 data points per year per thermometer, or about 5.3E13 data points per city.
It's just a matter of execution: place a "UHI monitoring network" (UMN) over every city on Earth, since each micro-climate will differ from any other. Just the top 10,000 cities need monitoring. You have to be realistic.
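A quick back-of-envelope check of the sizing above (the exact products round slightly differently from the figures quoted, but the orders of magnitude hold):

```python
# Back-of-envelope check of the proposed monitoring network.
city_area_m2 = 100_000_000                # a 10 km x 10 km city
m2_per_sensor = 10
sensors = city_area_m2 // m2_per_sensor   # 10,000,000 thermometers

readings_per_minute = 10
minutes_per_year = 60 * 24 * 365                      # 525,600
per_sensor = readings_per_minute * minutes_per_year   # 5,256,000 readings/sensor/yr
per_city = sensors * per_sensor                       # ~5.3e13 readings/city/yr

print(f"{sensors:,} sensors; {per_sensor:,} readings per sensor per year; "
      f"{per_city:.2e} readings per city per year")
```

Multiply by 10,000 cities and the scheme generates on the order of 5e17 readings per year, which is the commenter's point: exhaustive direct measurement of UHI is impractical.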

Then integrate the UMN proportionally into the rural global surface temperature network. Of course, this only allows the current UHI effect to be quantified; it cannot be extrapolated into the past, since the UHI of each city evolves over time as cities grow and change over the years and centuries. I imagine that changes in daily prevailing winds have some impact on the variance of the resulting data.

Satellites don't (and can't) do this. Only thermometers placed near the surface, in the boundary layer.
There are no proxies for UHI, so no possibility of any historical context. UHI can never be "corrected" or "subtracted from" the historical record, again due to annual changes in the effect that are unquantifiable.

Tim tells us: “The problem is the effect of water vapor as a GHG is so large that it is probable that even a 2% variation could explain a great deal of the effect of CO2 and indeed all the effect of human-produced CO2.”

According to Modtran (which Dr. Ball could easily have consulted instead of guessing), a 2% increase in water vapor produces a forcing of 0.5 W/m2, about 20% of the current forcing from CO2. Changing from 300 ppm to 400 ppm makes no difference, so overlap isn't an issue.

However, saturation vapor pressure (ie at equilibrium) increases by 7%/K, so the nearly 1 K of warming in the last half century has increased the carrying capacity of the atmosphere by 7%, not 2%. Water vapor in the atmosphere is far from equilibrium with liquid water in part of the atmosphere, but rising and cloudy air masses (roughly half of the lower troposphere) are near saturation. Most studies show that absolute humidity is rising at about 7%/K, as expected. Climate scientists take this into account by using water vapor feedback of about +2 W/m2/K. Modtran shows a 1.6 W/m2 radiative forcing from a 7% increase in WV.
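Frank's ~7%/K figure follows from the Clausius-Clapeyron relation. A quick check using the Bolton (1980) empirical fit for saturation vapor pressure over liquid water, a standard approximation used here purely for illustration:

```python
import math

def sat_vapor_pressure_hpa(t_c):
    """Bolton (1980) fit for saturation vapor pressure over liquid water (hPa).
    t_c is temperature in degrees Celsius."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

# Fractional growth in the atmosphere's carrying capacity per 1 K of warming,
# evaluated at two typical surface temperatures.
for t in (0.0, 15.0):
    growth = sat_vapor_pressure_hpa(t + 1.0) / sat_vapor_pressure_hpa(t) - 1.0
    print(f"at {t:4.1f} C: saturation vapor pressure grows {100 * growth:.1f}% per K")
```

The result is roughly 6.5-7.5% per K across typical surface temperatures, consistent with the "about 7%/K" figure quoted above.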

Dr. Ball: “All agencies agree that Water vapor is by far the largest and most important, but it gets virtually no attention.”

This is because the likely changes in water vapor are about 7%/K, while CO2 has already increased by 50% and is almost certain to double. Changes in other well-mixed GHGs add another 50% to the radiative forcing from CO2. And water vapor feedback alone more than doubles the no-feedbacks climate sensitivity for doubled CO2 from 1.15 K to almost 2.5 K. It is the major reason why the IPCC believes rising aGHGs are a major problem. They certainly are not ignoring water vapor; they treat it as a feedback for good reasons.

Dr. Ball is spouting nonsense about what climate scientists do and don’t know about the forcing from water vapor simply to increase your distrust of the IPCC. There are plenty of good reasons to distrust the IPCC, but this isn’t one of them. The fact that a “hot spot” in the upper tropical troposphere due to rising water vapor still eludes detection is a possible reason to distrust the IPCC consensus.

The link refers to a doubling from 400 ppm. The poster Frank is clearly referring to the pre-industrial baseline, which is 280 ppm. That’s also the benchmark used for ECS calculations. Double 280 ppm is just 560 ppm. At current rates (2.3 ppm per year over last 10 full years) we’ll make that by early 2080s.

CommieBob: By almost certain to double, I did mean from 280 ppm to 560 ppm. (I’d already referred to the current nearly 50% increase in CO2.)

The phrase “almost certain to double (reach 560 ppm)” was an opinion, not a fact. Here is my reasoning.

The amount of CO2 in the air in the future will obviously depend on two factors: how much fossil fuel we burn (plus some minor sources, like the CO2 released in making concrete), and what fraction of the CO2 in the air is taken up by natural sinks.

Currently we are emitting enough CO2 to raise the level in the atmosphere at 4 ppm/yr and about 2 ppm/yr is disappearing into sinks. If that trend continues for another 75 years, that will add 150 ppm and reach 560 ppm (doubling). The future is more likely to be worse than the current trend, not better:

1) Scientists project that some sinks will begin to saturate. I have not seen any convincing evidence that saturation has begun.

2) Since coal is our most abundant fossil fuel resource, there is a good chance that we could be using more coal in the second half of the 21st century. Making electricity from coal emits about 4-fold more CO2/MWh than from natural gas. If needed, petroleum can be made from coal for about $100/barrel.

3) Attempts to limit emissions have been a failure so far. IIRC, emissions from the less-developed world grew 15% under the Kyoto protocol and would grow another 15% by 2030 under the Paris accord even if all countries met their voluntary commitments. Those commitments are contingent on aid from the developed world that is unlikely to occur. There are about 4 billion people on the planet who want to emulate the Chinese and grow their economies using cheap fossil fuel.

4) Google has demonstrated that no current renewable technology for producing electricity is competitive with natural gas and coal, even if the government guarantees a market. Since electricity can't be stored or transported long distances for a reasonable price, electricity is the most perishable product in the world. When asked why worn-out, obsolete wind turbines were not being replaced, one wind farm owner said: "No one around here wants to buy electricity when the wind is blowing. I'm waiting until the government guarantees that someone will buy my electricity."

5) No one has even attempted carbon capture and storage on a commercial scale.

6) With a record of a major accident about once a decade, nuclear energy will have trouble expanding with today's technology. A much-needed crash program to explore safer designs will require decades to implement and validate.

Sure, a cheap source of renewable power could change this picture, but even that would take decades.
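The trend arithmetic opening this projection (4 ppm/yr emitted, 2 ppm/yr taken up by sinks, 75 years to doubling) can be sketched directly. The starting level here is an assumption of roughly 410 ppm, and this is an illustration of the commenter's arithmetic, not a forecast:

```python
# Net atmospheric CO2 rise: emissions minus natural sink uptake.
current_ppm = 410.0        # approximate level at time of writing (assumption)
emissions_ppm_yr = 4.0     # CO2 emitted, expressed as ppm of atmosphere per year
sink_uptake_ppm_yr = 2.0   # taken up by oceans and biosphere per year
net_ppm_yr = emissions_ppm_yr - sink_uptake_ppm_yr   # 2 ppm/yr net rise

doubling_target = 2 * 280.0                          # 560 ppm, pre-industrial doubled
years_to_double = (doubling_target - current_ppm) / net_ppm_yr
print(f"{years_to_double:.0f} years to reach {doubling_target:.0f} ppm")
```

At a constant 2 ppm/yr net rise, 150 ppm of headroom takes 75 years, matching the figure in the comment.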

I am very familiar with MODTRAN. I did not mention it because it is not worth mentioning. It is another grossly simplistic model of the upper atmosphere that, like all atmospheric models, is built on no data and a limited understanding of the mechanisms. Here are Willis Eschenbach's comments

Tim: Thank you for the polite reply. Unfortunately your reply is as misleading as your post.

Radiative transfer calculations, such as those done by Modtran, are not “grossly simplistic models”. The physics of the interactions between GHGs and thermal infrared (the only mechanism by which heat leaves our planet) are part of quantum mechanics, a theory that has been tested for a century. The absorption coefficients needed to use this theory have been carefully measured in laboratories and initially compiled for use by aeronautical engineers, also long before CAGW (for the major GHGs). The predictions of radiative transfer calculations have been tested in the field many times. While not perfect, validation is beginning to be limited by the accuracy of observations that can be made in the field. For example, see:

When doing radiative transfer calculations, one needs to specify the temperature, pressure and composition (mostly humidity and clouds) of the atmosphere at all altitudes. For global radiative forcing, one needs to specify this for a representative sample of the atmospheric conditions found on the planet. The standard estimate of radiative forcing for 2XCO2 (3.7 W/m2) obtained temperature, pressure and composition data from observations (re-analyses). Climate models do radiative transfer calculations using about 1 million grid cells. The problem with climate models is that they generate their own temperature, pressure and composition data for each grid cell, and that requires correctly calculating convection, evaporation, condensation, and wind in all of those grid cells. AOGCMs and weather forecast models are fairly lousy at these tasks. They can't get the changing stratospheric winds associated with the QBO right without a fine mesh of grid cells in the stratosphere, but those altitudes aren't important to the radiative forcing from rising GHGs.

Willis's "mystery" is easily solved. Willis says Hansen claims that the radiative forcing from 2XCO2 is 4.5 K. Willis doesn't know that Hansen is talking about effective radiative forcing – forcing after the temperature, humidity and clouds in the atmosphere have fully adjusted to 2XCO2, but the surface has not. The effective radiative forcing for 2XCO2 ranges from 2.1-4.6 K for the CMIP5 models. The latest versions of Hansen's GISS models actually have an effective ECS of 2.1 and 2.3 K. (:)) Modtran calculates the "instantaneous radiative forcing", the change in OLR before any adjustment occurs. There is no reason instantaneous and effective radiative forcing should agree.

Adjustments to temperature data from urban areas are a bit of a black art, because the distortion won't be linear (new buildings and roads, changes to factories, changes to traffic patterns, etc., and distances from the station will vary) and it might not be homogeneous with regard to winds from different directions.
The usual practice is to adjust all data according to what was being recorded near the end of the period for which the station was at a given location, but the distortion might have been far greater then than it was in the past, which means that earlier data gets over-adjusted.
Most people live in urban areas so in that respect they are impacted by rising urban temperatures, but what’s needed is a reference network of purely rural stations (site criteria yet to be defined) so that we might understand what’s really happening with temperature. Maybe we should use only maximum temperatures too because they are less likely to be affected by large shadows falling across weather stations.

One December I stayed at a hotel in downtown Charlotte. Some jerk pulled the fire alarm in the middle of the night and we had to evacuate the building. Despite me not being dressed for the cold, we all stayed warm by staying near the building. The bricks and concrete of the hotel radiated enough heat to keep us warm.

At my brick house, we have plants planted next to the house. Despite many subfreezing nights, the plants next to the house have yet to be overcome by frost. The bricks radiate enough heat to keep the plants protected. During every snowstorm you can always tell where the concrete is because the snow isn’t nearly as deep. You cannot convince me UHI is not real and significant because I see firsthand the effects of concrete and bricks every winter.

Of course, you are correct. In the last 150 years, thousands of square miles of forests and farmland have been turned into suburbs, concrete, and pavement. It's absurd to think it hasn't influenced temperatures in those areas affected by land use changes.
SM loses credibility when trying to convince anyone otherwise. At some point common sense dictates conclusions.

Thank you for the data, except of course "Global" is not Global, is it?
The North Pole does not show a trend, but a plateau and steps, hardly consistent with steadily increasing temperatures due to CO2.
The South Pole shows no trend, interestingly does not show the 2016 El Nino, and the Southern Oceans actually show a negative trend.
Oops

I would think night-time temps in places like the Texas Panhandle and western Oklahoma and Kansas are greatly affected by irrigation. Without the billions of gallons of water pumped annually, these areas would be straight-up deserts, with much cooler nights and warmer days.

No, not deserts. Quite suitable for "dry-land farming", namely winter wheat. Rain is not very dependable; you may only get a decent crop 2 or 3 years out of 5. 30+ bushels is good; 15 or less is not cost-effective to harvest. Irrigated corn or soybeans are far more dependable and profitable. Corn yields average around 160 bushels an acre. I'll leave you to do the math.

yes irrigation is a factor in some places.
There are even places where the city sucks so much water from its rural surroundings that the rural area can be warmer than the city. A seasonal issue documented by Oke and Grimmond, as I recall.

Now wait just one darn moment . . . the third graphic in the above article indicates that, compared to average temperatures in standard rural areas (most representative of what 99.9+ % of Earth’s land areas looked like more than 300 years ago; i.e., before large scale industrialization), urban residential and commercial areas are now running about 1.5-2.0 C hotter and downtown city areas are now running as much as 3.5 C hotter.

The IPCC and CAGW alarmists are all atwitter over Earth warming by 2-3 C by year 2100. According to them, Earth will be essentially uninhabitable if this much warming occurs.

I think it is past time to inform them that the large scale experiment has already been completed, and that people living in cities and suburbs around the globe are alive and well—and generally finding life quite satisfactory—at even higher temperatures than their projected “catastrophic limits”.

“I think it is past time to inform them that the large scale experiment has already been completed, and that people living in cities and suburbs around the globe are alive and well—and generally finding life quite satisfactory—at even higher temperatures than their projected “catastrophic limits”.”

There is one obvious flaw in that argument….

The subject at hand is GLOBAL temperature not REGIONAL temperature. UHI will never melt icecaps, or cause a slowdown in the PJS.
Despite what you may think, this has not happened yet, it lies decades/centuries in the future.
IOW: any UHI (that occurs on still sunny, summer days) is an effect outside of the ‘grander scheme’ of future GLOBAL warming – the consequences of that cannot be equated to the situation present for people very locally now.

I published “Climate Change: Myths & Realities” in 2008 and “Annexure-I: Weather Aberations Perspective of Dry-land Agriculture in Andhra Pradesh” in 2010. These are available online at http://www.scribd.com and Google Books. In the 2008 book, Chapter 7 presents “Ecological Change”, pages 103-125; on page 112 I presented vertical and horizontal sections [the same as that given in the present article] of the urban heat island effect.
Luke Howard, an amateur meteorologist in England, first recorded the heat-island effect. Beginning in 1807, he started comparing temperatures from several sites within London with those measured a few miles beyond the city’s edge, and through the years he noticed that the city was consistently warmer. Howard wrote in his book “The Climate of London” in 1818: “under the varying circumstances of different sites, different instruments, and different positions of the latter, we find London always warmer than the country, the average excess of its temperature being 1.6 degrees.” Today, the effect is more noticeable. In the largest cities, average temperatures can run 5 to 10 degrees Fahrenheit hotter than surrounding areas.
Kenneth Chang presented a report in the New York Times, which was reproduced by the San Jose Mercury News on August 22, 2000: “Urbanites feel the heat when cities replace trees and greenery with buildings and blacktop” [I was there at that time and collected the map from the San Jose Mercury paper]. On page 125, I presented surface temperatures on a summer day in 1998 [11 AM, June 30] in downtown Sacramento – blue areas are vegetated and relatively cool, 77 to 86 degrees; red areas are 120 degrees and above [Figure on page 117].
On page 138, I presented global average surface temperature anomaly data [NASA] along with satellite data. The satellite data clearly showed no warming trend. Later, that satellite data was removed from the internet. The revised satellite data follows the global average temperature anomaly data. In fact, the previous version was realistic, but the later one was modified to fit the warming.
Ecological changes include both heat-island and cold-island effects. The surface data was not adjusted for the cold-island effect, so we get a warmer condition.
The so-called heat-island effect correction is highly bogus, as in urban areas the heat-island effect may not affect the met station, but it changes the lapse rate. This is also true of the cold-island effect. Under the heat-island effect the temperature increases over the standard lapse rate with height, and it decreases with the cold-island effect. Thus, the balance reduces the temperature at the land surface. This was clearly reflected in the satellite data [original]. But this was replaced with a warmer pattern to cooperate with the warmists’ propaganda. In this there is no role for carbon dioxide.

Regarding UHI. You are familiar with Agr Canada and their network of research stations. Near my home is the Lethbridge Research Station (LRS) …you have probably been there. It’s on the eastern border of Lethbridge. They are proud of their 104-year (?) weather station.

A few years ago a retired researcher published an article in the Leth Herald and said temperatures in southern Alberta had risen 1.75 C° in 100 years. I had previously downloaded both the rural airport (YQL) data back to 1938 and the LRS data back to 1907 or so and done comparisons, so I knew the 1.75 number was not correct. Before the article, I had noticed that with time there was a divergence of the mean annual temps at YQL and LRS (only about 10 km apart), with LRS getting warmer and warmer. The rural airport station is in the country near the small airport, and the LRS station is now only 500 m from the city limits; the land to the west has changed from open fields to a fully developed industrial area, full of buildings and pavement. The city had expanded a lot in the previous 40 years. And you know our predominant winds are from the SW and W, blowing right over the city and over the LRS station.

I calculated the mean annual temperature difference between YQL and LRS and plotted the difference against the population growth of the city. The R-squared value was 0.93. As the city grew, the LRS station was recording higher and higher temps. The differences at any moment can be as much as 4 C° and are most noticeable on calmish winter nights and fall evenings when a breeze is wafting over the city. For many years, the mean annual temp at LRS has always been warmer than the airport and another accredited wx station just 3 km from LRS which is not tainted by UHI; it sits a bit lower and so should indeed be warmer, but it is not.

Subsequently, I had some correspondence with LRS staff who responded by saying they had good equipment, but they never challenged my claim. Why would they? Many LRS researchers are believers and probably a few base research grants on global warming for which they claim to have data…tainted as it is.
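The regression described above (annual YQL-LRS temperature difference against city population) is straightforward to reproduce in outline. The arrays below are made-up illustrative numbers, not the actual Lethbridge data:

```python
import numpy as np

# Hypothetical inputs: city population (thousands) and the annual mean
# temperature difference, LRS minus YQL, in degrees C.
population_k = np.array([50., 55., 61., 68., 74., 81., 87., 92.])
temp_diff_c = np.array([0.4, 0.5, 0.7, 0.9, 1.0, 1.2, 1.3, 1.5])

# Ordinary least-squares line and its coefficient of determination.
slope, intercept = np.polyfit(population_k, temp_diff_c, 1)
fitted = slope * population_k + intercept
ss_res = np.sum((temp_diff_c - fitted) ** 2)
ss_tot = np.sum((temp_diff_c - temp_diff_c.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope = {slope:.3f} C per thousand residents, R^2 = {r_squared:.2f}")
```

A high R² here shows association, not mechanism, but combined with the station-siting history described above it is consistent with a growing UHI signal at LRS.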

“ECMWF routinely processes data from around 90 satellite data products as part of its operational daily data assimilation and monitoring activities. A total of 40 million observations are processed and used daily; the vast majority of these are satellite measurements, but ECMWF also benefits from all available observations from non-satellite sources, including surface-based and aircraft reports.”

40 million input observations per 12-hour forecast run: far more observational input to WX forecasts than has ever been possible before. There are newer and better methods to gather and process area observation data, much faster and in greater volume than ever before. Asserting or implying that global WX forecast models lack sufficient reliable input or forecasting ‘skill’ is clearly not the case. Weather forecast models have never been more useful or accurate than they are today. A general assertion or allusion that more ground stations in the past equated to better forecasting in the past is a crock, IMHO.

There are several to-and-fro arguments between Steven Mosher on the one hand and several commenters on the other regarding rural data. Has anyone used the same source raw data as Steve M, verified, disproved, or eliminated stations, and put the result to him? If they have, I apologize, but I haven't seen it.

At the moment, for me, there's nothing but my personal experience. As a leisure cyclist in (very) rural France, all I add to the argument is that rural should mean a field in the middle of nowhere; entering even a small hamlet of a few dozen houses on a still, sunny day, the change in temperature is noticeable.

Maybe we need a system of land based Argo type monitoring stations placed at regular intervals to get past the arguments about the reliability of measurements, as well as having to extrapolate temperatures over 2000 miles in the Arctic and elsewhere.

The urban heat island effect obviously affects the local climate in urban areas; and we know that urbanization has increased substantially in the last 120 years.

So does the UHIE therefore increasingly affect the overall climate globally as urbanization continues to increase?

Or, does the UHIE only result in a redistribution of both atmospheric heat energy and rainfall within a stable global atmospheric system?

It seems to me that the latter would be the case, because UHIE does not have any effect on the atmospheric heat absorption, greenhouse effect or not, and the overall energy balance of the planet remains constant but for outside forcing factors, like solar irradiance, catastrophic vulcanism, impacts from large asteroids, etc.

The constant battle about UHI is just about to send me into meditation! UHI exists and it is, in fact, what people live in on day to day basis. It’s hotter in the big city than in the country, no need to argue about it. It doesn’t mean that the world is heating up, it means that the city is big enough to influence the local temperature, and possibly the local humidity (see Phoenix, Arizona).

What is the temperature in real rural areas? Get calibrated weather stations placed there and we’ll find out. It would be an interesting exercise to have a fish-eye camera on each station to record the current environment when the data was collected. The implementation of that feature is an exercise for the reader.
CO2 collection at the same time would give us a ground level (+2m) indication of that environment.

Publishing the enthalpy of the location would eliminate a lot of the arguing at a scientific level, but I suspect that Politicians don’t want anything that could constrain their ability to seek control of our lives.

“It would be an interesting exercise to have a fish-eye camera on each station to record the current environment when the data was collected. ” I took a 360 degree panoramic photo of each of our weather stations when they were installed, or shortly thereafter.

I’ve done water research. Yes, it is quite difficult to measure precipitation accurately, and an order of magnitude harder to measure snowfall. The best you can do with snow is melt it and record it as water-equivalent snowfall. Otherwise the standard measure for snowfall is to set up a square meter table, and once an hour measure the depth of the snow, then sweep it off.

Also for consideration are the large numbers of power plant cooling towers mandated in the last 50 years, the advent of air conditioning, and the large industrial coolers evaporating tons of water for each high-rise building in the city. There is a large shopping mall near me on the edge of the city. The man-made cloud created by the dozen or so industrial coolers needed to cool the mall is visible at least 300 days of the year. On calm days you can see the water vapor cloud rise like smoke from a chimney up to about 5,000 feet, where it slowly drifts to the east (or whatever way the wind is blowing that day up there). The cloud it creates is about half the size of the one created by the ~1,000 MW power plant nearby. To claim this has no effect on global temperature is ridiculous.

1) How many remember when Bender led the way in teaching about the three dimensions of uncertainty, wherein the data uncertainty is generally the *best*? (The others are model and model-parameter, with model uncertainty quite frequently blowing the whole thing out of the water.)

What I’m seeing here is vigorous discussion of various unworkable climate “models”. Folks, there are tons of variables we’re clueless about in the bigger picture.

2) I’ve learned over the last few years that the uncertainties involved in temperature measurements are generally not being handled properly. I have a relative who was a physics major (so he understands this stuff), and is senior enough at the world’s leading standards-calibration equipment manufacturer (Fluke and subsidiaries) to be able to point me to some nice resources… Here’s a pointer: They (flukecal.com) have a nice public reference library on related topics. Insights on things like the real uncertainty in our measurement devices, the mathematical impact of making a series of measurements of unknown temperatures (vs a series of measurements of a supposedly-known temperature) etc.

(If I had more time I’d go look up the docs again but that will have to wait.)

UHI is real and easily measurable and verifiable, as has been shown over and over. Add land-use change and a lot of irrigation around the planet, including cities that are themselves heavily irrigated, on top of a changing albedo that absorbs and radiates more thermal heat, especially at night. It has been demonstrated time and time again that, on balance, what we change we generally make more efficient at absorbing and retaining heat. It isn't just on cloudless, windless sunny days that a city is subject to UHI.

Just think of a city like Winnipeg in the middle of winter, when it is -40 in the countryside: a city/suburbia of nearly 1 million that is heating every single building with natural gas or electricity, and all those buildings slowly radiating heat as they cool, day and night. Include the thermal exhaust of tens of thousands of trucks and automobiles that are also releasing water vapour and CO2. Just look at an infrared aerial photo of a city in winter. The temperature in downtown Winnipeg might be a full 5-7 degrees warmer than a rural farm 50 miles away in the same atmospheric conditions. I remember well every winter growing up on the prairies how the airport a few miles outside the city was almost always 2-3 degrees colder than the TV station downtown, and the weatherman used to always point this out. In spring or fall, it would always freeze earlier at the airport than in the city. UHI is probably more responsible for the rising global warming temperature record than even natural variation since pre-industrial times. We should be measuring from pre-LIA, not 1850. We should be celebrating the additional warmth we humans have been able to geo-engineer over the last 100 years. It sure is better than the alternative, cooling.

If it is any consolation to the carbon cult, large cities already have a much higher CO2 background level locally, higher than what is ever projected for the rest of the world ever. As Bob Dylan sang in the mid 1960’s, “You don’t need a weatherman to know which way the wind blows”.

I trawled through GHCN and HadCRUT when the argument was burning hot. While I understand the difficulties in getting the data, it struck me that it really couldn't be used for any serious analysis. The aliasing of the temporal data and the paucity of spatial data make any numerical sophistry suspect. (I am a Brit, so I don't like to be rude.)

Cities and high-rise buildings change the reflection angle (inclination) of light and therefore its emissivity.
Some cities should be looked at as artificial mountain ranges.
Has anyone checked the difference in lapse rate between the ground and a 100 m high-rise roof? The temperature on the roof should be colder than the ground, but at different times of the day I bet this is the opposite, because the buildings shade the ground while the roofs are bathed in sunlight. High-rise buildings are made of concrete and steel and are giant antennas that help transport electrons and ions from ground to atmosphere, much the same as this process: http://www.australianrain.com.au/technology/howitworks.html

“Planetary temperatures – The planets are solar thermal collectors on a large scale. The temperature of a planet’s surface is determined by the balance between the heat absorbed by the planet from sunlight, heat emitted from its core, and thermal radiation emitted back into space. Emissivity of a planet is determined by the nature of its surface and atmosphere.[5]
Temperature measurements – Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; no actual contact with the object is needed. The calibration of these instruments involves the emissivity of the surface that’s being measured.[6]”
https://en.wikipedia.org/wiki/Emissivity

Dr Ball, I respectfully suggest you clarify your units in your first paragraph. When I read it I first thought you meant CO2 makes up 4% of the atmosphere and knew that was wrong. When I looked at it again I realized you were saying it’s 4% of the greenhouse gases, a subset of the total atmosphere. Given how many people think CO2 is a much larger chunk of the atmosphere than it really is, I think it would be better to clarify the actual greenhouse percentage of the total atmosphere. That ensures readers without much technical background can properly understand what you are saying.
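The unit confusion this commenter raises can be made explicit with a one-line conversion; the concentration below is an approximate present-day figure assumed for illustration:

```python
# CO2 concentration expressed against two different denominators.
co2_ppm = 410                           # parts per million of the whole atmosphere
pct_of_atmosphere = co2_ppm / 10_000    # 1% = 10,000 ppm, so 410 ppm = 0.041%

# The article's "4%" is CO2's share of the greenhouse gases alone, a category
# dominated by water vapor; that is an entirely different denominator.
print(f"CO2 is about {pct_of_atmosphere}% of the atmosphere, "
      f"but roughly 4% of greenhouse gases by volume")
```

Stating both percentages side by side, with their denominators, avoids the misreading the commenter describes.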

The “Warmers” claim that they have a mathmatcal formulia which they use to remove the UHI effect. So what about a similr formulia to convert the Satillate readings near the ground to the same as the ground based weather station.

As it’s a simple matter of checking the results against, say, the balloons, then let’s do it.

The idiot BEST temperature record by climate fraud Richard Muller used an ass-backwards second-derivative heat-island assumption to bias his record towards global warming. He assumed that heat-island effects would be smaller for rural stations when, as every economist knows, it is the FIRST increments of change that tend to be the most effective at creating change.

The first building 40 feet from your “rural” weather station creates the most disturbance. Muller assumed the opposite: that the marginal heat-island effect went UP as development proceeded. Some “genius.”
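The diminishing-increments claim can be illustrated with the classic Oke-style empirical fits, which relate maximum UHI to log10 of city population (coefficients differ between Oke's North American and European samples; the a and b below are illustrative round numbers, not his exact fit). Under any logarithmic relation, each added increment of development matters less than the last:

```python
import math

def uhi_max(population, a=2.0, b=-4.0):
    """Oke-style empirical fit: maximum urban heat island (degC) as a
    linear function of log10(population). Coefficients here are
    illustrative; real fits vary by continent and study."""
    return a * math.log10(population) + b

# Marginal effect of adding 1000 people at two different city sizes:
small_town = uhi_max(11_000) - uhi_max(10_000)
big_city = uhi_max(1_001_000) - uhi_max(1_000_000)
print(f"+1000 people at 10k pop: {small_town:.3f} degC; "
      f"at 1M pop: {big_city:.5f} degC")
```

Whether that logarithmic shape supports the commenter or Muller is the crux of the dispute below; the sketch only shows what "first increments matter most" means quantitatively.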

But I believe it is not actually that Muller is a moron. It turns out that Muller and his daughter run a consulting company that makes much of its money off global warming alarmism. That old saw about not attributing to evil what can as readily be attributed to stupidity? Hard to apply to a top physicist.

The bastard is EVIL. He plotted the whole thing, first positioning himself as a skeptic, the better to give credence to his phony alarmist conclusions, the better to sell his rotten climate-alarm-based business model, but his error is so blatant that it completely reveals him. Impossible to make such a STUPID error innocently when we know for a fact he is not actually anywhere near that stupid.

“The idiot BEST temperature record by climate fraud Richard Muller used an ass-backwards second-derivative heat-island assumption to bias his record towards global warming. He assumed that heat-island effects would be smaller for rural stations when, as every economist knows, it is the FIRST increments of change that tend to be the most effective at creating change.”

Wow, let’s see. This is why WUWT is seen as an echo chamber: you accuse a man of fraud with zero evidence.
Nice place you got here!
Now some points:
1. For our urban study there were NO signs of urban built area for 10 km. We also tested 25 km, same result.
2. We are talking UHI, not economics. Everything you said about the first increments is wrong and not borne out by any data.

“The first building 40 feet from your ‘rural’ weather station creates the most disturbance. Muller assumed the opposite: that the marginal heat-island effect went UP as development proceeded. Some ‘genius.’”

1. A random assertion by a person prone to making baseless and libelous charges of fraud.
2. Muller actually made no such assumption.
3. You have no evidence for your assertion.
4. The microsite study also shows you are wrong.

“But I believe it is not actually that Muller is a moron. It turns out that Muller and his daughter run a consulting company that makes much of its money off global warming alarmism. That old saw about not attributing to evil what can as readily be attributed to stupidity? Hard to apply to a top physicist.”

1. Wrong; the consulting company makes no money off alarmism.
2. The consulting business has ZERO funding.
3. The focus is actually on nuclear waste storage.
4. Muller’s view on extreme-weather attribution is that it is mostly bunk.

“The bastard is EVIL. He plotted the whole thing, first positioning himself as a skeptic, the better to give credence to his phony alarmist conclusions, the better to sell his rotten climate-alarm-based business model, but his error is so blatant that it completely reveals him. Impossible to make such a STUPID error innocently when we know for a fact he is not actually anywhere near that stupid.”

It has been considered, and the response was to ensure that the people who do not conceal the effect are blacklisted for grants. There must be a very large number in this position, as both my daughter and I have worked with or for someone in this position. The probability of this, given the wide difference in our occupations, is highly remote.

Steven Mosher: It is very hard to believe that there is no UHI effect visible in the data. One obvious reason for this is that you are talking about anomalies while people here are talking about absolute temperatures. But still, an example from Canada provided by Clyde Shaupmeyer here suggests that the anomaly is not constant. What is your opinion?
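The anomaly point can be made concrete: a constant UHI offset cancels when each station is expressed as departures from its own baseline mean, but a UHI that grows over time survives into the anomaly trend. A minimal sketch with made-up numbers:

```python
# Two hypothetical station records sharing one climate signal.
years = list(range(2000, 2010))
climate = [0.02 * (y - 2000) for y in years]   # shared warming, degC
rural = climate[:]                             # no UHI
constant_uhi = [c + 2.0 for c in climate]      # fixed 2 degC urban offset
growing_uhi = [c + 0.1 * (y - 2000)            # UHI growing 0.1 degC/yr
               for c, y in zip(climate, years)]

def anomalies(series):
    """Departures from the series' own mean (the usual anomaly baseline)."""
    base = sum(series) / len(series)
    return [round(x - base, 6) for x in series]

# A constant offset vanishes entirely in anomalies...
print(anomalies(constant_uhi) == anomalies(rural))        # True
# ...but a growing UHI does not: its anomaly trend is steeper.
print(anomalies(growing_uhi)[-1] > anomalies(rural)[-1])  # True
```

So "no visible UHI" in anomaly-based comparisons is only guaranteed when the urban contamination is constant in time; a changing one, as in the Canadian example, shows up in the trend.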
