Announcing the first ever CONUS yearly average temperature from the Climate Reference Network

UPDATE: NOAA plans to release SOTC at 1PM EST today. Look for updates soon and a special report on today’s release. The map below will automatically update when we have the new December COOP Tavg value, probably later today. I’ll have another post on the differences between the CRN and COOP in the near future. – Anthony

Being a state-of-the-art system, it is well sited and requires no adjustments, and the data are spatially well distributed by design so that they are representative of the CONUS. Here’s the current plot (click to enlarge):

Each small blue number represents one of the NCDC-operated U.S. Climate Reference Network stations in the CONUS that we use. Here are the data reports for December and the entire year:

SUMMARY
National Average of Monthly Mean Temperatures = 2.6°C or 36.6°F
National Average of Monthly Average Temperatures = 2.6°C or 36.7°F

EXCLUDED STATIONS The following stations reported no data (-9999.0) for either T_MONTHLY_MEAN or T_MONTHLY_AVG and were not used:
CRNM0101-PA_Avondale_2_N.txt

================================================================

From the NCDC-provided FTP data files we can calculate a yearly CONUS Tavg, which to my knowledge has never been done before by NCDC. Odd that it falls to somebody outside the organization, don’t you think?

Climate Reference Network Data for 2012

Month   Tavg (°F)
1       36.8
2       38.1
3       50.6
4       54.8
5       63.3
6       70.8
7       75.6
8       72.9
9       65.6
10      53.9
11      43.9
12      36.7
Sum     663.0
/12     55.25

Therefore, from these data, the average annual temperature for the contiguous United States for 2012 is 55.25°F.
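The table’s arithmetic can be replicated in a few lines (values taken straight from the table above):

```python
# Monthly CONUS Tavg values (°F) from the table above
monthly_tavg = [36.8, 38.1, 50.6, 54.8, 63.3, 70.8, 75.6,
                72.9, 65.6, 53.9, 43.9, 36.7]

# Simple mean of the twelve monthly averages
annual_tavg = sum(monthly_tavg) / len(monthly_tavg)
print(round(annual_tavg, 2))  # 55.25
```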

Note also the CRN value for July 2012, 75.6°F, far lower than the 77.6°F NCDC reported in the SOTC and the 76.93°F later entered in the database, as discussed here.

Makes you wonder why NCDC never mentions its new state-of-the-art, well-sited climate monitoring network in those press releases, doesn’t it? The CRN has been fully operational since late 2008, and we never hear a peep about it in the SOTC. Maybe they don’t wish to report adverse results.

I look forward to seeing what NCDC comes up with for the Cooperative Observer Network (COOP) in their “preliminary” State of the Climate Report for Dec 2012 and the year, and what the final number will be in 1-2 months when all the data from the COOP network comes in.

I’ll have more on this in the near future. I’ll be offline for the rest of the day traveling.

UPDATE: 10:30PM PST, Climatebeagle and others have been puzzled over the 117 stations used, and can’t reconcile with the larger list. Here’s the logic:

Some stations, such as Oak Ridge, TN and Sterling, VA, were removed because they do not report regularly or at all (they are test sites). The one CRN station in Egbert, Ontario, Canada is not part of the CONUS and is removed as well. None of the stations in Alaska are used, as they are also not part of the CONUS.

UPDATE2: 9:30AM PST, 1/8 Reader Lance Wallace noted a mistake, which has to do with version control on our end. One CRN station in Egbert, Ontario was inadvertently included in the monthly code, though it was not in the daily code we run. We’ll rerun it all and update. I’m thankful for the many eyes of WUWT readers – Anthony

145 thoughts on “Announcing the first ever CONUS yearly average temperature from the Climate Reference Network”

“NCDC has updated the Climate Reference Network Data for December 2012. I’m still waiting on the NCDC State of the Climate report to come in with their number, and I’ll update the graphic when it is available…”

Don’t you need to multiply each monthly average by the number of days in that month, add all the months up, and then divide by 365 or 366 to get an unbiased average?

REPLY: already handled in code; we took each station’s monthly Tavg (which NCDC calculates from daily data) and calculated a CONUS monthly Tavg. All the data are there in case anyone wants to replicate it independently. – Anthony
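For anyone replicating: the gap between a simple mean of the twelve monthly values and a day-weighted mean is small but nonzero for 2012 (a leap year). A quick check, using the monthly values from the table in the post:

```python
# Monthly CONUS Tavg values (°F) from the table in the post
monthly_tavg = [36.8, 38.1, 50.6, 54.8, 63.3, 70.8, 75.6,
                72.9, 65.6, 53.9, 43.9, 36.7]
days = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # 2012: leap year

simple = sum(monthly_tavg) / 12
weighted = sum(t * d for t, d in zip(monthly_tavg, days)) / sum(days)
print(round(simple, 2), round(weighted, 2))  # 55.25 55.31
```

So day-weighting shifts the annual figure by about 0.06°F, well below the CRN-vs-SOTC differences discussed in the post.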

I’ve been simple averaging the USCRN hourly data and 2012 consistently comes out as the hottest US year regardless of the stations. E.g. hottest since 2003 when only looking at stations with a complete record since 2003, hottest since 2008 when only looking at stations with a complete record since 2008 etc.

I’m not actually a great believer in averaging temps, but should a spatial weighted average be used, rather than a simple one? E.g. a couple of areas have two nearby stations rather than just one.

On the hourly numbers I calculated a USCRN 2012 yearly average of 12.1°C or 53.7°F, but I think my list of USCRN stations is different, since I have 124. Probably at least in part because I’m using all the USCRN stations, not just those in the CONUS.

However, is there a listing of WBANNO numbers for the USCRN stations? I haven’t found a simple list online, so I generated one manually and may have made mistakes.
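The spatial-weighting question above is usually answered by gridding: average the stations within each latitude/longitude cell first, then average the cells, so a pair of nearby stations counts once. A toy sketch (the coordinates and temperatures are invented; a real version would also weight cells by area, e.g. by cos(latitude)):

```python
from collections import defaultdict

# Hypothetical (lat, lon, temp °F) readings; the first two share a cell
stations = [(39.1, -96.6, 55.0), (39.3, -96.8, 55.4),
            (35.2, -111.8, 48.0), (30.5, -84.4, 68.0)]

cells = defaultdict(list)
for lat, lon, temp in stations:
    # Bucket stations into 2.5-degree grid cells
    cells[(int(lat // 2.5), int(lon // 2.5))].append(temp)

# Each occupied cell contributes one value, however many stations it holds
cell_means = [sum(v) / len(v) for v in cells.values()]
gridded_avg = sum(cell_means) / len(cell_means)
simple_avg = sum(t for _, _, t in stations) / len(stations)
print(round(simple_avg, 2), round(gridded_avg, 2))  # 56.6 57.07
```

The doubled-up location no longer pulls the average toward itself once its two stations collapse into one cell mean.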

A much needed corrective to NASA & NOAA’s cooked books. Appears close to one station per 26,666 sq. miles (with a few gaps), so, as you note, automatically adjusted for elevation, urban, rural & all other parameters.

A man with a watch knows what time it is.
A man with two watches is never sure….

So at a minimum this says that “station selection” has a 2 F variation in it. So much for the assertion that “station dropout” doesn’t matter…

That the NCDC/SOTC data / report is 2 F warmer and the CRN stations are supposed to be ‘the best’ strongly implies that the NCDC/SOTC data are skewed high by 2 F. As that is more than the “Global Warming” they claim to have detected, that ought to mean we are colder now rather than warmer.

As I’m experiencing a colder winter than in the ’90s that accords with my ‘reality check’.

Looks to me like it’s pretty clear that “Global Warming” is an instrument error artifact.

Someone, Somewhere, has massively cocked SOMETHING up if there is a difference as large as that between this temperature record for 2012 and the old record for 2012. (I make it about 10 degrees centigrade??)

Still a major difference, enough to put global warming scare stories into doubt, at least in the United States. And the temperature record for the rest of the world can’t be of any better quality.

Using the CRN monthly dataset, I get rather different numbers for 2008-2011 (about 52 F) compared to Anthony’s number of 55 for 2012. I haven’t downloaded the data for 2012 yet.
Here are the values for 2008-2011. These are obtained by averaging across all months of a year, rather than summing the monthly averages and dividing by 12 the way Anthony did, although I would think this makes little difference.

E.M.Smith says:
Looks to me like it’s pretty clear that “Global Warming” is an instrument error artifact
===============================================================
LOL – that is one way to look at it. Other ways are to call it human error, hubris, pseudoscience or just plain wrong.

Thanks Anthony !! I love it when you skewer NOAA with their own “data”. Great work, and sorely needed these days.

****** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ******
** This is a United States Department of Commerce computer **
** system, which may be accessed and used only for **
** official Government business by authorized personnel. **
** Unauthorized access or use of this computer system may **
** subject violators to criminal, civil, and/or administrative **
** action. All information on this computer system may be **
** intercepted, recorded, read, copied, and disclosed by and **
** to authorized personnel for official purposes, including **
** criminal investigations. Access or use of this computer **
** system by any person, whether authorized or unauthorized, **
** constitutes consent to these terms. **
****** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ******

This is hugely important, it is proof of conspiracy. The “hottest year ever” will still be announced, I fear, I think they figure more people will see that than read here. What they haven’t figured out yet is that the general public are losing interest, and rapidly. Particularly as COLD weather and record snows are causing such inconvenience everywhere – not to mention the deaths. These guys are just ramming their foot (feet?) deeper and deeper down their throats. Someone ought to trip them up while they are in that position – they deserve to fall flat (and much more!).

Problem is you can’t compare the averages without correcting for altitude differences.

For example, have a look at Roy Spencer’s average using ISH. Note that he does a lapse-rate adjustment. If you have 1000 stations at 500 meters above sea level and you average them, you come up with, say, 14C. Now average 1000 stations at sea level: the lower stations will be slightly warmer, per the lapse rate.

This is especially important if you have any missing data, as that will skew the answer even more.

From the looks of it, a simple average was computed taking no account of altitude differences. Even Roy understands why that matters.

You’d be amazed what a difference of 100 meters gets you. In fact, if you take CRN data and compare it to nearby stations (one CRN station has 14 ISH hourly stations nearby) you might find cases where the lower CRN station, although well sited, is warmer than the horrible ISH stations at airports.
Why? Because the airports happen to be at higher, colder elevations. For reference, there are around 400 ISH hourly stations within 100 km of CRN stations, so it’s not that hard to illustrate.

So, before you compare averages of absolute temperature you MUST ensure that the sampling distributions come from the same altitude, OR correct for lapse rate. Of course, if you work in anomalies you don’t have to account for this.

The environmental lapse rate is around 6.5C per km, or 0.65C per 100 meters. On average the CRN stations tend to be lower in altitude than other collections of stations. Not by a lot, but precision matters; after all, if you apply imprecise methods to gold-standard data, you lose what you thought to gain.
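The lapse-rate arithmetic in this comment is easy to make concrete. A sketch using the 0.65 C per 100 m figure cited above (the station values are invented):

```python
LAPSE_RATE_C_PER_M = 0.0065  # ~6.5 C per km, the figure cited in the comment

def to_sea_level(temp_c, elevation_m):
    """Reduce a station temperature to its sea-level equivalent."""
    return temp_c + LAPSE_RATE_C_PER_M * elevation_m

# Two hypothetical stations: same underlying climate, different elevations
print(round(to_sea_level(14.0, 100), 2))  # 14.65
print(round(to_sea_level(11.4, 500), 2))  # 14.65 -- identical once adjusted
```

A raw average of 14.0 and 11.4 would suggest the second site is 2.6 C cooler, when the entire difference here is elevation.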

Anthony, how can I trust your numbers when they have not been statistically mutilated? Having read a wide variety of “AGW consensus papers” it is clear to me that the man-made warming signal cannot be teased out of such noisy data without very sophisticated statistical mutilation of the raw numbers. The test for proper mutilation is that earlier-century temperatures move down from the raw values and latter-century values move up from the raw figures.

Steven Mosher – What is wrong with having the least amount of data alteration? If all the data stay as raw as possible, doesn’t that remove most of the criticisms of the mainstream data sets? Also, given that it is early afternoon on the east coast and mid-morning on the west coast, there are differences in how much of the day’s temperature change has already happened and how much is still to come. All of which I’m sure can be “corrected”, but shouldn’t the goal be to have as little “corrected” or modified data as possible? Shouldn’t consistency be the most important yardstick?
The point I’m intending to make is: shouldn’t what skews the data, and in what direction, be less important than consistent data usage?

It seems to me the whole exercise is fairly meaningless as a comparative measure is all that is required. There is no such thing as “absolute average temperature for a continent”. What is needed is information on whatever processes are used to generate what is in effect a “comparative average temperature”. They might in principle be doing anything.

EM.Smith “So at a minimum this says that “station selection” has a 2 F variation in it. So much for the assertion that “station dropout” doesn’t matter…”

Yep, that is a large difference. I have always suspected that the loss of data from remote stations (how did they manage that, must have been a big effort) and the “adjustments” (lol), and the lack of proper allowance for UHI and the increased reliance on urban temp data etc all contributed to the rise in the “global average urban land temperature” in the pre-1998 period

The satellite record shows two basically zero trend sections 1979-1997, and 2000 – 2012, with a step change in the middle.

I doubt there was much real warming in the 1970-1998 period at all, yet that is the period that the CAGW hoax is built on. The small rise in the land temp was all from data manipulation and lies.

Problem is you can’t compare the averages without correcting for altitude differences.

I believe that is an unnecessary complication of the problem. I do not want to know what the temperature WOULD be if the entire US were ironed smooth to sea level; I want to know what it *IS*. But more importantly, I don’t need a lapse-rate adjustment for surface stations if what I am interested in is the trend over time. A station at 6,000 feet in Colorado will remain at 6,000 feet for the rest of this interglacial. I am interested in the change in temperature over time, not in “correcting” it to sea level.

An average of all reported stations is good enough when those stations do not require correction for station hijinks and UHI. It just is what it is. If you start messing around and adjusting for lapse rate, that opens the door to making all sorts of other adjustments. What about a wind-direction adjustment? In some places one can get a very warm condition when wind is blowing from a certain direction and one gets adiabatic warming from air flowing downhill (Chinook or föhn winds). Conditions are quite different when the wind blows in the other direction.

No, let’s just leave things as they are. Part of the temptation to do this comes from the desire to create a “fill” value where one is missing. That’s bogus, too, because in many parts of the country there are microclimates that make doing that an exercise in futility anyway. Trying to “fill” a missing value at a California station, for example, by using a value from a distant station is likely to be futile. Which CRN station are you going to use to fill data for a missing value at Truckee? You will notice on the map above that no station anywhere around it has anything like Truckee’s temperatures. Once you start doing adjustments, it is over.

Mosh should demonstrate altitude correction in the current surface data set; as far as I know it is not there, either in the straight averaging they may have been doing or in the climate divisions method. Can’t comment much, as I’m using my cell phone, and I’ll be driving again shortly.

Re: Government warning. Yes, indeed, the NOAA ftp site link given at the end of this post does indeed lead you to a file directory with the scary U.S. Gov’t warning message. Do I need to start looking over my shoulder?

See, the “Mosher” is an unhappy bunny. It’s not just about this data… it’s the way you AGW crowd have got it wrong again and again. After the Aqua satellite debacle this should have been over in 2002… has anyone ever discovered the missing heat in the troposphere? In 2004 AGW became “Climate Change”… after all, it sounded so much better than “Global Warming/Freezing”, which was what some of your comedians were debating… refer to “Climategate” on this site. Today the MET Office reckons no warming again for a long time yet… the technical reason for this is that the Hansens, Gores, Kings, Manns, Trenberths, Joneses et al. will be long gone.

One explanation for the somewhat different averages listed above is the very confusing choice made by the CRN data group of descriptions of two different numbers. One is the “traditional” Tavg = (Tmin+Tmax)/2. This is described as T_Monthly_mean in their data description quoted below. The other is the average of all available “continuous” measurements, i.e. the hourly averages across the entire month. This they call T_monthly_avg. This latter measure is much closer to what most people would consider the “true” average. I discussed the difference (with maps of all the stations) in a guest post or two a few months ago. We can see for example that almost all stations have a consistent difference across all years (either positive or negative), averaging about 0.5 C, between the “traditional” average from the min and max measurements and the better estimate using hourly averages.

This is the CRN definition of the two terms.
cols 57 — 63 [7 chars] T_MONTHLY_MEAN
The mean temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2

cols 65 — 71 [7 chars] T_MONTHLY_AVG
The average air temperature, in degrees C, for the month. This average
is calculated using all available day-averages, each derived from
24 one-hour averages. To be valid there must be less than 4 consecutive
day averages missing, and no more than 5 total day averages missing.
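Given the fixed-width column definitions quoted above, the two fields can be pulled out of a monthly record like this (a sketch; -9999.0 is the missing-data sentinel mentioned earlier in the post, and the example record is fabricated):

```python
def parse_monthly_temps(line):
    """Pull T_MONTHLY_MEAN (cols 57-63) and T_MONTHLY_AVG (cols 65-71)
    from a CRN monthly record. Column numbers are 1-based and inclusive,
    so they map to Python slices [56:63] and [64:71]."""
    mean = float(line[56:63])
    avg = float(line[64:71])
    # -9999.0 is the missing-data sentinel; report it as None
    return (None if mean == -9999.0 else mean,
            None if avg == -9999.0 else avg)

# Fabricated record: 56 filler characters, then the two 7-char fields
record = " " * 56 + "   12.3" + " " + "   11.8"
print(parse_monthly_temps(record))  # (12.3, 11.8)
```

Excluding a station when either field is -9999.0 (as the post does) is then a simple None check on the returned pair.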

The difference between the N of 116 and 124 is basically that 7 locations have 2 stations (in one case 3) sited close by. My calculations weight every site equally, so actually have 124 sites in 116 locations (probably should have made that clear earlier, since my table listed only the 116 locations.) I believe this is better than averaging the two sites at the same city, since they are not true duplicates but often separated by some miles, and may be in quite different locales and subject to different meteorological conditions.

I think that properly we should just look at the individual stations to see how they are varying. Also since CRN is so recent, and “trends” are limited to about 4 years (4 datapoints) for the full network, I can’t think that a trend analysis would mean a thing until after about a decade or two. I do hope that this network, well planned and maintained as it is, will retain funding for the future.

Elevation, UHI, and the various other surface temperature data tweaks are red herrings. An “average global temperature” based on the atmosphere does not account for humidity, wind kinetic energy, or latent heat. No matter how many corrections we make, we’ll never have a meaningful apples-to-apples comparison.

Worse yet, this so-called “global temperature,” even with the best corrections for elevation, etc., is irrelevant compared to the energy balance of the oceans. The thermal capacity (BTU/°F) of the oceans is 1100 times greater than that of the atmosphere.

Instead, we’re trying to measure the most transient part of the system, the one with the most noise. Is it any wonder the climatasters are using arcane statistical methods to tease out a signal?

Oh, yeah, the fictional “anomaly” is supposed to fix all that… the one that isn’t done until it’s all ‘grid/boxes’ in the last step….

“Why? because the airports happen to be at higher colder elevations.”

Must not fly much… FYI, airports are typically built where there is a lot of flat land, as often as possible down on valley floors or even next to water (so a long approach can be made over a low, flat surface). Examples? SFO approach over the bay. Moffett Field approach over the bay. SJC San Jose approach over the bay. Not one of them up in the surrounding hills. ORD Chicago on flat land (as is all of Chicago near the lake). Denver down on the flatter part, downslope from downtown. Etc., etc., etc.

Folks only put airports on mountains and mountain tops when there is no alternative. One finds LAX down low, not in Beverly Hills… Even Reno and Lake Tahoe airports are on the flat land, not in the hills you can see from them… It is easier to get enough density altitude and runway speed on a lower flat runway than on a high bumpy one.

Yet it is widely done. GIStemp keeps temperatures AS temperatures all through the various transformations and creation of a fictional “grid/box” value (that they call a temperature). Then at the very end they make a ‘grid/box anomaly’ between two of these fictional grid/box temperatures. All fundamentally hokum due to averaging intensive properties and not dealing with enthalpy. But “it’s what they do”. So your instinct is sound.

I’ve averaged temperatures for the purpose of seeing the ‘shape of the data’, which causes some “climate scientists” to have a hissy fit, as they presume I think that results in a temperature when it doesn’t. (But it is a good way to see what the basic nature of the change in the numbers might be… bigger, smaller, more variation. Metadata about the data…)

With that said, to do it with some sanity you need to weight things for a variety of stuff that approximates enthalpy and sample bias. It ought to include areal weighting, altitude, distance from water, relative humidity, phase change of fluid (snow, ice, evaporation) and a few more. “Climate scientists” pick a couple from the list that would give an extrinsic property and ignore the rest. So you can ‘cherry pick’ a few too.

What I did was to never average a temperature. (Other than that the input I had available was a min-max average already… I really ought to re-do this with just the mins and just the maxs). Just do a ‘first differences’ style anomaly creation on one, and only one, instrument record at a time. I think that gives the cleanest view of what is going on. At that point, averaging the anomalies is valid.
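The first-differences approach described in this comment can be sketched as follows (the station series are invented; the point is that each station is compared only against itself, and only the differences are averaged across stations):

```python
# First-differences sketch: each station is differenced only against
# itself, so no cross-station averaging of absolute temperatures occurs.
stations = {
    "A": [10.0, 10.2, 10.1, 10.5],  # hypothetical yearly means, station A
    "B": [15.0, 15.1, 15.3, 15.2],  # hypothetical yearly means, station B
}

def first_diffs(series):
    return [b - a for a, b in zip(series, series[1:])]

# Average the per-station differences, then accumulate an anomaly series
diffs = [first_diffs(s) for s in stations.values()]
mean_diffs = [sum(col) / len(col) for col in zip(*diffs)]

anomaly, series = 0.0, [0.0]
for d in mean_diffs:
    anomaly += d
    series.append(round(anomaly, 3))
print(series)  # [0.0, 0.15, 0.2, 0.35]
```

Station A sits 5 degrees cooler than B throughout, yet that offset never enters the result; only the year-to-year changes do.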

What it shows is that at any given place any given month may be going up or down in trend. Overall, not much changing on the globe. Some nations warming, some cooling (often next door to each other). Overall impression is that it is “data are variable”.

Yet there are sea change moments, such as the point when the MMTS is rolled out, where a ‘jump’ happens. Is it the instrument or the ‘fix’ for the change? In either case, it’s not the reality, it’s the fiddling…

There is NO global warming. There are some instrument records for some places in some months that rise (often from the lows being lifted, not the highs getting hotter). Only averaging that in with all the ‘no change’ or ‘cooling’ places makes anything “global”, and that number is a data artifact ridden fantasy…

If you read the PDF you will discover that one aspect of the CRN network is to provide long-term, uninterrupted sites. However, they have planned for possible relocations due to owners’ requirements, failures, etc…

However, already they are talking about ‘adjustments’ to the data:

“(Collow et al. 2012). It is now planned that if a station must be removed for non-emergency reasons, such as the changing needs of the site host, there would ideally be one or two years of time to run a new USCRN station at a nearby site so as to develop an accurate calibration of the differences in climate between the sites and adjust the data of the discontinued site to match the new site. This process is currently underway for one station in Goodwell, OK, that is required to be removed because of unanticipated planned local LULC. Given such sufficient advance notice, ”

This assumes things that should be up for argument:
a.) We know that nearby sites have ‘correlated’ data, but the assumption that a universal long-term adjustment can be made is absolutely silly.
b.) If the replacement site is close enough (I am not sure how to properly define that), it is more reasonable to assume the new site is ‘equivalent’ to the old site, and any overlapping data should perhaps be averaged. But using an earlier or later site to permanently adjust the record just opens the door to further adjustments.

Even if one site is seen to be systematically warmer or colder over the approximately two-year overlap window, that would only show that micro-climate variations exist and that we cannot begin to account for them for every station. It just shows that weather varies between locales. Arguing that one site is a BETTER representative of the area than another, when both meet siting guidelines, is silly.

So I plugged all these values into Minitab. First, I was really surprised that the data are fairly well normally distributed. Second, summary statistics show the average as 36.6 with a 95% confidence interval of ±2.4. Assumptions of accuracy or precision better than 2.4F are BS. So the discrepancies between the data sets are, from a statistical standpoint, meaningless: both averages are tolerable estimates of the true population average. But it’s not known to within 0.1F, folks!
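A confidence interval of the kind reported in this comment can be computed without Minitab. A minimal sketch (the station values here are invented stand-ins, not the actual December data, and the t critical value of 2.36 is the one appropriate for this toy sample of 8):

```python
import math
from statistics import mean, stdev

def ci95_halfwidth(values, t_crit=1.96):
    """Half-width of an approximate 95% confidence interval for the mean.

    t_crit = 1.96 is the large-sample value; for small samples use the
    t critical value for n-1 degrees of freedom (about 2.36 for n = 8).
    """
    return t_crit * stdev(values) / math.sqrt(len(values))

# Hypothetical station monthly means (°F), standing in for the real list
temps = [30.1, 41.5, 36.2, 28.9, 44.0, 35.5, 38.7, 33.3]
print(round(mean(temps), 2), "+/-", round(ci95_halfwidth(temps, 2.36), 1))
```

With the real 100+ station values, the same two lines reproduce the mean-plus-interval summary the commenter describes.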

I still think this average business is foolishness. It has no real value for anything. A nice but only marginally useful number, it is just another piece of oversimplified metadata used by the spin doctors on all sides of the issue. The reality is it gets you nothing, except a pork-barrel grant, and is about as useful and meaningful as a politician. If we look at tightly defined geographic areas, altitude-corrected, etc., then at least we have a number that the people in those regions understand and may find useful. I think the land and water masses cause some difficulties that further confound a worldwide average into an even more meaningless number.

Possibly a silly question, but for an average yearly temperature, should the monthly averages be weighted by the number of days in the month? Or is the yearly average normally just the mean of the monthly averages, for comparisons over several years? I suspect there would be very little difference in the results when looking for trends over years or decades… but a plain mean of the monthly averages would be skewed a bit by an exceptionally warm or cold February, for example… or maybe the monthly averages are already somehow normalized?

More mistakes can happen when records from other countries are fed into the global compilations. CLIMAT data feeds are based on whatever a country supplies, not necessarily on (min+max)/2. About 40% of countries, the largest China and the former Soviet Union, use the mean of evenly-spaced observations. About 20% try to replicate a true mean through weighted temp averages at particular hours, mostly in Europe and Latin America.
It’s believed by some authorities that as long as this is consistent through time it doesn’t distort long-term trends, although it can affect comparisons between countries. Most have been consistent but some have changed methods or observation timing, sometimes conflicting with daylight saving.
The consensus seems to be that these conflicts negate each other and don’t cause a systemic bias in global temperatures. Personally, I remain to be convinced.
We know that Australian data fed to GISS from 1994 was on average .15C below the BoM’s HQ figures, an error that wasn’t noticed until around 2003, and the GISS and NCDC records from 1994 to 2004 weren’t corrected till 2009. Is it corrected now?
This happened because it was agreed that Australia send (min+max)/2 data to the US but forgot to do it for nearly a decade. A 0.15 deg discrepancy could be way, way off the mark, higher or lower.
Australia’s BoM processes raw data from observer sheets to produce a homogenised version, then a High Quality network of more than 100 stations that vanished fairly quickly, now a new version called ACORN including some different stations. There are periods of a year or more when these versions can differ at a nominated station by more than 1C.
Working back from deg C to deg F you find that if one place after the decimal is used, you can get two solutions for one conversion. If you try to rationalise, you find a problem that cannot be solved. It turns out that at many Australian sites, temperatures were recorded in whole deg F more than 30% of the time.
Working with grids and interpolation before sending numbers to the US: of course no interpolation scheme is perfect. One issue with a situation like the current one arises in areas with steep gradients and sparse networks, as Steven Mosher notes above. A good example is Australia’s Nullarbor. Long-term averages from Cook station go into the average fields, which are about 5-6C warmer in summer than the coast. But without any current Cook observations to “anchor” the analysis, on very hot days the anomaly from Nullarbor Roadhouse will be projected too far inland. So, for example, if you have a 45-degree day at Nullarbor (which is 18 degrees above average), that +18 anomaly will be applied to Cook’s 32-degree average to give an analysis of 50 – but in reality, on very hot days there’s usually little difference between the inland and coastal sites.
Personally, I think absolute values are far more important than trends, for one of the main uses is in proxy reconstructions which don’t seem to have a natural mechanism for change at country borders or when a new normal is introduced.
This whole topic suffers from the old hymn, “Build on the rock and not upon the sand” for the sands here are forever shifting.
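The whole-degrees-Fahrenheit point above can be checked programmatically: a Celsius value stored to one decimal place back-converts to within about half of one 0.1 C step (0.09 F) of a whole Fahrenheit degree if the original reading was in whole °F. A rough sketch of such a detector (the threshold and example values are for illustration):

```python
def likely_whole_f(temp_c_1dp):
    """Heuristic: does a one-decimal Celsius value back-convert to
    (nearly) a whole Fahrenheit degree? One 0.1 C step is 0.18 F,
    so 'nearly' here means within half a step, 0.09 F."""
    f = temp_c_1dp * 9 / 5 + 32
    return abs(f - round(f)) < 0.09

# 70 F recorded, stored as round((70 - 32) * 5 / 9, 1) = 21.1 C
print(likely_whole_f(21.1))  # True  -- back-converts to 69.98 F
print(likely_whole_f(21.3))  # False -- back-converts to 70.34 F
```

Run over a long station record, the fraction of flagged values is what reveals the “more than 30% recorded in whole deg F” pattern the commenter describes.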

Mosh, the lapse rate surely depends on the time available for the rising, cooling air mass to shed its excess heat, by whatever mechanism. It can’t be 100% instant radiation loss. There must be some conduction loss to move a thermometer. Can’t see how one value fits all.

Dennis Nikols says:
January 7, 2013 at 4:26 pm
“I still think this average business is foolishness.”

Agreed. Completely. The earth has temperature variation due to altitude, longitude, latitude, and season, amongst other factors. To boil all this down to an average number is meaningless. It’s like measuring the temperature inside my furnace, inside my fridge, inside my freezer, inside my garage, and inside every room of my house, including basement and attic, and then quoting an average of all measurements, and then saying the average is trending upward because the sun comes out and warms the attic. Total misuse of statistics and physically meaningless.

Geoff Sherrington, even the automatically recorded BOM data is a mess.

As you know, we are experiencing a spot of hot weather in south eastern Australia. I keep an eye on the Canberra readings (which come from the airport, but that’s another story.) Anyway, the other night, according to the BOM, the temperature dropped from about 22 to 5 degrees in 10 minutes. I assure you that the temperature did not change much at all. Furthermore, that ‘minimum’ stayed on the chart as the lowest minimum for the rest of the reporting period. The other thing I noticed is that when it got really hot (high 30s) the chart would just blank out altogether for sometimes hours at a time. I would not trust these readings, which no-one seems to check (see the absurd drop to 5 degrees in the middle of a heatwave, uncorrected for at least 12 hours, if ever) as far as I can throw Al Gore.

Back on topic, I am awestruck that a citizen scientist with a family, a business and the world’s biggest science blog manages to do quality control for massively funded public agencies in his spare time, such as it is. The overpaid and lazy slobs who are supposed to be in charge of this stuff should hang their heads in shame. Anthony, if you were being paid adequately for doing their jobs for them, you would never have to work again.

I guess the real point is that you cannot compare temperatures measured with different systems.

The CRN is a new system, and it will take several years before they can draw any comparisons.

The same when you “disappear” 2/3 of your measuring stations. You are then working with a new measuring system. And when you allow urban encroachment within that system, you have a continually changing measurement system.

You CANNOT reliably compare calculated /averaged readings even a couple of years apart because the overall system has changed.

Because of the massive unreliability of the measurement system, it is very easy to fudge the data to say what you want someone else to believe, if you have an agenda to do so.

The whole thing is a mess and totally unreliable. Why the heck are they wasting so much money on idiocies related to temperature rise when, in reality, we have NO IDEA whether any real rise has actually occurred?

Tom in Florida says:
January 7, 2013 at 5:14 pm
Dennis Nikols says:
January 7, 2013 at 4:26 pm
“I still think this average business is foolishness.”

Have to agree, totally. Isn’t the whole problem caused by the ridiculous idea that Earth has an average temperature?

. . . And the even-more ridiculous claim that this fictitious ‘global’ temperature can provide evidence that mankind is about to overheat the planet by burning fossil fuels. It’s a very clever magical trick; easy to pretend you’re sawing the Earth in half while the audience is distracted by smoke and mirrors—or rather, smokestacks and ice floes.

David L says:
January 7, 2013 at 5:55 pm (Edit)
Dennis Nikols says:
January 7, 2013 at 4:26 pm
“I still think this average business is foolishness.”

Agreed. Completely. The earth has temperature variation due to altitude, latitude, longitude, and season, amongst other factors. To boil all this down to an average number is meaningless. It’s like measuring the temperature inside my furnace, inside my fridge, inside my freezer, inside my garage, and inside every room of my house, including basement and attic, then quoting an average of all measurements, and then saying the average is trending upward because the sun comes out and warms the attic. Total misuse of statistics, and physically meaningless.

##########################

Actually it is not meaningless. People need to get the notion out of their heads that Hansen, Jones, etc. are calculating an average temperature. They (and we) are doing something quite different, although “averaging” is used and people call it “an average.” What it is, mathematically (forget the PR and focus on the science), is an estimate of temperature at UNMEASURED places. Such that, if I take all the measurements together and use the correct techniques, I can win the following game.

1. Pick a place, any place on the planet. Hide a thermometer there for 1 month.
2. I will now calculate “the average” NOT USING that point.
3. Challenge people to guess the temperature at your unknown location.

All players get the time (the month) and all known data for that month.
The job is to estimate the temperature at an undisclosed location.

The “average” (done right) will be the best estimate. Now tell me the month, the latitude, the longitude, and the altitude, and my estimate will be even closer. And we can test this synthetically: generate centuries of synthetic data (that looks like weather) for the entire globe, sample that complete field sparsely, and see how well our estimating procedure works.

Or, I can use USHCN to “predict” what you will see at CRN.

So, we use an “average” (though it’s really not one) to come up with an estimate for the temperature at any given spot. When you look down into the math of things you see: “Oh, this is an estimate of the temperature at unknown locations that minimizes the error of prediction.”
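The hidden-thermometer game described above can be sketched numerically. This is a minimal illustration only, using inverse-distance weighting as a stand-in for the more sophisticated interpolation (kriging, least-squares fields) that real temperature products use; the station coordinates and temperatures below are invented.

```python
# Hold out one station, estimate its temperature from the rest, compare.
# Inverse-distance weighting stands in for proper kriging here.

def idw_estimate(stations, target, power=2):
    """Estimate temperature at `target` (lat, lon) from known stations.

    stations: list of ((lat, lon), temp_c) pairs, excluding the target.
    """
    num = den = 0.0
    for (lat, lon), temp in stations:
        # crude squared distance in degrees; real work uses great-circle distance
        d2 = (lat - target[0]) ** 2 + (lon - target[1]) ** 2
        if d2 == 0:
            return temp
        w = 1.0 / d2 ** (power / 2)
        num += w * temp
        den += w
    return num / den

# Invented monthly-mean readings for four nearby stations.
stations = [((40.0, -105.0), 10.0), ((41.0, -104.0), 9.0),
            ((39.0, -104.5), 11.0), ((40.5, -105.5), 9.5)]

held_out = stations[0]          # "hide a thermometer" at this location
rest = stations[1:]
est = idw_estimate(rest, held_out[0])
print(round(est, 1), held_out[1])   # estimate lands close to the true 10.0
```

The quality of the estimate, checked this way over many held-out stations, is the real test of any gridding method.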

Finally “average” temperature also has a meaning when we talk about things like the LIA and say
“It” was cooler in the LIA.. or “It” was warmer in the MWP.

“If you read the pdf you will discover that one aspect of the CRN network is to provide long term, un-interrupted sites. However, they have planned for possible relocations due to owners requirements, failures etc…”

Yes, one such move will supply data on the effect that nearby roads have on temperature measurements. So instead of speculation (“roads will corrupt the data”) you’ll actually have data, magnitudes, and all sorts of science.

So can you point me at where GIStemp does their lapse rate adjustment?

Didn’t the folks at NCDC say it doesn’t matter if stations come and go? Where is their lapse rate adjustment?

###########################

EM
1. Remember the concern about the loss of thermometers. What was your concern? Loss of high-altitude thermometers. Why? Because they tend to be colder.

2. When you work in ABSOLUTE temperature, you have to take care to adjust for lapse rate. This is why GISS works with anomalies.

Let’s do a little example. We are going to average two stations over ten years of data: station A at sea level, and station B 1000 meters above it.

A) 6 6 6 6 6 6 6 6 6 6
B) 0 0 0 0 0 0 0 0 0 0

See. We made that all simple because all the data is there, and it’s always there. So we can just sum each period, divide by 2, and presto: the average is 3.

Now lets have a data drop out.

A) 6 6 6 6 6 6 6 6 6 6
B) 0 0 0 0 0 0 0 0 NA NA

And we average... and oops! The average jumps from 3°C to 6°C. What the hell!

So. You have two options.

Option 1. Use anomalies

Option 2. Adjust for lapse rate.

Since GISS uses anomalies, they don’t have to (and should not) correct for lapse rate.

So, if you are working in ABSOLUTE temperature and trying to COMPARE your dataset to another dataset in ABSOLUTE temperature, then you must check for altitude differences, or you can just adjust for lapse rate. It’s easy. You can even do it empirically.
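Mosher’s two-station example runs as a quick numerical sketch (6°C at sea level, 0°C at 1000 m, with station B dropping out for the last two periods):

```python
# Station A (sea level) reads a constant 6 C; station B (1000 m) reads 0 C.
# B drops out for the last two periods. Averaging absolutes jumps; anomalies don't.

A = [6.0] * 10
B = [0.0] * 8 + [None, None]   # last two readings missing

def mean(xs):
    xs = [x for x in xs if x is not None]
    return sum(xs) / len(xs)

# Absolute average per period: jumps from 3 C to 6 C when B drops out.
absolute = [mean([a, b]) for a, b in zip(A, B)]

# Anomaly average: subtract each station's own baseline (its mean over a
# reference period) before averaging; the dropout no longer shifts the result.
base_A = mean(A[:8])
base_B = mean(B[:8])
anomaly = [mean([a - base_A, (b - base_B) if b is not None else None])
           for a, b in zip(A, B)]

print(absolute[7], absolute[9])   # 3.0 then 6.0
print(anomaly[7], anomaly[9])     # 0.0 then 0.0
```

The spurious 3°C jump in the absolute series is pure station dropout, which is exactly why anomaly methods are used for sparse, changing networks.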

As others have stated, crosspatch points out what needs to be stressed.
Any point averages need to be justified. Alaska, Hawaii, Oregon, Kansas and Florida have independent climates that are not related to each other. This is basic science. There is no physical meaning to averaging tropical data with polar data.
There may be some justification for averages, but not in this case.
Also, there is danger in averaging time data. Why are you averaging July with January? Does averaging July with January have physical meaning?
I’ve plotted monthly temperatures of 15 CRN sites with 10 full years of data. None show any significant trend in temperature since those stations began operating.

The best climate site that addresses the AGW issue is http://www.surfacestations.org/
When the average reader sees photos of thermometers next to air conditioners, they understand what is going on.
Another important story to me is what GISS is doing to their “data” on a monthly basis.
Keep up the good work.

“Since Giss uses Anomalies they dont have to and should not correct for lapse rate… its already been proven that station drop out doesnt matter, cause we put the stations back in and the answer didnt change. Anomalies. Lovely thing.”

climatebeagle says:
January 7, 2013 at 11:17 am (Edit)
On the hourly numbers I calculated a USCRN 2012 yearly average of 12.1°C or 53.7°F, but I think my list of USCRN stations is different, since I have 124. Probably at least partly because I’m using all the USCRN stations, and thus not the same set as CONUS.

######################

There are some additional stations beyond those that Anthony talks about; these are regional networks. Also excellent stations. To get the correct CRN list you need to download the metadata.
Also, folks should realize that not all of the CRN stations are actually commissioned and operational; some are experimental. This status is in a different file that I have around here somewhere. Last I looked I had a count of around 108 that were actually commissioned and non-experimental. Then of course you should drop those that are actually in built-up areas or have concrete around them (30-meter NLCD data can help you spot that in a jiffy).

““Why? because the airports happen to be at higher colder elevations.”

Must not fly much… FYI Airports are built where there is a lot of flat land, typically. As often as possible down in the valley floors or even next to water (so a long approach can be made with a low flat surface). Examples? SFO approach over the bay. Moffett Field approach over the bay. SJC San Jose approach over the bay. Not one of them up in the surrounding hills. ORD Chicago on flat land (as is all of Chicago near the lake). Denver down on the flatter part down slope from downtown. Etc. etc. etc.”

#################

Sorry, I wasn’t clear.

Here is the example I am thinking of.

You have a CRN station at 5 meters above sea level, and 10 km away you have an airport at 200 meters above sea level.

Guess what?

With a lapse rate of 6.5°C per 1000 meters, how much cooler is an airport at 200 meters versus a station at sea level?

Get it?

Lapse rate will get you every time if you are not careful. There are plenty of examples of perfect stations that are warmer than their neighbors simply because they differ in altitude by 200 or 300 meters: 6.5°C per 1000 meters of altitude. Guess you don’t fly much.
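The arithmetic in that example is worth making explicit (6.5°C/km is the standard-atmosphere average environmental lapse rate; actual values vary with humidity):

```python
# With a lapse rate of 6.5 C per 1000 m, a station 200 m above a
# sea-level neighbour reads about 1.3 C cooler from altitude alone.

LAPSE_RATE_C_PER_M = 6.5 / 1000.0

def altitude_offset_c(alt_m_a, alt_m_b):
    """Expected temperature difference (station A minus station B)
    attributable to altitude alone."""
    return (alt_m_b - alt_m_a) * LAPSE_RATE_C_PER_M

# Sea-level CRN station vs. an airport at 200 m: the airport reads ~1.3 C cooler.
print(round(altitude_offset_c(0.0, 200.0), 2))   # 1.3
```

So an airport 200 m higher than its neighbour looks about 1.3°C cooler before any siting or UHI effects are even considered.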

Thus, 130 stations in the USCRN reported hourly data in 2012.
For CONUS we can exclude:
AK – 12 stations
HI – 2 stations
ON – 1 station

Leaving: 115.

But Anthony says he is using 117 USCRN CONUS stations, with one skipped for missing data.
TN Oakridge, SA TIKSI, and VA Sterling also seem to be missing from the monthly data.

So 115 vs. 117?

Here’s the list I manually created from the USCRN list, mapping file names to the WBANNO numbers (thus it has 130, excluding TN Oakridge, SA TIKSI, and VA Sterling):
(Mods feel free to remove this if it’s too long)
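The exclusion step above (drop AK, HI, and ON to get the CONUS subset) can be sketched as a simple filter over the CRN file names; the short list below is illustrative, not the full 130-station set:

```python
# Derive a CONUS subset from a USCRN-style station list by excluding
# non-CONUS state codes. File names follow the CRNM0101-<STATE>_<Name>
# pattern seen in the monthly data files.

NON_CONUS = {"AK", "HI", "ON"}   # Alaska, Hawaii, Ontario (Canada)

def conus_only(filenames):
    keep = []
    for name in filenames:
        # state code sits between the first '-' and the first '_'
        state = name.split("-", 1)[1].split("_", 1)[0]
        if state not in NON_CONUS:
            keep.append(name)
    return keep

files = ["CRNM0101-PA_Avondale_2_N.txt",
         "CRNM0101-AK_Barrow_4_ENE.txt",
         "CRNM0101-HI_Hilo_5_S.txt",
         "CRNM0101-ON_Egbert_1_W.txt",
         "CRNM0101-OK_Stillwater_5_WNW.txt"]
print(conus_only(files))
```

Run against the full 130-station list, a filter like this is what should yield the 115 CONUS stations counted above.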

Climatebeagle, if you’ll just be patient, I will be able to get to a hotel and get you this information. You could also simply look at the text files I presented in the blog post and use those numbers for the station IDs.

Not that I know of. Nor for his regular lawbreaking, nor for his using his GISS position for politics, etc.

BTW, there are more examples of GISS “adjusting” the temperature record. Their adjustments take one of two forms: either lowering past temperatures in order to show more rapid warming, or adjusting current temperatures upward. The result is always more alarming than reality.

D Boehm, I’ve seen the same trend and agree fully on your GISS observations. However, recently I’ve found one station (Yakutat) where old temp data have been increased, resulting in a decadal cooling trend when compared to data I saved six months ago.
I still have the six-month-old Yakutat text file with that data, along with about a dozen other key stations that I’ve been monitoring for years. So far, four Antarctic stations (Amundsen-Scott, Vostok, Halley and Davis) have not been altered. All show zero temperature change since their data began in the 1950s.

Year after year, my admiration for Anthony and his dedication to the science of meteorology and climate continues to grow. The world owes him a debt of gratitude for his efforts to promote understanding of these complicated issues.

I expect a warm US and a cool world for December. Most of December was very warm in the US plains and east and they didn’t get a cold blast until late in the month. They will be getting another cold blast in the last two weeks of January, too. This is one reason why I tend to like seasonal averages rather than monthly averages. You can have an unusually cold 4 week period during a season that is split between two different months. The seasonal average might be a more accurate measure of climate.

There are others who are monitoring what NOAA is doing to historical “data”
========================
The blink graph was damning. I think it is appropriate to draw conclusions about this temperature record business. I am glad to see that we have a network of individuals who are dedicated to keeping up with all of this. Some day, I feel, they will be called to give testimony.

For starters, neither temperatures nor anomalies vary directly with W/m2. Averaging them under-represents changes in warm areas and over-represents them in cold areas. But put aside the completely absurd notion of averaging things that ought not be averaged in the first place, and let’s consider your approach. There are more options than the ones you have listed.

If the above example were done with some dose of reality, there would be thousands of data series, not two. The first step would be to simply drop incomplete data series out altogether, and only average the series that are complete. I know, I know, we don’t have diddly squat for weather station data that is complete from one end of the record to the other. So what to do? The answer is to follow a process similar to what Leif Svalgaard described for normalizing sun spot counts. You take series that overlap, and determine which ones vary together in the overlap period so you can apply a compensation factor to let series B stand in for series A after the data from A ends. Complicated? Not really. Just a gawd awful number of calculations which is why we invented computers. I can think of some problems with this approach too, and I can also think of other approaches. But my main point is that the options aren’t limited to the two you use.

Hi Anthony – I know I’m violating my New Year’s resolution, but I have to address the silly notion that someone has asserted here that you need to correct temperatures for lapse rate before averaging. The answer is, of course, NO! Temperatures are being measured at climate stations a specified height above the Earth’s surface, so the Earth’s surface-averaged temperature will not care what altitude it’s at. Temperature is temperature. You can carry out a spatial average on the absolute temperatures and they can be just as meaningful as any other (arbitrary) averaging scheme (perhaps more meaningful). To put it another way, if it’s 50 degrees F in Denver, CO, will I feel colder/warmer in Denver than I would if I were exposed to 50 degree air in Charleston, SC? Of course not! What WILL change between the two locations is the air density simply due to the change in static air pressure with altitude.

The link if I did it right should lead you to an Excel file in Dropbox with a list of 125 sites in the USCRN network. The list includes latlong and altitude data. There are 118 locations, but 5 locations have 2 associated sites and two locations have 3 sites. One location is in Canada (Ontario) leaving 117 in the “US” part of the USCRN. I count 8 locations in Alaska (not your 12) and two in Hawaii, leaving 107 separate locations and 114 sites in CONUS. I never ran across TN Oak Ridge or VA Sterling in this dataset. What the heck is SA TIKSI? Somewhere in Russia? SA is not a US state abbreviation but might be the Sakha Republic in Siberia.

The data are from a monthly file created by NOAA back in about August of this year, so should be up to date.

The mean values I provided in my table earlier were for the 125 sites in the USCRN network. This included the 8 sites in Alaska, the one in Canada, and the two in Hawaii, so no doubt were a bit lower than one would get for a continental US average.

Problem is you cant compare the averages without correcting for altitude differences.
———————————————————————————————
Are you saying that the continent is rising and subsiding by 100M frequently? Or do you want to adjust the readings so they are all the same?
You could just use one thermometer for the whole USA to achieve that end. Temps for any location could be created by adjustments. Good for research grants, and it eliminates the need to go outdoors.

My earlier table above was for the full USCRN dataset (125 stations, including 8 in AK, 2 in HI, and 1 in Ontario CAN). This table is for the continental US (114 stations, except about two fewer in 2008 and 2009). Presumably if Anthony removes the two Hawaiian stations his overall average will drop from 55 to a bit closer to the values of about 53 listed here.

1. Spatial averages of temperature are meaningless in reality. You claim to want to know what the temperature is in a region that is not being measured? Fine, you estimate temperature using the usual algorithms, but that is not the same as claiming you know the temperature or its response/history over time, it should merely give you a rough figure to work with. Such a figure would merely be used prior to the temperature actually being measured (you would only want to know the temperature prior to something being done at that location) . It serves no purpose otherwise, and it certainly does not serve the purpose it is being used for (to comment on the state of the climate)! Looking at the change in spatial averages over time is not meaningful because nothing on the planet responds to such an average; it merely allows for statistical masturbation.

2. Meaningful comparison for climate science purposes can be made by looking at the rate of change in the temperature readings of single stations over time. If those stations are well sited it significantly reduces or eliminates most of the siting biases. More specifically, as the climate and weather patterns are driven by differences in temperature across the globe examining those changes is far more meaningful than some pointless averages. Given this, there is zero need for altitude/lapse rate correction as this is constant. Why are you trying to compare an airport with a sea-level site in any case? There is no need to do such a thing!

4. Core samples, tree rings etc are useful in identifying localised climatic conditions only. Averaging these to come with a spatial average is even more meaningless than doing the same with thermometers. The correct usage of such data would be to compare various points on the planet over time, look at the changes in climate and deduce whether various events were localised or global in nature. Beyond that you are trying to coax information from whence none exists.

Anthony, a smaller (XX.XX °C XXX.XX K) somewhere sure would be appreciated by many I bet and let the user round to whatever precision they feel is proper. Now that would be fast, easy and very useful for everyone without a calculator wristwatch. ;)

The remaining discrepancy between my list and Anthony’s is a single station at Goodwell OK. My list includes only one station at Goodwell (at the Panhandle Center) and Anthony’s list (also Climate Beagle’s) includes two. I expect his list of 115 sites (dropping the two in Hawaii) is correct. I have matched the other 113 sites with Anthony’s in the Excel file in Dropbox.

I haven’t read all the comments yet, so I’ll post what I’ve found even if it has been posted already.
There are some problems with the list of stations you are using. The USCRN folder of site data has many more stations in it than are part of the network. Maybe you could get a concise list from someone at USCRN. A BAMS article recently put up on the USCRN site says:

260 WHERE ARE THE USCRN STATIONS? The number of USCRN stations distributed across
261 the CONUS is 114, consisting of 7 paired sites and 100 single sites, or 107 total sites that are
262 fully instrumented; resulting in an effective national average spacing of approximately 265 km.

I noticed site number 64757, CRNM0101-ON_Egbert_1_W.txt, is located north of Toronto. Also, there are 7 paired sites, of which I found you used all except number 53927, CRNM0101-OK_Stillwater_5_WNW.txt. Since those 14 sites are so close to each other, I wonder if it might be appropriate to average their data.

Here is the pair list I came up with by looking at the map in this photo file.

I emailed the Met Office recently and asked: if there is a trend (as they claim) for rising temperatures on average across the UK/US etc., then why is there not a trend-towards-now for absolute temperature records being broken? I stated that surely this illustrates that their data/assumptions cannot be correct, because basic (and I mean VERY basic) statistical theory states that extreme records should be tumbling left, right and centre, but they are not. Needless to say, the answer I got back was unscientific gibberish and counter-logical. I also asked someone recently how the surface temperature measurements taken on the QUEEN MARY 2 are calibrated and verified, since they are used to calibrate and verify satellite temperature measurements. No one seems to know! I also asked if the ship’s own “heat island” was isolated from any measurements; no one seemed to know or care. So maybe this could be another area for Anthony to delve into: is there a floating heat island taking dubious measurements on a daily basis, as well as probably consistently serving champagne at the incorrect temperature?

The dry adiabatic lapse rate is 9.8°C/km, but this is heavily affected by H2O in its various states. The saturated (100% humidity) lapse rate is about 4.5°C/km (iirc), and since there is nearly always H2O in the atmosphere, a generalised average value of 6.5°C/km is taken in lieu of other information.

In other words, at any time, the lapse rate could be anywhere between 4.5C/km and 9.8C/km.

So really its just another way to introduce further “variables” for temperature “adjustments” ;-)

ps, I can just see what Hansen would do with lapse rate adjustments.
”
Adjust using 6.5°C/km… oh, reading is still too cold,
must have been dry, so I’ll adjust using 8.2°C/km instead, spurious reason given…
that looks better… maybe try 9.2 and get it even warmer at sea level.
”

They have currently nearly run out of adjustments they can use, so the global urban average temp has levelled off.

It would be interesting to see Mosh’s BEST work with the full range of possible adjustments instead of using his 6.5°C figure.

Mind you, to have any belief in the accuracy of your data you would first need to check each temperature point, instead of averaging one historic, probably incorrect, one-off figure with thousands of other probably incorrect one-off figures in the belief that averaging somehow makes lots of wrongs right.


AndyG55 – The whole temperature “correction” for lapse rate idea is just silly. Why on Earth would anyone do this? It appears that people who are motivated towards this correction think that an average “temperature” that is adjusted so that every point on the planet is at some effective “sea level” is somehow meaningful. And as you point out, the actual lapse rate depends on the local weather, and so introduces even more complications.

The whole idea is nonsense anyway. They are effectively assigning the same temperature to a very large area, without knowing if it’s in any way applicable to that area.

In the old network, UHI-increased urban temps are applied over large areas of countryside which are in no way affected by urban heat. Nearby readings that might be unaffected by urban expansion are homogenised so that they are.

The whole issue of a global average land temperature is a joke and a farce!

Plummeting mercury, coupled with thick fog cover, threw normal life out of gear in the entire North India on Monday, with 24 more people succumbing to the cold wave in various parts of the region.

—

I suppose these poor people didn’t realize that their lapse-rate corrected temperature was really warmer than what the thermometer was telling them, so they should not have succumbed to the cold.
/climate-science

Steve Mosher wrote: “Lapse rate, it will get you every time if you are not careful.”

I believe it’s got Steve this time. Here’s anecdotal evidence why you can’t apply lapse rate (even if you wanted to). Here in Austin, TX, the new international airport east of town is sited on a plain several hundred meters lower than the western half of the city proper, which occupies a hilly area. So the airport temps should be warmer than temps in west Austin due to the lapse rate. But in fact the airport is almost never warmer. Reason? Urban heat island effect. Temps at higher elevations in Austin run up to 10 degrees warmer than those at the lower-level airport.

Taking the UHI effect into account, it seems to me there is no valid way to make a blanket adjustment of temperature, and it all points to the absurdity of trying to find a valid “average” temp. Reminds me of my days at a major Texas daily newspaper, arguing with a business editor against his plan to average 10 economic forecasts to arrive at a prediction of future growth, to two decimal places!

Using anomaly or absolute temperature data statistical analysis results to prove or refute AGW, is simply numerology, the lowest form of scientific observation and discussion. Let me state it another way. Regardless of the cause, small scale weather pattern variation trends cause temperature variation trends at sensor level. But importantly, larger scale oceanic and atmospheric parameters remain in control of resultant weather pattern variation trends. Therefore, to prove or disprove anthropogenic cause, you must look at large scale oceanic and atmospheric parameter trends, not temperature trends. Hansen and his ilk prefer to dabble in low hanging fruit (temperature trends), hoping gullible sheeple will eat it all up and lick the plate.

As long as those who seriously doubt this AGW fad continue to argue over low hanging fruit, we will get nowhere fast.

What we really need is the data related to large oceanic and atmospheric parameters (i.e., semi-permanent pressure systems, global cloud cover data, smaller pressure system tracks, etc.), over at least a 60 to 100 year span of time, and to ignore temperature altogether. Why? For humans to definitively cause large-scale temperature change, what we are supposedly doing must first definitively affect large-scale climate and temperature drivers beyond their normal random walk.

Frank K. says:
January 7, 2013 at 10:07 pm
“… To put it another way, if it’s 50 degrees F in Denver, CO, will I feel colder/warmer in Denver than I would if I were exposed to 50 degree air in Charleston, SC? Of course not!”

Frank you probably realized after you posted this how it is too simplistic. You probably realized that you must also consider wind, humidity and insolation on exposed skin when saying “will I feel colder/warmer” .

A lapse rate correction will do nothing but add noise to the data because weather yanks the lapse rate all over the place. Any fixed lapse rate correction against temperature data over time without knowing how the atmosphere over the station changed would be wrong. A dry lapse rate of 5.5°F/1,000 ft (10°C/km) is often used to calculate temperature changes in air not at 100% relative humidity. A wet lapse rate of 3°F/1,000 ft (5.5°C/km) is used to calculate the temperature changes in air that is saturated (i.e., air at 100% relative humidity). Pick one and try to link it to the local weather the station experiences.

Note that your suggested 0.65°C/100 m rate is an average and would be wrong for anything other than a rough estimate. It is not a real piece of data like the temperature measurement. Got a balloonsonde site nearby, with that daily lapse rate data in parallel? THAT would be real data.

Remember, this is 2-meter surface temperature, not barometric pressure. The thermometers are not changing altitude relative to the surface. Changes in air density and temperature (updraft/downdraft) are what lapse rate is used to predict for aviation and models, but it isn’t accurate without knowing the state of the atmosphere at that location at that point in time. Otherwise we wouldn’t need a global balloonsonde network measuring twice daily.

For example, try using a general value for density altitude while taking off in a loaded Cessna at Leadville Colorado on a hot and humid summer day and cross your fingers that you get off the runway to altitude before smacking into the mountain. This is why density altitude is calculated by pilots from hourly data just prior to takeoff so they know if the plane will fly or not. Without the hourly data,(Baro/ Dewpoint/ Temp) knowing the state of the atmosphere (and thus lapse rate) is a crapshoot. Using a standard average lapse rate to correct temperature over elevation to any degree of accuracy is equally a crapshoot. All it will do is add noise the way you propose it.
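The Leadville point can be sketched with the common rule-of-thumb density-altitude formula (roughly 120 ft added per degree C above ISA standard temperature, humidity term omitted for simplicity); the numbers are illustrative only, not an operational calculation:

```python
# Rule-of-thumb density altitude, as a pilot might estimate it before
# takeoff. ISA standard temperature falls ~2 C per 1000 ft; density
# altitude rises ~120 ft per degree C of temperature above ISA.
# Humidity is ignored here, so hot-and-humid days are worse still.

def isa_temp_c(pressure_alt_ft):
    """ISA standard temperature at a given pressure altitude."""
    return 15.0 - 2.0 * (pressure_alt_ft / 1000.0)

def density_altitude_ft(pressure_alt_ft, oat_c):
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c(pressure_alt_ft))

# Leadville, CO sits at roughly 9,900 ft. On a 25 C afternoon the
# aircraft performs as if it were well above 13,000 ft.
print(round(density_altitude_ft(9900, 25.0)))
```

The same point carries over to temperature corrections: without the actual hourly state of the atmosphere, a standard-atmosphere assumption is only a rough guess.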

Comparing station to station data at different altitudes nearby, yes you need a lapse rate correction, but you also need the other variables (baro, DP) for it to be accurate. Comparing two roughly similar networks (remember NCDC designed the CRN to be roughly the same distribution as the COOP so while station count is lower, distribution by area and altitude placement is similar) at 2 meter SURFACE averaged, you don’t. When I present more data, you’ll see that the CRN and COOP network actually match some months, a good indication that they are similar.

Unless Mosh knows the daily weather conditions at the time of the readings of each of the instrumental records he used for BEST, surely his rough rule of thumb for the lapse rate is so approximate as to destroy the idea of a robust data base?

which I assumed was the official list. The list I gave earlier is manually derived from that list.
It has 12 AK stations.

It’s a pity NOAA doesn’t seem to have an easily consumable version of their metadata, e.g. a csv file, for all the reference stations.

@ Anthony
Thanks for the list. I also took the approach of looking at the mismatches in WBANNO numbers between your 116 stations (from one of your monthly files) and my CONUS list, which then confused me even more.

Your monthly file included these that were not in my list.
NM_Santa_Fe_20WNW,03087
CO_Colorado_Springs_23_NW,53007
UT_Blanding_26_SSW,53012
AL_Selma_6_SSE,63897
Note that these four are not USCRN stations according to:

My list included these that were not on your list.
NM_Los_Alamos_13_W,03062,USCRN
PA_Avondale_2_N,03761,USCRN
CA_Santa_Barbara_11_W,53152,USCRN
OK_Stillwater_5_WNW,53927,USCRN

REPLY: To satisfy your request while traveling, I recreated the list from scratch last night in my hotel room, and obviously failed. Let that be a lesson not to do detailed work after a full day of driving. I’ll have my office forward the correct list (which I don’t have on my laptop) later today and then I’ll post it. Until then, please just stop speculating. – Anthony

I have commented recently on solar threads about the possibility that there has been no real warming, which fits very nicely with Leif’s new sunspot count. That is, if all the warming due to UHI and adjustments is removed, the real temperature of the planet has changed very little over the last 150 years. This works very nicely with a sunspot count that also hasn’t changed very much.

The primary changes in temperature would be the variation due to ENSO and the AMO (although other minor factors exist). These variations can lead to melting ice caps, glaciers, etc. But that will soon stop now that the PDO has flipped, and the AMO will flip in the not too distant future. Add to this a real solar cooling due to the L-P effect and it is likely we will see cooling over the next few decades.

One thing to beware of is any theory that uses the temperature record in any manner. Even if it is skeptical in nature it may very well be correlating to bad data.

What does this mean for the GHE? Why isn’t it warming like it should? I’ve stated my opinion in the past. The GHE is only one of the effects of adding GHGs to the atmosphere. There are other effects and some of them cool the planet. When all effects of GHGs are taken into consideration they more or less cancel out at the current concentrations and temperatures.

Frank K. says:
January 7, 2013 at 10:07 pm
“… To put it another way, if it’s 50 degrees F in Denver, CO, will I feel colder/warmer in Denver than I would if I were exposed to 50 degree air in Charleston, SC? Of course not!”

Frank you probably realized after you posted this how it is too simplistic. You probably realized that you must also consider wind, humidity and insolation on exposed skin when saying “will I feel colder/warmer” .

—

Hi Tom – 50 deg F will feel the same to me provided all other variables are the same (as you say wind, humidity) – it won’t matter if I’m in Denver or Miami :) That’s because my reference point is my body temperature (98.6 F). My point is that lapse rate corrections for spatial temperature averages are a silly idea.

Anthony Watts says:
January 8, 2013 at 6:54 am

“Comparing station to station data at different altitudes nearby, yes you need a lapse rate correction, but you also need the other variables (baro, DP) for it to be accurate.”

My point Anthony is that you really don’t need any correction at all, no matter the altitude. If it’s 50 F on top of a local mountain and 60 F in a nearby valley, I would be fine with averaging the two readings together. The reference point for temperature climatology is the 2 meter height of your thermometers from the ** surface of the Earth **.

Here's another example – if today ALL thermometers in the CONUS had a reading of 50 F (all stations identical, regardless of altitude), would the CONUS average temperature be 50 F or some different, lapse-rate-adjusted temperature?

This was illustrated today during a morning when we were at 10°F in Nashua, NH (119 feet elevation) at 1015 UTC with a steep, very low inversion. Jaffrey, NH airport at 1013 feet was at 18°F, Worcester, MA at 1009 feet was at 29°F at the same time, and Boston, at sea level, was at 33°F. Not sure how one would even attempt to 'adjust'.

Lapse rates, as you stated, vary considerably – from sharp inversions to superadiabatic – depending on surface factors like snow cover, humidity, saturated soils vs. dry ground, vegetation type and state, air masses and fronts, time of day and year, clouds, winds, and microclimate factors. Pressure adjustments are more straightforward, but even those have issues, relying on a standard atmosphere when the atmosphere is rarely standard – in much the same way that temperatures are rarely at the 'average'.
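To make the inversion example above concrete, here is a minimal sketch (my own illustration, not anyone's official method) of what a standard-atmosphere lapse-rate "correction" to sea level would do to that morning's readings. The 3.57°F per 1000 ft figure is the standard-atmosphere lapse rate (~6.5°C/km); the station values are the ones quoted above:

```python
# Stations from the inversion morning above: (name, elevation_ft, temp_F)
stations = [
    ("Nashua NH",     119, 10.0),
    ("Jaffrey NH",   1013, 18.0),
    ("Worcester MA", 1009, 29.0),
    ("Boston MA",       0, 33.0),
]

STD_LAPSE_F_PER_KFT = 3.57  # standard-atmosphere lapse rate, ~6.5 C/km

raw_mean = sum(t for _, _, t in stations) / len(stations)

# "Correct" each reading to sea level assuming the standard lapse rate
adjusted = [t + STD_LAPSE_F_PER_KFT * (elev / 1000.0)
            for _, elev, t in stations]
adj_mean = sum(adjusted) / len(adjusted)

print(f"raw mean:      {raw_mean:.1f} F")  # raw mean:      22.5 F
print(f"adjusted mean: {adj_mean:.1f} F")  # adjusted mean: 24.4 F
```

Note that under a real inversion the standard correction warms the already-warm high stations even further – exactly backwards – which illustrates the point that no single fixed adjustment handles all conditions.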

Anthony, I’m not speculating, I’m trying to get to a point to reproduce your results. From the list of WBANNO numbers in your monthly files you seem to include three non-USCRN stations and the Ontario station, while excluding three USCRN stations.
It's also me trying to understand what exactly the defined set of USCRN stations and their CONUS subset is. In this posting we have three people trying to define that list, and each seems to have a different list and probably a different source for it. Meanwhile, NOAA seems to say that USCRN has 114 stations but lists 130, while Mosher says only "around 108" are active.
For a reference network it seems NOAA has already lost control of it.

I just got word that NOAA plans to announce the SOTC at 1 PM EST today, and I got an advance copy. The official annual CONUS average they state is 55.3°F, which is 0.05°F from my 55.25°F CONUS Tavg from the CRN.

So much for Mr. Mosher's complaints about the need for lapse-rate adjustments. I wonder, though, how much, if at all, my pre-release might have affected NOAA?

Of course some will claim that means the old surface network is “reliable” since the COOP annual value matches the CRN.

I predict, though, that the COOP CONUS Tavg will go cooler in a couple of months. In a not-too-distant future blog post, I'll show how the networks are different, especially when it comes to UHI. Mr. Mosher won't like that either, but that's science.

Let the data be furnished raw. Let scientific investigators perform the studies and argue the value of lapse rates, homogenization, etc., but let the data be furnished raw by these %8**^3^*#!!, lest they succumb to the temptation to adulterate the record for ideological advantage, as they apparently already have. The integrity of NOAA is gone.

Whether it is absolute temperature or anomaly doesn't matter, because both are the same when using fixed data points. What we want to see is how the actual data changes, not how adjustments change the data. A lot of poor adjustments, especially with Arctic data, are caused by weighting, where just one station can represent 10 percent of the total. One station's location never reflects the temperature of a whole 1200 km area; there is too much weather variance within an area of that size for a single station to represent it.
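As a toy sketch of the weighting problem just described (the anomaly values and the 10x weight are hypothetical numbers, chosen only for illustration), a single heavily weighted station can dominate a regional average:

```python
# Hypothetical anomalies (C) for ten stations in a sparse region;
# the last one is an outlier
anomalies = [0.1, 0.0, -0.1, 0.2, 0.0, 0.1, -0.2, 0.0, 0.1, 3.0]

# Equal weighting: the outlier is diluted across all ten stations
equal_mean = sum(anomalies) / len(anomalies)

# Area weighting where the outlier station covers a huge, otherwise
# empty grid cell and so carries 10x the weight of the others
weights = [1.0] * 9 + [10.0]
weighted_mean = sum(a * w for a, w in zip(anomalies, weights)) / sum(weights)

print(f"equal weights: {equal_mean:+.2f} C")   # equal weights: +0.32 C
print(f"area weighted: {weighted_mean:+.2f} C")  # area weighted: +1.59 C
```

With the 10x weight, one station pulls the regional average from +0.32°C up to +1.59°C – the kind of single-station dominance the comment above describes.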

1) Absolute temperatures: 10°C + 15°C + 20°C (Station A at 1500 m, Station B at 800 m, Station C at 100 m)

Total = 45°C, mean = 15°C

2) Anomalies against the 15°C mean as baseline:

-5°C + 0°C + 5°C = 0°C

A 0°C anomaly corresponds to the 15°C absolute mean, so the two are the same.

Where this does matter is when a station's altitude changes and the same location is compared over the years at a different altitude. The surface stations do this with no adjustment for it anywhere. It is the reason why higher-elevation stations being replaced with lower ones introduces a warming bias that is not corrected for. The biggest issue I have with anomalies is the way they hide what has been changed and when. Absolute values often point out the error, where anomalies aren't so obvious.
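The equivalence in the worked example above, and the station-move problem just described, can both be checked directly (a minimal sketch; the three station values and the 15°C baseline are those used above, and the relocated reading is a hypothetical number):

```python
# Absolute readings for Stations A, B, C (C)
temps = [10.0, 15.0, 20.0]
baseline = 15.0  # fixed baseline shared by all three stations

abs_mean = sum(temps) / len(temps)            # 15.0
anomalies = [t - baseline for t in temps]     # [-5.0, 0.0, 5.0]
anom_mean = sum(anomalies) / len(anomalies)   # 0.0

# With fixed data points, mean anomaly + baseline == mean absolute
assert abs_mean == anom_mean + baseline

# But if Station C moves to a lower, warmer altitude and the baseline
# is never revisited, the anomaly silently absorbs the altitude change:
temps_after_move = [10.0, 15.0, 24.0]  # Station C relocated lower (hypothetical)
shifted_mean = sum(t - baseline for t in temps_after_move) / 3
print(f"mean anomaly after move: {shifted_mean:+.1f} C")  # +1.3 C
```

The +1.3°C of "warming" here is purely a station-move artifact, which is easy to spot in the absolute values (24°C vs. 20°C) but invisible in the anomaly series alone.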

Just plotted it against Central England 2012 and it looks like the USA is about six weeks ahead of the UK.
USA peaks in July and the UK in August. The USA minimum is in December and the UK in February.

Agreed that it is difficult to get a final list; it's a moving target. The link is to a file on Dropbox, which is my most complete metadata file, obtained from several files at NCDC in about August of 2012. It has 142 sites, including 125 labelled "USCRN" and another 17 in Alabama labelled USCRN-Regional or something like that. The 125 sites include 7 "paired" sites. There are 8 sites in Alaska, 2 in Hawaii, and one in Ontario, Canada. So to get the "contiguous" US, or CONUS, there would be 125 − 11 = 114 sites. These match Anthony's 115 sites exactly, with the exception of an extra site that Anthony has in Goodwell, OK. That site is described in the recent 10-year review article mentioned above. Apparently it involved a move and some overlapping measurements, so it probably was not available in August 2012, although it now has a total of about 18 months of data going back to June of 2011.

Sites are being added in Alaska, so I expect your value of 12 rather than 8 is probably indicative of later additions. But these would have only 1 or 2 years of data or less, so would not be too useful for a while.

note to everyone on this thread. we have also updated the station master list in the link above. please note the close agreement between the CRN and COOP, but please also note that the CRN is slightly warmer. this occurs every winter season, while in the summer the CRN becomes cooler. this is an example of the heatsink effect at work. I will have a new write-up on this issue with supporting data in a couple of days. for now I think it is safe to conclude that no adjustments are needed for the CRN.

They claim it is a sign of things to come due to global warming. The article talks about an average temp for Oz, and it makes me wonder about the quality of their met sites, especially given what you report here.

REPLY: Our flags show them as USCRN, but there may be some issue with the definitions in the metadata we have vs. what you have. They may very well be USRCRN (http://www.ncdc.noaa.gov/crn/usrcrn/), which is a regional equivalent.

I'll look into it when I get back into the office in a couple of days. As we have seen, things change rapidly with NOAA, so we may be victims of versioning differences. -Anthony

“It’s official: 2012 marked the warmest year on record for the contiguous USA, scientists from the National Climatic Data Center in Asheville, N.C., announced Tuesday. The past year smashed the previous record for the warmest year, which was 1998.”

I can almost hear the collective deep breath for the start of the alarm.

@tomharrisicsc The problem is that the network was only completed in late 2008, having been commissioned and built up slowly from 2002. We don't have full CONUS coverage prior to 2008, though some stations do go back to 2002.

Correct.
Most of the energy in the climate system is in water, which, with its high thermal capacity and the large energy changes accompanying phase changes between solid, liquid, and vapor, is the dominant component.

The obvious macro scale measures of the energy in the climate system are sea level, ocean heat content, land ice mass balance and atmospheric moisture content.
They reveal how much energy is increasing in the system from the shift from ice to vapor and thermal expansion of the liquid phase.

The implication of the changes seen in these values is obvious.
The confirmation from NCDC that the CRN shows 2012 was the warmest measured year, by 1°F, is just the result of the macro-scale influence of the oceanic and atmospheric parameter trends.

I know there are two process differences between mine & Anthony’s, which are both using USCRN:
a) Anthony is using the monthly means, I’m averaging the hourly figures.
b) Anthony is using an earlier USCRN list, I’m using the one from Sep 2012 (see earlier comments). Not really sure why the USCRN network is changing its station list at this point in time.

Also, just to note with the hourly figures:
There are over one million records for 115 USCRN stations for a single year.
In 2012 I calculated 0.37% of data was missing, which seemed to be:
0.10% data entries missing from input files
0.27% data entries with missing data values (e.g. -9999.0).
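The missing-data tally described above can be sketched roughly like this. This is a simplified sketch, not the commenter's actual code: the real USCRN hourly files are fixed-width text with many more fields, and the `(station_id, timestamp, temp_c)` tuple layout here is assumed purely for illustration. The `-9999.0` sentinel is the USCRN missing-value flag mentioned in the excluded-stations note above:

```python
SENTINEL = -9999.0  # USCRN missing-value flag

def tally_missing(records, expected):
    """Tally missing entries in a year's worth of hourly records.

    records:  list of (station_id, timestamp_utc, temp_c) tuples parsed
              from the hourly files (temp_c may be the sentinel)
    expected: total station-hours that should exist, e.g.
              115 stations * 366 days * 24 hours for 2012
    """
    absent = expected - len(records)  # rows never written to the files
    flagged = sum(1 for _, _, t in records if t == SENTINEL)  # -9999.0 rows
    return {
        "absent_pct":  100.0 * absent / expected,
        "flagged_pct": 100.0 * flagged / expected,
        "missing_pct": 100.0 * (absent + flagged) / expected,
    }
```

For 2012 the comment above reports roughly 0.10% of entries absent from the input files and 0.27% flagged with sentinel values, for 0.37% missing overall; this function just separates those same two categories.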