Does Hansen’s Error “Matter”?

There’s been quite a bit of publicity about Hansen’s Y2K error and the change in the U.S. leaderboard (by which 1934 is the new warmest U.S. year) in the right-wing blogosphere. In contrast, realclimate has dismissed it as a triviality and the climate blogosphere is doing its best to ignore the matter entirely.

My own view has been that the matter is certainly not the triviality that Gavin Schmidt would have you believe, but neither is it a magic bullet. I think that the point is significant for reasons that have mostly eluded commentators on both sides.

Station Data
First, let’s start with the impact of Hansen’s error on individual station histories (my interest in this matter arose from examining individual station histories, not from the global record). GISS provides an excellent and popular online service for plotting temperature histories of individual stations. Many such histories have been posted in connection with the ongoing examination of surface station quality at surfacestations.org. Here’s an example of this type of graphic:
Figure 1. Plot of Detroit Lakes MN using GISS software (from Anthony Watts.)

But it’s presumably not just Anthony Watts and surfacestations.org readers who have used these GISS station plots; scientists and other members of the public have surely used this GISS information as well. The Hansen error is far from trivial at the level of individual stations. Grand Canyon was one of the stations previously discussed at climateaudit.org in connection with the Tucson urban heat island. In this case, the Hansen error was about 0.5 deg C. Some discrepancies are 1 deg C or higher.

Figure 2. Grand Canyon Adjustments

Not all station errors lead to positive steps. There is a bimodal distribution of errors reported earlier at CA here, with many stations having negative steps. There is a positive skew, so that the net impact of the step error is about 0.15 deg C according to Hansen. However, as you can see from the distribution, the impact on the majority of stations is substantially higher than 0.15 deg C. For users of information regarding individual stations, the changes may be highly relevant.
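The mechanism behind these steps can be illustrated with a short sketch. This is not Hansen’s code (which GISS has not released); it is a minimal illustration, with invented data, of how splicing two differently-adjusted versions of a station series at January 2000, without reconciling their offsets, produces a spurious step equal to the offset between the versions.

```python
# Illustrative sketch only (GISS has not released its code): the size
# of the step introduced by a splice can be estimated as the mean
# offset between the two versions over an overlap window.

def estimate_step(version_a, version_b, overlap_months=60):
    """Mean offset (deg C) between two versions of the same monthly
    series over their last `overlap_months` of common coverage."""
    pairs = [(a, b) for a, b in zip(version_a, version_b)
             if a is not None and b is not None]
    window = pairs[-overlap_months:]
    return sum(a - b for a, b in window) / len(window)

# Invented data: the adjusted version runs 0.5 deg C cooler than the
# raw version, so splicing from adjusted to raw data at 2000 would
# create a spurious +0.5 deg C jump (roughly the Grand Canyon case).
adjusted = [9.5] * 120   # hypothetical adjusted monthly means (deg C)
raw = [10.0] * 120       # hypothetical raw monthly means (deg C)
print(estimate_step(adjusted, raw))  # -0.5
```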

GISS recognized that the error had a significant impact on individual stations and took rapid steps to revise their station data (and indeed the form of their revision seems far from ideal, indicating the haste with which it was done). GISS failed to provide any explicit notice or warning on their station data webpage that the data had been changed, or an explicit notice to users who had downloaded data or graphs in the past that there had been significant changes to many U.S. series. This obligation existed regardless of any impact on world totals.

Figure 3. Distribution of Step Errors

GISS has emphasized recently that the U.S. constitutes only 2% of global land surface, arguing that the impact of the error is negligible on the global average. While this may be so for users of the GISS global average, U.S. HCN stations constitute about 50% of active (with values in 2004 or later) stations in the GISS network (as shown below). The sharp downward step in station counts after March 2006 in the right panel marks the last month for which USHCN data is presently included in the GISS system. The Hansen error affects all the USHCN stations and, to the extent that users of the GISS system are interested in individual stations, the number of affected stations is far from insignificant, regardless of the impact on global averages.

Figure 4. Number of Time Series in GISS Network. This includes all versions in the GISS network and exaggerates the population in the 1980s as several different (and usually similar) versions of the same data are often included.
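The 50% figure can be made concrete with a toy count. The station records and counts below are invented for illustration; only the definition of “active” (a value in 2004 or later) follows the text.

```python
# Toy illustration (invented counts): fraction of "active" stations,
# defined as in the text as stations reporting values in 2004 or
# later, that belong to the USHCN network.

def active_fraction(stations, network, cutoff_year=2004):
    active = [s for s in stations if s["last_year"] >= cutoff_year]
    matching = [s for s in active if s["network"] == network]
    return len(matching) / len(active)

stations = (
    [{"network": "USHCN", "last_year": 2006}] * 1200   # invented
    + [{"network": "GHCN", "last_year": 2007}] * 1200  # invented
    + [{"network": "GHCN", "last_year": 1990}] * 800   # inactive
)
print(active_fraction(stations, "USHCN"))  # 0.5
```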

U.S. Temperature History
The Hansen error also has a significant impact on the GISS estimate of U.S. temperature history, with estimates for 2000 and later being lowered by about 0.15 deg C (2006 by 0.10 deg C). Again, GISS moved quickly to revise their online information, changing their U.S. temperature data on Aug 7, 2007. Even though Gavin Schmidt of GISS and realclimate said that changes of 0.1 deg C in individual years were “significant”, GISS did not explicitly announce these changes or alert readers that a “significant” change had occurred for values from 2000-2006. Obviously they would have been entitled to observe that the changes in the U.S. record did not have a material impact on the world record, but it would have been appropriate for them to have provided explicit notice of the changes to the U.S. record, given that the changes resulted from an error.

The changes in the U.S. history were not brought to the attention of readers by GISS itself, but in this post at climateaudit. As a result of the GISS revisions, there was a change in the “leader board”: 1934 emerged as the warmest U.S. year, and the top ten now contains more warm years from the 1930s than from the past ten years. This has been widely discussed in the right-wing blogosphere and has been acknowledged at realclimate as follows:

The net effect of the change was to reduce mean US anomalies by about 0.15 °C for the years 2000-2006. There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).

There were however some very minor re-arrangements in the various rankings (see data). Specifically, where 1998 (1.24 °C anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 °C) for the top US year, it now just misses: 1934 1.25 °C vs. 1998 1.23 °C. None of these differences are statistically significant.
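The quoted re-ranking can be reproduced from the anomaly values in the passage above. The ranking helper below is illustrative, not GISS code, and years not quoted are omitted.

```python
# Reproducing the "leader board" flip using only the anomaly values
# quoted in the realclimate passage (deg C vs 1951-1980). The helper
# is illustrative, not GISS code; years not quoted are omitted.

def leaderboard(anomalies):
    """Years sorted from warmest to coolest anomaly."""
    return sorted(anomalies, key=anomalies.get, reverse=True)

before = {1998: 1.24, 1934: 1.23}  # pre-correction values
after = {1998: 1.23, 1934: 1.25}   # post-correction values
print(leaderboard(before)[0])  # 1998
print(leaderboard(after)[0])   # 1934
```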

In my opinion, it would have been more appropriate for Gavin Schmidt of GISS (who was copied on the GISS correspondence to me) to ensure that a statement like this was on the caption to the U.S. temperature history on the GISS webpage, rather than after the fact at realclimate.

Obviously much of the blogosphere delight in the leader board changes is a reaction to many fevered press releases and news stories about year x being the “warmest year”. For example, on Jan 7, 2007, NOAA announced that

The 2006 average annual temperature for the contiguous U.S. was the warmest on record.

This press release was widely covered as you can determine by googling “warmest year 2006 united states”. Now NOAA and NASA are different organizations and NOAA, not NASA, made the above press release, but members of the public can surely be forgiven for not making fine distinctions between different alphabet soups. I think that NASA might reasonably have foreseen that the change in rankings would catch the interest of the public and, had they made a proper report on their webpage, they might have forestalled much subsequent criticism.

In addition, while Schmidt describes the changes atop the leader board as “very minor re-arrangements”, many followers of the climate debate are aware of intense battles over 0.1 or 0.2 degree (consider the satellite battles.) Readers might perform a little thought experiment: suppose that Spencer and Christy had published a temperature history in which they claimed that 1934 was the warmest U.S. year on record, that it then turned out they had made a computer programming error opposite to the one that Hansen made, that Wentz and Mears discovered there was an error of 0.15 deg C in the Spencer and Christy results and that, after fixing this error, 2006 turned out to be the warmest year on record. Would realclimate simply describe this as a “very minor re-arrangement”?

So while the Hansen error did not have a material impact on world temperatures, it did have a very substantial impact on U.S. station data and a “significant” impact on the U.S. average. Both of these surely “matter” and both deserved formal notice from Hansen and GISS.

Can GISS Adjustments “Fix” Bad Data?

Now my original interest in GISS adjustments did not arise abstractly, but in the context of surface station quality. Climatological stations are supposed to meet a variety of quality standards, including the relatively undemanding requirement of being 100 feet (30 meters) from paved surfaces. Anthony Watts and volunteers of surfacestations.org have documented one defective site after another, including a weather station in a parking lot at the University of Arizona where MBH coauthor Malcolm Hughes is employed, shown below.

Figure 5. Tucson University of Arizona Weather Station

These revelations resulted in a variety of aggressive counter-attacks in the climate blogosphere, many of which argued that, while these individual sites may be contaminated, the “expert” software at GISS and NOAA could fix these problems, as, for example, here.

they [NOAA and/or GISS] can “fix” the problem with math and adjustments to the temperature record.

This assumes that contaminating influences can’t be and aren’t being removed analytically. I haven’t seen anyone saying such influences shouldn’t be removed from the analysis. However, I do see professionals saying “we’ve done it.”

“Fixing” bad data with software is by no means an easy thing to do (as witness Mann’s unreported modification of principal components methodology on tree ring networks.) The GISS adjustment schemes (despite protestations from Schmidt that they are “clearly outlined”) are not at all easy to replicate using the existing opaque descriptions. For example, there is nothing in the methodological description that hints at the change in data provenance before and after 2000 that caused the Hansen error. Because many sites are affected by climate change, a general urban heat island effect and local microsite changes, adjustment for heat island effects and local microsite changes raises some complicated statistical questions that are nowhere discussed in the underlying references (Hansen et al 1999, 2001). In particular, the adjustment methods are not techniques that can be looked up in statistical literature, where their properties and biases might be discerned. They are rather ad hoc and local techniques that may or may not be equal to the task of “fixing” the bad data.

Making readers run the gauntlet of trying to guess the precise data sets and precise methodologies obviously makes it very difficult to achieve any assessment of the statistical properties. In order to test the GISS adjustments, I requested that GISS provide me with details on their adjustment code. They refused. Nevertheless, there are enough different versions of U.S. station data (USHCN raw, USHCN time-of-observation adjusted, USHCN adjusted, GHCN raw, GHCN adjusted) that one can compare GISS raw and GISS adjusted data to other versions to get some idea of what they did.
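With the code withheld, the comparison described above reduces to differencing published versions of the same station series; steps or trends in the difference series reveal what each adjustment stage did. A minimal sketch, with invented numbers:

```python
# Minimal sketch of the version-comparison approach described above:
# difference two published versions of the same station series and
# look for steps or trends. All values below are invented.

def difference_series(version_a, version_b):
    """Element-wise difference, carrying missing values through."""
    return [round(a - b, 2) if a is not None and b is not None else None
            for a, b in zip(version_a, version_b)]

ushcn_raw = [12.1, 12.3, None, 12.0, 12.4]      # invented annual means
giss_adjusted = [11.9, 12.1, 12.2, 12.3, 12.7]  # invented annual means
print(difference_series(ushcn_raw, giss_adjusted))
# [0.2, 0.2, None, -0.3, -0.3]
```

A step in such a difference series (as opposed to random scatter) points to a discrete adjustment rather than measurement noise.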

In the course of reviewing quality problems at various surface sites, among other things, I compared these different versions of station data, including a comparison of the Tucson weather station shown above to the Grand Canyon weather station, which is presumably less affected by urban problems. This comparison demonstrated a very odd pattern discussed here. The adjustments show that the trend in the problematic Tucson site was reduced, but they also show that the Grand Canyon data was adjusted, so that, instead of the 1930s being warmer than the present as in the raw data, the 2000s were warmer than the 1930s, with a sharp increase in the 2000s.

Figure 6. Comparison of Tucson and Grand Canyon Versions

Now some portion of the post-2000 jump in adjusted Grand Canyon values shown here is due to Hansen’s Y2K error, but that accounts for only a 0.5 deg C jump after 2000 and does not explain why Grand Canyon values should have been adjusted so much. In this case, the adjustments are primarily at the USHCN stage. The USHCN station history adjustments appear particularly troublesome to me, not just here but at other sites (e.g. Orland CA). They end up making material changes to sites identified as “good” sites, and my impression is that the USHCN adjustment procedures may be adjusting some of the very “best” sites (in terms of appearance and reported history) to better fit histories from sites that are clearly non-compliant with WMO standards (e.g. Marysville, Tucson). There are some real and interesting statistical issues with the USHCN station history adjustment procedure and it is ridiculous that the source code for these adjustments (and the subsequent GISS adjustments – see bottom panel) is not available.

Closing the circle: my original interest in GISS adjustment procedures was not an abstract interest, but a specific interest in whether GISS adjustment procedures were equal to the challenge of “fixing” bad data. If one views the above assessment as a type of limited software audit (limited by lack of access to source code and operating manuals), one can say firmly that the GISS software had not only failed to pick up and correct fictitious steps of up to 1 deg C, but that GISS actually introduced this error in the course of their programming.

According to any reasonable audit standards, one would conclude that the GISS software had failed this particular test. While GISS can (and has) patched the particular error that I reported to them, their patching hardly proves the merit of the GISS (and USHCN) adjustment procedures. These need to be carefully examined. This was a crying need prior to the identification of the Hansen error and would have been a crying need even without the Hansen error.

One practical effect of the error is that it surely becomes much harder for GISS to continue the obstruction of detailed examination of their source code and methodologies after the embarrassment of this particular incident. GISS itself has no policy against placing source code online and, indeed, a huge amount of code for their climate model is online. So it’s hard to understand their present stubbornness.

The U.S. and the Rest of the World
Schmidt observed that the U.S. accounts for only 2% of the world’s land surface and that the correction of this error in the U.S. has “minimal impact on the world data”, which he illustrated by comparing the U.S. index to the global index. I’ve re-plotted this from original data on a common scale. Even without the recent changes, the U.S. history contrasts with the global history: the U.S. history has a rather minimal trend if any since the 1930s, while the ROW has a very pronounced trend since the 1930s.

Re-plotted from GISS Fig A and GFig D data.

These differences are attributed to “regional” differences and it is quite possible that this is a complete explanation. However, this conclusion is complicated by a number of important methodological differences between the U.S. and the ROW. In the U.S., despite the criticisms being rendered at surfacestations.org, there are many rural stations that have been in existence over a relatively long period of time; while one may cavil at how NOAA and/or GISS have carried out adjustments, they have collected metadata for many stations and made a concerted effort to adjust for such metadata. On the other hand, many of the stations in China, Indonesia, Brazil and elsewhere are in urban areas (such as Shanghai or Beijing). In some of the major indexes (CRU, NOAA), there appears to be no attempt whatever to adjust for urbanization. GISS does report an effort to adjust for urbanization in some cases, but their ability to do so depends on the existence of nearby rural stations, which are not always available. Thus, there is a real concern that the need for urban adjustment is most severe in the very areas where adjustments are either not made or not accurately made.

In its consideration of possible urbanization and/or microsite effects, IPCC has taken the position that urban effects are negligible, relying on a very few studies (Jones et al 1990, Peterson et al 2003, Parker 2005, 2006), each of which has been discussed at length at this site. In my opinion, none of these studies can be relied on for concluding that urbanization impacts have been avoided in the ROW sites contributing to the overall history.

One more story to conclude. Non-compliant surface stations were reported in the formal academic literature by Pielke and Davey (2005) who described a number of non-compliant sites in eastern Colorado. In NOAA’s official response to this criticism, Vose et al (2005) said in effect –

it doesn’t matter. It’s only eastern Colorado. You haven’t proved that there are problems anywhere else in the United States.

In most businesses, the identification of glaring problems, even in a restricted region like eastern Colorado, would prompt an immediate evaluation to determine whether similar problems existed elsewhere. However, that does not appear to have taken place and matters rested until Anthony Watts and the volunteers at surfacestations.org launched a concerted effort to evaluate stations in other parts of the country and determined that the problems were not only just as bad as eastern Colorado, but in some cases were much worse.

Now in response to problems with both station quality and adjustment software, Schmidt and Hansen say in effect, as NOAA did before them –

it doesn’t matter. It’s only the United States. You haven’t proved that there are problems anywhere else in the world.

My climate change prediction
“it doesn’t matter. It’s only the United States and Europe. You haven’t proved that there are problems anywhere else in the world. ”
“it doesn’t matter. It’s only the United States, Europe and Asia. You haven’t proved that there are problems anywhere else in the world. ”
“it doesn’t matter. It’s only the land surface of the world. You haven’t proved that there are problems anywhere else in the atmosphere. ”
“it doesn’t matter. It’s only the surface of the world. You haven’t proved that there are problems anywhere else in the atmosphere. ”
“it doesn’t matter. It’s only the lower atmosphere of the earth. You haven’t proved that there are problems anywhere else in the atmosphere. ”
“it doesn’t matter. It’s only the atmosphere. You haven’t proved that there are problems anywhere else in the …. “

Steve:
As I noted on the earlier related thread and to Gavin at RC, the two charts are misleading when placed side by side because of the different y-axis scales. If you make them the same, then your point about the trend becomes more obvious. Perhaps, you can simply put them on the same chart and highlight the issue of temperatures in the 30s.

There is one other significant point. Much of the alarmism has been based on the “fact” that 8 of the hottest years in the century occurred in the last decade. Now it is only 3, with 4 in the decade of the 30’s. Rather changes the concern one might feel. Murray

When comparing the two graphs it’s obvious that the high points in recent years are similar, but there is a very large difference WRT the 30s. I cannot believe that only the USA had elevated temperatures in the 30s. Would sure like to see the adjustment procedures for the rest of the world…

jae:
My point exactly. It just so happens I am in the middle of a pretty good book by Ian Kershaw, Fateful Choices (I kid you not!) that describes the Japanese invasion and occupation of Manchuria and large swathes of China starting in 1933. For the rest of the 30s and 40s China and East Asia was a war zone. The odds of continuous and reliable temperature records strike me as slim though, of course, possible. My guess is that there has to have been some “adjustments” to the record. Of course, the published ROW record may be spot on. But it certainly behooves us to check.
One thought that struck me was to first add data from Canada and Mexico for the same period so that a regional picture would emerge. I am assuming that the Canadian record would have a similar accuracy to the US record. This would at least allow the NA record a more reasonable weight in any comparison to the ROW. Then we can check those locations that have reliable and relatively uninterrupted records – Northern and Southern Hemispheres focusing on 1920 through 1950 to see if the 30s were a strictly regional versus global spike. I would have thought that this was important to rule out any solar explanations for temperature cycles.

Steve,
I just noticed that the spike in the 30s is in the ROW chart too, though suppressed. Given that, with an area-based weighting (approx 2%), the purely US data should have a microscopic impact, I find this somewhat surprising.

Yes, it reveals a sloppiness in handling data which seems to be a persistent problem with parts of NOAA. The data is treated as if it were a minor adjunct to a perfect theory of warming. Whenever data is found that conflicts with the AGW models it is always the data that is suspect, never the models. On the skeptic side, raw data is taken more seriously, especially by people like Steve McIntyre. From what I’ve seen on CA and Real Climate I would say that here data is the foundation of theory while at RC theory is the foundation of data.

The tendency of AGW advocates to prefer theory over facts is why I found this comment by Tammino on the RC website to be remarkably revealing. After a post by a skeptic Tammino wrote:

The most revealing aspect of the AGW debate is this: when the “warmers” make a claim, it’s based on serious research and considerable effort, and if an error is found, it’s admitted and corrected ASAP. When “deniers” make a claim (like [referring to the skeptical poster]), it’s based on the lack of serious research or considerable effort, and if an error is pointed out, it’s excused away or triggers a scrambling attempt to change the subject.

Here once again theory trumps facts. The theory that all “deniers” are ignorant, lazy dilettantes who are probably dishonest as well takes precedence over the facts. The facts are that Mann has never (to the best of my knowledge) admitted to a problem with his work while Christy and Spencer acknowledged their error. Mann still refuses to reveal parts of his work while Christy and Spencer opened up everything to examination. On the non-expert level Al Gore has acknowledged no errors or exaggerations in “An Inconvenient Truth” but Michael Durkin has accepted that “The Great Global Warming Swindle” contained errors that he said would be corrected in a revised version.

The question of whether or not Hansen’s error results in a huge effect isn’t the issue. What is important is that for the past decade we have been assaulted by the media with pronouncements of impending doom because “the warmth of 1990’s was unprecedented” and almost all of these statements could ultimately be traced to Hansen or Mann. If it was so well known that the 1930’s and the 1990’s were separated by a few millidegrees then why were we subjected to such apocalyptic harangues? Those few millidegrees seemed awfully important prior to their reversal.

For certain AGW advocates facts only matter when they confirm the CO2 theory of warming. When the facts go against the theory the facts are inconsequential.

The most revealing aspect of the AGW debate is this: when the “warmers” make a claim, it’s based on serious research and considerable effort, and if an error is found, it’s admitted and corrected ASAP

I don’t think that this characterizes Mann for example. Mann strenuously argued the unarguable. He wasted a lot of time by vehemently denying that there was a bias in his PC method. See for example here. This was the worst possible point for him to contest as his position on this issue was hopeless. Even after the Wegman and NAS reports came down against him on this particular point, he continued to litigate the point in his Replies to the House Committee (and has never conceded this point.)

Steve:
I just went for a quick tour around Canada using the cool GISS map feature here (for others, you click on the map and up comes a list of proximate stations. I simply zeroed in on stations that had data for the 30s). Two things struck me: First, where data exists the pattern of temperatures for the 30s is the same as for the US. Second, there are a large percentage of the sites with missing data for the 30s and few have data prior to the 30s. I do not have the tools for a more systematic analysis. Perhaps others can quickly scan other locations and check my impression.
Of course, if someone has access to an official historical record for Canada that would help – the only one I found started in the 40s.
Looks like the pattern is at least Continental rather than simply for the US.

I was having a few festivals last night with an economist friend of mine and the conversation turned to global warming. I mentioned to him the recent discovery of Hansen’s Y2K error and how five of the top ten hottest years over the past century were in the 1930s. I likely didn’t have all my facts right nor my faculties, but his response astounded me, and thus, I thought you might be interested. He said, “Well, of course, economists for years have predicted the cause of the depression had to do with agriculture. The vast majority of the population on the planet in the 1930s subsisted by an agrarian existence and agriculture was as vital to the economy then, as manufacturing is today. If all of a sudden, it gets very hot for ten years or so in the 1930s, this has a catastrophic and cascading effect on crops, their harvest, but most importantly for the economy, their yield.”

No one needs reminding about the great stock market crash or the wretched outcome of mass unemployment during that time. It caused hyperinflation globally (Germany being hit the worst) and was certainly one of the commanding heights that explain Hitler’s rise to power.

Now we have a cross correlation between temperature measurements and global economics that validate one another. I believe Ross and others may have written about this in other threads over the years, but I apologize, the details are rather sketchy. The hot temperatures in the USA during the 1930s help explain much more than merely the mendacity of AGW.

Steve, you are correct, I misspoke. “Predicted” is not the word my friend would have used. It is truly a bad translation from his, as far as I can recall, elegant ruminations. Um, did I mention there was beer involved? Perhaps a better way might have been:

Well, of course, economists have for decades believed that the cause of the depression had to do with agriculture.

Economists are still not in universal agreement regarding the root cause of the 1930s depression, but many now agree that if Milton Friedman’s monetary policy had been in effect, the depression may never have happened. In fact, Roosevelt’s New Deal made things worse. WWII greased the wheels of industry that kick started the economy. That was what got it back on track.

As for the interesting correlation between hot weather in the 1930s and the linked dismal economic productivity during that time, it does suggest, I think, that the temperature records of the 1930s are good despite that decade being sandwiched between the two World Wars.

Just want to give major thanks to the website authors for your work. True scientific curiosity and genius at work. As a previous poster said — IT IS VERY SIGNIFICANT that 4 of the 10 hottest years now take place in the 1930s. When urban heat sink effect was much less. And much less of America was paved over with asphalt etc for further heat sink effects

I’m a newbie to this site and the GW debate. Until I ran into this site I always understood that these government agencies were giving us good data. I am shocked (to say the least) about what Steve found out in his analysis of data fudging at GISS. I guess I don’t understand climate science but I figured observed data is observed data. Why does it need to be “fixed” anyhow? Also I am appalled at some of the photos of these stations where they are apparently observing ground level temperature. I saw some photos at surfacestations.org and I couldn’t believe how they could have an A/C exhaust or BBQ grill a few feet away from these temperature sensing devices. Unbelievable. Why should we even trust data from these stations? I guess Freeman Dyson is right, the government is spending too much money on models and not enough on getting data and observing nature. Also, as a taxpayer I don’t understand why GISS doesn’t post their algorithms unless it affects our national security. Isn’t there a Freedom of Information Act to get this stuff out? Thanks for your work Steve and this wonderful website.

The numbers don’t need to be correct to be factual and facts don’t have to be real to prove arguments. What really matters is that truthiness is equal to or greater than the sum of all claptrap.

This is just sickening. When this finally plays out it is going to leave generations of fringe groups, pseudo-scientists, quantum crystal grippers and religious wackos claiming that since the human induced climate apocalypse theory was bunk, they deserve at least as much respect for their bunk as any other science.

Just as every headcase out there claims to be the modern day equivalent of Galileo, the cry of “remember global warming” will issue from countless crackpots down through time. I always knew Al Gore would ruin the world.

Has anyone done a compare/contrast between US and Australian trends? It appears to me that their (Aus) sites are much closer to being in compliance with the siting requirements. It also appears that they have a good representative coverage of the country/continent.

Well, of course, economists for years have predicted the cause of the depression had to do with agriculture.

Huh? You forget that the prices of agricultural products dropped so low during the depression that it was unprofitable to transport the goods to market. Indeed, there were large surpluses and correspondingly massive burn programs throughout the country in a (failed) coordinated effort to drive up the prices.

It caused hyperinflation globally (Germany being hit the worst) and was certainly one of the commanding heights that explain Hitler’s rise to power.

The German hyper-inflation was in the early 1920s and ended in roughly 1924 with the introduction of the Rentenmark.

So it turns out that man really does cause global warming by awful mathematics and poorly sited weather stations. We don’t have the whole story here. Just some large cracks made in the cosmic egg of the greenies and the (very) soft science types

For certain AGW advocates facts only matter when they confirm the CO2 theory of warming.

Those are the ones that pay better. Did you know Hansen once worked for Enron?

Until I ran into this site I always understood that these government agencies were giving us good data.

They give you the data that will bring in the most money to their bureaucracy – a financial interest wholly independent of, though ideologically convergent with, that of the financial firms that are pushing the debate now in order to grease through trading schemes.

16/Steve: Mann is one of the worst in refusal to admit an error. No doubt. Unfortunately at times you have emulated him, such that I think you see that as a benchmark for yourself (vice normal science standards).

P.s. This is not off-topic. If you are going to discuss this stuff, I should be able to reply.

She says that scientists should release their data, but remained silent when her colleagues Holland and Webster were asked directly but refused to share their data from their recent hurricane paper. Here are some of Curry’s comments:

“I have seen too many examples in the climate field where scientists do not want to make their data and metadata available to skeptics such as Steve McIntyre since they don’t want to see their research attacked (and this has even been condoned by a funding agency). Well, in the world of science, if you want your hypotheses and theories to be accepted, they must be able to survive attacks by skeptics. Because of its policy importance, climate research at times seems like “blood sport.” But in the long run, the credibility of climate research will suffer if climate researchers don’t “take the high ground” in engaging skeptics.

With regards to Steve McIntyre and climateaudit. In the early days of McIntyre’s attacks on the “hockey stick”, it was relatively easy to dismiss him as an industry “stooge.” Well, given his lengthy track record in actually doing work to audit climate data, it is absolutely inappropriate in my opinion to dismiss him. Climateaudit has attracted a dedicated community of climateauditors, a few of whom are knowledgeable about statistics and are interesting thinkers (the site also attracts “denialists”). For all the auditing activity at climateaudit, they have found relatively little in the way of bonafide issues that actually change something, but this is not to say that they have found nothing. So taking the high ground, lets thank Steve and climateauditors if they actually find something useful, assess it and assimilate it, and move on. Such actions by climate researchers would provide less fodder for the denialists, in my opinion.”

Steve C.
It’s great that someone is looking at the base data and locations for accuracy and whether the data is representative of the location. Too often we only see the output that results. We know how manipulative that can get. Look at all the “sky is falling” stories we get now from the media.
Steve C.

Lorne Gunter appears to have the facts correct: four of the hottest years in the US over the past century were in the 1930s, and NASA’s GISS quietly corrected the error discovered by Steve McIntyre. He describes James Hansen as the “godfather of global-warming alarmism”, and describes McIntyre’s work as “the bane of many warmers’ religious-like belief in climate catastrophe”.

Steve-
I’d like to talk to you further about this piece and possibly get you on the Bob Davis Show on AM 1500 KSTP in Minneapolis to talk about it, but I don’t see any kind of contact information on this site. Please e-mail me so we can talk

I see that Schmidt at realclimate is trying to argue that source code should not be provided. It’s interesting to recall this statement by a National Research Council panel chaired by Tom Karl, Adequacy of Climate Observing Systems, online at http://www.nap.edu/catalog/6424.html, previously discussed at CA here:

The free, open, and timely exchange of data should be a fundamental U.S. governmental policy and, to the fullest extent possible, should be enforced throughout every federal agency that holds climate-relevant data. Freedom of access, low cost mechanisms that facilitate use (directories, catalogs, browse capabilities, availability of metadata on station histories, algorithm accessibility and documentation, etc.), and quality control should be an integral part of data management.

Oh, yes. Hansen was a member of that commission. I guess that he just meant other people.

I have to say that years before I had heard there was widespread skepticism about man-induced global warming, I began having suspicions when I compared different editions of the World Almanac and spotted significant increases in mean monthly temperatures in U.S. cities, particularly in the Southwest, and noted much smaller increases in Eastern cities or in some cases, DECREASES of monthly means. This was just 30-year monthly averages rounded off to 1 digit, comparing 1951-1980 with 1971-2000, but it seemed to me the temperature changes of a given city coincided awfully closely with the population changes in that city. The more rapid the expansion of a city, the more its temperature went up, as happened in Phoenix, Arizona, and in Miami and Tampa, Florida, and in a few shrinking cities like Baltimore, Maryland, and Cleveland, Ohio I noticed decreases. I knew from having lived on Long Island, in a town 50 miles east of New York City, that urban heat island effect is very real and has a very significant localized effect on temperatures, especially at night (up to 15 degrees difference on a clear night.) It’s clear to me, in fact, that if New York City reverted to its rural days, it would very likely be no more than one or two degrees warmer than Providence, Rhode Island is now.

What really sparked my skepticism, though, was hearing about the Southern Hemisphere warming and how it was supposedly warming faster than the Northern Hemisphere, with most of the increase taking place in diurnal minima. I knew from my geography that the Southern Hemisphere, relative to the Northern Hemisphere, has a much faster exploding population that is also rapidly urbanizing. I could only think that the people who compiled the data did not take this into account, because diurnal minima increases without the maxima also going up points squarely at an increase in urban heat island effects. Also, the Southern Hemisphere should have been warming more slowly because of the far greater ocean area, and would have shown this retardation of temperature increase with a “pure” dataset. Not to mention, Antarctica which still has basically rural stations should not be getting colder and building up a thicker layer of ice.

So, apparently, my suspicions about this aspect of MIGW are definitely confirmed from the remarks in this article. My trust in these prognosticators and modellers employed by the United States Government has been forever destroyed, and if their gloom and doomsaying predictions of a global warming catastrophe are based on fudging and careless interpretation, then they are really doing no better than witches at forecasting the future climate of the earth, and we may as well listen to the witches as follow the advice of IPCC and other such organizations.

Steve, according to Gavin at realclimate.org the algorithm for fixing is pretty simple and well-known and so he doesn’t feel like sharing the code base. Do you know anyone else who has implemented such an algorithm to “fix” urban data using rural data? I don’t understand why he doesn’t feel like sharing the code but maybe if someone implemented and validated the “fixed” data it would help determine if access to his code is even necessary. Of course, such fixing seems too simplistic to me anyways but since I’m not a climate scientist I will assume that scientists have figured out such data modifications are reasonable. Maybe the truly scientific way to validate them would be to validate the predictions of these models. I’d love to hear about predictions that have been made before and have been validated/invalidated recently.

Do you remember Mann saying “a conventional Principal Component Analysis (PCA) is performed”. Of course that’s not what he did. There were a variety of tweaks and adjustments in his methodology that were not described accurately. If the methodology is simple, then the code will be simple too.

In econometrics, here is the policy of the American Economic Review:

It is the policy of the American Economic Review to publish papers only if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication. Authors of accepted papers that contain empirical work, simulations, or experimental work must provide to the Review, prior to publication, the data, programs, and other details of the computations sufficient to permit replication. These will be posted on the AER Web site.

Please see the discussion of this on CA over two years ago here . Nothing has changed. There is no reason why GISS shouldn’t also follow a best-practices policy.

“Although the contiguous U.S. represents only about 2% of the world area, it is important that the analyzed temperature change there be quantitatively accurate for several reasons. Analyses of climate change with global climate models are beginning to try to simulate the patterns of climate change, including the cooling in the southeastern U.S. [Hansen et al., 2000]. Also, perceptions of the reality and significance of greenhouse warming by the public and public officials are influenced by reports of climate change within the United States.”

“…perceptions of the reality…”? I’m not sure what to think about that.

Anyway, apparently, the Y2K-and-later errors would be important to the authors of that document.

Interesting in that the code-sharing among econometricians has not made econometrics any more useful as a predictive tool. One might, if one were being snide, say that since there are so few contact points between econometrics/economics generally and empirical reality, that there is “nothing outside the code”, and so such sharing simply provides the illusion of some exterior referent beyond the math.

But should this kind of discipline be held up as a standard for Climatology?

“Huh? You forget that the prices of agricultural products dropped so low during the depression that it was unprofitable to transport the goods to market. Indeed, there were large surpluses and correspondingly massive burn programs throughout the country in a (failed) coordinated effort to drive up the prices.”

Nothing inconsistent here if warmer weather improves crop yields. A step increase in supply of a perishable commodity will likely crash the market.

Mr. McIntyre deserves a lot of credit for bringing a dose of healthy and data-based scepticism to the climate debate, currently overwhelmed by the doom sayers. However, the worldwide receding of glaciers and the melting of Arctic ice seem to corroborate global warming. I am looking forward to hearing opinions on this issue.
Jaroslaw Sobieski, Hampton, VA

Interesting in that the code-sharing among econometricians has not made econometrics any more useful as a predictive tool. One might, if one were being snide, say that since there are so few contact points between econometrics/economics generally and empirical reality, that there is “nothing outside the code”, and so such sharing simply provides the illusion of some exterior referent beyond the math.

Nobody ever said it would, duh. The point has nothing to do with “making it better,” it’s all about falsification. Curious you would imply audit a bad thing. Do you not favor the scientific method, or is it only applicable when someone else does it?

The most important aspect of your discovery Steve (congratulations by the way) is that many more people now understand that there are all kinds of “adjustments” being made to the data.

While a 0.15C error in the “adjustments” is important, I think people need to understand that Hansen (and Jones) have “adjusted” the data by another 0.5C besides this error. People need to know that global temperature series as well as the US series has 0.6C to 0.7C of adjustments contained in it as well.

While some of the “adjustments” like the time-of-observation bias appear valid, no one can really tell if these are accurate or contain errors as well. No one really knows the extent of individual adjustments done or how much it really adds up to.

My view is that ALL the adjustments in climate data (including the ones done to the Arctic sea ice extent last January) are favourable to the global warming proposition.

Let’s go after the next set of adjustments (although I imagine Hansen will never admit to any errors ever again which is a problem.)

It is derived from what the Australian Bureau of Meteorology calls a high-quality network of stations relatively unaffected by urbanisation and disruption. It hasn’t been subjected to any intense scrutiny as far as I know.

The 1930s warming seen in the US is largely absent, but the Pacific Climate shift of the late 1970s seems to be strongly evident. Since 1980, the temperature has risen maybe two or three tenths, which seems in line with the satellite temps. However, trends are significantly smaller than the year-to-year fluctuations, not to mention the adjustments visited upon such data. There is no sign yet of dramatic change.

There are regional changes going on that diverge from the whole. You can check other graphs on the website.

Jaroslaw, a lot of things are happening and have happened to our planet every second of every day of every millennium of every era of every epoch since the beginning of everything. Is anyone with any speck of common sense arguing that nothing right now is happening to the temperatures on earth?

It’s when you get to the What-Does-This-Mean and What-Are-We-Gonna-Do part of it that the fur starts flying.

Someone such as Steve McIntyre rips the Settled Scientists’ erroneous data to shreds, and what does he get from his opponents? “That didn’t matter anyway,” which is a loose translation of the native tongue version: “You @#$%! have no @#$%! clue what the @#$%! you’re talking about, you @#$%! of a @#$%! is going to KILL US ALL! YOU @#$%!-@#$%!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!”

I’m just a regular guy who wants to know what’s going on. Is it that impossible to find out? Because this is what I’m seeing a lot of:

Scientist A: “I have a question about your findings. Can you explain precisely how you got your results?”

Actually the guys I’m sort of feeling sorry for at the moment are Martin Hoerling and his colleagues at the NOAA Earth System Research Laboratory, in Boulder, Colorado. They just had an article accepted by Geophysical Research Letters on the 18th of July which is now “in press” titled “Explaining the Record US Warmth of 2006”.

The introduction starts:

Based on preliminary data, 2006 was the warmest in the 112-year record for the conterminous United States. In NOAA’s official press release dated 9 January 2007 (http://www.noaanews.noaa.gov/stories2007/s2772.htm), the annually averaged US temperature was reported to be 55°F (12.8°C), eclipsing the prior record set in 1998. The warmth was widespread, with each of the 48 states being above normal, and warmth in the majority of states ranking among the 10 highest years since 1895. In this article, evidence is presented that the primary cause for the warmth was related to global warming due to increasing greenhouse gases, and that the simultaneous occurrence of El Niño conditions in the tropical Pacific (a natural fluctuation of climate) did not contribute to US warmth.

I don’t think I’ve seen many articles in GRL that reference press releases. And why can’t NOAA agree with NASA on the order of record temperatures (or was before it wasn’t) in the US? Why does (did) NOAA think that 2006 eclipsed 1998 when Hansen/GISS/NASA doesn’t (didn’t) think so?

At any rate, assuming this article doesn’t get withdrawn before actual publication, the gist of the story is that they ran some models with and without El Niño and found that out of the +1.1°C “departure” in 2006 from the “normal” temperature range, and based on their greenhouse gas modeling assumptions, that the “ensemble mean signal is +0.62°C (computed relative to the models’ 1971-2000 climatology), accounting for more than half of the observed warmth”.

The “good” news is they say there is only a 16% chance that 2007 will be a record, since about 0.5°C is needed from “naturally occurring warmth” added to a greenhouse signal similar to 2006, and this is not so likely.

How unlucky can you be to have your article accepted and the premise severely undermined three weeks later before it’s even published? Oh well, hope for better days.

Steve…please keep auditing and scrutinizing…it is good for the science indeed!

But, the back-slapping congratulations and cheer-leading coming from the most partisan conservative pundits, as if the entire concept of global warming is now gone, is a joke. I’m not saying anyone here is saying that…or that you have full control over the pundits. Just be careful as you go out on ‘tour’ with this news.

Steve, I have a question about the reliability of the data from the stations. I have a political science background so I am sensitive to data issues, and did some climate study in college, but haven’t kept up with the academic literature. I learned way back when that some of the stations, originally located outside cities, were eventually surrounded by built up areas when the cities expanded. This made them read warmer than they used to because of the effect of reflected heat. Just now I was going through some of the GISS online graphs in a very impressionistic way, looking at rural station data and also that from cities, especially those with recent growth. Of the former that I looked at, they showed little trend. Of the latter, there was a marked upward trend. But the increase was not uniformly recent. E.g., looking at the Washington Reagan National Airport data the upward trend was in the 60s-80s, then it flattens out. That coincides with the time period in which that part of northern VA was built up. Have any studies been done that compare rural to urban station data to attempt to test the hypothesis about the effects of urban sprawl on the data sets? It would be interesting even at the most basic level to see if those classified as rural showed significantly different variance than those in urban areas. As well, if someone had the time to really look into the locations of the stations it would be worth seeing what % of the increase in temperature readings could be explained by local factors. Or have these studies been done? Thanks for all your meticulous work.

Does anyone know why NOAA has not yet updated their global temperature anomaly data for July? Usually they have the previous month updated before the 10th of the month, but here it is the 15th with no update. Is it because of the temperature revision as a result of Steve’s work?

The suggestion is that there are “cool farms” – the opposite of heat islands. Since the area under irrigation has tended to increase in the US, there has been a cooling effect which masks the general warming trend. Thus, while 1998 may have been warmer globally, it wasn’t as warm as it otherwise would have been in the USA.

There is this little thing in Hansen 2001 that has been bugging me:
page 7.
“The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior of 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C.”

The magnitude of the change is not important, but the logic is very odd. He calls these data flaws. I started some basic checking. First a simple one: Crater Lake NPS HQ. Data in USHCN ran from 1919 to present. I took Prospect 2W, the next closest station in Oregon, and I ran a monthly difference for every month from 1919-2005 on three values: Tmin, Tmax, Tmean. Tmean, for example, had Crater Lake colder by an average of 6.6C.

Well, the place is damn cold and snowy.

Then I checked station altitudes:
Crater Lake 6475 ft
Prospect 2482 ft

A difference of about 1.2 km.

Then I recall this from Hansen 1999

“We also modified the records of two stations that had obvious discontinuities. These stations, St. Helena in the tropical Atlantic Ocean and Lihue, Kauai, in Hawaii are both located on islands with few if any neighbors, so they have a noticeable influence on analyzed regional temperature change. The St. Helena station, based on metadata provided with MCDW records, was moved from 604 m to 436 m elevation between August 1976 and September 1976. Therefore assuming a lapse rate of about 6°C/km, we added 1°C to the St. Helena temperatures before September 1976.”

So, since Crater Lake is 1.2 km higher than Prospect, it can be expected to be 7.2C cooler, and I found it to be about 6.6C cooler.

I wonder if elevation difference entered his analysis when he compared Crater Lake, Oregon to surrounding stations.
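The arithmetic of that check can be sketched in a few lines. This is a hypothetical illustration using the elevations quoted in the comment and the ~6°C/km lapse rate Hansen (1999) cites for St. Helena; it is not the actual GISS adjustment code.

```python
# Hedged sketch: expected temperature offset between two stations from
# their elevation difference, at the ~6 C/km lapse rate quoted above.
# Elevations are the ones given in the comment; this is NOT the GISS code.

FT_TO_KM = 0.0003048  # feet to kilometres


def expected_offset_c(elev_hi_ft, elev_lo_ft, lapse_c_per_km=6.0):
    """Expected cooling of the higher station relative to the lower one."""
    dz_km = (elev_hi_ft - elev_lo_ft) * FT_TO_KM
    return lapse_c_per_km * dz_km


# Crater Lake NPS HQ (6475 ft) vs Prospect (2482 ft)
print(round(expected_offset_c(6475, 2482), 1))  # ~7.3 C expected; ~6.6 C observed
```

On those numbers the expected lapse-rate offset is in the same ballpark as the 6.6C average difference the commenter measured, which is the point of the check.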

A scientist confident that his hypothesis is true does not hesitate to release the data and calculations that support it. If his hypothesis is true, it corresponds to reality — so the scientist knows that additional examination and research by others can only produce additional evidence supporting his hypothesis. Thus, he welcomes and invites such additional research.

If he is concerned that his data may be “misused” or subject to some sort of inappropriate interpretation, he preemptively refutes it by showing why those interpretations are wrong and why his interpretation is correct. He is not afraid to discuss his reasoning or his data.

By contrast, a scientist that refuses to release significant parts of the research on which his claims are based — a scientist who, in fact, declares the matter settled and beyond debate — a scientist who demands an end to funding for those who disagree with him — such a scientist is evidently a man with something to hide. Such behavior is completely inexcusable in science.

Was the cause Y2K or Time of Day? Red faces at NASA over climate-change blunder
“Puzzled by a bizarre “jump” in the U.S. anomalies from 1999 to 2000, McIntyre discovered the data after 1999 wasn’t being fractionally adjusted to allow for the times of day that readings were taken or the locations of the monitoring stations.”

Can you clarify if this error was due to a Y2K computer bug, to a “time of day” adjustment, or to a switch in data sets that happened to occur in 2000?

A switch in data sets occurred in 2000: the data for previous years included time-of-observation adjustments, but for 2000 and subsequent years they reverted to using “raw” data without the time-of-observation adjustments.
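That splice can be illustrated with a toy series. The numbers here are made up (the -0.15 adjustment magnitude is invented for the example, not the actual USHCN value); only the mechanism matters: adjusted data before 2000, raw data from 2000 on, hence a spurious step even in a flat record.

```python
# Toy illustration of the splice described above: pre-2000 values carry a
# hypothetical time-of-observation (TOBS) adjustment, post-2000 values are
# spliced in raw, so a flat underlying series acquires a step at the splice.

years = list(range(1996, 2004))
true_anomaly = [0.20] * len(years)   # pretend the underlying signal is flat
tobs_adjustment = -0.15              # invented adjustment magnitude

spliced = [a + tobs_adjustment if y < 2000 else a   # raw from 2000 onward
           for y, a in zip(years, true_anomaly)]

step = spliced[years.index(2000)] - spliced[years.index(1999)]
print(round(step, 2))  # 0.15 step created purely by the splice
```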

How does it make econometrics more falsifiable? Is falsification then equivalent to finding problems in the code?

You betray your ignorance of the scientific method with this statement. It’s a sort of check and balance. Falsification does not mean the results will be right, only that the process has been done correctly.

Econometrics, btw, is an apples and oranges comparison, yet another bifurcation from someone that doesn’t understand the reasons the analogy isn’t even close to approximate. There’s a huge difference between econometrics and climatology. If you don’t understand why, which I doubt is true, I cannot help you. IMO, you’re simply a troll.

Steve cites a Jan 7, 2007 NOAA press release about 2006 being the warmest on record in the US. Similarly, the media’s reporting of 1998 as the warmest year also springs from NOAA data. I believe NOAA and NASA/GISS approach the data in different ways. Has NOAA’s ranking changed as well? If not…

The issue is this: the scientific method is based on the BELIEF that results should be repeatable. Nature doesn’t change its rules moment to moment. Imagine what driving would be like if F=ma changed randomly every few seconds.

BCL, I give you a rudimentary example:

IF you raise the temperature of H2O to 212°F at sea level, it will BOIL.

If you do X, you will observe Y. AGW is similar: if we burn fossil fuels, Greenland will melt and Barbra Streisand’s mansion in Malibu will get wet.

DO X: observe Y. The power of science. It is not like prayer or blowing on the dice.

Things to note about boiling water:

1. A conservative will get this answer if he follows INSTRUCTIONS.
2. A liberal will get this answer if he follows instructions.
3. A jihadist will get this answer if he follows instructions.
4. A monkey will… you get the point.

The key is ADEQUATE instructions for behavior.

The claim is that the ANSWERS of SCIENCE are independent of the interests of the people executing the instructions for behavior. The method is designed to isolate and identify bias in the experimenter.

When you do not share methods, when you do not give INSTRUCTIONS FOR REPLICATION, you undermine science and its conclusions. Hansen cannot merely tell me what he did; he must give INSTRUCTIONS FOR BEHAVIOR. Do this, and you will observe X.

The point IS we show our data and methods so that ANYBODY can duplicate the results and REMOVE experimenter BIAS as a concern.

The other day a guy from Big Oil told me that CO2 caused global warming. I questioned his motives and therefore his views. He told me that 2+2=4. I’m wondering what his agenda is. He believes in the existence of dark matter. I bet Exxon is trying to exploit it.

Re 71: You should copyright that. Great explanation, in terms anyone SHOULD be able to understand, of how science deals with investigator bias. Extraordinary claims require not extraordinary proofs, but the most ordinary one: replication. If you keep doing the same thing over and over again and get different results, you are likely not onto something. And if you don’t (c), I plan to steal it someday, with attribution, of course.

I knew Real Climate would downplay it before I ever read it on their website. James Hansen is really a piece of work, particularly since he is an astronomer, not an environmental scientist. As stated by others here, I think a fellow scientist who will not assist in defending his own work has suspect work at best.

#75 How about the revelation of the seminal Hockey Stick problems, the discovery of the corruption of the temperature data set by incorrect adjustments, the discovery of incorrect analysis of sea surface temperatures, and the discovery of incorrect analysis of hurricanes, due to incorrect accounting for offshore, and especially East Atlantic, hurricanes?

There are probably more.

Curry is quoted by reporters as saying some very odd things, which she denies, of course. No one ever asked the reporters if they had a tape.

I would be very interested to know.

By the way, what happened to all the collegiality when Curry was here, where are the proposed joint papers with Curry and Bender?

#71 And giving you instructions has no particular connection with giving you code, or no result would have been replicable before the invention of computer modelling. Which is what Schmidt has been saying over at RC.

You are welcomed to steal it ( nly fix the dang spelling mistakes as the
key fr my letter ‘o’ wrks smewhat rhapsodically….aarrrrrggggg)

Now most people do not characterize the scientific method as instructions for behavior. The idea is something I picked up from Morse Peckham (Explanation and Power), and for some folks it is a bit too pragmatic and instrumentalist.

#71 And giving you instructions has no particular connection with giving you code, or no result would have been replicable before the invention of computer modelling.

But alas, now that computer modeling exists, it becomes yet another example of something requiring audit. And no, results would still have been replicable if sufficient instructions were given, albeit by hand calculation, which would have required (then) even more detail in the instructions. Providing code lessens the necessity to detail all the nits, however (code itself is somewhat instructional).

Many people have noted the similarity between some of the tactics employed by Climate Change Denialists and Intelligent Designers.

I am astounded that anyone can have such simplistic and polemical views.

I wonder what group I fit in. For example, I think that the reality of climate change is obvious, i.e., climate changes, and has done so over historical times. However, I don’t think that warming will necessarily be a bad thing (for example, far more people die each year during winter than during summer hot spells), and it is far from clear to me that any warming that has been going on or might currently be happening is mainly due to the activity of humans.

#71 And giving you instructions has no particular connection with giving you code, or no result would have been replicable before the invention of computer modelling. Which is what Schmidt has been saying over at RC.

Probably before your time, code was originally taught to first-time users on “Big Iron” as Instruction Sets.

I thought it might be interesting to go back and take a look at some of the articles from the past few years for an idea of how the “warmest it’s been in a hundred years!” comments compare to current events.

1. NASA – http://www.nasa.gov/vision/earth/environment/2005_warmest.html
Notable quote: Previously, the warmest year of the century was 1998, when a strong El Nino, a warm water event in the eastern Pacific Ocean, added warmth to global temperatures. However, what’s significant, regardless of whether 2005 is first or second warmest, is that global warmth has returned to about the level of 1998 without the help of an El Nino.

Notable quote: Over the past 30 years, Earth has warmed a bit more than 1 degree Fahrenheit in total, making it about the warmest it’s been in 10,000 years, Hansen said. He blamed a buildup of heat-trapping greenhouse gases from the burning of fossil fuels.
Others might argue it was just a busted calculator.

SteveM. My computer has been on the fritz, so I’m struggling and will prolly have to get a new one.

AGAIN, Hansen (2001) wrote this:

“The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior of 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C.”

Well, I started by looking at CRATER LAKE. It’s always cold. There is no “data flaw”; I suspect a lack of adjustment for lapse rate. That would be an easy, quick check.
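As a sketch of that quick check (the elevations and annual means below are made up for illustration, not GISS data): reduce each station mean to a sea-level equivalent using the standard environmental lapse rate and see whether the “cold” site still looks anomalous relative to its neighbors.

```python
# Quick lapse-rate sanity check: is a high-elevation site "cold" simply because
# it is high? Values below are illustrative placeholders, not real station data.

LAPSE_RATE = 6.5  # deg C per km, standard environmental lapse rate

def sea_level_equivalent(temp_c: float, elevation_m: float) -> float:
    """Reduce a station mean to a sea-level equivalent with a constant lapse rate."""
    return temp_c + LAPSE_RATE * elevation_m / 1000.0

stations = {
    # name: (annual mean deg C, elevation m) -- made-up values
    "High mountain site": (3.9, 1974),
    "Nearby valley site": (11.5, 430),
}

for name, (t, z) in stations.items():
    print(f"{name}: raw {t:.1f} C, sea-level equivalent {sea_level_equivalent(t, z):.1f} C")
```

If the sea-level equivalents line up while the raw means do not, the “cold” record is just elevation, not a data flaw.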

THEN, I looked at Lake Spaulding. If you download GISS raw + USHCN adjustments and look at the annual data, you will note this huge dip round about 1929. Hansen removes all data PRIOR to 1927.

1. If you look at the annual Lake Spaulding GISS data you’ll see that 1948 and 1954 eclipse the low of 1929. Got me thinking. The low experienced in 1929 is not out of range with subsequent records (the plot seemed vaguely self-similar if you aligned the nadirs… vaguely).

Why? If the site is messed up, then I’d expect every season to show the chilling trend.

NOT!!! Only the summer trend line (J, J, A) showed a precipitous but roughly linear drop from 1914-1929, and this quarter of the data appears, at first blush, to be the cause of the “COOL” Lake Spaulding. Hmmm.

More scouting to do.

Anybody have any thoughts on what would contaminate data only in the summer, and do it in a roughly linear fashion for 15 years? Shading?

Reading the comments on this thread, you’d think someone just discovered a cure for cancer or something. Did y’all have one of those moments where you spray champagne all over yourselves?

An error was found and corrected. This is good! But, it’s like y’all have never seen this happen before….this is science.

The reality of the global trend still exists regardless of how a piece of data is hyped or downplayed. If it’s the media and its reporting of data that you despise, go after them! But don’t pretend that this is some ‘hit’ against the overall understanding.

LOL. Let’s wait and see what shows up when the non-USA temperature records are audited.

RE92, Mosh, remember that Lake Spaulding was recently moved. The old location was in a conifer forest area and the Stevenson Screen was placed on stilts to overcome snow depths.

See these photos and note the fence rails that have been squashed by the weight of the snow at the old original site.

Three possible situations here:

1) Heavy snowfall during those periods (haven’t time to check right now)
2) Station possibly not on high stilts at that time, closer to snow or still buried late… I’m betting some years they had to tunnel in.
3) Possible tree shading during that time that was removed later, contributing to later snowpack melt near the station.

An error was found and corrected. This is good! But, it’s like y’all have never seen this happen before….this is science.

The reality of the global trend still exists regardless of how a piece of data is hyped or downplayed. If it’s the media and its reporting of data that you despise, go after them! But don’t pretend that this is some ‘hit’ against the overall understanding.

I think y’all might be overreacting to what you view as an overreaction. What Steve M and Anthony Watts have revealed about temperature measurements and corrections can be summed up in one word: “sloppiness”. The degree of sloppiness exposed puts in skeptical minds like mine (and I’d like to think a reasonable mind) some doubts about the reliability of the scientists involved. In the end that will not affect the truth about climate change, but it can affect the efficiency by which we arrive at it.

Most of western Europe, Asia, the eastern US and the southern hemisphere were cooler than the 1930s. The southern hemisphere was much cooler.

This was posted by Patrick Henry | August 15, 2007 2:20 PM in the AccuWeather reply section. (I hope it is OK to quote this person, or is this not correct procedure? If not, please advise and do not post.)

The 1930-40 period has been used as the base period; 1930-60 gives similar results. Also, I’d be interested to know whether the 250 km radius is in fact more accurate than the 1200 km option, and if anyone (expert) could comment on the analysis.

The reality of the global trend still exists regardless of how a piece of data is hyped or downplayed. If it’s the media and its reporting of data that you despise, go after them! But don’t pretend that this is some ‘hit’ against the overall understanding.

It might help if you can remember something I have said way too many times on this topic: we (humans) chose to measure temperature in the places we chose to measure it for some pretty straightforward reasons that have no semblance of satisfying the requirements of sound statistical analysis.

If looking at the available data taught me anything it is that the temperature record around the globe is pretty spotty. You can actually see this in the animations I put together earlier this year: See http://www.unur.com/climate/.

Pause the videos and take a look at how much blank space exists in the 30s, 40s and the 50s. (These videos were made before the corrections to the GISS. Updating them is on my agenda). Compare what you see on the screen with videos posted at http://data.giss.nasa.gov/gistemp/animations/ and ponder the differences.

Ya, I went back and looked at Russ’s report. The WEIRD thing is that the cool period for Spaulding is from 1914 (beginning of record) to 1927-29. Hansen cuts the record prior to ’27, so you have to go to GISS raw + USHCN to get the pre-1927 data.

If you plot annual for Colfax (1914-1930), Tahoe City (14-30), Nevada City (14-30) and Spaulding (1914-30),

you can see the discontinuity. Spaulding cools.

Simplistically.

Tahoe City 1914-1930: Tmean is about 6 C; linear fit has a slope of about .063 (r2 = .33)
Colfax 1914-1930: weaves its way above and below 14 C; linear fit slope is .076 (r2 = .30)
Nevada City 1914-1930: trendless at 12 C; slope is -.006 (r2 = 0)
Lake Spaulding 1914-1930: drops in a linear fashion from 11 C in 1914 to 7.3 C in 1930; the slope of the linear fit is -.2033 and R2 = .69

So for 15 years, while surrounding sites are flat or trending up, Lake Spaulding is dropping.

THEN, afterwards, Lake Spaulding picks up the trends of the other sites!!!

So, it’s not a sudden change. It is a gradual cooling. Damn near linear.
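Slopes and R² values like the ones quoted above come from an ordinary least-squares fit. Here is a minimal sketch using a synthetic, perfectly linear Lake Spaulding-like series; real values would come from the GISS station pages.

```python
# OLS slope and R^2 for an annual temperature series, as in the comparisons above.
# The sample series is synthetic, not real station data.

def linfit(years, temps):
    """Return (slope, r_squared) of an ordinary least-squares line temps ~ years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    sxx = sum((x - mx) ** 2 for x in years)
    syy = sum((y - my) ** 2 for y in temps)
    slope = sxy / sxx
    r2 = sxy * sxy / (sxx * syy) if syy > 0 else 0.0
    return slope, r2

# A Lake Spaulding-like series: a roughly linear drop starting at 11 C in 1914
years = list(range(1914, 1931))
temps = [11.0 - 0.22 * (y - 1914) for y in years]
slope, r2 = linfit(years, temps)
print(f"slope = {slope:.3f} C/yr, R^2 = {r2:.2f}")  # a perfectly linear series gives R^2 = 1
```

The quoted fits (slope -.2033, R2 = .69 for Spaulding vs. near-zero slopes elsewhere) are exactly this calculation applied to the real annual means.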

By eyeball, looking at seasonal data, I think the discontinuity is in summer/fall… that is, the winter and spring data for the four sites track each other trend-wise.

I am looking at individual months now.

When Hansen spoke of “data flaws” I had to wonder. What kind of flaw manifests itself over a 15-year period in the summer months and then magically goes away? Winters track each other, so I don’t think it’s snow cover under the station. Shading??? Can’t be elevation, since the problem appears to be seasonal.

Maybe it was just 15 years of cold summers??

The decision to dump Crater Lake data because it has always been colder there compared to surrounding stations just seems arbitrary. The Crater Lake area is COLD; it gets record amounts of snow because of the local weather pattern. No data “flaw”, just a sampling of one of the chilly spots. I haven’t looked at the trend data there; in due course.

I’m not disagreeing with the concept of scrutiny and verification, so I hope people aren’t getting that impression.

Re #99: “You might prefer to wish that this is the only error that has been found in “climate science” pronouncements, however it is but one of several.”

I hope to see the several other errors corrected and integrated very soon. Seriously. If you put any scientific analysis under a microscope, you will find errors…what matters is how those affect the understanding. Anyone that expects a scientific endeavor as complex and global as this to be completely error-free, has never seriously studied natural systems. How significant the errors are to the understanding is what counts. As the several errors you cite come in in the next few days (?), and if they significantly alter the understanding, then we are on our way. It just sounds to me like there’s a lot of buzz and excitement over nothing SO FAR. Do the work and let’s see.

Re #101: “The degree of sloppiness exposed puts in skeptical minds like mine…some doubts about the reliability of the scientists involved.”

So, you are going after the scientists and not the science? I don’t think that’s a good move. Firstly, something of this scale is the result of much more than a few individuals’ research. Secondly, going after the people is a witch hunt, and typically done by people with ultra-partisan ideologies that require them to eliminate the opposition. Keep going after the science, the method; that is what is good for really figuring this all out. Stay above the personal battles; there’s enough of that.

“An error was found and corrected. This is good! But, it’s like y’all have never
seen this happen before….this is science.”

Actually we did see this happen before.

Apollo 1.
Challenger
Columbia.

The mistake Hansen made was not a science mistake. It was an engineering mistake.

If you have ever read scientists’ code, you will know exactly what I mean. This is not a slam against them; it’s just my experience.

If they opened up the code to outside review they might actually get help from
software guys, stats guys etc.

“The reality of the global trend still exists regardless of how a piece of data is hyped or downplayed. If it’s the media and its reporting of data that you despise, go after them! But don’t pretend that this is some ‘hit’ against the overall understanding.”

I think most folks here get the importance of finding a .15 C error in the last 6 years of a 120-year-old record. The US is only 2% of the land mass; there are 1200 stations. Anthony and team have checked maybe 25% of that. So, who would think you’d be so lucky as to find a nugget like SteveMc did in such a tiny, tiny sample? Makes you wonder.

Re #104 – Fair enough…like I said, the scrutiny is good for the overall understanding.

I’m just responding (#102) to these kinds of statements from above:

“The question of whether or not Hansen’s error results in a huge effect isn’t the issue. What is important is that for the past decade we have been assaulted by the media with pronouncements of impending doom”

“Just want to give major thanks to the website authors for your work. True scientific curiosity and genius at work. As a previous poster said — IT IS VERY SIGNIFICANT that 4 of the 10 hottest years now take place in the 1930s”

“Too often we only see the output which results. We know how manipulative that can get. Look at all the “sky is falling” stories we get now from the media.”

“Without “the hottest year on record was 1998″ the climate looks more naturally variable. Scientifically this is minor correction. Propaganda wise it is a big thing.”

“My trust in these prognosticators and modellers employed by the United States Government has been forever destroyed, and if their gloom and doomsaying predictions of a global warming catastrophe are based on fudging and careless interpretation, then they are really doing no better than witches at forecasting the future climate of the earth…”

While the ‘auditors’ may be doing some good work and due diligence, the results of this one single reported instance SO FAR are obviously resulting in some overarching statements about ‘winning’ the debate. The statements above (presumably from non-climatologists, but i’m only guessing) are all about the reporting and perception. One commenter even uses the word ‘propaganda’. And right now the talk radio generals are giving orders to their soldiers about talking about global warming in light of this news. What a circus.

I am excited to see the impact of the several errors that have been found (re#93, JerryB)…when will they be released?

It is my belief that if we are to solve this global problem, we must look at the problem as it is: a global issue. Everyone all over the world has to deal with this, and we can come to a conclusion. First things first, we need to analyze all the facts in evidence and all the data. When this is done we can then reach a hypothesis, or better yet a conclusion, on what this data means and how we can apply it to solving this issue of global warming. I believe that making the NASA scientists think twice about their original conclusions can possibly get a more accurate result on the data analyzed. People all over the world are all uptight about global warming; I think we should look at the results before we jump to conclusions and start a bigger problem.

Hey Anthony, the trend in the Spaulding divergence from the other sites is NOT seasonal. It also ends in 1931, not 1927 as Hansen suggests.

I looked at (Tahoe City - Lake Spaulding) for 1914-31. The trend line was .27, R2 = .92. Then I checked every month (T-S): same trend line, slightly stronger in the summer months.

1932-49 (the equivalent period on the other side): (T-S) settles down to a slope of .06.

When you look at T-S (Tahoe - Spaulding) you expect to see a small slope (sites move up and down together). BUT from 1914-1931, Lake Spaulding IS cooling in a regular linear fashion relative to the sites closest to it. Gradual linear cooling… and it’s there every month of the year. So it isn’t snow. A shade tree growing? Fascinating.
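A minimal sketch of this difference-series check, with synthetic data standing in for the real station records (every number below is invented, not a GISS value): subtracting a nearby reference station removes the shared regional signal, so any residual trend in the difference points at a local problem.

```python
# Difference-series check: subtract a nearby reference station (the "Tahoe" role)
# from the suspect one (the "Spaulding" role). The shared regional variability
# cancels, leaving only local drift. All data below are synthetic.

import random

random.seed(0)
years = list(range(1914, 1932))
regional = [8.0 + random.gauss(0, 0.5) for _ in years]   # shared climate signal
tahoe = [r - 2.0 for r in regional]                      # reference site
spauld = [r + 3.0 - 0.2 * (y - 1914)                     # site with injected drift
          for r, y in zip(regional, years)]

diff = [t - s for t, s in zip(tahoe, spauld)]            # T - S

def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(f"T - S trend: {slope(years, diff):.3f} C/yr")  # recovers the injected 0.2 drift
```

The .27 slope and R2 = .92 quoted above are this same idea applied to the actual Tahoe City and Lake Spaulding records.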

A commenter at realclimate said the following, and Gavin Schmidt, so quick to opine on critical posts, let this untrue statement pass without comment:

My understanding is that McIntyre pointed the finger at something that was wrong, but that the NASA group actually figured out what it was and did the corrections.

Others (e.g. TCO at bcl) have made similar comments. This is not true. I gave them a diagnosis and the diagnosis has been confirmed. I identified a change in data version pre- and post-2000. Here is the letter that I sent them on Aug 4 (which I had previously posted on another thread), showing that I had traced the problem to changing data provenance. Yes, they calculated the impact using their code, but the diagnosis of the problem was delivered to them.

Dear Sirs,
In your calculation of the GISS “raw” version of USHCN series, it appears to me that, for series after January 2000, you use the USHCN raw version whereas in the immediately prior period you used USHCN time-of-observation or adjusted version. In some cases, this introduces a seemingly unjustified step in January 2000.

I am unaware of any mention of this change in procedure in any published methodological descriptions and am puzzled as to its rationale. Can you clarify this for me?

In addition, could you provide me with any documentation (additional to already published material) providing information on the calculation of GISS raw and adjusted series from USHCN versions, including relevant source code.
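The splice described in this letter is the kind of artifact that shows up as a step in a difference series (e.g. GISS “raw” minus USHCN adjusted) at the changeover date. For illustration only, with an invented series and splice index, one way to flag it:

```python
# Flagging a splice artifact: compare the mean of a difference series before and
# after a candidate changeover date. A clean record should show no step.
# The series and splice index below are synthetic, purely for illustration.

def step_at(series, split):
    """Mean(after) - mean(before) around index `split`."""
    before, after = series[:split], series[split:]
    return sum(after) / len(after) - sum(before) / len(before)

# Synthetic monthly difference series: flat at 0.0 before the splice,
# offset by +0.5 C afterwards -- the kind of jump the letter describes.
series = [0.0] * 24 + [0.5] * 24
print(f"step = {step_at(series, 24):.2f} C")
```

Running this over each station’s difference series with January 2000 as the candidate split is essentially the check that exposed the jump.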

Yes, I agree with you that this does not appear to be, from the data alone, a significant correction. And yet it changed the ranking of 1998 from warmest to second warmest; in other words, if this is an insignificant change, then the fact is that 1998 was never significantly warmer than 1937 (or whenever it was)! And *that* is significant from the public-perception point of view.

It’s pretty easy to get people to want to do something when you tell them that 1998 was the warmest year on record. That becomes harder when it is changed to 2nd warmest ever – doesn’t have the same impact. And I think many people will be questioning this very point – viz “why did you make such a song and dance about 1998, when this small and, in your own words, insignificant change results in such a large change in which years were hottest? Do you have an agenda?” Like that. Regardless of whether you think AGW is real or not, and regardless of whether you think we need to spend trillions of dollars on a fix for it, I don’t think the general population (or, at least, a significant proportion of it) will look at such claims in the same way again.

This may turn out to be bad, if AGW is shown to be correct. I have previously made the point at RC that exaggerating a position is not helpful and may actually be dangerous if you are right, but they didn’t seem to get it (although that may be because of the editing that RC is infamous for). They may be in for a real-world example of why you should not be careless with the truth, even for the noblest of causes. Even if you are certain you are right, you should not exaggerate because, like this event, it’ll come back to haunt you; the boy who cried wolf and all that.

I have personally heard an IPCC scientist say, on national TV in Australia, that we are “certain” that temperatures are rising, that we are “certain” that CO2 is the cause, and that we are “certain” that the CO2 to blame is man-made! That is another example of exaggeration that is entirely unhelpful, not to mention untrue. The implication is that he is 100% sure of this, that it is beyond question. No “out” for him if he’s even a little bit wrong. If it turns out that anthropogenic causes are even 80% of the increase, he looks like an idiot (he clearly is not) and severely damages not just his own reputation, but that of the IPCC and scientists in general (not just climate scientists). I’m fairly sure that, like Gavin@RC, he truly believes he is right and that he wanted to make a difference; wanted to make people sit up and take notice, and do something about it. All very noble and altruistic; enviable, even. But ultimately, I feel, counter-productive.

So, you are going after the scientists and not the science? I don’t think that’s a good move. Firstly, something of this scale is the result of much more than a few individuals’ research. Secondly, going after the people is a witch hunt and typically done by people with ultra-partisan ideologies that require them to eliminate the opposition.

You mean like this? Activists Trying to Shut Down Climate Debate, Skeptics Say

#108 Gavin starts the promotion in the head post of “1934 and all that”.

Steve McIntyre wrote an email … there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me), looked into it and found that this coincided with the switch between two sources of US temperature data.

Steve, are you sure that Hansen et al did not know they had a problem before you alerted them? They reacted fast and without fuss, suggesting that they knew they were going to be caught out, and that if they delayed they would be just digging a bigger hole for themselves. I think they knew they had a problem and were waiting and hoping and praying for a big El Nino to get them out of trouble. Seventy-three years of CO2 addition to the atmosphere and we can’t beat a record set three generations ago?

To avoid the problem of UHI etc., I chose five records from mainly forestry research stations. I thought that five was enough to demonstrate something. The question is: if you took a high-quality record from each of the US states, would that give you a temperature record that captured 99% of the accuracy that the 1,200-station Hansen effort purports to have? Or even 99.9%? I think this is very important because the surface warming of the last 30 years might be wiped out completely, ending up similar to the satellite record. If it can be shown statistically that a 50-station record, or a 100-station record, is as good as the NASA one, that means the field is wide open.
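As a rough illustration of that statistical question (simulated, not real station data): build a toy network where every station is a shared regional signal plus local noise, then measure how closely a 50-station random subset tracks the full-network mean.

```python
# How well does a small random subset of stations track the full-network mean?
# Synthetic network: each station = shared regional anomaly + independent noise.
# Station count, noise levels, and signal are invented for illustration.

import random

random.seed(1)
n_stations, n_years = 1200, 30
regional = [random.gauss(0, 0.3) for _ in range(n_years)]
network = [[a + random.gauss(0, 0.8) for a in regional] for _ in range(n_stations)]

def mean_series(rows):
    return [sum(col) / len(col) for col in zip(*rows)]

full = mean_series(network)
subset = mean_series(random.sample(network, 50))

rmse = (sum((f - s) ** 2 for f, s in zip(full, subset)) / n_years) ** 0.5
print(f"RMSE of 50-station mean vs 1200-station mean: {rmse:.3f} C")
```

With independent noise the subset error shrinks roughly as 1/sqrt(n), which is why a well-chosen 50-100 station record could plausibly capture most of the full network; real stations, of course, have correlated errors that this toy ignores.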

I enjoyed your news coverage very much.
Environment Canada records back to 1938. http://www.climate.weatheroffice.ec.gc.ca/climateData/canada_e.html
I may not understand all the numbers, but I do think that it’s all bunk that people are causing the climate change. I seem to remember in the 70’s all the hype about how we were headed for the next ice age for the same reasons.
I also seem to recall being taught that at one time all of Canada and most of the U.S. were covered by massive amounts of ice. Seems that if it was not a naturally occurring cycle of the planet to warm and cool, we’d all be Mexicans today.
I do watch a lot of science programs, many of which, using geological evidence, prove that this is not the first time that the earth has warmed. One stated that there were records of medieval times being warmer than now; another on the Discovery Channel SHOWED, through excavations in China, Africa and other continents, that there have been at least 3 periods over the past 10,000 years (not positive on that number) that were worse than predicted for us now. Researchers in the Antarctic excavating nesting sites of penguins found that over thousands of years, the ice has receded and reclaimed the land several times. Was not ancient Egypt once a fertile, green and thriving land? Where were the cars and factories spewing toxins then?
The problem is that most people are willing to believe what’s being shouted the loudest, and it seems that the old days of the one nut on the corner screaming the world is ending have infected the masses. Plus, that’s where the grant money is loosest now: to prove it, not disprove it. These days no one wants to hear that all is well or that they are helpless in fixing things.
Sorry to ramble on, just my opinion as a casual observer and fellow traveller on this spinning wobbly planet we all live upon.

I seem to recall a Gavin Schmidt statement that only 60 properly spaced stations are needed to measure the global temperature. Since the US only covers 2% of the earth, that would mean you’d only need one US station to get a good measurement ;-)

Gavin, in true Mannian fashion, is choosing his words carefully. Technically, you didn’t know exactly what was causing the problem; you could just tell there was one and conjecture as to the cause. GISS had to figure out exactly what and where their SNAFU was. After all, you didn’t have access to all their methods and code to confirm your suspicions. He’s just stretching the truth as far as he can in an attempt to belittle things.

I also love how Gavin said, “the NASA group actually figured out what it was and did the corrections,” as if they beat you to the punch or something.

My response to Harry (#110) was deleted…I said nothing of religion, I was acknowledging the partisanship he points out and then pointing it out on this thread….I guess that’s “bad behavior”. His post gets to stay, but not mine….nice.

Well, it’s a little bit of an odd situation in Finland, too. Helsinki, now an urban city of some 1 million population, nowadays has a higher average temp than Mariehamn, Åland (an island to the southwest). But until the 1950’s Mariehamn had a higher temp than Helsinki. So how reliable are those official figures of Helsinki temperature? I think we have the same problems as the Americans, with more concrete and asphalt near thermometers. However, officials deny these arguments as pure propaganda. Meanwhile, in China they found some 50 to 80% of warming was just the Urban Heat Island effect. I’m very skeptical especially of the cooling-period 1940-75 data, when UHI and other failures were even stronger than today.

As I read Gavin Schmidt’s response, I got curious about how much of the data in the GHCN come from the U.S. Of course, there are more sophisticated ways of doing this, but I looked at a few counts.

First, 481 out of 4,495 locations (identified by their WMO number) in the GHCN are in the U.S.

Second, if we exclude the locations in the U.S., Antarctica and ships, then of the remaining 3,974 locations that appear in the GHCN, only 930 had a non-missing raw temperature in January 2007 (checked using the August 8 version of the GHCN raw means downloaded from their FTP site).

Out of the 224 countries in the data set, 141 had at least one data point for January 2007, and for 41 of those 141 some monthly means were missing.

There are 7,086,372 monthly data points in the August 8, 2007 version of the GHCN. 365,755 of them are missing values. 2,244,733 of the non-missing values are from the U.S.

I hope these numbers put into proper perspective the importance of U.S. data.

Sinan

PS: The global average does not go down by a lot after the revision because of the 1200km grid procedure.
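Counts like these can be produced with a short script. The sketch below assumes a simplified whitespace-separated layout (station id, year, 12 monthly values, with -9999 for missing); the real GHCN mean file is fixed-width, so an actual script would need to parse that format instead, and the sample rows here are invented.

```python
# Tally stations, total monthly values, and missing values (-9999) in a
# GHCN-style file. NOTE: the whitespace-separated layout and the sample rows
# below are illustrative assumptions, not the actual GHCN fixed-width format.

def tally(lines):
    stations = set()
    total = missing = 0
    for line in lines:
        fields = line.split()
        station_id, months = fields[0], fields[2:14]  # id, year, then 12 months
        stations.add(station_id)
        for m in months:
            total += 1
            if m == "-9999":
                missing += 1
    return len(stations), total, missing

sample = [
    "42572530000 2007 -12 -9999 45 110 160 210 240 230 180 120 60 -5",
    "42572530000 2006 -10 -8 40 105 155 205 235 225 175 115 55 -9999",
    "60360010000 2007 150 160 -9999 -9999 220 250 270 265 240 210 180 160",
]
n_stations, total, missing = tally(sample)
print(n_stations, total, missing)  # 2 stations, 36 values, 4 missing
```

Point the same loop at the downloaded mean file (with the correct parser) and you get the station, value, and missing-value counts quoted above.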

Re #101: “The degree of sloppiness exposed puts in skeptical minds like mine…some doubts about the reliability of the scientists involved.” So, you are going after the scientists and not the science? I don’t think that’s a good move. Firstly, something of this scale is the result of much more than a few individuals research. Secondly, going after the people is a witch hunt and typically done by people with ultra-partisan ideologies that require them to eliminate the opposition. Keep going after the science, the method; that is what is good for really figuring this all out. Stay above the personal battles…there’s enough of that.

Brian, Brian, Brian. Witch hunts and ultra-partisan ideologies have nothing to do with what I said. Let’s stay on the topic of the thread. When scientists do sloppy work, or tolerate sloppy work, or seemingly get a free pass when doing sloppy work, I think they deserve all the criticism and critiques they receive. The scientific method is alive and well, but it is subject to abuse. I would appreciate your response to what has been revealed by Anthony Watts and Steve M regarding the quality control used by those responsible for obtaining realistic temperature measurements, statistically adjusting them for historical purposes, and informing the public and other scientists how these adjustments are made (in detail), how the uncertainties for measurements and corrections are determined, and what assumptions are made in consideration of the quality of the station data used for making adjustments. Those responses mean more to the discussion than talking about how and why people respond the way they do.

Re #122: Antero, I am very interested in creating a group to check the Finnish meteorological measurement stations. My guess is that they are better handled than the US ones… but on the other hand, the belief in the US was that the stations were OK before Anthony started his work. The only way to find out is to start collecting information. If you are interested, contact me by mail (use Google to search for my name; you’ll find the mail address on my web site). Other Finns are also welcome, because there is a large area to cover.
r.
Lars Silen (physicist)

#127
Sinan:
Finland looks like it has a similar profile for the 30s as North America does. Do the years line up? If so, then the ROW trend predicated on low temperatures in the 30s looks more and more suspect. What is holding down the temperatures in the ROW series?

Our Climate Change Minister has stated the following in answer to a question from me regarding the release of data by both Mann and Jones:

“My officials advise me that the work of Mann and Jones has been investigated (for example, Mann’s work has been studied by the US National Academy of Science) and their results have been confirmed by independent studies.” Comments?

Paul:
It looks like the reply you received comes directly from that British comedy series, “Yes, Minister”. Note the subtle separation of “investigated” by NAS and “confirmed by independent studies”. It makes you believe that NAS confirmed Mann’s methodology. Those civil servants really do have a way with words!!

I honestly have no idea what is driving the anomaly data. However, the case of Ankara, Turkey is interesting. Note that Ankara was basically a small village in the 20s and now it is city of 3+ million people. (plot of raw data).

Just heard Steve interviewed on Radio 4’s Today programme (6:50 AM UK time). This is a leading current affairs programme in the UK. Steve presented dispassionately but got his point across, and mentioned Tucson. Amazingly, Steve was listened to quietly with no hectoring, and no counter-opinion was presented from the RC camp, although that must come later.

I also heard Steve. I thought it was an excellent interview, well paced and balanced. The interviewer allowed Steve to state what he had found and the effects this has had on the US temperature record. I think the observation that the data is both adjusted and also indicates that 4 of the hottest 10 years are in the 1930’s will be quite an eye opener for the listeners.

For those who don’t know BBC’s Radio 4 ‘Today’ programme…I would go further than Paul and state that it is ‘the’ leading current affairs programme on either radio or television in the UK.

Interestingly, following Steve’s interview there was an article on a quantum tunnelling experiment, in which claims that photons travel at faster-than-light speeds were examined! Here the interviewer and interviewee were not quite so clear as Steve!

Nice of the BBC to allow “sliding” the timer directly to the interview. Even better (though not related) may be what follows: namely, the “tunnelling” past the constraints of special relativity to break the light-speed barrier!

Congratulations both on the interview, and on that bit of BBC programming good fortune.

First of all, it tells us that there are quite a few scientists who either work sloppily or are willing to fake data to make it look better to support their theories. In the second case this is a very stupid thing to do, since there will always come a point where someone takes a closer look at their data, just as has happened this time.

But what this does not tell us is that global warming is just a fake, even if quite a few people would like to make it look like this data is the one and only indicator of global warming.

There is not that much useful temperature data available; 100 years is nothing for climate change, it just looks like a long time from the standpoint of a single human. Also, air temperature is a quite unreliable measurement, as all the references to things like urban/rural, irrigation, etc. show. Local temperature can easily be influenced by a lot of factors. Unless there is a very dense net of sensors, each working with well-defined parameters, the temperature data is not very useful.

Actually, air temperature is quite insignificant as an indicator of the overall climate situation. Much more energy is stored in the other components of the Earth’s climate system, like water in its three states and the land mass. The most critical thing we have seen in the last couple of decades is the retreat of glaciers and the polar caps. That is data which does not need any adjustments for measurement error: you can simply look at satellite photos of the polar caps, or look at glaciers today and photos from 50 years (or even less) ago. Melting ice takes a lot of energy, and that is energy which has now been added to the climate system. The same is true for water vapor; it stores a lot of energy.

LOL. Lets wait and see what shows up when the non-USA temperature records are audited.

Yeah, and when all those faked glacier melt photos are exposed. And when it’s shown all those migratory species are in collusion with Hansen and begin migrating early, just to support him …. Yeah!! Just wait!!

E.g. Udine-Campoformido, North-East Italy, used by GISS, has totally wrong data since 2004, using data just from 8 AM to 5 PM for making averages, thus overestimating mean monthly temperatures (above all during summer months, when the overestimate is 2-3°C).
Other stations’ data seem more correct; but, e.g., Verona-Villafranca airport, used by NOAA, has in recent years often been warmer than the corresponding rural station, and is often among the warmest (both for values and for anomalies) in all of North Italy.
There is also a problem of average values: as in the USA, rural stations here nowadays show mean temperatures not above the 30s-40s mean values, with a temperature drop between about 1955-1960 and 1990-1995 (also, a strange case: plain stations show the chilliest decades in the 70s-80s, while mountain ones show them between the 50s and 70s, with the 80s almost as warm as the 90s-00s); and NOAA uses a 1968-2006 average, while many Italian and European graphs and maps based on NOAA archive data start just from 1948 (just at the beginning of the cooling).

the glacier photos arent faked, and nobody has implied that they are.
What they have pointed out is that melting, where it is occuring is either;

1) Not due to warming (ex. Kilamanjaro)
2) Or it is a continuation of trend that started long before the middle of the last century.

1. That's the problem with the myopic deniers. They believe there should be one monolithic global response to warming and are unable to grasp the concept that warmer temperatures equate to more water vapor, which equates to ice/snow losses on the leeward side of mountain ranges and more deposited on the windward sides (e.g. Kilimanjaro, Antarctica). The expectation that all regions of the globe will be affected the same way is logically infantile.

2. To accept this, one would have to accept anthropogenic aerosols as the explanation for mid-20th-century cooling, which would then run counter to the denialists' claim that man's activities can't affect the environment.

If the geese are heading south early, wouldn't that imply that winter is coming early? Not what you would expect in a warming world.

Do you have any SOLID arctic ice figures for the 1920s-30s? Seeing as the '30s were the warmest on record for Greenland (and now, the USA), I wonder what the ice looked like in August then? We didn't have satellites monitoring the situation 24/7 like today.

I call “observational bias”.

Your polar bears are doing quite well in the (lack of?) ice, in case you wondered. Thriving, actually.

Do you have any SOLID arctic ice figures for the 1920s-30s? Seeing as the '30s were the warmest on record for Greenland (and now, the USA), I wonder what the ice looked like in August then? We didn't have satellites monitoring the situation 24/7 like today.

Bull butter. You can't claim '34 as a record in the US, for the same reason Hansen in 2001 stated you can't call '98 a record: they are well within the margins of error.
And regional warming is not global warming.

Your polar bears are doing quite well in the (lack of?) ice, in case you wondered. Thriving, actually.

Wildlifer, your behaviour is "turning the omelette", as we say here.
Who says that global warming is hitting everything everywhere in almost the same way? Global warming supporters.
Who said that all glaciers around the world are melting and retreating due to global warming, when many are not (I count among them the interiors of Greenland and Antarctica, with about 70-80% of the world's land ice)? Again, global warming supporters.
Who claims every month that 2005 was the warmest year ever, despite it being well within the uncertainty range? Once more, global warming supporters.
Etc. I agree such thinking is more Al Gore's or WWF's than Hansen's own, but you cannot blame "deniers" (I do not consider myself one: I do not deny e.g. warmer temperatures or less Arctic ice, where there is no exaggeration) for having brought discussions to such a puerile level.

Then I do not understand your answer: what do aerosols (even if they were responsible for the 1945-1975 cooling phase) have to do with the retreating trend of many glaciers (e.g. those of the Alps), which began back in the mid-19th century, continued during the second quarter of the 20th century, and whose latest melting phase is actually the smallest of the three?

Look at the NAS and Wegman reports yourself, and send them some snippets. It sounds as though they got Mann's summary of the NAS report, and didn't actually read the NAS report itself, which, very politely and very deferentially, takes Mann's study apart. The NAS DID state (without details) that several other studies supported the general conclusions, but it did not do due diligence on them, and one in particular, IIRC, isn't even completed a year later, much less published and reviewed.

1. Thats the problem with the myopic deniers. They believe there should be one monolithic global response to warming and are unable to grasp the concept that warmer temperatures, equate to more water vapor, which equates to ice/snow losses on the leeward side of mountain ranges and more deposited on the windward sides (eg Kilamanjaro, Antartica). The expectations for all regions of the globe to be effected the same is logically infantile.

The problem with your response to MarkW is that it completely ignores what he posted. You're creating a strawman of hefty proportions. The post is more than a little insulting too. Mark posits that melting is either caused by other factors (and gives an example), or is the continuation of a long-term trend that precedes the current AGW finger-pointing. This wonderful forum has very few people who deny global climate change, and SteveM himself has stated he believes that global warming exists and is probably due in part to anthropogenic causes. The issue that is constantly debated here is the accuracy of the temp record. If the warming trend is smaller than originally publicized (and that's the way things are moving), then the case for solar cycles and other warming causes gains plausibility.

I find it more than a little bit irritating that people like you and many of the folks over at RC want to simply dismiss CA as a denialist site. To do so suggests you really haven’t read the articles or the discussions.

Roy Spencer thinks that a sign of global cooling will be a very rainy period as the energy redistributes.

Vapor condensing into rain releases thermal energy. Phase changes of water always involve far more energy than merely warming or cooling it. And energy does not get lost: if it cannot be radiated into space, it stays in the system.
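The claim that phase changes dwarf simple warming can be checked with approximate textbook constants (an illustrative sketch, not from the comment):

```python
# Energy released when water vapor condenses, compared with the energy
# needed to warm liquid water, using approximate textbook constants.
L_VAPORIZATION = 2.26e6  # latent heat of vaporization, J/kg (at 100 C; a bit higher at lower temps)
C_WATER = 4186           # specific heat of liquid water, J/(kg*C)

# Condensing 1 kg of vapor releases enough energy to warm roughly
# 540 kg of liquid water by 1 degree C.
ratio = L_VAPORIZATION / C_WATER
print(round(ratio))  # ~540
```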

The problem with this whole climate thing is that the system is so complex that building a usable model is beyond today's capabilities (of course you will find people telling you otherwise). It may be possible to simulate isolated effects, but the problem in the real world is that they are not isolated. Due to the complexity of the system you will always see trends that seem to reverse, and local climates that seem to prove the opposite of whatever you believe. It is just too damned complex to make a good guess about what is going to happen. All the more cautious we should be about adversely influencing the system; it is not a good idea to experiment with the only ecosphere we have access to.

Whatever mankind does, anything done at some scale does influence the climate. A change in climate is not the end of the world, but it has a good potential of turning this planet into a place less suitable for humans.

Who says that global warming is hitting everything everywhere in almost the same way? Global warming supporters.

Wrong. You can find a plethora of described “exceptions” in the lit. (it seems odd you would try to play ignorant of that fact when it’s the denialists who try to play up those “exceptions”).
What your amorphous "global warming supporters" actually state goes unsupported by you. The fact is that it is a global phenomenon, with evidence in every region of the sphere, that cannot be explained by any known mechanism other than increased CO2.

Who said that all glaciers around the world are melting and retreating due to global warming, when many are not (I count among them the interiors of Greenland and Antarctica, with about 70-80% of the world's land ice)? Again, global warming supporters.

Again, unsupported by you. The facts are that the majority (note: not all) of glaciers are melting. The fact is, for a plethora of reasons, we expect the southern hemisphere to lag behind the northern hemisphere. The facts are that we expect, due to warming and increased water vapor, the high-elevation interiors of Greenland and Antarctica to gain some mass while the edges melt away.
It is the deniers who seem ignorant of the physics of what happens when warm moist air is forced to rise. And it's also the deniers who must continually be reminded that global warming does not mean >0°C/32°F everywhere on the globe.

Who claims every month that 2005 was the warmest year ever, despite it being well within the uncertainty range? Once more, global warming supporters.
Etc. I agree such thinking is more Al Gore's or WWF's than Hansen's own, but you cannot blame "deniers" (I do not consider myself one: I do not deny e.g. warmer temperatures or less Arctic ice, where there is no exaggeration) for having brought discussions to such a puerile level.

LOL … right, as if it isn't the denial industry – from evolution, HIV/AIDS, and tobacco to global warming – that's trying to reduce science to "mere" opinion, where all other opinions are equally valid.

Then I do not understand your answer: what do aerosols (even if they were responsible for the 1945-1975 cooling phase) have to do with the retreating trend of many glaciers (e.g. those of the Alps), which began back in the mid-19th century, continued during the second quarter of the 20th century, and whose latest melting phase is actually the smallest of the three?

Your last statement contradicts everything I've read on recorded rates of ice melt – mainly that the current rate of melt in affected areas is exponentially faster than anything ever recorded.

The facts are that the majority (note: not all) of glaciers are melting.

That's not true at all. Only a very small percentage are even monitored in the first place, so the term "majority" fails immediately. Only a small percentage of those monitored are known to be "melting." Many are simply receding, which is different from melting; some are advancing, some are stable.

The problem with your response to MarkW is that it completely ignores what he posted. You're creating a strawman of hefty proportions. The post is more than a little insulting too. Mark posits that melting is either caused by other factors (and gives an example), or is the continuation of a long-term trend that precedes the current AGW finger-pointing.

A “long-term trend” that speeds up is oxymoronic.

This wonderful forum has very few people who deny global climate change, and SteveM himself has stated he believes that global warming exists and is probably due in part to anthropogenic causes. The issue that is constantly debated here is the accuracy of the temp record. If the warming trend is smaller than originally publicized (and that's the way things are moving), then the case for solar cycles and other warming causes gains plausibility.

The solar claims haven't a leg to stand on. So it really doesn't matter if the record is off by a few hundredths of a degree, because your argument probably won't change. If CO2 can't cause an increase of, let's say, 2C, then it can't cause an even smaller increase of 1C? That entire line of thinking is illogical. Why would a smaller increase in temps (again, hundredths of a degree) eliminate CO2 as a causal factor (aside from the requisite denial of CO2's role as a greenhouse gas and of the entire field of radiative physics)?

I find it more than a little bit irritating that people like you and many of the folks over at RC want to simply dismiss CA as a denialist site. To do so suggests you really haven't read the articles or the discussions.

Well any willing association with the author of the thoroughly fisked allegations against Mann, and any willing association with Anthony “science through photography” Watts, defines “denialist site.” (I mean, come on, Watts should just take a picture of some snow and declare warming falsified.)

Your last statement is in contradiction with everything Ive read on recorded rates of ice melt – mainly that the current rate of melt in areas affected is exponentially faster then what has ever been recorded.

If today, I set up a rain gauge on my property and begin keeping a record of rainfall, I will eventually experience a thunderstorm, after which I’ll be able to claim that rainfall on my property is falling at a rate exponentially faster than what has ever been recorded.

Now, I’m reasonably sure that you are able to recognize why that claim may be both true and meaningless. Can you not see how that applies to something like the recorded rates of glacier retreat?

Steve has never said "no global warming", nor "CO2 has no effect", nor any other prognosis along those lines. The significance of what Steve and A have done is to test the data and analysis techniques for errors, and in so doing they have exposed some rather sloppy arithmetic. The real scientists will take this help and use it to improve their own analyses and, therefore, eventually their computer models. I greatly admire what Steve is doing and the professionalism he brings to his forum. He has studiously avoided exaggeration. It is truly brilliant work. Now, if REALCLIMATE could bring the same standard of reporting/comment to bear, climate understanding would move forward faster. Note, it is only AGW people who are publishing what appear to be non-factual, opinionated statements. Steve, NEVER.

Thats not true at all. Only a very small percentage are even monitored in the first place, so the term majority loses immediately. Only a small percentage of those monitored are known to be melting. Many are simply receding, which is different than melting, some are advancing, some are stable.

Do you always feel the need to be insulting when talking with people who disagree with you? If so, that indicates some insecurity on your part.

1. Thats the problem with the myopic deniers. They believe there should be one monolithic global response to warming and are unable to grasp the concept that warmer temperatures, equate to more water vapor, which equates to ice/snow losses on the leeward side of mountain ranges and more deposited on the windward sides (eg Kilamanjaro, Antartica). The expectations for all regions of the globe to be effected the same is logically infantile.

2. To accept this, one would have to accept anthropogenic aerosols as the explanation for mid-20th-century cooling, which would then run counter to the denialists' claim that man's activities can't affect the environment.

1) Sheesh, you manage to read in a lot that isn’t there. Perhaps you are overcompensating for something.
I said nothing that can be interpreted as saying that there is one answer for every problem. I also said nothing about whether warmer temperatures would affect the amount of water in the air.

As to my example, Kilimanjaro is losing its glacier because deforestation is decreasing the precipitation. The mountain itself has gotten cooler over the last few decades.

2) Now you are really being asinine.
2a) I have never said that man can do nothing to affect the atmosphere, and neither have most of us skeptics. If you want to be taken seriously, this is the kind of
2b) Did I say anything that would lead you to believe that I was talking about only the last few decades? The trends I was referring to have been going on for over 400 years.
2c) Even if you accept the possibility that aerosols can cool the earth, it's far from proven that the cool-down experienced in the '70s was caused by aerosols. (Whether the net effect is positive or negative is still subject to debate – a recent study found that the aerosol cloud over China is causing net warming.)

So you believe it doesn’t matter that the temperature sensor network hasn’t been maintained? You already know what the right answer is (because the models told you?) so accurate data is only something that “denialists” care about?

Shouldn’t that bother everybody? Where’s the data? By how many degrees (or perhaps 1000ths of degrees) are these sites polluted? And is any of that offset by rural sites where there’s irrigation and/or man-made lakes? Where are those photos? They don’t exist and won’t exist because truth isn’t the purpose of surfacestations.org.
It’s useless.

By how many degrees (or perhaps 1000ths of degrees) are these sites polluted?

Yet another misstatement of the scientific method. Anthony is providing an _audit_ of the sites, not calculating what difference it makes. The onus is on those who use the data to prove _their claim_ that the sites represent "high quality" data.

The fact remains that the sites do not measure up (many don't even come close) to the standards.
It's not up to us to prove that the noted contamination has an effect on the data. You are the one who wants to use the data; you need to prove that the data is good.

Do you honestly believe that a station in the middle of a parking lot is only affected by a few hundredths of a degree?

If you believe that Anthony's hordes are deliberately ignoring good stations, or stations with cool biases, I suggest you get yourself a camera and take a few pictures. Until then, you sound like a whiny child who isn't getting his way.

BTW, it’s been shown that irrigation causes temperatures to increase. All that water vapor in the air, dontcha know.

You may wish to broaden your reading to lists that don’t have the words ‘Selected Literature’ in the beginning of title.

You may wish to expand on what in the above statements is contradictory. All glaciers melt; if they didn't, the world would be covered in ice. If the rate of melt is less than the rate of ice formation, the glacier gains mass and typically advances. If the rate of melt exceeds the rate of ice formation, the glacier loses mass and typically recedes. Note the use of the word "typically" in both instances. The dynamics of glacier retreat and advance are not limited to mass balance alone. Confounding factors such as the presence of water under part of the glacier and the amount of seismic activity can affect the rate of advance or retreat irrespective of any changes in the mass balance.

The mass of the glacier doesn’t tell us whether changes are due to increased melting or decreased ice accumulation. To know that you need to monitor precipitation and melt rate. Does your literature document how many glaciers are monitored for mass, loss to melting, and gain from precipitation? I’m curious where the alleged contradiction lies.
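The mass-balance rule described here (accumulation versus melt) can be sketched as a toy function; the numbers and labels are illustrative only, and real glacier dynamics add the confounders noted above:

```python
def classify_glacier(accumulation_mm, melt_mm):
    """Toy mass-balance classification: a glacier with positive net
    balance typically advances, one with negative balance typically
    recedes. 'Typically' matters: subglacial water, seismicity and
    geometry can override the mass-balance signal."""
    net = accumulation_mm - melt_mm
    if net > 0:
        return "gaining (typically advancing)"
    if net < 0:
        return "losing (typically receding)"
    return "in balance"

print(classify_glacier(1200, 900))  # gaining (typically advancing)
print(classify_glacier(600, 950))   # losing (typically receding)
```

Note that the classification needs both inputs separately: a shrinking glacier alone does not tell you whether melt rose or accumulation fell, which is the point made above.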

Shouldnt that bother everybody? Wheres the data? By how many degrees (or perhaps 1000ths of degrees) are these sites polluted? And is any of that offset by rural sites where theres irrigation and/or man-made lakes?

Good questions. The fact that we don’t have the answers makes the AGW claims based on those stations suspect.

Wildlifer, I hoped you were a reasonable person, but I see you are only a dogmatic believer in everything Al Gore may say… You just give unsupported data, and I am tired of repeating that Alpine glaciers lost the largest part of their mass between 155 and 125 years ago, then a second part between 1920 and 1950, and that the latest loss, over the past 25 years, is the smallest (also because there is little left) – a point accepted by every climatologist and meteorologist here (even strong AGW supporters). I am tired of pointing out to people like you that the growing ice masses of Antarctica and Greenland are not just a few glaciers or some ice, but millions of square kilometers and at least 70-80% of the world's glacial mass; that Antarctica is not just lagging, but behaving in a completely different way than predicted – for its glaciers (inner and many coastal ones), for its ice pack, and for its temperatures; that you do not know the basics of mathematics, speaking about meaningless things like "exponential growth/decrease" (where? and do you know how an exponential decrease works?
Because maybe you were right about the exponential retreat of Alpine glaciers: they are losing less and less mass, tending to stabilise at a fixed value); that you are really annoying when saying that "deniers" play a trick by giving every opinion the same value, when you did just the same in the phrase above (who said that 1998 was not a record – because of the uncertainty range – as the supposed "deniers" claimed?); you are the one who says "nothing else can explain GW, therefore my theory must" when the GHG theory is full of holes (AGW supporters began 2007 by saying all temperature increase since 1880 was man-made, then had to admit that until 1940 the Sun prevailed, then had to admit that solar influence correlated very well until the mid-'80s, then tried to deny any correlation post-1985 – as if GHGs suddenly began to act – with a puerile and false statement, then had to pass from "exponential growth" of temperatures to "no real growth" since 1998 while waiting for a 2010-2014 peak: what next?); etc.
I am so tired of repeating these things for weeks, months and sometimes even years that you will excuse me if I consider the discussion closed on my side, with all the data given in previous posts and articles, here and on other sites.

So you believe it doesnt matter that the temperature sensor network hasnt been maintained? You already know what the right answer is (because the models told you?) so accurate data is only something that denialists care about?

No, I believe you can't determine a number from a picture.
Intuitively, one might believe the sites have egregiously influenced the numbers. But then too, intuitively, one might believe, as they drive down I-40, that the earth is flat.

BTW, its been shown that irrigation causes temperatures to increase. All that water vapor in the air, dontcha know.

I think he's referring to the recently published article suggesting that irrigation causes cooling, because irrigated areas within the study area show a reduced Tmax … which gets us into the proscribed thermodynamics argument if we go beyond pointing out that we already know increased humidity decreases Tmax and increases Tmin.

What you don't get is that these stations violate NOAA's own guidelines, yet they are considered the "quality" sites. Dr Pielke had already pointed out the problems and was dismissed because his survey covered only Colorado. As Anthony has shown, that's not the case. Try reading NOAA's guidelines for site placement.

No, I believe you cant determine a number from a picture.
Intuitively, one might believe the sites have agregiously influenced the numbers. But then too, intuitively, one might believe as they drive down I-40, that the earth is flat.

Watts is not trying to “determine a number”. He is assessing compliance with standards.

Wildlifer, I hoped you were a reasonable person, but I see you are only a dogmatic believer in everything Al Gore may say… You just give unsupported data, and I am tired of repeating that Alpine glaciers lost the largest part of their mass between 155 and 125 years ago, then a second part between 1920 and 1950, and that the latest loss, over the past 25 years, is the smallest (also because there is little left) – a point accepted by every climatologist and meteorologist here (even strong AGW supporters). I am tired of pointing out to people like you that the growing ice masses of Antarctica and Greenland are not just a few glaciers or some ice, but millions of square kilometers and at least 70-80% of the world's glacial mass; that Antarctica is not just lagging, but behaving in a completely different way than predicted – for its glaciers (inner and many coastal ones), for its ice pack, and for its temperatures; that you do not know the basics of mathematics, speaking about meaningless things like "exponential growth/decrease" (where? and do you know how an exponential decrease works?
Because maybe you were right about the exponential retreat of Alpine glaciers: they are losing less and less mass, tending to stabilise at a fixed value); that you are really annoying when saying that "deniers" play a trick by giving every opinion the same value, when you did just the same in the phrase above (who said that 1998 was not a record – because of the uncertainty range – as the supposed "deniers" claimed?); you are the one who says "nothing else can explain GW, therefore my theory must" when the GHG theory is full of holes (AGW supporters began 2007 by saying all temperature increase since 1880 was man-made, then had to admit that until 1940 the Sun prevailed, then had to admit that solar influence correlated very well until the mid-'80s, then tried to deny any correlation post-1985 – as if GHGs suddenly began to act – with a puerile and false statement, then had to pass from "exponential growth" of temperatures to "no real growth" since 1998 while waiting for a 2010-2014 peak: what next?); etc.
I am so tired of repeating these things for weeks, months and sometimes even years that you will excuse me if I consider the discussion closed on my side, with all the data given in previous posts and articles, here and on other sites.

Alright by me, as your English is as hard to follow as your failed logic (that the Alps are the only glaciers of relevance, that expected interior gains in Greenland and Antarctica negate the predicted losses, etc.). As of at least 2001 it was written that '98 could not be considered a record for the US, so this little "correction" means nothing more than that the two are tied in a statistical dead heat. Add the fact that your 1880 claim is false – e.g. the beginning of the "industrial age" was not the immediate beginning of A-influenced GW. GHGs were also masked mid-20th century by industrial aerosols, so they didn't "suddenly begin."

BTW, its been shown that irrigation causes temperatures to increase. All that water vapor in the air, dontcha know.

By the way, it’s been shown that large windfarms cause temperatures to increase, apparently due to the turbulence collecting additional water vapor. Modelling suggests that a very large windfarm can cause up to around 2C warming over a several square mile area.

The simulation suggests that during the day, while sun-induced convection handily mixes the lower layers of the atmosphere, such a wind farm wouldn’t have important climatic effects. In predawn hours, however, when the atmosphere typically is less turbulent, a large windmill array could influence the local climate. For example, at 3 a.m., the average wind speed at ground level was 3.5 meters per second (m/s) in the absence of windmills. Adding the wind farm would increase the average wind speed to 5 m/s. Also, the 10,000 windmills would increase the temperature across the area. Turbulence caused by the rotating blades would shunt some of the high-speed winds typically found 100 m off the ground down to Earth’s surface, says Roy. Those surface winds would boost evaporation of soil moisture by as much as 0.3 millimeter per day.

I can no longer access the paper, but it cites real-world measurements at existing windfarms, albeit much smaller ones.

And by extension cast doubt on the numbers, without supporting the assertion that the failed compliance unduly biased the numbers.

Well, first, it can easily be observed, using only a crude measurement device – human skin – that if one stands 10′ from a parking lot at mid-day and closes one's eyes, one can clearly tell the direction of the parking lot from the difference in heat, especially if one is shaded from direct sunlight. This suggests the difference is several degrees C.

The USHCN Handbook itself suggests that being adjacent to a parking lot can cause error/difference of >= 5C.

From the USHCRN manual:
The USCRN will use the classification scheme below to document the “meteorological measurements representativity” at each site.

This scheme, described by Michel Leroy (1998), is being used by Meteo-France to classify their network of approximately 550 stations. The classification ranges from 1 to 5 for each measured parameter. The errors for the different classes are estimated values.
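The "classification ranges from 1 to 5" scheme can be sketched as a lookup table. The error values below are those commonly cited for the Leroy/USCRN classes; they are an assumption on my part, and the handbook itself should be treated as authoritative:

```python
# Estimated additional temperature error by site class, as commonly cited
# for the Leroy (1998) scheme used in the USCRN site handbook.
# ASSUMPTION: these values are quoted from memory, not from the handbook text.
SITE_CLASS_ERROR_C = {
    1: 0.0,  # well-sited: no estimated additional error
    2: 0.0,
    3: 1.0,  # error estimated at 1 deg C
    4: 2.0,  # error estimated at >= 2 deg C
    5: 5.0,  # error estimated at >= 5 deg C (e.g. sensor next to pavement)
}

def estimated_siting_error(site_class):
    """Look up the estimated siting error (deg C) for a class 1-5 site."""
    return SITE_CLASS_ERROR_C[site_class]

print(estimated_siting_error(5))  # 5.0
```

On these figures, a class 5 site (the "parking lot" case discussed above) carries an estimated error two orders of magnitude larger than the hundredths of a degree being debated.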

Well, first, it can easily be observed, using only a crude measurement device – human skin – that if one stands 10′ from a parking lot at mid-day and closes one's eyes, one can clearly tell the direction of the parking lot from the difference in heat, especially if one is shaded from direct sunlight. This suggests the difference is several degrees C.

How about Watts put up instrumentation far enough away from all his pet sites to be in “compliance” and publish the data?

The USHCN Handbook itself suggests that being adjacent to a parking lot can cause error/difference of >= 5C.

From the USHCRN manual:
The USCRN will use the classification scheme below to document the "meteorological measurements representativity" at each site.

This scheme, described by Michel Leroy (1998), is being used by Meteo-France to classify their network of approximately 550 stations. The classification ranges from 1 to 5 for each measured parameter. The errors for the different classes are estimated values.

And by extension cast doubt on the numbers, without supporting the assertion that the failed compliance unduly biased the numbers.

Any failure to comply with _your own published standards_ necessarily should cast doubt on the numbers. Again, the onus is on the station keepers and users of the data to prove there should be no doubt.

Any failure to comply with _your own published standards_ necessarily should cast doubt on the numbers. Again, the onus is on the station keepers and users of the data to prove there should be no doubt.

The pictures show us that the sites are contaminated.
As you state, we can't tell the exact degree of contamination from just a picture.
I agree that before we can tell what the real temperature at these stations actually is, you have to go out and make physical measurements in order to put a number on the contamination so that an accurate number can be deduced.

Since that has not been done, you must agree that anyone who uses the numbers from these contaminated stations is using numbers that are useless.

Which puts into doubt the claim that we know, with any degree of precision, how much the earth has actually warmed.

I guess we are back to the standard argument. As far as wildlifer is concerned, until somebody can prove to him, beyond a shadow of a doubt, that a parking lot would cause anomalous readings in a temperature sensor, he is just going to assume that they don't.

And there is no amount of evidence that will convince him. He’s made it quite clear that his mind has already closed.

I guess we are back to the standard argument. As far as wildlifer is concerned, until somebody can prove to him, beyond a shadow of a doubt, that a parking lot would cause anomalous readings in a temperature sensor, he is just going to assume that they don't.

And there is no amount of evidence that will convince him. He's made it quite clear that his mind has already closed.

That's not what I've said. I said I want to see the data that shows these sites biased the US/Global results by something more than 100ths-1000ths of a degree. My assumption is that if they were biasing the results, they would also be masking trends, i.e. there would be no global trends of increased anomalies.

That's not what I've said. I said I want to see the data that shows these sites biased the US/Global results by something more than 100ths-1000ths of a degree.

I assume you simply pulled these numbers out of a hat? Immaterial what impact the observations have anyway, since the onus is on the data users and station keepers to prove what significance they do not have.

My assumption is that if they were biasing the results, they would also be masking trends, i.e. there would be no global trends of increased anomalies.

Again, what’s your assumption based on? These are only US stations, btw, not global, so the context alone makes your argument a strawman.

As I wrote in more detail in the townhall NASA error thread http://www.climateaudit.org/?p=1929#comment-127775 , Gavin says the methods have already been validated from results, and that anyone wanting to check should build their own methods, run them, and compare, using the easy-to-follow, easy-to-implement simple steps detailed in two papers. And that their own code was "badly written fortran" run "by one person" – a mixture of scripts, programs, utilities, dead ends, previous methodologies and unused options which they didn't have time to put together, nor see the need to.

Or in other words, for the adjustments, the response is basically ‘They match, we’ve proven it. You want more, RTFR and do it yourself’.

As far as glaciers, care to tell me how many glaciers in Antarctica are tracked by the World Glacier Monitoring Service? And how much mass they gained or lost between 2003 and 2005? I doubt it. It’s 1, and it lost about 150 mm of mass.

WGMS uses 30 out of 160,000 (‘not even 1%’ is putting it mildly), and the number with positive net balances went from 13% in 2001/2002 to 19% in 2004/2005. The ones they track, 90+ in their set (again, ‘less than 1%’ is being charitable): yes, most have lost mass in the last reporting period http://www.wgms.ch/mbb/mbb9/sum05.html , but some gained it. Also, look at Norway.
Most do have a downward trend, but not all. And their data only goes back to 1945. At best. And there’s lots of other factors. Go RTFR. http://www.wgms.ch/mbb/mbb8/MBB8.pdf

If you want movement, only like 4% are receding, and 2% advancing. They have 58,000 listed at the NSIDC. The bulk of theirs and the other 100,000 are unknown. Most of the ones they know about are stationary, like over 6%. Not sure if those percents are out of their set or the 160,000. Doesn’t really matter. Go RTFR. http://nsidc.org/data/docs/noaa/g01130_glacier_inventory/

So from now on, any claim to be made needs some basic data, the statement “RTFR” and a link.

I assume you simply pulled these numbers out of a hat? Immaterial what impact the observations have anyway, since the onus is on the data users and station keepers to prove what significance they do not have.

No it isn’t. You disagree with validity of the data – the onus is on you.

Again, whats your assumption based on? These are only US stations, btw, not global, so the context alone makes your argument a strawman.

A consistent source – like AC/asphalt etc – will be, well, consistent and won’t have trends. Hmmmm, and I could have sworn the US was part of the world and its data was added to the world’s data. My bad.

Any failure to comply with _your own published standards_ necessarily should cast doubt on the numbers. Again, the onus is on the station keepers and users of the data to prove there should be no doubt.

OK, it’s time to play Devil’s Advocate here. A significant error has been found and corrected, but only for the contiguous USA. Given that the agreement between normalized Hadley, GISS, RSS and UAH temperature anomalies is quite good over the time of overlapping measurements, do we really expect to find additional significant errors? In spite of my reaming by Sinan, I still think someone with the requisite skills needs to start analyzing the data from the sites audited so far for differences in trends between well sited and poorly sited urban and rural stations. If for no other reason, the evaluation methods ought to be posted and critiqued before the audit is complete (or complete enough), or the whole process is going to take forever. Surely no one is going to argue that any analysis is invalid unless it includes data from every single one of the sites. At some point short of completion the audit will be complete enough that adding the rest of the sites would be unlikely to change the conclusion. Peterson was able to publish a result that included data from only 289 sites, and there doesn’t seem to be any evidence that his sites were randomly selected.

As I wrote in more detail in the townhall NASA error thread http://www.climateaudit.org/?p=1929#comment-127775 Gavin says the methods have already been validated from results, and that anyone wanting to check should make their own methods, run them, and compare, using the easy-to-follow, easy-to-implement simple steps detailed in two papers. That their own code was badly written Fortran run by one person and a mixture of scripts, programs, utilities, dead ends, previous methodologies and unused options which they didn’t have time to put together, nor see the need to do.

Or in other words, for the adjustments, the response is basically ‘They match, we’ve proven it. You want more, RTFR and do it yourself’.

Just today Gavin wrote:

We publish hundreds of papers a year from GISS alone. We have more data, code and model output online than any comparable institution, we have a number of public scientists who comment on the science and the problems to most people and institutions who care to ask. And yet, the demand is always for more transparency. This is not a demand that will ever be satisfied since there will always be more done by the scientists than ever makes it into papers or products. My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. – gavin

and

It is better for someone to come up with the same answer by doing it independently – that validates both approaches. – gavin

Sounds more than reasonable to me.

As far as glaciers, care to tell me how many glaciers in Antarctica are tracked by the World Glacier Monitoring Service? And how much mass they gained or lost between 2003 and 2005? I doubt it. It’s 1, and it lost about 150 mm of mass.

WGMS uses 30 out of 160,000 (‘not even 1%’ is putting it mildly), and the number with positive net balances went from 13% in 2001/2002 to 19% in 2004/2005. The ones they track, 90+ in their set (again, ‘less than 1%’ is being charitable): yes, most have lost mass in the last reporting period http://www.wgms.ch/mbb/mbb9/sum05.html , but some gained it. Also, look at Norway.
Most do have a downward trend, but not all. And their data only goes back to 1945. At best. And there’s lots of other factors. Go RTFR. http://www.wgms.ch/mbb/mbb8/MBB8.pdf

If you want movement, only like 4% are receding, and 2% advancing. They have 58,000 listed at the NSIDC. The bulk of theirs and the other 100,000 are unknown. Most of the ones they know about are stationary, like over 6%. Not sure if those percents are out of their set or the 160,000. Doesn’t really matter. Go RTFR. http://nsidc.org/data/docs/noaa/g01130_glacier_inventory/

So from now on, any claim to be made needs some basic data, the statement ‘RTFR’ and a link.

I never said all. The majority are receding based on the representative samples. The few that aren’t have coastal/other influences – Norway.

sorry lifer, but the onus on proving the data good, lies with the person who wants to use the data.
This goes double if you are going to use this data as the basis by which the entire world’s economy is going to be turned upside down.

No it isn’t. You disagree with validity of the data – the onus is on you.

No, I did not disagree with the validity of the data one way or another. They made the assumption that their network was “high quality.” That is an assumption they need to prove given the evidence the network is not high quality, based on their own standards.

[snip – please do not be abusive towards other posters. If you think that I’ve not been evenhanded in this, identify offending posts to me and I’ll deal with them rather than put up with this sort of exchange]

If the truth is that the network is not of the quality the scientists using it claim, isn’t that a truth that should be revealed?

For one thing, science admits it has to take into account multiple problems in the data, so they’re not claiming it’s perfect. And pictures will never reveal anything, other than some sites may be less than optimal. You’ll never, and can’t know anything more than that from surfacestations.org.

I never understand the disconnect from objective reality. That one has problems with a data set does not mean that one has problems with all data sets. Also, a problem with a data set does not necessarily mean that the conclusion(s) is wrong. It means that the data has problems. It may mean that the conclusion can not be proven (accepted) as is. However, the conclusion may be correct and even better explained with correct data. Though some seem to forget, the scientist is supposed to be offering proof. Their position is not supposed to be “prove me wrong”. Ignoring a data quality issue is not acceptable. Having a data quality problem that is explained, measured, and quantified is acceptable; whether talking about CO2 or ice extent coverage.

For one thing, science admits it has to take into account multiple problems in the data, so they’re not claiming it’s perfect. And pictures will never reveal anything, other than some sites may be less than optimal. You’ll never, and can’t, know anything more than that from surfacestations.org.

They, in this case, did not admit it until forced, so therefore they could not have taken “into account multiple problems in the data”. They did not do this. Or is this “climate science” not real science, the kind you describe as admitting its problems?

As someone who has used pictures professionally, pictures reveal to experts many sorts of judicially acceptable evidence. I know that all too many criminals wish that pictures “never reveal anything”. What professionals such as I find is that pictures provide evidence, or often even more importantly, provide the means where evidence can be found that reveals not just more, but much more than those persons, who are hiding things, wish us to see or prove.

“sorry lifer, but the onus on proving the data good, lies with the person who wants to use the data. This goes double if you are going to use this data as the basis by which the entire world’s economy is going to be turned upside down.”

Finally the truth of motivation!! “We have to protect the world from socialist take-over,” no matter how much we have to lie and misrepresent the facts.

Bovid excrement. Kyoto may be the dominant “solution” now, but that doesn’t mean it’s the only solution. And it’s also one I do not support.

“Lie and misrepresent” is indeed an excellent description of what you are doing. The only one to mention “socialist” on this thread is you. Search the thread. It’s you. That’s a straw man argument. It is an imaginary statement that ONLY YOU HAVE MADE, a lie created by you to try to discredit those who disagree with your revealed wisdom.

Around here, several things are valued:

1) The scientific approach, based on transparency and replicability.

2) Honesty about what the other person has actually said.

3) Citations or clear calculations or computer code to back up any claims made.

4) Answering people’s questions about your claims, again with detailed answers and further citations and explanations as needed.

5) Cordiality

I have read all of your posts. They are lacking in all of those qualities. They are argumentative, contain straw man arguments, refuse to answer questions, and provide citations that are so vague as to be useless. You have consistently misrepresented what people have said. You have refused to discuss the issues.

For example, someone questioned your claim that the majority of glaciers are shrinking, saying that most of them are not measured. In response, you provided a citation to the home page of a web site containing pages and pages of data, another to a glacier bibliography, and a third to a study saying some glaciers are shrinking. That is useless, and in the process, you avoided answering the question.

If you had been honest, you would have looked up the number of glaciers worldwide (from memory about 63,000), and the number for which we have mass balances (again from memory, a couple hundred or so). If you had been honest, you would have come back and said “The truth is, we don’t know what the majority of glaciers are doing, we’ve studied less than 1% of them”.

You don’t seem to understand that the majority of posters on this site, both scientists and non-scientists, are very bright folks, with little tolerance for what I might call argumentation by repeated claim followed by evasion. You will never get any traction here with that kind of nonsense.

My suggestion to you is, if you want to make a difference, audit the site just as we are auditing the science. If you find incorrect claims, point them out with citations, calculations, or computer code to support your statement that they are incorrect. Answer questions about your methods and your data sources when asked. Avoid claims about people’s motives, you don’t have a clue why someone you’ve never met is posting something. Admit mistakes when you are wrong.

Because you see, folks on this site are quite willing to admit mistakes. Steve M. has pointed out errors in my work, I’ve found errors in his, it’s no big deal, it’s all part of moving the science forwards. That’s what auditing is about.

But claims such as your fantasy that we are trying to quash an imaginary “socialist take-over” merely reveal the shallow and deceptive nature of your analysis to date. Me, I think you could probably do much better work than that.

I invite you to honestly examine any and all of the work on this site, that’s science … but please do it in a scientific manner.

For one thing, science admits it has to take into account multiple problems in the data, so they’re not claiming it’s perfect. And pictures will never reveal anything, other than some sites may be less than optimal. You’ll never, and can’t, know anything more than that from surfacestations.org.

You are simultaneously claiming that: 1) Based on pictures alone, no inferences can be made as to the magnitude of the error that might be introduced by these “less than optimal” installations, and 2) That any effect is sufficiently small and isolated to be safely ignored. Those two claims are mutually exclusive. If no inferences can be made from these photos, then that includes the inference that any effect is small enough to be ignored.

The most you can argue is that without quantifying the effects, we cannot assess its impact on the apparent temperature increase seen over the last century. It might be a trivial effect or it might be significant — but you cannot conclude that it may safely be dismissed while at the same time arguing that no conclusions can be drawn from it. You can’t, unless you are prepared to ditch logic.

I agree that “independent replication” has a use. But that doesn’t show anything about whether the entire method being used now errs one way and then back the other, or anything else. Certainly checking the code has a use also. In case you hadn’t noticed, even the independent replication argument (excuse) had to be developed over a couple hundred posts. If it had any real bearing upon this issue at all, that would have been said in the first rebuttal/comment.

It seems to me when Steve refutes Gavin’s points and Gavin keeps changing the subject, a place is reached where Gavin just leaves or doesn’t answer. That’s just my impression and opinion, I don’t know and I don’t have any opinion on who is “right” and who is “wrong”, just what people do.

You may think it’s reasonable what the developed response is, but others don’t, and they certainly have reasons for thinking it’s not reasonable, which they’ve detailed themselves. I don’t think Gavin stonewalling on the issue (of checking the code) by using another issue (replicating the methods) is helping. Checking code is not replication and replication is not checking code.

If he or you or me or anyone is of the opinion it’s the same issue, that’s immaterial. It’s like arguing about what’s better, chicken wings or chicken strips or chicken nuggets or chicken soup. Or arguing if they’re the same thing or not. Since after all, they’re all two words that start with the word “chicken”, right?

There’s no use arguing with me over the fundamental disagreement between the two viewpoints that they are different things. If you think they’re the same thing, great. I see reasons for both. If you don’t see reasons for both, wonderful.

Oh, and I can give you the explanation of photos. You see, if you have a bunch of photos of something, you can look at the something without having to go back. Others can see without having to go look. You know what something looked like at a certain point in time.

I never said “you” said “all”; that was more a generic question backed up with data. Feeling guilty? So, were you aware beforehand they only track 1 in Antarctica and it only lost 150 mm over more than a year? Start using some numbers if you’re asking for some and maybe you’ll get them on other subjects.

And as far as influences, if they’re not losing mass, then they’re not losing mass. The ones in Norway that gained mass did so because of certain influences; the ones in Russia did so because of certain influences. Just like the ones in Sweden or the US that lost mass did so because of certain influences.

So you’re arguing that things make stuff happen? I agree.

If you think 0.0001875 (30) of the world’s glaciers are a representative sample of mass balance, fine. In that case, the number that had a positive balance doubled over 4 years. If you don’t think it’s a representative sample, then what they did is meaningless. Neither supports that “the glaciers are melting”. Rather both support “We have basically no idea what almost any of the glaciers are doing, but some measurements of up to 60 years that are mostly satellites and models of a tiny tiny fraction of glaciers are losing mass.” If that’s “the glaciers are melting” to you, great. But even if “the glaciers are melting” then you still have to show what’s causing “the melting” and then you have to show what that means.

If you think that movement means something, what 87% are doing is unknown or not categorized at all. Of the rest, a little more than half are stationary, a little less than half are receding, and about a tenth are advancing. What you take out of that is up to you, but “The glaciers are melting” isn’t backed up by the numbers. “Almost 50% of glaciers with known movement are receding” is. As is “More than 50% of glaciers with known movement are stationary” or “About 10% of glaciers with known movement are advancing”. Then you have to correlate receding=melting. But even if you do, then you have to correlate stationary=melting. Then you have to extrapolate that the 13% is indicative of the other 87%. Then you still have to show what’s causing the melting and then you have to show what that means.

If you could possibly do almost all of that, but couldn’t correlate stationary=melting, then all you can say is “Half the glaciers are melting, and half the glaciers are doing nothing.” In that case, even if receding=melting and CO2=warming=melting, you are still left with the inconvenient problem of having to explain how more than half of them (stationary or stationary+advancing) aren’t melting. Good luck on that, by the way.

There’s no use arguing with me over the data, take it up with WGMS or NSIDC.
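The sample-size arithmetic in the comment above is easy to check. Here is a minimal sketch using the commenter's own figures (30 mass-balance glaciers out of a claimed ~160,000 worldwide; treat these counts as the comment's assumptions, not verified inventory totals):

```python
# Figures as quoted in the comment above -- the commenter's
# assumptions, not verified inventory totals.
total_glaciers = 160_000        # claimed worldwide glacier count
mass_balance_sample = 30        # glaciers with mass-balance records

fraction = mass_balance_sample / total_glaciers
print(fraction)                 # 0.0001875, the figure quoted above

# Positive-net-balance share of the sample, as quoted:
# 13% in 2001/2002 vs 19% in 2004/2005.
print(round(0.13 * mass_balance_sample))   # 4 glaciers
print(round(0.19 * mass_balance_sample))   # 6 glaciers
```

With a sample this small, the quoted jump from 13% to 19% amounts to roughly two glaciers, which underlines the representativeness point being argued.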

You are simultaneously claiming that: 1) Based on pictures alone, no inferences can be made as to the magnitude of the error that might be introduced by these less than optimal installations, and 2) That any effect is sufficiently small and isolated to be safely ignored. Those two claims are mutually exclusive. If no inferences can be made from these photos, then that includes the inference that any effect is small enough to be ignored.

or
3) The direction of any possible error – increase a trend, mask a trend or reduce a trend.

I just got through spending a little time standing outside by my AC unit (Trane XR13). There was no detectable heat coming from the sides of it and when I held my hands over the fan on top, it actually felt cooler than the ambient temperature.

So, not trusting my digits as accurate measurements of temps, I sat a thermometer on top of the unit, and sure enough, I obtained a max of 87 degrees F. At ~ three feet over the unit, the temp was 83 degrees F. I then sat it one foot away from the base of the unit (it’s elevated 3 feet off of the ground) and within minutes it had dropped to 82.5. I also, to be thorough, sat it five feet away, and downwind; the temp remained at 82.5.

The most you can argue is that without quantifying the effects, we cannot assess its impact on the apparent temperature increase seen over the last century. It might be a trivial effect or it might be significant, but you cannot conclude that it may safely be dismissed while at the same time arguing that no conclusions can be drawn from it. You can’t, unless you are prepared to ditch logic.

I never stated it could “safely be dismissed.” I stated we will never and can’t know jack from a photo and that if truth was the goal, some should get out of their armchairs and do the science. Me, I’m too busy with T&E species.

Ah, that’s not troll-food, it’s just me providing all the information I possibly can for study, or logical and civil arguments based upon the issue, and eagerly awaiting similar specifics in the same manner in return. I’m quite sure the responses will be very illuminating.

Or responses about AC units, local reported weather, drinking Chimay, and going to the beach. :)

Seriously, if an audit at some of the sites not meeting standards could measure both at the sensor and a couple of locations nearby (5 and then 10 and then 20 feet at each of NSEW at both 2 feet and 10 feet of altitude?) of:

Last one whilst they’re loading up the firewood so’s we can pump more carbon into the atmosphere:

Or responses about AC units, local reported weather, drinking Chimay, and going to the beach. :)

You have something against good Belgian ale too? Patience grasshopper, you’ll just have to wait until tomorrow.

Seriously, if an audit at some of the sites not meeting standards could measure both at the sensor and a couple of locations nearby (5 and then 10 and then 20 feet at each of NSEW at both 2 feet and 10 feet of altitude?) of:

or
3) The direction of any possible error – increase a trend, mask a trend or reduce a trend.

This is a lame attempt to evade, by changing the subject, the fact that your contentions are contradictory.

I never stated it could “safely be dismissed.” I stated we will never and can’t know jack from a photo and that if truth was the goal, some should get out of their armchairs and do the science. Me, I’m too busy with T&E species.

But in fact, we CAN “know jack from a photo”. Or are you prepared to argue that, for instance, a photo of a man holding a blowtorch to a thermometer would not be grounds for assuming an upward bias in the thermometer’s readings? Granted, Watts’ photos do not show something as hot as a blowtorch on the thermometer, but the effect of being surrounded by asphalt is just as undeniable. Your continued refusal to acknowledge this makes YOU, if anyone, the “denialist”.

At any rate, I’m glad to see that you now agree — in contrast to your previous assertions — that you cannot dismiss the questions raised by the fact that these stations are obviously out of compliance and obviously include factors that bias the temperature readings upwards. However, I wonder why your little experiment involved an attempt to measure the impact of an AC unit on local temperatures, when the specific location in question is biased by the effects of asphalt (its current environment) versus non-asphalt (its previous environment)?

After I tried to post my comment, I noticed that Steve McIntyre replied in a similar vein. But since I hate to be censored, I will post my comment here.

———
Gavin,

Your response to McIntyre in #207 claims that McIntyre would be much better off just writing new code himself. I disagree. If he writes his own code he will probably be accused of making mistakes as he was once before.

Here are comments written by you and Michael Mann published here on January 27, 2005:

As detailed already on the pages of RealClimate, this so-called correction was nothing more than a botched application of the MBH98 procedure, where the authors (MM) removed 80% of the proxy data actually used by MBH98 during the 15th century period (failing in the process to produce a reconstruction that passes standard verification procedures, an error that is oddly similar to that noted by Benestad (2004) with regard to another recent McKitrick paper).

But if McIntyre is supplied with the source code (which is a requirement Judith Curry pointed out above (#69) has now been signed into law), then everyone knows he has the procedure that was used. If he finds a flaw, he can point it out and others can evaluate his discovery. If climate scientists continue to hide data and methods, it has to be reverse engineered; that wastes time and science does not progress. This lack of openness is not a hallmark of science but of pseudoscience – Pay no attention to the man behind the curtain!

or
3) The direction of any possible error – increase a trend, mask a trend or reduce a trend.

This is a lame attempt to evade, by changing the subject, the fact that your contentions are contradictory.

Not. You can’t know the direction of any possible error or the amount of any possible error from a photo.

I never stated it could safely be dismissed. I stated we will never and can’t know jack from a photo and that if truth was the goal, some should get out of their armchairs and do the science. Me, I’m too busy with T&E species.

But in fact, we CAN know jack from a photo. Or are you prepared to argue that  for instance  a photo of man holding a blowtorch to a thermometer would not be grounds for assuming an upward bias in the thermometers readings? Granted, Watts photos do not show something as hot as a blowtorch on the thermometer, but the effect of being surrounded by asphalt as just as undeniable. Your continued refusal to acknowledge this makes YOU , if anyone, the denialist.

You’re assuming the asphalt is affecting the readings without knowing any other possible conditions (winds funneling heat away due to structures present, etc) or if any alleged heat from the asphalt even reaches the instrumentation.

At any rate, Im glad to see that you now agree  in contrast to your previous assertions  that you cannot dismiss the questions raised by the fact that these stations are obviously out of compliance and obviously include factors that bias the temperature readings upwards. However, I wonder why your little experiment involved an attempt to measure the impact of an AC unit on local temperatures, when the specific location in question is biased by the effects of asphalt (its current environment) versus non-asphalt (its previous environment)?

For one, because I’ve seen multiple pics pointing to AC units.
But for consistency, at ~9:30 am I began an asphalt experiment:
Local (again from wunderground.com):

Look for Frankfurt (Frankfurt/Main, Germany).
GISS’s data ends in 1990.

If you go to the DWD’s site (Deutscher Wetterdienst, http://www.dwd.de),
it’s absolutely easy to download monthlies up to the previous month;
there are even daily data, usually two or three days behind
the current date.

I learned (by simply reading the comments here over the last two years;
sorry, when it comes to capturing a rather sceptical view on global warming/
climate change, I’m a ‘Johnny-come-lately’) that it’s far better to dig
through the data of each national weather service and
combine it into a global database.

A lot of the national/regional weather services supply more reliable
data than GHCN/GISS/Hadley.

#232: Where is TCO to demand immediate publication of this highly significant research?

Better yet, let’s put TCO to work (we will give him an unpaid posting leave of absence from CA) to show us how to write a proper paper and get it published, by doing the heavy lifting for the author, Wildlifer. In preparation for writing up your results, Wildlifer, you might want to consider using a normalized temperature anomaly, since making all these readings and allowing the thermometer to equilibrate for each, without the ambient temperature changing even 0.1 degree, may pique too much reviewer attention. You will also need a minimum temperature for all these conditions, and that, a wildlifer surely knows, comes in the late night or early morning. And such are the sacrifices we must make in the name of science.

To return to the question of “does Hansen’s error matter”: there are certain claims being made about global warming. How do we check those claims? In the case of the Hockey Stick, it was claimed that recent years were the hottest in 1000 years (pretty alarming!), but this proved to be a house of cards in terms of the data input, statistics used, etc. In the case of the US temperature data, the claim is that n of the past x years are the warmest on record. After Steve’s fix, this is no longer true. This logically raises questions about the rest of the computations and the rest of the world’s temperature records.

In traditional science, replication is how things are tested. A person claims that under certain conditions, you can synthesize a given organic molecule. You test it and find out that by golly it’s true! (or not). Why not replicate here? First, the raw climate history data are given, and can’t be replicated. BUT: then they are adjusted, selected, interpolated, UHI corrected, and averaged in some way. The explanations of how this was done are vague and not sufficient to replicate what was done. Any attempt to code up how it was done will not be right because it is too complicated. This leaves auditing. The CPI, unemployment figures, housing starts, etc. for the economy are compiled and checked carefully by hundreds of people. Any goof will cause huge public outcry, so they are very careful. In contrast, the fiddling with the climate data is much more extensive but is carried out by a handful of people. It is not open to inspection by anyone.
What has the small-time auditing found so far?
1) UHI adjustments adjusting the rural rather than urban location (grand canyon)
2) The recent Y2K bug
3) Urban stations called rural
4) Uncorrected station moves such as to a roof
5) Uncorrected UHI trends (particularly overseas)
Do these (and more) prove the data is junk? Of course not. But they do suggest that an audit is in order. Why would any honest scientist reject help with refining the data? Why would any society accept the claims of perfect computing by a bunch of people from NASA, an agency with regularly scheduled disasters?

#244 Auditing and more. Having worked with data I’m totally paranoid about it being right, because there always seems to be another error, even after I run it three times, and eyeball it. So I am kind of understanding about errors. But when Hansen says this error doesn’t matter, some things that concern me are:

1. The errors are up to 1.5 deg C at a site and are easily seen by eyeballing the data (why isn’t this done after every change?)
2. Hansen says it doesn’t matter.

This was for a Tropical Wind Exposure survey. I can’t get to the USCRN site but I believe they have site photos also. Something to put in their wallets? Want to complain to your congressman about NOAA wasting money on photos?

The posting of reviewers’ comments and authors’ assessments shows once again that the IPCC is not a peer review process. They are free to pick reasons out of a hat without any scientific or mathematical support and just dismiss the concerns expressed by the reviewers. There is no iteration with the reviewers, where they would have to justify their responses. Here is part of the response to my comment on WG1 chapter 10, based on the Roesch diagnostic study which showed that ALL the AR4 models had a positive surface albedo bias against solar:

“Future climate projections are reported as anomalies relative to a base period at the end of the 20th century. Thus, it is assumed that systematic errors are subtracted out in performing those differences, and we report climate changes, not absolute climate values.”

What is their basis for assuming that systematic errors are subtracted out in a non-linear system such as the climate? That assumption isn’t even generally true in a linear system.

If you fit the 20th century data with a positive albedo bias against solar, that bias must have been compensated for somewhere else, possibly by an increased sensitivity to CO2. Where do they “subtract out” this increased sensitivity to CO2 when projecting future increasing-CO2 scenarios? If anything, the error is amplified. I can’t believe they are this stupid; they must just be unethical. They evidently didn’t think they could respond to this over at realclimate.org.
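The point can be made concrete with a toy calculation (my own construction, not any IPCC model): a purely additive systematic error does subtract out of an anomaly, but a bias that interacts with the forcing does not.

```python
# Toy illustration of why "anomalies subtract out systematic errors"
# holds only for a purely additive error.

def linear_model(forcing, bias=0.0):
    # response proportional to forcing, plus a constant additive error
    return 0.5 * forcing + bias

def nonlinear_model(forcing, bias=0.0):
    # response where the error scales with the forcing itself
    return 0.5 * forcing * (1.0 + bias)

base, future = 2.0, 6.0  # arbitrary forcing levels

# Additive case: the bias cancels in the anomaly (future minus base).
print(round(linear_model(future, 0.3) - linear_model(base, 0.3), 2))  # 2.0
print(round(linear_model(future) - linear_model(base), 2))            # 2.0

# Forcing-dependent case: the same bias inflates the projected change.
print(round(nonlinear_model(future, 0.3) - nonlinear_model(base, 0.3), 2))  # 2.6
print(round(nonlinear_model(future) - nonlinear_model(base), 2))            # 2.0
```

In the forcing-dependent case the bias survives the differencing and inflates the projected change, which is exactly the amplification being described.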

Looking at the graphs was a confirming experience. One can understand why the climate scientists have concluded that with enough local weather stations one can see the “climate”. However, since that is an assumption, the effect of micro- and meso-scale heat influences on such an assumption can also be easily seen. Visually it is easy to conclude that the actual effects must be resolved. The thing that disturbs me is that as you look at those graphs and start thinking of putting together a world map of anomalies, why do the climate scientists’ graphs look so data-scarce? It would appear that one could get a wonderful trend line, with error/confidence bars, nicely smoothed on a monthly basis. I have to admit I am a bit disappointed, given such a data-rich set, at what we are getting for the money we are pouring into climate change science.

In the interest of clarity, I’m a bit confused by some of the statements attributed to NASA regarding the relative size of the US to the world. According to my fraying “Atlas of the World,” the US is about 6% of the world land area, with the contiguous US being about 5%. I know those NASA folks have access to better data than my 1970s-era atlas, but surely things wouldn’t change by that much. Does anyone have an exact quote of Hansen or Schmidt saying the US is 2% of world land area?

Back in the U.S., it turns out 1934 and 1998 have been swapping (statistically insignificant) spots on the ranking for a number of years. Mr. Ruedy said NASA downloads new thermometer data every month, and recalculates annual average temperatures based on things like thermometer calibrations, “urban warming” corrections and other adjustments. In 2001’s calculations, 1934’s average was warmer by a very slight amount. By 2006, 1934 and 1998 were at an exact, not just statistical, tie (NASA rounds the figures to the nearest hundredth of a degree Celsius). In an early 2007 update, 1998 had edged ahead, but by July, 1934 was back on top by 1/50th of a degree Celsius. All of these movements were the result of NASA’s calibrations, not the flaw identified by Mr. McIntyre. The latest shift showed up on NASA’s Web site when it did because the agency incorporated all of its latest data online when it was making the unscheduled update to address the flaw Mr. McIntyre spotted.
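For what it’s worth, the “exact tie” is easy to reproduce with invented numbers (these are not NASA’s actual anomalies): two years can differ in the underlying data yet round to the same hundredth of a degree.

```python
t1934, t1998 = 1.2512, 1.2468  # hypothetical anomalies, deg C

print(round(t1934, 2), round(t1998, 2))  # 1.25 1.25 -> reported as a tie
print(round(t1934 - t1998, 3))           # 0.004 -> real gap, well under 1/50 deg
```

With differences that small, a monthly recalculation shuffling the top two ranks is unsurprising.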

I think it does matter, though I do not fully understand why. As a scientist, I am inclined to agree with Hansen and Schmidt that an error of 0.15 degrees in the U.S. temperature data (0.00x degrees globally) is not, by itself, a big deal in the context of predicted warming.

However, an informal survey of friends and colleagues — by now everyone has heard about “NASA’s Y2K problem” — suggests that it may matter a lot to the general public. The revelation of NASA’s problem — possibly also the way it was discovered — seems to have shaken confidence in the whole AGW story. This is in contrast to M&M’s work on the hockey stick, which (IMHO) should matter but seems to have made little or no impression except among serious statisticians.

Something to think about, anyway…

(FWIW: My “convenience sample” would be classified as generally smart, Democratic and liberal; it receives news from, among other sources, the Washington Post, NYT, and NPR; it also tends to “believe in” [my term] AGW).

#253 Does it matter? We are talking about the effect of one discovered error on one index calculated from that data. But put this in the context of a data management company: if the CEO got up after an obvious error was discovered and said that the error didn’t matter, in any context, they would be carpeted pretty quickly.

Instead of bashing deniers, some appropriate responses, as Steve has said, are a bulletin alerting users and a public plan to ensure it doesn’t happen again. You know, there are probably over 10,000 data managers in the US who could put this together and maintain cleaner data than appears to be available.

That’s why it has hit people’s buttons, I think: it bears on AGW, but it is also potentially indicative of problems that should be rectified.

In the interest of clarity, Im a bit confused by some of the statements attributed to NASA regarding the relative size of the US to the world. According to my fraying Atlas of the World, The US is about 6% of the world land area, with the contiguous US being about 5%. I know those NASA folks have access to better data than my 1970s era atlas, but surely things wouldnt change by that much. Does anyone have an exact quote of Hansen or Schmidt saying the US is 2% of world land area?

Mike B, the confusion, I would guess, is in using the entire global area (land plus oceans) versus using just the land area in the denominator. Your numbers would be approximately consistent with saying the US makes up 2% of the world’s surface area.
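That reading checks out on the back of an envelope (round-number areas of my own, not NASA figures):

```python
total_surface = 510e6  # km^2, land plus ocean
land_surface  = 149e6  # km^2, land only
us_area       = 9.8e6  # km^2, entire United States, approx.

print(round(100 * us_area / land_surface, 1))   # 6.6 -> the atlas's ~6% of land
print(round(100 * us_area / total_surface, 1))  # 1.9 -> the quoted ~2% of surface
```

So both statements can be right at once; the denominator is doing the work.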

#244 I think only touches the tip of the melting glacier (sorry, couldn’t help it).
I am not familiar with how the temperature measuring enclosures are constructed, nor with their thermodynamic properties, but a simple change in the material, covering, distance of the thermometer from the enclosure side, etc. can make a significant (i.e. measurable at > tenths of degrees) difference in the measured temperature. An undocumented change in the condition of the measuring station can change the observed temperature to such a degree that it would negate any observed trend (whitewash vs. white acrylic paint could show an increase in measured temperature of > 1 C; sorry, I’m on holiday and don’t have access to source material; can someone verify?). How often the enclosure was painted could be significant depending on the paint used (maybe the linear decrease in temperature noted in a previous post was due to an overzealous whitewasher?).

The people putting together the data are some smart folks, and I’m sure they normally take into consideration as many of the contributing factors as they can possibly think of (or are aware of). When I see responses of “It doesn’t matter,” I question whether they are responsible enough to account for even a majority of the possible variants. Having created quite a few heat models (albeit space- rather than terra-firma-related) and written a few million lines of code myself, I am a bit shocked at the nonchalance with which this oversight has been received. Statistically speaking, the observed trends of global warming could very well fall within the margin of error induced by the modeling assumptions. I’m not saying that they do, but how are we to know what the trend has been if we don’t know how the values were computed? For all we know the temperature increase is double what has been estimated and we’ll all die within the next decade (although I plan to live for at least another 50 years). No matter how many times someone points to the general algorithms used and says that’s all you need, you can’t tell how an application really works until you look at the code! Is it really necessary to state that? Are there really so many people ignorant of what it takes to code that this is even a point of debate? If only that were true, I could save a lot of time not having to talk to customers.

In any case, lets not confuse the science with the politics. Even the pro-AGW scientists *mostly* quantify their data and include all of the proper wording in their papers to say they arent absolutely certain that their findings point to a concrete conclusion. It is usually the media and those writing the abstracts that make the definitive statements. I’ll never be convinced that man-made CO2 increases have more than a miniscule effect on global temperatures, but I do have an open mind as to it being something other than solely due to that big yellow thing in the sky that provides us with our heat in the first place. I would say that those skeptics of AGW should be allowed to use the same rhetoric in reverse that the pro-AGW people have used. If it was such a big deal 1998 was the hottest blah blah blah, it should be as big a deal that it wasnt. If every error one side makes is plastered all over the media then the same should be true of the other way. Personally, I would prefer that the issue was handled solely in a scientific manner, but since it isnt and it has become more of a political issue than a scientific one, I would like to see a little bit of fair play.

Does any error “matter” in the scheme of things? Of course! We are talking about adjusting and correcting measurements; any error propagates into everything else…

wildlifer, ah yes. Whoever you are. Indeed, I am patient. Thanks for the suggestion, but I don’t really need to wait for anything. I eagerly await some specific answers. Maybe in the meantime you and TCO can co-publish a peer-reviewed study, as suggested by the other helpful denizens of this blog. Because as you know, nothing matters at all if it’s not a published peer-reviewed paper. Maybe Roger A. Pielke, Sr. can help you with the effort. Oh, the photos of which you speak, you mean the ones everyone can look at to see as a record? Of course, yes indeed. An excellent resource. Glad you’re on board with the idea of being able to look at things in the future. And thanks for giving us the local weather. I’m sure going to something like the Weather Channel website was quite beyond the rest of us to experiment with.

Steven, thanks for the question. :) And congrats on your books! So you are fluent in both Cantonese and Mandarin. Cool! So. The place to find everything as you mentioned above to get glacier info, the MBB 8 2002-2003 PDF is at this scintillating location: http://www.geo.unizh.ch/wgms/mbb/mbb8/MBB8.pdf It has lat/long stats on the glaciers they look at as well as a map showing where they are. I read elsewhere about this, took a look at the studies (quite interesting) and matched the same results as to the numbers. Cuz iffn ya can’t replicate and cite what you wuz usin’ to do it, whut use iz it?

Uh … thanks.
BTW, I’m a wildlife biologist, hence the name “wildlifer.” Although, if I worked on a hot-shot crew, “wildfire” wouldn’t be too bad.

In any case, lets not confuse the science with the politics. Even the pro-AGW scientists *mostly* quantify their data and include all of the proper wording in their papers to say they arent absolutely certain that their findings point to a concrete conclusion. It is usually the media and those writing the abstracts that make the definitive statements. Ill never be convinced that man-made CO2 increases have more than a miniscule effect on global temperatures, but I do have an open mind as to it being something other than solely due to that big yellow thing in the sky that provides us with our heat in the first place.

The word “never” indicates you’re not going to be quite as “open-minded” as you try to lead us to believe. I, on the other hand, will accept the common wisdom that CO2 is a greenhouse gas.

I would say that those skeptics of AGW should be allowed to use the same rhetoric in reverse that the pro-AGW people have used. If it was such a big deal that 1998 was the hottest blah blah blah, it should be as big a deal that it wasn’t.

It fell from a tie, to a tie, in the US and is allegedly still a global record.

If every error one side makes is plastered all over the media then the same should be true of the other way. Personally, I would prefer that the issue was handled solely in a scientific manner, but since it isn’t and it has become more of a political issue than a scientific one, I would like to see a little bit of fair play.

Science isn’t fair, a democracy, or the ever-popular opinion poll; it should rise or fall on the merits. Thus far, while maybe making for great propaganda and misrepresentations to feed the sensationalist MSM, the denialists have produced nothing of merit.

But let’s say they succeed in demolishing any and all confidence in historical temps, no matter what the margin of inaccuracy or accuracy may be. What then? What signals should we – would you – look for to determine if the globe is indeed warming and whether or not man is contributing to that warming?
Or is that the “end of the story” for you?

Wildlifer, welcome back from your Yahoo-Message-Board-like trolling endeavor on this blog.

So sayeth the trollish one.

You say:

What signals should we – would you – look for to determine if the globe is indeed warming and whether or not man is contributing to that warming?

No, I asked. And you offered no answers.

And as a wildlife biologist, surely you’re not denying the role of propaganda or photographs in the media in getting the right message across to the public.

Propaganda and public ignorance go hand in hand. For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day.

1. While it’s important to get correct land temperatures it is several times more important to get correct ocean temperatures. Not only do the oceans make up 70%+ of the planet’s surface but their humidity impacts land Tmins and other aspects of land climatology.

2. While there are questions about the accuracy of recent temperatures, I think the bigger questions are about the reconstructions of temperatures farther back in time. I have some confidence in the satellite-derived temperatures, and when I plot satellite vs surface records since 1980 (here) I don’t see huge differences in the recent decades. GISS may have the global surface record more or less right for the last couple of decades. But the farther one goes back in time, the more adjustments and assumptions there are in the reconstructions. What are they? Why? Those are very interesting questions, which Steve M and others are exploring, but the answers seem mostly shrouded in smoke, which they should not be.

3. GISS made a 0.10 to 0.15C error in 2000, a magnitude roughly equal to the decadal trend. Yet they accepted the erroneous higher temperature without blinking. Had the error been in the other direction, a spurious cooling of 0.15C, I wonder if they would have quickly reexamined things and caught the problem. If I were GISS, I’d be concerned that I may have missed the error simply because the result of the error fit my worldview.

4. Like DeWitt, I marvel that GISS moved so quickly to acknowledge the error. That quickness is uncommon behavior in most large organizations. I do wonder if GISS already had discovered the problem and were trying to figure out how to correct it with minimal fallout. I think they would have made the correction, probably at a time when other revisions would be made.

Actually, the first study of the CRN (Climate Reference Network) compared the new pristine CRN station to an accepted ASOS station (that’s one at an airport) and concluded that the asphalt from the adjacent runway biased the ASOS reading.

AGW believers are far from uniform. While there are plenty of them who believe (usually stridently) as a matter of faith, there are far more who believe as a matter of trust.

Al Gore and his cronies do not care so much about the “faithers”, because they will always be on board with any nonsense he feeds to the PR firms.
However, the “trusters” are far more numerous (if quieter), and hence more important. Mostly, they will not have looked into the issues to any great degree – they have their own lives to lead, after all. They are people who accept what “the scientists” say, and assume that the scientists are saying what the newspapers say they are saying. In particular, they are accepting that all AGW sceptics are odd-balls scrabbling around the edges of the mainstream, and generally not to be taken seriously. They mostly will not have the scientific background to really be able to follow the arguments, particularly abstruse statistical matters such as are discussed here.

It seemed odd to us that Mann and Co continued to blindly insist that their hockey stick was not broken – at worst, slightly bruised, but still clearly a hockey stick. From the point of view of the PR campaign, they did exactly the right thing. As long as they presented a consistent front that nothing had really changed, the “trusters” could continue to trust.
In this case, the big Daddy of the AGW pseudo-scientists has admitted he was in the wrong. This should make an enormous difference to a truster, as it is clear confirmation that his trust was misplaced. It may be that it was an honest mistake, it may be that the difference is minimal, but it is incontrovertible that the AGW establishment cannot be trusted 100%, and the strange folk on the margins cannot all be dismissed 100%. This is an enormous difference to an honest truster. (Although the faithers won’t care.)

I should add that I am about as far away from being a PR bod as it is possible to be, so this could well be complete tosh. No doubt any professional spin-doctors in the audience will be happy to correct me – Steve Bloom, any thoughts ?

It is a common and shallow misperception that Surfacestations is merely taking pictures.

They are confirming metadata. I’ll give you a simple example.

They confirm the lat/lon with an on-site GPS recording. Why is this important? We have found discrepancies between the NOAA/GISS records and ground truth. Why does lat/lon matter?

Simple. Hansen uses nightlights satellite data. He measures pixel intensity at the lat/lon of the site. Every pixel is 2.6 km in extent. It’s a good thing to have exact lat/lon to improve the accuracy of nightlights. NO ONE, to my knowledge, has checked on this. If Hansen used inaccurate lat/lons to look at nightlights, then we might have an issue. Hmm.
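To put a number on it (a back-of-the-envelope sketch; the 2.6 km pixel size is from the comment above, while the 0.1-degree error and the flat-earth approximation are my own assumptions):

```python
import math

PIXEL_KM = 2.6  # stated nightlights pixel size

def offset_km(dlat_deg, dlon_deg, lat_deg):
    # local flat-earth approximation: 1 deg latitude ~ 111.2 km,
    # and a degree of longitude shrinks by cos(latitude)
    dy = dlat_deg * 111.2
    dx = dlon_deg * 111.2 * math.cos(math.radians(lat_deg))
    return math.hypot(dx, dy)

km = offset_km(0.1, 0.1, 40.0)  # 0.1 deg error in each coordinate, at 40 N
print(round(km, 1), round(km / PIXEL_KM, 1))  # 14.0 km, about 5.4 pixels off
```

So even a tenth-of-a-degree metadata error would sample nightlights several pixels away from the actual station.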

Polar bears are religious icons? Come, come, there must have been more there than a reference to the mean old MSM’s treatment of you poor wildlife biologists. That is, after all, what we’re discussing here. You really think you poor wildlife biologists get the short end of the stick from the mean old MSM? And for religious reasons to boot? (That is, after all, what this whole discussion thread seems to imply.)

This is not meant as a “parlor trick.” Just a straight-forward question. A straight-forward answer would be nice for change. And just to get this back on track, I think this all started with whether errors in climate studies matter, and whether pictures can help us understand the nature of any measurement errors that might be there. Of course, the only reason any of that matters gets back to the “how much” and “why” question.

It is a common and shallow misperception that Surfacestations is merely taking pictures.

They are confirming metadata. I’ll give you a simple example.

They confirm the lat/lon with an on-site GPS recording. Why is this important? We have found discrepancies between the NOAA/GISS records and ground truth. Why does lat/lon matter?

Simple. Hansen uses nightlights satellite data. He measures pixel intensity at the lat/lon of the site. Every pixel is 2.6 km in extent. It’s a good thing to have exact lat/lon to improve the accuracy of nightlights. NO ONE, to my knowledge, has checked on this. If Hansen used inaccurate lat/lons to look at nightlights, then we might have an issue. Hmm.

And that will prove what? That Hansen has the SS troops out there moving sites around? And where can I get a camera that takes lat/long?

Speaking of lat/long, I’ve tried to insert historic locations (2000) for which lat/longs were taken into a map, and found them 100 miles away from the landmark they were supposed to be next to.

268, I don’t think Hansen had any choice on this one. The hockey stick was obscure enough and esoteric enough that Mann could basically deny everything. This is a lot more obvious. They may also be getting some internal scrutiny at NASA that may have taken the deny option off the table.

I think your basic thesis is correct, though. That’s why the “consensus” meme is so critical. Once they lose the public perception that there’s a consensus of scientists who swallow the extreme AGW scare scenarios hook, line, and sinker, it’s all over.

I essentially agree with your statements in this post. The satellite data should provide good calibration points for surface temperatures. I do, however, see sufficiently large and periodic differences (particularly in proportion to the total trend) between GISS and the satellite measurements to question whether the published confidence limits for the GISS data are correct. Do you have handy the confidence limits or error bars for the satellite data?

The 1930s-to-current US temperature comparison, which I think is important to the whole AGW argument, has to be left to the trust we have in GISS and other land records like it. Apparently we have only the current audits of land stations to understand how much quality control has gone into station compliance generally, and to attempt to determine whether compliance is currently better or worse than in the past. I think the bigger finding may be the increased uncertainty that noncompliance puts on the land station data, which is why I’d like to see the acknowledged error bars for the satellite data.

Using satellite measurements as a calibration point for land surface measurements would also require a relatively high resolution by satellite in order to determine the accuracies of regional and local differences in climate change. These local variations, I think, are important considerations when attempting to deal with what a global average change means in terms of local variations. What is the resolution for satellite temperature measurements?

At some point I would like to see a side-by-side comparison of the various sources of land surface temperature measurements and satellite measurements at all levels: global, NH, SH, US, IL in the US, and local regions of IL in the US. The comparison would set the various sources’ confidence limits against the differences between the source measurements. I may have to do this myself in order to get a better mental picture of the discrepancies between sources and to resolve a long-standing problem I have had in understanding the source of widely varying (and completely adjusted) temperature trends in relatively closely spaced stations in IL.

I contend that anyone who just dismisses the auditing of the stations is closed-minded, has no imagination, and is not worth discussing the issue with. Especially when any point of theirs is disproved, or the effect shown to be meaningless, and so on, their reaction is usually just to ignore it or switch the subject. You see it from this blog’s commenters, you see it from the people that run Real Climate, you see it all the time. Or at least I do.

It’s easy to come up with a list of uses for either photographs or the audits and various aspects of such. Any exercise of that nature makes it fairly obvious. Photographs: historical record, hard proof of siting requirement status, ability to recheck by looking not traveling, ability to see the work of others easily, to help in later study of all stations, assisting in placing CRN stations, determining if the reported location is correctly described. The audits are fairly obvious also. Condition of site, siting requirement status, assisting in placing CRN stations, knowing which stations are less adjusted and which are more, determining the accuracy and quality of the existing network, finding potential errors, and seeing if the people running the network (and who work for the US taxpayers) are both a) doing their job and b) doing their job the way they say they are.

We are asked to trust. So we check. I’ll leave it up to everyone to determine if that trust is deserved or not, or if it’s been proven or disproven yet.

So as always, we continue to get back to the cause and degree of effect, the why and how much of it all. My contention is that you may be able to adjust for issues blindly and statistically, and get it mostly correct, but if you don’t know exactly what’s wrong and how much it’s affecting things, how can you accurately correct for it and know you have it mostly correct (or correct to the degree of “good enough”)? I say you can’t, and it’s up to those claiming you can to prove it can correct things to at least “good enough”. Or to choose to ignore the issue or change the subject, stonewall and obfuscate; therefore causing both doubt and the impression that something is being hidden.

How many folks think it odd that a wildlife biologist doesn’t see the reason for doing a survey of the measurement stations used to collect scientific data? Or odd that somebody in the sciences doesn’t understand the difference between a documentary raising issues for discussion, contending that the warming being claimed is overblown, and a museum/botanical garden in Cincinnati bringing history to life from a certain perspective that only a certain demographic would be interested in?

On the issue of media PR, let me ask this one of you, wildlifer: “So what sorts of things have you poor wildlife biologists had to put up with in the mean old MSM?”

Pictures of badly sited surface stations?? :) lol (Seriously, substitute “Corrections to the adjustments since 2000?” or “Melting glaciers” or something else if you want, you can pick what it is your troubles with the MSM are.)

What are your troubles with the MSM regarding biology? 3 examples is probably sufficient. Thanks in advance.

When you consider that 100 miles represents less than 0.5% of the Earth’s circumference, such a small error will make hardly any difference, since everything else is still where it is supposed to be. It should average out.

LOL.

I suppose that your point is that since other people make 100-mile errors, the 1,000 ft errors by “climate scientists” are insignificant despite their bearing on microsite issues (similarly for a move of 1,000 ft, or 100 ft).

It turns out for the level of discrimination required to get the accuracy claimed from the historical record, microsite variations are very significant.

And that will prove what? That Hansen has the SS troops out there moving sites around? And where can I get a camera that takes lat/long?

No, but Parker has classified a South Carolina site at a nuclear power plant as a rural site. If you look at the lat/long on Google Earth it puts the site in a wooded area, but I’ve learned not to trust the posted coordinates or GE. I put that under things that make me say HUMMMMMMMM.

At some point I would like to see a side-by-side comparison of the various sources of land surface temperature measurements and satellite measurements at all levels: global, NH, SH, US, IL in the US, and local regions of IL in the US.

Me too. However, I suspect the satellite resolution isn’t good enough for individual states, much less within state. I’ve done a plot of yearly averages for GISS and UAH MSU data for the contiguous 48 states. I’ve adjusted the MSU data to give the same mean as for GISS. As far as error, the UAH readme lists a 95% C.I. of +/- 0.06. I don’t know if this applies to all data or just global monthly data. I presume that an annual average would have a smaller C.I. and a smaller geographic area would have a larger C.I. The correlation seems to be quite good both for the data and the trends. It’s less good on the trend for global, though. I haven’t tested whether the difference is significant, though.
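The mean-matching adjustment described here is simple to sketch (the invented series below stand in for the real GISS and UAH downloads):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2007)
# stand-in "GISS": a small trend plus noise; stand-in "UAH": the same
# signal with an offset baseline and its own noise
giss = 0.02 * (years - years[0]) + rng.normal(0.0, 0.10, years.size)
uah = giss - 0.25 + rng.normal(0.0, 0.05, years.size)

# shift UAH so both series share the same mean over the common period
uah_adj = uah + (giss.mean() - uah.mean())

print(abs(giss.mean() - uah_adj.mean()) < 1e-12)  # True: means now match
print(np.corrcoef(giss, uah_adj)[0, 1] > 0.8)     # True: strongly correlated
```

Matching means removes the baseline offset but deliberately leaves the trends alone, so any trend disagreement still shows up, as noted for the global case.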

No, but Parker has classified a South Carolina site at a nuclear power plant as a rural site. If you look at the lat/long on Google Earth it puts the site in a wooded area, but I’ve learned not to trust the posted coordinates or GE. I put that under things that make me say HUMMMMMMMM.

Or, one can take a waypoint, download it into ArcGIS, and then find it 50 miles away from the location where you took the waypoint.

How many folks think it odd that a wildlife biologist doesn’t see the reason for doing a survey of the measurement stations used to collect scientific data? Or odd that somebody in the sciences doesn’t understand the difference between a documentary raising issues for discussion, contending that the warming being claimed is overblown, and a museum/botanical garden in Cincinnati bringing history to life from a certain perspective that only a certain demographic would be interested in?

I find it as odd as I would a picture of an Arkansas forest claiming to survey for the Ivory-billed Woodpecker. Especially if the picture noted “See! No woodpeckers.” That’s not science.

On the issue of media PR, let me ask this one of you, wildlifer: “So what sorts of things have you poor wildlife biologists had to put up with in the mean old MSM?”

I didn’t say the MSM did anything to biologists. I said biologists and the biological sciences have our own group of deniers who use (if not invented) the same tactics we see here.

What kind of analogy is it to compare looking for an Ivory-billed Woodpecker in pictures of an Arkansas forest (or pictures of Mars, or pictures of a Ford dealership) with looking at photographs of temperature measuring sites to show whether or not the sites meet standards?

You are getting more and more illogical every post.

As far as the MSM, you said, “For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day.” John M. mentioned something about that; it may have been a faulty analogy in and of itself, as yours probably is about “deniers”. But instead of talking about either analogy, you started talking about polar bears and strawmen. Now you’re deciding to talk about the MSM when I brought up a different comparison (pictures of something) instead of polar bears.

What did the “biological deniers” do? Take pictures of “clean petri dishes” and claim they showed if they were “clean petri dishes” or not? Point out a sample of a population of bacteria showed too much or too little of a margin of error? Mitochondria interaction algorithm not sufficiently described? Cell growth rate calculations not sufficiently described? Come on.

You complain about religious subjects being deleted without being at all clear about what it was, and it turns out it was probably something more specific than some link to a museum with some off-the-cuff non sequitur later: “IOWs, there’s no difference between this and this. And both target the exact same folks.” Again, you’re getting more and more illogical and bizarre in how you change the subject.

Some folks might think this is “feeding the troll”. I think it’s just more and more interesting on how transparent you and your methods are becoming. And some folks might think that’s an ad hom, but it’s just an observation of the opinion I’m getting from what you say and do and how you say and do it.

Gee, can we stick to one thing to talk about at a time? I think everyone would appreciate that.

What kind an analogy is comparing looking for an Ivory-billed Woodpecker in pictures of an Arkansas forest (or pictures of Mars, or pictures of a Ford dealership) with looking at photographs of temperature measuring sites as meeting standards, or not, showing if the sites meet standards, or not?

You are getting more and more illogical every post.

As far as the MSM, you said For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day. John M. mentioned something about that, may have been a faulty analogy in and of itself, as yours probably is about deniers. But instead of talking about either analogy, you started talking about polar bears and strawmen. Now youre deciding to talk about the MSM when I brought up a different comparison (pictures of something) instead of polar bears.

What did the biological deniers do? Take pictures of clean petri dishes and claim they showed if they were clean petri dishes or not? Point out a sample of a population of bacteria showed too much or too little of a margin of error? Mitochondria interaction algorithm not sufficiently described? Cell growth rate calculations not sufficiently described? Come on.

You complain about religious subjects being deleted without being at all clear about what it was, and it turns out it was probably something more specific than some link to a museum with some off-the-cuff non sequitur later; “IOWs, there’s no difference between this and this. And both target the exact same folks.” Again, you’re getting more and more illogical and bizarre in how you change the subject.

Some folks might think this is feeding the troll. I think its just more and more interesting on how transparent you and your methods are becoming. And some folks might think thats an ad hom, but its just an observation of the opinion Im getting from what you say and do and how you say and do it.

Gee, can we stick to one thing to talk about at a time? I think everyone would appreciate that.

What kind of analogy is comparing looking for an Ivory-billed Woodpecker in pictures of an Arkansas forest (or pictures of Mars, or pictures of a Ford dealership) with looking at photographs of temperature measuring sites to show whether the sites meet standards or not?

The Arkansas woods meets the standards, but there’s no Ivory-billed woodpecker in the photo. So much for “standards.”

So BFD, some “standards” aren’t met. What’s the result? We don’t know, because some people think you obtain measures from pictures. Which is the point: not to produce science, but to produce doubt. “See, this site isn’t perfect, so AGW is false.” Hey, I’ve got a novel idea: you’re there taking pictures, how about some temperature measures from a nearby “perfect” site? No, that would defeat the purpose.

As far as the MSM, you said For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day. John M. mentioned something about that, may have been a faulty analogy in and of itself, as yours probably is about deniers. But instead of talking about either analogy, you started talking about polar bears and strawmen. Now youre deciding to talk about the MSM when I brought up a different comparison (pictures of something) instead of polar bears.

I did not start anything about polar bears; that was someone else. It was you who asked about the MSM when I never said anything, as I’ve already shown you, about the MSM being “mean” to biologists.

What did the biological deniers do? Take pictures of clean petri dishes and claim they showed if they were clean petri dishes or not? Point out a sample of a population of bacteria showed too much or too little of a margin of error? Mitochondria interaction algorithm not sufficiently described? Cell growth rate calculations not sufficiently described? Come on.

I don’t know where you live Sammy my boy, but the biological sciences have been under attack in the US since Epperson v. Arkansas.

You complain about religious subjects being deleted without being at all clear about what it was, and it turns out it was probably something more specific than some link to a museum with some off-the-cuff non sequitur later; “IOWs, there’s no difference between this and this. And both target the exact same folks.” Again, you’re getting more and more illogical and bizarre in how you change the subject.

Do you purposely fail to mention, since others did not see it before it was deleted, that the “this” and “this” were links?
And I’m not the one changing the subject here – I’m responding in this conversation, not leading it.

Some folks might think this is feeding the troll. I think its just more and more interesting on how transparent you and your methods are becoming. And some folks might think thats an ad hom, but its just an observation of the opinion Im getting from what you say and do and how you say and do it.

Thus far, while maybe making for great propaganda and misrepresentations to feed the sensationalist MSM, the denialists have produced nothing of merit.

I responded in #260

And as a wildlife biologist, surely you’re not denying the role of propaganda or photographs in the media in getting the right message across to the public.

To which wildlifer responded in #261

Propaganda and public ignorance go hand in hand. For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day.

And me in #262

So what sorts of things have you poor wildlife biologists had to put up with in the mean old MSM? Pictures of drowning polar bears?

So that’s where the polar bears came from.

As far as the MSM “doing anything to biologists”, what was literally said and what is clear by implication is for fair-minded readers to discern on their own.

Re #292:
The MSM repeats/uses others’ propaganda, but surely you’re not going to sit there and say it’s created by the MSM? They couldn’t sell airtime if they couldn’t paint a picture with two diametrically opposed sides. The MSM shares the blame, I agree, but they’re just fanning the flames. They shouldn’t give any time to all the incarnations of the denial industry.

re: “How many folks think it odd a wildlife biologist doesn’t see the reason for doing a survey of the measurement stations used to collect scientific data?”

Actually I don’t find it that odd. In my professional interactions, I’ve found that life- and little/NO-science people are far more receptive, even embracing, of catastrophic AGW theory. But even worse, many are strident “AIT in the classroom” proponents. In skeptic, NGW or AGW-agnostic camps, the composition appears more toward physical scientists and mathematicians, with AIT’s value only in teaching propaganda.

As far as I can tell wildlifer, you started it. “It” being bizarre statements. You did say

For decades we biologists have seen the same techniques used against us, and the biological sciences, as the AGW-deniers are using against climate science. Same story, different day.

You admitted this in #274. The 1) “this” and 2) “this” are: 1) the global warming swindle, and 2) the creation site. Your claim is that they are the same. Obviously they are not. IF YOU WISH TO MAINTAIN THEY USE THE SAME TACTICS, FINE. But you did not state that. But you do a lot of legalistic arguing, as shown later. But in honesty, some techniques are nothing more than applied scientific method. In #274 you say

Polar bears are your strawman, I’ve never mentioned them.

In fact in #270, in response to #269 (though, poor poster that you are, you did not say it was #269 you were referring to; what was your post about?), you said:

For mentioning polar bears?

When he deleted them, Steve wrote:

[snip – posts mentioning religion are automatic deletes. I’m marking the snip since you seem to be a new poster]

and in particular in #264

That particular subjects taboo on this website.

regarding a quote you had posted:

So what sorts of things have you poor wildlife biologists had to put up with in the mean old MSM? Pictures of drowning polar bears?

But in #274 you say:

Polar bears are your strawman, I’ve never mentioned them.

Remember, in #264 you say
“That particular subject’s taboo on this website.” and quote a post that includes polar bears. So your claim is that if you quoted someone about a subject, you “never mentioned them” (them = the subject of your quote)? This is your claim to an honest discussion?

#265 says
How so? Or is that just another way of saying parlor trick?

You say in #266

Steve deleted two of my posts for mentioning it …

What is “it”??

Any reasonable reading of the posts would conclude that it is “polar bears”.

The only problem I have is that, typical of a troll, when pinned down trolls change subjects. So you said “it”. True, “it” could be anything. As we, on this site, try to audit AGW, we get the same use of indefinites in response to our definite questions. A good example of this is your #274 post, where you refer to a movie that is against AGW and a site that is against evolution. It is your claim, obviously untrue, that they are the same. It is out of thread, in that you did not provide an argument, or even what the arguments are, such that a real discussion could take place. Just as in your use of “it”.

Your #261 about trolls and answering is also full of what you complain of. In post #258 you asked questions; #259 answered the questions asked with rhetorical questions as an answer. It was obvious from your comments in #243 and your legalistic arguments in #258 (lawyers and others recognize that “never” should not be used, even though it is commonly misused) that you understood what he said, but still decided to “ride him for it”. Your use of “it”, then denial that you used “it” as an indefinite reference to polar bears; your use of a comparative fallacy (this and this). Whereas you say you never mentioned polar bears but included a quote about polar bears, the best response I have to you is your own words:

Science isnt fair, a democracy or the ever popular opinion poll and should rise or fall on the merits.

which directly attacks a comparative fallacy, and then you engage in one yourself

Thus far, while maybe making for great propaganda and misrepresentations to feed the sensationalist MSM, the denialists have produced nothing of merit.

You have convicted yourself; and in that, yes, all in all, you are a troll. And I will heed the advice of those on this blog who wish to substantively discuss the science “Don’t feed the troll”. I offer this post so that those who have just joined our discussion can skip posts that do not advance our discussion. Yours are particularly worthy of ignoring, because they could actually add to the discussion, but don’t, by your choice, not ours.

Actually I dont find it that odd. In my professional interactions, Ive found that life- and little/NO-science people are far more receptive, even embracing, of catastrophic AGW theory. But even worse, many are strident AIT in the classroom proponents. In skeptic, NGW or AGW-agnostic camps, the composition appears more toward physical scientists and mathematicians, with AITs value only in teaching propaganda.

What’s AIT?

And I am neither receptive to nor embracing of “catastrophic AGW theory” (as opposed to AGW theory). I accept the fact the globe is warming based on several things, especially those in my field – early migration, range expansion, changes of flowering/fruiting timing, etc.
I accept that warming is anthropogenic because it makes the most sense in the light of the current evidence.

A colleague engaged in longterm water-quality monitoring used to say: “If you see a trend, check the lab.” While the laboratory was not always guilty, it was a good place to start, and there were many stories (unfortunately, this type of experience seldom gets recorded in a formal publication) involving nutrients, heavy metals, and toxic organics, where supposed “trends” were found to have been introduced into datasets by changes in lab techniques or instruments. It happens.

In light of this, and given the significance of the trends they were observing, one wonders how, for 7 years, managers at GISS failed to notice a trend-inducing error. Is it possible that, so long as their “data” were consistent with their “model” (that is, global warming), they did not look for problems? If so, what are we to make of the many “adjustments” that were applied elsewhere during this period: Were these adjustments used to “fix” sites that were inconsistent with the “model”?

Here’s an oddity. I took the August 2007 (post-correction) GISS record and subtracted the pre-correction (Feb ’07) record (base period 1970-1999). Here are the differences:

Now, the post-2000 changes make sense; that was the error that Steve found. And GISS says “There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends.” This, I suppose, would make sense, as the error would cause slight changes in recent years through the rural/urban temperature differences.

However, there are several oddities.

1) In general, earlier temperatures were reported as too cool.

2) The changes in earlier years were greater than in recent years.

3) From 1973 to 2000, the “minor knock on effects” abruptly went to about zero. What happened in 1973 to zero out the “minor knock on effect”?

4) The adjustment varied widely from year to year. 1915 adjusted 0.01°C downwards, while the following year was adjusted 0.03°C upwards.

This kind of mystery is why it is so important that we get access to the year-by-year station list and the computer code. What kind of “minor knock on effect” changes temperatures in two consecutive years, three quarters of a century ago, in different directions?

And why is the 1973-2000 data unchanged? If the changes are related to the post-2000 error, wouldn’t recent years be affected as much as, or even more, than records from a century ago?
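The comparison described above is straightforward to reproduce once both vintages of the record are saved. A minimal sketch (the years and values below are hypothetical stand-ins for illustration, not the actual GISS anomalies):

```python
def version_diff(new, old):
    """Year-by-year difference between two versions of an annual
    anomaly record (new minus old), for years present in both."""
    return {y: round(new[y] - old[y], 2) for y in sorted(new) if y in old}

# Hypothetical values for illustration only, not the real GISS data.
feb07 = {1915: -0.30, 1916: -0.41, 1998: 0.91, 2001: 0.76}
aug07 = {1915: -0.31, 1916: -0.38, 1998: 0.91, 2001: 0.61}
print(version_diff(aug07, feb07))  # -> {1915: -0.01, 1916: 0.03, 1998: 0.0, 2001: -0.15}
```

Any reader who archived the February file can run this kind of subtraction; the puzzle is not the arithmetic but why adjacent pre-1973 years move in opposite directions.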

I believe what you are seeing is an artifact of the band-aid correction that NASA made to the GISS data set. They are still using a mix of adjusted USHCN data (pre 2000) and USHCN raw (post 2000).

From Reto A. Ruedy’s e-mail to Steve M:

“When we did our monthly update this morning, an offset based on the last 10 years of overlap in the two data sets was applied and our on-line documentation was changed correspondingly with an acknowledgment of your contribution. This change and its effect will be noted in our next paper on temperature analysis and in our end-of-year temperature summary.”

It is not clear what 10 years of overlap was used, but I assumed that it was 1990 through 1999, based on the assumption that USHCN adjusted data was not available after 1999. However, Steve M points out in his ‘Lights Out Upstairs’ post:

“Moving on, Hansen says that the USHCN “corrections would not continue to be readily available in the near-real-time data streams”. If GISS is using USHCN adjusted data (as appears to be case from the description in Hansen et al 2001 and the website), this claim is incorrect. Readers in doubt of this may go to the USHCN website ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/ ; the file hcn_doe_mean_data.Z contains three versions of USHCN data, included the version that Hansen says is unavailable. This file was most recently updated on March 1, 2007 and, for the majority of sites, contains adjusted USHCN data up to Oct 2006. At present, GISS has only updated USHCN records to March 2006. Thus, not only are the adjusted USHCN versions available, they are available more recently than presently incorporated into the GISS temperature calculations.”

“Data from the other major station archive (GHCN) can be downloaded from ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2 . The GHCN raw data set and v2.mean.Z and the adjusted data set v2.mean_adj.Z are both updated all the time, most recently Aug 11, 2007. In the version that I downloaded in June, the USHCN record only went to March 2006, the period of the GISS record. However, readers can confirm that both the GHCN raw and GHCN adjusted versions have been archived concurrently and that the switch from one version to another was not required because of version unavailability.”

So I guess the real question is why they chose to normalize the two data sets (adjusted and raw), rather than simply obtain and use the post-2000 USHCN adjusted data. The only reason I can think of is that the Hansen secret-sauce corrections require more than running the data through a turnkey program. Ad hoc tweaking takes time, and I think that they felt the need to respond quickly – a political response to an embarrassing situation.
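The “offset based on the last 10 years of overlap” that Ruedy’s e-mail describes would amount to something like the following. This is a sketch of the general splicing technique only, not NASA’s actual code, and the station values are made up:

```python
def splice_with_offset(adjusted, raw, overlap_years):
    """Splice a raw series onto an adjusted one: shift the raw values
    by the mean (adjusted - raw) difference over an overlap period,
    then fill in years the adjusted series lacks."""
    diffs = [adjusted[y] - raw[y] for y in overlap_years]
    offset = sum(diffs) / len(diffs)
    merged = dict(adjusted)
    for y, v in raw.items():
        merged.setdefault(y, round(v + offset, 2))
    return merged, offset

# Made-up values, purely to show the mechanics.
adjusted = {1998: 0.72, 1999: 0.33}
raw = {1998: 0.62, 1999: 0.23, 2001: 0.50}
merged, offset = splice_with_offset(adjusted, raw, [1998, 1999])
print(round(offset, 2), merged[2001])  # -> 0.1 0.6
```

Note that a single constant offset of this kind shifts every post-transition year by the same amount; it cannot by itself produce varying year-to-year differences decades earlier, which is why the changes to the early record need a separate explanation.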

Clearly, the NASA band-aid normalization involved more than the simple ‘off-set based’ adjustment that Ruedy implies. Maybe they consulted with Mann on how to properly do the normalization.

But not to worry, the check is in the mail and we can find out what was done next year, in the end of year summary. Hansen has politics to deal with, which has priority over explaining the changes to the entire GISS US record. Besides, Hansen has made it clear that the US record is of little importance anyway – so why bother.

He (Chris Mooney of The Daily Green) goes on to carefully explain reasons why climate change is to blame for a rash of intense storms in the past ten years. “Even though Dean was not ’caused’ by global warming,” he writes, “the storm is certainly consistent with the argument that there’s something going on out there that’s new – and more than a little scary.”

How can “journalists” get away with such sloppy and discordant logic?

Umm… No, do not discourage anyone, even a blind alarmist believer. There are several reasons.

1) It is impolite.
2) We are interested in the truth. You do not get a truly balanced view if you refuse to speak to people. We are not RealClimate.
3) Never underestimate the power of a convert – look at St Paul!
4) (and maybe most important) We are Auditors. If we listen, people eventually tell us all their nasty little secrets. We should be particularly interested in listening to someone arguing for AGW, even as badly as wildlifer. For instance, he commented that if urbanisation of rural US stations is a problem then it should show as a step discontinuity, not a trend. That is useful information. It indicates that when the surfacestations.org findings come out, this might be an argument laid against them. So we need to be able to explain how various stations adding more buildings, paving more ground, installing more A/C and operating more cars over the years may show up as an increasing trend rather than requiring a one-off correction.

Mann, Hansen et al have a more experienced view of the world than Wildlife. They know that they aren’t doing science, and you don’t win political arguments by talking to the opposition. You win them by avoiding talking about anything, and certainly not by talking to anyone who knows what you are talking about!

He (Chris Mooney of The Daily Green) goes on to carefully explain reasons why climate change is to blame for a rash of intense storms in the past ten years. “Even though Dean was not ’caused’ by global warming,” he writes, “the storm is certainly consistent with the argument that there’s something going on out there that’s new – and more than a little scary.”

However, there is something going on out there that’s new, and it is more than a little scary. That is, the possibility that we may be heading for another Maunder minimum or worse.

It would be ironic indeed if the misplaced alarmism from the mid-to-late ’70s were to reappear in the next decade.

He (Chris Mooney of The Daily Green) goes on to carefully explain reasons why climate change is to blame for a rash of intense storms in the past ten years. “Even though Dean was not ’caused’ by global warming,” he writes, “the storm is certainly consistent with the argument that there’s something going on out there that’s new – and more than a little scary.”

I am struck by the almost complete similarity of Hurricane Janet in 1955 to Hurricane Dean. So the takeaway question is: what made Janet a Cat 5 storm?

And by extension cast doubt on the numbers, without supporting the assertion that the failed compliance unduly biased the numbers.

Why would you think having sites that don’t meet the standards by extension casts doubt on the numbers? And even if it does so by extension, how is that asserting failed compliance unduly biases the numbers? There’s nothing to support; nobody asserted any such thing. Nobody has any idea yet of the specifics of the stations, or even if improper (non-standard) siting has much of an effect at all, either individually by station or as a whole, nor, if it does, in what ways.

How about Watts put up instrumentation far enough away from all his pet sites to be in compliance and publish the data?

Because a) the physical status of the stations is what’s being documented, not what effects the physical status has on the readings, b) it is not time for analysis yet, and c) whatever is published needs to have either the documentation or the analysis (or both) done beforehand.

Or to answer the eternal question driving from California to New York: Q:”Why aren’t we in New York?” A:”Because we’re in Kansas still.”

I accept that warming is anthropogenic because it makes the most sense in the light of the current evidence.

What would you say are the three main pieces of evidence that lead you to this view?

I’ll comment on both at once: from the standpoint of “making sense” as an explanation, and given no reason to specifically doubt the appearance that the measurements or signs are at least somewhat valid – a standpoint of “seems so” – and without getting into how warm it is, or what it means, or that it’s been trending upward for so long, or what correlation CO2 has, etc.

The below is not a discussion of “is there more warming”, it’s simply a “more warming=yes humans=more warming” discussion (or in other words, what trees, icecores, glaciers, oceans, kittycats, native vegetation, migratory birds, coastlines and the like are doing is not part of this discussion because they don’t apply, and neither do any of the other side issues, like if the original source of the heat is hotter these days, or if the magnetic field is letting in more cosmic rays, or if clouds are a positive or negative or neutral forcing. It is also not a discussion of PR tactics, politics, data access, personalities or any of that other stuff.)

==========================

a) There are increased levels of CO2 in the atmosphere. Fossil fuels and respiration create CO2. Cars, people, animals and vegetation. CO2 transfers heat and absorbs IR.
b) Cities made of asphalt and concrete that absorb heat have been built. Aside from road materials, there are buildings that absorb heat and modify wind flow, as do freeways and highways between places. All of these affect weather patterns and rainfall. More humid air holds more heat.
c) Farms that allow more heat to the ground (rather than the tops of trees or water) cover much of the world. In addition, irrigation of these farms also produces more humid air to hold heat, and the wet ground covered with crops would tend to hold the heat in better and longer.

The constant addition of water vapor and carbon dioxide into the atmosphere (and to a lesser extent, methane and ozone) due to
a) burning fossil fuels
b) number of humans
c) number of animals humans use for various purposes
d) land use changes (cities)
e) land use changes (farms)
is probably responsible for the warming trend upwards we’ve seen since we started tracking things with thermometers.

While some of the trend is undoubtedly due to increases in measurement efficiency and accuracy over the years, and there is an unknown margin of error (that fluctuates), and although it’s only a global anomaly average and has limited meaning for regional effects, it certainly tends to seem that the Earth is getting warmer on average, at least somewhat. The factors above would account for the increase in warming. We don’t fully understand all the mechanisms or how they interact, and most of the estimates are just that, but this seems to be what’s happening (warming) and the probable cause (people).

The specific effects and the degree of those defects is what’s up for debate, as are the actions to take against what we believe are the causes, in what ways, and how much to spend on the actions.

That should have been the constant addition of water vapor and carbon dioxide into the atmosphere combined with the various man-made heat absorbing materials and the effects of both upon the climate system in general, and the amount of heat in particular.

Also, it’s the specific effects and the degree of those effects (not defects lol) is what’s up for debate.

I need to ask a question about the satellite temperature measurements before I waste time researching the issue. My latest take on satellite measurements is that we have these spreads in trends for 1978 to present from the various sources: UAH ≈ 0.14 degree C/decade; RSS ≈ 0.18 degree C/decade; surface records ≈ 0.17 degree C/decade; Fu ≈ 0.19 degree C/decade; V&G ≈ 0.22 to 0.26 degree C/decade. That would appear to be a significant spread taken all together.
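For reference, the degree C/decade figures being compared here are linear trend slopes fitted to the anomaly series. A minimal sketch of how such a number is computed (the input series below is synthetic, not any of the actual records):

```python
def trend_per_decade(years, anoms):
    """Ordinary least-squares slope of anomaly vs. year,
    scaled to degrees C per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, anoms))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den

# Synthetic series warming at exactly 0.17 degree C/decade.
yrs = list(range(1978, 2008))
anoms = [0.017 * (y - 1978) for y in yrs]
print(round(trend_per_decade(yrs, anoms), 2))  # -> 0.17
```

The spread between sources, then, is a disagreement about the slope of noisy series over the same period, which is why small differences in processing can move the headline number.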

Now we also have the satellite measurements of lower troposphere, the mid troposphere and the stratosphere. As I recall the stratosphere measurements show cooling and that in direction at least is in agreement with the climate models. It is next that I become confused. The climate models predict a warmer troposphere trend than surface by approximately 1.2 times averaged over all latitudes for the troposphere. As I recall the satellite measurements do not agree with the climate models as they do not show the higher trends in the troposphere. My questions are the following:

Is the lower troposphere reading the one we normally compare directly to the surface measurements?

Is it the lower or mid troposphere trends that should by climate model calculation be 1.2 times higher than the surface temperature trends?

1. Yes, the normal comparison is between the surface reading and the lower troposphere reading

2. A key factor is that there is no “pure” satellite measurement of the lower troposphere, middle troposphere and lower stratosphere. Satellites look at a slice of the atmosphere on the horizon, which necessarily runs through all layers. So, every height is to some extent contaminated by other heights.

RSS has a nice graphic here which shows the height weighting of the different measures. For instance, TLT (lower troposphere) is mostly below 5km height but it does include some portion of the middle troposphere.
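The contamination described here is just a weighted average over height: each channel reports a mix of layers, so a cooling stratosphere drags down a mid-troposphere reading. A toy illustration (the weights and trends below are invented for illustration, not the actual RSS weighting functions):

```python
def channel_temp(layer_values, weights):
    """A satellite channel reading as a weighted average over
    atmospheric layers (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(v * w for v, w in zip(layer_values, weights))

# Invented numbers: troposphere warming, stratosphere cooling.
layer_trends = [0.20, 0.10, -0.40]   # LT, MT, stratosphere (deg C/decade)
tmt_like_weights = [0.3, 0.5, 0.2]   # a TMT-like channel sees all three
print(round(channel_temp(layer_trends, tmt_like_weights), 2))  # -> 0.03
```

With these made-up numbers, a channel whose weighting extends into the stratosphere reports only 0.03 deg C/decade even though the lower troposphere in the example is warming at 0.20 – the effect the AGW folks invoke when TMT shows less warming than the surface.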

If one says that the TMT (middle troposphere) shows less warming than the surface then the AGW folks say that the TMT is contaminated by stratospheric cooling. I think there is math which can somewhat reduce, but not eliminate, the contamination effects.

Having said all that, my impression is that the satellite data shows equal or less warming of the troposphere than the surface, including the tropics, and this weighs against classical AGW. The problems, of course, are that the data is limited in accuracy and duration and the change being sought is so small.

David, thanks for your explanation of the troposphere weighting for the levels of troposphere used in temperature measurements. I thought it was Spencer and Christy who were attempting to find a correction for the stratosphere cooling on the troposphere but that it leads to significantly less precision in the measurements.

I remain unsure, however, whether the 1.2 times faster warming predicted by climate models applies to the lower or middle troposphere, or perhaps some weighted average of the entire troposphere. My (not entirely serious) punch line was going to be that if, theoretically, the lower troposphere temperature that is compared to surface temperature records should warm 1.2 times faster than the surface, then a correction could be made for a slower warming of the surface temperature by dividing its trend by 1.2.

My understanding is that year to year, season to season and month to month comparisons of measured surface and troposphere temperatures do show a faster warming in the troposphere, but on a decadal basis it does not. The troposphere warming effect is supposed to involve the straightforward physics of the latent heat from moisture condensation in the troposphere.

I just studied your link further and it gives me some great insights into the measurement processes – thanks again.

I note in Willis’ chart that the biggest negative adjustments were done to years 1934, 1937, 1931, 1921, 1953 which were the only years capable of knocking 1998 out of the record spot.

That tips me over from the “innocent mistake” category to the “non-altruistic adjustment” category.

It is too convenient for the biggest mistakes to be made to these years only (1887 seems to be the only other outlier).

Actually, according to GISS, the post-2000 temps were incorrectly reported as too warm (above zero in my graph above), while the pre-1973 temperatures were adjusted upwards because they were incorrect the other way, reported as too cold (below zero in my graph) …

Willis, my point about your chart was this innocent error introduced selective changes across the board (some of which you had already pointed out so I didn’t mention them.)

Why did the error introduce more change to the warm years in the past record than to other years? Why was the error for 1921, 1953, 1934 and 1937 0.04°C when the error for most other years was only 0.01°C? Those just happen to be the potential years which could have moved 1998 out of the record spot.

(1931 was another warm year which wasn’t adjusted however and there is nothing special about the 1887 year.)

Your point about 1970 to 2000 having resulted in no change plays into it as well.

My point is that errors introduced do not seem to be a random collection of pluses and minuses but were almost all favourable to the AGW cause.

That makes me suspicious that someone knew about the errors all along, and since they were favourable they did nothing about it – +0.15°C post-2000, bumping 1998 to the record year, reducing the temps of other warm years, smoothing out the cooling of the 1970s, error fully corrected within days – awfully convenient.
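The observation above can be stated as a simple check: do the years that received the largest corrections coincide with the warmest years in the record? A sketch with illustrative stand-in numbers (not the actual GISS figures):

```python
def top_by(d, key, n):
    """Years ranked by key(year), largest first; return the top n as a set."""
    return set(sorted(d, key=key, reverse=True)[:n])

# Illustrative stand-in values, not the actual GISS record.
adj = {1921: -0.04, 1934: -0.04, 1937: -0.04, 1953: -0.04,
       1944: -0.01, 1960: -0.01}          # correction applied to each year
anom = {1921: 0.8, 1934: 1.2, 1937: 0.9, 1953: 0.8,
        1944: 0.3, 1960: 0.1}             # each year's anomaly

biggest = top_by(adj, key=lambda y: abs(adj[y]), n=4)
warmest = top_by(anom, key=lambda y: anom[y], n=4)
print(len(biggest & warmest))  # -> 4
```

An overlap this complete between the largest corrections and the warmest years is the kind of pattern that would be unlikely if the errors were randomly distributed; whether the real record shows it is exactly what the released station data and code would settle.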

I was able to find a recent article here that explains the modeled surface to troposphere temperature trends and compares them to the measured trends. Using the center of the actual measurement ranges and model results shows a significant discrepancy between actual measurements and theory. The article tends to minimize the discrepancies, and particularly so in terms of presenting any evidence that might minimize the connection between GHG and global warming, by emphasizing the uncertainties in not only the measurements but the model outputs. It is probably the influence of my point of view, but I seem to note more uncertainty in these issues when the experimental results go against AGW theory.

Anyway when I put all this information together I find that satellite temperature measurements have stated and published uncertainties that are significant and this is even more apparent when one attempts to use theory to relate them to surface temperatures. I would be wary of using satellite measurements to calibrate surface measurements at this time.

Figure 3 in the linked article gives the global average temperature trends for modeled versus measured trends for the surface, lower troposphere, mid troposphere to lower stratosphere and lower stratosphere. Figure 4 in the article gives the same trends for the tropics (20S-20N).

Using these figures for the indicated middle of the range, I calculated the following ratios for the lower troposphere to surface temperature trends for the tropics and global average:

For the tropics the measured ratio is 0.6 and the modeled ratio is 1.4, while for the global average the measured ratio is 0.8 and the modeled ratio is 1.2.

Yet another endorsement of open access to data and methods, from the Recommendations section of the above-linked report:

(1) The independent development of data sets and analyses by several scientists or teams will help to quantify structural uncertainty. In order to encourage further independent scrutiny, data sets and their full metadata (i.e., information about instrumentation used, observing practices, the environmental context of observations, and data-processing procedures) should be made openly available. Comprehensive analyses should be carried out to ascertain the causes of remaining differences between data sets and to refine uncertainty estimates.

Data series 1 : Raw temperature measurements
Data series 2 : Temperature after Hansen’s dodgy adjustments, and before Steve caught him at it
Data series 3 : Temperature that has now been released, unwinding the dodgy adjustments

Where you quote John Lang’s reference to “the errors introduced” he is talking about the transition from Series 1 to Series 2.
Where you say “The new figures say that 1934 and other early years were actually warmer than previously claimed.” you are talking about the transition from Series 2 to Series 3.
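The three series above, and the two transitions between them, can be sketched as a toy example. All temperatures and the step size here are hypothetical; the only thing taken from the discussion is the shape of the error, a constant step wrongly applied to readings from January 2000 onward:

```python
# Toy illustration of the three data series discussed above.
# Temperatures and the 0.5 deg step are HYPOTHETICAL; the point is the
# shape of the Y2K error: a constant offset wrongly applied to all
# readings from 2000 onward.

raw = {1998: 14.25, 1999: 14.0, 2000: 13.75, 2001: 14.5}  # Series 1

STEP = 0.5  # hypothetical step error, deg C

# Series 2: the buggy adjustment adds the step to years >= 2000
series2 = {yr: t + (STEP if yr >= 2000 else 0.0) for yr, t in raw.items()}

# Series 3: the correction unwinds the step; in this toy example
# (with no other adjustments) it recovers Series 1 exactly
series3 = {yr: t - (STEP if yr >= 2000 else 0.0) for yr, t in series2.items()}

assert series3 == raw
print(series2[2000] - raw[2000])  # the spurious post-2000 warming
```

Because the step only affects post-1999 values, it changes year-to-year rankings across the 2000 boundary (Series 1 vs. Series 2) without touching the pre-2000 record.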

I think my issue with the recent politicizing of this whole issue is that we’ve known about this since the 70s, when we all watched Carl Sagan talk about it on Cosmos. It wasn’t an issue to the Democrats when they had a chance to push the Kyoto treaty through Congress (the Senate voted against it 95-0, Democrats included). Now all of a sudden it’s an issue. Why? Politics: everyone needs a cause to stay politically viable. Science isn’t about politics. If we really have a problem we need to know about it, so we need accurate, unbiased data and a solution. We need to test a theory, not start from conclusions, which is forbidden in science. We need to study cause and effect in real observations, not just computer predictions. The computers can’t predict the weather accurately more than a few days in advance; what the heck makes us think they can tell us what it will be in 50 years? Let’s get our heads out of our political A## and do some real science. If that solution is to stop burning oil, then we need to stop burning oil and go nuclear. If it’s an issue with little green men from Mars, we need to nuke Mars. Right now I have no idea who to believe, or why.

I did some searching on various blogs about the revision, and people basically said 1998 was still the hottest year because the data are reported as anomalies from a 30-year mean, and since the past 30 years have been warmer, the absolute temperature is actually higher. (Yes, I realize it’s a circular argument.) I just want to know, on an absolute scale, whether 1934 or 1998 was hotter. If it’s 1934, why is this article, written today, still saying 1998? Surely a reporter covering global warming should know about the revision in the data. I have only heard it reported once in the mainstream media; that was on Fox News, and it got me investigating. I suppose it’s fine that nobody reports it because they deem it non-news, but to keep touting discredited data in the story is wrong.

Here’s how Hansen’s error matters. The NOAA Magazine for August 28th 2007 has an article headlined

GREENHOUSE GASES LIKELY DROVE NEAR-RECORD U.S. WARMTH IN 2006

in which they say

The annual average temperature in 2006 was 2.1 degrees F above the 20th Century average …

when in fact it was 1.85°F above the average …
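For scale, the difference between the two quoted figures is 0.25°F, or roughly 0.14°C, which is consistent with the roughly 0.15 deg C impact of the step error mentioned earlier. A one-line check:

```python
# Size of the U.S. correction implied by the two figures quoted above.
diff_f = 2.1 - 1.85          # difference in deg F
diff_c = diff_f * 5.0 / 9.0  # convert a temperature *difference* to deg C
print(round(diff_c, 2))      # -> 0.14
```

Note that a temperature difference converts with the 5/9 factor alone; the 32-degree offset only applies to absolute temperatures.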

It’s a very “all hell is breaking loose” article, driven by the incorrect numbers reported (and finally corrected) by Hansen.

My question is … how can they still be peddling this bovine waste product weeks after acknowledging the error … yeah, they’re real scientists all right, up to date on the latest findings …

w.

PS – I’ve written to the two contact people listed in the article as follows:

Dear friends:

Don’t you read your own press releases? Don’t you read the papers? The error in the NOAA US temperature record has been all over the news, and despite that, you are publishing incorrect information.

It was bad enough that you at NOAA didn’t discover the mistake yourselves … why are you compounding it by repeating the erroneous data in your magazine?

This is a serious question. How could you have missed this? I lost a lot of faith in NOAA when the error was discovered, but your continued hyping of the bogus information after the news of it has gone ’round the world firmly puts you in the non-scientific camp.

Do you plan to publish an erratum or otherwise acknowledge your erroneous claims, or do you plan to just pretend it never happened?

An answer would be greatly appreciated. A published retraction, in the same magazine, would salvage your scientific reputation. What action do you plan to take?

All the best,

w.

PS – an honest examination of the problem would have to include a complete reanalysis of the computer simulations of U.S. temperatures vis-à-vis the greenhouse and El Niño forcings. As it stands, those simulations are built on wildly incorrect data, and thus cannot be trusted. Do you plan to do that as well?

Willis Eschenbach’s graph of the changes looks too much like Mann’s “Hockey Stick” graph to me. I’m not one to get caught up in conspiracy theories, but I can’t help but wonder if this is not a coincidence. Could it be that “Hansen’s Y2k Error” was a deliberate attempt to massage the numbers to fit a false paradigm certain people were determined to project?

Steve: I doubt that it was deliberate. It was probably just sloppiness. However, I think that there’s a bias in their alertness to error detection: if the error had gone the other way, I have no doubt that they’d have cross-examined the data and code to try to locate it.