Do We Care if 2010 is the Warmist Year in History?

Guest Post by Ira Glickstein
According to the latest from NASA GISS (Goddard Institute for Space Studies), 2010 is shaping up to be “the warmest of 131 years”, based on global data from January through November. They compare it to 2005 “2nd warmest of 131 years” and 1998 “5th warmest of 131 years”.

We won’t know until the December data is in. Even then, given the level of noise in the base data and the wiggle room in the analysis, each of which is about the same magnitude as the Global Warming they are trying to quantify, we may not know for several years. If ever. GISS seems to analyze the data for decades, if necessary, to get the right answer.

A case in point is the still ongoing race between 1934 and 1998 to be the hottest for US annual mean temperature, the subject of one of the emails released in January of this year by NASA GISS in response to a FOIA (Freedom of Information Act) request. The 2007 message from Dr. Makiko Sato to Dr. James Hansen traces the fascinating story of that hot competition. See the January WUWT and my contemporary graphic that was picked up by several websites at that time.

[My new graphic, shown here, reproduces Sato’s email text, including all seven data sets, some or all of which were posted to her website. Click image for a larger version.]

The Great Hot 1934 vs 1998 Race
1) Sato’s first report, dated July 1999, shows 1934 with an impressive lead of over half a degree (0.541ºC to be exact) above 1998.

Keep in mind that this is US-only data, gathered and analyzed by Americans. Therefore, there is no possibility of fudging by the CRU (Climategate Research Unit) at East Anglia, England, or bogus data from Russia, China, or some third-world country. (If there is any error, it was due to home-grown error-ists :^)

Also note that total Global Warming, over the past 131 years, has been, according to the IPCC, GISS and CRU, in the range of 0.7ºC to 0.8ºC. So, if 1934 was more than 0.5ºC warmer than 1998, that is quite a significant percentage of the total.

At the time of this analysis, July 1999, the 1998 data had been in hand for more than half a year. Nearly all of it was from the same reporting stations as previous years, so any adjustments for relocated stations or those impacted by nearby development would be minor. The 1934 data had been in hand for, well, 65 years (eligible to collect Social Security :^) so it had, presumably, been fully analyzed.

Based on this July 1999 analysis, if I was a betting man, I would have put my money on 1934 as a sure thing. However, that was not to be, as Sato’s email recounts.

Why? Well, given steadily rising CO2 levels, and the high warming sensitivity of virtually all climate models to CO2, it would have been, let us say inconvenient, for 1998 to have been bested by a hot golden oldie from over 60 years previous! Kind of like your great grandpa beating you in a foot race.

2) The year 2000 was a bad one for 1934. November 2000 analysis seems to have put it on a downhill ski slope that cooled it by nearly a fifth of a degree (-0.186ºC to be precise). On the other hand, it was a very good year for 1998, which, seemingly put on a ski lift, managed to warm up by nearly a quarter of a degree (+0.233ºC). That confirms the Theory of Conservation of Mass and Energy. In other words, if someone in your neighborhood goes on a diet and loses weight, someone else is bound to gain it.

OK, now the hot race is getting interesting, with 1998 only about an eighth of a degree (0.122ºC) behind 1934. I’m still rooting for 1934. How about you?

3) Further analysis in January 2001 confirmed the downward trend for 1934 (which lost an additional 26th of a degree) and the upward movement of 1998 (which gained an additional 21st of a degree), tightening the hot race to a 28th of a degree (0.036ºC).

Good news! 1934 is still in the lead, but not by much!

4) Sato’s analysis and reporting on the great 1934 vs 1998 race seems to have taken a hiatus between 2001 and 2006. When the cat’s away, the mice will play, and 1998 did exactly that. The January 2006 analysis has 1998 unexpectedly tumbling, losing over a quarter of a degree (-0.269ºC), and restoring 1934‘s lead to nearly a third of a degree (0.305ºC). Sato notes in her email “This is questionable, I may have kept some data which I was checking.” Absolutely, let us question the data! Question, question, question … until we get the right answer.

5) Time for another ski lift! January 2007 analysis boosts 1998 by nearly a third of a degree (+0.312ºC) and drops 1934 a tiny bit (-0.008ºC), putting 1998 in the lead by a bit (0.015ºC). Sato comments “This is only time we had 1998 warmer than 1934, but one [on?] web for 7 months.”

6) and 7) March and August 2007 analysis shows tiny adjustments. However, in what seems to be a photo finish, 1934 sneaks ahead of 1998, being warmer by a tiny amount (0.023ºC). So, hooray! 1934 wins and 1998 is second.

OOPS, the hot race continued after the FOIA email! I checked the tabular data at GISS Contiguous 48 U.S. Surface Air Temperature Anomaly (C) today and, guess what? Since the Sato FOIA email discussed above, GISS has continued their taxpayer-funded work on both 1998 and 1934. The Annual Mean for 1998 has increased to 1.32ºC, a gain of a bit over an 11th of a degree (+0.094ºC), while poor old 1934 has been beaten down to 1.2ºC, a loss of about a 20th of a degree (-0.049ºC). So, sad to say, 1934 has lost the hot race by about an eighth of a degree (0.12ºC). Tough loss for the old-timer.
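For readers who want to audit the arithmetic, the revision-by-revision deltas quoted above can be run as a quick sketch. The Mar-Aug 2007 net adjustment is inferred from the reported 0.023ºC gap, and the fractional-degree figures are the rounded values given in the post:

```python
# Reconstructing the 1934-vs-1998 US annual-mean "hot race" from the
# deltas quoted above (all values in ºC; positive = warmer).
revisions = [
    ("Nov 2000",     -0.186, +0.233),
    ("Jan 2001",     -0.038, +0.048),  # ~1/26th and ~1/21st of a degree
    ("Jan 2006",      0.000, -0.269),
    ("Jan 2007",     -0.008, +0.312),
    ("Mar-Aug 2007", +0.038,  0.000),  # net change inferred from the 0.023 gap
    ("post-email",   -0.049, +0.094),
]

lead = 0.541  # July 1999: 1934 ahead of 1998 by 0.541 ºC
print(f"Jul 1999        1934 leads 1998 by {lead:+.3f}")
for label, d1934, d1998 in revisions:
    lead += d1934 - d1998
    print(f"{label:14s}  1934 leads 1998 by {lead:+.3f}")
# The final line shows 1934 trailing by about 0.120 ºC, matching the post.
```

Each intermediate value reproduces the gaps reported in items 2) through 7) above (0.122, 0.036, 0.305, -0.015, 0.023, -0.120).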

Analysis of the Analysis

What does this all mean? Is this evidence of wrongdoing? Incompetence? Not necessarily. During my long career as a system engineer I dealt with several brilliant analysts, all absolutely honest and far more competent than me in statistical processes. Yet, they sometimes produced troubling estimates, often due to poor assumptions.

In one case, prior to the availability of GPS, I needed a performance estimate for a Doppler-Inertial navigation system. They computed a number about 20% to 30% worse than I expected. In those days, I was a bit of a hot head, so I stormed over and shouted at them. A day later I had a revised estimate, 20% to 30% better than I had expected. My conclusion? It was my fault entirely. I had shouted too loudly! So, I went back and sweetly asked them to try again. This time they came in near my expectations and that was the value we promised to our customer.

Why had they been off? Well, as you may know, an inertial system is very stable, but it drifts back and forth on an 84 minute cycle (the period of a pendulum the length of the radius of the Earth). A Doppler radar does not drift, but it is noisy and may give erroneous results over smooth surfaces such as water and grass. The analysts had designed a Kalman filter that modeled the error characteristics to achieve a net result that was considerably better than either the inertial or the Doppler alone. To estimate performance they needed to assume the operating conditions, including how well the inertial system had been initialized prior to take off, and the terrain conditions for the Doppler. Change assumptions, change the results.
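For the technically curious, the flavor of that Doppler-inertial blend can be sketched as a toy one-dimensional Kalman filter. Everything here (the 2 m/s Schuler amplitude, the noise variances, the constant ground speed) is an invented illustration, not the actual system:

```python
import math, random

random.seed(42)

dt = 1.0                 # seconds per filter step
schuler = 84.0 * 60.0    # 84-minute Schuler period, in seconds
true_v = 200.0           # true ground speed, m/s (constant for this sketch)

def inertial_v(t):
    # Inertial output: smooth and unbiased on average, but it drifts back
    # and forth with the 84-minute Schuler period (2 m/s amplitude here)
    return true_v + 2.0 * math.sin(2.0 * math.pi * t / schuler)

x, P = 0.0, 1000.0       # blended velocity estimate and its variance
Q, R = 0.05, 25.0        # assumed inertial-drift and Doppler-noise variances

for k in range(1, 3600):
    # Predict: propagate the estimate with the inertial velocity change
    x += inertial_v(k * dt) - inertial_v((k - 1) * dt)
    P += Q
    # Update: correct with the noisy but drift-free Doppler measurement
    z = true_v + random.gauss(0.0, math.sqrt(R))
    K = P / (P + R)
    x += K * (z - x)
    P *= (1.0 - K)

print(f"blended estimate: {x:.1f} m/s (true: {true_v} m/s)")
```

The point of the exercise: change the assumed Q and R (the analysts' assumptions about drift and noise, i.e. the initialization and terrain conditions) and the blended answer changes, which is exactly the sensitivity described above.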

Conclusions

Is 2010 going to be declared warmest global annual by GISS after the December data comes in? I would not bet against that. As we have seen, they keep questioning and analyzing the data until they get the right answers. But, whatever they declare, should we believe it? What do you think?

Figuring out the warmest US annual is a lot simpler. Although I (and probably you) think 1934 was warmer than 1998, it seems someone at GISS, who knows how to shout loudly, does not think so. These things happen and, as I revealed above, I myself have been guilty of shouting at analysts. But, I corrected my error, and I was not asking all the governments of the world to wreck their economies on the basis of the results.

204 thoughts on “Do We Care if 2010 is the Warmist Year in History?”

So who is going to make the call that 2010 was the warmest year on record?

There will be howls of derision in news rooms. Come on Hansen and Romm, this is your chance to hammer home another nail into the AGW coffin. Writing in the UK, I am ashamed to admit it will be the Met Office, but their credibility is at an all time low, so they have nothing to lose, apart from their jobs.

2011 will start cold, partly due to La Nina. I am sure GISS will emphasize this reason ad nauseam and add imaginary tenths of degrees to account for this effect.

However, I haven’t seen any subtraction of the El Nino effect from 2010 to make a fair comparison with other years. This would throw 2010 far behind 2005 and several other years and make 2010 pretty unspectacular.

With a recent strongish El Nino, second only to the 1997/98 one, global temperatures would be expected to be at similarly high levels. That 2010 is only just able to beat 1998 (if it does) demonstrates how insignificant the warming has been over recent decades. I would have expected 2010 to be at least 0.1ºC higher just to maintain the assumption of CO2 contributing 0.2ºC per decade. Only just matching 1998, or even falling short of it, suggests the real trend is at most half of that claim. Though, as you will see below, it is not surprising that GISS will likely be the only record to show 2010 as warmest, if any does.

Then there is the problem of experimental error, and a data series that has undergone so many changes it would be almost impossible to replicate. I don’t see any control comparisons between the different techniques and instruments spliced together in the record. Whichever ocean technique shows the most warming, let’s use that one, and apply it to recent years to raise temperatures further. Extrapolating land temperatures out over the oceans, and not just where no data exist, means the record is methodologically consistent for only the few years in which this has been done throughout the data set (i.e. not many at all).

I took a quick peek at the GISS website to try to understand how they crank out their numbers, and even a cursory glance was daunting. Has there been a clear presentation of the methodology somewhere? I would think that once you nail down the method, no matter how many times you run the analysis the results should be the same. If the assumptions regarding initial conditions are so fungible as to allow a reversal of the relative values of the anomalies at will, you don’t have a scientific analytical tool, you have a propaganda tool.

So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface.

Not the global temperature record where 1998 has always been ahead of 1934.

The US temperature record is rather a side issue, in that regard, but nevertheless if you don’t trust the US temperature record why not verify it yourself by taking the station data from source and seeing what it shows overall?

Surely without doing that all you can say is you don’t know whether 1934 was warmer or not. One idea would be to get hold of the station data for 1934 and the station data for 1998 and compare an area weighting of them, taking into account any time of observation biases.
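For what it’s worth, the area-weighting step suggested here is easy to sketch. On a regular grid, weighting each station by the cosine of its latitude approximates equal-area averaging; the station values below are made up purely for illustration:

```python
import math

# Hypothetical (latitude, annual-mean anomaly in ºC) station pairs,
# invented only to show the mechanics of cos(latitude) area weighting.
stations = [(30.0, 1.2), (40.0, 0.8), (48.0, 0.5)]

weights = [math.cos(math.radians(lat)) for lat, _ in stations]
area_mean = (sum(w * anom for w, (_, anom) in zip(weights, stations))
             / sum(weights))
naive_mean = sum(anom for _, anom in stations) / len(stations)

print(f"naive mean: {naive_mean:.3f} ºC  area-weighted: {area_mean:.3f} ºC")
```

Because lower-latitude cells cover more area, the weighted mean here comes out slightly above the naive station average; with real data the two can differ in either direction.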

Also you’ve provided a timeline of changes, but what’s missing is any description of what changed in the GISTEMP analysis. These changes are often documented. For example, they did switch to USHCN at some point. Wouldn’t it be a good idea to mention, alongside the 1998 and 1934 changes, what exactly changed in the GISTEMP analysis at that point?

The method for analyzing temperatures means the records remain fluid. Because this is done with a model, each time the model is rerun you will get different results for all periods. This is not like a spreadsheet, where the records are fixed once the output is complete, as they would be under accounting practices. Quoting three decimal places when they are starting from whole-number readings is also cheating. The end result is that we do not know what the temperature has done for the last one hundred and fifty years!

They just want to save the planet; they think they know CO2 is the problem, so they adjust the data until it fits their preconceptions. They should be relieved of their duties to find fulfillment as street fundraisers for Greenpeace.

Probably should be ‘warmest’ in the title – unless NASA are coming clean about their political activism.

More to the point – someone needs to explain how the Vikings buried their farms under the permafrost, and why tree-mometers cannot reproduce current temperatures, before making unscientific declarations about warm, warmer, and warmest years.

Sorry to write this in 3 comments, you can combine them if you want or stagger them with other comments.

Hansen 2001 titled “2001: A closer look at United States and global surface temperature change” is extremely relevant here. This is the documentation that covers the change made around 2000 that resulted in the US 1934 value going down and 1998 value going up. The abstract reads:

“Changes in the GISS analysis subsequent to the documentation by Hansen et al. [1999] are as follows: (1) incorporation of corrections for time-of-observation bias and station history adjustments in the United States based on Easterling et al. [1996a], (2) reclassification of rural, small-town, and urban stations in the United States, southern Canada, and northern Mexico based on satellite measurements of night light intensity [Imhoff et al., 1997], and (3) a more flexible urban adjustment than that employed by Hansen et al. [1999], including reliance on only unlit stations in the United States and rural stations in the rest of the world for determining long-term trends.”
http://pubs.giss.nasa.gov/cgi-bin/abstract.cgi?id=ha02300a

The content of the paper shows that the primary reason 1998 went up relative to 1934 was the correction for time-of-observation bias. The explanation given is:

“However, there have been changes of the time of observation by many of the cooperative weather observers in the United States [Karl et al., 1986]. Furthermore, the change has been systematic with more and more of the measurements by United States cooperative observers being in the morning, rather than the afternoon. This introduces a systematic error in the monthly mean temperature change.”

If that is true it will have added a cooling bias in the data since 1934. So it would seem the GISTEMP adjustment that made 1998 warmer relative to 1934 is valid. At least I see no reason to presume it isn’t. By all means the magnitude could be checked but surely we can only conclude something is wrong after checking?
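The mechanism behind that time-of-observation bias can be demonstrated with a toy simulation. This is not the Karl et al. [1986] adjustment, just an illustration of why resetting a min/max thermometer in the afternoon rather than the morning shifts the recorded monthly mean (all numbers invented):

```python
import math, random

random.seed(1)

days = 60
# Hourly temperatures: a 10 ºC diurnal cycle (peak around 3 pm) around a
# mean that wanders from day to day; the day-to-day wander is what
# creates the observation-time bias.
day_means = [15.0 + random.gauss(0.0, 3.0) for _ in range(days)]
temps = [day_means[h // 24]
         + 5.0 * math.cos(2.0 * math.pi * ((h % 24) - 15) / 24.0)
         for h in range(days * 24)]

def mean_of_mids(reset_hour):
    """Average of (Tmax+Tmin)/2, where the min/max thermometer is read and
    reset once a day at reset_hour, so each 24 h window ends at that hour."""
    mids = []
    for d in range(1, days):
        window = temps[(d - 1) * 24 + reset_hour: d * 24 + reset_hour]
        mids.append((max(window) + min(window)) / 2.0)
    return sum(mids) / len(mids)

print(f"afternoon (5 pm) reset: {mean_of_mids(17):.2f} ºC")
print(f"morning   (7 am) reset: {mean_of_mids(7):.2f} ºC")
# The afternoon reset reads warmer: a hot afternoon can contribute to two
# consecutive daily maxima, the systematic error Hansen describes.
```

So a network that gradually shifted from afternoon to morning observation would show a spurious cooling even with no change in climate, which is the direction of the correction quoted above.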

Of what possible use is this kind of data from GISS or others? I mean other than providing fodder for slow news days, and grist for the political mill? Nobody gives a hoot’n hell about 1/10th degree or even 1 whole degree. Nor does anyone (except possibly their immediate families) care what a bunch of obscure, number crunching, cubicle rats say about the weather/climate a hundred years from now. Bunch of wannabe 007’s is all.

I’m curious as to who has the job of continually, over years and years, going over the temperature records to apply adjustments. How do they do this? Is there a baseline unadjusted data set that they then try a new set of adjustments on, or do they readjust the already adjusted data? How have the adjustments changed over time, and why? Did someone suddenly realise that the last set of adjustments had left out some important factor and it all had to be done again? Surely, there can’t be new 1934 data coming into the data set, so WUWT?

I didn’t spend much time doing it, but I just clicked on all the “Pro AGW views”
that Anthony lists. Only a few had recent threads, of the recent threads I don’t recall any comments saying “Merry Christmas”.

It may seem obvious but I have never seen it explicitly stated so I say it here. It seems to me that there are five completely independent ways to become a sceptic.

1/ Science. A person can examine the science of AGW theory and become sceptical of the science.
2/ Predictions. A person can take the science purely at face value but become sceptical when the measured global temperatures can be seen not to match those predictions.
3/ Data sets. A person can become a sceptic simply by losing confidence in the global temperature data sets, after noticing the uncertainties in the data collection and “corrections”.
4/ Dirty tricks. A person could rationally ignore the science and the temperature graphs and become sceptical solely on the basis of the known fraud, dirty tricks and bad faith of some of the main AGW crew. Loss of trust.
5/ Money. It would be totally rational to be sceptical of a group of scientists funded by (say) the tobacco industry and to consider any of their output as possibly lacking independence. Similarly, it would be entirely rational to become sceptical of a group of scientists who are openly competing for grant money from pro-global-warming funding bodies. One does not need to understand science to understand conflict of interest.

The science is the most difficult and demanding pathway so I think many people wouldn’t come at it directly from this route – which would explain the AGW frustration that no one is listening to their “science is settled” mantra anymore. There are so many easier routes by which they have lost credibility. (I note it may also explain why less educated people are less impressed by the “science is settled” mantra)

My own personal route to scepticism for example came first through pathway 4, through first doubts after the release of the Climategate emails, to outright scepticism after reading the Case Study 12 from D’Aleo and Watts (2010) “Hide this after Jim checks it” which you allude to. The idea that you can make undocumented changes to “raw” data and still call it “raw” was quite shocking.

I guess due to pathways 3, 4 and 5 this “hottest year ever” nonsense has lost traction.

If I had the slightest thought that their data was meaningful a pronouncement that 2010 was the hottest year on record would be worth an “Oh that is interesting”.
Given what we know from circumstantial evidence, such as the iceman and other markers of much warmer temperatures (for example, evidence of Arctic forests where today there is tundra), their “record” is an eye blink in climate history.

Also, given that I trust their data quality and analysis about as much as I trust a used car salesman’s pronouncement on the condition of a used car, I frankly couldn’t care less.

They can pronounce anything they like; it is garbage out, from complex, manipulated analysis of input data that is highly questionable in the first place. Their supposed precision is completely ridiculous in view of real-world error budgets for the systems they gather the data from. My response will be exactly the same as I would have to a dog that barks all the time: ignore them.

The surface temperature record is a two-dimensional record of a three-dimensional system, which means that, on its own, it has limited scientific significance. The fact that this part of the biosphere is where humans happen to live means that this significance is hugely exaggerated.

It is also, on geological timescales, extremely short, covering as it does a mere 1% of the current inter-glacial climate period. Trying to draw definitive conclusions from this is, in layman’s terms, akin to discerning the entire plot of “24” by watching 15 minutes of one episode.

The methodology itself, using point data to extrapolate temperature trends over wide areas, is also extremely dubious IMO.

On top of all that, both the HadCrut and GISTEMP records are overseen by people who are strong proponents of CAGW. They EXPECT their data to show a strong warming signal. If it doesn’t, there must be a problem with the data – hence the adjustments like those outlined above, all made to emphasise the warming trend. This is “confirmation bias” at its very finest.

Onion says:
December 25, 2010 at 4:54 pm
So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface. …

If it took GISS over a decade to figure out the 2% of the Earth’s surface that they happen to live in and where they have some control over the measurements, and if they seem confused, at best, how can we expect them to get the remaining 98% right?

The US temperature record is rather a side issue, in that regard, but nevertheless if you don’t trust the US temperature record why not verify it yourself by taking the station data from source and seeing what it shows overall?

Well, by paying my taxes I have funded the experts at GISS to do that job. They work for me (and you) and that gives us the right and duty to comment on their work. And, it is not the temperature record that I doubt (that much) but the fact that the same 1934 data record, in hand from before I was born, can be analyzed and re-analyzed seven times to produce swings of 20% to 30% of what they claim to be measuring. And the same 1998 record, taken by modern instruments a bit over a decade ago, is subject to even larger swings as they repeatedly analyze it. If your tailor made you a tie that was 20% too short, along with a shirt that was 30% too long, would you trust him to make a whole suit?

Fine analysis and all, but does this not show the pure folly, insignificance and politics of this whole scheme? For lack of a better phrase, this is just plain stupid. Not this post pointing it out, but the scheming going on at these institutions, when all we are talking about are tenths of a degree, all to prove what exactly?

“But, whatever they declare, should we believe it? What do you think?”

This is a holiday joke, right? Given all the “adjustment games” that NASA has played over the years, plus all the obviously biased rhetorical CRAP that Hansen has been spewing, who of all the whos that have been following this stuff could possibly give a damn what NASA is saying today? I think it is gonna be very funny when the conservatives call ole’ Jim and his cronies into hearings this next year. LOL.

If the data set says this is the warmest year ever that data set is garbage.

I am extremely unimpressed with the retroactive fudging of data in the historical record, especially when no attempt is made to seriously correct warming-bias errors known to be larger than the entire signal. Why should we give such rubbish the courtesy of even discussing it?

I can’t believe anyone with even rudimentary familiarity with scientific method can tolerate the kind of crap that frequently comes out of GISS. And to add insult to injury they get paid my tax money to ignore and circumvent honest science.

Even climategate-CRU has better, more honest data than GISS, and no, 2010 will not be warmer than 1998.
It is about time that Americans start to simply ignore GISS, and acknowledge that their numbers have no connection to the real world out there. Just use CRU (maybe, one day, they might even learn how to adjust for UHI) and UAH (or RSS).
So report like this:
1) UAH says:
2) CRU says:
3) RSS confirms 1), or not
4) We do not trust what our administration tells us, even less than we trust 2)

Is this much re-analysis done for all temperatures all over the world, as was done for the continental US for the years 1934 and 1998? And if we do not know for sure, in 2010, whether 1934 was warmer than 1998, will we have to wait until 2086 to find out if 2010 beat 1998 globally? I somehow get the impression that the GISS people are between a rock and a hard place. It seems to me as if they want to prove 1998 was hotter than 1934. However, if they do too good a job of this, then 1998 may end up beating 2010, and they do not want that either! Or am I misreading things?

Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!

If NASA was Merrill Lynch or whatever…this kind of fiddle faddling with data and inputs in the financial world…would land somebody in jail. …

Chris

Thanks for the kind words Chris. I was thinking the same thing. If any corporation “cooked the books” to show a recent year as more profitable than an older year to lead investors to think yet higher profits were in the offing, and if it then turned out their original data and analysis showed the opposite, stockholders and customers would run the other way.

Ira, you have beautifully encapsulated what is wrong with the public face of climate science today. As a PP said, if anyone did this in the financial world, they would be (or should be) in jail. I would also add, if anyone did this (up till 15 or 20 years ago) in any field of science, they would be universally condemned for scientific fraud. Today, not so much, it seems.

Most people will never be able to grasp the intricacies of ‘the science’, just as they do not comprehend all the ins and outs of financial fraud. But, they are pretty good at working out when they have been screwed. It looks like the IPCC Ponzi scheme’s chickens are finally coming home to roost.

Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!

So tell us, what are those consequences? Longer growing seasons so we can better feed everyone on the planet? The greening of the Sahara, as occurred during the Holocene Climate Optimum?

Just what are these consequences that could be catastrophic for all of us?

Onion Quotes Hansen:
“However, there have been changes of the time of observation by many of the cooperative weather observers in the United States [Karl et al., 1986]. Furthermore, the change has been systematic with more and more of the measurements by United States cooperative observers being in the morning, rather than the afternoon. This introduces a systematic error in the monthly mean temperature change.”

The problem solved was created by sheer idiocy, and the solution compounds the idiocy. The problem is that the persons who recorded temperatures did not all do so at the same time of day. The fact that such a problem exists shows that the persons in charge of collecting data really did not give a damn about the data, or they would have trained their data collectors properly. Because they did not train them regarding time of day, they probably did not train them regarding siting either. In other words, for lack of uniform standards, the data is sh*t. It always has been and always will be. But rather than admit that his glorious science is based on worthless data, what does Hansen do? He decides that he will correct all those time-of-day recording errors in one fell swoop.
Fortunately for Hansen, it is possible to do this because the errors are systematic; that is, everyone who made the error made exactly the same error! Lucky Hansen and lucky us! He will use a little program that he wrote, and that will make everything hunky-dory.

Onion, you cannot possibly believe this b*llsh*t. Were you never conned out of your lunch money by an older kid at school? You know, the kind of kid who just takes pride in being sleazy and bullying younger kids. Hansen writes in exactly the same way that the school yard con artist talks.

Hugh Pepper says:
December 25, 2010 at 7:08 pm
“Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!”

You missed out the most important part of the mantra, which is that this warming is anthropogenic in origin. Now go practice the gospel some more.
As for the real data – I’m sure you mean unadjusted – let’s just look at this.

Anything is possible says:
December 25, 2010 at 6:17 pm
“The surface temperature record is a two-dimensional record of a three-dimensional system, which means that, on its own, it has limited scientific significance. The fact that this part of the biosphere is where humans happen to live means that this significance is hugely exaggerated.”

You are right, but you missed the most important fact, namely that the Nyquist theorem is grossly violated in all dimensions, and thus, computing an average has no meaning. But that’s just signal processing nitpicking…

I don’t understand why anyone now bothers with surface temperature stations, when there has been continuous global satellite coverage for the last thirty years. The UAH near-surface global temperature average, updated daily, looks to me like it places 2010 somewhere between 2nd and 4th hottest, behind 1998 certainly, and very close to 2005 and 2009. If one takes the last 13 years and fits a straight-line regression through them, one gets no increase in temperature over that time. The warmists cry that we must act drastically to hold the global temperature increase to less than 2ºC by 2100. Well, it looks like the first decade of the century, anyway, isn’t going to contribute towards any increase. I will watch the next 30 years with interest.
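That flat-regression claim is easy to reproduce once you have annual anomalies in hand. A minimal least-squares sketch follows; the anomaly values below are invented placeholders chosen only to illustrate the calculation, not actual UAH data:

```python
# Ordinary least-squares trend over a 13-year window.
# NOTE: these anomalies are made-up illustrative numbers, NOT real UAH values.
years = list(range(1998, 2011))
anoms = [0.42, 0.10, 0.11, 0.20, 0.31, 0.28, 0.20,
         0.34, 0.26, 0.28, 0.05, 0.26, 0.41]

n = len(years)
mx = sum(years) / n
my = sum(anoms) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, anoms))
         / sum((x - mx) ** 2 for x in years))
print(f"trend {years[0]}-{years[-1]}: {slope * 10:.3f} ºC/decade")
# a few hundredths of a degree per decade: essentially flat over this window
```

With a strong El Niño year at the start of the window, a short regression like this is very sensitive to the chosen endpoints, which is worth keeping in mind whichever side of the argument one is on.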

Hugh Pepper says:
December 25, 2010 at 7:08 pm (Edit)
Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. …

While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities. The questions are: 1) How much? (I’d say 0.4 to 0.5ºC, about 30% to 40% short of the 0.7 to 0.8ºC claimed by the warmists) and, 2) How much of that is due to human-caused carbon gasses and land use? (I’d say perhaps 0.1ºC, while the warmists say most of it).

Personally, I happen to be a lukewarmer-skeptic. Please stick around and you will understand why I (and most others here) do not think there is or has been any kind of “tipping point” or crisis, but we should take reasonable, market-driven action to reduce carbon gas emissions and energy waste. While some warmists like to talk about doing “green” things (usually by government edict), there are others who actually do some of those things voluntarily. For instance, my wife and I share a hybrid Prius and an electric golf cart and I put 40-50 miles on my bicycle each week. We recycle to the max, have a well-insulated house and moderate our cooling and heating, etc.

Werner Brozek says:
December 25, 2010 at 7:08 pm
… will we have to wait until 2086 to find out if 2010 beat 1998 globally? … the GISS people … want to prove 1998 was hotter than 1934. However if they do a too good of a job at this, then 1998 may end up beating 2010, and they do not want this either! Or am I misreading things?

No and no. You do not misread things, and they won’t be able to wait till 2086 to discredit 1998 or 2010, which they seem to have artificially inflated. They will have to start discrediting them within the coming decade if we enter a Dalton-like minimum based on David Archibald’s ideas. THANKS for your great comment!

Tactics, people, tactics. Isn’t the point that global warming isn’t a problem? Who claims it isn’t happening?

It’s been getting warmer for a century or two and if that continues for another century or two it’ll be as warm as it was when the Vikings colonized Greenland. That’s not a catastrophe! The recovery from the Little Ice Age may stop but that’s not the basis of our argument.

The alarmists bear the onus probandi that global warming will become a problem. This is why they talk about “tipping points” and the like. Don’t let them off the hook by giving a rip about whether the Earth is merely getting warmer.

The alarmists are right only if the warming ACCELERATES. If the warming merely continues but doesn’t become a crisis, then the alarmists are wrong. They want everyone to make heroic efforts to prevent something. What if that something is good? What if Global Warming just means improved agriculture?

While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities.
——————————————————————————
I don’t think that generalisation is true, especially the second part.

Here in Australia, where the weather (oops, no – climate) is supposed to be getting hotter and hotter, and this is supposed to be very, very bad if not fatal;
Well ——–

There is a continued, decade-or-more-long population drift towards the tropical north, the top end of our fine continent.
Are Australians all mad, this could lead you to ask?

Well no, is the very truthful scientific answer; it’s just that we are not as yet truly acclimatised.
We alien Europeans, in our ignorance, believe in our very non-scientific way that moving north takes us much closer to the cold, cold Arctic circle, and that moving to the cold is our salvation.

In our great ignorance, we may be quite right.
Moving to the warm north may yet be our salvation.

21 Dec: Columbia Uni: James Hansen: Q&A with Bill McKibben, cofounder and global organizer, 350.org
Hansen: We scientists create a communications problem by speaking about average global warming in degrees Celsius. Global warming in degrees Fahrenheit is almost twice as large (exact factor is 1.8) and warming is about twice as much over land (where people live!) than over ocean…
http://www.columbia.edu/~jeh1/mailings/2010/20101221_McKibbenQA.pdf

While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities.
——————————————————————————
I don’t think that generalisation is true, especially the second part.

Not that it matters. Science is not a matter of consensus.

Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age and that, even if it has, not a bit of that warming is human-caused? You are certainly entitled to your opinion, but it is my opinion that you are wrong. As I wrote, we need to address: 1) Q. How much? A. Not as much as claimed by warmists, and 2) Q. How much of that is human-caused? A. Not much, perhaps 0.1ºC.

Let me add a third question: 3) Q. What, if anything, should we do about it? A. Not much immediately, it is not a crisis, some of it may be good, etc. But, as a conservative, I am concerned with the continuing rise of CO2 to levels unprecedented in human times, and (not within the purview of WUWT) the costs of protecting our access to imported petroleum. Therefore it would be prudent, I believe, to use market-based forces to develop nuclear, clean coal, and renewables, and also reduce energy waste.

Oh, and yes, science is based on consensus to some extent, over the long haul.

If a scientist is correct but nearly all the others say she is wrong, she is, of course, still right. We believe the scientific method will, in time, eventually prove her right and, at that point, the consensus will go her way. The problem with climate science is that it has been, to some extent, hijacked by political forces. It is those forces, not science or the scientific method, that constitute the problem.

Peer reviewed research from several data sets confirm that the planet is getting warmer. If you have real data to disprove this, please present it!”

See the Hadcrut3 data set: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
Despite the warm 2010, according to Hadcrut3, the average anomaly for the last five years (2006 to 2010) was 0.42. However the average anomaly for the previous five years (2001 to 2005) was 0.46. This basically means it cooled off during the decade.
I realize climate is defined by what happens over 30 years. But something else is going on with the climate. There are huge ocean cycles that form a sine wave with a period of about 60 years. And right now, we are at the point where we were in the 1950s, when things were getting cooler for a few decades and some thought there would be an ice age in the 1970s.

You can’t have global warming if you aren’t continually setting new record highs. It was obvious to GISS and CRU that 1998 was going to be a problem in that regard, setting a temp so high that you couldn’t beat it in subsequent years. This chart appeared on WUWT a couple years ago, and you see their approach to solving the problem.

GISS had been running a good 0.2° higher than UAH/RSS, but for the perfect storm year of 1998, both GISS and HadCRUT tamped down their numbers to give themselves a chance.

This consideration was also recognized in the USHCN v2 adjusted raw data, where 1998 was lowered slightly, enough to make it reachable, but not as much as 1934 was lowered.

D. J. Hawkins says: I took a quick peek at the GISS website to try and understand how they crank out their numbers, and even a cursory glance was daunting.

Yup. Put me off for about 2 years. I kept saying “SOMEBODY needs to look at this!”… then one day I realized “I am somebody.”

I’ve downloaded it to a Linux box and got it to run. It ‘has issues’ but I figured out how to get past them. Details on request and posted on my web site. It required a long slow pass through the code…
Has there been a clear presentation of the methodology somewhere?

No.

There has been a lot of presentation of the methodology. Much of it is in technical papers that present a method that is used in the code, but the code does more than (or sometimes less than, or sometimes just somewhat different from) what the papers describe.

The code is designed in such a way that EVERY time you run it (with any change of starting data AT ALL) you will get different results. And both the GHCN and USHCN input data sets change constantly. Not even just monthly, but even mid-month things just ‘show up’ changed in the data sets.

So to talk about “what GIStemp does” at any point in time requires specification of the “Vintage” of data used TO THE DAY and potentially to the hour and minute.

Why?

Infill of missing data, homogenizing via the “Reference Station Method”, UHI via the “Reference Station Method”. Grid / Box anomaly calculation that uses different sets of thermometers in the grid/box at the start and end times (so ANY change of data in the recent “box” can change the anomalies…) and some more too…

If the assumptions regarding initial conditions are so fungible as to allow a reversal of the relative values of the anomalies at will, you don’t have a scientific analytical tool, you have a propaganda tool.

Can I quote you on that? It’s rather well said…

IMHO, the use of “Grid / Box anomalies” (calculated AFTER a lot of data manipulation, adjustment, averaging, homogenizing, etc. done on monthly temperature averages…) mixed with changing what thermometers are in a “grid / box” in the present vs. the past lets you “tune” the data such that GIStemp will find whatever warming you like. It’s cleverly done (or subtle enough they missed the “bug”… being generous) and if a good programmer devotes about 2 years to it they can get to this point of understanding. Everyone else is just baffled by it. Draw your own conclusions…
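The “changing membership” effect is easy to demonstrate in isolation. The sketch below is not GIStemp itself, just a toy grid-box average with hypothetical station values; it shows how a box can “warm” purely because a cool station stops reporting in the recent period.

```python
# Illustrative sketch (NOT GIStemp): a grid-box anomaly computed from
# whichever stations happen to report in each period. Station names and
# temperatures are made up for the example.

def box_anomaly(baseline_temps, recent_temps):
    """Anomaly = mean of recent reports minus mean of baseline reports."""
    baseline = sum(baseline_temps) / len(baseline_temps)
    recent = sum(recent_temps) / len(recent_temps)
    return recent - baseline

# Two stations in one grid box; NEITHER warms over time.
valley = 10.0   # cool rural site
city   = 14.0   # warmer urban site

# Case 1: both stations report in both periods -> anomaly is zero.
print(box_anomaly([valley, city], [valley, city]))   # 0.0

# Case 2: the cool station drops out of the recent period.
# The box "warms" by 2 degrees with no actual temperature change anywhere.
print(box_anomaly([valley, city], [city]))           # 2.0
```

Nothing in this toy is specific to any one code base; it just shows why an anomaly method that compares different station sets at the two ends of the record is sensitive to station churn.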

I’ve tried explaining it to bright folks. A few ‘get it’. Most just get glazed. Some become hostile. I’ll explain it to anyone who wants to put in the time, but it will take a couple of weeks (months if not dedicated) and few folks are willing to ‘go there’.

Judging from the look of the code, it was written about 30 years ago and never been revisited (just more glued on, often ‘at the end’ of the chain). From that I deduce that either Hansen is unwilling to change “his baby” or very few folks are willing to shove their brains through that particular sieve …

The bottom line is that “the GIStemp code” is DESIGNED to never be repeatable and to constantly mutate the results as ANY input data changes and that makes ALL the output shift. It’s part of the methodology in the design. Don’t know if that’s “malice” or “stupidity”…

It is my assertion that this data sensitivity is what GIStemp finds when it finds warming. The simple fact that as new data is added the past shifts to a warmer TREND indicates to me that the METHOD induces the warming, not the actual trend in the data. I’ve gone through the data a LOT and find 1934 warmer than 1998 and with a method that IS repeatable. See:

Basically, 1934 and 1998 ought to stay constant relative to EACH OTHER even as new data is added even IF you were ‘adjusting the past cooler’ to make up for something or other (nominally UHI or TOBS – yes, I know, it sounds crazy to make the past cooler for UHI correction, but it’s “what they do” in many cases).

As it is, they jockey for relative position with a consistent, though stochastic, downward drift of the older relative to the newer. That tells me it’s the method, not the data, that’s warming.

Onion says: So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface.

Um, no.

The article is a little unclear on that point, but the details are rather devilish, so I’d give them some slack on the details. The reality is rather complex…

GISS uses a code called GIStemp. It is that US CODE that is finding 1998 warmer than 1934 (sometimes).

GIStemp takes as input BOTH the USHCN (USA only data) and the GHCN (Global Historical Climate Network – whole world data). Except that between about 2007 and about November of 2010, it took in the USHCN but only used it up to 2007. Then in November it suddenly started using all of it (having finally added the code to use the newer version)… EXCEPT that the new version of USHCN was all different from the old version (warmer) so direct comparisons of old and new GIStemp are not, er, “valid”? “reasonable”?

OK, in the first step of GIStemp, it does a garbled “half averaging” of USHCN and GHCN but only for the USA stations. Each, you see, has a different ‘adjustment history’ so it tries to undo some of the adjustments in one and put in the adjustments from the other, except where it only has one, then it just uses whatever one it has, adjustments that don’t match and all.

Oh, and it fills in missing data by making it up.

No, honest. It is called “The Reference Station Method” and it is used both to press-fit the data to look like what they think it ought to be (called ‘homogenizing’) and to fill in missing bits with what they think would look nice and fit in well.

THEN that mush goes on to the following steps (that are detailed in the links I gave above for anyone courageous enough to ‘go there’).

So, “onion”, they use the whole global data set. It’s just what they DO with it that’s, er, odd.

In the USSR and its slave satellite regions that I visited in the early 80s, the ordinary unconnected people would look at the state broadcasts of record grain harvests and then look at the massive queues for what tiny quantities of bread were available; the people could easily see the ‘truth deficit’.

The state/establishment/ruling parasite class is desperate to sell the idea of CAGW; so desperate are they that, like the USSR, they have turned to telling ever bigger lies and deceptions. The gap between the observations of ordinary people and what they are told has reached breaking point; henceforth we will see ordinary people acting like those of the USSR: they will not believe anything the lying regime says, whether it’s true or not. Trust has broken down, the bonds of trust between the political class and the people are broken, and we know the political class and their stooge Lysenkos are lying through their teeth.

The political class need CAGW whether it’s real or not; it allows them to control carbon and control the masses, it allows the rich to become richer and the powerful to become more powerful. The CAGW fraud is plan A; I can only imagine that a plan B would be a nightmare.

Hansen 2001, titled “A closer look at United States and global surface temperature change”, is extremely relevant here. This is the documentation that covers the change made around 2000 that resulted in the US 1934 value going down and the 1998 value going up. The abstract reads:

Part of the problem is that the adjustments are applied all at once, while the things being adjusted happened a few at a time and spread out.

For example:

(2) reclassification of rural, small-town, and urban stations in the United States, southern Canada, and northern Mexico based on satellite measurements of night light intensity [Imhoff et al., 1997], and (3) a more flexible urban adjustment than that employed by Hansen et al. [1999], including reliance on only unlit stations in the United States and rural stations in the rest of the world for determining long-term trends.”

While this sounds nice, what it leaves out is the simple fact that a station is classed as RURAL or URBAN for a single point in time, NOW. That flag is then applied to ALL data for ALL time for that site. That then means, for example, that an Urban site today would have been urban in 1850 even if it was a cow field then. Then you “UHI correct down” that “urban” station and, oh, wait, it’s a cow field we’re cooling off…

This issue is endemic to the data sets. “Condition” flags have no date axis. An airport today was an airport in 1790…

By all means the magnitude could be checked but surely we can only conclude something is wrong after checking?

Absolutely NOT. Mr. Hansen is on record in a court of law testifying that breaking the law “for the greater good” is a worthy thing to do. This set case law precedent in the UK. He is first and foremost a political activist who believes, by his own words, that breaking the law is a fine thing. That means he has no moral compass about defending his code or data from political bias “for the greater good” either.

The only valid approach is to assume he is doing exactly what he has testified is the right and moral thing to do: whatever it takes to achieve your goals if you think it is for the greater good.

That means “assume the ‘fellow’ is flat out lying until proven otherwise”.

Hey, HE testified that those were his methods, not me. So “throw rocks” at him if you don’t like it…

Lionsden says:
December 25, 2010 at 7:35 pm
I don’t understand why anyone now bothers with surface temperature stations, when there has been continuous global satellite coverage for the last thirty years. The UAH near surface global temperature average, updated daily, looks to me like it places 2010 somewhere between 2nd and 4th hottest, behind 1998 certainly, and very close to 2005 and 2009.

According to Spencer the UAH MSU is a tie for hottest between 1998 and 2010.

4/ Dirty Tricks. A person could rationally ignore the science and ignore the temperature graphs and become sceptical solely on the basis of the known fraud, dirty tricks and bad faith of some of the main AGW crew. Loss of trust.

In the wake of Climategate I contended that although “the science” hadn’t taken a direct hit, the warmists had suffered the loss of something even greater–the intangible of trust. They were no longer trustworthy. “The shine is off their halos,” I said, predicting that would prove fatal in the long run.

I might regret getting into this, but since 1934 is so close to 1998, is there an official explanation as to why one warming is natural and the other is not?

[REPLY – It is for the US. Not so much for global. But, then, CRU hasn’t released its raw data, so who can tell what they did to the data? The average USHCN station raw data shows a +0.14C/century increase. Adjusted data is +0.59C (both figures ungridded). ~ Evan]

sHx says: The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.

I see others have pointed out the 1934 satellite issue. Yes, it’s that pesky “what baseline?” problem.

Richard Sharpe says:
So tell us what those consequences are? Longer growing seasons so we can better feed everyone on the planet? The greening of the Sahara as occurred during the Holocene Climate Optimum?

Um, no, doesn’t even get close. I can barely make out much change there. It’s really just some better crop land in Canada and Siberia. Even if you assume the IPCC is correct in their overblown numbers. Climate maps here:

DirkH says:
You are right, but you missed the most important fact, namely that the Nyquist theorem is grossly violated in all dimensions, and thus, computing an average has no meaning. But that’s just signal processing nitpicking…

Um, it’s actually even worse than that… Temperature is an INTENSIVE variable, so averaging it means nothing anyway. Take two pots of water, one 10 degrees the other 40 degrees. Dump one into the other. What’s the resultant temperature?

But please folks, take the time to learn what an intensive variable is. This is critical.
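For readers who want the one-line version: an intensive quantity like temperature only averages correctly when weighted by the extensive quantity it belongs to (mass, or strictly heat capacity). A minimal sketch, with made-up pot masses:

```python
# Sketch of why temperature is "intensive": the mixed temperature is the
# mass-weighted (strictly, heat-capacity-weighted) mean, so a plain average
# of two readings is only right when the masses happen to be equal.
# Pot masses and temperatures below are made up for illustration.

def mix(m1, t1, m2, t2):
    """Final temperature of two water masses mixed adiabatically
    (constant specific heat assumed)."""
    return (m1 * t1 + m2 * t2) / (m1 + m2)

# Equal pots: the plain average works, but only by luck.
print(mix(1.0, 10.0, 1.0, 40.0))   # 25.0

# Unequal pots: the plain average (25.0) is simply wrong.
print(mix(3.0, 10.0, 1.0, 40.0))   # 17.5
```

The analogy to temperature records: averaging station readings without knowing what each one physically represents is the unequal-pots case.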

Alvin says: How do the *questionable* New Zealand temperatures affect the over global temp?

Take the locations of the stations. Put a 1200 km radius around them. That enclosed area as a percent of total land area is roughly the impact of N.Z. (GIStemp extends temps out to 1200 km in all directions…)
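As a rough back-of-envelope check of what one station’s 1200 km radius covers (flat-disc approximation and a round-number land area, so order-of-magnitude only):

```python
import math

# Rough scale of the 1200 km smoothing radius. Flat-disc approximation;
# the land area is an approximate round figure, so treat the result as
# order-of-magnitude only.
radius_km = 1200.0
circle_area = math.pi * radius_km ** 2        # ~4.5 million km^2
earth_land_area = 149e6                        # km^2, approximate

print(f"{circle_area / earth_land_area:.1%}")  # roughly 3% of all land
```

So a single station, extended 1200 km in all directions, can in principle influence a few percent of the global land average on its own.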

Ira Glickstein says:

Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age

It was warmer than now BEFORE the LIA. That’s the problem. Temperatures are semi-cyclical with periods of up to 1500 years (Bond Events / D.O. events) and to some degree fractal as they are based on fractal surfaces. Fractal things vary with the size of ruler you use (so changing the number of thermometers changes the result…) and measuring a long period cycle with a short length ruler gives you any slope you like (just be careful in picking start points and end points…)

Over a 10,000 year period, we’re definitely headed down. Over 2 years, we’re headed down. Over 150 years we’re headed up. Over 50 years we’re headed up. Over 60 years we’ve gone nowhere. Over 1500 years we’re way up (the Dark Ages were cold and dark, Bond Event One…) but over 2000 years we’re down…
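The “short ruler on a long cycle” point can be checked with a toy series: fit a straight line to different windows of a pure 1500-step sine wave (no long-term change at all) and the slope comes out positive or negative depending entirely on where the window sits.

```python
import math

def trend(values):
    """Ordinary least-squares slope of values against their index."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# A pure 1500-"year" cycle: zero long-term change by construction.
series = [math.sin(2 * math.pi * t / 1500) for t in range(3000)]

# Same data, different rulers: the fitted slope flips sign with the window.
print(trend(series[100:160]) > 0)    # True  (rising leg of the cycle)
print(trend(series[700:760]) < 0)    # True  (falling leg of the cycle)
```

None of this says which way the real climate is going; it only shows that a linear trend fitted over a window much shorter than the underlying cycle tells you about the window, not the cycle.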

I see some people haven’t grasped the fact that one of the major problems with the “historical temperature record” is the lack of metadata, which is something that NCDC admits. Now, why is that important? Simple: GISS and CRU both use the data in the GHCN dataset compiled by NCDC as their starting point. If you don’t have the complete metadata you miss when equipment was changed, when a station was moved, and other factors. Another thing you don’t get is the built-in instrument error of the thermometers back then. Go online shopping for a liquid-in-glass (LIG) thermometer. You will find some have an error of +/- .5°C while others have an error of +/- .3°C.

Why that is important is that they can’t compute the true error bands for the instrumental period back in its early years. I stumbled across a paper that NOAA had made dealing with just one station in New Mexico, and all these flaws are right there for anyone to see.

No documentation could be found that specified exact location on the fort where Army surgeons took weather observations during the late 1860s to early 1890s. As was customary at other forts, the observations likely were taken at, or near the existing hospital. According to limited documentation, as many as three sites may have been utilized in taking weather observations at Fort Bayard – two permanent hospitals from 1869 through 1893 and one temporary location prior to 1869. The locations of the permanent hospitals are known (Figure 2), with the exact location of the temporary site unknown.

Although specific information was lacking regarding weather instruments used by Fort Bayard Army surgeons, the Army Surgeon General’s Office issued descriptions and instructions to field surgeons in 1856 and 1868 regarding instrument exposure and how the observations were to be taken.

On 1 September 1871, the wet bulb thermometer was broken and observations were not taken after that date. The wet bulb thermometer was replaced between July 1873 and November 1873 (exact date unknown due to lack of data in the NCDC database), and measurements resumed.

The January 1886 observation form contained a note stating the office did not have a serviceable maximum thermometer. It stated a replacement from the Army Surgeon General’s Office had arrived broken and that the office had requested a replacement, but it had not arrived. The note also stated the minimum thermometer was not in good working condition prior to 8 January 1886. However, minimum temperatures continued to be recorded. A replacement maximum thermometer was received and readings were commenced 19 February 1886.

Fort Bayard surgeons began recording maximum/minimum temperatures 1 September 1888, and on the December 1888 form provided the following note: “In the absence of Maximum and Minimum thermometers, the coldest and warmest periods of the day were recorded by the ordinary thermometer.” Maximum/minimum temperatures continued to be recorded through most of 1889 with no mention of receiving maximum/minimum thermometers.

Specific information regarding weather instrument location and exposure at Fort Bayard during observations by Army surgeons was almost non-existent. The NCDC database was the primary source of weather observations for this report.

There is more in the report but that should give you an idea of what was going on. They have no clue where the instruments were, exactly what instruments were used, or what the instrument errors were from those unknown instruments. Hell, they don’t even know for sure how many station moves occurred back then or how many stations in the area there were.

Now is every station going to be that bad? Unlikely but it is also unlikely that it is the worst record in the NCDC database. Does anyone think that you won’t find the same crap in the late 1800’s and early 1900’s in places like Africa and South America?

So there is the root problem: no matter how good or bad GISS’s or CRU’s or NCDC’s methods are, the stuff they’ve got to work with just isn’t that great. Basically at that point it’s GIGO. To imagine you can detect a .7°C to .8°C per century trend when even some of the modern LIG thermometers have built-in errors of +/- .5°C is silly.

Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!

Hugh, if you do really care, perhaps you should stop and reflect upon the question of why it is that all you can do is repeat ipcc “Climate Science” C[A]GW Propaganda memes!

What is really at stake concerning your level of “caring”, compared against the wellbeing of Humanity or of any Country or person, is the fact that the ipcc’s alleged Climate Science-based “cure” to its alleged CAGW “disease” is obviously worse on people than is its alleged CAGW disease.

That’s why, where the rubber meets the road concerning doing what’s best, India and China are proceeding with massive programs to construct coal-fired electricity plants. And not only do they not think that fossil fuel CO2 is the virulent cause of a terrible disease, they think that a massive increase in fossil fuel use, thus CO2 production, is absolutely necessary to cure their already existing catastrophic disease, underdevelopment! Or don’t they care?

That’s also why the ipcc excluded countries containing ~5 billion of the Earth’s 6.5 billion people from having to do anything to achieve its Kyoto Protocol CO2 emissions goals – the ipcc knows its “Climate Science” is not real science, and that its allegations about fossil fuel CO2 causing a net GW disease is only another useful Propaganda ploy, involving demonizing fossil fuel CO2 and disasterizing Global Warming, directed toward the usual corrupt ends.

. . . Mr. Hansen is on record in a court of law testifying that breaking the law “for the greater good” is a worthy thing to do. This set case law precident in the UK. He is first and foremost a political activist who believes, by his own words, breaking the law is a fine thing. That means he has no moral compass about defending his code or data from political bias “for the greater good” either.

The only valid approach is to assume he is doing exactly what he has testified is the right and moral thing to do: Whatever it takes to achieve your goals if you think it is for the greater good.

That means “assume the ‘fellow’ is flat out lying until proven otherwise”.

That is the best way of viewing any and all words and numbers coming from Mr. Hansen, or from anyone else who associates with him, or quotes him in defense of his or their views regarding the earth’s climate (or for that matter anything at all).

They were still doing clinical trials of penicillin in 1934, so the quality of the data compared to the quality of the data today is a pretty big issue also. Now since we are talking about a situation where the US of A was still tinkering with penicillin, what kind of quality data do you think you will get from Chile or Russia or any place in Mongolia?

I know you are just the messenger. I just think the global data of that time has no credibility. The technology level between now and then is staggering.

An analysis should include a list of stations fairly well distributed over the area for which an average temperature is to be calculated, and all stations must have an unbroken record requiring an absolute minimum of homogenisation, if any at all, and they must be little affected by urbanisation. This will disqualify nearly all stations, but if the remaining ones show a pattern, it will be a genuine one, and then I think it’s fair to make statements on “warmest” or “coldest” based on what we know. And that will not be the entire world or even entire countries. But it’s better to state what we do know and don’t than simply to fill in the holes and thereby contaminate what we actually do know.

Currently too many long series exist that were constructed by splicing and adjustments, which will bring too many assumptions into the final figures when these series are combined to create a national or global temperature.

It does not matter how many bad station thermometers there were in 1887. Statistically, the more measurements the better! With more measurements, the outliers statistically rectify themselves, with a too-high here matched with a too-low there. That is the way statistics deals with crude measurements: with lots of readings that sort each other out.

The problem lately is that they were dropping measurements that they did not like for one reason or another. If I weigh myself every five minutes, I will get slightly differing results. I can either try to get a perfect weight, stepping on the balance carefully, and then live with that number, n=1, and then say, “isn’t my way grand?”

Well it’s not grand. A hundred measurements give me the right number with less care, unless, like GISS and their ilk, I add heavy clothes to my body and report the hundred weighings according to my “adjusted” weight, now with clothes on [obviously I am cheating in this experiment, because I want a fatter me and that is driving my method].

So GISS and NOAA begin chopping out stations, because they are too rural or we don’t get the temperature at the right time. We are then left with airports and UHIs that have winter boots and coats on (figuratively).

Now we have readings from “solid” stations covering expansive areas, having thrown out the problematic ones. Now we have 75% of historic stations gone, and as I have pointed out, fewer readings, especially with systematic biases, give bogus results.

Can someone go back to the old sites, and take a good digital reading several times a day or a month, thus adding the 75% back? Someone without a Supervisor telling him how the numbers should be? Discarding any that have been terminally “adjusted” out of existence because they were too cool and brought the means down?

Statistics is science. More numbers, taken blindly, are the way of statistics.

Here is an example: I take ten readings at 3:00 PM with a calibrated thermometer. They come out as 11, 12, 14, 11, 12, 16, 09, 08, 12, 13. The mean is 11.8, the median is 12. Pretty good data, even though my thermometer is a beast with a range of 8. With a hundred measurements, the estimate gets closer to the true mean, as the variance of the mean decreases with more measurements.
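The arithmetic above checks out, and computing the standard error makes the 1/√n point concrete (a quick sketch using only the ten readings given in the example):

```python
import math
import statistics

# The ten readings from the example above.
readings = [11, 12, 14, 11, 12, 16, 9, 8, 12, 13]

print(statistics.mean(readings))     # 11.8
print(statistics.median(readings))   # 12.0

# The standard error of the mean shrinks as 1/sqrt(n): with a hundred
# readings of the same spread it would be roughly a third of this.
sem = statistics.stdev(readings) / math.sqrt(len(readings))
print(round(sem, 2))                 # 0.73
```

This is the benign case of purely random scatter; a shared systematic bias (sun on asphalt, jet exhaust) does not average away no matter how many readings you take.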

Now in our case, we have thermometers sitting in the sun on asphalt much of the time, in an airplane’s 500° exhaust. And we wonder what is happening to our climate! Armageddon!

You do not need a Cray to figure this out. Spend the money on more stations scattered generously about, and the climate will thank you. Gaia will breathe easier—we’ve been cavalier with the old Sheila, and the good girl is not a bit stuck up! That is statistical science. A poor scientist faults his tools. A good scientist has a bunch of tools.

To Hugh Pepper and Onion et al (hmmm there’s a funny line there somewhere but I can’t think of one)

See if we can follow this logic and arrive at a reasonable conclusion.

GISS pronounces a temperature anomaly (call it P1). We believe it; they are the experts working on this stuff day after day, and this is NASA, so why not. I believe ’em.

Sometime down the track, they make some adjustments and make another pronouncement, P2. Why would we disbelieve their new pronouncement, these are the experts. HOWEVER WE NOW KNOW P1 WAS WRONG.

Sometime later they make pronouncement P3. Why would we disbelieve………..you get the picture, no need to belabour the point.

Each time GISS makes a new pronouncement (7 according to Ira) they themselves are saying the previous pronouncement WAS WRONG. Yes sure the adjustments may have been warranted and correct, but this does NOT change the fact that this makes the previous pronouncement wrong.

So now I ask you, what are the chances for a pronouncement 8? Most would think there is a good chance. And if there is going to be an 8th pronouncement, then the current one, number 7, is WRONG.

The only reasonable conclusion to draw, even without accusations of fudging or fraud, is that we DON’T KNOW what the T anomalies are. If we don’t know that, then we don’t know what the A in AGW is. If we don’t know A, there is no point getting our knickers in a knot over it is there?

All that can be said is “yeah it’s warmed somewhat since the beginning of the instrumental era, but we don’t know by how much.”

So, if they can pre-predict and pre-conclude that 2010 will be the 3rd hottest on record, I need to know when the Arctic will be ice free so I can pre-prepare my sailboat to navigate the Arctic ocean. I want to see Sarah Palin’s house from Russia.

While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities.
——————————————————————————
I don’t think that generalisation is true, especially the second part.

Not that it matters. Science is not a matter of consensus.

Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age and that, even if it has, not a bit of that warming is human-caused? You are certainly entitled to your opinion, but it is my opinion that you are wrong. As I wrote, we need to address: 1) Q. How much? A. Not as much as claimed by warmists, and 2) Q. How much of that is human-caused? A. Not much, perhaps 0.1ºC.

——————————————————————————–

Ira, given all the strong reservations about data integrity, and the methodology which is applied to it, that have been discussed on this site, I am not convinced that ‘the planet’ has warmed ‘over the past century or so’ – nor am I convinced that it has not. It is interesting that the focus of your own post relates to possibly higher temperatures in the US 75 years ago – so there goes most of your century, in the US at least. While we are pretty sure that some parts of the world were temporarily colder a century ago, that doesn’t tell us much about global climate either then or now.

I am even less convinced that we can reliably quantify the human contribution to whatever changes may have taken place over that period, except at a micro, micro level such as how land use in small areas affects local weather.

With the current state of measurement, and that of the past century, let alone what is done with those numbers, a figure like 0.1 degree C (as you cite) is meaningless IMO.

Climatology, as an integrated science comprising many sub-disciplines, is in its infancy. But, your post here is a useful contribution to getting the data right, which is a necessary first step.

We need you and your skills over at Climate Etc. Hansen’s most recent paper has just received notice over there. Please spend a few minutes there. A repost of your contribution here would be a great starting point. One or two of Hansen’s acolytes (Lacis, Colose) hang out there.

Don’t initially mention WUWT as some of the Alarmists go into a faint/seizure/catatonic trance at the mere mention of its name, and a Catastrophist foaming at the mouth is not a good image for the festive season…….

sHx says:
December 25, 2010 at 7:02 pm
The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.

Yes, I forgot about those 1934 satellites. Sorry :^)

JohnH says:
December 25, 2010 at 10:01 pm

sHx says:
December 25, 2010 at 7:02 pm
The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.

Warmest in the last 30 years only; in 1934 there were no satellites.

PS Where’s the American Harry when you need him?

E.M.Smith says:
December 25, 2010 at 10:46 pm

sHx says: The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.

I see others have pointed out the 1934 satellite issue. Yes, it’s that pesky “what baseline?” problem.

Geez, Louise!

I am quite flattered by all that attention, but I’m not really so scientifically illiterate as to think there were satellites in 1934.

What I meant was that satellite records are the only reliable record, and that only extends back over the last 30 years. The first dedicated instrument to check temp anomalies was put into Earth orbit in 2002, right? Most conservatively speaking, our best records only stretch back to 2002. I understand that it was somewhat possible to detect the heat signal from earlier, non-dedicated satellite records that stretch back to 1979. In any case, according to the satellite measurements, 1998 was the warmest year ever.

Land based, ‘value added’ instrumental records are not in the same class in terms of data quality as sat records. Therefore 1934 and 1998 are not suitable candidates for comparison. To the best of our knowledge 1998 was the warmist year.

It does not matter how many bad station thermometers there were in 1887. Statistically, the more measurements the better! With more measurements, the outliers rectify themselves, a too-high here matched with a too-low there. That is the way statistics deals with crude measurements: lots of readings that sort each other out.

Nope, incorrect; you made a basic mistake there. There is no large number of measurements at any given point in time in any given place: there is only one, so the Law of Large Numbers does not apply. Think of it this way: how many Max readings were taken at State College, PA on Jan 21, 1890? Answer: one. That number was recorded, but due to instrument error it is not correct, and unless you know the accuracy of that particular instrument you can never statistically determine the correct mean temp for that day, or for any day before or after it, as long as you don’t know the accuracy of that thermometer. From there it rolls downhill: with no known correct daily means, your monthly mean is off, and from there your yearly mean. The same problem afflicts every station you lack proper metadata for, which leads to the next point.

Now answer this question: was every Max/Min thermometer used in the US the same model and brand? If you cannot answer yes, then you cannot state that all have the same error range, and again the Law of Large Numbers is thrown off. If 10% have an error range of +/-0.5°C, another 80% an error range of +/-1°C, and the last 10% an error of +/-2°C, your proposition that the high and low readings average out falls apart, especially when you take the final fact into account: there were just as few thermometers recording in the world then as there are today, and they didn’t all change equipment at the same time to the same brand and model, so your Law of Large Numbers falls apart again. (Dr. Spencer has a nice graph of the number of surface stations over time here: http://www.drroyspencer.com/2010/02/new-work-on-the-recent-warming-of-northern-hemispheric-land-areas/ )

You worry about the loss of thermometers whose calibration we know, yet treat the start-point data, taken from fewer thermometers of unknown calibration, with unknown station changes and even unknown equipment changes, as no problem and assume it will not affect the trend. That is illogical, since your starting point has a huge impact on your trend.

Also, your example starts a priori from a false premise: in the past you don’t get multiple readings at the same time from a known, calibrated thermometer; you get one.

Here is an example: I take ten readings at 3:00 PM with a calibrated thermometer.

Max/Min LIG thermometers trip a mechanical device that sticks at the highest and lowest readings. So for that day in 1890 you have two variables from an unknown source. You don’t know whether the instrument was calibrated properly, what its bias was if it was calibrated, or whether the model/brand was changed halfway through the year. Nothing but those two variables; sorry, the Law of Large Numbers doesn’t save you. I even ran the numbers for State College, PA, assuming just three changes in equipment over time, each one more accurate than the last: an instrument with +/-0.5°F accuracy to start, a switch to +/-0.3°F in 1930, and a switch to +/-0.1°F in 1990. When you run the numbers, if all the errors go one way at maximum you get a trend of 0.6°F, and if they go the other way at maximum you get a trend of 1.7°F. And you have to consider both, because no known-good thermometer was taking side-by-side readings with each of those instruments.

Now, will they all go in one direction at max? No, but as the equipment becomes more accurate over time, the odds of the errors all canceling out drop significantly, because your room for error shrinks over time and cannot counteract the larger errors in the past.
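The equipment-change argument above can be sketched numerically. This is a toy simulation (the +/-0.5, 0.3 and 0.1°F accuracy eras are taken from the comment; the flat “true” series and the dates are illustrative, not the real State College data): even with zero real warming, unknown calibration biases within the stated bounds can manufacture an apparent trend in either direction.

```python
import numpy as np

# Toy simulation: a flat "true" record measured by three successive
# instruments whose calibration bias is unknown but bounded (+/-0.5F before
# 1930, +/-0.3F to 1990, +/-0.1F after, as in the comment above).
years = np.arange(1890, 2011)
true_temp = np.full(years.shape, 50.0)          # zero real warming
bound = np.where(years < 1930, 0.5, np.where(years < 1990, 0.3, 0.1))

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100   # degF per century

# Worst cases: early instrument reads low and later ones high, or vice versa.
cool_to_warm = trend_per_century(true_temp + np.where(years < 1930, -bound, bound))
warm_to_cool = trend_per_century(true_temp + np.where(years < 1930, bound, -bound))

print("true trend:        +0.00 F/century")
print(f"biases one way:   {cool_to_warm:+.2f} F/century")   # spurious warming
print(f"biases other way: {warm_to_cool:+.2f} F/century")   # spurious cooling
```

The point is not that the biases did line up this way, only that nothing in the single-reading-per-day record rules it out.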

All those things point to why the satellites are better, even with their own problems. There is only one instrument taking readings, and it is checked daily against multiple calibrated PRTs and the cold background temperature of space. Any variances are then easy to account for in the readings. You can read how it is done here: http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/

Keep this in mind: almost every one of the problems you listed for today was present when the historical temp records started (1880 for GISS, 1850 for HadCrut), and back then they were just looking for a temperature reading, since all they were concerned about was the weather. Who cared whether it was 94°F or 94.5°F on Aug 3rd someplace in New Mexico? It had no impact. It wasn’t until the 60s that the Climate Whatever-we-are-calling-it-this-week came into being. That was when they realized they should have all the stations situated a specific way, but they didn’t have that; they had weather records that are not suitable for what they are being used for.

Of course 2010 will show as the warmest year, because GISS make the data up. There are no temperature stations in the Arctic Ocean, and very few in the Arctic Circle, yet they have shown a map with huge expanses of raging oranges and reds by smearing the few readings over vast distances. Since the few temperature stations that do exist are at airports, it’s not surprising that vast areas of the Arctic appear to show warming.

However, despite them pumping up global temperatures, the best they can achieve is a (possibly) slight increase over 12 years. Once you adjust their adjustments, there is still no increase, or even a decrease.

Just so that I am getting this right, what we are talking about is a temperature anomaly in relation to a long term mean/trend derived from data going back 150 years or so?

As we don’t have a long term New Zealand mean annual temperature set worth talking about I have to go with what I do have, which is my local means for the city of Palmerston North.

Based upon our means, 1998 is the hottest year since records began here in mid-1928, just ahead (by 0.2 C) of 1971, 1974 and 1999. I always have a problem with annual means, though, when thinking about our weather/climate. Our southern winters are well covered, just as the northern summers are, but a third of our hot season (Nov/Dec) is dropped to accommodate the calendar year. As I have stated here in previous posts, I like to simplify the seasonal situation into two obvious seasons, hot (Nov-Apr) and cold (May-Oct).

When I look at our data using the T-Max means, the hottest ‘summer’ here was 1934-35, 0.3 C hotter than 1961-62 and 0.4 C ahead of 1974 (both were La Niña summers, which usually means hot in my part of the world). 1998-99 was 0.58 C colder than 1934-35!

In the end, all of these exercises show that it is critical to take the measurement periods into account. The local weather data that I receive on a daily basis is recorded at 9 a.m. the following morning. This means the T-Max is generally for the previous afternoon, whilst the T-Min is usually around dawn. Rainfall is taken as being up to 9 a.m., but the last reading of the month is adjusted back so that the amount recorded after midnight is allotted to the new month.

There are exceptions to the T-Max/T-Min occurring at the expected time. We had a situation in September 2009 where the coldest temperature for the day was at 3.30 p.m., while the highest was at the previous midnight, thanks to an unexpected snowstorm that left drifts 2 metres deep on the lowest part of the mountains behind the city, drifts that hung around for over a week. No one had ever seen anything like it here, especially that late in the snow season.
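The 9 a.m. reading convention described above can be sketched as a small rule. This is my own toy model of the convention, not the station’s actual procedure, and (as the snowstorm anecdote shows) the “usually” assumptions baked into it do fail occasionally:

```python
from datetime import datetime, timedelta

# Toy model of the 9 a.m. convention: a max/min thermometer read at 9 a.m.
# reports the previous afternoon's maximum, so the max is assigned to the
# previous calendar day, while the minimum (usually around dawn) stays with
# the reading day.
def assign_readings(read_at: datetime, t_max: float, t_min: float):
    prev_day = (read_at - timedelta(days=1)).date()
    return {"t_max": (prev_day, t_max),         # yesterday afternoon's high
            "t_min": (read_at.date(), t_min)}   # this morning's low

obs = assign_readings(datetime(2010, 12, 26, 9, 0), t_max=24.1, t_min=11.3)
print(obs)   # t_max is booked to Dec 25, t_min to Dec 26
```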

Looking at the latest data, 2010 is nowhere near the hottest mean year. The winter was milder than the previous one, but last summer was the 20th coldest of the past 80 years. This summer, starting in Nov, is on track to be in the top dozen or so in the recorded period.

As far as trends go, if you take it from the first full year of 1929, there is a slight rise of around 0.4 C to the present. On the other hand, if you start from somewhere like the plateau of the 1950s, we see a downward trend of 0.3 C!
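That start-point sensitivity is easy to demonstrate. Here is a deterministic toy series (my own construction, not the real Palmerston North data): a gentle rise of 0.5 C per century plus a warm plateau in the 1950s, which is enough to flip the sign of the fitted trend depending on the start year:

```python
import numpy as np

# Toy series: slow warming plus a warm plateau in the 1950s.
years = np.arange(1929, 2011)
series = 14.0 + 0.005 * (years - 1929)                        # +0.5 C/century
series = series + np.where((years >= 1950) & (years <= 1959), 0.6, 0.0)

def trend(start):
    """Least-squares slope, in degC per decade, from `start` through 2010."""
    m = years >= start
    return np.polyfit(years[m], series[m], 1)[0] * 10

print(f"trend from 1929: {trend(1929):+.4f} C/decade")   # positive
print(f"trend from 1950: {trend(1950):+.4f} C/decade")   # negative
```

Same data, opposite conclusions, purely from the choice of starting year.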

Haven’t read this post or comments as I should (forgive me). 2010 was warm, but I want to recall a paper by Patrick J. Michaels and Ross McKitrick, Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data, Journal of Geophysical Research 2007, which shows that socioeconomic contamination may explain as much as half of the observed land-based warming trend: http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf

From an article by Pat Michaels:

“Almost all the socioeconomic variables were important. We found the data were of highest quality in North America and that they were very contaminated in Africa and South America. Overall, we found that the socioeconomic biases ‘likely add up to a net warming bias at the global level that may explain as much as half the observed land-based warming trend.'”

If 2010 is declared the hottest year on record, it would be a good opportunity to discuss whether it’s time to get rid of global mean temperature as a measure of warming: the temperature of air is NOT a good measure of its heat content! Instead of temperature anomalies, one should use enthalpy (total energy) anomalies. 2010 is a good illustration of this, since a lot of the positive anomaly comes from arctic deserts, where it takes far less energy to heat a given mass of air by e.g. 5 C than it would in the tropics, or even in the Sahara.
(To measure enthalpy changes, we would need to measure air pressure and moisture in addition to temperature, so of course even more could go wrong with the record…)
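A back-of-envelope calculation shows how big this effect is. The formula and numbers below are my own rough assumptions (standard textbook constants, illustrative humidities), not figures from the post:

```python
# Moist-air enthalpy per kg of dry air, roughly: h = cp*T + Lv*q.
# The same 5 C anomaly represents far less energy in dry polar air
# than in moist tropical air.
CP = 1005.0      # specific heat of air, J/(kg K)
LV = 2.5e6       # latent heat of vaporization of water, J/kg
CC_RATE = 0.07   # ~7% more water vapour per K (Clausius-Clapeyron, approx.)

def delta_enthalpy(dT, q0):
    """Energy change per kg for warming by dT at constant relative humidity."""
    dq = q0 * ((1 + CC_RATE) ** dT - 1)   # specific humidity rises with T
    return CP * dT + LV * dq              # sensible + latent heat, J/kg

arctic = delta_enthalpy(5.0, q0=0.0002)   # very dry air, ~-30 C
tropics = delta_enthalpy(5.0, q0=0.020)   # humid air, ~+28 C

print(f"arctic:  {arctic:,.0f} J/kg for a 5 C anomaly")
print(f"tropics: {tropics:,.0f} J/kg for the same anomaly")
print(f"ratio:   {tropics / arctic:.1f}x")
```

On these assumptions, the same 5 C anomaly carries several times more energy in the tropics than over an arctic desert, which is the commenter’s point about averaging temperature rather than enthalpy.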

What I meant was that satellite records are the only reliable record, and that only extends back over the last 30 years. The first dedicated instrument to check temp anomalies was put into Earth orbit in 2002, right? Most conservatively speaking, our best records only stretch back to 2002. I understand that it was somewhat possible to detect the heat signal from earlier, non-dedicated satellite records that stretch back to 1979. In any case, according to the satellite measurements, 1998 was the warmest year ever.

Land based, ‘value added’ instrumental records are not in the same class in terms of data quality as sat records. Therefore 1934 and 1998 are not suitable candidates for comparison. To the best of our knowledge 1998 was the warmist year.

No, all you can say is that 1998 is the warmist year since 1979 (or 2002 if you want to rely only on dedicated sat records, but as 1998 is before 2002 that rules 1998 out anyway).

Hardly enough to base any funding of further work on proving a theory, let alone destroying whole industries.

When planes were at a standstill in England and in Europe, the pilots of an Icelandair plane that was about to take off were informed they could not do so, as there was not enough snow-clearing equipment to clear a path for the refueling truck. Rather than facing an overnight stay, the captain and co-pilot took matters and shovels into their own hands. Some 15 minutes of shoveling later, the path was cleared, the plane refueled and went airborne.

That is how Icelandic pilots deal with winter challenges. Perhaps Britain and other countries could learn from that example.

Oh, we can measure temperature quite well. What you see them propagate is an average of some averaged, extrapolated average. Guess how the basic average is built: [s] they measure the temperature every minute, add up the numbers over a whole day, and divide the sum by 1440. They may use a shorter interval, say 10 seconds, but the gain in precision isn’t worth the effort of modifying the computer program. You guessed wrong! Their algorithm is much more sophisticated. You have to be a trained and certified climate scientist to understand how it works. [/s]

But it gets worse: what location’s temperature are we interested in? Is it the height where we breathe (1.8 m), where most of our crops grow (0.8 m), grass grows (0.08 m), trees grow (18 m), fish grows (-x.8 m)…

[s] I know, the particular location doesn’t matter, since you build trends and those would only be off by a specific factor depending on the location, so the temperature itself and the method used to get the numbers isn’t really that important… [/s]

“GISS uses a code called GIStemp. It is that US CODE that is finding 1998 warmer than 1934 (sometimes).”

But not in the global product. Only in the US product (2% of the global product). Globally 1998 has always been well ahead of 1934. Overall looking at the entire globe not much has changed in GISTEMP over time. GISTEMP 2010 is not much different than GISTEMP 1999. This is all I was saying.

Being arrested in protests doesn’t make a person inherently dishonest or signal that they are a fraudster. Squeaky clean people (and I don’t mean that as an insult, I am one of those) who stay within the law are just as likely to be fraudsters; in fact, of the various scientific frauds of the 20th century, I bet most were conducted by scientists with no existing criminal records. I just don’t think there is a good enough correlation to base any assumptions on there.

What I prefer to do, and this is all round just more friendly, is to assume people are honest until proven otherwise. Hansen hasn’t yet even been proven wrong – although I do have a specific (personal) requirement for such a thing.

The only real evidence that GISTEMP is wrong in my opinion would be to have an independent temperature analysis that “does it right” which shows a different result. Nothing short of that will really do it for me (and I guess a lot of people). We can complain about the roughness of some of the methods in GISTEMP but that doesn’t prove the methods aren’t “good enough” for the job. For example if a better method than the reference station method was used would it really turn out that 1934 was warmer than 1998? If not then that’s clearly not much of an issue.

So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface.

What percentage of the Earth’s surface does the Arctic comprise? Should we be similarly unconcerned about temperatures there? If so, I think we can very safely assume that there’s no problem.
However, the point is that something seems fishy when someone re-analyses the data until they get the ‘right’ result, and then ‘conveniently’ loses the data.

Nothing living experiences “global weather” (unless you’re a giant the size of the earth). All life experiences only local weather. Seeing that my local weather hasn’t been unusual for the last 120 yrs by the temp records, the answer is no. Seeing proxy data, it’s been relatively stable for the last 10000 yrs other than the usual ~1000 yr (MWP, LIA) cycles. Again, no, unless one is worried about future colder periods leading up to the returning ice-age.

Right now, I’m concerned about how the current US east-coast snowstorm plays out.

All those temperature changes… Does it really matter in the big discussion whether CO2 is the cause of rising temperatures on earth (no, not of climate change: that is a convenient word alarmists added to be able to be right even when they are wrong)? Even a similar trend does not mean CO2 is the cause. Climatologists normally do not look any further than their own research and temperature data to find out what is causing what, and whether temperatures and CO2 are really that closely related.

In fact, yes, there seems to be a correlation when you look at small oscillations in the past, but all the data point to the inverse of the relationship the alarmists are claiming: CO2 follows temperature changes, not the other way around; there is an 800-year lag between CO2 (and CH4) and temperature. It means temperature changed CO2 amounts, not the other way around. There is a good explanation for this: ocean degassing. There is no such explanation if CO2 were the leading force in the past. No, volcanism cannot explain a sudden increase in the past, because there is no geological observation of volcanic activity that would sustain that theory, and it is even harder to explain sudden decreases in CO2 amounts in the past (where did it go, if they don’t buy the theory of absorption by the oceans?).

If you look at large-scale earth records, there is NO relation between CO2 and temperature at all. About 600 million years ago the amount of CO2 was about 10 times higher than today (~7000 ppm). For long periods of time CO2 decreased while temperatures held steady! So it seems CO2 and temperature are related only for smaller changes, where ocean-CO2 oscillations are measured and the total amount of CO2 in ocean and atmosphere combined was steady. When the total amount of CO2 decreased (total, that is, ocean AND atmosphere), no relation between CO2 and temperature was measured. This means that on a scale of hundreds of years the oceans will absorb most of the CO2 we pump into the air right now; in the end there will be a slight increase in the total amount of CO2, but based on historical earth data this will hardly affect temperature. Waste heat from cars and buildings may well be a larger temperature-affecting parameter than CO2.

In addition, CO2 is a very welcome gas, in fact the best gas we have on earth: the gas from which all life starts, via photosynthesis. That CO2 amounts fell from values 10 times higher 600 million years ago (when life really sparked off) is best explained by plants and algae using photosynthesis. This means that over millions of years CO2 depletes on earth, and in the long run life will end when plants suffocate from too little CO2 (only enough animals and men doing the opposite can compensate for this trend). Suffocation already starts when CO2 amounts drop under 200 ppm (remember: from 7000 ppm in the Cambrian to 280 ppm a century ago!). So pumping it up a bit toward 400 ppm is not bad, but in fact good. It also enhances crop output. In the Netherlands, for example, they inject CO2 into the greenhouses that produce flowers and crops like tomatoes and peppers. It is proven in those greenhouses that a doubling of CO2 increases crop output by 30%.

Why use these adjustments for temperature only? Lets apply their methods to other historical events and see what we get. Nixon never actually went to China, and was in fact a really nice guy. Enron was making an honest profit. Dewey did in fact defeat Truman. And Al Gore was the real 43rd president, thus becoming the first sitting president to win the academy award!

There is a difference between the GISS RIGHT answer and the TRUTH. I doubt GISS knows what they are doing to themselves, all they see is the giant pot of taxpayer money they scammed at the end of their “RIGHT ANSWER QUEST”.

So who is stupid enough to believe it anyway?

Science was a quest for truth, regardless of where it takes you. As a person with an engineering background, this whole sea-level and temperature issue amounts to measurement noise. For instance, how good were cave men at measuring things 10,000 years ago, a blink in time? And who thinks the last 130 years of time means anything? A very short history is a very meaningless history.

It’s Christmas; who believes God would make a home for carbon lifeforms, i.e. man, that he can destroy by exhaling CO2 or keeping warm? It’s laughable.

Average temperature is a weird concept. You lose so much information when you take an average that it is rendered nearly useless. Take the average score in a class on a test: do you know whether all the students got the same score, or whether the highs and lows were spread far apart? You can then look at the standard deviation. But with temperatures, you don’t even have all the “scores”. Or even worse, the scores that you do have aren’t always accurate.

Is it reasonable that history is defined as the last 131 years? How good was the global temperature data 131 years ago? Oh, and I don’t really care if 2010 broke global temperature records by any factor. It just doesn’t matter.
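The class-score example above is easy to make concrete. Two hypothetical classes (my own invented numbers) can share an identical mean while having completely different spreads, which is exactly the information the average throws away:

```python
import statistics

uniform_class = [70, 70, 70, 70, 70, 70]    # everyone scored 70
split_class = [40, 40, 40, 100, 100, 100]   # same mean, wildly spread

for name, scores in [("uniform", uniform_class), ("split", split_class)]:
    print(f"{name}: mean={statistics.mean(scores)} "
          f"stdev={statistics.stdev(scores):.1f}")
```

Both classes average 70, but only the standard deviation reveals that one class is split between failing and acing the test.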

The only real evidence that GISTEMP is wrong in my opinion would be to have an independent temperature analysis that “does it right” which shows a different result.

No need for an independent temperature analysis, Onion.
As I stated in my post #12:29am, GISS themselves have “redone” the analysis 7 times. They were wrong 6 times (according to them), so what makes you think they are right this time?
How many times does GISS need to say “we were wrong” before we stop believing they are right this time?

Is there such a thing as “margin of error” in temperature reporting? I never seem to see anything about error. It seems somewhat improbable to boil all the data down to such a level without any errors anywhere along the line.

If we didn’t use so much air conditioning in the summer, things would be a whole lot cooler on this planet. I remember when I was a kid we didn’t have AC; we rarely if ever used a single fan in the whole house, and things weren’t bad at all. UHI is all the fault of that guy named Otis, too. If he hadn’t developed those bling-blangin elevators, things would be a whole lot cooler in cities and we wouldn’t need air conditioners. Ever ride on a subway train, in the first car, with the doors and all the windows open? It’s cool.

Being arrested in protests doesn’t make a person inherently dishonest or signal that they are a fraudster.

If a person is willing to break the law in order to 'save the world', why wouldn't he be willing to merely fudge some data in order to 'save the world'?

Today at 9 am the temperature in the UK city of Hereford was -15.6 deg C (or 4 deg F).
While this might seem balmy to a resident of Winnipeg, it is mind bogglingly cold for England. At the age of 69, until today, I have never experienced temperatures in the UK lower than 12 deg F!
For more than two months of 2010 (Jan and Nov/Dec) temperatures have been massively below average.
Still, if GISS say this is the warmest year on record, I suppose it must be true.

Seeing that 1998 and 2010 were El Niño years, and the 1998 one was one of the strongest, what’s the hoo-ha about? Hansen will be quick to shout about the colder La Niña years when the global temp starts dropping. Sceptics need to shout this from the rooftops. Was 1934 an El Niño year as well? Plus, I think Hansen has lost a lot of credibility by his rants and adjustments to suit the AGW message.

We have become ants looking at the difference in length of toothpicks over the years toothpicks were made. In terms of a temperature graph, an ant would get his or her knickers in a twist over the ups and downs. Me, not so much. I know how large the noise is.

Here’s an idea. Let’s put a graph together showing degree change using a scale more appropriate to such a noisy data set. It sure ain’t in 3 decimal places. And add error bars to the running average. Plus, the linear trend is complete and utter statistical nonsense when applied to noisy, multivariate data.
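The error-bar suggestion can be sketched like this (synthetic data with a fixed seed; the +/-0.2 C noise level and the trend are assumptions, not measured values):

```python
import numpy as np

# Synthetic noisy anomaly series: small trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1880, 2011)
anomaly = 0.005 * (years - 1880) + rng.normal(0, 0.2, years.size)

# 11-year running mean with a standard error for each window,
# assuming the noise within a window is independent.
window = 11
kernel = np.ones(window) / window
running_mean = np.convolve(anomaly, kernel, mode="valid")
stderr = np.array([anomaly[i:i + window].std(ddof=1) / np.sqrt(window)
                   for i in range(anomaly.size - window + 1)])

# Print the smoothed value with its error bar every 30 years:
for y, m, e in list(zip(years[window // 2:], running_mean, stderr))[::30]:
    print(f"{y}: {m:+.2f} +/- {e:.2f} C")
```

Even this is generous: the standard error assumes independent noise, so any shared bias would make the true error bars wider still.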

Meanwhile, we are breaking snow and cold records here in Wallowa County. And not by a 10th of a degree. Pocket penny change does not matter to livestock and pipes. Whole degrees do very much matter.

We argue over a tempest in a teapot and Hansen is sucking tax dollars to measure it.

One can see the constraints these guys are under. If they raise 1998 too much, they erase GW for the 12 years up to now. Therefore it’s necessary to push 1934 down. You’re an engineer; you can see the problem.

Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age

It was warmer than now BEFORE the LIA. That’s the problem. …

Over a 10,000-year period, we’re definitely headed down. Over 2 years, we’re headed down. Over 150 years, we’re headed up. Over 50 years, we’re headed up. Over 60 years, we’ve gone nowhere. Over 1500 years, we’re way up (the Dark Ages were cold and dark, Bond Event One…), but over 2000 years we’re down…

Thanks for your comment and your points are granted. There have been natural cycles before and after the LIA. But the topic of this thread is net warming (and cooling) over the past century or so, and the possible contribution of human activities to it.

You and I agree we are up over the past 150 and 50 years, but net out even over 60 years. What is your estimate on how much and what part of that is due to human activities? I think our best skeptic tactic is to admit some human-caused temperature change and then point out it is far less than claimed by the warmists, is not a crisis, and that it seems to have stabilized and may turn down in the coming decades.

clean coal

Precluded by the AGW mantra… And that’s a major stupidity on their part, as it’s the fastest, cheapest, and best way off OPEC oil and can be done in 5 to 10 years.

I agree with you that clean coal is our best bet. But it is not necessarily precluded by the AGW mantra. Some of the more sane environmentalists (an oxymoron?) are changing their views. The December 2010 issue of The Atlantic, an influential mainstream literary magazine shows an amazing turn-around by the Global Warming alarmists, discussed here by me. Yes, they are still alarmed and predicting imminent climate change disaster, but … BUT, they have reversed themselves on their previous ‘ol devil coal! (This follows their equally sharp reversal on nuclear energy over the past few years.) Turns out we need coal to generate Watts of electricity for our electric cars and, they say, we can do it in a way that is environmentally correct. Their cover story, by respected author James Fallows, is titled Why the Future of Clean Energy is Dirty Coal. {Click the link to read it free online.}

Another good thing about HadCrut: With their temperatures they also show the percentage of the globe, on which these temperatures are based. It starts in the twenties (in 1850), and currently is in the low eighties. They do not cheat with non-existing data from the arctic.

The corrupt bureaucracies of the UN and EU want to raise taxes of their own, and what better way to do that than saving the planet? Thus they need some corrupt agency to deliver the necessary corrupt data. Guess who?

Squeaky clean people (and I don’t mean that as an insult, I am one of those)…

Yet you support a man and his work product who stated under oath in a court of law that breaking the law for the common good (his version of it, by the way) is justified.
Yeh right, you’re “squeaky clean”… and I’m an astronaut.

While the “warmist deniers” are dissing the measurements taken in the 1800s and early 20th century, they would do well to remember that Science was taken very seriously by the “amateurs” as well as the “professionals”. The quotes are because, at that time, there were very few people employed solely in a single profession. Perforce, the people did “real work” in addition to taking measurements of the weather. The classic example would be the Army Surgeons taking the weather measurements.

Gaining knowledge for mankind was taken very seriously, and most observers tried their best to avoid errors. Today, the measurements by NOAA and NASA aren’t taken nearly as seriously, viz the documented problems with reference stations. To say that only professionals can capture valid data is more than a little suspect. Given the statements by Hansen (see above), and the machinations of CRU, the New Zealand MO, GISS, and other “professionals,” it would seem to be clear that reestablishing the honor of careful amateurs in scientific investigations would be of paramount interest. Only amateurs have no vested interest in funding, or tenure, or political correctness. I’m throwing the last in, assuming – always dangerous – that those of political mien will be in the minority in amateur scientific investigations, and easily exposed when data is falsified.

Anthony is one of the pioneers in this area, by letting serious amateurs and true scientific professionals have a podium. The absolute proof that it works is the presence of the non-scientific trolls who wander the halls looking for “believers.” When real science is being done, there are no believers, only thoughtful analysis. I’m not using the term sceptic, as it refers to a general negativism in the dictionary. I don’t think the majority here are “Sceptics”; I think we are sceptical of certain undocumented, unprovable claims from the “Climatologists.”

Statistically speaking, 2010 will not be significantly different from 1998, although the numbers will be a bit lower. Also, the warmest month will still be in 1998.
But according to the IPCC we should have had a warming of 0.4°F or more. We did not. That probably IS significant.
(PS: A look at ch 5 indicates that UAH should have an anomaly of less than 0.4 in December; thus there will be no new record.)

It is quite possible that 2010 is the warmest year on record even as Europe and Britain are buried under snow. Perhaps a warmer Pacific Ocean is responsible. Problem is, the Pacific Ocean doesn’t have any money.

“”The answer is that in the past 20 years, as can be seen from its website, the Met Office has been hijacked from its proper role to become wholly subservient to its obsession with global warming. (At one time it even changed its name to the Met Office “for Weather and Climate Change”.) “”

Seems much like the 2000 Presidential election. Recount, recount, recount until we get the results we want. Readjusting 1934 numbers is somewhat akin to adding precincts in 2070 for an election 60 years earlier. It is the need to believe.

Folks, Nova Scotia is in the Northern Hemisphere, and December has been especially warm. There may be snow tonight, but wet snow from warm water. In fact, Eastern Canada has had a mild December. In Nunavut, there is little sea ice.

The area just described, although only a tiny percentage of the Earth’s surface, is bigger than the US. The huge snowfall that shut down Heathrow was actually 3 inches. Airport crews worldwide are still laughing at that one. A touch of frost in Yorkshire does not mean all of Europe is iced under.

Time to remove head and realise that there is more to the world than California and Texas.

“No need for an independent temperature analysis onion. As I stated in my post #12:29am GISS themselves have “redone” the analysis 7 times. They were wrong 6 times (according to them) so what makes you think they are right this time? How many times does GISS need to say “we were wrong” before we stop believing they are right this time?”

The analysis is pretty much the same across all versions globally. Only by zooming in on specific regions do we see a lot of change between the versions. Even then, the differences in the various versions can be regarded as proportional to the uncertainty. I wouldn’t say 1998 is clearly warmer than 1934 in the US GISTEMP record; they are probably statistically the same.

“Yet you support a man and his work product who stated under oath in a court of law that breaking the law for the common good (his version of it, by the way) is justified. Yeh right, you’re “squeaky clean”… and I’m an astronaut.”

I don’t support him or his work product. I recognize that the work product is pretty good though. What people like me need to see are alternatives. If the station data actually show something significantly different than Hansen finds with GISTEMP, then what I would expect to find was a temperature record that very much disagrees with GISTEMP. That’s really the only way GISTEMP is ever going to be shown to be wrong, you need a benchmark.

The climate-adjusted version shown below shows widespread above-normal temperatures for November 2010.

Just goes to show that, at the Met Office, climate literally is not weather. It doesn’t matter how cold the weather is in one month; the climate version is always above average.

SSTs are taken into account for the climate version, but the data below do not show warming enough to change a negative anomaly into a positive one. If it is like this for the very country the analysis is based on, what is it like for other countries, especially those not well represented?

Let’s see how next month’s version appears, with recent SSTs around the UK very cold.

You are totally wrong on this, and confuse accuracy with precision. I knew someone would jump up and do this, but my writing had gone too long as it was.

Accuracy is irrelevant to the current discussion of whether it is colder or hotter today.

Look up accuracy, precision, and reproducibility. You can continue to improve accuracy with newer and newer instrumentation, but that is useless when trying to answer the main premise of this thread—is it colder or hotter today? I can measure your weight with a balance that goes to six decimal places, but that does not tell me if you gained or lost weight. The extra useless decimal places will actually make it worse, as people with your understanding will just argue about things like whether to round up or down, or whether one holds his breath or not.

I wish I had a hundred thermometers around the globe that read 20° high and had been in operation for two hundred years, instead of a handful of satellites in operation for 30 years. Think about it.

It is absurd to compare 1930s data with 1990s data, setting an old measurement against a method that only goes back 30 years, no matter how “accurate” the new method is, and then say it is warmer under the new measurement. This is apples and oranges.
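The bias-versus-trend point lends itself to a tiny numerical sketch (all numbers invented; a hypothetical illustration, not anyone’s actual analysis): a thermometer reading 20° high for its whole life still yields the correct trend, while splicing a different instrument onto the record mid-stream manufactures one.

```python
# Sketch: a constant instrument bias cancels out of a trend,
# but an instrument change mid-record does not.

def trend(series):
    """Ordinary least-squares slope, in degrees per time step."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

true_temps = [15.0] * 100                 # a flat "true" climate
biased = [t + 20.0 for t in true_temps]   # reads 20 degrees high, always

print(trend(biased))                      # 0.0 -- the bias drops out of the trend

# Now splice: old biased instrument for 50 years, new accurate one after.
spliced = biased[:50] + true_temps[50:]
print(round(trend(spliced), 3))           # about -0.3 per step: a pure artifact
```

The constant offset cancels out of the slope entirely; the instrument change shows up as a spurious trend even though the “true” climate never moved.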

Folks, Nova Scotia is in the Northern Hemisphere, and December has been especially warm. There may be snow tonight, but wet snow from warm water. In fact, Eastern Canada has had a mild December. In Nunavut, there is little sea ice.

The area just described, although only a tiny percentage of the Earth’s surface, is bigger than the US. The huge snowfall that shut down Heathrow was actually 3 inches. Airport crews worldwide are still laughing at that one. A touch of frost in Yorkshire does not mean all of Europe is iced under.

Time to remove head and realise that there is more to the world than California and Texas.

And more than Russia and Ukraine, eh? Oh the hypocrisy is staggering. Especially since the topic is whether or not we care if 2010 is the warmest year in history (which I doubt).

If such a straightforward question requires such obscure and recurring analysis, then what can it possibly mean? We are sure to see warmer and colder years in the coming decades. The warmest-year debate appears to be just another propaganda issue covering the real issue: we still know very little about the climate system’s complexity, do not have a clue about the magnitude of the impact of AGHG and, foremost, have no concept of how to deal with any adverse consequences in an intelligent manner.

Steinar Midtskogen says:
An analysis should include a list of stations fairly well distributed over the area for which an average temperature is to be calculated, and all stations must have an unbroken record requiring an absolute minimum of homogenisation, if any at all, and they must be little affected by urbanisation. This will disqualify nearly all stations, but if the remaining ones show a pattern, it will be a genuine one, and then I think it’s fair to make statements on “warmest” or “coldest” based on what we know.

And that has been done. TonyB has done it for long lived stations. The net result? We’re not warming, but there is a cyclical rise / fall.

I did some similar (though much less detailed) work and found that as you removed stations with the “shortest lifetime” from the data, you got a more stable world. If you looked for “what part of the data has the warming signal,” it was all in the “newer” stations. That led to several discoveries. One was the massive move to airports (which are by definition “newer” than 1919 and often “newer” than 1960, especially in the third world). Another was the tendency for a “step function” hotter in 1990 in the data series, right when a change of “modification flag” indicates that a change in the processing of the data began. I even went so far as to make my own analysis code that does NO homogenizing, NO infill, and NO “anomaly” comparing one location to another (as GIStemp and the other folks do). That is the “Dt/dt” code in the link above.

It finds no net warming in many (most?) countries.

It does suffer from “splice artifacts” (as does any ‘infill’, homogenize, or Reference Station Method code) but part of my goal was to visualize those splice artifacts. (I have two variations on the code, one specifically tuned to mark splices, one not.) This describes the one that smooths the splice in the data a bit more:

IMHO, most if not all of “AGW” is a splice artifact from splicing young poorly sited stations with a warm bias to their adjustments onto old station data. The “Hocky Stick Trick” done on temperature data.

And that will not be the entire world or even entire countries. But it’s better to state what we do know and don’t than simply to fill in the holes and thereby contaminate what we actually do know.

I agree. That’s why I made Dt/dt. One of the interesting bits is that you can look at a given country (say, Sweden, Germany, France, Japan) and see what THEIR instruments say. Then look at subsets of those instruments.

The “Climate Scientists” typically have a cow about this (a spherical one ;-) and assert one simply MUST do it exactly the way they do to get valid results (areal averaging, THEIR kind of anomalies, etc.). This is exactly wrong. In a forensic examination you do things differently from the crook to see where they’ve tuned things. In figuring out a magic trick, you do not sit front and center; you move to the side. For “climate science,” you change the method and see what happens.

What I found was fascinating. Netherlands is very slightly cooling. Right next to it, France is warming. But only some of the places in France. And some of the months! One of the most fascinating bits was that the individual MONTHS go in different directions in various countries. Often right next door to each other. So England and France will have different months that warm and cool. Japan had (roughly) every other month going the other way. Warm, cool, warm, cool, warm, cool…

Even countries in the same geography area could have “opposite months”.

Think about it… No physicality to the cause…

Best guess I’ve got so far is that it’s variations in solar impact on the record from WHEN thermometers were moved to airports and how each airport responds to the added heat of sunshine on tarmac. So if France added a thermometer on the southern Mediterranean coast, it has a different “sun” profile of exposure than the one they dropped in, oh, Paris. Sun heats tarmac and “Voila!” the sunny MONTH warms… (but since both were likely sunny in August, August shows no added warming trend from the move; only the “newly sunny month” shows the trend…)

In one case in particular, Marble Bar Australia, I take the stations in the region and slowly glue their records together, letting you watch the “splice” artifact form. The end result is a greater warming trend than seen at any of the stations alone. This was stimulated by the fact that Marble Bar set a temperature record that has never been beaten; yet GISS shows the area as “warming” since then….

The way the codes, like GIStemp, make their “anomaly” is simply wrong. It is a splice artifact creation system.

I leave it for others to decide if that is due to “malice” or “stupidity” and stick to Hanlon’s Razor: “Never attribute to malice that which is adequately explained by stupidity”; though I note that the level of stupid needed is becoming quite high…

They take a batch of thermometers in, say, Central France in 1950-1980 and make an average of them (with some attempts at merging to a common base), then make a different average of a different set of thermometers in the present “grid / box”. Then those two AVERAGES are compared to get the “anomaly”. This, IMHO, is just broken. Yes, I know Hansen got it “pal reviewed”. It still just lets you cover over splice artifacts prior to making your “anomaly”. For many boxes there simply IS NO THERMOMETER, so the “anomaly” is based on a complete fiction. This is easily seen if you realize that with 8,000 “grid / boxes” in the early GIStemp code and only 1,200 or so thermometers in the 2009 GHCN world data set, many of those boxes have NO thermometer in them. Recently they moved to 16,000 “grid / boxes”, as though that would make things better… but that just means more of them are empty… and filled with a created fiction instead.

The Dt/dt method makes the anomaly as the very first step and does so ONLY comparing a given thermometer to itself. (IMHO, the only proper way to make an anomaly). It finds cycles of warming and cooling, but overall, no warming. Further, it finds that individual countries have “interesting patterns” that point to significant splice artifacts in the data. Especially around the 1990 change of “duplicate number” or “modification flag” in the data. That is also when a huge number of thermometers are dropped from the record. AGW is a data fumble artifact, not a real event.
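As a hedged illustration of the difference between pooling whatever stations report and the self-referenced idea described above (toy numbers and station names invented; this is not the actual Dt/dt code):

```python
# Toy contrast between two ways of building a regional temperature series:
# (a) pool whichever stations report each year and average them, versus
# (b) compare each station only to itself (year-over-year differences).

stations = {
    "valley":  {yr: 10.0 for yr in range(1950, 2011)},   # flat, long record
    "airport": {yr: 14.0 for yr in range(1990, 2011)},   # warm site, starts 1990
}
years = range(1951, 2011)

# (a) Pooled average of whoever reports that year.
def pooled_mean(yr):
    vals = [rec[yr] for rec in stations.values() if yr in rec]
    return sum(vals) / len(vals)

print(pooled_mean(1989), pooled_mean(1991))  # 10.0 12.0 -- a 2-degree "warming" step

# (b) Self-referenced differences: each station compared only to itself.
mean_diffs = []
for yr in years:
    diffs = [rec[yr] - rec[yr - 1]
             for rec in stations.values() if yr in rec and (yr - 1) in rec]
    mean_diffs.append(sum(diffs) / len(diffs))

print(min(mean_diffs), max(mean_diffs))      # 0.0 0.0 -- no station ever warmed
```

The pooled series jumps two degrees the year the warm-sited station enters the mix, even though neither station ever changed; the self-referenced differences correctly show no warming at all.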

While it looks like they have broken the graph again (so I’ll get to patch up this posting again), this posting looks at an interesting graph of temperatures in Sweden. It finds that it was warm in Sweden prior to the LIA…

And that illustrates the problem. Are we warming because 1880 was cold? Or was 1880 cold because 1720 was just as warm as now? AND it shows the 1930’s as about the same warmth as now…. so it’s NOT just a US thing.

All the combined instrumental Arctic stations (most are between 64N and 75N) show recent temperatures no different from the 1930’s when comparing the trend in the actual data, not using zones and creating made-up data.

bubbagyro says:
With more measurements, the outliers statistically rectify themselves, with a too-high here matched by a too-low there. That is the way statistics deals with crude measurements: with lots of readings that sort each other out.

IFF you are measuring THE SAME THING.

Where the temperature codes go off the rails, IMHO, is that they think averaging a bunch of DIFFERENT readings from different things will have the same property. It doesn’t. It’s like sampling 100 cars in 1950 and 10 in 2010 and saying you know the average weight of cars to 1 gram. The problem is they got 1 Chevy, 2 Mercedes, and 97 VWs in 1950, but got 8 Mercedes, 1 Dodge 4×4 Crew Cab truck, and a cement mixer in 2010. Average those all you want; it does NOT improve the accuracy, and the precision is all false precision.

(I’ve had folks endlessly berate me thinking I don’t know you can calculate an average to very great precision. You can. That’s not the point. The point is “Does that average have MEANING?” (not all averages do), and for temperature data, it simply has no meaning. As soon as you start by putting a bunch of different things into your “grid / box” and averaging them, you have lost it. Intensive variables are that way. Using the car metaphor, it’s like they make a “1000 lb adjustment” for each VW vs Mercedes so they are “comparable”… except they aren’t. They each changed weight at a different rate over time, so the “correction” has an unspecified error term. Then there is that cement truck that just got glued on in 1990 at the giant Jet Port…)
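The car-fleet metaphor can be put in numbers (the weights are invented round figures, purely for illustration):

```python
# No individual vehicle changes weight, yet the sample average moves
# because the *mix* of things sampled changed.

kg = {"vw": 800, "chevy": 1500, "mercedes": 1700,
      "crew_cab": 3000, "cement_mixer": 12000}

sample_1950 = ["chevy"] + ["mercedes"] * 2 + ["vw"] * 97
sample_2010 = ["mercedes"] * 8 + ["crew_cab"] + ["cement_mixer"]

avg_1950 = sum(kg[c] for c in sample_1950) / len(sample_1950)
avg_2010 = sum(kg[c] for c in sample_2010) / len(sample_2010)

print(round(avg_1950), round(avg_2010))   # 825 2860
# The "average car" appears to have gained about 2000 kg, though every
# model weighs exactly what it always did: a pure composition artifact.
```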

So the codes end up finding their own error terms and calling it a significant warming…

But you look at individual thermometers and find they are not warming… Strange, that. And you look at “warming places” and find they never set a new warm record (Marble Bar). Strange, that. And you look at “warming” countries and find that the thermometer moved from Paris to Marseille Airport and that only the months that are “Sunnier In Marseille” are warming, while some of the other months are cooling. Strange, that.

And in the end you look at the GIStemp code and say: “Ah HAH! Strange that is!”

(To the rest of your points: Yes! BTW, in most cases the old stations still exist, it’s just not making it from the local country data set into the GHCN. Ask NOAA / NCDC why… )

Doesn’t really matter if 2010 is the warmest year on the instrument record or not. AGW will only be seen in longer-term trends, such that the decade of 2010-2019 will be warmer than 2000-2009, etc., and Arctic sea ice will continue to show a downward trend during the decade of 2010-2019. When anyone, “warmist” or skeptic alike, jumps on an individual event or even year as proof for or against a position, then it starts to smell like politics and not science. The only place one can honestly say they see proof of AGW is in longer-term trends (decadal to multi-decadal), and individual storms, droughts, warm years, cold years, etc. can only be said to be consistent or inconsistent with what GCMs say should be happening relative to increases in GH gases over the pre-industrial average.

Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!

——–

You’re an idiot.

[Watch the name calling. OK, you have some proof. Still, play nice. -Mod]

And now, to avoid being guilty of the Ad Hominem fallacy, let me tell you why.

1. “Peer reviewed research from several data sets confirm that the planet is getting warmer.” — there is evidence strongly suggesting these ‘several’ sets of data are cross-contaminated. That is, they are essentially variations of the same underlying raw data. However, I’ll put that aside, and grant your point.

2. “…the planet is getting warmer. This is not a controversial statement anymore”. – never has been. Even granting point (1) above, no skeptic has claimed the world has not been warming since the end of the LIA. This is a typical straw man people like you, and your ilk, trot out in vain attempts to discredit skeptics.

3. “It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone.” — Baseless claim. Absolutely baseless. There is no evidence — NONE — that consequences of a warming planet will be catastrophic either on balance, or at all, for anyone, let alone everyone (as you ridiculously claim). In short, PROVE IT!

4. “If you have real data to disprove this, please present it!” — Did you read the posted article? It is up to you to present evidence to support your claims, not the other way around. The article nicely rebuts many claims of the CAGW movement. It’s now your turn to rebut the points of the article, if you have any. That’s how the scientific method works. Unfortunately, CAGW conjecture (NOT theory) has turned the scientific method on its head. CAGW is a new religion. It is not science.

gofer says: Is there such a thing as “margin of error” in temperature reporting? I never seem to see anything about error. It seems somewhat improbable to boil all the data down to such a level without any errors anywhere along the line.

That they report temperature averages to 1/100 C implies they believe that is their margin of error. The codes, like GIStemp, do not compute one internally. You can find pronouncements of one from GISS, but it’s not from the code, it’s made up in some other way (God only knows how…)

The raw data were recorded in whole-degree F increments in the USA. Don’t know about the ROW (rest of world), but suspect whole C for some of it. Then monthly averages are computed to 1/10 C in GHCN. (Let’s see, an average of an intensive variable means what again? So we average the month’s daily data and get what again?…)

The US data in whole degrees F may be simply made up, and this happens at several points. The observer may simply “guess” if something is wrong with the instrument. (NOAA took down the web page I’d referenced that gave these directions, but it was the guidance at least until a year or so ago…) So any guess what THAT does to your error band?

Missing data can be created and filled in by a variety of odd processes at just about every step, from NOAA / NCDC “Quality Control” steps that may toss out observations and replace them with an average of nearby ASOS stations that, BTW, round UP to whole degrees C for aviation use and are located at airports near the tarmac. One hopes they use a different data feed from the ASOS (as it has 1/10 C available), but then again, they are using the METAR records that are for aviation, so??? Oh, and this substitution is done silently. Maybe you can find a way to find which dailies were substituted, but by the time it’s in the GHCN monthly averages, it’s hidden… And an average of temperatures (from “nearby” ASOS) means what again?

These data go off to GIStemp, which also creates “data” to fill in missing bits via various averagings of averages. “Good luck with that” on the error bar thing…
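Whether or not the round-up claim above is correct (it is the commenter’s claim, not verified here), the size of the bias such a practice would introduce is easy to quantify with a quick simulation:

```python
# Compare the average bias of rounding temperatures UP to whole degrees
# (as claimed above for aviation use) against rounding to the nearest degree.
import math
import random

random.seed(0)
true_temps = [random.uniform(-10.0, 30.0) for _ in range(100_000)]

up_bias      = sum(math.ceil(t) - t for t in true_temps) / len(true_temps)
nearest_bias = sum(round(t) - t     for t in true_temps) / len(true_temps)

print(round(up_bias, 2))       # ~0.5: every reading inflated half a degree on average
print(round(nearest_bias, 2))  # ~0.0: round-to-nearest averages out
```

A systematic round-up would add roughly half a degree to every reading before any averaging even begins, while ordinary round-to-nearest contributes no net bias.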

Onion says:
But not in the global product. Only in the US product (2% of the global product).

OK, I’ll bite: Exactly what is the “GIStemp Global Product” vs the “GIStemp US Product”?

Since there is only one code, and I’ve run it, and it doesn’t keep the globe and the US in different “products”, I’m curious just what you think you are talking about…

There is an NCDC GHCN and NCDC USHCN product, but that’s not GISS…
Globally 1998 has always been well ahead of 1934. Overall looking at the entire globe not much has changed in GISTEMP over time. GISTEMP 2010 is not much different than GISTEMP 1999. This is all I was saying.

And that is very wrong. They completely changed how they handled USHCN data in November of 2009. They use a whole different data set now: the “newer version” of USHCN, with an entirely different processing history. You will also find that GIStemp in prior years had 1998 hotter and now has it colder, so clearly it is “much different”, even globally.

Being arrested in protests doesn’t make a person inherently dishonest or signal that they are a fraudster.

I agree. It is the testimony under oath in a court of law that violation of the law and destruction of property are JUST and MORAL if you think your cause is for the greater good that makes him “inherently dishonest”. He has TESTIFIED that he thinks “The greater good” outweighs law and property rights. So he would apply that same belief to the property of the temperature data as he is rabid about the “greater good” of doing whatever it takes to get AGW believed.

I’VE sat in protests (and just barely missed being arrested). But I’d never say that it’s OK to break things, damage private property, or do whatever it takes and expect no consequences because you think it’s “for the greater good”. Once you go there, you are no longer a “trusted conservator of the truth” in the data record. He has stated, in essence, “The ends justify the means if the ends are worthy IN HIS EYES.”

I just don’t think there is a good enough correlation to base any assumptions on there.

No “correlation” to anything is needed. Just look at what he has said are HIS guiding principles. “It’s OK to do what it takes if you believe” is a rough paraphrase, and we know “he believes” in spades. He didn’t just protest and get arrested, he testified that folks ought to get off without penalty for breaking the law as their ends were just in his eyes.
The only real evidence that GISTEMP is wrong in my opinion would be to have an independent temperature analysis that “does it right” which shows a different result.

And, IMHO, I’ve done one. The Dt/dt method. It has some minor issues (like the splice artifact one, and you need to apply it to subsets of the data to get a kind of areal average – but doing it by small country blocks like, oh, Netherlands and France, lets you see more interesting details in the data structure). It shows a very different result, but oddly, one rather like the “long stations only” independent study done by TonyB, and like the historical record pre “reimagining”, and like several others out there. It’s GIStemp that’s really wrong, CRU that’s somewhat buggered, and the satellites that are probably best (but they can’t be compared to anything early enough to say anything about climate, and may have some artifacts of their own).

So yes, having “done it” (both running the GIStemp code and making my own), I’m quite comfortable saying “GIStemp is wrong”. That, and the buckets of snow globally while they are running around, hair on fire, screaming “hottest ever”…

Per your desire for a benchmark:

Start with explaining the variation in warming / cooling BY MONTH in these graphs:

with a nice hot 1930’s era and a dropping overall trend, but with an odd compression of the data range at the recent end when GHCN data processes were changed. Tell me that’s showing warming from CO2. Just try to explain it.

And you can do that for every country in the world and you find the same thing. I did. See:

Yes, that’s the Canonical Set of The World Graphs. ALL of them. I spent months making them, so you can look at them all in mere hours. There’s your benchmark (that’s why I constructed it, to be a data benchmark of what the DATA shows prior to any significant processing). Enjoy.

The individual stations are generally NOT uniformly warming. Many countries and places are cooling. For stations that are warming, often they have many months flat or cooling (so the December trend may be warming in all December data while the January trend is cooling across all January data). It all argues for data artifacts (IMHO mostly UHI, land use changes, siting changes – more airports up to 92%, and processing changes in 1990).

To quote myself:

“(To the rest of your points: Yes! BTW, in most cases the old stations still exist, it’s just not making it from the local country data set into the GHCN. Ask NOAA / NCDC why… )”

It’s interesting to note that when the Met Office of Turkey looked at ALL their thermometers instead of just the cherry picked ones that make it into the GHCN, they found Turkey is cooling. “Strange that”… Peer reviewed paper too…

The Great Hot 1934 vs 1998 Race
Hands down on the US record, 1934 wins by a long shot.
In both the records and first-hand accounts of those who lived through it.
What is even more impressive is that it started out on the West Coast in 1932 and migrated to scorch most of the country.
Taking the 2 years into account, this pattern of West to East holds today.

For once, RGates, you are correct: AGW will be seen in the long term… as a natural phenomenon, lovingly referred to as the “Good Old Days”. We’ll tell our grandkids that someday the weather will be nice again, just not now.

Here’s what appear to be the talking points to explain why 2010 is the hottest year:

How Will We Know if 2010 Was the Warmest Year on Record?
Different Groups’ Methods Yield the Same Finding: Warming Surface Temperatures

By Tom Yulsman (Last Updated: December 22nd, 2010)

Including: “In other words, they [GISS] extrapolate across some pretty large gaps — ones as far across as 700 miles.

Hansen and his colleagues argue the approach is valid because research shows that any particular temperature anomaly will tend to be large in geographic reach, particularly at middle and high latitudes.

But the NASA-GISS approach has come in for particular criticism from climate change skeptics, who severely question the scientific basis for extrapolating data across such large gaps.

However, there is significant scientific support for the approach, and for the determination that the Arctic is warming rapidly. Hansen and his colleagues point out that independent measurements of temperatures in the Arctic using infrared instruments reveal significant warming over large areas.

Support also comes from observations of shrinking and thinning Arctic sea ice, thawing of permafrost, and changes to Greenland’s ice sheet — all indicators of widespread warming.”

Quick dodge from the question of extrapolation to other “independent measurements” and then the Hail Mary to the shrinking ice sheets and all that. Half expecting a supporting quote from a polar bear. Instead, it is a quote from:

“Gavin Schmidt, Hansen’s colleague at NASA-GISS, points out that whether you fill gaps or not, you are making a decision about what the temperatures were in those gaps.

“When you have a data gap, you can either interpolate/extrapolate from nearby sources or not,” he says. “Each approach has an implication. If you leave it blank, it is equivalent to assuming that it has warmed at the same rate as the globe. While if you fill it in, you assume that it is changing at the same rate as nearby points. This makes the biggest difference in the Arctic, which is warming substantially faster than the globe. I think the interpolation/extrapolation approach is a better solution.”

A “better solution.” The Arctic is warming most where they extrapolate most.
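Schmidt’s two options can be sketched in miniature (a hypothetical ten-box grid with invented anomalies, purely to show the arithmetic of the choice):

```python
# Ten grid boxes; one ("Arctic") box has no thermometer. Its two neighbors
# happen to be warming faster than the rest of the grid.

anomalies = [0.5] * 7 + [2.0, 2.0]   # 7 ordinary boxes, 2 fast-warming neighbors
                                     # 10th box: no data

# Option 1: leave the gap blank -- the gap implicitly warms at the global mean rate.
blank_mean = sum(anomalies) / len(anomalies)

# Option 2: fill the gap from its (fast-warming) neighbors, then average all 10.
neighbor_fill = (2.0 + 2.0) / 2
filled_mean = (sum(anomalies) + neighbor_fill) / (len(anomalies) + 1)

print(round(blank_mean, 3), round(filled_mean, 3))   # 0.833 0.95
# Extrapolating pulls the global figure up; leaving the box blank holds it
# down. Either way, a decision has been made about what the missing box did.
```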

This article also features a scary looking graph – “Historical surface temperature trends. Credit: U.N. Intergovernmental Panel on Climate Change” – which ends in 2005… for this story allegedly about 2010.

I too share your distrust of the global-warming oversimplifications. My paper on the science of climate change, which I posted on http://billpeddie.wordpress.com, shows substantial oversights in the measurements of temperature to date. I fear that there are now so many media releases purporting to show that we have runaway global warming with a simple carbon dioxide cause that it will take a long time for the general public to realise they have been duped by shoddy science.

R. Gates says:
Doesn’t really matter if 2010 is the warmest year on instrument record or not. AGW will only be seen in the longer term trends

So far, we agree. Now look at a really long-term trend. You find that the averages are trending lower (though with hundred-year-scale “wobbles”), and even our present “high” individual instrumental record (seen at the right margin in the graph linked below) is lower than the individual record excursions (seen as thin colored lines that pop and drop in the same graph).

So you can look at the averages, the really long term averages, and find a nice “peak” in about 5000 BC; and with a consistent down trend since then. And if you fit a curve to the tops of the individual data sets (including our current one) you get the same rounded rise / fall pattern.

So I guess that means we’re in complete agreement: the long-term averages, AND the present record when compared to the individual records, show we’re in a definite cooling trend from 5000 BC to present. (BTW, this also agrees with such archeological data as the way the shore is further out to sea now, as the sea level DROPS over that time period.)

So, we can hold you to this PREDICTION? You will guarantee a record number of record highs and vanishing number of record lows?
and Arctic Sea ice will continue to show a downward trend during the decade of 2010-2019.

As they say “good luck with that” …

shows 2010 high above almost all the others and the low about the middle of the pack. Not seeing any real trend in that set. Just variation, with 2010 more or less ordinary. (Though the longer persistence of ice last winter and the cold this winter do argue for our getting more ice over time. Oh, and the time lag for ocean heat to hit the Arctic is about run out, so when all the present cold water starts getting up there, well, let’s just say I’d bet on more ice cubes. Big ones.)

So, given that long term temperature chart above, are you now ready to embrace the fact that we’ve been cooling for about 7000 years and continue to do so? It’s only when you measure from that nice little “dip” in the “Little Ice Age” that we are warming in comparison. And per that warming: to me, “That’s a good thing”.

E.M.Smith says:
December 25, 2010 at 10:46 pm
…
Richard Sharpe says:
So tell us what those consequences are? Longer growing seasons so we can better feed everyone on the planet? The greening of the Sahara, as occurred during the Holocene Climate Optimum?

Um, no, doesn’t even get close. I can barely make out much change there. It’s really just some better crop land in Canada and Siberia. Even if you assume the IPCC is correct in their overblown numbers.

“The nomads there told me there was never as much rainfall as in the past few years,” Kröpelin said. “They have never seen so much grazing land.”

“Before, there was not a single scorpion, not a single blade of grass,” he said.

“Now you have people grazing their camels in areas which may not have been used for hundreds or even thousands of years. You see birds, ostriches, gazelles coming back, even sorts of amphibians coming back,” he said.

Ira says
——————–
…These things happen and, as I revealed above, I myself have been guilty of shouting at analysts. But, I corrected my error, and I was not asking all the governments of the world to wreck their economies on the basis of the results.
————
As you say it depends on your assumptions:
1. You finally do forget the distinction between US and global temps
2. You engage in guesswork about the reason for the variation in US temp analyses
3. You assume the costs of dealing with CO2 will be expensive when the cost estimates are all over the place and therefore subject to their own assumptions.

A.C. Adelaide: RIGHT ON! …also to affirm most comments here. I’m a “lay” skeptic as of 2006… research daily since, and I can spot the faulty reasoning in just about everything I read, hear, or see published or posted… with FACTS!

@Onion says:
December 25, 2010 at 5:15 pm
“…Furthermore, the change has been systematic with more and more of the measurements by United States cooperative observers being in the morning, rather then {SIC} the afternoon….”

Did they really say that, with that atrocious misspelling included? If so, then “they” ought to be fired. Or else, you’re just another propagandist with poor grammar trolling here.

Onion says:
December 25, 2010 at 5:15 pm
‘Hansen 2001 titled “2001: A closer look at United States and global surface temperature change”…Changes in the GISS analysis subsequent to the documentation by Hansen et al. [1999] are as follows: (1) incorporation of corrections for time-of-observation bias and station history adjustments…’blah blah blah…
My dear Mr. Onion, does it really need to be pointed out that once you “adjust” or “correct” data it is no longer data; it is corrupted data. If it needs correcting it is not good data. It is bad data. It has become a guess. The suggestion by Hansen that it needs to be adjusted is an admission that they are comparing apples and oranges. This becomes glaringly obvious when reading the harry_read_me documents from the Climategate file dump. They don’t know the provenance or quality, so they make some up. To use the vernacular, it is a crock.

Ira: Great post. As you said: “…As we have seen, they keep questioning and analyzing the data until they get the right answers. But, whatever they declare, should we believe it? What do you think?…”

I said in another post here some time back, now that almost all the thermometers are located on tarmac in range of jet blast at the world’s airports, there can be no more “global warming” unless of course the globe is warming. The last few years suggest the globe is in stasis on the ∆T front, and maybe, we have to put a minus sign in there somewhere.

The only other way that T will increase is through the manipulations of data. We’ve seen some doozies – Willis’s “Darwin” evaluation, New Zealand’s fantastic 2˚C increase over the course of a century or so (now entirely discredited), and the seeming paradox of UHI corrections pinning current temperatures at where they are, while reducing historical measurements.

Fortunately the wheels are falling off this ramblin’ wreck of supposed science, and the FOIA requests and inquiries by states’ attorneys general and so forth will accelerate until, hopefully, I’ll see Michael Mann and a bunch of other charlatans either defrocked or, better, refrocked in neat striped uniforms that they can wear for 10 to 20 (years, that is).

“OK, I’ll bite: Exactly what is the “GIStemp Global Product” vs the “GIStemp US Product”?”

On the GISTEMP site there are two graphs. One for globe. One for the US only. That is what I am referring to. It’s the same analysis but this “race” only exists in the US graph. On the global graph the values for 1998 and 1934 have not changed greatly.

The Hansen’99 paper shows the GISTEMP met stations only value for 1998 was about +0.66C and the 1934 value was about +0.04C. The current GISTEMP website shows GISTEMP met stations only for 1998 is +0.7C and for 1934 is still +0.04C. So hardly any change.

And Land+Ocean 1998 has actually gone down. In Hansen’99, the land+ocean value for 1998 is given as 0.58C. On the GISTEMP website today it’s listed as 0.56C. But again these changes of a few hundredths of a degree are so small as to be irrelevant.

“He has TESTIFIED that he thinks “The greater good” outweighs law and property rights. So he would apply that same belief to the property of the temperature data as he is rabid about the “greater good” of doing whatever it takes to get AGW believed.”

I disagree. He could believe in both scientific integrity and civil disobedience.

“And, IMHO, I’ve done one. The Dt/dt method.”

But do you have a global temperature graph that can be compared with the other records? In order to show that GISTEMP is doing it wrong you need to show what the graph should look like done right. That’s necessary so we don’t end up chasing our tails and thinking GISTEMP is wrong when in fact we are nitpicking over one hundredth of a degree or something.

LazyTeenager says:
December 26, 2010 at 3:06 pm
Ira says
——————–
…These things happen and, as I revealed above, I myself have been guilty of shouting at analysts. But, I corrected my error, and I was not asking all the governments of the world to wreck their economies on the basis of the results.
————
As you say it depends on your assumptions:
1. You finally do forget the distinction between US and global temps

The graphic is clearly labeled “US Temp Anomaly ºC”, and I am clear that 2010 may be declared the Global hottest year. I live in the US and am most interested in temperatures here, but, the Global Warming activists say their theory applies worldwide. If GISS cannot get their data analysis straight for their own US 2% of the globe, how can we trust their Global declarations?

2. You engage in guesswork about the reason for the variation in US temp analyses

I think it is pretty clear the analysts were being pressured to come up with results that would bolster the “tipping point” and “runaway warming” theory of their bosses; else why would they spend so much time fiddling with old data? However, since I was not there, I did not conclude that they were either incompetent or guilty of wrongdoing. Do you know the reason their seven analyses differ?

3. You assume the costs of dealing with CO2 will be expensive when the cost estimates are all over the place and therefore subject to their own assumptions.

Apart from that I find your story interesting.

As you say, “the cost estimates are all over the place”. They vary from expensive to astronomical. No one thinks they are inexpensive. Apart from that I find your comment interesting.

A scientist carefully records (almost always in a speckled-gray book of lined blank pages, which may have the word “Ledger” in black letters printed on a white panel on the cover, and probably has some sort of handwritten (or printed) title, somebody’s name and, across the bottom, a line with [some-date] to [some later date] written in it, in at least two kinds of ink).

After there is either some satisfaction with the results or the realization that there is some question to be answered, the scientist (or one of the grunts) types it up, prepares the data in some sensible form, maps out the procedures used and what the findings or questions are, and sends it off to some editor who tidies it up and then sends it forward (after perhaps an iterative process) to be published.

After publication, other scientists take the data and the procedures and try to reproduce the results, and either publish (in much the same form as described above) their findings, or their additional data or changed procedures.

Eventually these exchanges fade and, if people begin to cite these articles without arguing with them, what they settled on (without a “vote”, but maybe after a conference reading) will be treated as if it were fact, and may become the foundation for new ideas.

A “social scientist” works out what needs to be done (and maybe why, perhaps), types that up, and sends it in for assignment to reviewers, who look at it to be sure that it does not say anything that has not already been said by the right people and doesn’t propound any unacceptable ideas; provided it wasn’t assigned to a reviewer who hates the author, it is published. It is then cited as “peer-reviewed accepted science” in passing laws, regulations, and protocols for redistributing the ill-gotten wealth of the people who worked for it to people who, for whatever reason, did not.

“Onion says:
December 26, 2010 at 8:53 am
If the station data actually show something significantly different than Hansen finds with GISTEMP, then what I would expect to find was a temperature record that very much disagrees with GISTEMP. That’s really the only way GISTEMP is ever going to be shown to be wrong, you need a benchmark.”

GISS says November 2010 was the warmest November on record. The other three disagree.
GISS says 2010 may be the warmest year on record. The year is not over yet, but there is every indication that for the other three, 1998 will remain the warmest year.
GISS says July and August had lower anomalies than October and November, according to the graphs. The other three disagree.

“Espen says:
December 26, 2010 at 3:09 am
Instead of temperature anomalies, one should have used enthalpy (total energy) anomalies. 2010 is a good illustration of this, since a lot of the positive anomaly comes from arctic deserts, where it takes a lot less energy to heat a given mass of air by e.g. 5 C than it would take in the tropics – or even in Sahara.”

Then to compound the problem, GISS uses few stations in the Arctic. So if the jet stream causes a corner of the Arctic to be warmer, that seems to be extrapolated over a much larger area than may be warranted. At least that is the impression I am under.

As Brian H mentions, the area around Greenland and Labrador has had less ice and gives very high anomalies (especially with 1200 km smoothing and rectangular maps); this particular effect seems to keep the world warm even when it is not for most of us.
It’s interesting that NASA themselves identify this as the reason for the high global temperature while the populated areas of the world shiver. http://data.giss.nasa.gov/gistemp/2010november/
This was of course not the only warm anomaly in November, but it seems to have quite a constant influence. One can’t help wondering whether the lack of Arctic ice and the warm temperatures (maybe more ENSO, PDO, and AO related) at all the coastal stations are an independent influence on global surface records.
Is there a methodological problem concerning measurement of global temperature in the face of changing Arctic ice cover?

A quote from here: “Phil Jones’ HadCRUT3 dataset is attributing November 2010 the coolest rating among Novembers and among the four datasets – with a 0.43 °C global anomaly, it was the 7th warmest November – while the GISS dataset says that November 2010 was the warmest November on record.”

I just wanted to verify in a nutshell what this all-time high temp is about. I selected a random point in Siberia that turned out to be closest to Turuhansk (65.8 N 87.9 E, http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=1&name=Turuhansk). The village has a record since 1881, with some missing months, as could be expected. I calculated the all-time average for November as -19.5 C. The 1951-1980 average (the GISS baseline) gives the same -19.5 C. Latest November temperatures in GISS:

2006 -23.7
2007 -14.2
2008 -15.1
2009 -22.4
2010 -20.8

Now the GISS area plot with defaults today (go to http://data.giss.nasa.gov/gistemp/maps/ and press Plot) gives an image which states that in that area of Siberia, and on average overall, temperatures are some 4 degrees above the long-term average for the area. So, based on the calculations I made, the image is very faulty, misleading even.

So the issue is now: if I, with one random point, see far too big a deviation in the officially announced area temperature average, what is the reliability of “warmest”? Or is there some error in this logic?
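The single-station sanity check described above can be sketched in a few lines of Python. The pre-2010 November values below are illustrative stand-ins, not actual GISS records for Turuhansk; only the 2010 figure (-20.8) is taken from the comment itself.

```python
# Sketch of the single-station check: compute the station's own 1951-1980
# November baseline and compare the 2010 value to it.
# Pre-2010 values are illustrative placeholders, NOT actual GISS records.
november_temps = {  # year -> mean November temperature (deg C)
    1955: -20.1, 1960: -18.9, 1965: -19.8, 1970: -19.2, 1975: -19.5,
    2010: -20.8,
}

# 1951-1980 average, the baseline period GISS uses
baseline_years = [y for y in november_temps if 1951 <= y <= 1980]
baseline = sum(november_temps[y] for y in baseline_years) / len(baseline_years)

anomaly_2010 = november_temps[2010] - baseline
print(f"baseline {baseline:.1f} C, 2010 anomaly {anomaly_2010:+.1f} C")
```

With these stand-in numbers the station’s own 2010 anomaly comes out slightly negative, which is exactly the kind of mismatch with a strongly positive gridded map that the comment describes.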

Richard Sharpe says:
What is the probability that the warming will continue and is not over?

It is important to keep two things clearly apart. The comment about Roman Optimum return is ONLY in the context of “IFF IPCC is correct and we are warming”. I’m saying “IFF they are right, we get a Roman Optimum”.

But I believe they are not right…

My opinion is that it’s basically zero chance of warming… We topped out in 1998 at a tad under 1934 (in keeping with the 60 year cycle inside a 7000 year downtrend).

Now we’re into the sleeping-sun Grand Minimum (that I like, in a Gilbert and Sullivan lilt, to say as “Now We’ve Got A Major Minima!!” …) with a cold plunge to 2040 or so. Not good. IMHO, it’s going to be LIA redux but slightly colder (in keeping with the general long-term trend). AT MOST the CO2 can dampen the long-term trend, but it can do nothing about the 60 year cycle nor the Major Minima melody… and we’re about at the right timing for a 1500 year cold Bond Event too. So, sadly, I don’t see any warmth hanging on for long. But I can enjoy it while it’s here…

What I’d like to believe and would like to see happen is a return to The Roman Optimum temperatures in about 2070 as we exit the New Little Ice Age (that I pray will be CO2 dampened) and take the next run up the warming half cycle. It would be very nice to have some added grain growing regions then. Heck, it would even be nice if we had some added rain just on the North African coastal areas as the rain bands shifted more south ( i.e. not a complete re-greening of the Sahara ala Sahara Pump theory, just a minor shift of coastal rains).

MarkkuP says:
Strange GISS data in Siberia, do I have correct thoughts?

Absolutely! What you have done is something that can be done pretty much everywhere that GISS says is surprisingly hot in their maps.

All the individual stations show nothing out of line in their history. The aggregate shows warming after GISS processing. Therefore the “warming” is in the processing, not in the data. I’ve seen that for many places, from Marble Bar, Australia, to Canada.

That is the “magic” of GIStemp. It can splice together stations with low or no trend and find warming. The thing that makes this possible, IMHO, is the change of what stations are in GHCN for which years. “Managing the splice”.
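The splice effect described above can be shown with a toy example. This is a caricature of the mechanism (simple averaging of whichever stations report each year), not GIStemp’s actual algorithm, and the stations and values are invented.

```python
# Toy illustration of a "splice artifact": two stations, each with a
# perfectly flat (zero-trend) record, covering different periods at
# different absolute temperatures. Averaging whatever stations report
# in each year manufactures a warming "trend" that exists in neither.
station_a = {year: 10.0 for year in range(1950, 1990)}  # cooler site, drops out in 1990
station_b = {year: 12.0 for year in range(1970, 2010)}  # warmer site, starts in 1970

combined = []
for year in range(1950, 2010):
    vals = [s[year] for s in (station_a, station_b) if year in s]
    combined.append(sum(vals) / len(vals))

print(combined[0], combined[-1])  # starts at 10.0, ends at 12.0: a 2 C rise from flat data
```

Working in anomalies rather than absolute temperatures is the standard defense against this; the comment’s argument is that changing station composition can still leak through the processing.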

…press Plot) gives an image which states that in that area of Siberia, and on average overall, temperatures are some 4 degrees above the long-term average for the area. So, based on the calculations I made, the image is very faulty, misleading even.

Yup. That’s the result of the “homogenizing” and “UHI correcting” and “grid / box anomaly” creation done with “The Reference Station Method”. The individual stations don’t need to rise for a ‘grid box’ to be nice and toasty…
So the issue is now: if I, with one random point, see far too big a deviation in the officially announced area temperature average, what is the reliability of “warmest”? Or is there some error in this logic?

Your logic is very sound. The problem is in proving that the warming is bogus. One needs to repeat your exercise for EVERY “grid box” on the planet (16,000 at the last code update by GISS), and then you can show that it’s not just one little odd data point…

That was one of the drivers for the Dt/dt method: to let me automate some of that and make graphs for local geographies. The Marble Bar posting shows this effect using that tool:

Does this falsify GIStemp? It would require much more work than I have time for (given that I’ve got a family to feed) to do conclusively. I like to think that it does show where the error is located, so folks know where to dig, and that someone will do the work to show that GIStemp is falsified not just in the specifics, but in the general as well.
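A first-differences sketch shows the basic idea behind a “Dt/dt” style analysis: each station is compared only with its own prior year, so no baseline period or reference-station infill is needed. This is a generic first-difference method as I read the description above, not E.M. Smith’s actual code.

```python
# Minimal first-differences ("Dt/dt") sketch: average the year-over-year
# change across stations, then accumulate. Each station is only ever
# differenced against itself, so absolute offsets between stations cancel.
def first_difference_series(stations):
    """stations: list of dicts mapping year -> temperature (deg C).
    Returns a dict year -> cumulative mean year-over-year change."""
    yearly_diffs = {}  # year -> list of per-station dT values
    for record in stations:
        for year in sorted(record):
            if year - 1 in record:
                yearly_diffs.setdefault(year, []).append(
                    record[year] - record[year - 1])
    result, total = {}, 0.0
    for year in sorted(yearly_diffs):
        total += sum(yearly_diffs[year]) / len(yearly_diffs[year])
        result[year] = total
    return result

# Two flat stations with different offsets and coverage periods: no
# spurious trend appears, because stations never get spliced together.
flat = first_difference_series([
    {y: 10.0 for y in range(1950, 1990)},
    {y: 12.0 for y in range(1970, 2010)},
])
print(flat[2009])  # 0.0: flat stations yield zero cumulative change
```

The contrast with naive averaging of a changing station mix is the point: here the method is insensitive to which stations enter or leave the record, at the cost of accumulating any single-year errors forward.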

FWIW, I started by making tables of the actual temperatures for various locations. A bit of code that would calculate the simple averages. I then ran this on very small areas (so you don’t need anomaly processing to see artifacts) and noticed just what you are seeing. While a scan of the data showed no station with an out of the ordinary high temperature, the “grid box” showed a “hot hot hot” anomaly. That was what led to my finding that all the “warming signal” was in the newest 1/10 or so of the thermometers… and that it’s all just a giant “splice artifact” of recent changed processing and airports onto older stable data.

To falsify GIStemp fully would, IMHO, take about a staff of 4 ( 3 if done cheaply and they were dedicated) and about 1 year elapsed time ( 2 years if you have a ‘peer review and publish’ cycle at the end). 1 Staff Scientist to do the publication main writing. 2 Programmer types to run the equipment, write the code, and collect & manage the data sets. 1 Project Manager type (preferably who can also write code when the project hits a snag and write publication quality summaries for the Staff Scientist and PR type press releases and hustle for money while managing budgets and hang out with the clients and keep management happy and talk to the press and…) so the other three can concentrate on getting the real work done ;-)

I just read “Bundle Up, It’s Global Warming” By JUDAH COHEN (NY Times) and I can’t decide if the point was that AGW is now making us colder because the Himalayas, the Tien Shan and the Altai mountains are taller these days or because the snow is making the northern hemisphere more reflective than snow did when the climate was colder. Oh well, at least I can take cold and miserable comfort in knowing that even should I live to the middle of the next ice age AGW will still be working since it causes all sorts of weather to happen and weather is never climate change.

I’ve spent a lot of time looking at the links you provided, and the links they had, and then those other links…. I swear your web site is worse than picking up the dictionary; 3 hours later I can’t remember the word I wanted to look up! Thanks for an interesting start.

I’m new to the nuts and bolts of the Global Warming, er, Climate Change, or whatever they’re calling the “world is getting hotter” debate this month. You mentioned running the GIStemp code on a Linux box, and it made me think. The code for Linux is open source, with hundreds, if not thousands, of folks around the world helping to tweak and maintain it. Has anyone ever floated the idea of an open source global climate model? I suppose contributions would have to be moderated (even Wikipedia has gone that way), but really interested folks with the technical savvy might be able to make small but useful contributions without being intimidated by the scope of the project.

Don’t know if there’s an open source version, there’s a “more portable” version of GIStemp (translated to something better than FORTRAN ;-) and there are published codes for several of the predictive models.

are the folks doing the more open version of GIStemp (and some other stuff too, I think).

“open source climate code” as a google term returns over a million hits…

Glad you liked my site. If there’s anything that you get “stuck” on and would like an explanation of, feel free to just float a question on some thread there. (I’m pretty loose about thread topic discipline…)

It’s all pretty dense at first, and there is a LOT of jargon and acronym soup, but none of it is very hard to master. Just take modest sized bites and chew on each for a while ;-)

I understand you find the NASA GISS temp graph for the US to have a problem. Frankly speaking, I agree with you: if the relative positions of the older records change, there is something wrong.
Now the global analysis is based on several such local graphs, on the same data; the US was only a selection of it. We see trouble in the US graphs/data, and we saw trouble in other locations too. Where do these come from? This looks wrong to me. Do you find this OK?

BTW, does anybody have the GISS program code? Has anybody had a look at it?

Lars;
Since the US has always been considered to have the best sited, best maintained, best built, and best distributed weather station pattern on the planet, you can pretty well toss the lot. It might be amusing first, though, to “adjust” the UHI-contaminated readings by the -9 to -14°C recently found to be appropriate instead of the +.05°C favored by Hansen’s GISS, and re-run some models. I scenario-ize a massive meltdown of GCM computers, world-wide.


HAPPY NEW YEAR to all in this thread. This was a tremendous learning experience for me and I thank all of you, including those who do not (yet :^) see eye to eye with me. It was especially instructive to have several active commenters who know more about climate science than I ever will.