Note that not only was 1998 demoted, but also many other years since 1975 – the start of Tamino’s “modern warming period.” By demoting 1998, they are now able to show a continuous warming trend from 1975 to the present – which RSS, UAH and Had Crut do not show.

Now, here is the real kicker. The graph below appends the post 2000 portion of the current GISS graph to the August 25, 1999 GISS graph. Warming ended in 1998, just as UAH, RSS and Had Crut show.


The image below superimposes Had Crut on the image above. Note that without the post-1999 gymnastics, GISS and Had Crut match quite closely, with warming ending in 1998.

Conclusion: GISS recently modified its pre-2000 historical data and is now inconsistent with other temperature sets. GISS data now shows a steady warming from 1975-2010, which other data sets do not show. Had GISS not modified its historic data, it would still be consistent with other data sets and would not show warming post-1998. I’ll leave it to the readers to interpret further.

————————————————————————————————————-

BTW – I know that you can download some of the GISS code and data, and somebody checked it out and said that they couldn’t find any problems with it. No need to post that again.

This is all extremely interesting (especially the note at the end saying how we don’t need to hear that nobody who knows how to work with the GISS code and data can strengthen your implied accusation of fraud), but I think most people are more interested in the next fair, comprehensive and balanced Sea Ice News.

But this isn’t surprising, as GISS use proxy data for the Arctic – where there are no recordings. So Hadley are correct in their assertion (to me, by email, from Phil Jones – yes, really) that theirs is the one to choose. If there are no recordings, you can’t simply ‘assume’ by using a proxy that is a massive distance away and may itself be subject to surrounding influences. GISS assert that you can, but that’s ‘science’ now!

I’ll offer a hypothesis.
A UHI adjustment lowers the historical temperature of a place.
Apparently it’s just a flag that goes with the station location. In reality it needs to have a date associated with it, i.e., before 1950 it’s rural, after 1950 it’s urban, or something like that.
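If the hypothesis is right, the fix is simple to sketch. The toy routine below (station name, transition date, and offset are all hypothetical, and this is not GISS’s actual code) shows a UHI adjustment keyed to a transition date rather than to a static urban/rural flag:

```python
from datetime import date

# Hypothetical station metadata: the urban flag carries a transition
# date instead of being a single static attribute of the location.
STATION_META = {
    "EAST_PODUNK": {"urban_since": date(1950, 1, 1), "uhi_offset_c": 0.3},
}

def adjust_for_uhi(station, obs_date, temp_c):
    """Remove a UHI offset only from readings taken after the station
    became urban; earlier (rural-era) readings are left untouched."""
    meta = STATION_META.get(station)
    if meta is None or obs_date < meta["urban_since"]:
        return temp_c                      # rural era: no adjustment
    return temp_c - meta["uhi_offset_c"]   # urban era: remove UHI warmth

print(adjust_for_uhi("EAST_PODUNK", date(1940, 7, 1), 20.0))           # 20.0
print(f'{adjust_for_uhi("EAST_PODUNK", date(1990, 7, 1), 20.0):.1f}')  # 19.7
```

A date-aware scheme like this cools only the urban-era readings; a static flag that instead lowers the whole historic record is exactly the problem the hypothesis above is pointing at.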

Regarding the code: of course changes (and their effects) can be valid, but the fact that you can only check out the GISS code as it stands now says a lot. They don’t seem to have had their code in anything resembling a reasonably modern Change Management system.

For GISS to have credibility all code and data for each of the “versions” of their temperature records should be formally documented and a few clicks away.

“Uncertain Climate – In a special Radio 4 series the BBC’s Environmental Analyst Roger Harrabin questions whether his own reporting – and that of others – has adequately told the whole story about global warming”

“He finds that the public under-estimate the degree of consensus among scientists that humans have already contributed towards the heating of the climate, and will almost certainly heat the climate more.”

The first programme is on tomorrow morning at 09:00 BST. It looks like AGW is not to be questioned but the alarmism may be.

Steve McIntyre has a say in this program. I wonder if they’ll mention GISS and James Hansen.

“The data processing flaw did not alter the ordering of the warmest years on record and the global ranks were unaffected. In the contiguous 48 states, the statistical tie among 1934, 1998 and 2005 as the warmest year(s) was unchanged. In the current analysis, in the flawed analysis, and in the published GISS analysis, 1934 is the warmest year in the contiguous states (but not globally) by an amount (magnitude of the order of 0.01°C) that is an order of magnitude smaller than the certainty. ”

Seems off the radar now; it never gets a mention anymore, either in global or CONUS context:

“Although 2008 was the coolest year of the decade, due to strong cooling of the tropical Pacific Ocean, 2009 saw a return to near-record global temperatures. The past year was only a fraction of a degree cooler than 2005, the warmest year on record, and tied with a cluster of other years — 1998, 2002, 2003, 2006 and 2007 — as the second warmest year since recordkeeping began. ”

For readers that want to see the behind the scenes shenanigans that went on at GISS, Steve McIntyre’s post with Climategate email context is helpful.

“I was about to protest the characterization – but I had been arrested, more than once. And I had testified in defense of others who had broken the law. Sure, we only meant to draw attention to problems of continued fossil fuel addiction. But weren’t there other ways to do that in a democracy? How had I been sucked into being an “activist?”.

I really can’t trust a scientist who combines global data gatekeeper role with a jailbird role.

These adjustments of the past are always happening. A couple of years ago, one of these blogs (maybe it was The Blackboard), noticed how temperatures back to the early 20th century were being adjusted down by small amounts. I think that in another 100 years, it will look like the ice age didn’t end 14000 years ago or whenever we think it did now. If you were to ask what the temperature was in 1930, you would also have to state what year you want it referenced from – the number you get this month is different than last month or last year.

Tamino says—“the modern global warming era starts in 1975.” With or without the 1998 adjustment there has been warming since 1975. And it can be called ‘modern’ since it is. And it is ‘global’ if you’re talking about the mean temperature. So he’s right, it’s modern global warming. I wouldn’t use the word ‘era’ though because that’s an exaggeration.

But I know why he puts that in: he wants the reader to infer it’s the “manmade” global warming ‘era’.

All kinds of implied meanings going on in ‘global warming’, i.e., sophistry.

Stephen Schneider, a lead in the 2007 UN IPCC report, had said the following in 1989:
“To capture the public imagination, we have to offer up some scary scenarios, make simplified dramatic statements and little mention of any doubts one might have. EACH OF US HAS TO DECIDE THE RIGHT BALANCE BETWEEN BEING EFFECTIVE, AND BEING HONEST” (emphasis mine)

Refresh my memory. Didn’t four of the world’s most distinguished meteorological organizations record a worldwide temperature drop of 0.595C in 1998?

Hansen’s graph only shows about a 0.24C drop. Is this the homogenized version of 0.595C? This might explain the fifth sentence “But something interesting has happened to 1998 since then”. Looks to me like Dr. James “thumb on the temperature scale” Hansen is back at work again doing what he does best – cooking the books!

I’m glad someone else noticed this. I was involved in a few climate debates on different forums a couple years ago and used NASA graphs to make my point of mild cooling post 1998. Later when I looked back on those conversations, I found the links were dead and those graphs were nowhere to be found. This was the first time I realized that revisionist practices were in play at NASA. Since then I try to save such pieces of information to hard drive just in case.

(Looking at the old article it looks like the format of the formula got a little hosed.)

If I am missing the temperature for March, 1990 in East Podunk, Maine, GISStemp will use all the historical March, April, and May temperatures for that station in an effort to GISS-timate that missing monthly temperature. Because GISStemp is run start to finish every month with the most recent monthly data added to the pot, that previous estimated temperature can change due to the addition of recent data.

The GISStimation might not be that big a deal if there were only a few data points missing, but the problem is there are lots of data points missing. As a result we notice the results of GISStemp changing over and over.
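A toy illustration of why that happens (this shows the general mechanism, not GISS’s actual infilling code): any estimate pooled from a station’s full history moves whenever new data is appended, including estimates for years long past.

```python
def estimate_missing_march(march_temps):
    """Fill a missing March with the mean of all available Marches.
    (Toy stand-in for any infilling scheme that pools the full record.)"""
    known = [t for t in march_temps if t is not None]
    return sum(known) / len(known)

# Suppose March 1990 is missing for East Podunk, Maine.
record = [2.0, 3.0, None, 4.0]         # Marches 1988-1991; 1990 missing
print(estimate_missing_march(record))  # 3.0

# A new warm March arrives in 1992 -- and the *1990* estimate moves.
record.append(6.0)
print(estimate_missing_march(record))  # 3.75
```

With many stations missing many months, every monthly run re-estimates every gap against the updated record, which is why the output appears to rewrite the past.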

Amino Acids in Meteorites says:
August 29, 2010 at 10:13 am
A 0.07 change in GISTemp wouldn’t change the GISTemp trend on a decadal scale. So to say the trend between CRU and GISS is still the same doesn’t apply to what this post is about.

AAIM: Adjusting 1998 makes it easier to claim that temperatures are still rising even in the last decade.

An Anomaly can be misleading (shhh…. don’t tell anyone…it’s a big secret) when that is all one is shown.
But, under the hood, when the absolute temp is added back, the anomaly doesn’t look so big, bad & intimidating:

So I got the HADcrut3gl data from Wood for Trees.
What I did :
Copied the HADcrut3gl anomaly data (1850 to 2010) to 2 columns to plot.
To the first column I added 12.5 C to the anomaly to get the absolute temp back.
(hey, I tried to find the absolute temp, but nobody thinks of posting that any more….WUWT??)
Next, I multiplied the 2nd column by 10 to restore the Boogie Man OMG we’re all going to boil & drown effect, and plotted it all.
So, whoop, there it is. Doesn’t look so big & ugly anymore.
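For anyone wanting to reproduce the exercise, here is a minimal sketch of the same arithmetic. The anomaly values below are made-up stand-ins for the HadCRUT3gl series, and 12.5 C is the offset assumed above (HadCRUT’s published absolute baseline is nearer 14 C, but any constant offset makes the same point about axis scaling):

```python
# Made-up anomaly values standing in for the HadCRUT3gl series (deg C).
anomalies = [-0.3, -0.1, 0.0, 0.2, 0.4, 0.5]

BASELINE_C = 12.5                               # assumed offset, as above

absolute = [BASELINE_C + a for a in anomalies]  # column 1: absolute temps
scaled = [10 * a for a in anomalies]            # column 2: the "x10" view

# On a common axis the absolute series barely moves relative to its mean,
# while the x10 series swings across the whole plot.
span_abs = max(absolute) - min(absolute)
span_scaled = max(scaled) - min(scaled)
print(f"absolute: span {span_abs:.1f} C around {sum(absolute)/len(absolute):.1f} C")
print(f"scaled:   span {span_scaled:.1f}")
```

Plot both columns on the same axis and the point is obvious: the absolute series is nearly flat against its own magnitude, while the magnified anomaly dominates the frame.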

No need to talk about the code, nor to rework a person’s words to ascribe non-existent dark reasons for not talking about the code.

We’re just looking at the outputted results and how they were adjusted over time into saying something different, now different from other major temperature datasets. No need to go into how good truly-raw temperature records were ingested, homogenized, and excreted as those results.

rbateman says:
August 29, 2010 at 11:05 am
An Anomaly can be misleading (shhh…. don’t tell anyone…it’s a big secret) when that is all one is shown.
But, under the hood, when the absolute temp is added back, the anomaly doesn’t look so big, bad & intimidating:

To compensate for UHI effect, GISS does not adjust current urban temperatures downward to remove the effect of UHI, but instead it adjusts all historic temperatures downward, thus emphasizing the effect! This invalidates everything they say.

It’s just another example of knowing what the answer is in advance (in this case steadily rising temperatures) and then adjusting the data to fit the required result. We shouldn’t be surprised because it isn’t science, it’s religion.

Gavin “predicted” 2010 would be the warmest on record. Gavin is g(e)o(i)d. Well he figured if it goes in El Nino mode and with GISS adjustments… There is no better race to call than the one you already know the result of…

Oh good – no warming since 1998. Surely the Arctic sea ice will continue its 2008 and 2009 summer minimum recovery from the bad-weather summer of 2007, just as the citizen/amateur climatologist/skeptics confidently predicted.

…what will Nature say? And I’m not talking about the publication. – Anthony
Exactly.
You can’t accuse Nature of being incompetent, a charlatan, staging a hoax, revising history, or having been arrested.

GISS and other activist organizations don’t need to worry about their long term credibility, they only need it long enough to get the global control and one world government in place. Once that is done no one will dare question them.

AAIM: Adjusting 1998 makes it easier to claim that temperatures are still rising even in the last decade.

K.R. Frank

I know.

What I was doing was trying to head off an argument from a couple of weeks ago. In a previous post about James Hansen/GISS by Steven Goddard there was an argument made by a commenter that Steven Goddard was treating James Hansen unfairly (thus making WUWT look bad) by pointing out the differences between GISTemp and other data sets over the last 12 years, i.e., 1998 to 2010. The commenter said that was too short a time period and that over the long term the trend between CRU and GISS has been the same. The shaving of the data by things like 0.07 here and there won’t show up in the long-term trend. They get blended in. But, as you point out, it changes the data set in the short term into continued global warming.

“It’s just another example of knowing what the answer is in advance (in this case steadily rising temperatures) and then adjusting the data to fit the required result. We shouldn’t be surprised because it isn’t science, it’s religion.”

If the Church adjusted data as fast as GISS, they would have zero believers.

Is there a short history of weather stations? My thought is that such stations and the collection and reporting of data with highs and lows, and averages, grew out of local interests. There probably was not an intention to use this in any other way. I doubt climate change studies were on anyone’s mind as the number of stations, the data, and the processing grew in size and complexity. Seventy or 80 years ago such tasks were done by hand, and “computers” were folks (mostly women) that sat in long rows in big rooms and did arithmetic. http://www.thenewatlantis.com/publications/the-age-of-female-computers

I think most of the issues being labeled as corruption, fraud, and the like are off the mark when applied to most of the work. Did we see this in the 1970s when Earth was said to be heading for an ice age? In the last 25 years we have seen those in leadership positions influence the presentation of results (think Gore, Hansen, others?) using flawed reasoning. On that there seems to be agreement by many.

But, go back in time to 1940, 50, or 60 and pick a weather-data analysis problem and decide how you would solve it with the material and technology at hand.

Remember, it was never intended to be the “end-all” for studying climate change.

BTW – I know that you can download some of the GISS code and data, and somebody checked it out and said that they couldn’t find any problems with it. No need to post that again.

No Steven, that’s not what we said, not at all.

We said your earlier accusations of “GISS working hard to make 2010 the hottest year ever” were without merit because you did not justify your statements. There had been no code changes noted to substantiate your insinuations of fraud and manipulation. You were anthropomorphizing unchanged computer code and attributing all sorts of implied malfeasance to a non-living machine. Many issues were pointed out: you were cherry-picking arbitrary six-month periods, shifting arguments (such as comparing 2010 to 1998 when the previous record for GISS was 2005), and using deceptive chartsmanship in your previous presentation.

Because of the content of your posts I have said that they reduce the credibility of WUWT. Given the apparent lack of critical thinking or following a logical argument as I have just noted in the snark quoted above, I stand by that assessment.

On to this post. You may be surprised that I agree that GISS continually adjusts older dates downward. This subject has been covered in detail previously on WUWT and at Climate Audit, as John Goetz noted above.

You may also be surprised that I agree the GISS code routines for estimation of trends, which rewrite history, are flawed, just as John Goetz pointed out more than two years ago. Yes, I think what they do is in error, a known and discussed problem.

Had you not felt compelled to add that nonsensical snark at the end of your post there would be little reason for me to comment in this thread, but you felt compelled to get in a dig, and a completely misguided one.

I will repeat my first criticism of your GISS series in closing.

Gistemp methodology is accurately documented and the code is available. It has been reproduced by multiple citizen scientists.
It departs from other indexes for known reasons which can be debated both in the literature and in the blogosphere. Backhanded insinuations of evil intent, such as this, do little to further productive discussion.

Part of that quote is probably incorrect: it is probably not accurately documented. But the code has been available long enough that the error is moot.

The GISStimation might not be that big a deal if there were only a few data points missing, but the problem is there are lots of data points missing. As a result we notice the results of GISStemp changing over and over.

And this does not bother you? It bothers the hell out of me.

1) They should be very careful about trumpeting “the hottest evah” until all the relevant adjustments come in, if there are likely to be revisions. But, no, they shout out at the first opportunity. Clearly they think the temperatures are correct immediately.

2) I fail to see how large scale revisions can occur many years later. It boggles the mind.

3) I dislike the fact that all revisions are downwards for earlier events and upwards for recent ones. That is extremely unlikely.

4) Why, if your assessment is correct, does 1998 get a big revision downwards, but 1997 and 1999 are spared? How likely is that?

I suspect you would like to believe that GISS are honest, so will buy their “revision” line, because the alternative is that it is deliberate deception. I believe that you are being had.

“I was about to protest the characterization – but I had been arrested, more than once. And I had testified in defense of others who had broken the law. Sure, we only meant to draw attention to problems of continued fossil fuel addiction. But weren’t there other ways to do that in a democracy? How had I been sucked into being an “activist?”.

My estimate for the start of their warming was 1977, not 1975, but that is not important. What is important is that their entire warming prior to 1998 is faked. And so is warming shown by NOAA and the Met Office: satellite data simply cannot see it. What satellites do see is a temperature oscillation, up and down by half a degree for twenty years, but no rise until the 1998 super El Nino arrives. These oscillations are real and record the ENSO phases in the Pacific, El Ninos alternating with La Ninas for twenty years. There were five such El Ninos during this period.

If you compare the satellite temperature measurements with, say, HadCRUT3 from Phil Jones, you find that these same temperature oscillations are present in their data as well and can be lined up with satellite observations. But what is different is that the valleys between the El Nino peaks – the La Ninas – have all become much shallower in their curve. This is how they change a horizontal temperature trend into a rising temperature trend to show warming. Same for NASA, but NOAA goes them both one better: they stay with the peaks entirely and throw out all intervening La Nina valleys. This transmogrified curve is then said to prove that “anthropogenic global warming” is here, and Hansen testifies to it in front of the Senate.

What we are dealing with is a massive and long-lasting scientific fraud, one compared to which Climategate is just the tip of the iceberg. And since three organizations are involved it is also a criminal conspiracy and should be investigated. It started in the late seventies and requires that there had to be a synchronizing signal at the time, something like “let’s all follow the peaks and adjust the intervening values as needed.” The first four El Nino peaks do indeed increase in sequence, but this is balanced by the valleys in the real temperature curve. But not in what we are shown by custodians of temperature curves.

By a coincidence, Hansen in the late seventies defined a new method for estimating global temperature change for GISS. He had just resigned from the Pioneer Venus project and joined GISS because, in his own words, “The composition of the atmosphere of our home planet was changing before our eyes…” A detailed description of his method of temperature analysis has never been published. Clearly an accounting is needed of how these temperature curves came into existence. They should not be used for any temperature analysis for which alternate satellite temperature measurements exist.

what will Nature say? And I’m not talking about the publication. – Anthony

Nature will say the following:

“There were many, many signs for you to see that it was high time you started transitioning towards a sustainable society. But you decided to listen to the people who said what you wanted to hear instead of choosing the best and most rational path (like this guy did). You had your chance to do what was necessary. You didn’t. Now I will do it for you. You will not like it.”

And when I say “the people who said what you wanted to hear”, I mean the people who run this blog.

REPLY: Signs are funny things; depending on how they are displayed, people often interpret them differently. Some people ignore signs that they deem inconvenient, like Gore and hurricane frequency (see the Katrina 5 year post). So, actually, we are in the business of saying what you don’t want to hear, Günther, or is it “Nevin”? – Anthony

“GISS recently modified their pre-2000 historical data, and is now inconsistent with other temperature sets.”

You persist in misrepresenting the record. GISS have NO HISTORICAL DATA. They import data from GHCN, USHCN, and SCAR. Then they process that data according to an algorithm and publish results. Those results have changed and will continue to change as the result of several things:

1. the processing error that Steve McIntyre caught. If they didn’t fix this, people would scream.
2. The missing data issue which John Goetz discusses. John did REAL WORK on this issue. Read John’s work.
3. If new data or revised data is put into any of the data sources.

There are real concerns with the science of global warming. Real uncertainties and real data manipulation/cherry picking issues. The issues you raise about GISS are a distraction from the real issues. They divert people’s time and attention away from the real issues. They are issues that people like John, SteveMc, JeffId and others have looked at long, long ago and put to bed. It’s good to put your spurious issues to bed so that people can focus 100% of their attention and doubt on the real issues. Like the issues that Anthony and Dr. Pielke are actually doing REAL WORK on.
I’ll note that, to my knowledge (having searched the FOIA requests made to NOAA and to CRU), you’ve never joined us on a hunt for real answers to real problems. Other people may enjoy this side show; I’ll pass. I think raising spurious issues detracts from the reputation of WUWT. BTW, what’s the ice look like?

“In twenty years time I think the world will thank “the jailbird” and
curse the Heartland and other disinformation teams.

REPLY: With people like William Connolley helping such efforts at Wikipedia, perhaps, but what will Nature say? And I’m not talking about the publication. – Anthony”

Even if Nature should decide to verify the most hyperbolic projections of the climate alarmists I would hope future humanity would still assume nearly universal revulsion for Hansen and his ilk. If it doesn’t it will mean humanity has lost any capacity for rational self-interest in its own survival and progress.
I have never worked as a scientist but I have always felt that the idea of Science was humanity’s most valuable achievement. The notion of a system to maximize the objectivity of human knowledge has led to the only assuredly “unprecedented” phenomenon we have ever seen. That is the advancement of an unprecedented percentage of humanity to levels of personal well being never experienced in history, or pre-history for that matter.
Admittedly Science has never come close to maximizing the ratio of objectivity/subjectivity but the continuous improvement program that Science provided has led us from a point where the number of people living beyond a subsistence existence was smaller than the percentage of students who ace the SAT to where more than half the human population can be ranked as middle class.
Unfortunately, the only truly “endangered species” in the world today is the “scientist” who is willing to construct their experiment, run it, and let the chips fall where they may. Climate science may be the most egregious example of this phenomenon, but thanks to things like PNS it is metastasizing at an alarming rate, and the cancer that it is does and will do damage to human prospects beyond what anything other than a massive meteor strike or another ice age could accomplish.
I have always maintained that the best of what is “known” by humanity at this point is far from true “knowledge”, but that the pursuit of the ideals of Science was leading us, despite many setbacks, ever closer to that goal. That many people doing science seem to have decided that, because total objectivity can never be achieved, the quest for ever greater levels of it can be usefully abandoned and personal interests and opinions substituted for it, can only lead to chaos and anarchy. When what we “know” becomes, as it increasingly has, a matter not of rational investigation but of polling data, we will have lost much more than Nature could ever take from us, and we will have no one to blame but ourselves, with an extra ration of disdain for those who led us down that path.

Signs are funny things; depending on how they are displayed, people often interpret them differently.

Yes, that’s your art isn’t it, displaying it in such a way that people interpret it differently? I’m already curious how you will display the Arctic non-recovery for people to interpret. Or will you wait for the refreeze to kick in and do your annual ‘look how it’s growing!’-blog post?

REPLY: Well “Neven” have a look at the next Sea Ice News, coming soon and read the graphs yourself. How’s that secret forecast on your secret blog coming over there? – Anthony

jeez, Anthony: Yes, jeez did find data going back to 1999 and posted links on CA. I’ve already downloaded all of it but have not had a chance to crawl through it. I’m not sure it merits much more investigation, because I don’t think we will learn any more than we know already, which is that the algorithm modifies the past when temperatures are added to the present. That should be enough to cause a double-take.

Thanks jeez!

The helpfulness of your contribution towards that work of John Goetz was fully noted at the time.
:-)

On the other hand, it is Hansen who got Congress’ ear, Gore who borrowed Green as his Armor and Fear as his Axe, and the loose cannons found an audience for their interpretation, and we narrowly missed (so far) getting draconian taxes slapped like heavy chains around our necks.
Never forget that this overbearing Agenda is launched on numerous fronts.

GISS is digging the grave of whatever credibility they might have, in ways they might not have anticipated, with these adjustments. For example, suppose for the purpose of argument, that the predictions are true that rising temperatures will result in more hurricanes, floods, malaria, polar bear decline, pestilence, locusts, bad breath, etc. (Yes, I know that these predictions are mostly bunk, but the AGW community is banking on them.) If temperature is not really climbing, then we will not see these associated effects, despite a continually rising GISStemp. This will serve to increase doubt in the temperature-calamity connection.

In order to support a temperature-calamity connection, it will become necessary to “adjust” polar bear counts, tide gauges, hurricane reports, etc. Eventually all of science and economics will have to submit their numbers to AGW commissars for adjustment in order to maintain the party line.

“BTW – I know that you can download some of the GISS code and data, and somebody checked it out and said that they couldn’t find any problems with it. No need to post that again. ”

Nice try. More than somebody checked it out. Quite a few software professionals, and people who have been professionally employed as statisticians and data analysts, have downloaded the code and checked it out. Some of us have “refactored” the code, some of us have emulated the code in other languages. Some of us (clearclimatecode) have reported errors and had those errors corrected.

Nobody SERIOUS has ever argued that there are no “problems” with the code. There are plenty of interesting technical issues, but not a single one of those issues is important to the real debate about the uncertainties of global warming. As new data comes into the algorithm the past ESTIMATE will change. This is a consequence of that method. BTW, the independent computations of global warming (JeffId, myself, others) show higher rates of warming than GISS. Don’t like the answer GISS gives? The most statistically sound approach (JeffId’s and RomanM’s) shows the same if not more warming.

Snowlover123 says:
August 29, 2010 at 1:01 pm
GISS should be called the Great Institute for Stupid Scientists.

Did you know the “G” in GISS stands for Goddard ?
I wonder what the psychological explanation is of stevengoddard choosing a pseudonym honoring this “Institute for Stupid Scientists” which WUWT attacks all the time…

I’m glad to see you’ve picked up the HTML strike tag and unicode smiley face in the many months since I’ve first seen your Comments.
Great journeys start with the smallest of steps – I’m sure you’ll learn plenty about climatology in the next 30 or 40 years.

One question that popped into my mind back then was whether or not – with all of the estimation going on – the historical record was static. One could reasonably expect that the record is static. (…)
—-
On March 29 I downloaded the GLB.Ts.txt file from GISS and compared it to a copy I had from late August 2007. I was surprised to find several hundred differences in monthly temperature. Intrigued, I decided to take a trip back in time via the “Way Back Machine”.

Here I found 32 versions of GLB.Ts.txt going back to September 24, 2005. I was a bit disappointed the record did not go back further, but was later surprised at how many historical changes can occur in a brief 2 1/2 years. (…)
—-
On average 20% of the historical record was modified 16 times in the last 2 1/2 years. (…)

Sure sounds to me like John Goetz is saying GISS is changing the historical record. Assuming the data at GHCN, USHCN, and SCAR isn’t being changed by GISS, then it appears John Goetz is saying GISS is changing their own historical record, which, going by what you’re saying, was assembled with data from those other sources.

If you’re going to heap blame on Steve Goddard over this issue, you should be giving John Goetz an equal share as well.
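The kind of version comparison John Goetz describes is easy to reproduce once you have two snapshots in hand. A minimal sketch (the file format here is simplified to a year followed by 12 monthly values per line; the real GLB.Ts.txt carries extra header and summary columns that would need skipping):

```python
def parse_table(text):
    """Map (year, month) -> value string from a simplified table where
    each line is 'YEAR v1 ... v12', whitespace-separated."""
    table = {}
    for line in text.strip().splitlines():
        parts = line.split()
        year = int(parts[0])
        for month, val in enumerate(parts[1:13], start=1):
            table[(year, month)] = val  # compare as strings: any edit counts
    return table

def count_changes(old_text, new_text):
    """Count monthly values present in both snapshots that differ."""
    old, new = parse_table(old_text), parse_table(new_text)
    return sum(1 for key in old if key in new and old[key] != new[key])

# Two made-up snapshots of the same year, one month apart.
old = "1880 -20 -25 -18 -11 -4 -1 2 4 1 -5 -12 -18"
new = "1880 -22 -25 -19 -11 -4 -1 2 4 1 -5 -12 -18"
print(count_changes(old, new))  # 2 of 1880's monthly values were changed
```

Run over every archived version pair, a count like this is how one arrives at figures such as “20% of the historical record modified 16 times.”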

Mikael Pihlström says:
August 29, 2010 at 9:53 am
“[…]In twenty years time I think the world will thank “the jailbird” and
curse the Heartland and other disinformation teams.”

Wait. So you think that stuff that Hansen produces there is “information”? In other words, you BELIEVE that 1998 was slightly colder than 1934 when Hansen says so, and later, when Hansen makes it a little colder than 1998, you believe him again? And now, you believe that 1998 got colder than it really was?

Look, Mikael, the past doesn’t really work that way. Once something has happened, it has happened. It doesn’t change after the fact. Simple concept really once you get used to it.

The 2010 melt season keeps stumbling forward. Days of 20-30K decreases (August 23rd, 25th and 26th) are alternating with decent daily decreases of 50K (24th and 28th) and a very decent daily decrease of 78,437 square km (on the 27th).

There are volumes and volumes of documentation proving that the Earth is the center of the Universe ,that earth is 6,000 years old, that Continental Drift is impossible, and that sea level will rise 1-6 metres during the next century.

There are volumes and volumes of documentation proving that the Earth is the center of the Universe ,that earth is 6,000 years old, that Continental Drift is impossible, and that sea level will rise 1-6 metres during the next century.

Steven again demonstrates that he is either unable to comprehend the criticisms leveled against him or is attempting to use sophistry to deflect criticism.

Here he seizes on the word documentation and runs off on an unrelated tangent implying that my use of the word is somehow an appeal to authority. It is clear from my comments and my wording that I am saying that criticism of Gistemp should be done with an understanding of method, and documentation exists to facilitate this analysis. Steven does none of the real analysis while others have and do.

I am critical of Steven’s hand waving accusations that lack substance. This is not an appeal to authority. This comment of Steven either shows complete lack of reading comprehension or is a deliberate ruse to avoid criticism.

His next response will likely be that he was not talking about my comment.

You misunderstand me. I am praising Hansen for proving that Had Crut, UAH and RSS are all dead wrong.

DMI showed July as coldest on record north of 80N, while Hansen showed that region well above normal. I am praising Hansen for proving DMI’s egregious fault.

The noble Hansen
Hath told you big oil was ambitious.
If it were so, it was a grievous fault,
And grievously hath big oil answered it.
But Hansen says big coal was ambitious,
And Hansen is an honorable man.
He hath brought many captives home to Copenhagen,
Whose ransoms did the general coffers fill.
Did this in big CO2 seem ambitious?
Yet Hansen says big CO2 was ambitious,
And Hansen is an honorable man.

… 72 of the 168 monthly temperature records between 1880 and 1894 were changed.

Only 43% of the records were changed (almost all lower) in that period. That is in just one month – now give them 12 chances a year over 20 years and you can make a big change in the temperature trend.

Only 28 of the 130 years did not have a changed record.

If GISS only takes the data from GHCN, then why is the NCDC changing the temperature records every month?
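As a quick sanity check on the percentages quoted above, using only the counts given in the comments (72 of 168 monthly records changed, and 28 of 130 years with no changed record):

```python
# Sanity-check the figures quoted in the comments above.
changed, total = 72, 168              # monthly records changed, 1880-1894 (as quoted)
pct_changed = 100 * changed / total
print(f"{pct_changed:.0f}% of monthly records changed")   # 43%

years_total, years_unchanged = 130, 28
years_changed = years_total - years_unchanged
print(f"{years_changed} of {years_total} years had at least one changed record")
```

So “only 43%” and “only 28 of 130” are internally consistent with the quoted counts; nothing here speaks to why the records changed.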

I’m glad to see you’ve picked up the HTML strike tag and unicode smiley face in the many months since I’ve first seen your Comments.

Ah, so your in-depth research has concluded he is using the unicode smiley (ctrl-shift-u together then 263a then enter for my Debian Linux setup) versus the ANSI smiley (alt with the 1 on the numeric keypad for a Windoze box)? How did you determine the difference?
☺

I know several well educated professionals who voted for Obama, which proves that he must be a good president.

Now Steven seizes on the single word “professionals” in Mosh’s comment. This may be the worst rhetorical argument I have ever seen Steven make.

First, Mosh notes that those professionals have criticized or corrected gistemp errors, not praised or endorsed it. So this is another example of a complete lack of reading comprehension or deliberate misdirection on Steven’s part.

Second, Steven, are the professionals that you know who voted for Obama employed as professional presidential competency evaluators or in related fields? Because if not, your example makes no sense.

And third, of course, throwing in the blatant political judgment as a flourish, in order to elicit sympathetic ears.

Amino Acids in Meteorites says:
August 29, 2010 at 4:10 pm
I think they wanted him to get their ear. And here was manipulation of temperature there too!

You mean, Hansen had inside help? No! That would be a real Agenda at work.
Oh, darn that HADcrut3gl and Phil Jones ‘no warming the past 10-15 years’.
From what I get out of Phil Jones CRU 91/94/99 sets, the Pacific NW is a bellwether for global means to follow. Takes a couple of years to germinate.

Sure sounds to me like John Goetz is saying GISS is changing the historical record.

Not really. GISS’s algorithm changes the conclusion details they make regarding the historical record, but the record itself (the raw data) does not change as a result of GISS.

I guess, looking at it another way, GISS’s estimate (or assumption) as to the value used to represent missing data does change over time. So maybe that is one way to claim they change the historical record?

It is Steven who initiates the process of dodging and weaving in tangential non sequiturs.

Pointing them out is the only way to address his evasiveness and lack of rigor.

The issue has been put forth succinctly.

Conclusion : GISS recently modified their pre-2000 historical data, and is now inconsistent with other temperature sets. GISS data now shows a steady warming from 1975-2010, which other data sets do not show. Had GISS not modified their historic data, they would still be consistent with other data sets and would not show warming post-1998.

You have not been addressing the substance of the issue.

But you have been quite vociferous about Steve not directly addressing what you were saying while you were not directly addressing the issue.

So, do you have anything to say about the issue, or shall you continue to talk about what you are trying to forcibly elevate to the status of “issue”?

On to this post. You may be surprised that I agree that GISS continually downward adjust older dates. This subject has been covered in detail previously on WUWT and at Climate Audit as John Goetz noted above.

You may also be surprised that I agree the GISS code routines for estimation of trends, which rewrite history, are flawed, just as John Goetz pointed out more than two years ago. Yes, I think what they do is in error – a known and discussed problem.

Had you not felt compelled to add that nonsensical snark at the end of your post there would be little reason for me to comment in this thread, but you felt compelled to get in a dig, and a completely misguided one.

Not really. GISS’s algorithm changes the conclusion details they make regarding the historical record, but the record itself (the raw data) does not change as a result of GISS.

I guess, looking at it another way, GISS’s estimate (or assumption) as to the value used to represent missing data does change over time. So maybe that is one way to claim they change the historical record?

If you obtained from GISS the data they crank through to obtain such wonderful products as their global average whatever, would you be looking at original unaltered data from GHCN, USHCN, and SCAR, or something adjusted?

So is counterfeiting changing the conclusion of the bearer on demand that his note is the real deal.
Upon examination, the promise to pay falls apart.
Substitute temperature for currency, and GISS is passing off the conclusion that the past 100 years of climate is a steady upslope, with no real peaks or downturns, as representing the true nature of the data. That is exactly what has transpired: the alteration of the raw data to represent something it does not, and the marketing of GISS representations to the unwary in support of conclusions that are not true.
And, to this day, they (GISS) are still at it.

When records are missing, they estimate values for the missing data points, when possible. (Ooops, I guess they do produce estimated records).

Do the algorithms GISS uses to do the estimation cause the results to change each time they are run? Yes they sure do.

Do the algorithms GISS uses to do the estimation enhance the global warming trend? I do not know, but given that one is trying to fit a missing data point into a trend, I would not doubt that the algorithm creates data that supports the underlying trend. In other words, if the trend appears to be up, GISS will create data that supports this.

Is this algorithm a nefarious attempt by Hansen and Ruedy to create a trend where none exists? I believe that unlikely. Support a trend, maybe, but create … no.
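To make the “estimates change when re-run” point concrete, here is a toy sketch. This is my own illustration, not GISS’s actual code – the mean-offset fill is just a stand-in for their reference-station estimation, and all the numbers are invented:

```python
import statistics

def estimate_missing(target, reference):
    """Fill gaps in `target` using the mean offset from a nearby
    `reference` station (a toy stand-in for reference-station
    estimation -- NOT the actual GISS algorithm)."""
    offsets = [t - r for t, r in zip(target, reference)
               if t is not None]
    bias = statistics.mean(offsets)
    return [t if t is not None else r + bias
            for t, r in zip(target, reference)]

# A station with one missing month, and a nearby reference station:
target    = [10.0, None, 11.0, 12.0]
reference = [ 9.5, 10.2, 10.4, 11.6]
filled = estimate_missing(target, reference)

# Re-run after one more month of data arrives: the offset changes,
# and with it the estimate for the SAME old missing month.
filled2 = estimate_missing(target + [14.0], reference + [12.0])
print(filled[1], filled2[1])
```

The second run revises the “historical” estimate upward purely because new data arrived downstream – which is the behavior being argued over in this thread.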

Why does Hadley show that 1998 is the warmest? Well, simply because they have been underestimating the warmth of the 2000s, as confirmed by the European Centre for Medium-Range Weather Forecasts (ECMWF).

This has been explained over and over again. You’ve cherry-picked a short time period where the discrepancy between indexes appears exaggerated. You have a hand-waving explanation that this period is chosen because of some magical El Niño peak-to-peak alignment.

Do El Niños always begin on January 1st of a given year?

The discrepancy breaks down one year downstream, in 1999, or ten years upstream, in 1988.

Your “point” is nothing but an artifact of cherry-picking periods. And the funny thing is you accuse others who systematically dismantle your non sequiturs, snarks, and misstatements as cherry-pickers. You even made a post about it. Again, pot, kettle. Now you bob and weave again. This post was not about GISS and its trend compared to other indexes, this post was about GISS modification of historical records. Please read John Goetz’s posts and comments on the subject and you might learn something about the subject you write so much about.

This is my last comment on this thread. When you post your rebuttal animations make sure to include all four indexes this time around and don’t conveniently leave out the ones that don’t support your narrative as you have previously.

“I’m glad to see you’ve picked up the HTML strike tag and unicode smiley face in the many months since I’ve first seen your Comments.
Great journeys start with the smallest of steps – I’m sure you’ll learn plenty about climatology in the next 30 or 40 years.”

*snort* Is that the best you’ve got?? Weak, very weak. You must be gettin’ old. ☹

Yes, I recently started using that smiley/frown because I prefer it to the more garish ones. But I’ve been using HTML & BBCode for years before WUWT went on-line. So much for your guess.

By claiming that I’m so ignorant of the subject, you’re implying that you’re knowledgeable. So speaking of guesses, give us your expert opinion: GISStimate your prediction for, oh, I don’t know… how about the lowest N.H. ice extent in 2011? What’ll it be? And don’t pull a Hansen by using the Sharpshooter’s Fallacy; be specific. This dummy wants to see a renowned expert’s prediction. Got cojones?☺☺

A webpage from 2002 http://data.giss.nasa.gov/gistemp/2001/ titled “Global Temperature Trends: 2001 Summation” shows 1998 to have been around 0.59. Note… “This webpage was previously published as a letter to Science (Hansen et al. 2002).”

Best estimate for absolute global mean for 1951-1980 is 14.0 deg-C or 57.2 deg-F,
so add that to the temperature change if you want to use an absolute scale
(this note applies to global annual means only, J-D and D-N !)
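Applied to the 1998 anomaly quoted above from the 2001 summation page, the note’s conversion is a one-liner (this is just arithmetic on the two quoted figures, nothing from GISS’s code):

```python
# Convert a GISS anomaly to an absolute temperature using the
# quoted 1951-1980 baseline of 14.0 deg-C (annual global means only).
BASELINE_C = 14.0
anomaly_1998 = 0.59          # the 1998 value quoted from the 2001 summation page
absolute = BASELINE_C + anomaly_1998
print(f"{absolute:.2f} deg-C")   # 14.59 deg-C
```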

The MAM (March, April, May) block is interesting. Seems the coldest was 1884 at -0.67 Celsius anomaly degrees. The hottest is (surprise) 2010 with an unprecedented +0.94 Celsius anomaly degrees. It’s the only +0.9’s number in that column!

Wow. Straight math, for MAM that’s a +1.61°C increase over 126 years, 1884 to 2010, for a whopping +0.13°C/decade rate of change between the two points. We’re all going to fry!
;-)
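For what it’s worth, the straight math in that comment checks out, using just the two quoted endpoint anomalies (a two-point slope, not a fitted trend):

```python
# Check the back-of-envelope MAM numbers quoted above.
cold, hot = -0.67, 0.94          # 1884 and 2010 MAM anomalies (deg-C)
years = 2010 - 1884              # 126 years
rise = hot - cold                # +1.61 deg-C between the two endpoints
rate = rise / (years / 10)       # per-decade rate between the two points
print(f"+{rise:.2f} C over {years} years = +{rate:.2f} C/decade")
```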

It was Case study 12 from Watts and D’Aleo, 2010 (show this to Jim then hide it) that turned me into a sceptic. The emails seemed to imply that it was common practice to change “raw” data without documentation or notification – and delete the “original” “raw” data. It seems that it is still standard practice. What possible confidence could anyone have in their data set if they can (and do) keep massaging it without documentation?

[REPLY – Well, it looks as if the UAH and RSS have been fooled along with HadCRU. NASA is the Brave New Cheese who stands alone. ~ Evan]

NASA doesn’t stand alone. NOAA stands with it too. And so do the multitude of other reconstructions done by independent bloggers, many of which find years warmer than 1998. Also let’s look at the satellite records now. How can we have very much faith in the satellite records when a huge mistake went “un-noticed” by the two climate change skeptics who run UAH for so long? Either it says their quality checks weren’t up to par or that they were happy enough to let skeptics run out and say how satellites show no warming. Too bad when people checked their work they found this mistake. At the moment. Yeah maybe RSS is good but I still cannot see much validity to 1998 being the warmest on record when most land surface records don’t find it that way.

NASA “raw” data is NOAA adjusted data. NASA “unadjusts” then “readjusts” it. So they are in no way independent. As for satellite data, all of it was open to the public. The drift error was uncovered by independent review and has been corrected. (An arcane and forgotten procedure once known as “Scientific Method”.)

Errors are entirely forgivable. Refusing to release full methods, however, is entirely unforgivable.

NASA refused to release its algorithm for years and only did so under heavy pressure. And even now, it can’t be recreated from scratch because, while NOAA just loves long-winded explanations of how it adjusts its data, it has yet to release anything which would allow its adjustments to be “replicated” (another long forgotten word in the world of climate science).

Changing the data after the event is surely an admission that they got it wrong. By changing it again and again they are making admissions that they were wrong again and again. In other words, they are wrong all the time. Basically they are making it up as they go along. They must think we are as stupid as they are. The RC sheep/lemmings fall for it but we don’t.

Reply: Evan you are expressing opinions (whether factual or not), which are not really appropriate for a moderator’s inline comments. ~ ctm

OK, removed. – Evan

Not only are the posts on WUWT quickly peer-reviewed, but the moderation is also peer-reviewed. Imagine if the journals had peer-reviewed peer-review, where anyone doing peer-review at the journal can critique any other peer-reviewer, and the peer-reviewing is openly displayed.

Once again, the future of the advancement of science is found on WUWT.
☺

[REPLY – Well, it looks as if the UAH and RSS have been fooled along with HadCRU. NASA is the Brave New Cheese who stands alone. ~ Evan]

Maybe certain global warming folks (I’ll leave them unnamed to avoid a flame war anew) are hoping Americans will never hear of UAH, RSS, and HadCRU and just believe GISTemp because it comes from “N-A-S-A”. And if they aren’t Americans, well, then, of course they’ll believe NASA.

This whole GISS situation is starting to remind me a little of the standard business accounting problem — namely, how do you tell whether a company is correctly recording and analyzing its financial data when it obviously has a strong motive to say that it’s making rather than losing money. You can suspend your judgement on the quality of the company’s accounting practices until employee paychecks start bouncing, or you can try right now to get a handle on what’s really going on by hiring an outside auditor. From what I read here, it seems that the GISS team audits itself (note that putting all your data and computer code on line — presumably after checking to see that it matches the official “narrative” — is **not** the same thing as being audited). Some comments also suggest that the GISS team is led by Hansen, an extreme climate alarmist. If the GISS team wants their work to be believable they got to do better than that, starting with cutting all their ties to Hansen (as well as colleagues who owe him professional favors) and following up by the scientific equivalent of an outside audit.

So if I’m getting this right, 1998 loses warmth in GISS because it’s smoothed out in the algorithm? OK… but why isn’t it lowered in other data set algorithms? I’m kinda having trouble justifying such an algorithm, or believing that the certain folks using the algorithm just coincidentally have it.

I’m beginning to wonder if there are a couple of commenters doing everything they can to show how far they can bend over backwards before they’ll say James Hansen’s activism could be affecting his work – and if they are doing it as an unspoken competition with global warming proponents to show they are bigger people.

To the whack jobs who seem hell bent on defending GISS: Where in the heck have you been? GISS arbitrarily changes historical temps all the time. It is well documented. You should screen shot the temp data and watch it change. I really don’t care what computer code was used or how. It is called history revision. Older years have always been revised downward to lend the appearance of warming. This really hasn’t been a secret. The only thing new about this is the rolling time periods. Apparently, GISS starts revising downwards after a decade.

To the derisive people that insist on blathering about the Arctic ice in their attempt to rebut the statement ‘no global warming since 1998’: you forgot the other half of the equation. I would list the names of the either disingenuous or ignorant people, but you girls know who you are. Go to WUWT’s ice page and check the Antarctic, then come back and blather about ice.

A note: WTF is it about computer coding that makes people think it negates malfeasance? Does it matter if we look at the code or not? Do the rationalizations matter? I really hope GISS tries to claim hottest year evuh. They probably won’t now, but that’s just ’cause they’re watching, now. Maybe some of you aren’t old enough to remember the start of the alarmism. I am. Are we to believe they found an error in the way they calculated the temps back then, only to find it really wasn’t that bad then, but now it’s really bad, and now GISS really knows what it’s doing? Because if this is correct then all the BS hyperventilation about the OMG!!! hottest ever 1998 was total tripe. Why are you people wasting your time believing they have a clue about today, especially when the other 3 don’t agree? Remember, Hansen is an admitted activist. In other words, he’s lost the ability to be detached. He gets half a point for at least being honest in that regard; his other cohorts are not even that honest. Let’s not kid ourselves, they are activists, only most of them are smart enough not to get arrested.

Hang in there, Steve. One day people will wake up and realize they’ve been lied to. I think it is difficult for people to admit they were that easily misled.

ZZZ says:
August 29, 2010 at 10:13 pm
“…….If the GISS team wants their work to be believable they got to do better than that, starting with cutting all their ties to Hansen (as well as colleagues who owe him professional favors) and following up by the scientific equivalent of an outside audit.”

John Goetz says:
August 29, 2010 at 5:36 pm
“…….. When records are missing, they estimate values for the missing data points, when possible. (Ooops, I guess they do produce estimated records).

Do the algorithms GISS uses to do the estimation cause the results to change each time they are run? Yes they sure do……….

Is this algorithm a nefarious attempt by Hansen and Ruedy to create a trend where none exists? I believe that unlikely. Support a trend, maybe, but create … no.”

They have an algorithm that changes historical data each time it’s run? What good is it? If you don’t like the word create, how about over-exaggerate. Well, only for present time. Apparently the algorithm reverses course as it goes back in history. Nice. What I believe is unlikely, is that this is anything but intentional. I wonder how often they run this “algorithm”? Every time data gets 10 years older? How is it possible to determine an anomaly for global temps if the historic temps continually change?

Mikael Pihlström says:
August 29, 2010 at 9:53 am
“[…]In twenty years time I think the world will thank “the jailbird” and
curse the Heartland and other disinformation teams.”

Wait. So you think that stuff that Hansen produces there is “information”? In other words, you BELIEVE that 1998 was slightly colder than 1934 when Hansen says so, and later, when Hansen makes 1934 a little colder than 1998, you believe him again? And now, you believe that 1998 got colder than it really was?

Look, Mikael, the past doesn’t really work that way. Once something has happened, it has happened. It doesn’t change after the fact. Simple concept really once you get used to it.
———————
I have no deep knowledge of GISS, but I find what Steve Mosher and John Goetz say above convincing. It seems to be an algorithm that is adjusting itself continuously, and that is openly stated.

If you want to prove a trend, then make sure there is a trend by modifying the past. Bingo! The good old hockey stick rears its ugly head. Same with CO2 levels. Alarmists have shown that it has remained at a steady 250ppmv until the 20th century, when it starts to climb. They fail to look at past data sets which show CO2 levels of up to 500ppmv in the late 1800s. They also forget that plants require CO2 levels of at least 200ppmv to survive, and without CO2 this planet would be a barren wasteland with no life of any sort.

Sorry if ‘off-topic’, but there have been references above to Roger Harrabin’s BBC R4 piece.

The guy is rising in my estimation, and I would say that his piece today was fair and balanced. Prior to Climategate I had him marked as an AGW ideologue; since then I reckon that he has risen to the standard we Brits expect from the Beeb.

Does this matter – the performance of a non-scientist broadcaster? You bet! The key to neutering the AGW scare story is public opinion. The Hansens and Watsons of this world will never concede defeat, even if there were five consecutive winters with New York immobilised by blizzards: they’ll say ‘temporary reprieve’ until they die. No, when the ordinary Joe from Missouri bursts out laughing at any mention of Global Warming, that’s when AGW can be declared dead; that’s when the IPCC will implode.

“Conclusion : GISS recently modified their pre-2000 historical data, and is now inconsistent with other temperature sets. GISS data now shows a steady warming from 1975-2010, which other data sets do not show”

They have an algorithm that changes historical data each time it’s run? What good is it? If you don’t like the word create, how about over-exaggerate. Well, only for present time. Apparently the algorithm reverses course as it goes back in history. Nice. What I believe is unlikely, is that this is anything but intentional. I wonder how often they run this “algorithm”? Every time data gets 10 years older? How is it possible to determine an anomaly for global temps if the historic temps continually change?

They run it once per month.

Perhaps I am being a little too nuanced, but they are not changing existing historical data. It is their estimates for missing data that changes. However, because so many data points are missing from the record (like holes in swiss cheese), their estimation strongly influences the global summaries they publish every month.

And yes, because their previous results change over time, it certainly does cause one to question the value of the results.

How can we have very much faith in the satellite records when a huge mistake went “un-noticed” by the two climate change skeptics who run UAH for so long? Either it says their quality checks weren’t up to par or that they were happy enough to let skeptics run out and say how satellites show no warming. Too bad when people checked their work they found this mistake. At the moment.

Give it a rest. UAH had an error in their calculations; they fixed it and published that they had fixed an error … unlike most of the researchers on the other side of the debate. I find it trite and self-serving to bring that up as some sort of proof that UAH is not doing good work. What that tells me is that you are just reaching when you have to bring that up.

As an example, Von Braun and his team blew up quite a few rockets before they got it right. By your reckoning, they would be forever branded by their failures instead of the many successes and general advancement of the discipline. You know, that is what happens when people actually do stuff, they make mistakes, learn, improve and keep on going. While a REMF like yourself casts stones without any skin in the game.

With no real evidence except the various apparent global temperature anomaly data differences, all that can be said is that these inconsistencies just serve to indicate the basic random ‘fuzziness’ associated with the information.

Steven Mosher says:
August 29, 2010 at 2:49 pm
but not a single one of those issues is important to the real debate about the uncertainties of global warming. As new data comes into the algorithm the past ESTIMATE will change. This is a consequence of that method.

What kind of a BS is this? If an algorithm, which is man made and written by these charlatans, keeps showing different figures for past historical climate data trends, it is a fraudulent and unreliable piece of computer program that is designed to cheat. What kind of a stupid justification is being given for this by some people here? Anyone who thinks this is no great deal and is very normal should have their head examined.

What the hell is wrong with people who try to justify a computer program “estimating” past facts? How the hell is that acceptable? A past fact is a past fact and any estimation of that which changes reality is a deliberate act of fraud.

Steven Goddard’s conclusion and thrust of this post was very clear that GISS deliberately altered past data by using dubious algorithms. People who argue against that conclusion are far removed from reality.

Ah, so your in-depth research has concluded he is using the unicode smiley (ctrl-shift-u together then 263a then enter for my Debian Linux setup) versus the ANSI smiley (alt with the 1 on the numeric keypad for a Windoze box)? How did you determine the difference?

There is no ANSI smiley face – ANSI (ISO 8859) is an 8-bit character encoding standard. Don’t get confused by the many ways different computers/software use to enter characters – you can easily View Source for web pages and see the underlying codes in any hex editor.
The smiley is a 16-bit unicode character – as is his evil twin: ☻
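That claim is easy to verify: both smileys are single Unicode code points well above 0xFF, so no 8-bit (ISO 8859) encoding can hold them. A quick check, nothing specific to any commenter’s setup:

```python
# The two smileys are single code points in the Basic Multilingual Plane.
for ch in "☺☻":
    print(ch, f"U+{ord(ch):04X}")   # U+263A and U+263B

# An 8-bit encoding simply cannot represent them:
try:
    "☺".encode("latin-1")           # ISO 8859-1
except UnicodeEncodeError:
    print("not encodable in ISO 8859-1")
```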

The name of the game is higher highs. It’s about the PR, not the science. Without higher highs, the bull market in AGW is dead.

Think about this for a moment. If you believe in AGW and that CO2 is the main driver of the increasing anomalies, and if your belief is unshakable and supported by the vast majority of your colleagues, you are going to look for ways to adjust the methods of analysis so that the results fit the science, especially if temps look like they aren’t going up enough. I mean the temps have to be going up. They just have to be. And temps had to have been cooler when there was less CO2 (which also implies that that cooling the past is totally rational and justified). So even if I do a little fudging here and there — such as creating values where none exist and adjusting these non-existent values periodically and such as smearing a single value over a huge geographic area without any data (what luck ye gods!) — the end result is that the analysis is still going to be accurate even if it is somewhat synthetic. In a way, I’m really doing everyone a favor by beating nature to the punch.

The bet is that the anomalies are going to catch up one way or another. A little divergence from the other three datasets isn’t going to kill the analysis over the short haul. There is still some wiggle room. And don’t forget the sandbagging of the literature in support of the methods, but that’s a completely different story and one that provides for a great deal of entertainment.

John, thanks for responding. I’m in agreement with you except I’m not nearly as generous as to motivations and expected outcomes of their methods.

“Perhaps I am being a little too nuanced, but they are not changing existing historical data.”

Yes, perhaps our view of ‘changing historical data’ is a disagreement in the context of viewing historical data. I would contend that they do, indeed, change historical data, in terms of adjustments and homogenizing and otherwise processing the temps. I understand the need to do so, but this consistent historical downward trend gives one pause, and more. Perhaps if they printed in bold, DEFINITIVE STATEMENTS SUBJECT TO CHANGE, on everything they state and publish, then I probably wouldn’t cast a cynical eye when viewing their assertions.

John, what do you mean “… does cause one to question the value of the results.”?
Every month they run a computer program which “guesses” what the temperature might have been in an area where they have no measuring instruments. This program, you have already admitted, probably is influenced by the existing trend (I think that’s a fair summation of what you wrote earlier).
I worked in four different areas of business in my working life, and if I had ever used a trick like that to justify an action or a business decision I would have been fired on the spot.
The temperatures are what they are. If you have places with no record then too bad; work with what you have. More and more we see (at least anecdotal) evidence of a system which deliberately selects certain weather stations and then extrapolates in a way which distorts the record.
And at the end of all this we are seeing claims that these programs, algorithms, smoothings, whatever, can still tell us the variations in the earth’s temperature to hundredths of a degree.
Forgive me if I see no justification at all for defending an organisation or an individual that plays games like this with (in effect) the future of humanity.

By claiming that I’m so ignorant of the subject, you’re implying that you’re knowledgeable.
I can watch somebody play tennis and realize he’s terrible, without being a world ranked player myself. But I was encouraging you this time, not claiming you’re ignorant – certainly you’ll be more knowledgeable in 30 or 40 years.

So speaking of guesses, give us your expert opinion: GISStimate your prediction for, oh, I don’t know… how about the lowest N.H. ice extent in 2011? What’ll it be? And don’t pull a Hansen by using the Sharpshooter’s Fallacy; be specific. This dummy wants to see a renowned expert’s prediction. Got cojones?☺☺

I’d like to see how WUWT handles this Arctic summer minimum first. I made a specific prediction months ago, as did stevengoddard. If the final result, which is only weeks away, is handled with honesty and integrity, I might make another prediction for next summer.

I’m looking forward to the Cryosat-2 data – that ice thickness data, plus the post-mortem of this summer’s melt season, would underlie any prediction for next summer’s minimum.

Of course, any prediction I make would lie somewhere between young molten Earth and snowball Earth, so it is all within “natural variability” for someone with a 4.6 billion year perspective ☻

It was once said, and is sometimes still said, that there are “Lies, Damned Lies, and Statistics!”
I move we henceforth and evermore also include the phrase that there are “Lies, Damned Lies, and Gisstimates!”, and that this phrase take on special derogatory significance when speaking of anything related to the contrived, voodoo-science field of Climatology and/or NASA’s credibility in generating reliable climate data.

“What kind of a BS is this? If an algorithm, which is man made and written by these charlatans, keeps showing different figures for past historical climate data trends, it is a fraudulent and unreliable piece of computer program that is designed to cheat. What kind of a stupid justification is being given for this by some people here? Anyone who thinks this is no great deal and is very normal should have their head examined.”

I will give you a simple one.

The Reference station method works something like this. Say you have 2 stations that are in close proximity to each other.

One starts in 1985, the other in 1900. In the year 2000 the stations have 15 years of OVERLAP. The algorithm “combines” stations at the “same” location SUBJECT to overlap rules. If there are fewer than 20 years of overlap, they are not combined. The station that starts in 1985 IS NOT INCLUDED. No overlap, no inclusion. Time marches on. THEN in 2005, you do have 20 years of overlap, so the
station gets “included” in the reference station time series and the past (2000-2004) will change. What changes is the ESTIMATE of temps. It changes BECAUSE the data changed. What changed in the data? MORE stations reporting in the 1985-2004 period. A station that WAS excluded can now be INCLUDED in the estimate. It is one of the BENEFITS of the RSM method: it uses more data than, say, the CAM method which CRU uses.
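The overlap rule described above can be sketched in a few lines of Python. This is a hypothetical simplification for illustration, not the actual GISTEMP code; names such as `stations_combinable` are invented:

```python
# Sketch of the overlap rule described above (hypothetical simplification,
# not the actual GISTEMP code): a short station is merged into the combined
# record only once it overlaps the long station by at least 20 years.

MIN_OVERLAP = 20  # years of overlap required before combining

def stations_combinable(years_a, years_b, min_overlap=MIN_OVERLAP):
    """Return True if the two stations share at least `min_overlap` years."""
    overlap = set(years_a) & set(years_b)
    return len(overlap) >= min_overlap

# Station A reports from 1900; station B starts in 1985.
# As of the year 2000 there are only 15 years of overlap, so B is excluded:
station_a = list(range(1900, 2000))
station_b = list(range(1985, 2000))
print(stations_combinable(station_a, station_b))  # False: only 15 years

# Five years later the same pair qualifies, so B's data now enters the
# combined record and the 2000-2004 estimates get recomputed:
station_a = list(range(1900, 2005))
station_b = list(range(1985, 2005))
print(stations_combinable(station_a, station_b))  # True: 20 years
```

Nothing about the underlying measurements changed between the two calls; only the set of usable stations did, which is exactly why the past estimate changes.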

Are global surface temperatures actually increasing or are we walking up a ‘down’ escalator?

In 10 years is GISS going to claim 2020 is the hottest ever with a .6 anomaly, while claiming 2010 was actually .57 and 1998 was at .54?

Either temps are actually increasing, or GISS is engaged in some sort of tribalistic statistical rain dance.

I don’t need to look at the data or the code to see this; the claim is that .6 is the hottest ever, in 1998 the anomaly was claimed to be above .6 and is now shown below .6. Who am I to trust, GISS data and code or my own lying eyes?

If temps are increasing there’s no need to adjust the old temperatures down to show this. If temperatures are merely staying the same while you adjust old temperature data downward to show warming then GISS has a lot of explaining to do. I don’t think anybody would provide much funding to mitigate that problem or study that issue. They’d simply conclude that you are a cargo cult and refer students to documentation regarding the phenomenon.

Now, explain the logic behind any changes that caused the discrepancy or be a laughing stock. Your choice.

What’s to say that in 10 years MORE station data won’t get put into 2010 and suddenly 2010 is no longer as warm as we thought it was? And isn’t it amazing when you put in more data, the older data ALWAYS gets cooler?

Steven Mosher says:
August 30, 2010 at 9:25 am
“THEN in 2005, you do have 20 years of overlap, so the
station gets “included” in the reference station time series and the past ( 2000-2004) will change. what changes is the ESTIMATE of temps. It changes BECAUSE the data changed. What changed in the data?”
=======================================================

ROTFL thank God my bank doesn’t work that way!

What a novel concept, the present changing the past……..

Which means that current temp “estimates” are not accurate either, because tomorrow they will be in the past, and tomorrow’s temp data will change today’s “estimate” too.

Long series; short series. If those two stations are close together, the RSM will stitch them into a LONGER series. But in the case above the overlap is short, 12 years.
SO, GISTEMP will not include the short series; the answer is just station 1.
TIME MARCHES ON. More years get added to station 2 and station 1. When you have 20 years
of overlap THEN you go back in time and stitch them together. Station 2 gets included. This changes the “estimate” for that REFERENCE STATION (station 1 & 2). A reference station is merely the stitching together of stations with gaps. It’s done according to a set rule. Don’t like that rule? Download GISTEMP and change it. See if it matters.
IF you don’t like this approach (Jones does not), then you can use CAM. You get the same general answer.

The theme song of the ignorati. You know, a lot of the things that bother Steven and others about GISTEMP bothered me. So, in 2007 we lobbied GISS to release the code. You can go back and see me pester Gavin to “free the code.” You can go to CA and see me help people who were trying to compile it. In the meantime I took the effort to actually read the code. SteveMc took the effort to actually emulate things. John G took the time to go through data by hand. WORK to back up or dash some ARMCHAIR opinion. The result of that real work is a better understanding of the math. Now some 3 years later people are “discovering” what we discussed long ago. But they don’t have enough conviction to doubt their doubt and check for themselves. They are lazy thinkers. They can’t read a blog comment, much less the code. They cannot understand the mathematical differences between RSM, CAM, FDM and a least squares approach, BUT they draw conclusions BASED ON THEIR ignorance. Doubt, if followed rigorously, just leads to doubt. If you want to say you “doubt” the methods, that is consistent. But some people want to say: I don’t understand the method, I doubt the method, I refuse to study the method, THEREFORE it is “wrong.” These are the lazy skeptics. Their skepticism is not an intellectual discipline, it is a political expedient. They generally say things like “I got to this sentence and stopped reading.”

And typically they stop because of some combo of the following

1. They actually continued reading and can’t find anything wrong with the argument
2. They didn’t understand or would not understand
3. Nobody in the past has ever called them on their declaration of ‘stopped reading’
4. They learned as a child to stick their fingers in their ears.
5. They were afraid of being educated
6. They never learned to doubt their doubt, the final stage of skepticism.

One could generate some fake data, where all the various trends would be known absolutely…THEN start removing data from the data set and apply these various algorithms. From there one could get a feel for how well the infilling procedures worked, what sort of artifacts they produced, when they broke down and under what conditions. Now that would give considerable insight into what methods one should be using to account for missing data. Seems like a good master’s thesis to me.
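The experiment proposed here is easy to prototype. Below is a toy version in Python, with every name and parameter invented for illustration: synthetic stations with a known trend, random deletions, a naive neighbour-mean infill, and an error measurement.

```python
# Toy version of the proposed test: build synthetic station data with a
# known trend, knock holes in it, infill by neighbour averaging, and
# measure the error the infilling introduces. All parameters are invented.
import random

random.seed(0)
TREND = 0.02  # known warming trend, degrees per year

def make_station(n_years=100, noise=0.3):
    """Synthetic annual anomalies: linear trend plus Gaussian noise."""
    return [TREND * yr + random.gauss(0.0, noise) for yr in range(n_years)]

stations = [make_station() for _ in range(50)]

# Remove ~20% of the values at random to simulate missing records.
damaged = [row[:] for row in stations]
for row in damaged:
    for i in range(len(row)):
        if random.random() < 0.2:
            row[i] = None

def infill(rows):
    """Fill each gap with the mean of the reporting stations for that year."""
    filled = [row[:] for row in rows]
    for yr in range(len(rows[0])):
        vals = [row[yr] for row in rows if row[yr] is not None]
        mean = sum(vals) / len(vals)
        for row in filled:
            if row[yr] is None:
                row[yr] = mean
    return filled

filled = infill(damaged)

def mean_series(rows):
    """Per-year mean across all stations."""
    return [sum(r[yr] for r in rows) / len(rows) for yr in range(len(rows[0]))]

# Compare the infilled mean series against the complete-data truth.
err = max(abs(a - b) for a, b in zip(mean_series(stations), mean_series(filled)))
print(f"max per-year error introduced by infilling: {err:.3f}")
```

Swapping the naive infill for RSM-, CAM- or FDM-style combination, and varying station count and deletion rate, would give exactly the artifact characterization the comment asks for.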

“IF you dont like this approach ( Jones does not) Then you can use CAM. You get the same general answer.”

I don’t, and I don’t see Hadley data changing like this over time….
#################

CAM DOES NOT CHANGE OVER TIME! Do you read? Do you understand why it doesn’t, and WHY this is a limitation of the approach?

CAM works like this: a station gets INCLUDED if it has 15 years in the period 1961-1990. If it meets that criterion, it gets added to the list. Then you calculate the
average for that station during that period. Then all the years get baselined as deviations from that average. If a new station comes on line in 1985, it will not get added, no matter how long it keeps reporting. So CAM only accepts new data IF it has old data in the 1961-1990 period. The past can only change if new data comes in during the base period. SO if GHCN were to update the database and find more 1961-1990 data, then CAM would change. That is a limitation of CAM: the reliance on a base period. In fact Jones discusses this in the mails. Did you read them all? There are only a couple thousand.
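The CAM inclusion rule above can be sketched directly. This is a hypothetical simplification, not CRU’s code; `cam_anomalies` and the sample stations are invented for illustration:

```python
# Sketch of the CAM rule described above: a station enters only if it has
# >= 15 years inside the 1961-1990 base period; its series is then
# expressed as anomalies from its own base-period mean.

BASE = range(1961, 1991)   # the 1961-1990 base period
MIN_BASE_YEARS = 15

def cam_anomalies(record):
    """record: dict year -> temperature. Returns anomalies, or None if the
    station has too little base-period data to be included at all."""
    base_vals = [t for yr, t in record.items() if yr in BASE]
    if len(base_vals) < MIN_BASE_YEARS:
        return None                      # excluded, however long the record
    base_mean = sum(base_vals) / len(base_vals)
    return {yr: t - base_mean for yr, t in record.items()}

# A station starting in 1985 never accumulates base-period coverage,
# so CAM drops it no matter how much new data arrives later:
late_station = {yr: 10.0 for yr in range(1985, 2011)}
print(cam_anomalies(late_station))       # None

# A station covering the base period is baselined normally:
old_station = {yr: 10.0 + 0.01 * (yr - 1961) for yr in range(1950, 2011)}
print(cam_anomalies(old_station) is not None)  # True
```

This is why CAM’s past is frozen: new years never touch the base-period mean, so old anomalies never move unless base-period data itself is revised.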

RSM uses MORE DATA than CAM. As new data comes in, it can be used as long as there is a long station that it correlates with. The test for merging stations into long series involves an overlap test and some statistical tests that determine how well the two stations match. So two stations a few miles apart can be merged into one estimate of the local temperature trend. But to do this they have to be well correlated and they have to have at least 20 years of overlap. Don’t like that? Explain why from a statistical perspective. Write some code. SHOW how the method biases the result, and under what conditions. Or you can read up on the method.

You may not “like” RSM, but generally you would be well advised, when dealing with math, to explain why you don’t “like” a method. I’ll give you an example: FDM. The first difference method is one used by NCDC. Also, Willis likes it and EM Smith likes it. Unfortunately, it has deep statistical flaws when applied to smallish numbers of stations. It requires thousands. The reason for this is described by JeffId in an actual statistical test of the method on synthetic data. That’s how we test methods. Up until Jeff’s work I rather “liked” FDM. After his work I understand its limitations. CAM has limitations (new data can’t come in); RSM has limitations (new data can change estimates of the past). The least squares method doesn’t have any of these problems, but it’s hard for the non-statistician to grasp. If we compare all the methods on the data in question, they all give comparable results. As an analyst, that means the method choice is immaterial. Could it be different? Yes. The data could change (more new stations) and you might see effects. But as it stands the methods give the same answer. OF COURSE there are some small differences. There SHOULD BE. The question is NOT who has the hottest year (that’s a statistical estimate and not a FACT).
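For readers who have never seen FDM, here is a minimal sketch of the idea (a hypothetical simplification for illustration, not NCDC’s code): average the stations’ year-to-year first differences, then cumulatively sum them to rebuild a combined series.

```python
# Minimal sketch of the First Difference Method mentioned above
# (hypothetical simplification): average year-to-year differences across
# stations, then cumulatively sum them to rebuild a regional series.

def first_difference_series(stations):
    """stations: list of equal-length annual series. Returns a combined
    series built from the mean of the stations' first differences."""
    n_years = len(stations[0])
    combined = [0.0]                      # anchor the series at zero
    for yr in range(1, n_years):
        diffs = [s[yr] - s[yr - 1] for s in stations]
        combined.append(combined[-1] + sum(diffs) / len(diffs))
    return combined

# With identical-trend stations at different absolute levels, the trend is
# recovered regardless of the offsets:
stations = [[0.02 * yr + offset for yr in range(50)] for offset in (0, 1, 5)]
series = first_difference_series(stations)
print(round(series[-1] - series[0], 4))   # total rise over 49 steps
```

The small-sample weakness the comment attributes to JeffId’s test is visible in the structure: any noise in a single year’s average difference is carried forward by the running sum forever, so with few stations the errors accumulate instead of cancelling.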

For the technically inclined, these minor differences are a fun thing to look at. BUT we understand the math and understand what differences matter and what differences are in the noise. Trying to make NEWS about the noise detracts from the credibility of your skepticism. You should make strong arguments, not weak ones. You should focus on the real uncertainties where the error bars are wide (hockey stick and GCMs) rather than the meaningless uncertainties. Leave the meaningless uncertainties to those who understand them for what they are: meaningless uncertainties that are fun to characterize and understand. Pure research. As applied analysis, these small differences make no difference. THAT SAID, I criticize GISS for making any definitive statement about “warmest” year or “decade” that does NOT have an attendant statement of confidence. Like so: “we can be 95% confident that 2005 is the warmest year in the record.” But that criticism is a criticism of their PR and not global warming science. The science can be correct AND they can say incorrect/misleading things about it. The existence of bad PR, manipulative PR, does NOT change the scientific facts. It can’t. It’s only PR. So if somebody wants to attack the PR, then they need to be clear: this isn’t about the science, it’s about the twisted presentation. That’s why we wanted the code: to get at the real math and real numbers and not the words in a press release. Should they clean up their PR act? Yes. Does their misguided use of PR justify the same behavior on “our” side? Nope, it just makes us like them.

One could generate some fake data, where all the various trends would be known absolutely…THEN start removing data from the data set and apply these various algorithms. From there one could get a feel for how well the infilling procedures worked, what sort of artifacts they produced, when they broke down and under what conditions. Now that would give considerable insight into what methods one should be using to account for missing data. Seems like a good master’s thesis to me.

###################

Or you could read the papers already published, or look at Chad’s site where he did this, or SteveMc’s exploration of the problem, or JeffId’s recent work.

So the fact is we have no way to reliably estimate the global mean temperature anomaly, yet we are still expected to believe that GISS knows how it changes to one hundredth of a degree C. We also cannot rely on previous estimates, as the history changes when new data is included.

I have been involved in science all my life, and this must be the best example of cargo cult science I have seen in forty or so years! GISS temperature anomalies = FAIL.

Steven, I understand the algorithm; I also understand why it is done in this manner. My problem with this approach is that there is obviously a quality control difficulty. When running the code produces a global anomaly difference of over 10% after only a little more than a decade (0.64-0.57), then there is a problem. Maybe they need to let more time elapse before they merge. Maybe they need tighter constraints on the data they allow into the db. Obviously, the statements and assertions coming from GISS should come with the caveat that it is all subject to change and shouldn’t be viewed as a reflection of reality. But when the result is predictable, one should pause. Especially seeing we are passing laws all around the world affecting the global population based on the certainty of the various assertions, GISS being a great contributor.

Further, your response to Latitude is a bit over the top. There seem to be innumerable fields of study related to the climate change discussion, from algorithms to coding to astrophysics to chemistry to ecology, etc. Before you cast aspersions towards others, can you state you’ve educated yourself on all questions of our climate? Get a grip. Personally, I’ve learned an incredible amount of information that is only useful to me while conversing with an alarmist. At some point, when one determines the CAGW/CC issue is a hoax/power grab for totalitarian socialists, there is a point of diminishing returns in forcing oneself to learn irrelevant topics. For instance, without the CAGW/CC discussion I wouldn’t give a rat’s ass about the bands of absorption or the multi-directional emission of CO2. Is it your assertion we all should learn chemistry/physics before we can possibly understand CAGW is nothing to worry about? Describing people as stupid and lazy because they can come to an intelligent conclusion without having every bit of minutia explained to them in excruciating detail doesn’t say much for the person offering the description. Believe it or not, people understood triangles, geometry and algebra before Pythagoras explained his theorem. Now, who is smarter: the ones that understood before Pythagoras, or the ones that needed Pythagoras to explain before they could understand?

Shouldn’t that read ‘more ESTIMATED data’ than CAM? It is incomprehensible to me how, as time goes on, MORE data just suddenly appears. That sounds like magic, or making stuff up, to me. That “more data” really is just a guess, right?

Why is that more accurate? Why is that better? More data is not necessarily ‘better’. Better data is better. How, after 12 years, is the actual data we measured in 1998 now better?

I wonder if Hansen took a data plotting seminar from the financial folks at GE?
___________________________________________________
Yeah, Maybe Hansen has a “greenbelt” from GE. (I absolutely hate that program BTW)

Or you could read the papers already published, or look at Chad’s site where he did this, or SteveMc’s exploration of the problem, or JeffId’s recent work.

Sure, one could also do those things. Regardless, something of the sort, whether it’s been done or not, would be the logical first step in understanding the behavior of these algorithms. So, if I were really interested in this topic, that is where I would go first. I think this could also be discussed without being so… cynical, snarky, impatient, etc. A little quick on the lecture trigger, aren’t you?

What you are stating here is that you do not have an accurate means of reliably estimating global mean surface temperature.”

Wrong. You have several methods that give the same answer. For example, with the CAM method, it doesn’t matter whether I use 5000 stations or 1000. The answer is the same. The answer is the same because, for the most part, the changes in trend at any GIVEN location mirror the changes in trend at any other location. All the methods give different estimates (within a small range). It’s like taking a poll at election time. One pollster says Obama will win 52% to 48%, another says 53% to 47%, another 51% to 49%, another 54% to 46%. They are ALL WRONG. We know with almost near certainty that the point estimate will be wrong. It will end up at 52.09876543%, which no one will predict. YET they are all right, in that each predicted a win. They will differ marginally in ways that don’t matter. They will poll different people and use different methods. THEY HAVE NO WAY of “accurately” estimating the results, yet they are all correct.

“If that is the case then all arguments about AGW really ought to end until such time as we can tell whether the temperature is increasing or if our means of estimation is statistically unstable.”

1. That is an illogical conclusion, where the conclusion does not follow from the antecedent.
The means of estimation is stable. Multiple methods, same data, same results. The changes to the METRIC THAT MATTERS (trend) are in the 1/100ths of a degree C.
2. The principal arguments for AGW are made from first-principles physics. Nothing in that physics predicts a monotonic increase in temps over short periods. Over long periods, it predicts a wide variety of increasing trends, from small increases to large. As such, it is hard to confirm the theory with short term trends and hard to disconfirm the theory. Disconfirming the theory would REQUIRE a whole rewriting of some fundamental physics. Physics that is known to work.
3. There is no evidence whatsoever that the globe has cooled since 1850. All evidence points to warming. That evidence when compiled WILL have uncertainty associated with it. It MUST. However, that uncertainty is not so broad as to include a zero trend. It’s getting warmer. AGW predicts this. The evidence is consistent with the prediction. Do the predictions MATCH the observations perfectly? Of course not. THEY CANNOT; they can only match more or less. That is where the real issue lies: NOT in how much EXACTLY it has warmed, but rather in how well the predictions (with their SPREAD) match the observations (with their spread).

“There is no CO2 mitigation strategy which will stabilize our temperature estimating methodology.”

CO2 mitigation has nothing to do with this argument. I tell you that if the Fed prints more money the inflation rate will increase. One economist says 2.4%, another says 2.8%, a third says 2.3%. Month to month their numbers change, BUT NONE of them makes the argument that printing money will cause disinflation. Your argument is this: because we don’t know the EXACT inflation rate, we should therefore print money. You use ignorance of the exact answer to justify your flouting of known economics. Your wife tells you that she overdrew the account by 200 dollars. You check her math and find out that according to your math, it’s 195 dollars. Your CPA says that it is 196.34. And you conclude that you can continue to write checks because the various accountings all give different answers.

Your cult has dug its grave; now you need to sit down and be quiet while scientists try to figure out what is really happening to our atmosphere and climate.

Why reduce dependence on fossil fuel? Currently, it is the best energy source when trying to optimize for price and efficiency. Fossil fuels have improved the lives of probably billions of people, because they have allowed us to break our dependency on land-intensive “renewable energy” that has limited societies in the past to Malthusian limits. When you can come up with an energy source that is not land intensive and meets the energy requirements of the globe, then dependency on fossil fuel will take care of itself. It’ll likely be something we haven’t thought of yet.

I guess it was a little much to expect climate science to be a little more definitive than political polling (wrong often) and economics (wrong more often than political pollsters). Wasn’t it some economists who told us to spend $trillions in the U.S. and we wouldn’t see unemployment over 8%? And didn’t the pollsters have Mrs. Clinton hands down as the Democratic nominee? Didn’t they say the Reagan/Carter election was too close to call on election night, but when the dust settled, it was 51%-41%? I really don’t think we can afford to act on that sort of margin of error.

In RSM you look to estimate the trend for a station. When stations are close to each other you “consider” combining them to get a better estimate of the local trend: 2 data sources as opposed to one. Now in the case above, station 2 has a short record. So we ONLY USE station 1 data and the answer is:
0,0,0,0,0,0
Now time marches forward. We add data as years go by. After 3 years it looks like this:

Now we run our algorithm. Suppose the algorithm said “6 years of overlap required.” Well, now station #2 GETS INCLUDED. We can use that data that we always had, but could never use, because of our algorithm, because of our desire to make good estimates.
And the answer is:
0,0,0,.5,.5,.5,.5,.5,.5
Wow! The past changed. Well, actually NOT. Our estimate of the past changed because we can NOW consider data that was not included before. So we have a better estimate. RSM (in some cases) can change the past and should change the past, because the “data” changed; that is, you can now use more of the data that you used to throw out. And you threw it out because there was not enough of it; you threw it out because you didn’t want to include high frequency error. Now, with a longer series for that station, you get to include the past data. And it changes the record, usually in a very slight manner.
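The toy example above runs as written. Here is a direct translation into Python, simplified for illustration: overlapping stations are averaged directly, without the offset adjustment the real reference station method applies, and the 6-year rule and station values are taken from the example:

```python
# Runnable version of the toy example above (simplified: stations are
# averaged directly; real RSM also adjusts for offsets between stations).

MIN_OVERLAP = 6  # the hypothetical "6 years of overlap required" rule

def combine(st1, st2, min_overlap=MIN_OVERLAP):
    """st1, st2: dicts year -> anomaly. Station 2 is used only once it
    overlaps station 1 by at least `min_overlap` years."""
    overlap = set(st1) & set(st2)
    if len(overlap) < min_overlap:
        return dict(st1)                     # short station excluded
    out = {}
    for yr in set(st1) | set(st2):
        vals = [s[yr] for s in (st1, st2) if yr in s]
        out[yr] = sum(vals) / len(vals)
    return out

station1 = {yr: 0.0 for yr in range(1, 7)}   # years 1-6, all zeros
station2 = {yr: 1.0 for yr in range(4, 7)}   # years 4-6 only

# Only 3 years of overlap: station 2 is ignored, answer is all zeros.
print([combine(station1, station2)[yr] for yr in sorted(station1)])

# Time marches on: both stations gain three more years.
station1.update({yr: 0.0 for yr in range(7, 10)})
station2.update({yr: 1.0 for yr in range(7, 10)})

# Now 6 years of overlap: the past (years 4-6) is re-estimated to 0.5.
combined = combine(station1, station2)
print([combined[yr] for yr in sorted(combined)])
```

The second print reproduces the 0,0,0,.5,.5,.5,… series from the comment: the stored measurements never changed, only which of them the algorithm was allowed to use.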

“If you obtained from GISS the data they crank through to obtain such wonderful products as their global average whatever, would you be looking at original unaltered data from GHCN, USHCN, and SCAR, or something adjusted?

What is “GLB.Ts.txt” anyway?”

It depends upon what step of the analysis you look at: GHCN, USHCN, and SCAR are read into the program. At each step, fully documented, you can see what is done. If you have questions about GLB.Ts.txt, go download the code and walk through it. It takes a competent individual a couple of days to get up to speed. When you show yourself ready to do the same work that John G or SteveMc or I or the Clearclimate code has done, then you have standing to make a comment.

Jaye Bass says: August 30, 2010 at 11:39 am
I think this could also be discussed without being so…cynical, snarky, impatient, etc. A little quick on the lecture trigger aren’t you?

Mosh gets it from both sides here.

Yeah, I noticed that. I did like his book; I guess I don’t necessarily have to like him, though, to appreciate a reasonably significant contribution to the whole debate. He is coming off as condescending, impatient, and generally patting the heads of the unwashed. It’s a bit unseemly if you ask me.

USHCN stations are part of GHCN; but the data are adjusted for various recording
and protocol errors and discontinuities; this set is particularly relevant if
studies of US temperatures are made, whereas the corrections have little impact
on the GLOBAL temperature trend, the US covering less than 2% of the globe.

Step 0 : Merging of sources (do_comb_step0.sh)
—————————
GHCN contains reports from several sources, so there often are multiple records
for the same location. Occasionally, a single record was divided up by NOAA
into several pieces, e.g. if suspicious discontinuities were discovered.

USHCN and SCAR contain single source reports but in different formats/units
and with different or no identification numbers. For USHCN, the table
“ushcn2.tbl” gives a translation key, for SCAR we extended the WMO number if it
existed or created a new ID if it did not (2 cases). SCAR stations are treated
as new sources.

Adding SCAR data to GHCN:
The tables were reformatted and the data rescaled to fit the GHCN format;
the new stations were added to the inventory file. The site temperature.html
has not been updated for several years; we found and corrected a few typos
in that file. (Any SCAR data marked “preliminary” are skipped)

Replacing USHCN-unmodified by USHCN-corrected data:
The reports were converted from F to C and reformatted; data marked as being
filled in using interpolation methods were removed. USHCN-IDs were replaced
by the corresponding GHCN-ID. The latest common 10 years for each station
were used to compare corrected and uncorrected data. The offset so obtained
was subtracted from the corrected USHCN reports to match any new incoming
GHCN reports for that station (GHCN reports are updated monthly; in the past,
USHCN data used to lag by 1-5 years).
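The offset step described above can be sketched as follows. This is a hypothetical simplification with invented station values, not the actual GISTEMP code:

```python
# Sketch of the splice described above: compare the two versions of a
# station over their latest 10 common years, and use the mean difference
# to adjust new incoming reports so the series joins up smoothly.

def splice_offset(corrected, uncorrected, n_years=10):
    """corrected, uncorrected: dicts year -> value. Returns the mean
    (corrected - uncorrected) offset over the latest n_years common years."""
    common = sorted(set(corrected) & set(uncorrected))[-n_years:]
    return sum(corrected[y] - uncorrected[y] for y in common) / len(common)

# Hypothetical station where the correction lifted the record by 0.3:
uncorr = {yr: 10.0 for yr in range(1980, 2001)}
corr = {yr: 10.3 for yr in range(1980, 2001)}

offset = splice_offset(corr, uncorr)
print(round(offset, 2))            # 0.3

# A new incoming uncorrected report can then be matched to the
# corrected series by applying that offset:
new_report = 11.0
print(round(new_report + offset, 2))
```

The point of using only the latest common decade is that it captures the correction as it stands at the join, rather than averaging over older adjustments that may differ.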

Filling in missing data for Hohenpeissenberg:
This is a version of a GHCN report with missing data filled in, so it is used
to fill the gaps of the corresponding GHCN series.

Result: v2.mean_comb

Step 1 : Simplifications, elimination of dubious records, 2 adjustments (do_comb_step1.sh)
———————————————————————–
The various sources at a single location are combined into one record, if
possible, using a version of the reference station method. The adjustments
are determined in this case using series of estimated annual means.

Non-overlapping records are viewed as a single record, unless this would
result in introducing a discontinuity; in the documented case of St. Helena
the discontinuity is eliminated by adding 1C to the early part.

After noticing an unusual warming trend in Hawaii, closer investigation
showed its origin to be in the Lihue record; it had a discontinuity around
1950 not present in any neighboring station. Based on those data, we added
0.8C to the part before the discontinuity.

Some unphysical looking segments were eliminated after manual inspection of
unusual looking annual mean graphs and comparing them to the corresponding
graphs of all neighboring stations.

Result: Ts.txt

Step 2 : Splitting into zonal sections and homogenization (do_comb_step2.sh)
———————————————————-
To speed up processing, Ts.txt is converted to a binary file and split
into 6 files, each covering a latitudinal zone of a width of 30 degrees.
At the same time, stations with less than 20 years of data are dropped, since
in the subsequent gridding step we require overlaps of at least 20 years
to combine station records.

The goal of the homogenization effort is to avoid any impact (warming
or cooling) of the changing environment that some stations experienced
by changing the long term trend of any non-rural station to match the
long term trend of their rural neighbors, while retaining the short term
monthly and annual variations. If no such neighbors exist, the station is
completely dropped, if the rural records are shorter, part of the
non-rural record is dropped.
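The homogenization goal described here can be sketched as a trend swap. This is a deliberately crude illustration with invented names and a single straight-line fit; the actual adjustment procedure is more elaborate:

```python
# Toy sketch of the homogenization goal above: replace a non-rural
# station's long-term linear trend with its rural neighbours' trend while
# retaining the short-term variations (the residuals). A real adjustment
# would not assume a single straight-line trend.

def linear_fit(ys):
    """Least-squares slope and intercept for ys indexed 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys)) / \
            sum((x - xbar) ** 2 for x in range(n))
    return slope, ybar - slope * xbar

def homogenize(urban, rural_mean):
    """Swap the urban trend for the rural trend, preserving residuals."""
    us, ui = linear_fit(urban)
    rs, ri = linear_fit(rural_mean)
    return [y - (us * x + ui) + (rs * x + ri)
            for x, y in enumerate(urban)]

rural = [0.01 * x for x in range(50)]        # rural neighbours: 0.01/yr
urban = [0.05 * x + 0.2 for x in range(50)]  # spurious 0.05/yr warming

adjusted = homogenize(urban, rural)
slope, _ = linear_fit(adjusted)
print(round(slope, 4))   # urban trend replaced by the rural 0.01/yr
```

Year-to-year wiggles in the urban record survive the adjustment unchanged, which is the "retaining the short term monthly and annual variations" requirement stated above.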

Step 3 : Gridding and computation of zonal means (do_comb_step3.sh)
————————————————
A grid of 8000 grid boxes of equal area is used. Time series are changed
to series of anomalies. For each grid box, the stations within that grid
box and also any station within 1200km of the center of that box are
combined using the reference station method.

A similar method is also used to find a series of anomalies for 80 regions
consisting of 100 boxes from the series for those boxes, and again to find
the series for 6 latitudinal zones from those regional series, and finally
to find the hemispheric and global series from the zonal series.

WARNING: It should be noted that the base period for any of these anomalies
is not necessarily the same for each grid box, region, zone. This is
irrelevant when computing trend maps; however, when used to compute
anomalies, we always have to subtract the base period data from the
series of the selected time period to get a consistent anomaly map.
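The 1200 km rule in this step can be illustrated with a toy grid-box routine. This is a hypothetical sketch: the linear distance taper used for the weights is an assumption made for illustration, since the text above only says stations are "combined using the reference station method".

```python
# Illustrative sketch of the 1200 km gridding rule above. The taper
# weight (1 - d/1200) is an assumption for illustration only.
import math

RADIUS_KM = 1200.0
EARTH_R = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cosv = (math.sin(p1) * math.sin(p2)
            + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return EARTH_R * math.acos(max(-1.0, min(1.0, cosv)))

def box_anomaly(box_lat, box_lon, stations):
    """Distance-weighted mean anomaly of stations within 1200 km of the
    box center. stations: list of (lat, lon, anomaly)."""
    wsum = vsum = 0.0
    for lat, lon, anom in stations:
        d = great_circle_km(box_lat, box_lon, lat, lon)
        if d < RADIUS_KM:
            w = 1.0 - d / RADIUS_KM      # nearer stations count more
            wsum += w
            vsum += w * anom
    return vsum / wsum if wsum else None

# Two nearby stations contribute; the distant one is excluded entirely.
stations = [(51.0, 0.0, 0.4), (52.0, 1.0, 0.6), (30.0, 40.0, 5.0)]
print(box_anomaly(51.5, 0.5, stations))
```

A box with no station within 1200 km simply has no value, which is why station dropout changes spatial coverage as well as the numbers.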

Step 4 : Reformatting sea surface temperature anomalies
-------------------------------------------------------
For both sources, we compute the anomalies with respect to 1982-1992, use
the Hadley data for the period 1880-11/1981 and Reynolds data for 12/1981-present.
Since these data sets are complete, creating 1982-92 climatologies is simple.
These data are replicated on the 8000-box equal-area grid and stored in the same way
as the surface data to be able to use the same utilities for surface and ocean data.

Areas covered occasionally by sea ice are masked using a time-independent mask.
The Reynolds climatology is included, since it also may be used to find that
mask. Programs are included to show how to regrid these anomaly maps:
do_comb_step4.sh adds a single or several successive months for the same year
to an existing ocean file SBBX.HadR2; a program to add several years is also
included.

Result: update of SBBX.HadR2

Step 5 : Computation of LOTI zonal means
—————————————-
The same method as in step3 is used, except that for a particular grid box
the anomaly or trend is computed twice, first based on surface data, then
based on ocean data. Depending on the location of the grid box, one or
the other is used with priority given to the surface data, if available.

Result: tables (GLB.Tsho2.GHCN.CL.PA.txt,…)
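The priority rule in step 5 reduces to a tiny selection function. This sketch is illustrative only (the function name and sample values are invented):

```python
# Minimal sketch of the step 5 rule above: per grid box, use the surface
# anomaly when available, otherwise fall back to the ocean anomaly.

def loti_box(surface_anom, ocean_anom):
    """Pick the surface value with priority, else ocean, else no value."""
    if surface_anom is not None:
        return surface_anom
    return ocean_anom

print(loti_box(0.42, 0.30))   # surface available: surface wins
print(loti_box(None, 0.30))   # ocean fallback
print(loti_box(None, None))   # no data for this box
```

In the actual analysis the choice also depends on the location of the grid box, per the description above; this sketch only captures the surface-first priority.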

Final Notes
———–
A program that can read the two basic files SBBX1880.Ts.GHCN.CL.PA.1200 and
SBBX.HadR2 in order to compute anomaly and trend maps etc was available on our
web site for many years and still is.

For a better overview of the structure, the programs and files for the various
steps are put into separate directories with their own input_files,
work_files, temp_files directories. If used in this way, files created by
step 0 and put into the temp_files directory will have to be manually moved
to the temp_files directory of step 1. To avoid that, you could
consolidate all sources in a single directory and merge all input_files
directories into a single subdirectory.

The reason to call the first step “Step 0” is historical: For our 1999 paper
“GISS analysis of surface temperature change”, we started with “Step 1”, i.e.
we used GHCN’s v2.mean as our only source for station temperature data. The
USHCN data were used for the 2001 paper “A closer look at United States and
global surface temperature change”, the other parts of “Step 0” were added later.

If you have questions about GLB.Ts.txt, go download the code and walk through it. It takes a competent individual a couple of days to get up to speed. When you show yourself ready to do the same work that John G or SteveMc or I or the Clearclimate code has done, then you have standing to make a comment.

I suppose you know how to compute an integral of a function, right? Now, do you know the theory behind it? Like, say, Riemann–Stieltjes integration theory? Probably not. You know how to do the arithmetic but not the underlying math. That’s OK, though, because why should everybody be self-sufficient in all aspects of a particular discipline? Why make everybody prove the theorems? Engineers don’t need that stuff; they just need to know some conditions under which the function is integrable and how to compute it if it is. So maybe everybody doesn’t need to walk the code, because I bet there will be a resource that summarizes the contents, thereby relieving everybody of having to retrace every step. Use other people’s labor to come up with something new.

Alan, Hansen’s constant revision of the data reminds me of the Gilligan’s Island Episode where the professor decides the island is sinking – all because Gilligan wanted to catch larger and larger lobsters!

“When you show yourself ready to do the same work that John G or SteveMc or me or Clearclimate code has done, then you have standing to make a comment.”

Steven, I’ve read your statements in the past with a great deal of earnestness. But you’re acting as if you’ve just come down from a mountain top carrying two stone tablets. The algorithm, or method if you will, that you’ve described isn’t exactly revolutionary. So, unless we go into detail about how GISS does it, we don’t have any standing to comment on the general error built into this type of averaging and in-fill? That’s a load of crap. Now GISS has particular insights into algebraic equations? And only a select few may comment on this type of mathematics? I can be part of the lucky few if I play follow-the-bouncing-ball with the programmer’s code? If it is anything like you described, I’ll comment on it, because I essentially do the same thing in my line of work. It’s crap. I have fewer inputs and fewer steps and still hate to generate a bill from it, much less pass laws because of it. Steven, I thank you for the work you did to open up GISS’s methods and code. But on this particular day, you’re viewing this discussion in a single dimension, and if there is a lesson from the climate discussion, it is that there are various views and dimensions to any particular part of it. I’d urge you to re-think your perspective on this matter.

Anu says he might make a prediction, maybe [I predict he’ll chicken out again], but then he says: “Of course, any prediction I make would lie somewhere between young molten Earth and snowball Earth,” the ultimate sharpshooter’s fallacy, making Hansen’s giant error bars look puny by comparison. Very brave prediction… not.

Here’s my prediction: when the climate sensitivity number is finally determined, it will be found to be decisively under 1°C, making the emission of CO2 a complete non-issue.

Well that explains the error. Clearly then they are comparing apples to oranges. But in your (very clear and easy to understand) example, when the original calculation was made, station 2 was still too young to use. So 10 years later, it is still too young to use for 1998. So why use it? The temperature for 1998 is not changing 12 years later. Except when they decide to add in values not deemed accurate enough at the time of the calculation, but 12 years later they are deemed such.

That’s too bad, I was about to heap praises on him for his god-like insights to GISS that the rest of us mere mortals couldn’t possibly understand or the ones that can are only too lazy to do so and aren’t worthy of conversing with someone of his abilities, dedication and knowledge.

In re: pols. This isn’t like pols. This would be the equivalent of going back now and declaring that Al Gore didn’t actually get 50.01 percent of the vote, he only got 48%, and that Barack got 50.01% in the most recent election. If your old figures keep changing (esp. in a monolithic downward way) then your methodology necessarily comes into question; see my ‘walking up the down escalator’ analogy in my earlier comment. You cannot say that ’98 was the warmest ever at 60.4, then 5 years later claim ’05 to be the warmest ever at 60.4 with ’98 really at .57, then come back in 2010 and say we’re at the warmest ever at 60.4, while ’05 was really .57 and ’98 was at .54. That is just folly.

And that isn’t illogical at all; if your means of estimating global mean surface temperature (GMST) tends to systematically overstate today’s temperatures (requiring downward revision multiple times in the future) then you have a real problem stating with any useful degree of certainty what today’s GMST is.

“There is no CO2 mitigation strategy which will stabilize our temperature estimating methodology.” I think you missed the point here, Steve: if CO2 causes global warming, then you can mitigate global warming by reducing CO2 emissions. There is no amount of CO2 reduction, however, that will allow you to better estimate global surface T. So if you don’t have very accurate GMST estimates that are stable over time, there’s no point in pointing to an experiment involving a bottle and heat lamps to tell us we have a problem to solve. The whole notion of multiple ‘tipping points’ in order to make this into a true crisis is almost hysterical. Whether anyone agrees with the basic localized physics or not is irrelevant. Take the cap off your bottle and see what happens to heat in the uncontained atmosphere. But if you cannot point to increasing temps that are stable over time (not revised downward every 5 years), then you certainly cannot claim we need to reduce CO2 to reduce GMST.

Now: you can either produce a historic record of increasing global temps through your methodology that doesn’t involve systematically revising prior temperature estimates downward, or you cannot. So far Anthony, Lindzen, McIntyre, et al. have been very successful in showing this to be very problematic for the Manns and Phil Joneses. Meanwhile Mann and Phil keep hiding data and methodology, getting rid of the MWP and hiding the decline. Pick which side of the argument you want to be on, but really we need to get rid of scientists who’ve chosen sides and get back to just studying what’s going on. I think your last comment about reducing our dependence on fossil fuels says it all; yes, it’s better to conserve resources, but that is completely absurd in the AGW/CO2 debate. It’s got no place here. To bring it up should be the global warming equivalent of Godwin’s law: argument over, you lose.

I will cede points 2, 3 and 4: we cannot quantify the problem reliably, we cannot estimate the harms reliably, and we cannot determine the costs and mitigation solutions accurately. That about sum it up? Yeah, it’s ManBearPig. It’s the bogeyman. We can’t define it, we can’t tell you for sure what will end it, but we can tell you what you need to do about it, and it involves surrendering this little bit of your personal freedom right here. Okay, I don’t believe this is a conspiracy, but it sure doesn’t seem too far-fetched either.

I too am a lukewarmer: has the climate gotten warmer in the last 100 years? Yes, most likely. Have anthropogenic CO2 emissions contributed to this? Likely, or possibly to some degree or another. I don’t think they’re the primary driving influence, and I am certain science hasn’t answered that question in any meaningful way. However, we are more than two standard deviations below where the GCMs said we’d be by now on GMST, and I don’t see the earth warming significantly lately. If GISS wants to drone on and on about the most recent decade, let’s look at where their models said we’d be by now. And tipping points? Come on already. If their claims were at all valid we’d have never entered the last three ice ages.

So, in conclusion: yes, it is a cult. If you want funding, you need to tie your subject back to global warming. If that’s not a self-affirming feedback loop then I don’t know what is. There is real science that needs to be done, but if it’s done by those who already know what’s causing the warming, then you will certainly get warming and it will certainly be proven to be caused by CO2. Of that you can be certain.

They’ve lost any and all pretense of objectivity which clearly precludes their involvement in scientific endeavor, so they need to step aside and allow some real scientists to come in, pick up the pieces, try to establish the historical data set and get to work understanding how our atmosphere and oceans and climate really work.

Right now we’re too busy proving it’s warming by walking up a down escalator.

There is no ANSI smiley face – ANSI (ISO 8859) is an 8-bit character encoding standard. Don’t get confused by the many ways different computers/software let you enter characters – you can easily View Source for web pages and see the underlying codes in any hex editor.
The smiley is a 16-bit Unicode character – as is his evil twin: ☻

Seven of the eight bits (which cover the first 128 characters) have been standardized worldwide and are known as ASCII – the American Standard Code for Information Interchange. In the all-encompassing Unicode, this set is referred to as Basic Latin. However, with the next 128 characters – known as the extended set – a number of variations have arisen.

Below is the character set sometimes referred to as the ANSI code page, or the ISO-8859-1 character set. This character set is made up of two groups: printable characters and control characters. Because Windows substituted additional printable characters for the ISO control codes 128–159, it is now more correct to refer to this variant as the Windows-1252 code page. However, it retains the name “ANSI” in Microsoft’s Notepad for historical reasons.
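The encoding facts above are easy to check in a few lines of Python; the byte value and characters below are just illustrative picks:

```python
# The smiley is a Unicode code point well outside any 8-bit set, and
# Windows-1252 differs from ISO-8859-1 precisely in the 128-159 range.
smiley, evil_twin = "\u263A", "\u263B"   # the smiley and his evil twin
assert ord(smiley) > 255                 # cannot fit in an 8-bit encoding

# Byte 0x93 is a C1 control character in ISO-8859-1 ...
iso = bytes([0x93]).decode("iso-8859-1")
# ... but a printable left double quotation mark in Windows-1252.
win = bytes([0x93]).decode("windows-1252")
```

Same byte, two different characters, which is exactly why the 128–159 range causes so much confusion.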

“In re: pols. This isn’t like pols. This would be the equivalent of going back now and declaring that Al Gore didn’t actually get 50.01 percent of the vote, he only got 48%, and that Barack got 50.01% in the most recent election. If your old figures keep changing (esp. in a monolithic downward way) then your methodology necessarily comes into question; see my ‘walking up the down escalator’ analogy in my earlier comment. You cannot say that ’98 was the warmest ever at 60.4, then 5 years later claim ’05 to be the warmest ever at 60.4 with ’98 really at .57, then come back in 2010 and say we’re at the warmest ever at 60.4, while ’05 was really .57 and ’98 was at .54. That is just folly.”

The OLD FIGURES do not change. Get that through your head.

In 2000, GISS looks through 7364 stations and they select stations to construct an average. Their rules say: NO STATION that has less than 20 years of record can be used. So, for example, a station that started reporting in 1990 could not be used.
By 2010 that station NOW has 20 years. NOW its data from 1990-2000 CAN be used. That will of necessity change the estimate for 1990-2000. Further, you WANT to use more data. If GISS did NOT use this data, THEN you would claim they were disregarding data. When they do use the data, you cry that the estimate changes.
The example of the poll was to show your silliness about not having a “stable” statistic.

And further, you can say in 2005 that 1998 was the warmest and then in 2006 say that, no, 2003 or 1934 or whatever was the warmest. It’s the warmest GIVEN THE ADDITION OF NEW DATA. It would be silly NOT to change your claim. It would be dishonest NOT to change the claim.
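The 20-year rule described above can be sketched in a few lines. This is my own toy illustration, not the actual GISS code: the station records and values are invented, and the threshold is shrunk from 20 years to 4 so the example stays small:

```python
# Toy version of the rule: a station is excluded until it has enough years
# of record; once it qualifies, past regional averages get recomputed.
MIN_YEARS = 4  # hypothetical threshold; the real rule uses 20

def regional_average(stations, year):
    """Average all qualifying stations that report in the given year."""
    usable = [s for s in stations if len(s) >= MIN_YEARS]
    vals = [s[year] for s in usable if year in s]
    return sum(vals) / len(vals) if vals else None

station1 = {1995: 0.0, 1996: 0.0, 1997: 0.0, 1998: 0.0}
station2 = {1997: 1.0, 1998: 1.0}            # record too short: ignored

avg_1998_then = regional_average([station1, station2], 1998)   # 0.0

station2.update({1999: 1.0, 2000: 1.0})      # record now long enough
avg_1998_now = regional_average([station1, station2], 1998)    # 0.5
```

Nothing in station2’s 1998 reading changed; only its eligibility did, and the 1998 regional estimate moves anyway. That is the mechanism being argued about.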

“But if you cannot point to increasing temps that are stable over time (not revised downward every 5 years) then you certainly cannot claim we need to reduce CO2 to reduce GMST.”

Wrong. in several ways.

1. I can point to a stable estimate: RomanM’s. It’s actually the most statistically sound approach, as it uses all available data. Also, CRU is stable. Also, NCDC is stable.

2. If you had a series that showed 1C of warming year to year, but it changed slightly year to year (sometimes .99C, sometimes 1.01C), you surely could NOT conclude as you do. The ‘stability’ of a measure year to year has NOTHING to do with the argument to mitigate (which I don’t buy anyway). It COULD matter if:
A. all measures had an instability;
B. the instability was LARGE;
C. the instability kept me from doing skillful hindcasts.

3. If you had an unstable series that showed the problem was WORSE than you thought, you’d better pay attention to it. Especially if the “instability” derives from the fundamental choice to consider more data as more data becomes available.

“I too am a luke warmer; has the climate gotten warmer in the last 100 years? Yes, most likely. Have anthropogenic CO2 emissions contributed to this? Likely or possibly in some degree or another. I don’t think they’re the primary driving influence and I am certain science hasn’t answered that question in any meaningful way”

1. Glad you agree with Christy, Monckton, Spencer, Lindzen, Willis.

2. You cannot rationally maintain the following:
“The science hasn’t answered the question in any meaningful way BUT I don’t think that they are a primary driver.”

#2 is illogical. The best you can say is that the science is uncertain. Unless you have a theory that explains or supports the notion that they are NOT the primary driver, you would be more consistent if you said “the best science is uncertain; we don’t know. They could be primary, secondary or hardly an effect at all.” But instead you point to uncertainty in the science as cover to give an opinion. Not skeptical of your own ability to ascertain the truth; selectively skeptical.

And yes, I know the “GCMs” are forecasting high. That plays a role in being a lukewarmer.

“If you obtained from GISS the data they crank through to obtain such wonderful products as their global average whatever, would you be looking at original unaltered data from GHCN, USHCN, and SCAR, or something adjusted?

What is “GLB.Ts.txt” anyway?”

It depends upon what step of the analysis you look at: GHCN, USHCN and SCAR are read into the program. At each step, fully documented, you can see what is done. If you have questions about GLB.Ts.txt, go download the code and walk through it. It takes a competent individual a couple of days to get up to speed. When you show yourself ready to do the same work that John G or SteveMc or me or the Clear Climate Code project has done, then you have standing to make a comment.

Actually, as I previously posted, all I had to do was Google and I found that file, at least the current version.

And as I said in my first comment, we’re looking at the results, such as that file, not the process. There is no need to know everything that is done between the animal and the package to evaluate if the sausage is good or not. Plus I fail to see how an in-depth study of how a particular sausage is made will make me think it tastes any better.

BTW, please avoid getting so worked up as it may lead to your being cranky towards your roommate, and we don’t like to see Charles get irritated.
;-)

“Well that explains the error. Clearly then they are comparing apples to oranges. But in your (very clear and easy to understand) example, when the original calculation was made, station 2 was still too young to use. So 10 years later, it is still too young to use for 1998. So why use it? The temperature for 1998 is not changing 12 years later. Except when they decide to add in values not deemed accurate enough at the time of the calculation, but 12 years later they are deemed such.”

************************
In my example I shortened the number of years FOR ILLUSTRATION, so that people could work through the problem without taking their shoes off to count. The stitching of series is MORE complicated than I have explained in my simple example. The simple example is merely to explain the process.

here is a rule. When two stations 5 miles apart overlap by 2 years, combine them.

Station 1. 0,0,0
Station2 NA,NA,1

Now in year 3 your estimate of the temp for “station 1 area” is 0,0,0.
BUT there is data you are NOT considering: station 2. Why not? Well, it’s got a short
record. A year goes by. You get this:

Station 1. 0,0,0 0
Station2 NA,NA,1,1

Now you do have enough data to include station 2, so you revise your estimate of the past based on the latest information:
.5, .5

Now imagine the problem with GHCN. You have 7280 stations; add in SCAR and you have 7364. Now 50% of the stations have ‘duplicate records’, except the duplicates are not always duplicates: they can be the same station with a different instrument. Some stations have 15-year gaps, 2-year gaps, 1-year gaps. Add 20 years of fresh data to the end of a series and it will cascade changes all the way back to the start. Understanding how these changes propagate is not, at first, apparent. People who know me know I screamed about this very thing. Until I understood it. Until I watched John G work through some aspects of the problem. Until I actually sat down and tried to do the job myself. Then I said “hmm, tough problem.” How many different ways are there to solve this, what does each of those ways get right and get wrong, and how much does it matter?
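The simple stitching example above (plain averaging, no bias offsets, the 2-year overlap rule) can be written out as a toy sketch. This is only the illustration from the comment, not the real GISS algorithm, and the 2-year threshold comes straight from the stated rule:

```python
# Toy stitching: fold station 2 into the area estimate only once it
# overlaps station 1 by at least min_overlap years (None = no data).
def area_estimate(st1, st2, min_overlap=2):
    # Count the years in which both stations report.
    overlap = sum(1 for a, b in zip(st1, st2)
                  if a is not None and b is not None)
    merged = []
    for a, b in zip(st1, st2):
        if overlap < min_overlap or b is None:
            vals = [a]        # station 2 too short (or missing): station 1 only
        else:
            vals = [a, b]     # enough overlap: average the two stations
        merged.append(sum(vals) / len(vals))
    return merged

# Year 3: only one year of overlap, so station 2 is ignored.
year3 = area_estimate([0.0, 0.0, 0.0], [None, None, 1.0])
# Year 4: two overlap years, so the past estimate gets revised.
year4 = area_estimate([0.0, 0.0, 0.0, 0.0], [None, None, 1.0, 1.0])
```

Running it reproduces the revision in the example: the year-3 estimate is 0,0,0, and one year later the overlap years become .5, .5 even though no old reading changed.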

The falsifications being made to the record are made before they’re posted. Looking back even further in time, there are versions of this graph that correctly place the 1930s as the warmest era.
_________________________________

Here is an example of Hansenization of the US temperature record.

When you are talking tenths of a degree it is real easy to fudge… excuse me that is adjust the data.

If you obtained from GISS the data they crank through to obtain such wonderful products as their global average whatever, would you be looking at original unaltered data from GHCN, USHCN, and SCAR, or something adjusted?

You would be looking at fully adjusted data. Their algorithm ostensibly “unadjusts” it, then “readjusts” it.

(This is a baffling procedure, as GHCN and USHCN raw data is available. One wonders why NASA would not start with that.)

“The algorithm you’ve described or method, if you will, isn’t exactly revolutionary. So, unless we go into detail about how GISS does it we don’t have any standing to comment on the general error built-in to this type of averaging and in-fill? That’s a load of crap. Now GISS has particular insights to algebraic equations? And only a select few may comment on this type of mathematics?”

Take two stations 1 km apart. I ask you to tell me the average for that 1 km patch of the earth.

What do you figure, and why? The actual problem is harder than that, but take the easy one. What’s the best estimate? What do we mean by “best estimate”? Now make the record even patchier, like this:
Station 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-,0,0,0,0,0,0,0,0,0,0,
Station -,0,-,-,-,0,0,0,0,0,0,0,0,0,-,-,-,-,0,0,0,0,-,-,0,1,0,0,0,0,0-,-,-,-0,0,0
What then? Are these the same station? Two transcriptions of the same record? Do we average them? Only pick the long one? What about the blank in the long one? Do we infill it? Ignore it? Throw away the information? Stations 1 and 2 match each other perfectly where they overlap; can we use info from one to infill the other? Good question. Does it make a difference?

Here is the deal. IF you take the “skeptic method” (JeffId, RomanM), the method that is the best, you will actually show a LARGER warming trend than GISS or CRU. GISS’s method is not the best. CRU’s is not the best. They UNDERESTIMATE the warming. That’s the hilarious thing. Those of us who were skeptical of these methods actually ended up proving that the better method shows more warming. So if people want the facts about the results of the best method, those are the facts. CRU has warts, GISS has warts, and those warts UNDERESTIMATE the warming. So, what happens if GISS decides that RomanM’s method is better? Are you then going to criticize that better method?

“There is no need to know everything that is done between the animal and the package to evaluate if the sausage is good or not. Plus I fail to see how an in-depth study of how a particular sausage is made will make me think it tastes any better.”

However, if somebody claims that the sausage contains rat hair, when it in fact DOES NOT contain rat hair, then I’ll say so.

There are valid claims to make about the shortcomings of the RSM. There are valid reasons WHY one should not like its taste. RAT HAIR is not one of them. What I am trying to do is get you guys to focus on the BETTER arguments, on the REAL arguments. It’s the rat poop you should be focusing on, rather than making up crap about problems that don’t exist.

Jaye
“That’s ok though cause why should everybody be self sufficient in all aspects of a particular discipline? Why make everybody prove the theorems? Engineers don’t need that stuff they just need to know some conditions about where the function is integrable and how to compute it if it is. So, maybe everybody doesn’t need to walk the code cause I bet there will be a resource that summarizes the contents thereby relieving everybody of having to retrace every step.”

Utterly beside the point. I am saying this: if you want to say anything worthwhile about GISS, if you want to ADD VALUE to the criticism of the method, then read the code. I’d like nothing better than for people to find REAL problems with the real code. What DOES NOT HELP is for people to raise issues that have already been addressed, already been studied, already been put to bed by people who did do the hard work. Next up, somebody will complain that it’s in Fortran. Or somebody will go off about rounding errors, or differences in the earth’s radius, or any number of TANGENTS. Or somebody will argue that they change the percent of land (another misunderstanding that came from ONLY looking at the outputs without understanding the process).

Any way you cut it, if GISS fixes the shortcomings of the method, the warming trend goes UP. Feast on that, and then tell me what you think the best response to bogus posts is. Cheer along while people make bad points in a losing argument? I think not.

The ACTION in this debate is about the data quality. PERIOD. Not about the processing method. That’s the wrong grounds. It’s the wrong grounds because the BEST METHOD shows more warming than their flawed method. GET IT!

Offhand the reason might be because those are not their adjustments, not done the way they want to do them. So basically, they take the pre-adjusted historical records, then transmogrify them into their own version of the historical record. Which leads to the commonly-accepted principle, “You modify it, you own it.”

I will try to avoid ascribing nefarious motives to such changes, per that saying (however worded): “Never ascribe to malfeasance what can be accounted for by incompetence.” ☺

Hey… I haven’t ventured into a discussion about GISS adjustments. I suppose if I wanted to and had time, I’m sure I could do a code walk-through and figure out what is going on. I was just skimming the posts, avoiding the most outlandish rants, trying to learn a bit, etc. My beef was the extreme condescension you displayed.

Maybe that’s to be expected. Maybe it seems like a million voices all clamoring for attention, spewing opinions like so much oil out of an uncapped BP well. Fighting the same stuff day after day… I dunno. No biggie, though, as long as somebody is trying to actually determine what is going on without a preconceived bias. If they bash the odd blogger with a rhetorical cudgel, maybe it gets their attention. At least it was a bit of sport in between writing page after page of an ICD today.

Why reduce dependence on fossil fuel? Currently, it is the best energy source when trying to optimize for price and efficiency….
__________________________________________________________
We should be going toward 100% hydro/nuclear for electric generation and save the fossil fuel for transportation, plastics and other uses.

What part of “The algorithm you’ve described or method, if you will, isn’t exactly revolutionary.” did you not understand? Is my English so poor that you didn’t comprehend when I said, “Steven, I understand the algorithm; I also understand why it is done in this manner.”?

Steven, you’re taking a rather simplistic view of the whole issue. The question isn’t if this is the best method or not, the question is, “Is this sufficient?”

I assume you don’t need the offsets done to be able to discern the difference in trends when estimating the warming. Let me know if you need help with that.
Are the graphs at the top of this thread a lie or an illusion? What the hell are you babbling about? GISS stated 1998 was the hottest year ever. But then they revised the year downward not once, but twice. So this year can be the hottest year ever. They underestimate it retroactively!

Back to sufficiency. Is being wrong, in a backward revision, by .07 degrees over a decade sufficient? Well, if the trend continues, GISS will have been off by 0.7 degrees in a century. If we were discussing 10 degrees/century, probably not a problem. But that isn’t what we’re discussing, is it? We’re talking 2-3 degrees/century. Damn, man!!!! If this keeps up, 1998 will be at zero with the baseline still being used today! Are you going to tell me 1998 wasn’t or isn’t going to be an exceptional year if we continue to use the current baseline?

Even if we grant there isn’t any intentional mucking with the temp data (I don’t), do you believe this is sufficient for determining the global temp anomaly and passing laws to mitigate the ensuing doom predicted because of the errant assumption about our current temp anomaly?

The only reason I ask is because we already have (based on 1998 as it was reported then), we are continuing to do so (because 1998 is going down the memory hole), and we will continue to do so because of people like you.

Steven, you’ve done well, now go ask JH if you and he can write a book together. I promise to buy that one.

Steven, you should do yourself a favor. Run a few of those scenarios that you’ve used today. Only quit using 1s and 0s: use temps WNL for a location, randomly input temps with some orderly temps in place, then randomly remove some to mimic missing data, and see the many different averages and trends you can come up with. Then come back and tell me GISS is essentially correct.

I’m evaluating 3/4 HP 120 VAC TEFC motors made by four different manufacturers. Three makes have similar performance profiles; one is notably different. Moreover, I find out the performance of that one make will change depending on when it was made. The three grouping together I then consider to have the normal expected performance; the other I do not consider normal.

I want reliability, consistency. I don’t care how a particular motor was made; to some extent I don’t even care what it was made of. I want to know that when I need that particular motor for an application, it will do what I expect it to do, reliably and consistently. Plus, I really don’t want to find out after building something with a particular make of motor that it needs to get redesigned to accommodate the maker’s ongoing changes, and that I have to retrofit anything I made previously with that motor to allow for those changes.

I’m not buying GISS. Long-winded expositions on their manufacturing process won’t change that, nor will defenses against theoretical charges of substandard and/or questionable materials used in the product. Show me a track record of proven reliability and consistency, then I may consider using them.

Your seemingly endless posts are off-topic. This article is about GISS demoting 1998.
Not that that makes any difference to you, because you have an agenda.

Hmmm….Steven, earlier in this thread you wrote:

There are volumes and volumes of documentation proving that the Earth is the center of the Universe ,that earth is 6,000 years old, that Continental Drift is impossible, and that sea level will rise 1-6 metres during the next century.
Is science measured by the weight of the paper it is printed on?

And this one:

I know several well educated professionals who voted for Obama, which proves that he must be a good president.

And this one

GISS shows steadily increasing temperatures since 1998, which HadCrut, UAH and RSS don’t.
Now cut the endless, monotonous BS and accusations, and explain the discrepancy.

Which I pointed out was off topic at the time quoting me (jeez);

This post was not about GISS and its trend compared to other indexes, this post was about GISS modification of historical records. Please read John Goetz’s posts and comments on the subject and you might learn something about the subject you write so much about.

And then you wrote this one:

If I were more intelligent, I would be able to understand why it is OK to change data in a way which (purely coincidentally) suits someone’s political agenda.

Steven, you are unable to maintain a logical train of thought. You consistently run off on tangents, even literally picking a single word out of someone’s previous comment and running off on a tangent with it, as you have done at least three times in this thread.

You cherry pick constantly, and accuse others of this.

You run off topic constantly and you accuse others of this.

You constantly use strawmen and accuse others of this.

You constantly use ad homs and accuse others of this.

Every comment I make is either a response to a point you try to make, pointing out failings or deception in your presentation, or in tangential comments. I am insulting your unscientific, illogical points and random non sequiturs or pontifications, but I have never attacked you personally.

I attack your arguments, which are so flawed they are a downright embarrassment to WUWT. I consistently call out and demonstrate your failure to make logical points.

Now after all the random tripe you have spouted on this thread you have the audacity to call Mosh’s comments off topic?

He has explained in detail how and why the changes in the 1998 values occur. That is more on topic than anything you have written in this entire thread. He is also trying to explain to your supporters how this has been known for a long time and why it is not an important point, but the audience isn’t listening.

jeez says that Steven Mosher has “explained the 1998 shift in detail.” I must have missed that. Please repost.

In fact, I would like to see an entire article from Steven Mosher explaining why GISS is correct and HadCrut, UAH and RSS are wrong. Why 2010 is the hottest year on record, and why GISS is actually underestimating the temperature.

jeez says that Steven Mosher has “explained the 1998 shift in detail.” I must have missed that. Please repost.

More reading comprehension problems, Steven? Try reading upthread. Read ssllllooowwwllllyyyy. Maybe you will actually lllleeeaaarrrrnnnn something. I came up with a phrase a long time ago that applies perfectly here: I can explain it to you, but I can’t understand it for you. If this seems condescending, it is because it does get tiresome trying to explain things to someone who simply refuses to understand no matter how many ways the point is explained.

Your framing of the question GISS as “correct” and others “wrong” is a typical pontificating logical fallacy of yours.

Steven, they are all “correct”. They process data and display results. The concept of a global temperature is subjective in definition. Differences in processing will occur even if much of the underlying data is the same. Over long periods the indexes all show similar trends; over short periods they can diverge. In some years some may be closer to the abstract ideal than others, but since we don’t know what that is, we don’t know which. Your attempt to find villainy in GISS over a short-term (shall we say “cherry-picked”) time period is beyond tiresome. It is a distraction from adult conversations which could be taking place.

There is no rigorous reason to focus on 1998 as significant to your point, as 2005 is currently the record year in GISS. In the past you were criticized for improperly finding trends in oscillating or sinusoidal data, and you actually corrected your mistake by measuring (I think) peak to peak. In this series of posts on GISS you have used this previous criticism completely out of context, in a nonsensical manner, claiming that measuring El Niño to El Niño is the same thing as measuring a trend in oscillating data, to justify your cherry-picking of 1998 as a significant year. It is a hand-waving argument without merit or justification, but it sounds logical and cool to your followers.

There are likely problems with much of the underlying data. That is the point of Anthony’s surface station project and paper. Those are interesting points.

So you can say I haven’t answered your question. What I have done is explain in detail why your question is leading and unscientific and not worth answering.

Far be it from me to carry Goddard’s water, but when you said, “This post was not about GISS and its trend compared to other indexes, this post was about GISS modification of historical records,”

I can’t help but think your view of history is similar to Mosh’s. I think this is the second sentence of the post: “In a previous article, I discussed how UAH, RSS and HadCrut show 1998 to be the hottest year, while GISS shows 2010 and 2005 to be hotter.” Of course, in 20 years’ time I could be wrong, and it could be revised to the first or third sentence, or not even placed in the dialogue, because our view of history is more accurate the further we get from history, obviously. History revision is a good thing!! It trains us to be more flexible in interpreting the occurrences we lived through! I never thought I’d see the day when people told me the properties of mercury don’t apply to my observations, only to other people who have more insight into basic algebraic equations than myself.

Yeah, we thought we knew we were right then, but then we really knew we were right later, but then we really, really know we are right now, and we’ll be really, really, really right later. Odd how people don’t believe we’re credible. WE SHOWED THEM THE COMPUTER PROGRAM!!!!

It’s called introspection. Try it. No, you won’t like it. No one does. If you do it properly, you will understand the necessity. I’ll buy the first round.

“He has explained in detail how and why the changes in the 1998 values occur. That is more on topic than anything you have written in this entire thread. He is also trying to explain to your supporters how this has been known for a long time and why it is not an important point, but the audience isn’t listening..”

Sis, he’s explained in detail how and why GISS was wrong on more than one occasion. I didn’t post the article, but I’ll defer to the one who did as far as pertinence. Or are you presuming to read Goddard’s thoughts now? I can’t speak for the rest of the people on this thread, but I don’t support anyone but my family. There is no “support” here. Mosh has made statements, he’s been asked for clarification, he’s responded, and I’ve made judgments about his arguments. Unless someone shows me any different, I believe I’ve summed it up fairly well. Do you want some clarification, or do you simply disagree, or do you agree, or do you simply have a personal thing with Goddard? In which case, the latter is off topic and not relevant to any discussion I can see here, but perhaps I’m wrong and in 20 years your personal views will be relevant to someone somewhere somehow. Apparently temps are relative. Mostly to the way one feels about math and algebra.

Well, if OT stuff is allowed here, then I’ll try to post this. Actually, it is relevant as to how to start a fight! Perhaps y’all heard this.

Saturday morning I got up early, quietly dressed, made my lunch, and
slipped quietly into the garage. I hooked the boat up to the van, and
proceeded to back out into a torrential downpour. The wind was blowing 50 mph, so I
pulled back into the garage, turned on the radio, and discovered that the
weather would be bad all day. I went back into the house, quietly
undressed, and slipped back into bed. I cuddled up to my wife’s back, now with a
different anticipation, and whispered, “The weather out there is terrible.”
My loving wife of 5 years replied, “And, can you believe my stupid husband
is out fishing in that?”
And that’s how the fight started…

Stop. It is not. There is an average global temp. It is not subjective. MATH ISN’T SUBJECTIVE!!!!!

Really? What is the equation for it? How do you decide which data points to use? How do you decide how to weight them? Do you make any Time of Observation adjustments? Do you grid the data, or just average all the datapoints? What do you do if coverage is greater in one area than in another? What is the sea surface temperature of a region covered in sea ice? What do you do with station moves? What do you do with station discontinuities? How do you adjust for UHI? What if you have a single station on an island whose temperature differs from all the water around it?
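To make the point concrete, here is a toy sketch (hypothetical numbers, not any index’s actual code) of just two of those choices: averaging all stations naively versus gridding them into latitude bands and area-weighting each band. Same data, two defensible methods, two different “global” means.

```python
import math
from collections import defaultdict

# Stations as (latitude, temperature anomaly). Coverage is denser in the north,
# which is exactly the situation where the averaging choice matters.
stations = [
    (60.0, 1.2), (62.0, 1.1), (64.0, 1.3), (66.0, 1.0),  # four northern stations
    (-30.0, 0.2),                                         # one southern station
]

# Choice 1: naive average of every datapoint.
naive = sum(t for _, t in stations) / len(stations)

# Choice 2: bin stations into crude 10-degree latitude bands, average within
# each band, then weight each occupied band by cos(latitude) for its area.
cells = defaultdict(list)
for lat, t in stations:
    cells[round(lat / 10)].append(t)

weighted_sum = weight_total = 0.0
for band, temps in cells.items():
    w = math.cos(math.radians(band * 10))  # band area shrinks toward the poles
    weighted_sum += w * (sum(temps) / len(temps))
    weight_total += w

gridded = weighted_sum / weight_total
print(round(naive, 3), round(gridded, 3))  # the two choices disagree
```

Neither number is “the” global average; each is the output of a method, which is the subjective-choices point being made above.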

Have you looked at the Pielke’s opinions on Global Temperature indexes?

Seriously this is just Math isn’t it? So what is the equation since there are no subjective decisions to be made?

It’s funny that you think I’m an alarmist because I point out that Steven’s concerns are a distraction from real skeptical science, such as the paper Anthony is working on. I have also said in this thread that I think GISS’s constant estimation is a flawed method in my opinion, but it also doesn’t matter in the big picture. Over longer time periods the various indexes give similar results, making this whole attack on GISS methods a complete waste of time. The real issue for surface indexes is the underlying data, and that is what Anthony is working on. The battle over issues with the methods is moot.

“REPLY: snip – please rewrite this and feel free to resubmit when you don’t have to use vulgarity to get your point across. – Anthony”

Yeh, I know. Truly, it is sometimes difficult to convey thought and expression in type. I don’t typically resort to vulgarity, but it would have conveyed the appropriate sentiment over the blogosphere.

Not that I’m anybody, but I think the point you’re trying to make is being missed by everybody on this thread. And people need to lighten up and hear what you are saying: It’s DATA QUALITY!

Slap me down if I’m wrong; but, in every example you’ve given, the second data set has been higher (I’m going to guess that there are very few second data sets that are lower). Therefore, the objective should be to examine the quality of these second data sets (for heat island effects, adjustments therefore and thereto, etc.). I think Mr. Watts, et al. are examining this as we converse (soon to be published?). And what happens when those better methods you mentioned are used with the new quality data? We’ll all have to live with it.

Ok, if you don’t like “alarmist” how about apologist for the alarmists? How difficult is it for you to say, “I don’t know.”? How about, “We don’t have a way of knowing right now.”?

You asked, Seriously this is just Math isn’t it? So what is the equation since there are no subjective decisions to be made?

I’m not the one making assertions. It isn’t to me to create the equation. But, were I the one to do so, I wouldn’t create an equation that would be, by my own admission, incorrect. I’d at least have a plausible explanation why my assertions were correct if/when I made an assertion. From the discussion on this thread there is no reasonable expectation of GISS pronouncements to be correct. BY THEIR OWN ADMISSION!!!

You also said, “I have also said in this thread that I think GISS’s constant estimation is a flawed method in my opinion, but it also doesn’t matter in the big picture.

Here’s a big picture for you. Almost daily, across the globe, we pass laws making food and energy more difficult to obtain because of the “equations” and guesstimations. You do know that people suffer and die because of this, right? You do know that people in other countries are condemned to poverty because of the laws we are passing based on these guesstimations that are admittedly wrong to begin with? Right? You know, I wish I had the convenience of a lack of conscience. If I were a betting man, and I am, I’d bet GISS is more correct than wrong. That said, I’m not willing to bet someone else’s life or livelihood on it, much less an entire world’s. For the life of me, I can’t see why you don’t see the “what is right today is wrong tomorrow, but may be right or wrong again in the future” as wrong.

I’ll ask you the same thing I asked Mosh.

Is being wrong by .07 degrees in a backward trend significant over a decade? Well, if the trend continues, GISS will have been off by 0.7 degrees in a century. If we were discussing 10 degrees/century, probably not. But that isn’t what we’re discussing, is it? We’re talking 2-3 degrees/century. Damn, man!!!! If this keeps up, 1998 will be at zero with the baseline still being used today! Are you going to tell me 1998 wasn’t or isn’t going to be an exceptional year if we continue to use the current baseline?

In case anyone is wondering to which I was referring, here is the exact quote:

Steven Mosher says:
August 30, 2010 at 4:02 pm

“The ACTION in this debate is about the data quality. PEROID. Not about the processing method. Thats the wrong grounds. Its the wrong grounds because the BEST METHOD shows more warming than their flawed method. GET IT!

“Offhand the reason might be because those are not their adjustments, not done the way they want to do them. So basically, they take the pre-adjusted historical records, then transmogrify them into their own version of the historical record. Which leads to the commonly-accepted principle, “You modify it, you own it.””

Well, first you would have to understand the “adjustments” that NCDC made. One used to be a UHI adjustment. Since Hansen had his own procedure, it’s perfectly acceptable to apply your own adjustment. Second, you look at the magnitude of the adjustments. GISS adjustments are inconsequential to the overall average. Personally, I would take a different approach, but they document what they do. More troubling are the adjustments that GHCN make. That is the REAL ISSUE: the largely undocumented and untraceable changes made PRIOR to data IN at GISS. From data IN at GISS to data OUT, we know the steps. We can quibble about the 1/100ths of a degree that GISS fiddles with. That’s a side show. OR you can focus on the real problems:
1. the metadata that describes a station
2. The adjustments that get made PRIOR to DATA IN at GISS, CRU et al.

Side show or real problem. Personally I’d like more REAL HELP in the trenches. I’d like armchair analysts to get off their butts and join me in FOIAs to NOAA. How many have you done? I have 500 pages from my last FOIA: programming notes, memos, phone logs, meeting notes. You get the idea. GHCNv3 is going to happen; have you downloaded the code that explains the homogenization? Probably not. That task has been left to me and a couple other guys. So once again a few people will do the work while others opine without real facts or real experience.

Ralph, I understand what you are saying. One of the many problems I have is that the properties of mercury have been well known for quite some time. We knew the properties of mercury as well back in 1998 as we do today. Oddly, we are now treating 1998 as if it were 1798, and apparently back then they didn’t know how to keep the thermometer out of the outhouse. So the reading gets to be adjusted downward. And rightfully so! Their thermometer was in a pile of excrement! But today, today’s temp is imagined. But it is imagined to be cooler than what it really is, so we adjust it upwards. This is all OK because we have an algorithm that does so, CONSISTENTLY. We know it’s right because the programmer showed us step by step how it works that way. And we all know, if it is coded, then it is true. Forget what we said back then; we now have further insight into truth, and it is better truth than the last truth. Of course, tomorrow will be better truth than today, but all of it is truth nonetheless, but only on a day-to-day basis.

“In case anyone is wondering to which I was referring, here is the exact quote:

Steven Mosher says:
August 30, 2010 at 4:02 pm

“The ACTION in this debate is about the data quality. PEROID. Not about the processing method. Thats the wrong grounds. Its the wrong grounds because the BEST METHOD shows more warming than their flawed method. GET IT!”

Yeh, I’m still wondering how he can show that. Look here and tell me what you think. Be sure to explore the site; it is neutral and well worth visiting.

Most adults recognize that there is more than one way to analyze a problem. Rather than assuming that your view of the world is the only correct one, you might want to open your mind to the idea that other people have good ideas too.

“Most adults recognize that there is more than one way to analyze a problem. Rather than assuming that your view of the world is the only correct one, you might want to open your mind to the idea that other people have good ideas too.

The arrogance displayed here is indeed breathtaking.”

Character is a funny thing. Many of us know we lack character because we have the intelligence to know character is a desirable, if not, necessary trait. Those that lack intelligence do not pursue character.

Einstein once said, “Weakness of attitude becomes weakness of character.”

Ralph
“Not that I’m anybody, but I think the point you’re trying to make is being missed by everybody on this thread. And people need to lighten up and hear what you are saying: It’s DATA QUALITY!”

Yes. In any analysis problem there are always two components: data quality and algorithm limitations. When it comes to algorithms for processing data, there are usually subjective/methodological choices. If we are good analysts we test a variety of choices to see how our choices shape the answer. We want to minimize bias and minimize uncertainty.

Let me give you an example of a “subjective” decision: the baseline period for CAM. Why does CRU pick 30 years? Jones actually discusses this issue in the mails, but the reading-averse probably don’t recall that mail. The question is: does the answer change if you pick 29 years? 25? 18? 32? 35? Well, we can test that, and the answer does change. Not much, but it does. And we know why it changes. What number of years is objectively correct? Wrong question. The question is: does the selection change the MOM (measure of merit), and by how much? CRU decide that 15 years are required. What about 16? What about 30? What about 20 years where each year has a minimum of 11 months? All good questions. And for someone like me, they are questions I answer by varying the parameter throughout the range to see the effect. So I don’t sit at home and opine. I actually do the test.

We also test the methods with synthetic data. Does it capture a signal buried in noise? How well? With bias or no bias? But how many people read Jeff’s work or Chad’s work? Who here even read McIntyre’s work on Hansen’s RSM method, written over TWO years ago? What Steve did was study the code, implement an emulation, write his own approach and compare the results. He didn’t cut and paste a graph and count pixels. He did real work. Real work that has a statistical point. So we understand that RSM doesn’t necessarily give us an unbiased answer. By comparing Hansen’s method with Roman’s we gain insight into RSM’s shortcomings. We also gain insight into the shortcomings of CRU and of FDM (preferred by Willis and EM). As I noted, Jeff’s work changed my view of FDM. But here is SteveMc’s real work on Hansen. If people have not read that they have no standing in my eyes. No credibility. It’s required reading.
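The “vary the parameter throughout the range” test described above can be sketched in a few lines. This is my own toy illustration on synthetic data (not CRU’s code, and the numbers are invented): build a noisy record with a mild trend, then compute the most recent anomaly against baselines of different lengths and watch the answer move a little.

```python
import random

random.seed(0)
years = list(range(1951, 2011))
# Synthetic record: a mild 0.01 degree/year trend plus noise.
temps = [14.0 + 0.01 * (y - 1951) + random.gauss(0, 0.2) for y in years]

def anomaly_2010(baseline_years):
    """Anomaly of the final year relative to a baseline of the given length."""
    base = temps[:baseline_years]
    return temps[-1] - sum(base) / len(base)

# Vary the subjective choice and observe the effect on the measure of merit.
for n in (15, 25, 30, 35):
    print(n, round(anomaly_2010(n), 3))  # the answer shifts a little with n
```

The point of the exercise is not that one baseline length is right, but that the sensitivity can be measured rather than argued about.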

The sum total of all the work by people like JeffID, Roman, Chad, Zeke, Nick Stokes, and others is this: the methodological issues amount to a VERY SMALL component of the final answer. If anything the methods used by CRU and GISS underestimate the warming. So, my suggestion is that the focus should turn to data quality. EXCLUSIVELY. Focus your brains where the real problem is. Focus your efforts on understanding the homogenization code. Demonstrate that you have a nose for the real issue and not the side show. In a nutshell that is the message. Don’t talk about GISS unless you know the code; you will make mistakes. You will make the same mistakes that I made before I read the code. You will walk down paths that others have. You will miss the real problem and waste time and bandwidth on the tangents. On metadata, I need someone with GIS talent. Otherwise I will have to teach myself how to do that. I will, but every second I spend correcting the bogus issues is time I cannot spend on understanding the homogenization code or on getting better metadata.

It is one of the miracles of modern science how GISS corrections almost invariably seem to make the past cooler, and the present warmer.

Thanks for the acknowledgment, I was glad I was able to lead you to the discovery that you had previously been wrong about the GISS temperature record.
This correction to the data can be laid at the door of that well-known ‘warmist’, Steve McIntyre.
He brought to Hansen’s attention that a change in the data base used by GISS had led to a change from a TOBS adjusted series to a series without that adjustment, the result of which was a jump in temperature. When GISS corrected that error the jump was removed and the 1998 global temp dropped as you described. I don’t know why you didn’t include that in your post as well?

Sooner or later they may be able to correct away the Dust Bowl entirely.

Seems a rather odd comment since this correction reduced the 1998 temperature compared with the 1934 value, if they’d been trying to correct away the Dust Bowl surely they would have adjusted upwards?

Most adults recognize that there is more than one way to analyze a problem.

#########

It has been my argument that there are MANY ways to address the problem. Did you read what Zeke and I wrote on the various methods? We recognize that there are many ways. The difference is that we actually code up the different approaches and TEST THEM to see how important those differences are. Did you read where I wrote about the different analytical choices? Probably not. Did you read where McKitrick wrote about the question of methods? Probably not. The bottom line is there are basically these methods:
1. a least squares approach
2. CAM
3. RSM
4. FDM

Now, I used to put FDM in position 2. But reading Jeff’s work changed my mind about that approach. It changed other guys’ minds as well. But the bottom line is this: NONE of those choices change the final answer in any substantial way. Are the differences interesting technical discussions? Yup. Do they change the science of AGW? Nope.
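For readers unfamiliar with the acronyms, here is a rough sketch (my own toy code, not any group’s implementation) of two of the methods named above applied to the same synthetic data: CAM (the common anomaly method, anomalies against each station’s own baseline) and FDM (the first-difference method, averaging year-to-year changes). The point being made is that reasonable methods recover essentially the same trend.

```python
# Two stations with very different absolute temperatures but the same trend.
stations = {
    "warm_site": [15.0 + 0.02 * i for i in range(50)],
    "cool_site": [5.0 + 0.02 * i for i in range(50)],
}

def cam(series_by_station):
    """CAM: anomalies from each station's own 30-year mean, averaged per year."""
    anoms = []
    for s in series_by_station.values():
        base = sum(s[:30]) / 30
        anoms.append([t - base for t in s])
    return [sum(col) / len(col) for col in zip(*anoms)]

def fdm(series_by_station):
    """FDM: average the year-to-year first differences, then accumulate them."""
    diffs = [[s[i + 1] - s[i] for i in range(len(s) - 1)]
             for s in series_by_station.values()]
    mean_diffs = [sum(col) / len(col) for col in zip(*diffs)]
    out = [0.0]
    for d in mean_diffs:
        out.append(out[-1] + d)
    return out

trend_cam = cam(stations)[-1] - cam(stations)[0]
trend_fdm = fdm(stations)[-1] - fdm(stations)[0]
print(round(trend_cam, 3), round(trend_fdm, 3))  # near-identical trends
```

On messier real data the two methods diverge more than this clean example suggests, which is exactly why the sensitivity gets tested rather than assumed.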

I weighed 10 lbs at birth. At age 12 I weighed 120 lbs. What’s your best estimate of my weight at age 6? You have no measurement. Are you misleading people if you say
“based on the data at hand I guess X”? No. You yourself make estimates of what you think the ice will be at. How’d that work out? You estimate area by counting pixels? How’d that work out? You estimate slopes by fitting OLS lines. You know that method has assumptions.

I can’t speak for the rest of the people here, but I believe many, if not most, of us here have read the GD post! Do you think you live in a vacuum? I was reading CA before I ever heard of WUWT! Personally, I followed the whole damned thing. Stop. For whatever reason, you’re trying to convince people that GISS is proper in their assertions. You know they are not.

You’ve gone from absolute rejection of other people’s thoughts, to a condescending manner about how people can’t understand basic math principles, to rationalizing that it isn’t GISS that is doing the “adjustments,” but GHCN and NCDC. I notice you didn’t mention NOAA. Steven, it isn’t rational to believe GISS doesn’t understand what these groups are doing to the data. Further, it isn’t rational to accept the assertions GISS makes while understanding that the assertions will change in the near future. Seek help, or show me where it is rational to have such beliefs.

And what of 1934 Phil. ? The GISS Y2K data step failure made 1934 the hottest year in the USA on record by a small margin.

That seems to have been reversed also.

I’m not sure why you say that, Hansen has always said that 1934 is the record.
“In the contiguous 48 states the statistical tie among 1934, 1998 and 2005 as the warmest year(s) was unchanged. In the current analysis, in the flawed analysis, and in the published GISS analysis (Hansen et al. 2001), 1934 is the warmest year in the contiguous states (not globally) but by an amount (magnitude of the order of 0.01°C) that is an order of magnitude smaller than the uncertainty.”
In his 2001 paper, before he was aware of the data error Hansen said:
“The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.”

Seems off the radar now; it never gets a mention anymore, either in global or CONUS context:

“Although 2008 was the coolest year of the decade, due to strong cooling of the tropical Pacific Ocean, 2009 saw a return to near-record global temperatures. The past year was only a fraction of a degree cooler than 2005, the warmest year on record, and tied with a cluster of other years — 1998, 2002, 2003, 2006 and 2007 — as the second warmest year since recordkeeping began. ”

James:
Steven, you don’t think GISS is aware of your points 1. or 2.? RU KIDDING ME? But, then you’ll lend credence and validity to their assertions? Again, are you kidding me?

**********
James, yes, there were some metadata issues that GISS was not aware of, and some metadata solutions that NOAA was apparently not aware of. As for the adjustment issues, yes, there are some issues that have not been raised. Further, whether or not they are aware of all the issues is immaterial to the question of what you should focus on. You can choose to focus on the weak part of their argument or the strong part. You can focus on the thing that requires work, or you can focus on the things you can write comments about, things that don’t require much in the way of thought, much less work. So, I have respect and admiration for the work that Anthony has done. It was real work: a hunch, followed up by investigation, followed up by field work. And I think that the more people focus on that corner of the problem the better. The more people make this place a place where non-problems get raised, or where minor issues get blown out of proportion, the worse.

“You can focus on the thing that requires work, or you can focus on the things you can write comments about, things that don’t require much in the way of thought, much less work. So, I have respect and admiration for the work that Anthony has done. It was real work: a hunch, followed up by investigation, followed up by field work.”

That tells me all I need to know about you. Sis, I do real work every day. If you want to blather about how GISS should be sanctified, go right ahead. Is following a basic algebraic equation real work to you? The fact of the matter is, CAGW isn’t a problem people should focus on, nor commit work towards. This is a distraction that requires my attention only because people such as yourself lend credence to the problem. Why don’t you try being a useful, productive citizen of this world for a change? Reference several of my posts above. Sis, you can’t see the forest for the trees. Myself and the rest of the world should commit to a fictional problem that can’t even be argued beyond a post on the internet? Try getting a life. Try getting a rational view of reality. Try to understand 2+2 is always 4. 4 isn’t subjective. 4 doesn’t change with time. 4 is always 4. You know, I’m sorry you felt the obligation to work towards understanding this alleged crisis. I wish you didn’t feel it necessary. I wish people could think beyond the subjective numbers. They don’t. But please don’t believe you are doing mankind a service when you engage in the manner that you have. You are not.

I have real work to do tomorrow and it is late now. Math is the perfunctory work of ideas, yet math is the proof. Subjective mathematics is the last failing of mankind. Were I you, I would distance myself from that as far and as fast as I could. But that’s just me, “a person that only writes comments” that don’t require much thought nor work. Be sure not to take anything I said to heart, because you know I didn’t work for ……………………..????

“james yes there were some metadata issues that GISS was not aware of and some metadata solutions that NOAA was apparently not aware of. as for the adjustment issues, yes there are some issues that have not been raised. ”

Steven, they were aware. They were told over and over again. There are problems with the data and the meta data. How many times and how many ways did we have to show them? You want to talk about Steve Mc? How many times? Or Anthony? How many ways? Does it really take someone to go back through the code to show them where they went wrong? Wouldn’t it be incumbent to at the very least double check to make sure what people are stating is incorrect? It is not incumbent upon me to prove them incorrect. It is incumbent upon them to prove themselves correct. And that doesn’t include a rolling, subjective empirical value. That’s asinine.

More……
“Further, whether or not they are aware of all the issues is immaterial to the question of what you should focus on. You can choose to focus on the weak part of their argument or the strong part.”

AAGGHH!!! What? Everyone, at all times, should focus on the weak part. In any assertion, the weak part is where the fallacy lies! Obviously, I’m tired, else I’d have volumes to say about this. But I think it also speaks volumes if I leave it at that.

Come see the new ivory tower, see how different it is from the old ivory tower…

Best of luck getting support for your pate FOIA gras work, Mr. Mosher, now that you’ve gone to such great lengths [snip – (He’s, um, a bit annoyed at Steve.) Nothing personal, kadaka, I promise. In happier times I’d let this ride. But under the circumstances, I am going to ask everyone to tone down the conflict about three notches. thanks in advance for your understanding and cooperation. ~ Evan]

Meanwhile the rest of us will keep looking at three things that consistently give about the same reading, note how a different thing says something else and isn’t consistent, and determine that the one that says something different shouldn’t be trusted, so we stop using it. But that’s what you would expect from us un-intellectual savages, thus you shouldn’t be surprised by it.

I have looked at smoothed representations of four global surface temperature anomaly products and I agree that those three that represent combined land and ocean anomalies all seem to be relatively consistent. However, beginning around Y2000, the GISS land meteorological station anomaly does seem to diverge and now, relative to the other indices, it seems to be about 0.1 degrees C warmer than it was at the beginning of the decade. Perhaps this is because the land stations are increasingly more likely to be biased by local urban heating.

After the reported attempt to remove the ‘Medieval Warm Period’ from the record, I would expect any change to familiar official data that might indicate the apparent halt in global warming this decade was a metric error would be seen by many as *extremely* suspicious.

I work in the healthcare industry and am involved all the time in clinical trials, protocols, data collection, biostatistical analysis and validation, etc., based upon double-blind, randomised and placebo-controlled studies. If I used any data and algorithms the way GISS uses them, any studies done on that basis wouldn’t see the light of day and would get thrown out. I know what it is to collect, analyse and study data with rigid rules dictating every aspect of it. That’s because the slightest mistake can affect human health.

Here, with such joke data, fancy algorithms and changes to decades-old data by estimates, we’re told to believe that everything’s hunky-dory. Just because an algorithm is published doesn’t mean it is correct. If an algorithm can’t use certain data when it is collected, don’t ever use it, or write a better program.

Justifying such practices is being accessory to fraud, especially when policy decisions amounting to trillions affecting whole world and national economies are based on such data. And especially when the gatekeepers and manipulators of such data are activists like Hansen, Schmidt et. al. who have a vested agenda and long crossed the line between science and activism. And when I say ” science ” here, it means true unbiased facts, without any fancy footwork on data homogenisation, pasteurisation and adjustments designed to show a trend they want to show for their alarmist purposes.

Judging from the way their algorithm functions, there is no way it is reliable for current predictions. And yet they shout from the rooftops that 2010 is the “hottest ever,” when the real issue with 2010 will be known only sometime in 2025, as per the past track record of their own algorithms.

A few years ago, Roger Harrabin gave an interview to CAM (Cambridge Alumni Magazine) in which he wondered why voters and their stupid views insist on getting in the way of the scientific experts who should be allowed to run the world for us.

This is, of course, the type of traditional technocracy that’s been espoused from the left since, oh, H G Wells anyway. It worked splendidly for the likes of Lysenko in Russia, and of Burt in his enlightened work on eugenics – leftism’s Big Idea of the 1930s.

You may wonder how an alumnus of a scientific university could possibly be so ready to believe, where the climate is concerned, that there are any experts, or that there is a consensus, or that any such consensus would carry any weight even if it did exist.

The thing is, although he attended a science-oriented university, Roger Harrabin is not a trained scientist (he read English Literature). His function therefore cannot possibly be to scrutinise and report on the state of the science. He is not intellectually or academically equipped to do so.

All he can do is repeat what he’s heard. And here, with a delicious irony, he listens to what a noisy minority tells him is “the consensus”.

So in the worldview of a BBC environmental correspondent, for whom I am forced to pay, public policy on climate should be driven not by the voters’ wishes, but by the experts’ view. And how do we establish what the experts’ view is? Why, some of them voted on it!

Since he is an arts graduate, we should perhaps expect Harrabin to favour qualitative over quantitative debate. Since he is a Cambridge arts graduate, we can reasonably expect him to be aware of this limitation in his thinking, and to adjust for it. Otherwise he resembles nothing so much as a goldfish who insists that water doesn’t exist because he can’t see it.

In which case (ignorant question), WTF are we arguing about? We don’t know what we don’t know. I believe that global temperature is probably higher than it was 150 years ago but I don’t know how much or whether the difference is important and having followed this and other blogs I see no empirical evidence that the matter is anything other than a 21st century debate on the lines of how many angels can dance on the head of a pin because it seems that no-one else does either.

Thank you for that, Steven. I am dim enough on this subject to need it spelt out for me. But the question is not one of how you combine the two stations. It is why, according to GISS, the station 2 figures always appear to be higher than the station 1 figures.
The cynic in me wonders if there could be a reason for this.

“I know what it is to collect, analyse and study data with rigid rules dictating every aspect of it. That’s because a slightest mistake can affect human health.”

DIFFERENT PROBLEM. The data is historical data; there was no design of experiments. The stations were not set up to monitor climate. Thomas Jefferson went out every day to collect temperature twice a day. That stupid man. Didn’t he realize that 200 years later we would decide that midnight was the standard observation time?

“Judging from the way their algorithm functions, there is no way it is reliable then for current predictions.”

Do you read? The more accurate methods show the problem is worse. FURTHER, the accuracy of the historical record is NOT a constraint on the predictions; it has very little to do with them. The predictions are not based on the historical temperature record. FURTHER YET, GISSTEMP is NOT USED; CRU is.

So that is why people like me wanted to get CRU code and CRU data. That was the important thing, because THAT is what the science uses. Not GISSTEMP; CRU. He attacked the wrong target for the wrong reason.

“In 2000, GISS looks through 7364 stations and they select stations to construct an average. Their rules say: NO STATION that has less than 20 years can be used. So, for example, a station that started reporting in 1990 could not be used.
By 2010 that station NOW has 20 years. NOW its data from 1990-2000 CAN be used. That will of necessity change the estimate of 1990-2000.”

Uh, so you state that the old figures do not change, then explain how they change the estimate? I guess I am confused.
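The mechanism in the quoted rule can be sketched with toy numbers (entirely hypothetical stations and values, not GISS code): no old reading changes, but a re-run of the analysis in 2010 admits a station that the 2000 run excluded, so the computed 1990-2000 average changes anyway.

```python
# An old station reporting 10.0 since 1970, and a newer, warmer one since 1990.
old_station = {year: 10.0 for year in range(1970, 2011)}
new_station = {year: 12.0 for year in range(1990, 2011)}

def regional_mean_1990s(stations, run_year, min_years=20):
    """Average over 1990-1999 using only stations with >= min_years of record
    as of the year the analysis is run."""
    usable = [s for s in stations if run_year - min(s) >= min_years]
    decade = range(1990, 2000)
    return sum(s[y] for s in usable for y in decade) / (len(usable) * len(decade))

as_of_2000 = regional_mean_1990s([old_station, new_station], 2000)  # new one excluded
as_of_2010 = regional_mean_1990s([old_station, new_station], 2010)  # now included
print(as_of_2000, as_of_2010)  # 10.0 then 11.0: the past "changes"
```

So the stored readings are untouched; it is the derived estimate that moves, which is the distinction being drawn in the quoted comment.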

As to the point that I am being illogical in my assertion that science hasn’t answered the question in any meaningful way (your 2:53 post I think); let’s take a little primer: if you put CO2 in a bottle and put a heat lamp on it, it will heat more than a bottle with just plain old air. There’s your hard scientific fact. Take the lid off the bottle and put a CO2 emission source inside the bottle, and it will no longer heat up more than a bottle with regular air and no lid.

Also: the Earth heated up, CO2 increased; ergo, more CO2 caused the Earth to warm! That is the current status of the science. About 50% of the warming of the Earth is caused by things like Milankovitch cycles, precession of orbits, solar variance, etc. Therefore we’ve got better proof that CO2 is responsible for the remainder? No, that’s guesswork at best, and yes, that is the state of the science. Remember: GCMs are off by two standard deviations and currently seem to have missed the fact that (statistically at least) warming stopped about 10 years ago.

A new analogy to the GISS global temp. estimate;

You’ve got a dam. It sits above your town in such a way as to make your town’s existence entirely dependent upon the integrity of this dam. Now, this is no ordinary dam; it has water level sensors all along its face; nay, more than this: workers are installing new sensors all the time.

Now one day Professor Little decides to review the logs that are kept on these sensors and discovers that the water is rising on the face of the dam! If the water tops the dam it will erode, placing the town in jeopardy. By Dr. Little’s calculations the water is a foot from the top of the dam and rising one inch per year! Okay, we’ve got a little time to investigate mitigation strategies, but the work must begin!

Lo and behold; Dr. Little comes back in a year and announces the situation is more dire than he thought; the water is higher than ever! It’s one foot from cresting the dam, and we expect it will crest the dam in no less than 24 years!

At that point the voters aren’t going to elect politicians who continue to appropriate money for this project. In fact they’re going to want to investigate the science a bit further and look into connections between Dr. Little and the dam mitigation industry. They’re going to ask why Dr. Little has been on the board of directors of the Dam Failure Option Exchange and so forth. If Dr. Little at this point claims he has lost his data and won’t let you look at his methodology because ‘you’re only going to try to tear it down’ he would lose his job. Any further assertions about water level rise and the impending doom of the town will be greeted with great skepticism. If he starts calling you a ‘water rise denier’ he will be greeted with laughter.

Do you get it now?

The stability of the estimates is indeed important. If the revisions were of a different magnitude it would be one thing, but they’re not saying it was .67 in ’98, and now it’s .95, but it was only .66 in 98. They’re claiming it’s .67 now and it was only .57 then. They keep moving the bar; it’s not any warmer now than it was claimed to be in ’98 and the only reason they can claim it’s an all time high is because they’ve significantly lowered the ’98 estimate. The water isn’t rising the tape measure is sliding down the face of the dam. To put it another way; Gilligan is moving the professor’s pole out into the lagoon.

We’re not even sure it’s warming and there are strong indications it’s starting to go the other way. If we’d been looking at this seriously for the last 30 years we might now know what we can expect; is this just another minimum or the onset of the next glaciation? Instead we’ve poured all of our research into proving a foregone conclusion and as a result don’t know why the climate seems to have stopped warming. If there’s a new LIA event people are going to starve to death because we didn’t see it coming, because Jones, Hansen, Mann and the rest spent all the research money they were given proving something they believed in rather than studying the data to see what it could tell them. THAT IS THE STATE OF THE SCIENCE TODAY.

I would suggest that the mitigation strategies for a new ice age or a minimum event would differ somewhat from the measures we would take against CO2-induced global warming, but that right now we don’t know anything about which is coming, as our models go on about their merry business of showing uninterrupted global warming because that’s what they’re designed to do.

We could’ve figured out how to feed the masses in a low solar energy environment. How to keep them from freezing, etc. But we didn’t know it was coming because these charlatans pissed the money away on a pocket full of magic beans!

So go ahead; be lukewarm. I too believe that CO2 probably did make our climate a couple of tenths of a degree warmer in the latter half of the 20th century. But I also think it would be great if we knew what drove the Maunder minimum, if we understood why the earth heated up 6/10ths of a degree and then stopped. Instead we’ve got GCM’s that insist it hasn’t stopped and scientists whose proverbial dog ate their homework.

Again; it’s time for Drs. Mann, Hansen, Jones, and the rest of their crew to get out of the way and let science begin anew. Nobody is likely to believe them any more and they should consider themselves lucky to get out of the business without being charged with a crime. If they were to walk away quietly I bet they would be so fortunate. If they want to go down with the ship they are certainly entitled.

You, sir, have been very patient and I for one don’t find you to be arrogant. At least you entertain debate and have some attachment to your convictions. I greatly respect that. You and Ms. Curry are to be applauded for at least being willing to discuss the ideas rather than hurling ad hominem attacks on everyone who doesn’t agree with everything the mainstream climate science establishment has been promulgating for the last 20 years. Thank you for your patience with those of us who aren’t scientists but who are just trying to understand the science and learn the truth.

This article is about GISS. If you want to defend CRU you are more than welcome, but what is being looked at here is Gisstemp and a very interesting little anomaly whereby their temp for ’98 changed.

As to CRU, their data and methodology; I would suggest that their data probably does correlate very well with Giss, because it really isn’t an independent data source, as they like to pretend. As to their methodology (code); I think you really can get that through the FOIA file that was leaked late last year. If you can’t find it I’ve got a pristine copy here somewhere. There’s a lot of interesting comments in that code too.

This article by Steven Goddard was about GISS’s temperature and how they adjusted past history. It could be perfectly rational to you just because the algorithm worked that way. It is not rational to me, as the noise was made by Hansen of GISS and Schmidt, Ruedy et al. of GISS about the state of world temperatures, based on their data. It is based upon this same GISS data that people like Tamino are making noises even today about AGW. It’s not just warming per se that they are talking about. The noise they are making is that the temperature rise as seen by GISS data and algorithms is proof of man-made global warming due to CO2 increase.

This is exactly the activism they are practicing, aided and abetted by Real Climate, the site in which the same Gavin Schmidt of GISS postulates all the above theories with ruthless censoring and arrogance, leaving people to wonder what he is paid by NASA / GISS to do. And don’t come and tell me “different problem” in capital letters. The problems are very well interlinked, as it is this clique of Hansen, Mann, Schmidt, Santer et al. who are making all the noise about AGW based upon GISS data. And they had substantial influence in the IPCC reports as contributors, authors, editors etc.

Of course, Jones, Briffa et al. were there from CRU as part of the gang. But the masterminds were the RC gang led by the GISS-based puppet masters.

So GISS figures, algorithms and the drama they create does matter.

To my comment

“Judging from the way their algorithm functions, there is no way it is reliable then for current predictions.”

You gave a big lecture with bold letters [by the way, that is the internet equivalent of yelling] about accurate methods etc.

Don’t bring in strawmen. We are talking here about GISS and GISS only. Do you call their method “more accurate” when it needs 12-15 years of backcasting to arrive at historical “estimates”, and yet we should believe that what it tells us today is gospel?

That is the issue. Because people like Hansen, Schmidt, Tamino, Romm et. al. are doing exactly that.

The “science” has long since gone from the scene for these people. It is activism that matters, shamelessly twisting whatever is needed to suit their agenda. And they have the political and economic clout with Gore, Soros et al. on their side, not to mention the patrons of CCE.

So, it does matter about what they say and do. If GISS analysed data is not to be considered as science or used for climate science decisions, what are they doing with public money playing around and creating these kind of analyses?

And Mr.Mosher, if you can’t put across your point rationally without yelling, kindly go to Climate Progress or some other blog where such behaviour is tolerated. Stop using bold letters and tone down your arrogance.

Sam the skeptic
“But the question is not one of how you combine the two stations. It is why, according to GISS, the station 2 figures appear always to be higher than the station 1 figures.
The cynic in me wonders if there could be a reason for this.”

Good question. It’s not always the case that the stations that get added in are warmer with cooler pasts. BUT there is one good reason that explains it.

The BEST method, the method that uses all the data, shows MORE warming. RSM, recall, doesn’t use all the data. So on average the data it doesn’t use must be warmer. As time marches forward and RSM uses more of the data it hasn’t previously used, well, on average we know it’s warmer.
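The averaging identity behind this claim can be sketched in a few lines; the anomaly numbers below are purely illustrative, not real station data:

```python
# If the mean over ALL data exceeds the mean over the subset a method uses,
# the data it leaves out must, on average, be warmer than what it keeps.
used = [0.40, 0.45, 0.50]   # anomalies the method includes (hypothetical)
unused = [0.60, 0.70]       # anomalies it skips (hypothetical)

mean_used = sum(used) / len(used)
mean_all = sum(used + unused) / len(used + unused)

# The all-data mean is a weighted mix of the two subset means, so the
# unused-subset mean can be recovered from the other two:
n_used, n_unused = len(used), len(unused)
mean_unused = (mean_all * (n_used + n_unused) - mean_used * n_used) / n_unused

print(mean_used, mean_all, mean_unused)  # the unused mean sits above both
```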

“This article is about GISS. If you want to defend CRU you are more than welcome, but what is being looked at here is Gisstemp and a very interesting little anomaly whereby their temp for ’98 changed.”

I appreciate Steve Mosher hanging around and explaining his thoughts. Indeed, he has provided an insight that I had not thought of before. And that kind of bothers me.

He has broken down the “averaging” function into a very simple example – and in so doing called into question the entire declared series of numbers. Indeed, in his example the temperatures from the 2 stations had not changed over 20 years – all being either a 0 (station 1) or 1 (station 2). But due to the process the climate scientists use to come up with their average, they show a .5 degree temperature rise where none exists!

I understand the need to validate a station’s temperature by getting some history. But the results appear to favor a trend that does not exist, regardless of what you believe to be the truth.

“And don’t come and tell me “different problem” in capital letters. The problems are very well interlinked, as it is this clique of Hansen, Mann, Schmidt, Santer et al. who are making all the noise about AGW based upon GISS data.”

Based on GISS data? Hardly. Based on physics, supported by many different kinds of evidence.

Yeh, I was a little animated last night. There are a couple of things that set me off. Condescension towards people who have shown they are more than capable of understanding complex thoughts. (I happen to like some of the people that were treated in such a contemptuous manner.) Treating rational numbers (temps) as some vague concept, or as more complex than imaginary numbers. And history revision. It seems the above, mixed with a bit of beer, and I have the grand ability to descend to the lowest common denominator.

My thanks to the moderators for not allowing my more base comments to be posted. My intentions were not to cast aspersions upon anyone, but rather to defend (I know, they don’t need my help) the ones I viewed as wrongfully depicted. Ah, the road to hell……

lol, No, they can claim it now, but later a reverse cooling trend going back in time will be shown to where they really didn’t know what they were talking about but the next year really will be hottest evuh, even if the numbers don’t ascend to the previous hottest evuh, because the hottest evuh back then really wasn’t the same as the hottest evuh they depicted. It’s because mercury lowers as time passes, but only retroactively.

Its all very clear to me and quite justified because they use a thing called an algorithm. Plus it is in the computer code, so we know it is all legit.

” they do not adjust past history. if a dataseries is too short they do not include it, as time goes by they can include more data.”

If that is not adjusting history, what is it? Adding new data sets to make claims different from those you have been making means you are altering the parameters of your observations and are rewriting history. History is what happened and was recorded before. So whether right or wrong, making alterations to the parameters and issuing new reports showing a different trend is altering history.

And why should their current predictions of hottest-ever 2010 be believed till they fully adjust it with new data series 10-12 years down the line? You can’t have it both ways. They are basing the claims on their data for sure and have stated as much many times. If the past estimates were not reliable or complete enough to be correct for 12 years, the same criteria should apply to their present data. So why don’t they say “At present, we don’t know the entire truth and are making claims based upon partial data”? If they are truly scientific, they should add such disclaimers.

And as for your second statement

” based on giss data? hardly. Based on physics, supported by many different kinds of evidence.”

there are two issues. Hansen and co have been trumpeting about their hottest ever 2010 based upon GISS data. They certainly did not come to Moshtemp or to Jeff ID for their data.

The fact that UAH, CRU and RSS do not show 2010 as hottest, and GISS is the only dataset of the 4 currently in use which says 2010 is the hottest, is itself proof of their statements.

If you have any direct proof from them that they did not use GISS data to make their claims, please provide with references. And I’m talking about the GISS / RC / Tamino / Romm gang.

And as for physics, supported by different kinds of evidence, it’s a different story and we can debate that viewpoint till the cows come home. There are many papers which have come out in the past months, reported here also, about the poor state of reliability of the surface temperature data, the reliability of the statistics, the fact that it has been shown that the Planck-weighted greenhouse gas optical thickness of the earth is a constant, the fact that the models have been shown to be 200% – 400% wrong, the fact that UHI exists and has been shown to exert statistically significant bias on the land surface temperatures, the fact that clouds’ and oceans’ role has been poorly understood and not properly accounted for, the fact that solar activity and sunspots form an important part of the warming etc. etc., one can go on. There’s enough room and issues for debate and, unlike the RC and GISS crowd, we can actually have a debate, on a separate thread, if you want.

Here it is OT and the thread is about the GISS Temperature analysis issues.

James Sexton says:
August 31, 2010 at 9:41 am
==========================
But James, we all know that 1998 (12 years ago) was the dark ages.
You can’t trust any of the temperature data from way back then……;-)

The BEST method, the method that uses all the data, shows MORE warming. RSM, recall, doesn’t use all the data. So on average the data it doesn’t use must be warmer. As time marches forward and RSM uses more of the data it hasn’t previously used, well, on average we know it’s warmer.

I’m afraid that sounds like the beginning of a circular argument to me. The data show it is warming because we know it is warming because the data say so. If you introduce a new reporting site there has to be at least a reasonable chance that the figures will be lower than the existing site. There is no reason per se why they should be higher.
And as Philjourdan points out, in your example there is no trend; both stations are showing a constant temperature. Station 2 may be higher but it always was, even when it wasn’t being included. You do not have better data by including it; all you have is more data.
You have created a 0.5 anomaly where no such anomaly exists. It looks to the layman like an inability to see the wood for the trees. Or alternatively a neat trick to confound the unwary.

James Sexton says:
August 31, 2010 at 9:41 am
==========================
But James, we all know that 1998 (12 years ago) was the dark ages.
You can’t trust any of the temperature data from way back then……;-)

This “science” needs a much bigger shovel………
==========================

Naw, they’ll just code an algorithm to make the mess decrease to eventual nothing to where it never really happened……… retroactively, of course. Much like 1984… er, 1998. Dang, now I’m confused about which work of fiction is which?

Will the real 1998 please stand up? Is it the 1998 of the 1998 vintage or the 1998 of the 2005 vintage or the 1998 of the 2010 vintage? OMG!!! I just realized GISS has fixed the quantum time difficulty!!! It shouldn’t be long before they solve the “grandfather paradox”!!! Phi isn’t used after all!!!

The 1998 temperature was adjusted downwards years later. That is called “adjusting past history.” You focus on minute details and completely miss the big picture.

You display a classic case of “can’t see the forest for the trees.” You also display a classic case of “The Emperor’s New Clothes.”

Not just that, but EVERY past year seems to get adjusted DOWN. Always. Look at the two graphs. Pick a year, any year out of the last 20. 1992. Higher in the 1998 graph than the 2010 graph (.17 vs. .12). Look at 1990. .48 (1998) to .38 (2010). They ALL go down, always. It would seem to me that if what Mosher says is happening is really happening, some would go up a little, some would go down a little, but by and large they should bounce around the same point by a few hundredths. Not ALWAYS go one direction (down) by up to a tenth.
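The “they all go down” observation is a testable claim about the sign of the revisions between two published vintages. A toy Python sketch: only the 1990 and 1992 values (.48 → .38, .17 → .12) come from the comment above; the 1991 pair is made up to fill out the example:

```python
# If revisions between two vintages were unbiased noise, up- and
# down-adjustments should be roughly balanced; count the signs to check.
vintage_1998 = {1990: 0.48, 1991: 0.30, 1992: 0.17}  # 1991 values hypothetical
vintage_2010 = {1990: 0.38, 1991: 0.25, 1992: 0.12}

deltas = {y: vintage_2010[y] - vintage_1998[y] for y in vintage_1998}
downs = sum(1 for d in deltas.values() if d < 0)
print(deltas)
print(downs, "of", len(deltas), "years adjusted down")
```

In this toy example every delta is negative; with the real vintage files one would run the same count over all years to see whether the revisions are genuinely one-sided.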

there are two issues. Hansen and co have been trumpeting about their hottest ever 2010 based upon GISS data. They certainly did not come to Moshtemp or to Jeff ID for their data.

Actually, I don’t believe that they are trumpeting that; they said that the last 12 months were the hottest in their timeframe, which is not the same thing. You’d have to produce the 12-month running average for the other databases to do a comparison.

The fact that UAH, CRU and RSS do not show 2010 as hottest, and GISS is the only dataset of the 4 currently in use which says 2010 is the hottest, is itself proof of their statements.

Really, I’m fairly sure that RSS has indicated that 2010 is the hottest to July. Here’s a quote from Roy Spencer re UAH:
“As of Julian Day 212 (end of July), the race for warmest year in the 32-year satellite period of record is still too close to call with 1998 continuing its lead by only 0.07 C:

NASA GIS station temperatures could earlier go back to about 1800. I have a print from 2003.
Today all temperatures earlier than 1880 are deleted!!! As far as I know, mean temperatures were higher before 1880. To get a constant rise in temperature they had to cut just there. Is that the reason?

If they buy your arguments, then this is deliberate: it is part of the algorithms, so such revisions of rankings and anomaly amounts are normal and should be expected.

If they examine the directions these adjustments take and conclude there is a consistent pattern to these revisions that continually yields a certain result, they will view this as deliberate.

A mistake would be an inadvertent error, just notify GISS of the problem and it’ll get fixed. Does anyone here think that’ll happen?

Either way of seeing it as deliberate, especially the latter, leads one to question the science of (A)GW, especially the “definitive” version of “AGW science” espoused by Hansen the Activist.

BTW, talk of how the “BEST” method shows more warming – actually, talk of the accuracy of any of those particular methods – really doesn’t cut it. I am really agreeing with the view that total Ocean Heat Content is what we need to be looking at. Far away from Urban Heat Islands and airport tarmac lies the largest surface area, and it is soaking up the largest amount of solar radiation. If you were looking for changes based on atmospheric CO2 concentrations, the sought-after AGW signal, that is where it would be found. Air temperatures over land don’t tell us the actual heat content without humidity measurements; the atmosphere has much less heat content than the oceans anyway; the taking and recording of those measurements is filled with potential for error and bias; and the processing shows such variance that they just ain’t telling us what we really need to know.

Seven of the eight bits (which control the first 128 characters) have been standardized world wide and are known as ASCII – American Standard Code for Information Interchange. In the all encompassing Unicode, this set is referred to as Basic Latin. However, with the next 128 characters — known as the extended set — a number of variations have arisen.

Below is the character set sometimes referred to as the ANSI code page, or the ISO-8859-1 character set. This character set is made up of two groups: printable characters and control characters. Because ANSI (and ISO) assign control characters to codes 128-159, where Windows substituted printable characters, it is now more correct to refer to the Windows version as the Windows-1252 code page. However, it retains the name ANSI code page in Microsoft’s Notepad for historical reasons.

Please avoid trying to rewrite the history of those of us who lived through it. And enjoy this ANSI smiley.
☺

You’re still confused.
The fact that Bill Gates decided to map the byte value 01 (hex) to a smiley face in some software/hardware situations does not affect the ANSI standard: http://tinyurl.com/2ejp8qo
The fact that some Windows programmers mistakenly call this character mapping “ANSI” does not make it so.

01 is the SOH control character: Start of Heading. It is not a graphic character that shows up as a glyph.
Your own citation shows Bill Gates’s motivation in assigning graphic characters to these byte values that, in ANSI, are invisible control characters:

The character repertoire for these controls was taken from the character set of Wang word-processing machines, as explicitly admitted by Bill Gates in the interview of him and Paul Allen in the 2nd of October 1995 edition of Fortune Magazine: “… we were also fascinated by dedicated word processors from Wang, because we believed that general-purpose machines could do that just as well. That’s why, when it came time to design the keyboard for the IBM PC, we put the funny Wang character set into the machine–you know, smiley faces and boxes and triangles and stuff. We were thinking we’d like to do a clone of Wang word-processing software someday.”

Now look at this page in more detail – your browser is probably automatically choosing the “Unicode” encoding for the bytes within this browser page. Force it to use Western (ISO-8859-1) encoding, and see what happens to your “ANSI smiley”.

Or, like I said, use a hex editor to look at the actual bytes encoding your “ANSI smiley” in this page – it is 263a (hex). Two bytes. Not ANSI.
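The byte-count claims in this exchange are easy to check with standard Python; no assumptions here beyond the ☺ character itself:

```python
# 263A is the Unicode code-point NUMBER of the smiley: it fits in two bytes
# as a UTF-16 code unit, but its UTF-8 serialization in a web page is three
# bytes. Neither sequence exists in an 8-bit "ANSI" (Windows-1252) code page.
smiley = "\u263a"
print(hex(ord(smiley)))                  # 0x263a
print(smiley.encode("utf-16-be").hex())  # 263a   -> two bytes
print(smiley.encode("utf-8").hex())      # e298ba -> three bytes
try:
    smiley.encode("cp1252")
except UnicodeEncodeError:
    print("U+263A has no Windows-1252 encoding")
```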

So did stevengoddard, which he clarified as to “error bars” more recently: https://wattsupwiththat.com/2010/08/22/sea-ice-news-19/
IARC-JAXA numbers, 5.5 million sq km, finishing above 2009 and below 2006.
Good – now we know the prediction was not 5.5 million sq km ± 3 million sq km. It’s an actual prediction which can be proven wrong – if it finishes below 2009.
In which case my prediction will have been right. There is no need for me to whine about the “wind” not cooperating – it’s just simple annual variability (winds, clouds, ocean currents, forest fires, etc) superimposed on a strong climate trend.

Whether I make another prediction here at WUWT is the “maybe” part. I’ll see how the WUWT Arctic sea ice skeptic-expert handles being wrong first. I hope he sticks around long enough to accept defeat. The summer melt season post-mortem could be a nice learning moment for WUWT – since its founding in late 2006, the Arctic summer minimums have been 2007 (way down), then 2008 and 2009 (up and up). The “up and up” has been given lots of significance here – a new “down” might generate funny… interesting responses.

but then he says: “Of course, any prediction I make would lie somewhere between young molten Earth and snowball Earth,” the ultimate Sharpshooter’s Fallacy, making Hansen’s giant error bars look puny by comparison. Very brave prediction… not.
Try to read more closely. That wasn’t my “prediction” – I was reminding you that you consider all future climate change to be within “natural variability” – a worthless concept.

[REPLY — They’re not deleted. Just set aside temporarily. Look, I know you’re het up, but Anthony’s going through a very bad time. We all need to tone it down a bit. (It doesn’t matter who started it.) And fear not, others will suffer a few snips, too. ~Evan]

is land only, here’s the missing legend:
“Fig. 1: Annual and 5-year mean surface temperature for (a) the contiguous 48 United States and (b) the globe, relative to 1951-80, based on measurements at meteorological stations.”

The graph you compared it with is clearly labelled as global land-ocean temperature index:

You mean Steven made the kind of error that might occur looking at graphs and counting pixels to compare them, instead of reviewing the underlying data that generated them? Who’d have thought?

I knew Steven was writing a post without merit because the issue of the changing estimates over time was long ago put to bed, known not to change the trend significantly, and the people at GISS must have been sighing in disbelief over his exposition and what an ill-informed waste of time it was.

Phil. and Rattus’s observations made me go check the underlying data.

Here is the current anomaly data for every month of 1998, as of July 30th 2010,

Here is the monthly anomaly data for 1998 from 2000 (file dated January 21st 2001) which I scraped for John Goetz back in 2008. It is no longer available at archive.org as GISS has stepped in to disable archiving. Remember I never said they were nice guys.

If you look at the information at the top, you obtain the anomaly data in degrees Celsius by dividing by 100.

Averaging the year 2010 data for 1998 and dividing by 100 I get: 0.070° C

Averaging the year 1999 data for 1998 and dividing by 100 I get: 0.068° C

So the estimation algorithm has actually made the 1998 anomaly 0.002° C higher in 2010 than the value calculated and used in 1999/2000.
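A minimal sketch of the arithmetic just described. The twelve monthly values are placeholders chosen to average to 7 hundredths; they are not the actual contents of the GISS 1998 file:

```python
# GISS publishes monthly anomalies in hundredths of a degree, so the
# annual value in deg C is mean(months) / 100.
monthly_hundredths = [20, 15, 10, 5, 0, -5, 0, 5, 10, 8, 8, 8]  # placeholders
annual_c = sum(monthly_hundredths) / len(monthly_hundredths) / 100
print(round(annual_c, 3))  # 0.07
```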

Or in other words exactly the opposite of what Steven Goddard has claimed has occurred. The GISS algorithm has not cooled the past in this case of the crucially important 1998. It has warmed it.

Checkmate.
Game Set Match.
The Fat Lady has sung.

No, the people at GISS are not sighing in disbelief over Steven’s Post, they are rolling on the floor laughing so hard they are pissing themselves.

And you cheerleaders for Steven wonder why Mosh and I say Posts like this reduce the credibility of WUWT. At least on WUWT a dissection is allowed, which lets us get to the root of the problems. That is something. A little more honest skepticism from the cheerleaders would be an improvement.

Now look at this page in more detail – your browser is probably automatically choosing the “Unicode” encoding for the bytes within this browser page. Force it to use Western (ISO-8859-1) encoding, and see what happens to your “ANSI smiley”.

Tried that with my “back-up” browser, Epiphany, comes with GNOME, set it for Western (ISO-8859-1) encoding. Guess what? Smiley is still there.

Or, like I said, use a hex editor to look at the actual bytes encoding your “ANSI smiley” in this page – it is 263a (hex). Two bytes. Not ANSI.

Didn’t need a hex editor, just a simple ctrl-u from Epiphany to look at the page source. Found the 263a coding for the smiley.

You know why? Apparently not. See, at the top of the source it says <meta charset="UTF-8" />, which is HTML 5 notation. WordPress served up a UTF-8 (Unicode) document.

My browsers are not choosing to display Unicode, they have properly recognized they were given a UTF-8 document so Unicode is what is displayed. I can’t “force” them to do otherwise, I can only set the default encoding and the document has specified what encoding to use.

BTW (emphasis added):

ISO-8859-1 and Windows-1252 confusion
It is very common to mislabel text data with the charset label ISO-8859-1, even though the data is really Windows-1252 encoded. In Windows-1252, codes between 0x80 and 0x9F are used for letters and punctuation, whereas they are control codes in ISO-8859-1. Many web browsers and e-mail clients will interpret ISO-8859-1 control codes as Windows-1252 characters in order to accommodate such mislabeling but it is not standard behaviour and care should be taken to avoid generating these characters in ISO-8859-1 labeled content. However, the draft HTML 5 specification requires that documents advertised as ISO-8859-1 actually be parsed with the Windows-1252 encoding.[1]

So by that draft HTML 5 specification, which is already incorporated into modern browsers as possible and practical, I can tell the browser to use ISO-8859-1, the document can declare itself to be ISO-8859-1, and the MS ANSI smiley would still be there. The test you proposed was improperly designed.
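The 0x80-0x9F ambiguity in the quoted passage is easy to demonstrate with standard Python; the byte string below is illustrative:

```python
# The same byte is a printable character in Windows-1252 but an invisible
# C1 control code in strict ISO-8859-1 (latin-1).
b = b"\x93smart quotes\x94"
print(b.decode("cp1252"))    # curly left/right double quotes around the text
print(hex(ord(b.decode("latin-1")[0])))  # 0x93: U+0093, a control character
```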

How do you know, back when Smokey was typing his comment, if he entered a Unicode or an ANSI smiley? Can you tell if he used the long ctrl-shift-u 263a sequence or the short alt-1 (numeric keypad) sequence?

The answer, of course, is that you can’t know. You stated he used the Unicode smiley, after all your hex editor gave you the 263a sequence. But you went off the info at your end, processed through browsers, ISP’s, and servers, that served to you the smiley as Unicode. Thus you never had the info to determine what smiley he used to begin with. Stating what smiley it was, was perhaps a 50/50 guess. At least it would be if it wasn’t for the prevalence of Windoze boxes. By trusting the info you did, you likely picked the losing end of worse than 1 in 10 odds.

As we used to say at MIT – RTFM.

And after MIT they can be saying “Would you like fries with that?” But that could just be the economy, of course. ☺

Oh, and Steven, just so you can compare apples to apples, the GISS anomaly for Land only for 2005 is 0.77° C, not 0.61° C as you wrote above, since you are using the Land only graph for 1999.

I don’t have data to compare Land + Sea Surface 1999 data to 2010 data, but from what I know of the code, perhaps Mosh can confirm, the direction of any change in estimations would be consistent with the Land only index.

NASA GIS station temperatures could earlier go back to about 1800. I have a print from 2003.
Today all temperatures earlier than 1880 are deleted!!! As far as I know, mean temperatures were higher before 1880. To get a constant rise in temperature they had to cut just there. Is that the reason?

No. PLEASE.

“NASA GIS station temperatures.” NO SUCH THING EXISTS. They pull data from GHCN, which starts in 1701.

There is nothing of interest in how GISS process data. Nothing. That is why I know with certainty that every time Goddard writes something that says “GISS fraud” he has gotten something wrong. The difficulty, as always, is finding his exact error. But you can be sure: if Goddard criticizes GISS, he did something wrong. It’s that simple.

“So the estimation algorithm has actually made the 1998 anomaly 0.002° C higher in 2010 than the value calculated and used in 1999/2000.”

Oh my god! They rewrote history; therefore all of global warming science is bunk. They are making changes that are within a standard deviation, so the whole process is a fraud, and Mosh is an arrogant dope. This kind of thing would never fly in health care science; therefore, it’s colder now than in 1850, and the ice isn’t melting.

WRT your question: RSM is not used in the SST (there are no stations), as far as I recall, so your surmise would be correct.

“Will the real 1998 please stand up? Is it the 1998 of the 1998 vintage or the 1998 of the 2005 vintage or the 1998 of the 2010 vintage? OMG!!! I just realized GISS has fixed the quantum time difficulty!!”

There is no REAL 1998.

There is the 2005 estimate, which is based on ONE set of data. There is the 2010 estimate, which is based on MORE data.

“It would seem to me that if what Mosher says is happening is really happening, some would go up a little, some would go down a little, but by and large they should bounce around the same point by a few hundredths. Not ALWAYS go one direction (down) by up to a tenth.”

I have never seen so much tap-dancing in my entire life, just to say “yeah, they backcasted and adjusted it down”

Which still means that any “records” they claim for today’s temperatures are not worth the paper they were printed on…
#########

Well, since Goddard got the wrong charts, and since you supported him, I’ll say this:
If GISS is worthless, and if Goddard can’t even get the criticism of a worthless thing right, and if you can’t see that Goddard’s worthless criticism of a worthless thing is less than worthless, then I suppose that would make your opinion even more worthless. You follow? You supported a sloppy analysis, an analysis MORE sloppy than GISS. Your support of Goddard was your own fault. As you said: “and then I stopped reading.”

“You have created a 0.5 anomaly where no such anomaly exists. It looks to the layman like an inability to see the wood for the trees. Or alternatively a neat trick to confound the unwary.”

As I tried to explain, the example was just to show how the algorithm would include older data going forward.

AND YES, you do get spurious anomalies when you stitch stations together in my example. AND YES, that’s one issue with RSM. Did you even READ the McIntyre post I linked to? The question is whether these biases normalize, a question SteveMc left unanswered. They are a known artifact of the approach.
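For readers who haven’t seen the issue: here is a toy sketch of the offset-and-average stitch used in reference-station-style combinations. This is not the actual GISS code, and all values are invented; it only shows how an undetected step in one station can smear into the combined record as a spurious anomaly.

```python
# Toy reference-station-style stitch (illustrative only, NOT GISS code).
def combine(ref, new):
    """Shift `new` so its mean matches `ref` over the common period,
    then average the two records year by year."""
    offset = sum(ref) / len(ref) - sum(new) / len(new)
    return [(r + n + offset) / 2 for r, n in zip(ref, new)]

# Station A: a perfectly flat anomaly record (degrees C).
ref = [0.0] * 10
# Station B: identical climate, but a +0.5 C instrument step halfway through.
new = [0.0] * 5 + [0.5] * 5

combined = combine(ref, new)
print(combined[:5])  # first half pulled down to -0.125
print(combined[5:])  # second half pushed up to +0.125
```

Neither station’s climate actually changed, yet the stitched series contains a 0.25 C step: the matching offset splits the instrument discontinuity between the two halves of the record. Whether such artifacts average out across many stations is exactly the “do these biases normalize” question.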

Thanks, Mr. Mosher, now we know the gospel: the temperature anomaly record of a past historical year or years will vary every time you run the GISS algorithm. That is absolutely normal in your Alice in Wonderland world.

Yet it is also absolutely normal that the same people who make up such data can claim absolute accuracy to 6/10ths of a degree or even less, and state today that they were always right and that this year is the hottest year. They can repeat it for any number of years by magically adjusting past years downwards, irrespective of what the actual figures were. It has to be true because of their algorithm.

Pull the other leg and it has bells.

When one dataset alone shows such results, contrary to what the other three show, and when the adjustments always lower past temperatures decades later, the BS meter starts buzzing at full volume. In any other field, such a data set would be called an artifact and disregarded.

I think you have explained it as well as you can. However, what your explanation has shown, and the GISS anomaly as well, is that the truth is NO ONE knows what is happening with global temperatures in the short term. In other words, we can agree with some certainty, based upon ice core samples, what the temperature was during the last ice age. However, we have no clue what the actual temperature was 12 years ago, since there is no global thermometer.

The 1998 anomaly shrank by 0.07 in 12 years. Who is to say it will not shrink another 0.07 in the next 12, and another 0.07 in the following 12? Apparently no one, because they do not know what it was then, nor now. It is just a SWAG.
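Taking the comment’s “another 0.07 every 12 years” literally, as a rhetorical extrapolation rather than any kind of forecast, the cumulative revision would compound like this:

```python
# Rhetorical extrapolation from the comment above: a further -0.07 C
# revision every 12 years. Not a prediction, just the stated arithmetic.
shrink_per_revision = 0.07  # degrees C per 12-year interval

for n in range(1, 4):
    print(f"after {12 * n} more years: -{n * shrink_per_revision:.2f} C cumulative")
```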

“Figure 1 compares the temperature history in the U.S. and the world for the past 120 years. The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934. Global temperature, in contrast, had passed 1930s values by 1980 and the world has warmed at a remarkable rate over the last 25 years.”

He uses the terms “global” and “world” for that graph. You are telling me that is land only? Y/N?

MattN says:
September 1, 2010 at 6:30 am
“Fig. 1: Annual and 5-year mean surface temperature for (a) the contiguous 48 United States and (b) the globe, relative to 1951-80, based on measurements at meteorological stations.”

See the part I’ve highlighted: the ‘Global Land-Ocean Index’ is a different statistic, which includes SST as well as the land stations. The two shouldn’t be directly compared, which was the error in the original post.