GISS is Unique (Now Includes May Data)

In comparing GISS with the other five data sets that I comment on, some of the points I raise below overlap, and others could be added. However, and in no particular order, the following are some reasons I have come up with for why GISS is unique. Perhaps you may disagree on some points, or you may come up with others.

Image Credit JoNova

1. GISS uses two decimals whereas all the others use three. While I agree that we do not know anomalies to the nearest 1/1000 or 1/100 of a degree, I find this very inconvenient. In my table, I give the 2013 anomaly rank, but with GISS, I need to check it every month since 2003 is usually tied with it to two decimal places; however, the two may switch places at three decimal places. Of course I realize that, depending on how you look at it, there may be a ten-way tie for sixth place; but if I want the best single number for the table, it is just a nuisance.

2. For 95% statistical significance, all others are above 17 years, but according to GISS, it is just over 14 years. See the table for details.

3. Including May, GISS has the most months in 2014 above the average of its record year of 2010, namely four of the five months. All other data sets have either zero or one or two months in 2014 above the anomaly average for its highest year. See the table for details.

4. GISS has the highest ranking after five months, at first place. I realize it is only by 0.001 C and that could change when China’s numbers come in, but at the same time, 2010 could revert back to 0.65 from 0.66 next month. By contrast, RSS is eighth after five months. So while it is very probable that GISS will set a record, there is no way that RSS will do so. At this point, each of the last seven months on RSS would need to have an average anomaly of 0.775, thereby smashing every monthly record to date from now to December. That is just not going to happen with RSS. The other rankings are from 4th to 8th.

5. GISS has the coolest base period, causing it to have the highest anomalies. However, this does not affect the warming rate.

6. On GISS, 1998 is ranked 4th, which is the lowest rank of all the data sets. Hadcrut4 has it as third and the others have it as first.

7. This is the warmest May ever recorded by GISS. On RSS, however, it is sixth; on UAH, version 5.5, it is fourth; on Hadsst3 it is second; and on Hadcrut3 it is also second. In all of these cases, at least the 1998 anomaly was higher. Hadcrut4 also had May 2014 in first place, beating its 2010 mark by 0.004 C, although this difference is certainly not statistically significant.

8. GISS is the most quoted by warmists.

9. GISS is the most volatile of all data sets. Like James Bond, GISS has a reputation that precedes it. Why further it? Who will read a long and possibly a perfectly logical explanation when the end result is that a previous record is now easier to beat? For example, the 1998 anomaly of 0.62 in January was lowered to 0.61 now. Why can they not leave a 16 year old anomaly alone like the rest of the world?

10. And last, but not least, per JoNova, as shown referenced at the top of this article, GISS progressively realigns and reinterprets the temperatures from decades long ago:

In the parts below, as in the previous posts, we will present you with the latest facts. The information will be presented in three sections and an appendix.
The first section will show for how long there has been no warming on several data sets.
The second section will show for how long there has been no statistically significant warming on several data sets.
The third section will show how 2014 to date compares with 2013 and the warmest years and months on record so far.
The appendix will illustrate sections 1 and 2 in a different way. Graphs and a table will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.com (WFT). All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is 4 x 10^-4 but it is -4 x 10^-4 from October, we give the time from October so no one can accuse us of being less than honest if we say the slope is flat from a certain month.
On all data sets below, the different periods for a slope that is at least very slightly negative range from 9 years and 5 months to 17 years and 9 months.

1. For GISS, the slope is flat since September 2004 or 9 years, 9 months. (goes to May)
2. For Hadcrut3, the slope is flat since September 2000 or 13 years, 9 months. (goes to May)
3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since January 2001 or 13 years, 5 months. (goes to May)
4. For Hadcrut4, the slope is flat since January 2001 or 13 years, 5 months. (goes to May)
5. For Hadsst3, the slope is flat since January 2001 or 13 years, 5 months. (goes to May)
6. For UAH, the slope is flat since January 2005 or 9 years, 5 months. (goes to May using version 5.5)
7. For RSS, the slope is flat since September 1996 or 17 years, 9 months. (goes to May)
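For readers who want to reproduce these "flat since" dates, the search described above can be sketched in a few lines of Python. This is only a minimal illustration with made-up numbers, not the WFT code: walk the start month backward from the present and stop as soon as the least-squares slope to the latest month turns positive.

```python
# Sketch of the "flat since" search: report the furthest-back start month
# from which the least-squares slope to the present is still at least
# slightly negative. Anomaly values here are illustrative only.

def ols_slope(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ..."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def flat_since(anomalies, min_len=12):
    """Index of the earliest start month from which the slope to the end
    of the series is <= 0, or None if even the shortest window warms."""
    best = None
    for start in range(len(anomalies) - min_len, -1, -1):
        if ols_slope(anomalies[start:]) <= 0:
            best = start   # keep walking back while the trend stays flat/negative
        else:
            break          # first start month with a positive slope: stop here
    return best
```

For example, a series that rises for 24 months and then stays level for 36 months comes back as flat since month 24, mirroring how one extra month of data can move the reported date by years when the slope hovers near zero.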

The next graph shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the upward sloping blue line indicates that CO2 has steadily increased over this period:

WoodForTrees.org – Paul Clark – Click the pic to view at source

When two things are plotted as I have done, the left axis shows only the temperature anomaly.

The actual numbers are meaningless since all slopes are essentially zero. As well, I have offset them so they are evenly spaced. No numbers are given for CO2. Some have asked that the log of the concentration of CO2 be plotted. However WFT does not give this option. The upward sloping CO2 line only shows that while CO2 has been going up over the last 17 years, the temperatures have been flat for varying periods on various data sets.

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data was retrieved from Nick Stokes’ Trendviewer, available on his website. This analysis indicates for how long there has not been statistically significant warming according to Nick’s criteria. Data go to their latest update for each set. In every case, note that the lower error bar is negative, so a slope of 0 cannot be ruled out from the month indicated.

On several different data sets, there has been no statistically significant warming for between 14 and 21 years.

The details for several sets are below.

For UAH: Since February 1996: CI from -0.017 to 2.347
For RSS: Since November 1992: CI from -0.016 to 1.857
For Hadcrut4: Since October 1996: CI from -0.010 to 1.215
For Hadsst3: Since January 1993: CI from -0.016 to 1.813
For GISS: Since December 1999: CI from -0.004 to 1.413
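The criterion behind these dates can be sketched with ordinary least squares. Nick’s Trendviewer additionally corrects the confidence interval for autocorrelation in the residuals, so this plain-OLS sketch will not reproduce his numbers; it only illustrates the idea that warming is "not statistically significant" when the lower bound of the 95% interval for the trend is at or below zero.

```python
# Plain-OLS sketch of the Section 2 test: compute the trend and a ~95%
# confidence interval, then ask whether a zero slope can be ruled out.
# (No autocorrelation correction, so this will NOT match Nick's CIs.)
import math

def trend_ci(y, z=1.96):
    """Return (slope, lower, upper): OLS trend per step with a ~95% CI."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(y)) / sxx
    intercept = ybar - slope * xbar
    resid = [v - (intercept + slope * i) for i, v in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)  # std. error of slope
    return slope, slope - z * se, slope + z * se

def significant_warming(y):
    """True only if the whole CI sits above zero, i.e. flat is ruled out."""
    _, lower, _ = trend_ci(y)
    return lower > 0
```

A steadily rising series with small noise passes this test; a flat noisy series does not, because its lower error bar dips below zero, which is exactly the situation reported for each data set above.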

Section 3

This section shows data about 2014 and other information in the form of a table. The table shows the six data sources along the top, repeated in other places so they remain visible at all times. The sources are UAH, RSS, Hadcrut4, Hadcrut3, Hadsst3, and GISS.
Down the left column are the following:
1. 13ra: This is the final ranking for 2013 on each data set.
2. 13a: Here I give the average anomaly for 2013.
3. year: This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.
4. ano: This is the average of the monthly anomalies of the warmest year just above.
5. mon: This is the month where that particular data set showed the highest anomaly. The months are identified by the first three letters of the month and the last two numbers of the year.
6. ano: This is the anomaly of the month just above.
7. y/m: This is the longest period of time where the slope is not positive given in years/months. So 16/2 means that for 16 years and 2 months the slope is essentially 0.
8. sig: This is the first month for which warming is not statistically significant according to Nick’s criteria. The first three letters of the month are followed by the last two numbers of the year.
9. Jan: This is the January 2014 anomaly for that particular data set.
10. Feb: This is the February 2014 anomaly for that particular data set, etc.
14. ave: This is the average anomaly of all months to date, taken by adding all numbers and dividing by the number of months. However, if the data set itself gives that average, I may use their number. Sometimes the number in the third decimal place differs slightly, presumably because not all months have the same number of days.
15. rnk: This is the rank that each particular data set would have if the anomaly above were to remain that way for the rest of the year. It will not, but think of it as an update 25 minutes into a game. Due to different base periods, the rank is more meaningful than the average anomaly.

To see all points since January 2013 in the form of a graph, see the WFT graph below.

WoodForTrees.org – Paul Clark – Click the pic to view at source

As you can see, all lines have been offset so they all start at the same place in January 2013. This makes it easy to compare January 2013 with the latest anomaly.

Appendix

In this part, we are summarizing data for each set separately.

RSS
The slope is flat since September 1996 or 17 years, 9 months. (goes to May)
For RSS: There is no statistically significant warming since November 1992: CI from -0.016 to 1.857.
The RSS average anomaly so far for 2014 is 0.235. This would rank it as 8th place if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2013 was 0.218 and it is ranked 10th.

UAH
The slope is flat since January 2005 or 9 years, 5 months. (goes to May using version 5.5 according to WFT)
For UAH: There is no statistically significant warming since February 1996: CI from -0.017 to 2.347. (This is using version 5.6 according to Nick’s program.)
The UAH average anomaly so far for 2014 is 0.192. This would rank it as 8th place if it stayed this way. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.662. The anomaly in 2013 was 0.197 and it is ranked 7th.

Hadcrut4
The slope is flat since January 2001 or 13 years, 5 months. (goes to May)
For Hadcrut4: There is no statistically significant warming since October 1996: CI from -0.010 to 1.215.
The Hadcrut4 average anomaly so far for 2014 is 0.515. This would rank it as 4th place if it stayed this way. 2010 was the warmest at 0.547. The highest ever monthly anomaly was in January of 2007 when it reached 0.829. The anomaly in 2013 was 0.486 and it is ranked 8th.

Hadcrut3
The slope is flat since September 2000 or 13 years, 9 months. (goes to May)
The Hadcrut3 average anomaly so far for 2014 is 0.472. This would rank it as 5th place if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2013 was 0.459 and it is ranked 6th.

Hadsst3
For Hadsst3, the slope is flat since January 2001 or 13 years and 5 months. (goes to May).
For Hadsst3: There is no statistically significant warming since January 1993: CI from -0.016 to 1.813.
The Hadsst3 average anomaly so far for 2014 is 0.392. This would rank it as 5th place if it stayed this way. 1998 was the warmest at 0.416. The highest ever monthly anomaly was in July of 1998 when it reached 0.526. The anomaly in 2013 was 0.376 and it is ranked 6th.

GISS
The slope is flat since September 2004 or 9 years, 9 months. (goes to May)
For GISS: There is no statistically significant warming since December 1999: CI from -0.004 to 1.413.
The GISS average anomaly so far for 2014 is 0.66. This would rank it as first place if it stayed this way. As of April, 2010 and 2005 were the warmest years at 0.65. But with the May update, 2010 was raised to 0.66; to three digits, 2014 is still very slightly warmer, although the difference is certainly not statistically significant. (By the way, 2010 was 0.67 in January.) The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2013 was 0.59 and it is ranked 7th.

Conclusion

GISS is unique in many ways. Can you think of other ways in which GISS is unique that I have missed? My impression is that most adjustments serve one of two purposes. With the odd exception, they either make the present warmer or the past cooler. And if neither applies, the adjustments make a new record easier to achieve. Is this a fair assessment?

P.S. RSS came so fast for June and Hadcrut3 was so slow for May that the June value for RSS came in before I completed the report. As a result of the June value for RSS of 0.345, the average for RSS for the first six months is 0.253. If it stayed this way, it would rank 7th. However the time period for a slope of zero increased from 17 years and 9 months to 17 years and 10 months.
UAH, version 5.6, has also come out, although nothing shows on WFT yet. It was interesting, but not unexpected for me, that UAH went down from 0.327 to 0.303. However, RSS went up from 0.286 to 0.345.
Please correct me if I am wrong about the reason. It is my understanding that RSS only goes to 70 degrees south, whereas UAH goes to 85 degrees south.
According to this, it has been cold in the Antarctic lately. Perhaps this cold anomaly has been captured by UAH but not by RSS. Does this make sense?

How does an outfit like GISS end up changing historical data so many times? Do they make some apparently justifiable changes to the data but fail to record that they made those changes? Then, several years later, someone thinks the data hasn’t had the adjustments made and sets the computer going again. A few years later he leaves, a new person joins and, eager to make the right impression, finds that the necessary adjustments haven’t been made and makes those adjustments yet again.

The last one shown was 2007; perhaps another set of adjustments will be made this year, if indeed they haven’t already been made.

GISS is the most alarmist-biased of the reporting sites, and all attempts to make it the alpha source for global temperature will be made by the team. Even though it only covers a small proportion of the land mass, and none of the sea surface, it is the most quoted. Ironic that their figures show cooling trends over much of the country according to their “climate at a glance” page on a decadal trending.

Calculate the real global temperature from the anomaly data and then compare the different data sets. I guess that you will find that they differ by more than 0.1 °C. (The weather stations always measure temperatures, not anomalies.) That’s a simple method to find the error of measurement. I guess most of your statements will disappear in the sea of error.

If it is still being used, it will introduce a variety of bizarre and pointless artifacts. Because these methods have been used by GISS for a long time, I don’t think that people should presume that there’s necessarily a reason for the various artifacts – though if GISS’ stupid methods had gone the other way, one would surmise that they would have re-examined them long ago.

I would recommend that people interested in GISS spend some time with the CA posts on the topic, as my brief perusal of recent commentary on the topic indicates to me that people have not bothered familiarizing themselves with previous work on the topic.

Good information. But if you’re going to discuss a temperature series like GISS, why assume that we have all these acronyms memorized and know exactly what it is? Why not give a brief review? Reminding us that GISS is the global surface temperature series maintained by NASA would probably be enough to jog memories for some of us who are not immersed in this stuff on a daily basis. It always made sense to me to define an acronym the first time it is used in an article. Forcing thousands of readers to Google a term just to save the author a few seconds of time never made any sense to me.

Thank you for that! The question that I now have is whether or not what GISS is doing is even proper, assuming they are doing it without any bias. For example, is HadCRUT4 now somehow inferior to GISS for not doing what GISS is doing? Or is GISS inferior to HadCRUT4? Or are they just different without a possibility of value judgement?

“10. And last, but not least, per JoNova, as shown referenced at the top of this article, GISS progressively realigns and reinterprets the temperatures from decades long ago:”

This just indicates another undoubted uniqueness – GISS has been around a lot longer than any other index. They may have changed since the 80’s; no-one else was estimating at all then. And a 1980 estimate would have been based on thin data. Only a fairly small fraction had been digitised.

“1. For GISS, the slope is flat since September 2004 or 9 years, 9 months. (goes to May)”
On my calc it was below zero from Nov 2001 to April 2014. It’s possible May put it over the line, but these are very fine and basically random distinctions.

Sorry about that! However in my defense, I would like to note that Walter Dnes just had an article two days ago called: GISS Hockey-Stick Adjustments
So it is not as if it has been a long time since the acronym was last used here in a title.
However I will try to remember next time!

I shouldn’t have picked on you, Werner. GISS is one of the more common acronyms. But there have been past articles that have frustrated me because they throw out obscure acronyms left and right without defining any of them. Even with the ones I am somewhat familiar with, I have a hard time remembering which ones are satellite data, surface temperature data, or ocean temperature data. I think age has something to do with it, but there may be others in my same boat.

To most of us who are skeptical of CAGW, the consistency of the GISS temp adjustments comes as no surprise. Putting Hansen or Schmidt in charge of temperatures gives them a unique opportunity to perpetuate the world’s greatest science hoax.
I am eagerly awaiting the day NASA or NOAA whistleblowers expose the fraud being foisted on the world’s scientific community. Adherence to the scientific method has been violated, and a large portion of the public is aware of the corruption. Following the end of the manmade climate change era, it will be a long time until the scientific community regains public trust, and an even longer time (if ever) for politicians who jumped on the CAGW bandwagon in its heyday.

Werner, thanks for a great, comprehensive, clear and concise update and comparison of the different GSTA data sets. GISS is clearly in a world of its own… unfortunately…

One of the main problems with all these adjustments is that if one tries to change the past, one is doomed to make the same mistakes again in the future (since one isn’t learning from the past). Hence, policies to affect the future that are based on data that changes the past will fail and be likely counter productive. This actually warrants a whole essay by itself.

What is GISS?
GISS is the Goddard Institute for Space Studies, a division of NASA. The GISS home office is in Manhattan, where they can be close to nature and the heavens above.
NCDC is the National Climatic Data Center and is a division of NOAA.
Acronyms are a part of government lingo. It takes time and lots of reading to become fully aware of who the players are as well as their interrelationships.

Nothing packs an emotional impact quite like the frequent monthly announcements of new all-time high record global temperatures.
And no other data set delivers these records quite as frequently as the political activists at GISS.
Coincidence?
I think not.

On my calc it was below zero from Nov 2001 to April 2014. It’s possible May put it over the line, but these are very fine and basically random distinctions.

It was from November 2001 last month, but now WFT gives slope = 0.000422865 per year from November 2001. So to be negative, we need to go to September 2004. I agree, the jump is huge.
There was also a huge jump in the statistically significant times for GISS. It jumped from August 1997 last month to December 1999 this month.

P.S. Is your Hadsst3 up to date? The last 3 points do not match the last 3 months of 0.347, 0.478 and 0.479.

For your reading pleasure: http://data.giss.nasa.gov/gistemp/abs_temp.html
Excerpt:
Q. If SATs cannot be measured, how are SAT maps created ?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a ‘climatology’) hopefully represents a typical map for the particular month or day of the year.

So GISS is using a computer model to generate the very “data” that is used for the graph.

Looking at the “evolution” of the graph where the red and blue lines realign, I think the explanation is simple: more and more iterations of the same computer model, generating more and more “data”, let us look at the computer-generated data with some remnants of historical data, which get diluted as time passes.
In the end we will have 100% fit.

climategrog says:
July 5, 2014 at 12:48 pm
“…. however they may switch places to three decimal places.”
which should tell you that your ranking is totally meaningless.

While I agree with your general sentiment, I would not say “totally meaningless” since there is obviously a difference between 8th and 1st on most data sets. I agree the divisions are sometimes extremely small and not significant. But that does not stop people from using these numbers as if they were without error. You have no doubt heard that May 2014 was the warmest May on record for GISS. It came in at 0.76. However 2010 and 2012 came in at 0.70. So statistically speaking, there could be a three way tie for the warmest May, right?

At least for the land-station averages, what GISS produces with its quaint methods is not the most egregious outlier in terms of maximizing the long-term trend and distorting local/regional temperature variations. That dubious distinction belongs to the whack-away-at-intact-records technique (sold under the “scalpel” rubric) that BEST uses to enforce physically unrealistic spatial homogeneity, so that kriging can be used to extrapolate to locations where no measurements were ever made. This over-ambitious approach severely diminishes the low-frequency spectral content in their manufactured time-series, thereby emphasizing linear trend at the expense of natural climatic variations over decadal and longer time-scales.

” That dubious distinction belongs to the whack-away-at-intact-records technique (sold under the “scalpel” rubric) that BEST uses to enforce physically unrealistic spatial homogeneity, so that kriging can be used to extrapolate to locations where no measurements were ever made. This over-ambitious approach severely diminishes the low-frequency spectral content in their manufactured time-series, thereby emphasizing linear trend at the expense of natural climatic variations over decadal and longer time-scales.”

Gross misunderstanding

Suppose you have a station named Grand Haven

Grand Haven Lat = 40, Lon = -97, Alt = 120

in 1967 the station is moved

Grand Haven Lat = 40.05, Lon = -97.02 alt = 400

In Ghcn Daily these two different stations will be listed as ONE STATION
But they are really two different stations. The only thing that is the same is the name and the station ID
Same thing for some stations that moved hundreds of miles.

Basically, the SOURCE file has an error in it. They moved the station and didn’t give it a new ID.
Of course sometimes they do give it a new ID

Also, when a station gets a new sensor (say MMTS), we refuse to apply the adjustment, which would warm the station. We say it’s a new station. Why? Because the instrument changed.

Basically, we are restoring the data to its proper form. Move the station from a city to the country?
No skeptic would say these were the same merely because they had the same NAME.

What are realistic accuracy, precision, and error numbers for any of these data sets? Are the data from the 1800s and most of the 1900s precise to even a tenth of a degree? Are any of these data sets of any value at all?

I wonder if employees would accept that they are getting more in their pay packets without any real monetary increase because their past wages have been regularly adjusted to lower values in the historical pay records?

I know there is inflation that erodes the value of the current dollar, and economists have ways of looking at that. Also, some economists are named by the Main Stream Media as “climate experts”. Are the climatologists just adopting the mantle of economic expert to inflate their own salaries by fiddling with automatic temperature adjustments and issuing alarming results?

“… we do not know anomalies to the nearest 1/1000 or 1/100 of a degree,…” Even 1/10 th of a degree is hard to believe. Temperatures were recorded to the nearest 1.0 degree until recently. How on Earth are measurements across a small sample of the Earth’s surface supposed to measure the “true” temperature accurately to 0.1 degree? I would love to see a full statistical justification of claimed accuracies and precision.

Sal Minella says:
July 5, 2014 at 2:05 pm
What are realistic accuracy, precision, and error numbers for any of these data sets? Are the data from the 1800s and most of the 1900s precise to even a tenth of a degree? Are any of these data sets of any value at all?

If I recall correctly, measurements from around 1900 are only good to the nearest 0.5 C, but lately, they are expected to be within 0.1 C. In Section 2, I am using Nick Stokes’ site for the 95% confidence intervals. I will let him elaborate further.
Thanks Nick!

Dr Burns says:
July 5, 2014 at 2:21 pm
Read the end of page 11 of this NOAA document

Thank you! They must assume that the temperatures ending in 0.1 to 0.4 balance the ones ending in 0.6 to 0.9. That would be reasonable. However, rounding a 0.5 up to the next highest number makes 10% of the readings too high. It would have made more sense to ask that a 0.5 be rounded to the nearest even number if they wanted only whole numbers.
On the other hand, if 10% of the numbers were too high 50 years ago, and if 10% of the numbers are too high now, the trend should not be affected.
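The bias being discussed is easy to check numerically. The sketch below is my own illustration, not anything from the NOAA document: it compares always-rounding-0.5-up with round-half-to-even over readings ending in .0 through .9, assuming each ending is equally likely.

```python
# Demonstrate the round-half-up bias: readings to the nearest tenth,
# rounded to whole degrees. Always rounding .5 up biases the mean warm;
# round-half-to-even (Python's built-in round) does not.
import math

def round_half_up(x):
    """Round to the nearest integer, with .5 always going up."""
    return math.floor(x + 0.5)

def mean_bias(rounder):
    """Average rounding error over the readings 20.0, 20.1, ..., 29.9."""
    readings = [round(20 + k / 10.0, 1) for k in range(100)]
    errors = [rounder(r) - r for r in readings]
    return sum(errors) / len(errors)

bias_up = mean_bias(round_half_up)          # ~ +0.05: every .5 reading pushed up
bias_even = mean_bias(lambda x: round(x))   # ~ 0: .5 cases split up and down
```

With readings reported in tenths, the .5 endings are roughly 10% of all readings and each gains 0.5 under round-half-up, which is exactly the +0.05 average bias in the comment above; rounding to the nearest even degree cancels it.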

John Kennedy, who helps compile the Hadcrut4 figures, pointed out that there are many uncertainties in the May 2014 figures and that it should be considered as certainly in the top ten of warm Mays, but they couldn’t be any more certain than that.

Vern Cornell
July 5, 2014 at 11:20 am
‘What is GISS?…..it is not explained…it baffles me…’

First of all, GISS stands for Goddard Institute for Space Studies. So, perhaps the next question that may come to mind is, who is Goddard? According to Wikipedia: ‘Robert Hutchings Goddard (October 5, 1882 – August 10, 1945) was an American professor, physicist, and inventor who is credited with creating and building the world’s first liquid-fueled rocket, …, successfully launched on March 16, 1926. Goddard and his team launched 34 rockets between 1926 and 1941, achieving altitudes as high as 2.6 km (1.6 mi) and speeds as high as 885 km/h (550 mph).’ Now, I do not wish to put Robert Goddard down. His accomplishments were substantial considering his funding. But he received very little, if any, funding from the US government at the time. His direct involvement with any US government space program was pretty much nonexistent. For that we have to turn to Wernher von Braun (March 23, 1912 – June 16, 1977), who developed the world’s first large-scale liquid-fueled rockets, which were produced at Peenemünde on the Baltic in Germany. The Germans, during WWII, invested heavily in the development of rockets as long-range weapons; Peenemünde is where they were produced, and Wernher von Braun was the German rocket scientist who headed up Peenemünde. At the end of WWII, von Braun and some colleagues headed west to be taken captive by Allied troops advancing eastward. Other colleagues remained at Peenemünde to be captured by Soviet troops advancing westward. The large, liquid-fueled V2 rockets developed at Peenemünde were split up as war booty between the Allies and the Soviets, and it is from these V2s that the US conducted upper atmospheric research and acquired experience with a two-stage rocket. Arguably, the space race of the 1950s through 70s between the US and USSR was driven by the German rocket scientists each side acquired. And ours were better. Wernher von Braun was responsible for all the successful space program rockets up to and including the Moon launch.
However, it wouldn’t do to name a space center after a former Nazi collaborator so we have no von Braun space centers today. So, instead we have GISS.

What does GISS do? Well, according to Wikipedia ‘GISS was established in May 1961 by Robert Jastrow to do basic research in space sciences in support of Goddard programs.’ Remember, this was all instigated as a result of WWII, the Cold War that followed, and the space race between the US’s and the USSR’s German rocket scientists that arose from that silent conflict. The Goddard programs were simply the programs of the GSFC, and GISS was an arm of it. And what is the GSFC? Again, according to Wikipedia; ‘The Goddard Space Flight Center (GSFC) is a major NASA space research laboratory established on May 1, 1959 as NASA’s first space flight center.’ Now, I don’t wish to insult anyone’s intelligence but it might be useful to reiterate what NASA (the parent to GSFC and GISS) really is, or at least is claimed to be, so, according to the US government’s own website; ‘NASA stands for National Aeronautics and Space Administration.’ Furthermore, on this website we learn NASA was initiated in 1958. It bears notice here that the world’s very first satellite, Sputnik I, was launched in 1957 by the USSR so the dates behind the creation of NASA, GSFC, and GISS become self explanatory. Prior to all this, the US military was responsible for the first rocket exploration, but after the embarrassment of Sputnik, the civilian agency NASA came into being and Wernher von Braun was enlisted.

Now, before there’s any misunderstandings as to what the word, ‘Space,’ present in NASA’s name means let us return to Wikipedia: ‘Space is the boundless three-dimensional extent in which objects and events have relative position and direction.’ But, let us be more precise. So, also, according to Wikipedia: ‘Outer space, or simply space, is the void that exists between celestial bodies, including the Earth…'; and; ‘There is no firm boundary where space begins. However the Kármán line, at an altitude of 100 km (62 mi) above sea level, is conventionally used as the start of outer space in space treaties and for aerospace records keeping.’

In conclusion, I have to apologize for my rather lengthy answer to your question. Especially since I don’t think I even began to answer it. You see, I don’t find anything whatsoever in the foregoing descriptions or explanations that points, ever so microscopically slightly, towards the idea that the entities above would possibly be involved in collecting land surface temperature measurements at all, let alone from anything less than 62 miles above the Earth. As we have seen, that purpose certainly wasn’t the motivation in creating NASA, GSFC, or GISS. And I can’t find it in their charters, mission statements, history, or even in their names. It pains me to tell you that GISS is a land surface temperature measurement when all my research tells me it shouldn’t be. Maybe it’s mission creep. Maybe, with the end of the space race (after all, we haven’t been back to the moon in almost 40 years) it’s a jobs preservation program. (If so, the only one the Obama administration has been successful with.) Perhaps it’s a change in direction to avoid the embarrassment of needing Russian rockets to now launch payloads.

In any case, maybe that’s why the numbers produced by it are so crappy.

Dr Burns says:
July 5, 2014 at 2:21 pm
Read the end of page 11 of this NOAA document
Thank you! They must assume that the temperatures ending in 0.1 to 0.4 balance the ones ending in 0.6 to 0.9. That would be reasonable. However, rounding a 0.5 up to the next highest number makes 10% of the readings too high. It would have made more sense to ask that a 0.5 be rounded to the nearest even number if they wanted only whole numbers.
On the other hand, if 10% of the numbers were too high 50 years ago, and if 10% of the numbers are too high now, the trend should not be affected.

The way they round the numbers to the nearest whole degree is the standard way to round them…nothing new there. 0.5 to 0.9 is rounded up and 0.0 to 0.4 is rounded down…so 75.0 through 75.4 become 75 and 75.5 through 75.9 become 76.

Tonyb says:
July 5, 2014 at 2:50 pm
certainly in the top ten of warm Mays
Thank you for that. So the warmest May was 2014 at 0.586 and the 10th warmest was at 0.428. The difference is 0.158, so if we assume the low value could be 0.08 higher and the high value could be 0.08 lower, then we could say we have a 10 way tie for first assuming an error bar of +/- 0.08.
Something very similar could have been said for GISS in May.

Werner Brozek;
However in rounding a 0.5 to the next highest number makes 10% of the readings too high.
>>>>>>>>>>>

No it doesn’t. Everything up to 0.49999…. is rounded down; 0.5 and higher is rounded up. So the difference between 0.49999…. and 0.5 is effectively 0.0000…, hence a perceived bias, but from a purely math perspective there actually isn’t one.

davidmhoffer says:
July 5, 2014 at 3:37 pm
No it doesn’t. Everything up to 0.49999…. is rounded down
Keep in mind we are talking about temperature readings to the nearest tenth of a degree, or so I thought.

Werner Brozek;
Keep in mind we are talking about temperature readings to the nearest tenth of a degree, or so I thought.
>>>>>>>>>>>

We are. But from a strictly math perspective, there is no bias introduced by rounding 0.4 down and 0.5 up because the “range” of 0.4 actually extends up to 0.4999….

On the other hand, there would be a bias from observation because I’m pretty certain that someone looking at a thermometer that read 0.47 would probably record it as 0.5, and this would in fact introduce a bias.
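For what it’s worth, the two positions can be reconciled with a quick simulation (a sketch only, using invented uniform readings): rounding a truly continuous value half-up introduces no bias, but rounding readings that were already recorded to the nearest tenth, which is the situation Werner describes, adds roughly +0.05 on average.

```python
import random

random.seed(0)
N = 200_000

def round_half_up(x):
    # Ties at .5 always go to the next whole degree (positive readings assumed)
    return int(x) + 1 if x - int(x) >= 0.5 else int(x)

# Case 1: truly continuous readings. A value exactly at .5 essentially never
# occurs, so half-up rounding is unbiased.
cont = [random.uniform(70.0, 80.0) for _ in range(N)]
bias_cont = sum(round_half_up(x) - x for x in cont) / N

# Case 2: readings already recorded to the nearest tenth. One reading in ten
# ends in .5, and always rounding those up adds about +0.05 on average.
tenths = [round(random.uniform(70.0, 80.0), 1) for _ in range(N)]
bias_tenths = sum(round_half_up(x) - x for x in tenths) / N

print(f"continuous readings bias:   {bias_cont:+.4f}")
print(f"tenth-degree readings bias: {bias_tenths:+.4f}")
```

So davidmhoffer is right for continuous quantities, and Werner is right for tenth-quantized thermometer readings.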

I wonder if they’re blocking “bots” based on user-agent-string, and “R” has a user-agent-string that they block. Does “R” have the option to change user-agent-string? For a list of “normal Firefox” strings see http://www.useragentstring.com/pages/Firefox/ and try one of the more recent ones, e.g…
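The same idea in Python, as a sketch (the URL here is hypothetical; in R, `httr::user_agent()` serves the same purpose, if I recall correctly): set a browser-like User-Agent header so a server that blocks unknown clients by user-agent string treats the request as an ordinary Firefox visit.

```python
import urllib.request

# A browser-like User-Agent string (any recent Firefox string would do)
ua = "Mozilla/5.0 (Windows NT 6.1; rv:30.0) Gecko/20100101 Firefox/30.0"

req = urllib.request.Request(
    "http://example.com/data.txt",  # hypothetical data URL
    headers={"User-Agent": ua},
)
# urllib.request.urlopen(req) would then fetch with that header set.
print(req.get_header("User-agent"))
```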

As a statistician and meteorologist, much of what is going on here makes me feel like I am watching a horror show. The errors occurring on all sides here are frightening – here are a few of them:

1) If the measuring instrument changes, no amount of “infilling” or “adjustments” can ever change the fact that the data are bad – you cannot treat new and old as the same data series. If they are used in calculating a statistic, then the proper degrees of freedom of any adjustment method require deflation. On the flip side, if the changing data are used in a model as a proxy of an unobserved “true” state (such as often used in econometrics), then all statistics must be calculated with respect to this model, but usually model specification errors dominate to the point of making the inferences useless. In other words, one cannot use fake data from which to calculate an inference and not acknowledge that the data have been tampered with in the corresponding inferences.
2) With respect to the first point, the other option would be to throw out data series that have a changing measurement or introduced bias, but this then means that survivorship bias will dominate. In other words, are surviving “good” series in rural areas, urban areas, country/regional specific, etc?
3) The bigger horror is that average temperature anomalies are not really even measuring what has to be measured, namely, the combined total heat content found in the atmosphere/ocean system. Since even small relative changes in latent heat content dominate small changes in average temperature, this is not even measuring the right thing, much less handling distributional issues (e.g., net boundary layer heat content as opposed to a surface station temperature that will be influenced by land-use change effects on boundary layer decoupling, etc).
4) Point 3 was the tip of the “iceberg”, as the heat capacity and storage of the oceans is so massive compared to the atmosphere that even accounting for water vapor and latent heat roles in the atmosphere would still succumb to the much more massive and still mostly unmeasured oceanic heat content. In a nutshell, we do not even have a valid historical series in which we can measure such changes on the scale being attempted.

We do “know” that the earth has been both much warmer on average and much colder on average than present, but the ugly truth is that we do not yet have any usable data to give us a good estimate of total heat content changes in the modern era. Satellite measurements of total upwelling/down-welling radiation fluxes are a definite step in the right direction, as are deployment of much more extensive ocean buoy/temp sensors, but the rest of this looks like bad science and statistics to me. It’s nice to be able to ask if things are getting warmer or colder, but the real question is what is happening to the heat content and distribution. And that is not going to appear in any of these single number series, even ignoring their issues.

Maybe this constant exaggeration of data as seen in GISS is an inherent characteristic of the always optimistic American psyche.
I have flown gliders, those big, mostly German and Polish built FRP ones, for just over 50 years now.
A number of American outfits over those 50 years have had a go at building gliders whose claimed performance curves match or exceed those of the German and Polish built gliders: the sink rates and glide angles at different speeds, the “polar curves” as they are called, which on paper the American designed and built gliders can match against the best of the German and Polish aircraft.
[Those best German gliders, depending on class and model, now have best glide ratios of between 50 and 60 to one; i.e. 50 to 60 kilometres of gliding distance in calm air for each kilometre of height (one kilometre = 3,281 feet).
A late model jet airliner has a glide ratio of around 20 to one.]

But in the air it is a different matter altogether as the American designed and built gliders, despite their claimed performances derived from their “polar curves”, have never matched the German and Polish machines in performance.
And the reason is quite simple.
The Americans, as do the European manufacturers, design their gliders using all the aerodynamic inputs required for best performance as well as for good handling in the air, a vital characteristic whether you spend six or eight hours driving hard over distances of hundreds of kilometres or just laze around for an hour or so casually enjoying the local airfield thermals.
When the glider’s claimed glide angles at different speeds, the performance characteristics seen in its “polar curve”, are published, the Americans in the past have always used the modelled, maximum theoretical performance curves in their advertising literature and in their technical specifications for the glider type.
As we all know, there can be quite large differences between modelled and calculated performance and real world performance in just about any field we would consider, especially climate and, now, temperature data manipulation.

The Germans use the measured in air performance of the very carefully built and finished prototype or preproduction models of a type for their performance curves.
The actual performance of their commercially built gliders is usually not quite up to the claimed performance, simply because the precision built moulds in which the wings are formed, accurate to a few tenths of a millimetre in the wing’s aerodynamic profile, deteriorate over a few tens of wings produced and have to be re-profiled to regain the accurate aerodynamic wing profile.
So a glider type off the German and European production lines will rarely match the performance of the preproduction gliders but will generally come very close.
And those flown performance curves from the preproduction gliders are what are used in the advertising and technical information of the glider type to sell those gliders.

The Poles apparently grab a glider off the production line and measure the performance curves of that real world glider, with all its small imperfections from series production, the glider the customers actually buy, so you generally get the advertised performance you pay for from the Poles.

And so with the American GISS and the European HADCRU.
The Americans, as usual, figure that it is advertised world beating performance that sells, and that has certainly been the case with the exaggerations of GISS in its temperature data.

And maybe that tendency to exaggerate somewhat in the American psyche is also seen in the gross corruption of temperature data by the American based NCDC before it ever reaches GISS and HADCRU for further severe massaging before being released for public consumption.

Mind you, if NCDC and either or both of GISS and HADCRU were calculating and advertising the performance specifications for a glider, then given their known ability to “adjust” global temperature performance and advertise those grossly hyped figures as the real deal, I would just assume that they were into advertising and selling gliders with real world performance somewhat akin to that of a “concrete” glider, as an alternative to selling the odd bridge or two.

How much of a rounding error there may be, really depends upon what people were ‘reading’ in the past, bearing in mind the then used scale on the thermometer being read.

mjc says:
July 5, 2014 at 3:09 pm
“…The way they round the numbers to the nearest whole degree is the standard way to round them…nothing new there. 0.5 to 0.9 is rounded up and 0.0 to 0.4 is rounded down…so 75.0 through 75.4 become 75 and 75.5 through 75.9 become 76.”
If that is how the thermometer was read, and if that is how the rounding operates, there may be cause for concern.

After all, 75.0 is read as 75; 75.0 is not being rounded down to 75. So only 75.1, 75.2, 75.3 and 75.4 lie in the lower, rounding down band, whereas 75.5, 75.6, 75.7, 75.8 and 75.9 lie in the higher, rounding up band.

The problem is the treatment of 75.5: half the time it should be rounded down, and half the time it should be rounded up.

Of course, I do not know how the thermometer was ‘read’ in the past, nor precisely how they are treating the rounding exercise.

Does anyone here know how many real, original stations are currently used in USHCN?
Steven Mosher has mentioned (July 4th, 2014 at 9:48 pm):
angech The TOBS adjustment would be done once
the adjustment that would/could change on a daily basis is PHA
wait for Monday. the entire process will be explained.
but, I don’t expect anybody who has raised objections will give them up.
Going forward I think you’ll see NCDC make the whole thing an output of PHA.
That moves them closer to something like our process.
But wait and read what is coming out on Monday
Then if you don’t like what NCDC does with USHCN, we could just dump all of USHCN, dump all of the US and the answer wouldn’t change much.
This will no doubt solve your GISS concerns, Werner.

Regarding rounding used in ASOS observations: for a number of years, the instructions have been to ALWAYS round UP a 0.5 degree reading. This means that if the reading is below zero, you round to the nearest higher whole degree, rather than to the whole degree with the larger absolute value and the sign appended.

In other words, if the temperature reads -1.5 degrees, it would be rounded to -1
degree, rather than -2 degrees.
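That convention can be sketched in two lines (assuming, as described, that a .5 reading always rounds toward the next higher degree rather than away from zero):

```python
import math

def asos_round(temp):
    # ASOS convention described above: .5 readings ALWAYS round toward the
    # next HIGHER whole degree, so -1.5 -> -1 (not -2), and 1.5 -> 2.
    return math.floor(temp + 0.5)

print(asos_round(-1.5))  # -1
print(asos_round(1.5))   # 2
```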

If you round numbers to the nearest whole degree, then the resulting dataset is precise to a whole degree, not to a tenth or a hundredth of a degree. Mathematical operations such as averaging cannot yield a result with greater precision than the original dataset, so reporting average temperatures to a tenth or a hundredth of a degree is ludicrous. Furthermore, if the accuracy of the original readings, taken by untrained individuals or read from uncalibrated instruments, is only good to a degree (+/- 1 degree), then the warming since 1850 may be 1 degree, +/- 1 degree.

Steve McIntyre says:
July 5, 2014 at 9:53 am
When I looked at GISS in 2007, they used a weird “two-legged” adjustment to individual stations that is not used by anyone else. I haven’t checked their recent methods to determine whether it’s still being used, but it probably is.

Yes, they still use the “two-legged” adjustment, where each of the legs can either go down (to remove “urban warming”) or up (to remove “urban cooling”). I think the rationale was as follows:

1. Urbanization bias is not necessarily “linear”, e.g., an area could undergo exponential growth during some period, while during other periods the rate of development might “slow down”. Therefore, Hansen et al. decided to allow a bit more flexibility in their adjustments than a single “linear” adjustment per station. Hence, the “bi-linear” adjustments…

2. They heard that, under certain conditions, urban development could introduce a slight “cooling” bias. So, they decided to let their automated adjustment program calculate adjustments that had either a negative or a positive slope.

Unfortunately, because of a number of serious flaws in their automated adjustment algorithm and the poor quality of the GHCN dataset, these adjustments are frequently unrealistic and/or nonsensical.

We provide a detailed discussion & analysis of these adjustments in our “Urbanization bias II. An assessment of the NASA GISS urbanization adjustment method” paper, which we have submitted for open peer review at our OPRJ forum: http://oprj.net/articles/climate-science/31

For our paper, we carried out a series of five comprehensive surveys of the adjustments over the period August 2010 until November 2011, when GISS switched to GHCN V3 and stopped publishing their intermediate calculations on their public ftp site. We also used some of the GISS files you have in your ClimateAudit data folder to calculate the adjustments for February 2008.

For each survey, we divided the station adjustments into 4 types, based on the slopes of the two “legs”:
Type 1 – Both slopes negative, i.e., remove “urban warming”
Type 2 – Both slopes positive, i.e., remove “urban cooling”
Type 3 – First slope negative; second slope positive
Type 4 – First slope positive; second slope negative

Types 3 & 4 comprise the majority of their adjustments. We call these “tag-team” adjustments. I think this is similar to the adjustments you referred to as “bi-polar”, although as far as I recall you found fewer “bi-polar” adjustments than we found “tag team” adjustments.
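For readers following along, the four-way breakdown can be expressed as a small classifier. This is a hypothetical helper, not the actual GISS or survey code, and it ignores the edge case of exactly-zero slopes for simplicity:

```python
def adjustment_type(slope1, slope2):
    """Classify a two-legged adjustment by the sign of each leg's slope."""
    if slope1 < 0 and slope2 < 0:
        return 1  # both legs remove "urban warming"
    if slope1 > 0 and slope2 > 0:
        return 2  # both legs remove "urban cooling"
    if slope1 < 0 and slope2 > 0:
        return 3  # "tag-team": warming removed first, then cooling
    return 4      # "tag-team": cooling removed first, then warming

print(adjustment_type(-0.02, -0.01))  # 1
print(adjustment_type(-0.02, +0.01))  # 3
```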

After November 2011, NASA GISS stopped publishing the intermediate calculations (presumably as part of NASA’s “Open Data” project??? ;) ). So, we were unable to check exactly how the breakdown of the four types changed when they switched to using GHCN V3.

But, as Nick Stokes points out, when they switched to using V3, they started using the homogenized version of GHCN.

As we discuss in our papers, the Menne & Williams, 2009 homogenization algorithm used for GHCN causes “urban blending”. That is, the trends of the most heavily urbanized stations are slightly reduced to match their less urbanized neighbours… but, the rural stations have their trends increased to better match their urban neighbours.

This means that, in the homogenized GHCN stations, the urbanization bias is blended (or “homogenized”) amongst all the stations – “rural” and “urban”.

The GISS adjustments explicitly assume that the “rural” stations have no urbanization bias. So, whatever the problems with the pre-2012 GISS adjustments, their post-2011 adjustments are likely nonsensical.

Regarding old temperature records – A lot of this data was collected in the World Weather Records which were published from time to time in the Smithsonian Miscellaneous Collections series, which I found (sometime ago) on the Web.

Among the data, I found adjustments of 1 degree or more mentioned without explanation – and this was not uncommon. I don’t recall what the precision was for the basic numbers, but I certainly wouldn’t believe a precision finer than a degree.

There seem to be many claims of continued, rather rapid, warming. Are these based on anything at all except the occasional high daily temperature?

More specifically, for the past ten to twenty years, do claims of continued warming rely on:
Some alternate analysis of these particular temperature data sets, or some subset of them, that does say there has been statistically significant warming?
Other temperature data sets that show statistically significant warming when subjected to the same statistical analysis, or some alternate statistical analysis?

I have yet to come across any specific answers to this. I will be happy to follow any references that can be provided, but I hope to get something that directly addresses the question, rather than something that may, to someone with the necessary background, simply suggest the idea of relevance to the subject.

AndyH says:
July 6, 2014 at 4:37 pm
There seem to be many claims of continued, rather rapid, warming. Are these based on anything at all except the occasional high daily temperature?
It is my experience that these claims depend on one of two possibilities, neither of which really prove anything.
The first is that even if the air has not warmed, the oceans have warmed by 0.1 C and if this heat went into the air instead, the air would have warmed by 100 C since the oceans can hold about 1000 times more heat than the air. There are at least two things wrong with this scenario. Firstly, we do not know the temperature change of the whole ocean to know if that is the case. And secondly, even if the ocean did warm by 0.1 C, the most that the air can be warmed by this is also 0.1 C. For all intents and purposes, the oceans are an infinite heat sink which will prevent rapid warming of the air.
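The arithmetic behind that first point, using the round numbers assumed above (an ocean heat capacity roughly 1000 times the atmosphere’s):

```python
ratio = 1000.0        # assumed ocean/atmosphere heat-capacity ratio
ocean_warming = 0.1   # assumed ocean temperature rise, degrees C

# If ALL of that ocean heat were somehow dumped into the air:
hypothetical_air_warming = ocean_warming * ratio
print(hypothetical_air_warming)  # 100.0

# But heat only flows until temperatures equalize, so the air can gain at
# most the ocean's own temperature rise:
print(min(hypothetical_air_warming, ocean_warming))  # 0.1
```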
The second problem is how certain people define a trend. They say that since the last decade was warmer than the previous one, and it was, that warming is continuing. Here is the problem with this argument. Suppose a man grows until his 20th birthday and then stops growing. At age 30, he is the same height as at age 20. He did not grow a millimetre in all these last ten years. But some warmists will say that since his average height between ages 20 and 30 is much higher than between 10 and 20, the man was growing rapidly between the ages of 20 and 30.

You list how far back in the data sets the regression line is flat. Actually, you can push it back a bit further than that. If the slope of the regression line is less than its standard error, then the slope is not statistically significant and you can consider that line to be flat.
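As a sketch of that test, here is a stdlib-only ordinary least squares fit; the monthly anomaly numbers are invented for illustration:

```python
import math

def slope_and_se(y):
    # Ordinary least squares trend of y against time index 0..n-1,
    # returning the slope and its standard error.
    n = len(y)
    x = list(range(n))
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    ss_resid = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(ss_resid / (n - 2) / sxx)
    return slope, se

# Invented monthly anomalies with no real trend:
anoms = [0.21, 0.25, 0.18, 0.22, 0.26, 0.20, 0.23, 0.19, 0.24, 0.22]
b, se = slope_and_se(anoms)
print(f"slope={b:+.4f}, SE={se:.4f}, flat by this test: {abs(b) < se}")
```

Note that |slope| < SE corresponds to |t| < 1, which is an even weaker hurdle than the usual 95% criterion of roughly |t| > 2, so a series flat by this test is certainly not significant at 95%.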

GISS reached some sort of a tipping point with its record May anomaly which pushed its statistically significant warming date forward by over two years from last month.
Hadcrut4 also set a record for May; however, since it came so late, the statistical significance date had not been updated when I sent this off. It has now been updated, and the period for statistical significance for Hadcrut4 only went up by one month, to November 1996, where the CI is now -0.023 to 1.167. So Hadcrut4 still shows no statistically significant warming for over 17 years.

UAH version 5.5 update:
The June value was 0.277 leaving an average of 0.206 for the year so far. This would rank 6th if it stayed this way. It is possible that the period of zero slope will increase to 9 years and 6 months, but it is too close to call. I will know in 11 hours when WFT gets updated.

The misunderstanding about tacit assumptions of spatial homogeneity in producing “regional expectation” is entirely BEST’s. While the “Roman hammer” can be used effectively to cobble together different versions of a station record to form a single long-term time-series, this cannot be done reliably with snippets of record from locations a few hundred km apart.

GISS deals with surface temperature data, right? Well, with all the negative information we have heard about the adjustments to the data, and the way the global average temperature is determined, why should anyone have any confidence in the surface temperature data put into the public domain by GISS?

“…however if I want the best single number for the table, it is just a nuisance.”

This is not the best number. Those numbers have error bars, dude. The third decimal place is likely to be random at best. Do not try to rank based on that. If there is a draw within statistical significance, then you must declare a draw. There is no other way to do it. Sorry about that.

Yes, GISS deals with surface temperature. And we are indeed trying to figure out why we should “have any confidence” in their data. One notable difference between GISS and Hadcrut4 is how they treat sparse data in the polar regions. With the huge amount of extra ice at the two poles combined, and with GISS’ method to take sparse polar readings into account, I am puzzled how GISS just blew away the previous May record by 0.06 C, far surpassing all other data sets for May.

Adam says:
July 7, 2014 at 6:30 pm
If there is a draw within statistical significance then you must declare a draw.

I am not disputing what you say. I give ranks in row 1 and in row 15 of the table. In addition, I mention ranks continuously in the Appendix. In almost all cases, I could probably give a rank and say +/- 5, unless the rank is less than 5.
And with every single temperature anomaly on the table, as well as elsewhere, I could probably say +/- 0.1.
I will just keep things simple, although we all realize the limitations. The given data sets also do not have a +/- 0.1 behind every one of the hundreds of numbers.

UAH version 5.5 update. With the June value of 0.277 for the anomaly, the time for a negative slope decreased from January 2005 to June 2008. So instead of 9 years and 5 months, it is now 6 years and 1 month.
I am of course making the perhaps erroneous assumption that the 0.277 is accurate to the nearest 1/1000 of a degree. If I were to assume it could be 0.277 +/- 0.1, then it could be as low as 0.177, and if this were the case, the time would remain over 9 years.