Red and orange are based on surface meteorological station data as compiled by NASA (Hansen et al.) and the British Met Office (HadCRUT of Jones et al.). Green and blue are two different products of the same satellite data series, compiled by the University of Alabama (Spencer et al.) and Remote Sensing Systems (RSS).

See how NASA creeps up, whereas Jones et al. of the UK Met Office holds the middle ground between Hansen and the two satellite temperature sets. Although the latter show differences in monthly values, both have a robust fit of the 12-month running mean (bold black).

Staff: Mentor

It's not an "analysis" of the data. It's looking at what these "official" sources predicted would happen as opposed to what actually happened. Like "it will rain Thursday" and on Friday you know it didn't rain Thursday.


That's the spirit of the scientific method, testing the predictions, and that's the intention of this post, to see if it rained on Thursday. No analyses, just comparing predictions with measured results.

This is the prediction that started the global warming alarm: Hansen et al. 1988, centered around the model result in fig. 3 (page 9347).

So what happens if we merge the actual results of NASA and RSS (12-month running averages) with the predictions:

Note that the vertical positions of the graphs depend on different definitions of the baseline zero value. Therefore I have displaced both measured series vertically so that they start at the average value between scenarios A and B.
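The vertical displacement described here can be sketched as follows (a minimal illustration with made-up anomaly arrays, not the actual series):

```python
import numpy as np

def rebase_to_scenario_start(obs, scen_a, scen_b):
    """Shift an observed anomaly series so its first value equals the
    mean of scenarios A and B at the start of the comparison window.
    All inputs are 1-D anomaly arrays on the same time axis."""
    target_start = 0.5 * (scen_a[0] + scen_b[0])
    return obs + (target_start - obs[0])

# Toy example: observations sit on a different zero point than the model runs.
obs = np.array([0.30, 0.35, 0.40])
scen_a = np.array([0.10, 0.20, 0.30])
scen_b = np.array([0.06, 0.12, 0.18])
shifted = rebase_to_scenario_start(obs, scen_a, scen_b)
print(round(shifted[0], 2))  # 0.08, the A/B average at the start
```

This only removes the arbitrary offset between different anomaly baselines; it does not change the trends being compared.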

Also important are the assumptions about the three scenarios in appendix B, pages 9361–9362:

B: "In Scenario B the growth of the annual increment of CO2 is reduced from 1.5% yr-1 today to 1% yr-1 in 1990, 0.5% yr-1 in 2000 and 0 in 2010...

C: "In scenario C the CO2 growth is the same as in scenarios A and B through 1985; between 1985 and 2000 the annual CO2 increment is fixed at 1.5 ppmv yr-1; after 2000 CO2 ceases to increase, its abundance remaining fixed at 368 ppmv...
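Scenario C's concentration path, as quoted, is simple enough to generate directly. A sketch (the 1985 starting value of 345.5 ppmv is inferred here from the stated 368 ppmv cap, not quoted from the paper):

```python
def scenario_c_co2(start_year=1985, end_year=2020, c0=345.5):
    """CO2 path under Hansen's scenario C as quoted above:
    +1.5 ppmv/yr from 1985 to 2000, then held fixed.
    c0 (ppmv in 1985) is chosen so the cap lands at 368 ppmv."""
    conc = {}
    c = c0
    for year in range(start_year, end_year + 1):
        conc[year] = c
        if year < 2000:
            c += 1.5  # fixed annual increment through 2000
    return conc

path = scenario_c_co2()
print(path[2000], path[2010])  # 368.0 368.0 -- flat after 2000
```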

Staff: Mentor

Perhaps it's still too early to tell. The NASA 12-month RA and RSS 12-month RA oscillate quite a lot compared to the models, which seem smoother. Occasionally the measurements depart from the models. It's hard to tell A, B and C apart (but I assume C is the bottom one in the second plot). It would appear the measurements have been dropping below C from 2006 to the present.

Staff: Mentor

How are standard deviations or confidence intervals normally taken into account with these types of models and data?

Perhaps those details are buried in the papers by Hansen et al., e.g. the one cited by Andre in post #8.

I suppose they could use noise analysis. In some cases, I have seen 5-year rolling-average trend plots which smooth out variations. I'm not sure how the measured data are processed.

Hansen/GISS make the following comments:

Current Analysis Method
The current analysis uses surface air temperatures measurements from the following data sets: the unadjusted data of the Global Historical Climatology Network (Peterson and Vose, 1997 and 1998), United States Historical Climatology Network (USHCN) data, and SCAR (Scientific Committee on Antarctic Research) data from Antarctic stations. The basic analysis method is described by Hansen et al. (1999), with several modifications described by Hansen et al. (2001) also included.

Graphs and tables are updated around the 10th of every month using the current GHCN and SCAR files. The new files incorporate reports for the previous month and late reports and corrections for earlier months. NOAA updates the USHCN data at a slower, less regular frequency; we switch to a later version as soon as a new complete year is available.

The GHCN/USHCN/SCAR data are modified in two steps to obtain station data from which our tables, graphs, and maps are constructed. In step 1, if there are multiple records at a given location, these are combined into one record; in step 2, the urban and peri-urban (i.e., other than rural) stations are adjusted so that their long-term trend matches that of the mean of neighboring rural stations. Urban stations without nearby rural stations are dropped.
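Step 2 of the procedure above can be sketched schematically. Note this is an illustration only: the real GISS adjustment fits a two-legged (broken-line) trend, while this sketch removes a single straight-line excess trend, and all station data here are synthetic:

```python
import numpy as np

def adjust_urban_trend(years, urban, rural_mean):
    """Schematic version of GISS step 2: remove the excess linear trend
    of an urban station relative to the mean of nearby rural stations.
    (The actual GISS procedure fits a two-legged broken line; a single
    straight line is used here for illustration.)"""
    t = years - years.mean()
    excess_slope = np.polyfit(t, urban - rural_mean, 1)[0]
    return urban - excess_slope * t

# Synthetic stations: rural background warms 0.01 degC/yr,
# the urban station shows an extra 0.02 degC/yr heat-island trend.
years = np.arange(1950, 2010, dtype=float)
rural_mean = 0.01 * (years - 1950)
urban = rural_mean + 0.02 * (years - 1950)
adjusted = adjust_urban_trend(years, urban, rural_mean)
```

After adjustment the urban station's long-term trend matches the rural mean's, as the text describes.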

A global temperature index, as described by Hansen et al. (1996), is obtained by combining the meteorological station measurements with sea surface temperatures based in early years on ship measurements and in recent decades on satellite measurements. Uses of this data should credit the original sources, specifically the British HadISST group (Rayner and others) and the NOAA satellite analysis group (Reynolds, Smith and others). (See references.)

In the past our procedure has been to run the analysis program upon receipt of all three data sets and make the analysis publicly available immediately. This procedure worked very well from a scientific perspective, with the broad availability of the analysis helping reveal any problems with input data sets. However, because confusion was generated in the media after one of the October 2008 input data sets was found to contain significant flaws (some October station records inadvertently repeated September data in the October data slot), we have instituted a new procedure. The GISS analysis is first made available internally before it is released publicly. If any suspect data are detected, they will be reported back to the data providers for resolution. This process may introduce significant delays. We apologize for any inconvenience due to this delay, but it should reduce the likelihood of instances of future confusion and misinformation.

Finally, we note that we provide the rank of global temperature for individual years because there is a high demand for it from journalists and the public. The rank has scientific significance in some cases, e.g., when a new record is established. However, otherwise rank has limited value and can be misleading. Note that, given our estimated error bar in Figure 1, we can only say that 2008 probably ranks as somewhere between the 7th and 12th warmest year. As opposed to the rank, Figure 3 provides much more information about how the 2008 temperature compares with previous years, and why it was a bit cooler (note the change in the Pacific Ocean region).
. . . .
References
1. Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022, doi:10.1029/1999JD900835.

A press release from the Met Office Hadley Centre and the Climatic Research Unit (CRU) at the University of East Anglia: 2008 global temperature

. . .
La Niña events typically coincide with cooler global temperatures, and 2008 is slightly cooler than the norm under current climate conditions. Professor Phil Jones at the CRU said: "The most important component of year-to-year variability in global average temperatures is the phase and amplitude of equatorial sea-surface temperatures in the Pacific that lead to La Niña and El Niño events".

The ten warmest years on record have occurred since 1997. Global temperatures for 2000-2008 now stand almost 0.2 °C warmer than the average for the decade 1990–1999.

Dr Peter Stott of the Met Office says our actions are making the difference: "Human influence, particularly emission of greenhouse gases, has greatly increased the chance of having such warm years. Comparing observations with the expected response to man-made and natural drivers of climate change it is shown that global temperature is now over 0.7 °C warmer than if humans were not altering the climate."
. . . .

Hansen's paper assumes in scenarios A and B that CO2 emissions will grow 1.5% annually. Actual atmospheric CO2 concentrations have grown only about 0.4% annually since 1980, for a total of 13.6%. Hansen also assumes a climate sensitivity of 4.2 °C per doubling of CO2.

Since 1980, according to the NCDC database, global 5-year average temperatures have risen about 0.51 °C, whereas NASA GISS finds 0.53 °C. Considering the actual increase in CO2, Hansen's sensitivity assumption works out to 0.57 °C of warming. So actual climate sensitivity to CO2 doubling looks to be less than 3.8 °C, since CH4 has also played a role (at least up to the 1990s).
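As a back-of-envelope check, a sketch assuming the standard logarithmic CO2 forcing relation and full equilibrium response (the 0.57 °C figure above presumably reflects the smaller transient response realized over a few decades, since the ocean delays equilibration):

```python
import math

def equilibrium_warming(sensitivity_per_doubling, co2_ratio):
    """Equilibrium warming for a given CO2 concentration ratio,
    assuming the logarithmic forcing relation: dT = S * log2(ratio)."""
    return sensitivity_per_doubling * math.log(co2_ratio, 2)

# 13.6% CO2 rise since 1980, Hansen's assumed sensitivity of 4.2 degC:
dT_eq = equilibrium_warming(4.2, 1.136)
print(round(dT_eq, 2))  # ~0.77 degC at equilibrium; the realized
# (transient) warming is a fraction of this
```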

Hansen should have assigned an uncertainty band to his CO2 sensitivity, or anticipated solar irradiance falling as much as it has. On the other hand, perhaps he seriously underestimated climate sensitivity to CH4. Go figure.

Let's compare the four predominant data sets about the global temperature updated to include October 2008:

...

See how NASA creeps up, whereas Jones et al. of the UK Met Office holds the middle ground between Hansen and the two satellite temperature sets.

Umm... NASA (GISTEMP) does NOT "creep up", nor does the UK Met Office (HadCRUT3) hold a middle ground between GISTEMP and the LT datasets.

In fact, if you take the littlest trouble of adjusting for the baselines, you will find that 3 of the 4 datasets match fairly closely. The outlier is the UAH set (not GISTEMP, as anyone reading any number of the threads in this forum, including this one, might have come to believe).

Here's what you'd get for the means and trends of the 4 datasets (from a linear least-squares fit to 12-month running averages over the last 30 years of data) after correcting for the baselines by using the 1979–1998 mean values (which RSS and UAH use) for all four sets:
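The procedure just described can be sketched end to end: re-baseline a monthly series to its 1979–1998 mean, take a 12-month running average, and fit a least-squares trend. The series below is synthetic, purely to show the mechanics:

```python
import numpy as np

def trend_after_rebaseline(monthly, dates_years, base=(1979.0, 1999.0)):
    """Re-baseline a monthly anomaly series to its 1979-1998 mean,
    take a 12-month running average, and fit a linear trend (degC/yr)
    by least squares."""
    in_base = (dates_years >= base[0]) & (dates_years < base[1])
    anom = monthly - monthly[in_base].mean()
    running = np.convolve(anom, np.ones(12) / 12, mode="valid")
    t = dates_years[11:]  # time axis for each trailing 12-month mean
    slope, intercept = np.polyfit(t, running, 1)
    return slope

# Synthetic 30-year series: 0.017 degC/yr trend plus an annual cycle.
months = np.arange(360)
years = 1979 + months / 12.0
series = 0.017 * (years - 1979) + 0.1 * np.sin(2 * np.pi * months / 12)
print(round(trend_after_rebaseline(series, years), 3))  # ~0.017
```

The re-baselining shifts every series onto a common zero, so the fitted trends and residual offsets become directly comparable across datasets.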

Although the latter show differences in monthly values, both have a robust fit of the 12-month running mean (bold black).

That's just flat out ridiculous!

UAH and RSS do not share the same indistinguishable 12-month running average. In 1980, for instance, the difference in the 12-month averages is nearly 0.1 °C (after matching almost exactly in 1979). Whoever made the plot in the OP will need to make that "bold black" line about 20 times thicker to pull off the story that UAH and RSS share the same running mean for every month of the last 360 months!

Indeed I made a mistake, using the same UAH data twice for the 12-month running average in the OP, which gave a running average slightly above UAH. I should have been more cautious, but the error is meaningless and has nothing to do with misinformation. UAH is indeed the outlier with the lowest trend; however, NASA is the only one of the four not having 1998 as the warmest year, which makes it look like an outlier visually.

The second graph in my last post does not suffer from that error, because it was generated differently, calculating the running average manually from the correct data.

Well Evo, I guess this answers your question: "If it's based on the data from the official sources posted, what is wrong with that?" The plot in the OP was NOT a true representation of the data from the official sources! I guess that leaves Ivan's question unanswered.

Staff: Mentor


But the links to the official sources are valid, so what I said stands.

Hansen's paper assumes in scenarios A and B that CO2 emissions will grow 1.5% annually. Actual atmospheric CO2 concentrations have grown only about 0.4% annually since 1980, for a total of 13.6%.

Note that while that may be correct for concentration, CO2 emissions did increase as Hansen assumed, and more: 28% from '90 to '04, peaking at 5% growth in 2004. Thus Hansen's '88 accumulation model was flawed:
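A quick sanity check on the implied growth rate, using only the 28% cumulative figure quoted above:

```python
# 28% cumulative emissions growth over the 14 years 1990-2004 implies
# an average compound annual rate -- compare with Hansen's assumed 1.5%/yr.
annual_rate = 1.28 ** (1 / 14) - 1
print(round(100 * annual_rate, 2))  # ~1.78 % per year, above 1.5%
```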

Hansen said:

...Apparently the rate of uptake by CO2 sinks, either the ocean, or, more likely, forests and soils, has increased.

There is no such thing. Temperature is defined for a system in (quasi-)equilibrium. That's it. The "global temperature" the statisticians talk about is something other than a "temperature" (that is, a system parameter, a physical property, and so on). It's not something that has a physical meaning. It should be named bull...rature, to avoid equivocation.