The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming since the IPCC first started issuing Assessment Reports in 1990, and continuing through the fourth report issued in 2007.

When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.

The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all mispredict warming from 1990 to 2012 by a factor of two to three. The dashed line and second arrow head on each arrow represent the central IPCC prediction for 2015.

Actual Global Warming from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, which is roughly two to four times greater than actual measured net warming.
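As a quick sanity check on the arithmetic above, the range of over-prediction ratios can be computed directly. This is a minimal sketch using only the figures quoted in this post (0.12 to 0.16˚C observed, 0.3 to 0.5˚C predicted):

```python
# Figures quoted in the text above.
observed_low, observed_high = 0.12, 0.16    # deg C, net observed warming 1990-2012
predicted_low, predicted_high = 0.3, 0.5    # deg C, IPCC central predictions

ratio_min = predicted_low / observed_high   # most charitable comparison
ratio_max = predicted_high / observed_low   # least charitable comparison
# ratio_min is about 1.9, ratio_max about 4.2
```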

The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.

The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
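The one-in-10,000 figure follows from treating the four reports' 90% bands as independent (the independence assumption is, of course, the part a statistician might dispute). A one-line Python check:

```python
# If each report's 90% band independently has a 1-in-10 chance of missing,
# the probability that all four bands miss is 0.1 to the fourth power.
p_miss_one = 1 - 0.90          # chance one report's band misses
p_miss_all = p_miss_one ** 4   # chance all four miss, assuming independence
# p_miss_all works out to 0.0001, i.e. one chance in 10,000
```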

Thus, the IPCC predictions for 2012 are high by multiples of what they thought they were predicting! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.

IPCC PREDICTIONS FOR 2015 – AND IRA’S

The colored bands extend to 2015 as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly” also extends out to 2015, and let that be my prediction for 2015:

IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)

IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)

IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)

IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)

Ira Glickstein’s Central Prediction for 2015: 0.46˚C

Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.

IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG

As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!

Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, it appears the IPCC has not learned much from their misguided theories and flawed models, or improved them, over the past two decades, so I cannot hold out much hope for the final version of their Assessment Report #5 (AR5).

Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain if Figure 1-4, the most honest IPCC effort of which I am aware, will survive through the final cut. We shall see.

It’s very important in this debate to not accept IPCC outputs at face value. Doing so yields far too much ground.
None of the IPCC predictions include physically valid error bars. Therefore: none of the IPCC predictions are predictions. Those T vs time projections are physically meaningless.
We’ve all known for years that models are unreliable. Demetris Koutsoyiannis’ papers showed that unambiguously.
For example: Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis (2010), “A comparison of local and aggregated climate model outputs with observed data”, Hydrological Sciences Journal, 55 (7), 1094–1110.
Abstract: We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections do not correspond to reality any better.
I’ve not checked yet, but would be unsurprised if that paper does not appear in the AR5 SOD reference list.

I see a need for the IPCC to adjust itself to some recent helpings of reality, and for their favored scientists to adjust themselves to reality, as opposed to totally discarding their previous findings.

Let’s see what the next decade or two brings. We are going into a combined minimum of ~60-year and ~210-year solar cycles, likely to bottom out close to the minimum of the ~11-year cycle and the ~22-year “Hale cycle”, which will probably be in the early (possibly mid) 2030s. It’s looking to me like this will be a short, steep-and-deep solar minimum as far as ~210-year-class ones go.

As for the effect on global temperature: I expect global temperature sensitivity to solar activity to be just high enough, and global temperature sensitivity to CO2 to be just low enough (after applicable feedbacks), that global temperature will roughly hold steady over the next 20 years. There is a fair chance it will decrease by 1/10 of a degree C.

I feel sorry for England and nearby parts of “continental Europe”, and for the northeastern USA and some nearby parts of Canada. It appears to me that dips in solar activity, including the otherwise-probably-insignificant ~22-year Hale cycle, hit these regions hard.

Dr. Ira Glickstein
This is great! If I could suggest a possible improvement to the visualization: a separate “actual” line starting at each IPCC release point, or perhaps at the submission cut-off dates. The observed lines would get progressively flatter from FAR to AR4, illustrating the IPCC reports getting farther and farther from reality, even to those less scientifically inclined.

I do wish people would stop drawing straight lines through this stuff as if it proved anything. What is the likelihood that a system as complex as the Earth’s climate system responds in a linear fashion?

“As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!”
A rather radical idea. I can’t see that catching on at the IPCC.

The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck, I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which, if we don’t do something now, will go up 4-10 degrees by 2100. The news reader agreed. Hard to imagine a speaker for CO2 reduction representing a leisure industry with a higher carbon footprint.
Logic has lost. End of the world, doomsday, repent-the end is nigh has won.

Yeah, been reading some alarmist excuses, which essentially state that the predictions of the IPCC in 1990 were right, even though they are now wrong, because once you have made ‘adjustments’ to the temperature trend since 1990 due to the lack of volcanic activity and ENSO, the IPCC predictions of 1990 are spot on.
In other words, what the alarmists are saying is this: I predict the New York Giants will beat the San Francisco 49ers. But when the 49ers win, I can say my prediction was correct, because the New York Giants would have won if the 49ers hadn’t scored so many touchdowns, kicked so many field goals, and intercepted so many passes.
This is where science has passed into fantasyland.

I think things are only going to get worse for the IPCC, as I am seeing increasing signs of climate cooling within the global weather system.
I think the best they can hope for is that the temps will remain flat.

I stand in awe of the IPCC. An organization that, over a period of nearly 25 years, has produced more meaningless fluff than can be imagined. I’d like to say “you just can’t make this stuff up”, but it really looks as if they have. Remember that this so-called ‘global’ warming is 0.16 of a degree. You cannot actually measure this change with instruments; you have to coax it out of data purporting to represent an ‘average’ temperature relative to an arbitrarily-determined baseline (oh, sure, you could argue that the baseline is somehow meaningful, but c’mon! In relation to what?). We are talking billions of dollars and millions of air miles to determine something so tiny? And just how, in the minds of the warm-mongers, can such a small amount of heat translate into such a dramatic scenario of destruction like Hurricane Sandy? Or all the other grand leaps in intensity caused by a basically immeasurable change? It boggles the imagination.
Listening to the meme-spouters shriek and wring their hands, while “Prominent Professors” at Berkeley and elsewhere translate this into the stuff of moral decay, makes one wonder: Has academia gone insane? Better yet, haven’t they something better to do than to force-feed us all of this snake oil?
Scepticism about this dog-and-pony show is almost silly if you look at it this way, but sceptics must keep revealing the truth as much as they can…even if it means an apparent waste of time. The alternative is the insidious creeping cancer of control by organizations like the UNFCCC. This cannot be permitted, ultimately. How can it continue….? I am glad that AR5 leaked. It shows, once again, the inner workings of a juggernaut swollen with special interests and agenda scientists, continuing gleefully–despite exposés like Donna’s book–to produce reams of meaningless drivel aimed at the ignorant and fearful.

Common sense and ice-core data are sufficient to demonstrate that CO2 sensitivity MUST be low.
First, in the core data, T always changes direction before CO2 changes. So CO2 cannot be the leading factor.
Second, T always starts to rise when CO2 is at its lowest concentration. Similarly, T always begins to fall when CO2 is at its highest concentration. QED, CO2 cannot be the driving factor.
Tmax in interglacials and Tmin in full glacial periods are always about the same values. So the factors that affect T ranges must operate independent of humans, who have only [potentially] had any influence in the last 70 years.
Can we now dispense with this dross and actually focus on real problems ???

Ha ha – now that graph is a very “Inconvenient Truth”.
Of course, CO2 lagging temperature increases in the ice-core graphs was the real inconvenient truth that the movie of the same name ignored, since it falsified their basic premise.

Bob says:
December 19, 2012 at 6:43 pm
The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck, I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which, if we don’t do something now, will go up 4-10 degrees by 2100.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I wouldn’t worry too much about the ski industry, at least in western Canada and Utah, we are having record snowfalls for this time of year. Skiing is as good as mid season already in lots of areas. http://www.revelstokemountainresort.com/conditions/historical-snowfall

I’ve noticed some people saying the models are getting better. (Didn’t they just use 2 super computers on the latest and greatest?) That implies the past ones needed improvement. Have any warmists ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.
It seems this whole mess started with Hansen’s predictions. Yet people still cling to them and his solutions to what hasn’t happened as he said it would.
I think I’ll buy a snowblower after all.

Dr Glickstein said “The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.”
Steady on: isn’t that an example of “prosecutor’s fallacy”, in treating the small probabilities multiplicatively? Surely it’s more likely that there was just systematic bias in the separate reports (which were of course ultimately under political control).

[Tim, thanks for your comment. I looked up “prosecutor’s fallacy” and it did not seem to me to apply in this case. Consider throwing a single fair die four times. The probability of getting a “1” on any throw is one in six, so the probability of getting four “1” results in a row is 1/(6 x 6 x 6 x 6) = 1/1296. If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten, and so on for the four IPCC Assessment Reports. Please be more specific on where you think I went wrong with this simple mathematical reasoning. advTHANKSance.
Of course I know that the IPCC changed their computer models to some extent each time, and the data they used included some new observations, but the fact they missed the mark four times in a row indicates that they have not changed their underlying climate model, based on an over-estimate of climate sensitivity to CO2 levels and an under-estimate of natural cycles of the Earth and Sun. They are wedded to the same -now discredited- climate theory because they are politically motivated (IMHO) to want to believe that human activities, such as our unprecedented burning of fossil fuels and land use that changes the albedo of the Earth, are the main cause of the Global Warming we have experienced over the past century or so. If they change their theory, and accept the Svensmark explanation that solar cycles, not under human control or influence, affect cosmic rays and that cosmic rays affect cloud formation that, in turn, affects net solar radiation absorbed by the Earth/Atmosphere system, they will lose their government funding and their political goals will be frustrated. Ira]
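The die analogy in the reply above is easy to verify numerically. A minimal Monte Carlo sketch (the trial count and seed are arbitrary choices, not anything from the post):

```python
import random

# Estimate the probability of rolling "1" four times in a row with a fair
# die, and compare against the exact value of (1/6)**4 = 1/1296 cited above.
random.seed(42)
trials = 200_000
hits = sum(
    all(random.randint(1, 6) == 1 for _ in range(4))
    for _ in range(trials)
)
estimate = hits / trials
exact = (1 / 6) ** 4
# estimate and exact agree to within sampling noise
```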

Ira Glickstein:
Thanks for this. The models are even worse than I imagined. I note the AR4 projection has the steepest slope of all, as if they hope to make up for lost time. The modelers’ great strength is that they don’t care how ridiculous they appear.

The IPCC and its contributing “scientists” have only been following this corollary of Murphy’s Law – First draw your graph, then plot the data that agrees with the graph. Until recently, I thought only high school students and undergraduates did this.

Are you sure you are going to 2012 and not 2011? 2010 was a very warm year and the next one would be 2011. However in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel. You do not say which data set is being used, but the latest 2012 anomaly and the 2011 anomalies for 6 sets are shown below.

2012 in Perspective so far on Six Data Sets
Note the bolded numbers for each data set where the lower bolded number is the highest anomaly recorded so far in 2012 and the higher one is the all time record so far. There is no comparison.
With the UAH anomaly for November at 0.281, the average for the first eleven months of the year is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.281)/11 = 0.156. This would rank 9th if it stayed this way. 1998 was the warmest at 0.42. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.132.
With the GISS anomaly for November at 0.68, the average for the first eleven months of the year is (0.32 + 0.37 + 0.45 + 0.54 + 0.67 + 0.56 + 0.46 + 0.58 + 0.62 + 0.68 + 0.68)/11 = 0.54. This would rank 9th if it stayed this way. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007 when it reached 0.89. The anomaly in 2011 was 0.514.
With the Hadcrut3 anomaly for October at 0.486, the average for the first ten months of the year is (0.217 + 0.193 + 0.305 + 0.481 + 0.475 + 0.477 + 0.448 + 0.512 + 0.515 + 0.486)/10 = 0.411. This would rank 9th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340.
With the sea surface anomaly for October at 0.428, the average for the first ten months of the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.351 + 0.385 + 0.440 + 0.449 + 0.428)/10 = 0.336. This would rank 9th if it stayed this way. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273.
With the RSS anomaly for November at 0.195, the average for the first eleven months of the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195)/11 = 0.200. This would rank 11th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147.
With the Hadcrut4 anomaly for October at 0.518, the average for the first ten months of the year is (0.288 + 0.209 + 0.339 + 0.526 + 0.531 + 0.501 + 0.469 + 0.529 + 0.516 + 0.518)/10 = 0.443. This would rank 9th if it stayed this way. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399. On all six of the above data sets, a record is out of reach.

[Werner Brozek: Thanks, you are correct that the base chart shows observed temperature “anomaly” only up to 2011, not 2012. I used 2012 in my annotations with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011. As you point out, “… in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel.” – Ira]
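Werner's year-to-date averages are straightforward to reproduce. As one example, a sketch of the UAH figure, using only the eleven monthly anomalies quoted in the comment above:

```python
# The eleven UAH monthly anomalies for Jan-Nov 2012 quoted in the comment.
uah_2012 = [-0.134, -0.135, 0.051, 0.232, 0.179, 0.235,
            0.130, 0.208, 0.339, 0.333, 0.281]

ytd_mean = sum(uah_2012) / len(uah_2012)
# ytd_mean rounds to 0.156, matching the figure in the comment
```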

I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode, and actually in a faster-than-BAU mode, due to rapid industrialization of China, India etc.
But then, it is unmistakably clear that the two trends are far, far, far apart from each other.
IIRC, Lance Wallace said similarly on another thread today or yesterday.

Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.
Across the country, 45 people have died due to the cold, and 266 have been taken to hospitals. In total, 542 people were injured due to the freezing temperatures, RIA Novosti reported.
The Moscow region saw temperatures of -17 to -18 degrees Celsius on Wednesday, and the record cold temperatures are expected to linger for at least three more days. Thermometers in Siberia touched -50 degrees Celsius, which is also abnormal for December.

h/t to BobN who pointed me at it…
So about those land temperatures… which way they gonna go?…

By the time Hansen and friends massage the Russian and Arctic winter temperatures, 2012 will be a new record high; just ignore the minus sign again or invert the data, no problem at all.
Are politicians and bureaucrats capable of remorse?
So much ado over so little, an almost unmeasurable imagined change.

tokyoboy says:
December 19, 2012 at 9:00 pm
I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode,

E.M.Smith says “So about those land temperatures… which way they gonna go?…”
Now that depends on who does the calculations!!
In Hansenworld, for example, freezing causes global temperatures to go upwards!!!

E.M. Smith
I don’t think it will be just Russia who will be suffering.
The jet stream looks to be setting up eastern Canada for some of the same treatment around the 25th-27th Dec. I think it’s going to be a long hard winter for many in the NH this season.
I hope climate science will be sitting up and paying attention to this winter, because it’s looking like it could be the shape of things to come.

Wars prevented: 0
Genocides prevented: 0
Climate catastrophes prevented: 0
The United Nations. Where never before have so many been paid so much to do so little. But they are determined to set a new record next year.

There’s an error in the chart. The oval labeled “2012” should read “2011,” and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

It is important to understand that even if temperatures should suddenly rise and start resembling the predicted values, the theory is still wrong. The models have failed. There is no allowance for going back and adjusting values after the fact. My guess is that with a dozen years of new data it is possible to hindcast a close fit, but that in doing so future values are in no way worth worrying about.

I suspect the IPCC will repaint the side of the barn to add a bullseye where the arrows hit.

I sneezed a sneeze into the air.
It fell to earth, I know not where.
But cold and hard were the looks of those
In whose vicinity I snoze.
–S. Lee Crump, Boys Life, Aug. 1957

Somehow I thought that most of those predictions were actually a range of predictions, each one based on different levels of projected CO2? Am I confusing this with other projections? If not, can we remove the predictions that were based on reduced CO2 levels and only show the ones that were based on ‘business as usual’ (the closest to the actual record) emissions?

Sorry for re-posting this again, but their time for continued failed predictions/projections has to run out sooner or later. They can’t keep missing the mark and fail to re-visit the ‘theory’. Remember that we have had 16 years of statistically insignificant warming – unless it begins to rewarm to a significant degree, then what next?

“A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml

Thanks, Ira. Or do it as follows. Determine the slope of linear regression at which we would have concluded from the data that there was warming, using significance level alpha. Plot the regression line on the figure with colored bands. The colored area below that line, relative to the total colored area and divided by 0.9, estimates beta, the probability of a type II error. Both the IPCC and skeptics have a right to equal error rates. If beta <= alpha, the model is falsified.

Ira–
As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. The reason is that their projections are based on scenarios (estimates of what will happen, such as “business as usual” or CO2 regulation of some sort). So the single estimate you should pick in each case is the one corresponding most closely to the associated scenario. In the case of the first Assessment Report (FAR) that estimate is the uppermost line associated with their “Business as Usual” assumption, since hardly any regulation is evident when one looks at the exponential rise in CO2. In general, probably an estimate close to the highest one in the next three reports is the one that most closely approximates what actually happened.
Picking the middle estimate as though it was the IPCC “best” estimate is actually picking an estimate based on a failed scenario. The entire graph (particularly the addition of the even larger “error bounds” in gray) was prepared by the IPCC to allow them to say their estimates were within the uncertainty bounds. But it is simply another case of hiding the decline (the decline in this case being the refusal of the observed temperature to match the projections.)
Ira has fallen into the trap set by the IPCC. Ira or someone should carry out the program outlined above, which is not quite as easy for the later reports as for FAR.

[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

Ira quotes
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
———
Hmmm. Yes if your observations are in fact correct.
The trouble with Ira’s observation is that he has done a straight-line fit with the starting point constrained to be the starting point of the aligned series. If he did a straight-line fit without that constraint he would get a very different answer.
Aligning all of the series at some arbitrary time is not a sensible way of comparing the various trends.
Maybe Ira needs some statistical expertise. Go and talk to McIntyre. He’ll sort you out.

[LazyTeenager: I do not claim to be any kind of statistical expert, though I do have a working knowledge of statistics from my long career as a system engineer and from my PhD dissertation. However, all the temperature “observations” are on the IPCC base chart and were done by the IPCC researchers and authors. All I did was draw some animated arrows atop the IPCC data. I started my arrows at the center of the Global temperature “anomaly” value as graphed by the IPCC. You say “If [Ira] did a straight line fit without that constraint he would get a very different answer.” I have no idea where one would start a “straight line fit” other than at the starting point of each analysis. Please be more specific about the “very different answer” you expect from a different “straight line fit”. To me, “very different answer” implies that it would show that the IPCC actually hit the mark four times (or even once :^). advTHANKSance. Ira]
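For what it is worth, the anchored-versus-unconstrained distinction in this exchange can be illustrated with entirely made-up numbers (this is a toy sketch, not the IPCC data): when a flat-ish series happens to start on a low point, a line pinned to that first observation is noticeably steeper than an ordinary least-squares fit with a free intercept.

```python
# Hypothetical anomaly series: a low first year, then a nearly flat run.
years = list(range(1990, 2012))
anoms = [0.18] + [0.30 + 0.002 * i for i in range(21)]

n = len(years)
mx = sum(years) / n
my = sum(anoms) / n

# Unconstrained ordinary least-squares slope (free intercept).
slope_free = sum((x - mx) * (y - my) for x, y in zip(years, anoms)) / \
             sum((x - mx) ** 2 for x in years)

# Slope of a line forced through the first observation.
slope_anchored = (anoms[-1] - anoms[0]) / (years[-1] - years[0])
# slope_anchored comes out more than double slope_free for this toy series
```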

LazyTeenager is, in this instance, dead right. Given the data in this figure and its error bars, an unconstrained linear fit would not falsify the predictions. One has to hindcast the models to 1980 (when it was almost exactly the same temperature as it was in 1990) to do that.
However, LT (presumably skilled in statistical analysis himself, teenager and all) also knows at a glance that even an unconstrained linear fit is bogus. The “error bars” on the data points are clearly meaningless. The data points themselves are not iid samples drawn from the same process. The shaded regions are bogus — they are nothing like a statistically meaningful confidence interval. The centroids of the shaded regions are not even plotted so that one cannot even determine and compare the linear trend to the presumably nonlinear trends plotted. And if one attempted to fit a nonlinear function to the data using the bogus error bars, one might not get one that has positive curvature at the present time, presenting a real problem for the models!
What, exactly, are these models? They aren’t. They are composite predictions of many models. In fact, they are composite predictions of many runs each of many distinct models. Some of the runs of some of the contributing models no doubt came close to the data (enough to produce their lower-shaded boundaries, presuming that those boundaries aren’t freehand art drawn by someone seeking to create a pretty graphic and were actually produced by some sort of computational process — I leave it to LT to tell me if he thinks that there is the slightest chance that this figure was produced by means of performing an actual objective statistical process of any sort, as it makes precisely the error it accuses Mr. Glickstein of making by starting at the year 1990 with a constrained point). Which models were, to some extent, verified by the data? Why are they not given increased weight in the report? Which models were completely and utterly falsified by the data? Why are they not aggressively omitted and the model predictions retroactively repaired?
LT, Mr. Glickstein is, as you have observed, not a statistics god. However, a large part of statistics isn’t math, it is common sense. It is having the common sense to look at (and, if one is honest, present) the data robustly, not a cherrypicked 12 year segment on a fifteen year graph. I don’t have the energy to grab the graph, overlay it with all 33 years of UAH LTT and/or RSS, and invert the model wedgies into the past, still pegged at 1990, but then, I don’t need to. You know exactly what it would look like. It would be a complete and utter disaster — for the models. Mr. Glickstein has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted.
Do you?
rgb

[rgbatduke: THANKS for your conclusion that “…Glickstein is … not a statistics god. However … [he] has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted. Do you?” – Ira]

LazyTeenager;
The trouble with Ira’s observation is that he has done a straight line fit with the starting point constrained to be the starting point of the aligned series.
>>>>>>>>>>>>>>>>>>>>>>>>>
Starting it in the year it started at the temperature it started at is arbitrary? I tried reading what you wrote by examining random words in your comment and it turns out it makes more sense that way than just using arbitrary starting points like the beginning of sentences and following the words in sequential order. Very clever.

Before too long Ira’s prediction, as well as the IPCC projections, will turn out to be far too optimistic (where optimism correlates with rising temperatures). The ocean buffer has had a slight thermal top-up after the solar cycle 23 minimum, but with the peak of cycle 24 currently upon us this top-up will be rapidly exhausted as the solar magnetic fields and solar activity collapse on the downside of 24 and into the all but absent solar cycle 25. This winter will not be so bad, and maybe next winter (relatively speaking) in the northern hemisphere, but thereafter there will be a major collapse in global temperatures for several decades, with harsh winters and collapsing grain harvests. Mean temperatures will fall by 2.5 degrees Celsius in the temperate latitudes, and by more at higher latitudes, by 2021.
It is all going to be very unpleasant as we will be thoroughly unprepared because of Piltdown Mann and the Team.
That is my prediction.
Stay Cool!

Camburn,
Maybe you are missing something about LazyTeenager.
The arrows do fly straight and true, as published and predicted. Between the date a prediction is released, based on theory, and the proof of observation, only one straight line matters: the temperature line. The most recent release is the most ridiculous.

One can also add the IPCC AR5 multi-model means to the projections as well. They would have had access to temperatures up until 2010, so that is when the projections start. AR5 is almost the same as AR4; there is very little difference.
The Climate Explorer has recently added a nice summary download page for AR5 multi-model means. I use the RCP 6.0 scenario, which is the most realistic in terms of where we are going with GHGs. Be sure to set the base period to 1961 to 1990 in order to be able to compare to Hadcrut temperatures, for example (everyone is using different base periods now, so one has to be careful that they are all comparable – someone post this comment over at Skeptical Science since they do not seem to get this idea). http://climexp.knmi.nl/cmip5_indices.cgi?id=someone@somewhere
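The base-period caveat above matters because anomalies from different providers are offsets from different reference means. A minimal sketch of rebaselining, using invented numbers (the function name `rebaseline` and the series below are my illustration, not anything from the Climate Explorer):

```python
# Sketch: re-expressing temperature anomalies against a different base period.
# All data here are made up for illustration; real series would come from a
# download such as the one linked above.

def rebaseline(anomalies, years, base_start, base_end):
    """Shift a dict of year -> anomaly so its mean over [base_start, base_end] is zero."""
    base = [anomalies[y] for y in years if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return {y: anomalies[y] - offset for y in years}

years = list(range(1961, 1991))
# hypothetical anomalies expressed relative to some other base period
anoms = {y: 0.01 * (y - 1995) for y in years}

rebased = rebaseline(anoms, years, 1961, 1990)
mean_6190 = sum(rebased[y] for y in years) / len(years)
print(abs(mean_6190) < 1e-9)  # True: the 1961-1990 mean is now (near) zero
```

Only after both series are shifted onto the same base period do the vertical offsets between model means and observations become comparable.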

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?): http://rt.com/news/russia-freeze-cold-temperature-379/
Hi E.M. Smith – I also pointed to this story yesterday in another thread (I saw the story first at Instapundit).
What I find interesting is that CAGW devotees appear to believe that the mean temperature of the Earth is slowly increasing over time, which can be expressed simply as:
T_earth(t) = T_cagw(t) + T_stf(t)
where t is time, T_cagw(t) is the slow increase in mean temperature due to “global warming”, with a time scale on the order of multiple decades, and T_stf(t) are “short term fluctuations” due to ENSO, volcanoes, weather “noise”, and other natural variations. What I don’t understand is that if multidecade-scale “global warming,” as expressed above, does exist, we should NOT be breaking low-temperature records established many decades ago across large, broad regions like Russia. It will be interesting to see if more low-temperature records are broken as we move into winter 2013…
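The decomposition above can be poked at with a toy Monte Carlo. Every number below (trend, noise level, record window) is invented for illustration; the only point is that a steady T_cagw(t) riding on short-term fluctuations should make decades-old cold records increasingly hard to break:

```python
import random

# Toy version of T_earth(t) = T_cagw(t) + T_stf(t): a linear trend plus
# Gaussian short-term fluctuations. All parameters are invented.
random.seed(1)

TREND = 0.02   # assumed warming in degrees C per year
NOISE = 0.25   # std dev of short-term fluctuations in degrees C
YEARS = 120

def simulate():
    temps = [TREND * t + random.gauss(0.0, NOISE) for t in range(YEARS)]
    # count years in the final three decades that set a new all-time cold record
    records = 0
    for t in range(YEARS - 30, YEARS):
        if temps[t] < min(temps[:t]):
            records += 1
    return records

runs = 2000
avg_records = sum(simulate() for _ in range(runs)) / runs
print(avg_records)  # near zero: under a steady trend, late cold records become vanishingly rare
```

If real regions keep setting deep cold records, either the trend is smaller than assumed or the fluctuation term is much larger — which is essentially the commenter's question.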

Well, I am not sure the debate is framed correctly. I am more interested in comparing the shape of the curves.
Clearly no single model is able to fit the data.
Who can explain why they use so many models? What is the meaning of that? And why is the spread called “uncertainty”?

The last entry for the “Observed” data set is 2011, not 2012. Also, the graph does not say which data set “Observed” is. I suspect HadCrut3 or 4, as the HadCrut set has been their preferred one for all previous reports.
Data for the year so far suggests that 2012 will be warmer than 2011 but actually only about the same as 2009. That means the two dots will be at the bottom end of the green shaded area (TAR) and the upper end of their error bars is likely to sneak into the orange AR4 range. Of course the IPCC will say that because the single data point for the Observed 2012 data could have fallen within the bottom of the AR4 predicted range that it is “consistent” with their forecast. Of course they will ignore the fact that the trend in the data is clearly flat compared with the predicted upward trend.
That will, of course, not stop Tamino claiming that he has “pre-bunked” this argument by removing the effect of the dominant La Nina during the period, and then stating that the climate would have warmed. That translates to me as “if the climate had not cooled then it would be warmer than it is now”. The problem for Tamino is that ENSO is not a “cycle” where the warm and cool spells cancel out, it is a random fluctuation and can have a negative or positive trend of its own. Just because ENSO has biased cool in the last 10 years does not mean that it will bias warm to an equal extent in the future and that temperatures will somehow “catch up” through the effect of a series of El Ninos. They might, or they might not, it is a random fluctuation and it will now take a series of quite monster El Ninos to cancel out the last few La Ninas.

So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us. Perhaps you could use Hansen’s A, B, and C scenarios he once touted 😉
As others have commented here, we should be looking at the BAU predictions the models have made as that is the scenario we are currently living in (in fact, I believe our evil ‘SeeOhToo’ emissions are higher than the BAU scenarios). I’d REALLY love to see you try and reconcile those predictions with the real-world temps!
Over to you Lazy…

@ fhhaynie
You said: “That still would not explain a probable future downward trend in global temperature.”
As you know, there is no forecast of a near-term downturn in temperature in the purview of mainstream science. Certainly I don’t know of such a forecast, and I am therefore confident that many others who read your contribution will likewise be unaware.
I went to your website and found nothing that led me to judge that such a decline is likely.
The great thing about WUWT’s (specifically Anthony Watts’s) determined light moderation stance is that, within reason, everybody has a chance to have their say. The heretic, the dissenter, the lone true voice in the crowd, the voice of orthodoxy, the honestly mistaken and the outright crackpot all get heard.
It’s embarrassing to have crackpots interjecting in a discussion. It would be even more embarrassing to exclude honest, possibly even correct viewpoints by wrongly judging them to be crackpot.
With respect, no matter how correct you might actually be, when you allude to a forecast not supported by conventional science, if you don’t give a citation then the reader has little choice but to include you among the crackpots. From visiting your blog, this would be an unfair characterisation of you.
I therefore ask you to always include a citation to your calculations about your expected temperature decline with every post you make that alludes to it, no matter how much you feel we ‘ought’ to know it.
Sincerely,
Leo Morgan

Leo,
That probable downturn may not occur in my life time, but it will happen. We will have another ice age. Also, consider the probability on a short term basis that the last sixteen years of no temperature rise is the top of a temperature cycle that is following a 200 year cycle of solar activity. Time will tell and reveal the true crackpots.

Andy W says:
December 20, 2012 at 6:49 am
(replying to LazyTeenager)
So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us.
I think you have that wrong. I really don’t even care anymore “how” his precious models may have accidentally got it right.
Your question actually needs to be: “So Lazy, are you going to try and show us which of the models actually got it right?”
See, we still have not seen ANY of the 23 some odd “officially acceptable models” actually produce even ONE single model run (of the many thousands they supposedly average to get their results) that has “reproduced reality” and predicts/projects/outputs/calculates ANY single 16 year steady temperature period during ANY part of the 225 years between 1975 and 2200.
It’s not that the “CAGW modelers” need to produce hundreds (or thousands) of model runs that lie right down the middle of the real-world temperatures: clearly there are error bands and the global circulation models will be slightly different each run. Nobody anywhere questions that.
They cannot even produce ONE run of ONE model that fits inside the error band of ONE standard deviation.
But for the IPCC to claim “certainty” of more than 3 standard deviations (of what outputs? from what sample set? using what “data”?) that their GCM models are correct 100 years in the future – when not even ONE result of 23 models x 1000 runs/model is inside the 16 years of real-world measurements between 1996 and 2012 – is ludicrous!

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?): http://rt.com/news/russia-freeze-cold-temperature-379/
Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.

It only makes logical sense: most of the world’s warming happened in the northern latitudes, so it shouldn’t be a surprise when cooling is realized in this same locale. Unfortunately, these same areas are the global breadbaskets. GK

Lance Wallace says:
December 20, 2012 at 1:59 am
Ira– As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. . . .

To which Ira responded:

[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

However, Lance Wallace mis-reported what my criticism was, which was quite different and which must be addressed:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012” should read “2011”, and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

[Roger Knights: Thanks, you are correct about the oval. I should have moved it and the arrow heads to the right by one year. Please see my embedded reply to Werner Brozek (December 19, 2012 at 8:43 pm) that I used 2012 instead of 2011 “… with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011.“ – Ira]

Dr Glickstein – many thanks for your comment immediately following mine above at 7:35 pm.
I quite agree that if you take truly random events such as throwing dice, the probability of throwing the same number N times will be 1/(6^N).
However, what I have problems with is where you say “If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten …”.
The same theory and model implies the same result, if you use the same starting and boundary conditions. Even with different starting conditions I don’t think you can regard any two runs as truly random – so I personally have doubts that the probabilities can simply be multiplied in the way you suggest (one in ten times ten, etc).
But I will be happy to be corrected, if my grasp of probability theory here is wrong …!

Don’t anyone hold their breath, waiting for LT to respond. He never does. His strength is that he doesn’t mind being wrong. Nonetheless, he serves a good purpose in parroting the dubious scientists who brought us AGW, and so exposing their dubious science to public inspection.

Tim’s critique about the “prosecutor’s fallacy” (Dec 19 7:35 pm) is correct (and the rebuttal unfortunately is not). Four incorrect predictions, each with a 90% confidence (and therefore, a 10% chance of being wrong), does not lead to a 1 in 10,000 chance of all four being wrong. The fallacy is that the predictions are not independent events – that is, they are not separate throws of the dice.
If, for example, the 10% uncertainty includes some component of systemic error, and that systemic error is propagated through all four trials, the calculated error considering all four trials may still be as high as the original 10%.
To go back to the rebuttal’s dice example, there is a one in six chance of rolling a “1” and a one in 6^4 chance of rolling four “1”s in a row if you have no prior knowledge or reason to suspect that the dice are unevenly balanced. Once you have four “1”s in a row, you have competing hypotheses, however – a) that you’re really unlucky or b) that the dice are skewed. Now you need to assess the probability of systemic error and recalculate. That is, given that you know that trial A was exceeded, what is the probability that trial B will be exceeded.
Unless you pick the extremes of either 0 or 100% component of skewing, the final properly-multiplied error of all four reports considered as a unit will be less than one in ten but substantially greater than one in ten thousand.
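The effect of a shared systemic error on the joint probability can be illustrated with a toy simulation. All parameters are invented; `shared_sigma` stands for the assumed fraction of total error that the four reports have in common:

```python
import random

# Toy illustration: if the four assessment-report predictions share a
# systemic bias, the chance that all four miss on the same side is far
# higher than the naive (1/10)^4. All numbers are invented.
random.seed(42)

def miss_all_four(shared_sigma, independent_sigma, threshold):
    """One trial: one shared error plus four independent errors;
    do all four predictions overshoot the one-sided threshold?"""
    shared = random.gauss(0.0, shared_sigma)
    return all(shared + random.gauss(0.0, independent_sigma) > threshold
               for _ in range(4))

def joint_miss_rate(shared_sigma, trials=200_000):
    # threshold set so a single prediction misses high ~10% of the time
    # when total variance is 1 (the 90th percentile of a standard normal)
    threshold = 1.2816
    independent_sigma = (1.0 - shared_sigma ** 2) ** 0.5
    hits = sum(miss_all_four(shared_sigma, independent_sigma, threshold)
               for _ in range(trials))
    return hits / trials

print(joint_miss_rate(0.0))  # ~1e-4: fully independent, the (1/10)^4 case
print(joint_miss_rate(0.9))  # much closer to 1/10: mostly shared, systemic error
```

The two printed rates bracket exactly the range the comment describes: between one in ten and one in ten thousand, depending on how much of the error is systemic.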

Let’s be generous and say SAR got it right. Doesn’t that still mean the GCMs that produce high forecasts have been proven inappropriate? Doesn’t all this still mean that the “C” part of CAGW has fallen off the table?
Even if the aerosol component in the prior GCMs is considered wrong, to account for the discrepancy, doesn’t this mean that the science is not settled?
Connolly says the SAR, at least, is correct, but doesn’t concede the Catastrophic part has been invalidated by time.

RACookPE1978 almost gets to the issue.
For any of these projections to be valid, they need to not only reproduce the forward temperature but also the components of the projection need to be correct. If they get the temperature correct but CO2, water vapour, ENSO, clouds, aerosol, TSI etc are wrong then the model isn’t correct at all, it’s got the temperature correct by pure chance. You can do this with virtually any ensemble of models you like.
So when the IPCC puts together these ensembles they are trying to hide the fact that their underlying models have zero predictive power from the get go. Not only do they not have a single model that can be run and produce any kind of predictive output, they don’t have a single model that can be run to get even a hindcast of temperature correct with all of the underlying variables also being correct.
The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.

Ira, I think you are still not grappling with the main point here. The point, as rgb and many others have said, is that this graph is NOT showing a range of predictions with a “best” value somewhere in the middle and uncertainties around the best value shown by colored bands. That is what the IPCC wants people to think! When you accept that, as you implicitly do by picking the central estimate, you are now open to the IPCC response (e.g., see Connelly) that at least the actual values are within the uncertainty. But these values are not even close to the uncertainty if you use reasonable uncertainty values enclosing the ACTUAL SCENARIO that ensued following the IPCC projection. That is, one would see four lines (probably lying close to the upper boundaries of each band of colors), with NARROW bands associated with each line, and the measured temperatures would lie far outside those narrow bands. This would give the IPCC no wiggle room.
Roger Knights, I did not “mis-report” what you said. I quoted your response and gave the time of 9:43 PM. That post of yours simply quoted Tokyoboy and said “it would be a nice addition.” You made two posts and it is the second one you are thinking of.

pete says: @ December 20, 2012 at 2:30 pm
…The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.
____________________________________
Yes this chart alone shows the premise upon which the models are built is rubbish. They put in airplane contrails but they ignore clouds and water is bundled into CO2 as a “Feedback”!
And it is not like they do not have any real world data either.

Parameterization of atmospheric long-wave emissivity in a mountainous site for all sky conditions, J. Herrero and M. J. Polo
Received: 14 February 2012 – Accepted: 11 March 2012 – Published: 21 March 2012
ABSTRACT
Long-wave radiation is an important component of the energy balance of the Earth’s surface. The downward component, emitted by the clouds and aerosols in the atmosphere, is rarely measured, and is still not well understood. In mountainous areas, the models existing for its estimation through the emissivity of the atmosphere do not give good results, and worse still in the presence of clouds….. This study analyzes separately three significant atmospheric states related to cloud cover, which were also deduced from the screen-level meteorological data. Clear and totally overcast skies are accurately represented by the new parametric expressions, while the intermediate situations corresponding to partly clouded skies, concentrate most of the dispersion in the measurements and, hence, the error in the simulation. Thus, the modeling of atmospheric emissivity is greatly improved thanks to the use of different equations for each atmospheric state.
——–
Introduction Long-wave radiation has an outstanding role in most of the environmental processes that take place near the Earth’s surface (e.g., Philipona, 2004). Radiation exchanges at wavelengths longer than 4 μm between the Earth and the atmosphere above are due to the thermal emissivity of the surface and atmospheric objects, typically clouds, water vapor and carbon dioxide. This component of the radiation balance is responsible for the cooling of the Earth’s surface, as it closely equals the shortwave radiation absorbed from the sun. The modeling of the energy balance, and, hence, of the long-wave radiation balance at the surface, is necessary for many different meteorological and hydrological problems, e.g., forecast of frost and fog, estimation of heat budget from the sea (Dera, 1992), simulation of evaporation from soil and canopy, or simulation of the ice and snow cover melt (Armstrong and Brun, 2008)….Downward long-wave radiation is difficult to calculate with analytical methods, as they require detailed measurements of the atmospheric profiles of temperature, humidity, pressure, and the radiative properties of atmospheric constituents (Alados et al., 1986; Lhomme et al., 2007). To overcome this problem, atmospheric emissivity and temperature profile are usually parameterized from screen level values of meteorological variables. The use of near surface level data is justified since most incoming long-wave radiation comes from the lowest layers of the atmosphere (Ohmura, 2001).
…. the effect of clouds and stratification on atmospheric emissivity is highly dependent on regional factors which may lead to the need for local expressions (e.g., Alados et al., 1986; Barbaro et al., 2010) … on environmental processes, especially if snow is present. As existing measurements are scarce (e.g., Iziomon et al., 2003; Sicart et al., 2006), a correct parameterization of downward long-wave irradiance under all sky conditions is essential for these areas….
Conclusions
The long-wave measurements recorded in a weather station at an altitude of 2500 m in a Mediterranean climate are not correctly estimated by the existing models and frequently used parameterizations. These measurements show a very low atmospheric emissivity for long-wave radiation values with clear skies (up to 0.5) and a great facility for reaching the theoretical maximum value of 1 with cloudy skies.….

TimC (and mikerossander): You are correct that I was considering the ideal case where each run of the climate model could be analogized to a throw of a die, where each throw is totally independent (if the die is fair). Thus, my claim of ten times ten times ten times ten equals only one chance in 10,000 of four runs being wrong does not apply to IPCC runs of their climate models. (And when I say “run” I understand that the results published in each Assessment Report are the combined results of several models using different assumptions as to the rate of increase in atmospheric CO2, etc.)
Of course, if you run the same set of climate models with the same data, you will always get the same results. If the statistical certainty of one run is 90%, the chance of one run or multiple runs being wrong remains only one in ten.
However, in the case of the four IPCC Assessment Reports, each individual climate model and each set of climate models was somewhat different. Also, each run made use of some previously unknown data because the runs were made five or six years apart. So, while not independent, the subsequent runs were not totally identical either. Therefore, if the supposed statistical certainty of each run was 90%, meaning the probability of a single run being wrong is one in ten, the odds against all four runs being wrong are definitely greater than ten to one, but not as great as 10,000 to one.
Could the four IPCC Assessment Reports be like throwing four different dice and all coming up as “1”? As before, if all four dice are fair, the probability of all coming up as “1” would be one in six times six times six times six, i.e., one in 1,296. However, if I threw four different dice and got that result, I would wonder if the dice were actually fair. Wouldn’t you? I would strongly suspect that all four dice were “loaded” to favor a certain result, namely that they would come up as “1”.
We have four sets of IPCC climate models, over a period of up to 22 years, predicting far greater Global Warming than has actually been observed. I think there is a very high probability that all of those sets of IPCC climate models are “loaded” to favor high Climate Sensitivity to CO2 and other forcing factors that are under some control by humans. Given the fact that Global Warming has slowed down or stalled for a decade or two despite the continued rapid increase in CO2 levels, I think there is also a very high probability that the Climate Sensitivity to CO2 has been way over-estimated and that actual temperatures are mainly driven by natural cycles of the Earth and Sun that are not under human control.
The very first sentence of Chapter 1 of the leaked AR5 says:

Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for HUMAN ACTIVITIES being the PRIMARY driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve. [EMPHASIS mine]

It seems to me that the opposite conclusion has been increased and strengthened, namely that the IPCC-supported Climate Theory and models derived from that theory were wrong to start with (like “loaded” dice) and, after four tries, are still wrong. IPCC researchers were closest to the truth with the SAR, whose prediction is the lowest of the four Assessment Reports, but the subsequent climate models, in the TAR and AR4, have demonstrated that the IPCC researchers have definitely not improved their modelling tools. Thus, the draft AR5 starts with two false sentences in its very first paragraph.
Ira

The problem with any model is as follows: with every iteration, the error tends to grow. When one runs a model through thousands of iterations, errors accumulate.
Simply put: if my model makes a 90% good prediction of the temperature for day one, what will it do for day two, assuming the same skill? 0.9 × 0.9?
And on day three? 0.9 × 0.9 × 0.9?
Anyone tried 0.9^100?
It is about 2.7 × 10^-5, i.e., roughly 27 in a million.
And the models do much more than cycling through 100 steps.
Please, do not consider models as if they were experiments. They are not. Discard models.
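The arithmetic above is easy to confirm in a few lines (the 90% per-step skill is the comment's assumption, not a measured figure):

```python
# Compounding an assumed 90%-skill step over many iterations,
# as the comment describes.
p = 0.9
print(round(p ** 2, 4))   # 0.81  (day two)
print(round(p ** 3, 4))   # 0.729 (day three)
print(f"{p ** 100:.1e}")  # 2.7e-05 (after 100 steps)
```

Whether per-step skill really compounds multiplicatively like this is itself a modeling assumption, but the arithmetic in the comment checks out.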

The very first sentence of Chapter 1 of the leaked AR5 says:
Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for HUMAN ACTIVITIES being the PRIMARY driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve. [EMPHASIS mine]
====================================
It looks as though they intend to brazen it out. Is any more proof needed that the IPCC reports are the vehicle of a particularist agenda?

Ira Glickstein;
It seems to me that the opposite conclusion has been increased and strengthened, namely that the IPCC-supported Climate Theory and models derived from that theory were wrong to start with (like “loaded” dice) and, after four tries, are still wrong.
>>>>>>>>>>>>>>>>>>
Ch11 of AR5 is about the models and shorter term (a few decades) predictions. There’s a section on initialization as a technique to make the models more accurate, in which they make the most astounding (to me anyway) statement:
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be? They adjust to make one part to be more accurate and it makes another part worse. They don’t even seem to consider that this is an indication that the models contain one or more fatal flaws which render them incapable of producing an accurate result. It is direct evidence that the things the model gets right, it gets right for the wrong reasons.

Dr Glickstein (and mikerossander): many thanks and (referring to your December 20, 7:38 pm comment) I can no longer fault your analysis. Particularly, I agree that there is likely to be a (probably self-serving) bias to the models which to me also suggests “the dice are loaded”, leading to a higher probability of error than the original 1/10 for any single run – but not so high as 1/10,000 for 4 truly independent events.
Apropos of nothing, what actually first came to my mind (as a lawyer here in the UK for many years) was the notorious Sally Clark case here in the UK. She was wrongly convicted of the murder of two of her sons both of whom died suddenly within a few weeks of birth. A paediatrician gave evidence that in his opinion the chance of two children from her well-off background suffering sudden infant death syndrome was 1 in 73 million, taken by squaring 1/8500 (his estimate of the likelihood of a single cot death occurring in similar circumstances). The jury convicted on that evidence, despite the judge giving a warning of the possible “prosecutor’s fallacy”. She was imprisoned for life and had served 4 years when it emerged that the pathologist failed to disclose microbiological reports implying the possibility that at least one of her sons had died of (unlinked) natural causes. Her convictions were then overturned but (having lost two sons, then having been wrongly convicted of their murders) she never recovered – she died just 4 years later, aged 42. A very sad case.

The title of the post,
“An animated analysis of the IPCC AR5 graph shows ‘IPCC analysis methodology and computer models are seriously flawed”
raises a question: Flawed for what purpose?
Obviously, current global temperatures are below what the models would have led us to believe. But the models can’t predict specific ENSO events in advance, or long-term solar output trends, at all. People who work with them, or are used to examining their output, know this, and can allow for the fact that unexpected ENSO events or solar forcings will give a real-life result that the models didn’t predict. But when the model results are presented to non-specialists, it’s hard to avoid this point being lost.
Foster and Rahmstorf have taken a stab at adjusting the temperature record for the ENSO/solar/volcanic history, with the aim of isolating the CO2 effects. They used a multivariate regression analysis, so the accuracy of their results will depend on whether the factors they examined affecting temperature (CO2, ENSO, solar output, and aerosols) leave out any significant contributors, and the extent to which those effects can, for the metrics they chose, be treated as linear, independent influences on temperature.
Models do include ENSO events at random, and it would be interesting to see what predictions came out when selecting runs with a strong El Nino bias in the late 1990s, and a strong La Nina bias recently. What I’d really like to see would be some models run using the known ENSO history and solar influences, for hindcasting. That would give a better idea of how well the models work, and what we might expect under various scenarios for future ENSO and solar influences.
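A minimal sketch of the kind of multivariate regression described above, on synthetic data (every series and coefficient here is an invented stand-in, not Foster and Rahmstorf's actual inputs or results):

```python
import numpy as np

# Regress temperature on "natural" indices (ENSO, solar, aerosol) plus a
# linear trend, then subtract the fitted natural terms. Synthetic data only.
rng = np.random.default_rng(0)
n = 360  # months

trend = 0.0015 * np.arange(n)                    # assumed underlying trend per month
enso = rng.normal(0, 1, n)                       # stand-in for an ENSO index
solar = np.sin(2 * np.pi * np.arange(n) / 132)   # stand-in ~11-year solar cycle
aerosol = rng.normal(0, 0.3, n)                  # stand-in aerosol forcing
temp = (trend + 0.08 * enso - 0.05 * solar
        - 0.10 * aerosol + rng.normal(0, 0.05, n))

# design matrix: intercept, time, and the three "natural" regressors
X = np.column_stack([np.ones(n), np.arange(n), enso, solar, aerosol])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "adjusted" series: remove the fitted ENSO/solar/aerosol contributions
adjusted = temp - X[:, 2:] @ coef[2:]
print(coef[1])  # recovered trend per month, close to the 0.0015 put in
```

The caveat in the comment applies directly: the recovered trend is only as good as the assumption that the regressors are complete and act linearly and independently.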

unha says: December 20, 2012 at 7:39 pm:
Thank you for raising the exponential rate of error accumulation in GCM time step integrations.
When I could not understand how climatologists thought that they could get sensible data from GCMs I did some checking and found out that the models use low pass filters between integration steps in order to preserve conservation of energy, mass and momentum, and to maintain “stability”. Even worse, they use pause/reset/restart techniques when physical laws are violated, or the “climate trajectory” breaches boundary conditions.
All of this tells me that what they are trying to do is mathematically impossible.
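The inter-step filtering the comment describes can be caricatured in a few lines. This is a deliberately crude toy with an invented unstable update rule; it makes no claim about how any real GCM is coded:

```python
# Toy: an unstable update rule blows up on its own, but a simple low-pass
# filter applied between steps damps (and so masks) most of the growth.

def step(x):
    return 1.02 * x  # a slightly unstable update: 2% amplification per step

def run(steps, filtered):
    x, prev = 1.0, 1.0
    for _ in range(steps):
        x = step(x)
        if filtered:
            x = 0.8 * prev + 0.2 * x  # low-pass: blend with the previous state
        prev = x
    return x

print(run(500, filtered=False))  # ~2e4: the raw instability is obvious
print(run(500, filtered=True))   # ~7: the filter hides most of the growth
```

The point of the toy is the commenter's: filtering between steps can keep output "stable"-looking without removing the underlying error growth.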

davidmhoffer says: December 20, 2012 at 8:56 pm
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be?
=========================================
Exactly. A frank admission of the inadequacy of the models. Tinker here, and – oops! Contradicts the cite above from Ch. One, as given by Dr. Glickstein. The crack of Doom? Or a case of indiscipline?

An analysis of past climate history shows that during the period 1870 to 1910, global air temperatures and global ocean surface temperatures both declined as the sunspot number declined. From 1910 to 1940 all three again moved up together. From 1940 to the 1970s, global ocean surface temperatures declined as they entered their cool mode and wiped out the global surface temperature rise from the continuing sunspot increase. From 1980 to 2000 all three variables again moved up in unison. During the last decade, 2000-2010, all three climate variables have again been going down as global cooling gets underway. This declining pattern is likely to continue until 2030 at least. It would appear that a decadal average yearly sunspot number of about 30-45 is the tipping point: any level below this figure causes global cooling, and any level above it causes global warming, unless ocean cycles happen to be out of sync and override the warming (as in the 1950s-1970s). Most recently we have been running at an average yearly sunspot number of 29.2 over the last 10 years. This low figure would explain why there has been no warming for the last 16 years and why instead we are starting to see global cooling like that of the period 1880-1910. Not enough solar energy is being put into the planet to cause any warming.
The average yearly sunspot numbers during the Dalton Minimum decades (1790 to 1837), a period of much colder temperatures like 1880-1910, were 27.5, 16.5, 19.3, and 39. So there is some convincing evidence that low sunspot numbers and declining global temperatures are directly linked, and that we are already in a cooling phase like the ones before.

Dave Hoffer: Always good to “see” you here with comments on my Topics. I had not read Chapter 11 of the IPCC leaked AR5 and I share your opinion of the statement you quoted. The IPCC climate models are biased towards human caused Global Warming and are therefore incapable of predicting Global Cooling or even low levels of warming.
Your comment led me to look at Chapter 11 and I found this amazing statement that confirms the IPCC bias towards blaming warming PRIMARILY on humans:

A return to conditions similar to the Maunder Minimum is considered very unlikely in the near term but, were it to occur, would produce a decrease in global temperatures much smaller than the warming expected from increases in anthropogenic greenhouse gases. However, current understanding of the impacts of solar activity on regional climate remains low.

In other words, the IPCC, despite their “low … understanding of the impacts of solar activity”, continues to support the view that human (“anthropogenic”) activities are the PRIMARY cause of Global Warming. This flies in the face of the recent slowing of warming despite record increases in atmospheric CO2.
Ira

IRA
IPCC has been completely wrong about their winter climate predictions .
UNITED STATES
The winter temperatures for the Contiguous United States have been dropping since 1990 at -0.26 F per decade [per NCDC]
The annual temperatures for the Contiguous United States have been dropping since 1998 at -0.80 F per decade [per NCDC]
Basically, US winter temperatures have been flat, with no warming, for 20 years
CANADA
The annual temperature departure from the 1961-1990 averages has been flat since 1998
The winter temperature anomaly has been rising mostly due to the warming of the far north and Atlantic coast only
8 of the 11 climate regions in other parts of Canada showed declining winter temperature departures since 1998
During the 2011/2012 winter the Canadian Arctic showed declining winter temperature departures
Yet the IPCC assessment for North America was:
All of North America is very likely to warm during this century, and the annual mean warming is likely to exceed the global mean warming in most areas. In northern regions, warming is likely to be largest in winter, and in the southwest USA largest in summer. The lowest winter temperatures are likely to increase more than the average winter temperature in northern North America
EUROPE
The winter temperature departures from the 1961-1990 normals for land and sea regions of Europe have been flat, or even slightly dropping, for 20 years, since 1990
Yet the IPCC assessments of projected climate change for Europe was:
Annual mean temperatures in Europe are likely to increase more than the global mean. The warming in northern Europe is likely to be largest in winter and that in the Mediterranean area largest in summer. The lowest winter temperatures are likely to increase more than average winter temperature in northern Europe
It is not happening

Ira,
I’m in general a sceptic, and I find the graph you pulled from the report highly confusing. I think the poor quality of the graph has led you to totally misread it and, worse, to misapply it.
For AR4 as an example, the starting point for the hindcast / forecast is clearly 1990.
If you want to eliminate that hindcast portion of the AR4 fan, then you need to start your AR4 line from the middle of the AR4 hindcast for 2007, and connect that point to the center of the 2012 forecast. The slope of that line would be totally different from the slope of the line you got by mixing a 2007 actual temperature with a 2012 hindcast/forecast and a 1990 starting point.
As it currently is, I think your entire blog post should be withdrawn as simply being a misinterpretation of a really poorly done graph.[GregF, you are entitled to your opinion. However, I find it somewhat ridiculous that the middle of the AR4 prediction fan (the brown and rust-colored band) is so far above the actual observations for the year before AR4 was issued as well as for the year AR4 was issued. It seems to me that a prediction should start with a known situation and predict the future from that point. Nevertheless, thanks for your input. – Ira]

Obviously, current global temperatures are below what the models would have led us to believe. But the models can’t predict specific ENSO events in advance, or long-term solar output trends, at all. People who work with them, or are used to examining their output, know this, and can allow for the fact that unexpected ENSO events or solar forcings will give a real-life result that the models didn’t predict. But when the model results are presented to non-specialists, it’s hard to avoid this point being lost. [emphasis mine]

It seems to me that we "non-specialists" who are not invested in the meme of human-caused Global Warming are more attuned to the abject failures of the IPCC models.
Please have a look at Figure 1-4 from the AR5 draft (above). The black vertical bars represent actual temperature observations as reported by the IPCC. Note that, IMMEDIATELY AFTER the IPCC released their FAR, SAR, and AR4, all predicting major temperature increases, the observations show strong temperature decreases! Indeed, there are four years worth of lower temperatures following the release of the FAR!
The IPCC cannot even predict ONE YEAR in advance and yet they have been somewhat successful in convincing the media and governments that drastic action must be taken to prevent runaway Global Warming.
Therefore, it is pretty clear (at least to me, an admitted "non-specialist") that common sense indicates a very strong bias on the part of the IPCC researchers to predict high rates of warming, even though three out of four of their predictions were followed by cooling in the very next year.
JazzyT asks about the IPCC models I claim are flawed: “Flawed for what purpose?” IMHO, for the purpose of keeping their funding by governments that are politically motivated to increase their control over the global economy, even if, as a result, our economy is wrecked.
Ira

Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>
I’m only part way through it, but there are a few more beauts in there. One is that they predict 0.4 to 1.0 degrees of warming for 2016-2035 compared to 1986-2005, and they expect to be at the low end of that range. For starters, we are right now at +0.2 compared to 1986-2005, so they only need another +0.2 by 2016-2035 to hit their projection range. But they then hedge their bets further by stating that this is all based on the assumption that there will be rapid decreases in aerosol emissions over the next few years. I can find no justification for that assumption, and it makes little sense given the rapidly industrializing economies of China, India, and Brazil, which will ramp up emissions far beyond what we can reduce in the western world. Talk about a get-out-of-jail-free card! Nor can I find (so far, anyway) how much of the warming they project is due to the decrease in aerosols they project, so how much is actually left to attribute to CO2 is currently a mystery to me.
But here’s one that got the expletives going big time:
“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005”
Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!
And you have just got to love this one on surface ozone:
“There is high confidence that baseline surface ozone (O3) will change over the 21st century, although projections across the RCP, SRES, and alternative scenarios for different regions range from –4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
Are they kidding? They are highly confident that it will be either higher, or lower, or about the same, but not exactly the same?
The more of it I read, the sadder it gets.

{ davidmhoffer says:
December 21, 2012 at 9:10 am }
RE:
–4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
LOL…Nice catch. But the real question is……(drumroll)
{ There is high confidence }
The best they can do is “high confidence”. I think it’s “very highly likely”, or maybe “almost certainly”, or even so far as, dare I go there, “irrefutably robust”.
;<)

It seems to me that we “non-specialists” who are not invested in the meme of human-caused Global Warming are more attuned to the abject failures of the IPCC models.

Well, first, the bit about “non-specialists” was not intended as a jab at anyone, and I regret it if that’s how it came through.
But I’ll try to clarify what I meant. Suppose a model prediction persistently fails to match reality within a stated tolerance. (I say “persistently” because one excursion could be a statistical fluke.) Now, if the model diverges from reality because processes that were modeled gave incorrect answers, then the model is not working. However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.
Is this what has been happening? I don’t know. ENSO processes can’t be predicted, so they are modeled randomly. The real-life events of a super El Nino in 1998 and the recent double-dip La Ninas tend to flatten out temperatures. These won’t match the mean of model runs using random ENSO processes, some of which would raise the trend and others lower or flatten it. Weak solar output over this cycle and the last contributes further to the temperature flattening. How would the temperature curve have looked without these? Would it have matched the model predictions?
There’s been one statistical attempt to deal with all these processes, which could not have been included in model predictions (because they’re unpredictable). But that gives the best fit to the data, which is not necessarily the most physically plausible interpretation. That’s why I’d like to see some model runs that actually include the ENSO and solar events of the last 15 years, as they actually happened. That would have a lot to say about how well the model is working in general.
Now, the climate modelers understand these issues very well. They may be exposed to the risk of confusing models with reality, but they do know what’s in the models and what isn’t. When I see a peer-reviewed article about models, the language seems appropriately cautious, trying to state simplifying assumptions and areas of uncertainty. When it gets into the IPCC scientific summary, it gets compressed and the caveats lose detail. In the summary for policymakers, these technical details are likely to be left out. By the time it has been digested by the mass media, possibly several times, it has been passed on to people who have no reason to worry about how the models work. At this point, they see the prediction, but none of the caveats.
So, the divergence of models from reality is clearly due, partly, to things that just weren’t modeled. But the predictions, as communicated to the public, didn’t include that as a possibility. So, if you want to define a model at each stage–modelers, two (or three) layers of IPCC, and one or more runs through the mass media–well, the end prediction could be called a model too. And, the predictions that came out at the end certainly didn’t work. And that’s a problem. How much of it was in the code and how much in the communication–that’s what I’d like to find out.

JazzyT, The communication ignores more than just the possibility of divergence because of things that weren’t modeled, like volcanoes, ENSO, and a change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled. Models may not seem that far wrong when consideration is given to the things that could not be modeled in advance, but they can achieve that by just following the trend linearly for a while. They diverge from that in longer-range projections, and are not credible when we know they have “matched” the climate incorrectly. They have documented correlated errors larger than the phenomenon of interest.

JazzyT: As Yogi Berra famously said, “It’s tough to make predictions, especially about the future.”
Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality. We develop scientific theories and use them to make models and then compare the results of those models to observations of the real world for two purposes: (1) to better prepare ourselves and our society for future developments that the scientific theory and models based on that theory predict are likely to occur, and (2) to test the underlying scientific theory against actual observations that may possibly strengthen our acceptance of the theory, or may disprove the theory.
With that in mind, let me address the points in your latest comment:

Suppose a model prediction persistently fails to match reality within a stated tolerance. (I say “persistently” because one excursion could be a statistical fluke.) Now, if the model diverges from reality because processes that were modeled gave incorrect answers, then the model is not working. However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed. [my emphasis]

In other words, “the operation was a success but the patient died.” :^)
I agree that we cannot blame the model, per se, if something extraordinary happens that was not considered when the model was constructed.
However, when, as in the case of the IPCC Climate Models, the abject failure to match reality happens four times in a row, over a period of 22 years, always in the direction of unreasonably high Global Warming predictions, and with updated models each adjusted and tuned against five or six years of additional observations, I think it is pretty clear that something is very wrong with the underlying Climate Theory.

Is this what has been happening? I don’t know. ENSO processes can’t be predicted, so they are modeled randomly. … a super El Nino in 1998 and double-dip La Ninas … There’s been one statistical attempt to deal with all these processes, which could not have been included in model predictions (because they’re unpredictable). …
Now, the climate modelers understand these issues very well. They may be exposed to the risk of confusing models with reality, but they do know what’s in the models and what isn’t. …
And, the predictions that came out at the end certainly didn’t work.

I guess it is possible that four prediction failures in a row are just due to bad luck and that the IPCC Climate Theoreticians and “climate modelers understand these issues very well.” Perhaps they are all competent climate scientists and nice guys who have chanced into a string of bad luck. Perhaps their theory that human civilization is the PRIMARY cause of the Global Warming experienced in the latter half of the 20th century is correct. Perhaps rising atmospheric CO2 and land use that affects the Earth’s albedo and can be blamed on human actions is actually a stronger force than natural cycles of the Earth and Sun.
Perhaps they were justified when they used their over-stated predictions of these flawed climate models and hyped them in the media to strike fear of a Global Warming “tipping point” that would destroy human civilization. Is it really their fault that the fear they generated led to hysterical international and national government actions, such as subsidizing ethanol and other “green” energy schemes that have, in part, wrecked the economies of the US and Europe?
They may have had fine motives to start with, and, like Chicken Little, may have really thought they were saving humanity from itself. However, assuming good motives to start with, don’t you think they should have moderated their hype and fear campaign by now? After four failures of their Climate Theory?
It seems to me that the powers that be in the IPCC (and the US Goddard Institute, UK Climategate Research Unit, and other members of the Official Climate “hockey” Team) have a not so well hidden agenda. They want to continue to rake in the government funding they need to continue to earn a living and publish papers in prestigious peer-reviewed journals and appear on TV programs as “experts”. They cling to their underlying Climate Theory and cannot get themselves to be honest and admit that natural cycles of the Sun and Earth, not under human control or influence, are the actual PRIMARY cause of Global Warming (and Cooling). To admit that human actions are a SECONDARY cause would put them out of business.
Bottom line: As Feynman says in the video I linked in the main Topic above,

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.

In other words, “the operation was a success but the patient died.” :^)

This happens sometimes. But if the patient died in a traffic accident as their spouse was driving them home from the hospital, it would take a rather brazen lawyer to sue the surgeon for malpractice. :->
But we’re on the same page as far as what’s in and out of the models.
I couldn’t help noticing something else, and I’m surprised I didn’t see it come up in the thread. With the arrow metaphor, of course, we score a hit when the arrow hits the target. The target, in this case, could be the actual temperature…or, you could say that the temperature was the bulls-eye, and the scoring rings extend to the edge of the error bars. But 2012 has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011. Two of the arrows, SAR and AR4, actually hit 2011, not in the bulls-eye, by any means, but still in scoring range. It’s the same for 2010. The arrows would probably not hit the error bars for 2012 once those are available, but insisting on using 2012, and disregarding the two previous years would invite a charge of cherry-picking.
Others have covered things like picking the starting point, how to get the slope, etc. I’ll only add that I’m old enough to have learned how to do a linear fit to the data by eye (and, in fact, they still have students do this at least once or twice in a college physics lab, to make the students interact with their data). When I do that, I get a slope that is, by eye, slightly lower than that of FAR, higher than SAR, lower than TAR, and distinctly lower than AR4.
But it seems strange to compare the slope for the entire series with the slopes for each model. Why would each model’s predictions for the future be tested against the past? It seems that you’d want four slopes for measured data, each starting at the time of the corresponding model’s predictions. But then AR4’s would be completely impractical due to the short time interval, and even TAR’s could be dodgy.
If you want to do this again when 2012 data are complete, well, those are the issues I noticed, which others would surely notice if this is released to a wider audience. Now they’re in the same pile as everyone else’s comments; some stuff from that pile will probably be useful for the next version.

JazzyT says:
But 2012 [corrected to 2011] has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011 [corrected to 2010].

As per my comment upthread:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012” should read “2011,” and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

Ira
“Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality.”
In my opinion, there is nothing wrong with scientists doing model work to understand the climate. Personally, I think one is trying to model something that has too many variables that cannot be predicted or modeled completely. However, where I have a more serious concern is when unproven and purely experimental models are portrayed as solid science and are thrust into the public domain to shape public policy. This is very expensive, wasteful, and burdensome on society. These models should remain experimental until there is sufficient evidence that they have a high level of success. In my judgment, we are decades away from that point when it comes to climate.

There used to be a rule of thumb in engineering work that one should make all of your changes, or alternate-options studies, during the conceptual design stage, because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and can be 100 to 1000 times higher than during the concept stage. Yet when it comes to climate science we are doing exactly the opposite. We are into the implementation and construction stage on energy changes, environmental actions, and public policy while the models are still in the conceptual, unproven stage. So the whole planet is now like a big experiment where these scientists are allowed to play around with public resources, energy options, and taxpayers’ money based only on questionable science and unproven models, most of which have been seriously wrong predicting just the first few years ahead. Successful hindcasting does not prove a model, as it is too easy to feed in fudge factors and tweak the model to give a known answer. Successfully predicting decades into the future is the only true test, in my opinion.

herkimer says: @ December 23, 2012 at 6:52 am
There used to be a rule of thumb in engineering work that one should make all of your changes, or alternate-options studies, during the conceptual design stage, because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and can be 100 to 1000 times higher than during the concept stage…..
>>>>>>>>>>>>>>>>>>>>>>>>>
And any company that has its head on straight gathers all of its technical personnel together to have a go at ripping the design to shreds while it is in the pilot stage, BEFORE it gets expensive.
This is what the most successful company I worked for did with very good results. Sadly it is not common because of the delicate sensibilities of the scientists/engineers who head projects and who can not stand criticism. It takes a brave soul to present his ‘baby’ to the critiquing wolves.

JazzyT: You seem bound and determined to ignore the role of the IPCC Climate Theorists in the failure of the Climate Theory that underlies the Climate Models.
We agree that the Climate Modellers, following the lead of the Theorists, did not allow for some natural variations due to cycles of the Earth and Sun that are not under the control or influence of us humans. They also modelled CO2 Climate Sensitivity way too high, again following the lead of the Climate Theorists who run the IPCC and the Official Climate “hockey” Team. Atmospheric CO2 continues to rise at a rapid rate, yet Global Warming has slowed or stalled, because the high CO2 Sensitivity THEORY is wrong.
You wrote in an earlier comment: “… if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.” and I replied “In other words, ‘the operation was a success but the patient died.’ :^)”
In your most recent comment, you say:

This happens sometimes. But if the patient died in a traffic accident as their spouse was driving them home from the hospital, it would take a rather brazen lawyer to sue the surgeon for malpractice. :-> … But we’re on the same page as far as what’s in and out of the models.

I agree that if the doctors DIAGNOSED the patient correctly and the surgeon gave him the proper operation and he received adequate treatment, and he died as a result of a traffic accident, that could not be blamed on the Medical Establishment.
However, do you agree that if the patient was MISDIAGNOSED (say with heart disease when his actual problem was heartburn) and the patient was therefore subjected to an unnecessary operation that, as far as it went, was successful, but he happened to die as a result of a hospital infection, that should be blamed on the Medical Establishment? What if there was a pattern of MISDIAGNOSIS and evidence the hospital had done that four times in a row? What if there was reason to suspect the MISDIAGNOSIS was not an accident but rather a way to increase the income of the hospital or to help the surgeon make a payment on his yacht?
***************
In your latest comment you also note that, looking at 2010 temperature observations, two of the four IPCC “arrows” graze the outer scoring ring of the target, and, when the final 2012 data comes in, they may in fact be determined to have hit the outer edge of the target. OK, I think they should get partial credit for that. What grade would you give them for totally missing the target twice and hitting the outer ring twice? By the way, I teach an online grad course in System Engineering at the University of Maryland, and, even with that type of partial credit, they would not pass the course :^)
Ira

… in engineering work … one should make all your changes or alternate-options studies during the conceptual design stage … Yet when it comes to climate science we are doing exactly the opposite. We are into the implementation and … environmental actions and public policy while the models are still in the conceptual, unproven stage. So the whole planet is now like a big experiment where these scientists are allowed to play around with public resources, energy options, and taxpayers’ money based only on questionable science and unproven models, most of which have been seriously wrong predicting just the first few years ahead. …

THANKS, Herkimer, and I agree 100%
I considered how we ran projects during my long career as a System Engineer as I read JazzyT’s statement that: “… if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.”
In System Engineering we did BOTH System Verification and System Validation:
1) Verification compared the system design and implementation with the Specifications. If the Specifications said “A, B, and C …” did the designers and implementers actually provide “A, B, and C …”? In other words, “Did we build the system right?” I would say that the Climate Modellers passed that threshold. They coded what they were given by the Climate Theorists.
2) Validation compares what the system actually does with what the users actually need: does the system perform the mission effectively? In other words, “Did we build the right system?” I would say the Climate Theorists gave the modellers the wrong specifications, and therefore the wrong system was built!
And, of course, I agree that we should never go into production with the wrong system. That fails to satisfy the mission, wastes money, and serves none of the stakeholders well.
As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefitted no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
Ira

Ira Glickstein, PhD says:
December 23, 2012 at 10:51 am
….As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefited no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
>>>>>>>>>>>>>>>>>>>>>>>>>
Too bad the run-of-the-mill taxpayer who is being scammed cannot see that. One wonders just how bad the backlash will be when realization hits. Given the acceptance of the Banker bailout fiasco by those who were conned, it looks like everyone will take it lying down. Or maybe not….
I think a friend’s four year old had the right idea when she said she wanted to grow-up to be a government. (She now works in DC)

This entire exercise has, I think, been made considerably worse by having the scientific and political mandates together at the UN/IPCC, where the political objective to collect money and redistribute wealth dictates the scientific mandate and clouds scientific objectivity. Things are being rushed when there is no reason to rush, as we now see that the warming will not be anywhere near the rate predicted. We have the time to do things right, with the right science.

I’m assuming that the data points in the graphic only go as far as 2011 (?)
How was the increase in temperature of between 0.12 and 0.16 degrees C between 1990 and 2011 calculated in the animated graphic? It appears to me that this was done using only the first and last data points in the chart (1990 and 2011). If so, then I don’t think this is the best method for estimating the increase in temperature. I think linear regression would be better, as it uses all of the data points and thus reduces the influence of year-to-year variability.
Using annual global combined (land and ocean) surface temperature anomaly data from 3 data sets (GISS, HadCrut4, NOAA/NCDC), I calculated the slope of the regression line between 1990 and 2011, and estimated the increase in temperature in degrees C between 1990 and 2011 to be 0.33 for HadCrut4, 0.33 for NOAA/NCDC, and 0.37 for GISS.
Admittedly, the estimates obtained above are most likely too high, as the slope of the regression line would be steepened by Mount Pinatubo erupting in 1991, so I did 2 very simple alternative analyses to adjust for this:
Firstly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 as the average of the anomalies for 1990 and 1995. When I did this, the increase in temperature in degrees C between 1990 and 2011 was estimated to be 0.23 for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS.
Secondly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 using simple linear interpolation (from the temperature anomalies for 1990 and 1995). This gave identical results to 2 decimal places (i.e. 0.23 degrees C for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS).
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
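For readers who want to see why the two methods disagree so much, here is a minimal Python sketch comparing an end-minus-start subtraction with an ordinary least-squares fit over all years. The anomaly values below are invented placeholders with a noisy upward trend, not the actual HadCrut4 / NOAA / GISS series.

```python
# Sketch comparing the two estimation methods discussed in this thread:
# a simple end-minus-start subtraction vs. an ordinary least-squares
# fit over all years. Anomaly values are invented placeholders.

years = list(range(1990, 2012))                    # 1990..2011
anoms = [0.25, 0.20, 0.10, 0.12, 0.17, 0.28, 0.18, 0.35,
         0.45, 0.30, 0.28, 0.40, 0.43, 0.45, 0.43, 0.47,
         0.45, 0.44, 0.40, 0.47, 0.52, 0.40]

def two_point_delta(start, end):
    """Change estimated from just two single-year values."""
    return anoms[years.index(end)] - anoms[years.index(start)]

def regression_delta():
    """Least-squares slope times the full span: uses every year."""
    n = len(years)
    mx, my = sum(years) / n, sum(anoms) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, anoms))
             / sum((x - mx) ** 2 for x in years))
    return slope * (years[-1] - years[0])

print(round(two_point_delta(1990, 2011), 2))   # sensitive to endpoint noise
print(round(regression_delta(), 2))            # damps year-to-year noise
```

With these toy numbers the two-point delta comes out around 0.15 while the regression gives roughly 0.34, mirroring the gap between the graphic’s figure and the regression results above.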

Rob Nicholls says:
December 23, 2012 at 1:47 pm

Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
Let’s assume you are trying to evaluate a trend from “good” data. (And, in the judgement of most independent observers – those not in the pay of the CAGW community – none of those data sets is particularly accurate with respect to the real-world temperatures…)
Regardless, let us assume those are valid.
You ARE making an error: You are trying to artificially create a conclusion that the world’s temperatures are linear! They are NOT linear. There is a very evident inflection point – a bend in the curve – at 1997-1998-1999. Your “method” creates an “anomaly” (an average needed to calculate differences) early in the time series; creates a second anomaly (a second average) at the end; then looks for a least-squares single line between anomalies based on the start and end values.
If you insist on using straight lines to analyze cyclic trends, do this: Run TWO least-squares linear trends. One based on 1990 (or better yet 1975) through 1998. The second using 1996 through 2012 values.
Now plot the two different straight lines.
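As a minimal sketch of that two-segment suggestion (Python, with invented anomaly values; the bend around 1998 is built into the toy data by hand):

```python
# Sketch of the two-segment approach: fit separate least-squares
# lines before and after the late-1990s bend, instead of forcing a
# single straight line through the whole series. The anomaly values
# are invented placeholders, not a real data set.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(1990, 2013))                    # 1990..2012
anoms = [0.25, 0.20, 0.10, 0.12, 0.17, 0.28, 0.18, 0.35, 0.45,
         0.30, 0.28, 0.40, 0.43, 0.45, 0.43, 0.47, 0.45, 0.44,
         0.40, 0.47, 0.52, 0.40, 0.45]

def segment_slope(y0, y1):
    """Least-squares trend over the sub-period y0..y1 inclusive."""
    pts = [(y, a) for y, a in zip(years, anoms) if y0 <= y <= y1]
    return ols_slope([y for y, _ in pts], [a for _, a in pts])

s1 = segment_slope(1990, 1998)   # rising segment
s2 = segment_slope(1996, 2012)   # flatter recent segment
print(f"1990-1998: {s1:.3f} C/yr, 1996-2012: {s2:.3f} C/yr")
```

With these toy values the pre-1998 segment trends noticeably steeper than the post-1996 segment, which is the point of splitting the fit.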

Rob Nicholls and RACookPE1978: Thanks for your comments and I hope you continue posting and comparing specific linear regression and straight line trends. I find it interesting that the linear regression results calculated by Rob are either in the range of 0.33˚C to 0.37˚C or 0.23˚C to 0.25˚C which are about as far from each other as my simple delta from start (1990) to end (2011) of 0.12˚C to 0.16˚C is from your lowest estimates. I would be interested in the results you might get if you took RACook’s suggestion and did the calculation that way.
Perhaps I am too simple-minded, but if my doctor told me I was eating too much over the past two decades and had therefore gained weight, I would simply subtract my weight in 1990 from my weight in 2011 to determine how much I had gained, net-net, over that period.
In any case Rob, yes, simple subtraction is how I determined the change in temperature anomaly from 1990 to 2011. (And, yes, my graphic is misleading when it mentions 2012 as was brought to my attention earlier in this comment thread. The graphic should say 2011 and the oval and arrow heads that say 2012 should be moved a year to the right.)
Regardless of the exact numbers and how they are calculated, the bottom line for me is that the IPCC made four predictions, basically implying that Global Warming would end human life and civilization within our lifetimes unless our governments took drastic actions to reduce the rapid rise in CO2 levels. Despite the fact that CO2 has continued its rapid rise, each of those four predictions turned out to be way too high with the result that actual observations in 2011 are outside the supposed 90% certainty limits of all four predictions.
Where I come from (System Engineering), predictions generally improve as we get more data and learn the folly of past predictions. This has not happened in the world of the IPCC. Although SAR turned out closer to the truth than FAR, the last two predictions, TAR and AR4, have turned out worse than SAR.
For me, that confirms the most likely explanation that the Official Climate “hockey” Team is not motivated primarily by a scientific agenda, but rather by a political one.
If you have any evidence to the contrary, please bring it forth. I’m listening.
Ira

Ira and RACookPE1978, thanks for your prompt responses – these are much appreciated. I’ll try to do some further analyses around this at a later date. I hope all here will have a good Christmas.[THANKS, and I’ll look forward to further analysis. Also, a Merry Christmas to all as well as a slightly belated Happy Chanukah. (Let us keep CHRIST in CHRISTmas and the “CH” in CHanukah -pronounced like the “CH” in the Scottish “loCH” or the German “aCHtung!” :^) – Ira]

Ira —
“Perhaps I am too simple-minded, but if my doctor told me I was eating too much over the past two decades and had therefore gained weight, I would simply subtract my weight in 1990 from my weight in 2011 to determine how much I had gained, net-net, over that period. ”
Imagine that your body weight fluctuates by as much as 15% on a day-to-day basis, let alone year-to-year. Would weighing yourself one Tuesday in 1990 and again on a Tuesday in 2011 be enough to conclude that your weight had increased by 30% over that period?
Probably not.
Temperature seems to be like that.
If you look at the up-to-date GISTEMP record for 2011 and take the (black) data points for 1990 and 2011, you can see about a 0.17 to 0.19˚C increase: http://data.giss.nasa.gov/gistemp/2011/Fig2.gif
But if you compare that against the 5-year running average of the data (red trendline), you’ll see that it’s quite unrepresentative to do that two-point estimate. No climate model is expected to produce accurate year-to-year temperature estimates, but if it can capture the average trend, that’s a success.
So if you’re going to do just a two-point trend estimate, at the very least you should base your start and end temperatures not on the instantaneous temperature in the particular year you chose, but on the 5- or 10-year average temperature centered about that year.

JazzyT: You seem bound and determined to ignore the role of the IPCC Climate Theorists in the failure of the Climate Theory that underlies the Climate Models.

I’m determined to figure out whether the theory has failed or not. If I become convinced that it has, I’ll want to know why. I can’t help feeling that a lot of people are leapfrogging this step. Nothing wrong with considering various known and potential errors, but I want to know how much of the divergence between prediction and measurement actually arises from an unusual combination of ENSO and solar events. The well-known attempt to do this is Foster and Rahmstorf’s paper: http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf
You know this one, I’m sure; but there it is as a reference (and thanks to Louis Hooffstetter on another thread for the link). Now, this is the result of a multivariate regression, which will work better or worse depending on a number of things, including whether anything was left out. Also, it’s all statistics, and necessarily includes simplifying assumptions. That’s why I’d like, in the end, to see the known ENSO and solar events to go into a few models, and see whether the modelers end up with a happy face or a sad one. (Or, a puzzled face).
The really warm people seem to like this paper a lot. On the skeptic side, maybe a few dismissive comments, but mostly just dead air on that frequency. The only one I’ve seen really acknowledge and engage this paper, or even the issue at all, is Bob Tisdale. I don’t think his counterargument is ready for prime time yet, but he’s plugging away at it, and taking the issue on.

What if there was a pattern of MISDIAGNOSIS and evidence the hospital had done that four times in a row? What if there was reason to suspect the MISDIAGNOSIS was not an accident but rather a way to increase the income of the hospital or to help the surgeon make a payment on his yacht?

This certainly happens in the hospital setting, and it’s worth lawsuits, and sometimes, prosecutions. But in the analogous modeling field, I don’t think that there have been four failures. FAR’s arrow shot high, but it was the first, and clipped the tops of some error bars. SAR actually did pretty well right up until about 2000, the beginning of the strange era I want explained. For TAR we could say the same thing. AR4 gets within a couple of error bars, but it’s too early to say anything, no matter what the data.
So, if you correct, or model, the ENSO and solar events of the last 15 years, what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the “flat decade” argument comes out.
People do get attached to their pet theories, and the careers that they engender. I think most of this is unconscious; my thesis advisor was a highly ethical man, but he could talk himself into anything. By the same token, I see a lot of people who really want the whole process to be flawed, dishonest, and/or failed, so that they won’t see a threat to their lifestyles, freedoms, etc. This is equally unconscious. Me, I want to know what’s going on. But then, in the end, so does everyone else.
Martin Lewitt says:
December 22, 2012 at 7:58 am

JazzyT, The communication ignores more than just the possibility of divergence because of things that weren’t modeled, like volcanoes, ENSO and a change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled.

Not that I need another subject to read deeply…but if you have a ref to a good starting place, I’d be grateful.
Beyond that, I hope everyone enjoys whatever they’re doing for the holiday. I hope I don’t have too many typos; dinner is called, and I have to hit “send”.

So, if you correct, or model, the ENSO and solar events of the last 15 years, what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the “flat decade” argument comes out.
OK, but it is not 4 models that are not tracking the real world.
It’s ALL 23 models, in EVERY ONE of their hundreds of runs. To date, NO model run at ANY time using conventional consensus state-of-the-art “science” has duplicated the real world over the past 16 years.
SO, I would grant this is important, but I would caution that the analysis needs to be “real world”: pollution and aerosols need to match real-world measurements, not conveniently “canned” assumptions that “aerosols increased between 1955 and 1975, so solar radiation decreased by xxx.yyy% over that time frame.” When modeled, ENSO events need to be as small and as short as they actually were, and to follow their actual rise, steady, and fall patterns – not “light switch” on-and-off “high and low” positive and negative inputs.
Perhaps the result will be instructive.
Then again, if it were instructive, the “scientific” CAGW theists would have run their latest models with … maybe … the past 16 years of “real” data, wouldn’t they?

Imagine that your body weight fluctuates by as much as 15% on a day-to-day basis, let alone year-to-year. Would weighing yourself one Tuesday in 1990 and again on a Tuesday in 2011 be enough to conclude that your weight had increased by 30% over that period?
Probably not.
Temperature seems to be like that. … So if you’re going to do just a two-point trend estimate, at the very least you should base your start and end temperatures not on the instantaneous temperature in the particular year you chose, but on the 5- or 10-year average temperature centered about that year.

Thanks for sharing your ideas and I agree that it would be foolish to compare my weight (or the temperature) on “one Tuesday in 1990 and again on a Tuesday in 2011” which is why I compared not a day or a month but the whole IPCC YEARLY average temperature anomaly reports for the YEAR of 1990 with the YEAR of 2011.
I chose 1990 because that was the IPCC’s First Assessment Report (FAR) and the first actual temperature anomaly data point on the AR5 draft Figure 1-4 graphic. I chose 2011 because it was the last actual anomaly data point. Yes, taking a five- or ten-year average would have yielded a different result. For example, taking the five-year average from 1990-1995 (which includes the low point at 1992) and comparing it to the average of the most recent five years available (2006-2011) would have yielded about double the delta from what I presented. However, three out of the four IPCC reports have central estimates for 2011 or 2012 that are higher than the five-year averages you suggest.
The key message of Figure 1-4 is that the IPCC has been wrong on the high side four times over a period of up to 22 years. I do not think they are that incompetent, so I have to conclude their primary motivation has been political. They may have really thought they were saving human life and civilization from a Global Warming “tipping point” by making exaggerated predictions in the warming direction when they issued the FAR. However, by the third or fourth report, they should have figured out that their assumption of ultra-high CO2 sensitivity was unjustified. I think CO2 sensitivity is closer to 1˚C than the approximately 3˚C they seem to have used. What would their four projections look like if they re-ran them with lower CO2 sensitivity? I’ll bet much closer to the truth.
Those of us who think low solar activity is related to lower-than-usual temperature anomalies are encouraged by the most recent Solar Cycle #24, which seems likely to peak this coming year at about 60% below Solar Cycle #23. That, taken together with the lack of statistically significant warming over the past decade and a half, leads me to expect a continuation of a low level of warming and perhaps even some cooling. All this despite the continued high rate of atmospheric CO2 growth.
What is your prediction for 2015? For 2020 and beyond?
Ira

JazzyT: I read your latest comment with interest and perhaps it is all a run of bad luck that will turn around over the coming decade. Kind of like the guy who bets his rent money on a “sure shot” and, each time he loses, doubles down because the law of averages says he can’t lose forever.
I see no mention of CO2 Climate Sensitivity in your comments. IMHO, CO2 is the key THEORETICAL issue that plagues the IPCC researchers.
If they had assumed that doubling atmospheric CO2, all else being equal, would raise average Global temperatures by only 1˚C, their predictions would have been pretty close to the truth for 2011. On the other hand, had they used 1˚C, their whole “tipping point” panic argument would have evaporated.
Ira

Thanks RACookPE1978 for the information that the four IPCC Assessment Reports are actually based on hundreds of runs of some 23 different models. Every one aimed too high.
When will they ever learn?
In this Topic back in 2010, I included the following with apologies to Pete Seeger:

Where have all the sunspots gone? NA-SA search-ing,
Where have all the sunspots go-ne? NASA don’t know.
Where have all the sunspots gone? Global Cooling, anyone?
Will NASA ever learn? Will NA-SA ev-er learn?
Where has all the carbon gone? Green-house gas-es,
Where has all the carbon go-ne? Come down as snow!
Where has all the carbon gone? Heating houses, everyone,
Will NASA ever learn? Will NA-SA ev-er learn?
Where has Global Warming gone? Point not tip-ping,
Where has Global Warming go-ne? Its gonna slow.
Where has Global Warming gone? Normal seasons of the Sun,
Will NASA ever learn? Will NA-SA ev-er learn?

On my personal blog, back in January 2009, I predicted that Solar Cycle #24 would peak at about 80 in 2013. It looks like I was a bit high since the likely peak is below 70. In 2006, NASA was predicting a peak of about 146 and, in 2008, they reduced it to about 137. When I made my prediction of 80, NASA was already down to 104. If Solar Cycle #24 comes in low as expected, and if it happens to be followed by another couple of low cycles, we could be in for some Global Cooling and future generations may thank us for putting a cushion of atmospheric CO2 up to moderate the cooling temperatures.
Ira

The problem with estimating the trend in global temperatures by using just 2 data points at the start and end of the relevant time period (subtracting one data point from the other) is that this method is heavily influenced by year-to-year fluctuations in temperature (such fluctuations are caused by, among other things, the El Nino Southern Oscillation, the 11 year solar cycle, and volcanic activity).
This is illustrated if we move the timescale of the analysis by just one or two years:
The 1990 to 2011 global temperature difference calculated by subtracting the 1990 global temperature anomaly from the 2011 global temperature anomaly was 0.14 for NCDC/NOAA, 0.11 for HadCRUT4, and 0.15 for GISS (GISTEMP) (all temperatures are in degrees C).
But, if the timescale shifts backwards by one year, and we subtract the 1989 global temperature anomaly from the 2010 global temperature anomaly, we get 0.40 for NCDC/NOAA, 0.42 for HadCRUT4 and 0.43 for GISS. (The animated graphic in the article would look quite different with these figures.)
If we shift backwards by another year, and we subtract the 1988 global temperature anomaly from the 2009 global temperature anomaly, we get 0.26 for NCDC/NOAA, 0.29 for HadCRUT4 and 0.25 for GISS.
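The sensitivity Rob describes can be reproduced on any noisy series. Here is a toy Python illustration with a known, built-in trend plus random year-to-year noise (the numbers are invented, not a real temperature series):

```python
import random

# Toy illustration of why two-point estimates are fragile: a known
# 0.015 C/yr trend plus +/-0.1 C of random year-to-year "weather".
# These numbers are invented, not a real temperature series.
random.seed(1)
years = list(range(1985, 2013))
anoms = [0.015 * (y - 1985) + random.uniform(-0.1, 0.1) for y in years]

def delta(y0, y1):
    """Two-point difference between single-year anomalies."""
    return anoms[years.index(y1)] - anoms[years.index(y0)]

# The true 21-year change is 0.015 * 21 = 0.315 C, yet the two-point
# estimate can be off by up to 0.2 C depending on the years chosen.
for start, end in [(1990, 2011), (1989, 2010), (1988, 2009)]:
    print(start, end, round(delta(start, end), 2))
```

Each 21-year window spans the same underlying trend, but the printed estimates scatter because each endpoint carries its own noise.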
(I note that Jacob already commented along these lines on 24th December).
I appreciate what RACookPE1978 said about linear regression on 23rd December. I can see that if the underlying temperature trend is markedly non-linear then linear regression may not provide a reliable estimate of the underlying trend in global temperatures from 1990 to 2011.
One method for calculating the trend in global average temperature over a 21-year period, which does not assume a linear trend, and which reduces the influence of year-to-year variability, is to perform a subtraction of moving averages. I’ve done the calculations with 5 and 10 year moving averages.
10-year centred moving averages can be calculated up to 2006, and 5-year centred moving averages can be calculated up to 2009. So, the latest 21-year period for which subtraction of 10-year moving averages can be performed is 1985 to 2006. The latest 21-year period for which subtraction of 5-year moving averages can be performed is 1988 to 2009. (Fortuitously, the calculations for these time periods do not involve data from the years 1991-1994, which seem to have been heavily influenced by the Pinatubo eruption).
Subtracting the 10 year moving average centred on 1985 from that centred on 2006 gives 0.36 for NCDC/NOAA, 0.37 for HadCRUT4, and 0.36 for GISS (in degrees C).
Subtracting the 5 year moving average centred on 1988 from that centred on 2009 gives 0.27 for NCDC/NOAA, 0.29 for HadCRUT4, and 0.29 for GISS (in degrees C).
I also used similar methods to look at the temperature changes after 1990 (again in degrees C; this time the calculations involve the years 1991-4, so I’ve included adjustments for the Pinatubo eruption in 1991):
Subtracting the 10-year moving average centred on 1990 from that centred on 2006 gives 0.30 for NCDC/NOAA, 0.31 for HadCRUT4 and 0.32 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.26 for NCDC/NOAA, 0.27 for HadCRUT4 and 0.27 for GISS (with adjustment for Pinatubo’s eruption).
Subtracting the 5-year moving average centred on 1990 from that centred on 2009 gives 0.26 for NCDC/NOAA, 0.26 for HadCRUT4 and 0.28 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.22 for NCDC/NOAA, 0.23 for HadCRUT4 and 0.24 for GISS (with adjustment for Pinatubo’s eruption.)
[The method I used for adjusting for Pinatubo’s 1991 eruption in this case was to re-calculate the temperature anomalies for 1991 to 1994 as the average of the anomalies for 1986 to 1990 and 1995 to 1999 (i.e. 5 years either side of the time period 1991-4). This is the best method that I could come up with given my limited skills (I think it’s better than the methods that I mentioned in my previous comment), but I’m not very happy with it and I’m sure there must be better ways of adjusting for these kinds of events. It’s pretty arbitrary to adjust for Pinatubo and not for other short-term causes of fluctuation in temperature. My reason for performing this one particular adjustment is that the Pinatubo eruption seems to have had a big influence on temperatures between 1991 and 1994, and to leave out the adjustment would, I think, lead to an over-estimate of the underlying trend in global temperatures since 1990, if the data from 1991-4 is used in calculating the estimate. I don’t have the time or the expertise to try to replicate Foster & Rahmstorf’s 2011 paper].
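A sketch of the moving-average subtraction, including the Pinatubo adjustment described above (replace 1991-1994 with the mean of the five years on either side), might look like this in Python. The series below is a toy linear trend with an artificial volcanic dip, not any of the real NCDC/HadCRUT4/GISS data:

```python
# Sketch of the moving-average subtraction with the Pinatubo
# adjustment described above. Toy linear series of 0.01 C/yr with
# an artificial dip; not real data.

def adjust_pinatubo(series):
    """Return a copy with 1991-1994 set to the mean of 1986-1990
    and 1995-1999 (five years either side)."""
    adj = dict(series)
    flank = [series[y] for y in list(range(1986, 1991)) + list(range(1995, 2000))]
    fill = sum(flank) / len(flank)
    for y in range(1991, 1995):
        adj[y] = fill
    return adj

def centered_mean(series, center, window=5):
    """Mean anomaly over an odd `window` of years centered on `center`."""
    half = window // 2
    return sum(series[y] for y in range(center - half, center + half + 1)) / window

series = {y: 0.01 * (y - 1980) for y in range(1980, 2015)}
series[1992] = -0.2  # artificial "eruption year" dip

raw = centered_mean(series, 2009) - centered_mean(series, 1990)
adj = adjust_pinatubo(series)
adjusted = centered_mean(adj, 2009) - centered_mean(adj, 1990)
print(round(raw, 3), round(adjusted, 3))  # dip inflates the raw trend
```

The dip drags down the 5-year mean centered on 1990, so the raw subtraction over-states the underlying trend; the adjusted version lands much closer to the true 0.19˚C built into the toy series.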

Thanks, Rob Nicholls, for doing all these alternative calculations using five- and ten-year moving averages. Your answers range from 0.22˚C to 0.43˚C. That is about 50% uncertainty.
I note that your results are based on the excellent satellite data available over the two most recent decades. Given that range of uncertainty for data that is far more reliable than the old manual thermometer readings prior to the 1970’s, how close to reality do you think the current Official “hockey” Team estimate of Global Warming of about 0.8˚C since the 1880’s is?
My estimate is that about 0.3˚C of the 0.8˚C is due to data bias such as measurement stations that were originally located far from artificial heat sources that, over the decades, have been encroached by expanding human civilization, as well as “adjustments” of previously reported data that, in the late 1990’s, seem to have systematically reduced temperature readings from before 1960 and raised readings taken after 1980, increasing net warming by up to 0.3˚C. About 0.4˚C is due to natural variations not under human control or influence, such as multi-decadal ocean oscillations and sequences of high or low 11-year solar cycles. IMHO, only about 0.1˚C of the total reported Global Warming since the 1880’s is due to human activities including our unprecedented burning of fossil fuels and land use changes that affect the Earth’s albedo.
Nearly two years ago, I did a survey of WUWT commenters and, taking the average of their estimates: Data Bias = 0.28˚C, Natural Variations = 0.33˚C, and Human Caused = 0.18˚C.
What do you think of these estimates?
Ira

Nearly two years ago, I did a survey of WUWT commenters and, taking the average of their estimates: Data Bias = 0.28˚C, Natural Variations = 0.33˚C, and Human Caused = 0.18˚C.
What do you think of these estimates?

Reasonable estimates.
However, the actual satellite measurements show a significant but random month-to-month change of +/- 0.2˚C. That is, a temperature measurement (expressed as an anomaly) in May has been as much as 0.2 degrees different from the temperature anomaly for August or March. This is not really an error band – an error band would describe instrument or sensor differences that “record” or “report” a difference from the actual temperature.
In the satellite record, it appears to be the actual measurements – the temperatures – that vary. In turn this creates two questions: Is the temperature actually varying in this kind of random way over a very short-term interval, or do these variations stem from a detector/analysis flaw?
If the temperatures do vary over such short terms, is it valid to even consider a difference of a third of a degree as significant in any “symptom” of climate change?

I see no mention of CO2 Climate Sensitivity in your comments. IMHO, CO2 is the key THEORETICAL issue that plagues the IPCC researchers.

The CO2 forcing seems uncontroversial among scientists, skeptical and warm alike. It’s the feedbacks, especially clouds, where the controversy lies. Tropical cloud formation influences how much heat is available to warm the temperate zones, melt ice caps and decrease albedo, etc. Cloud formation is the least understood process in the game, and clouds are difficult to model at the grid size used, which is limited by available computing power. As I understand it, the way to deal with such uncertainties is to make a number of runs, using different parameters to (hopefully) cover the range of possible values. Meanwhile, smaller models can be run over a limited area, e.g., just a small tropical ocean zone, for a month, to give a snapshot that can be compared to observations. Finally, to insure against errors, and because each model has somewhat different algorithms that may do better or worse in different situations, they use 23 different models, as noted in several posts above.

If they had assumed that doubling atmospheric CO2, all else being equal, would raise average Global temperatures by only 1˚C, their predictions would have been pretty close to the truth for 2011. On the other hand, had they used 1˚C, their whole “tipping point” panic argument would have evaporated.

Over the long term, say, 100 years, I would view any tipping points, scary or otherwise, as part of the overall CO2 sensitivity. Of course, it’s useful to define a shorter-term CO2 sensitivity that specifically excludes tipping points. At this point, I haven’t heard of it being officially defined this way; it’s just what makes sense to me. Meanwhile, it may be true that 1 degree for CO2 doubling would fit the recent data, but after correction for ENSO and solar, this probably would not be true anymore.
So, to make a short story long, I’d say that whether they get CO2 sensitivity right or wrong, it is something that comes out of the model, and it can be used as a diagnostic if extraneous influences (ENSO, solar) have been accounted for. So “do they get CO2 sensitivity right?” is indeed a good question. On the input end, it’s not CO2 sensitivity but rather its components that they have to get right: CO2 forcing, which is pretty well nailed down, and all of the feedbacks, especially clouds. As others point out from time to time, it’s all in the feedbacks that you’ll find the remaining controversies that will determine CO2 sensitivity and, for various emission scenarios, future warming. It’s all in the feedbacks – especially clouds.

Hi Ira,
(OT: How do you get a real “quote” in italics like that, to make it a proper response? I don’t see any rich-text formatting options. I’ll just have to quote you the traditional way)
“Thanks for sharing your ideas and I agree that it would be foolish to compare my weight (or the temperature) on “one Tuesday in 1990 and again on a Tuesday in 2011″ which is why I compared not a day or a month but the whole IPCC YEARLY average temperature anomaly reports for the YEAR of 1990 with the YEAR of 2011.”
I know that you are not comparing averages over a day or month, but a year. My point (perhaps I can stress it better than I did) is that even a yearly average is still not good enough. The reason I can say this confidently is well illustrated by Rob Nicholls’ post, or better yet, by glancing at any graph of the temperature record (for example the GISTEMP graph that I linked to). You can clearly see just by looking at the graph that the year-to-year temperature variations are extremely noisy. Those are already averaged over an entire year (and the entire globe). Simply from this, it is totally apparent that a comparison of “not a day or a month but the whole IPCC YEARLY average” is still not going to give you any useful result. Rob’s example shows how taking your exact approach, but shifting the years (arbitrarily) by one or two, gives completely different answers.
I do believe that you have good reasons to pick 1990 as your starting point and 2011 as your ending point and that you aren’t cherry-picking those particular years to support your argument. But still, picking just two individual years (even if they’re a decade apart) won’t give you any useful result, because the signal-to-noise ratio is simply too low.
“Your answers range from 0.22˚C to 0.43˚C. That is about 50% uncertainty.”
That is a very unusual definition of uncertainty that you are using. The uncertainty in a data set is not the spread between its minimum and maximum values, not even as a rough estimate. It doesn’t take more than 5 minutes to do this calculation properly if you have Excel, so no excuses for being lazy! 😉
Furthermore, his 0.43˚C value was a one-year to one-year two-point estimate (the same type that you used in your original post), which he used as an example of why your method does not work. I don’t think he is claiming this to be a valid estimate, or an “answer”. He’s claiming it as an example of what not to do.
His actual answers have an average of 0.29˚C with a standard deviation of 0.043˚C (counting his data both with and without Pinatubo’s correction). In other words, his estimated trend is 0.29˚C +/- 0.09˚C, 19 times out of 20. That’s an “uncertainty” of about 30%, keeping in mind that one would truthfully expect a range of different trends because he’s looking at a range of different starting and ending years.
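For anyone who wants to check those summary statistics, here is a short Python snippet using the 18 moving-average estimates transcribed from Rob’s comment above:

```python
# Checking the quoted summary statistics against Rob Nicholls'
# 18 moving-average estimates (values transcribed from his comment).
from statistics import mean, stdev

estimates = [0.36, 0.37, 0.36,   # 10-yr means, 1985 vs 2006
             0.27, 0.29, 0.29,   # 5-yr means, 1988 vs 2009
             0.30, 0.31, 0.32,   # 10-yr means, 1990 vs 2006
             0.26, 0.27, 0.27,   # same, Pinatubo-adjusted
             0.26, 0.26, 0.28,   # 5-yr means, 1990 vs 2009
             0.22, 0.23, 0.24]   # same, Pinatubo-adjusted

m, s = mean(estimates), stdev(estimates)
print(f"mean = {m:.2f} C, std dev = {s:.3f} C, 95% interval ~ +/- {2 * s:.2f} C")
# mean = 0.29 C, std dev = 0.044 C, 95% interval ~ +/- 0.09 C
```

Note that `stdev` is the sample standard deviation; using the population form (`pstdev`) would give a slightly smaller figure, but either way the result matches the 0.29˚C +/- 0.09˚C quoted above.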
You also wrote:
“What is your prediction for 2015? For 2020 and beyond?”
The exact point I am making is that nobody and no model can make a prediction for 2015. And nobody and no model tries to make a prediction for 2015. As a non-expert though I would be willing to hazard a guess for the 5-year running average centered on 2015, which we will be able to calculate in 2018, when the previous year’s data is released. My guess is that it will be 0.3˚C warmer than the 5-year running average centered on 2005. In other words, I predict the average of the anomaly from 2013 to 2017 will be 0.87˚C (on GISTEMP scale). I hope it’s an overestimate though; this is a bet I would rather lose.
I’m really not qualified to comment on the IPCC itself, and for all I know it is as corrupt and misrepresentative as you say. To be honest, I hope it is, and that this whole global warming deal is a scam. I would much rather the IPCC were wrong than that they were right. And if they turn out to be wrong (and knew so) I will be angry at them for misleading me, and yet I do hope this is how it turns out, because I would prefer this to them being right. In other words, I acknowledge a bias and preference for the truth to turn out to be that they’re lying about everything. I’m unfortunately not convinced yet, though.

RACookPE1978, Rob Nichols, and Jacob: Thanks for your latest comments and I accept them as reasonable and science-based.
For Jacob, a brief HTML “lesson” on how to do a Blockquote. Jacob wrote:

Hi Ira,
(OT: How do you get a real “quote” in italics like that, to make it a proper response? I don’t see any rich-text formatting options. I’ll just have to quote you the traditional way)

How did I get the above text to appear indented and in italics? I just put the text I copied from your comment between HTML blockquote commands:
<blockquote> any text you want </blockquote>
and it automagically appears indented and in italics like this:

Thanks v much Ira for your response to my last comment, and thanks for Jacob’s subsequent response. Jacob correctly surmised that I did not mean 0.43 degrees C to be a realistic estimate of the warming since 1990 – I was just trying to illustrate how susceptible to year-to-year variation an estimate is when it’s only based on two years of data.
The question you ask, about how close to reality I think the “Hockey Team’s” estimate of 0.8 degrees C of warming since 1880 is, is important. It’s way beyond my expertise (and the free time that I have) to deduce this from first principles, as I’m not a climate scientist, although I have tried to follow all the arguments as well as I can. I have to say I’ve never found any evidence or argument that casts serious doubt in my mind on the IPCC’s assertion (in AR4) that the vast majority of the warming is real and anthropogenic. (Of course I hope that I’m wrong and that you are right.)
The much-maligned adjustments to temperature data series seem, as far as I can tell, to be scientifically rigorous and necessary to correct for known biases and to make the data comparable so that trends in temperature can be assessed. I seem to remember that there’s strong peer-reviewed science suggesting that the contribution from urban heat island effects is very small (this is somewhat counter-intuitive for me). The contribution to warming from changes in solar irradiance since 1880 seems to me to be small, and I don’t think there’s any convincing evidence that galactic cosmic rays play a significant role. (Sorry, I cannot quote the papers to back up any of this; I don’t have time to pull it all together at the moment, but there are plenty of websites on both sides of the debate that link the relevant papers.)
I don’t think there’s a conspiracy of mainstream climate scientists making up the evidence for anthropogenic global warming (I’ve had a good look for evidence of such a conspiracy, ever since Climategate in 2009).
Admittedly, as a lot of the arguments go over my head, I cannot say with 100% certainty who is right and who is wrong with respect to climate change, but I’d be extremely surprised if the IPCC, which involves hundreds of experts and seems to summarise the science cautiously and honestly, has got it very wrong. I may of course be wrong!
Anyway, thanks for taking the time to respond so thoroughly to my comments. Your responses have been thought-provoking for me. Best wishes for 2013 to you and all at this site.
