Figure 1 compares the GPCP v2.2 precipitation anomalies for global land and ocean surfaces. The dataset starts in January 1979 and ends in February 2013. Both land and ocean precipitation anomalies have been smoothed with 13-month running-average filters to suppress the monthly variability. Looking at the global ocean precipitation anomalies (red curve), it’s blatantly obvious that the primary causes of annual precipitation variations are El Niño and La Niña events. The 1982/83, 1986/87/88, 1997/98 and 2009/10 El Niño events are plainly visible, and you can also make out the lesser El Niños in the early 1990s and mid-2000s. The trailing La Niñas are also evident.

Figure 1
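As an aside for readers who want to reproduce the smoothing, here is a minimal sketch of a centered 13-month running-average filter. This is my own Python illustration, not the code used to prepare the figures:

```python
import numpy as np

def running_mean_13(x):
    """Centered 13-month running average: 6 months either side of the
    index month. The first and last 6 points have no full window, so
    they are left as NaN rather than plotted."""
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    for i in range(6, len(x) - 6):
        out[i] = x[i - 6:i + 7].mean()
    return out

# Sanity check: a constant series is unchanged away from the edges.
smoothed = running_mean_13(np.ones(24))
```

Applied to monthly anomalies, a filter like this suppresses month-to-month variability while leaving the multi-month ENSO swings visible.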

The opposing relationship between ocean precipitation and land surface precipitation is also obvious. Land surface precipitation generally drops in response to El Niños and increases during La Niñas. There is also a strong dip and rebound in the land surface precipitation data starting about 1991 that may be a response to the eruption of Mount Pinatubo. Curiously, the ocean data does not show a similar response.

There also appear to be other factors contributing to the longer-term variations. The Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation, the Indian Ocean Dipole, and other coupled ocean-atmosphere processes are the likely suspects. But El Niño-Southern Oscillation (ENSO) is one of the primary factors governing precipitation and the water cycle on this planet, if not the primary factor.

And what can’t climate models simulate? ENSO. For further information about climate models’ failings when trying to simulate ENSO, refer to Guilyardi et al (2009). Climate models also can’t simulate the Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation, the Indian Ocean Dipole, and other coupled ocean-atmosphere processes.

I thought of ending the post there. There really is no need to continue. But for those interested, Figures 2 and 3 compare the CMIP5-archived models to the land surface precipitation anomalies and the precipitation data over the oceans. Ocean and land masks are available through the KNMI Climate Explorer for the model outputs as well. As noted in the title blocks, we’re using the multi-model ensemble member mean of all of the models in the CMIP5 archive. As with the other model-data comparisons, we’re using RCP6.0 because it is the most similar to the widely used A1B scenario from earlier modeling efforts. And as a reminder, the models in the CMIP5 archive are being used by the IPCC for their upcoming 5th Assessment Report.

Figure 2


Figure 3

The climate models simulate increases in precipitation over both land and ocean surfaces, and the rates are very similar. But the data show basically no long-term trend over the oceans and a decline over land. In more basic terms, according to the climate models, if manmade greenhouse gases were responsible for the changes in precipitation over the past few decades, precipitation over land surfaces would have increased; but the data show it has declined.
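Trend comparisons like those in Figures 2 and 3 come down to ordinary least-squares fits to the monthly anomalies. Here is a hedged sketch on synthetic series (the names and numbers are illustrative stand-ins, not the GPCP data):

```python
import numpy as np

# Illustrative only: synthetic monthly anomaly series, not the GPCP data.
rng = np.random.default_rng(0)
months = np.arange(410)  # Jan 1979 through Feb 2013 is about 410 months
flat_series = rng.normal(0.0, 1.0, months.size)            # no underlying trend
rising_series = 0.002 * months + rng.normal(0.0, 1.0, months.size)

def trend_per_decade(y):
    """Least-squares slope of a monthly series, expressed per 120 months."""
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return slope * 120.0
```

The fitted slope cleanly separates a genuinely trending series from a flat one, which is all the model-data trend comparison requires.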

STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN (With a Minor Addition that’s Underlined)

We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.

The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:

The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:

If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

Gavin Schmidt replied with a general discussion of models:

Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).

To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; so we use the average because we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
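Gavin Schmidt’s point is easy to demonstrate with synthetic data: give many “runs” the same forced signal plus independent noise, and the ensemble mean recovers the signal far better than any single run. An illustrative sketch only, not actual model output:

```python
import numpy as np

# Each synthetic "run" = the same forced signal + independent noise.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 200)
forced = 0.5 * t                                           # stand-in forced signal
runs = forced + rng.normal(0.0, 0.3, size=(100, t.size))   # 100 noisy realisations

# Averaging the runs suppresses the uncorrelated noise by roughly sqrt(100).
single_run_error = np.abs(runs[0] - forced).mean()
ensemble_mean_error = np.abs(runs.mean(axis=0) - forced).mean()
```

Note the hidden assumption this rests on: the noise must be independent and zero-mean across runs, which is exactly the point some commenters below dispute for chaotic systems.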

The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily the Wayback Machine has a copy. NCAR wrote on that FAQ webpage that had been part of an introductory discussion about climate models (my boldface):

Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.

In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.

CLOSING

We can add global precipitation anomalies over land and over the oceans to the growing list of climate model failures. The others included:

And we recently illustrated and discussed in the post Meehl et al (2013) Are Also Looking for Trenberth’s Missing Heat that the climate models used in that study show no evidence that they are capable of simulating how warm water is transported from the tropics to the mid-latitudes at the surface of the Pacific Ocean, so why should we believe they can simulate warm water being transported to depths below 700 meters without warming the waters above 700 meters?

Surface temperatures and precipitation are the two primary metrics that interest humans. Will the future be warmer or cooler? And will it be wetter or drier? Climate models show no skill at being able to answer those two fundamental questions about climate change.

Am I correct in thinking that there might be a problem with models and ensemble means with respect to real world data? While by definition a set of models will all use absolute values, in the real world each instrument will have some difference between the measured and the actual value. This might be a fixed offset or one that scales with the measured value or even a combination of both. If this is the case then does this imply that the ‘ensemble mean’ is only a valid measure for models and not for real world measurements?

Thanks Bob.
I hate temperature anomalies because they assume that everything should remain constant and that changes indicate a problem, whereas weather changes because inputs change, and this gives a chaotic system. Chaotic systems will have an “average”, but variations from that average do not prove a problem, only that the system is chaotic and extremes happen to fall within the average. Monitoring a chaotic system will change the average over time as the time span gets larger, but this is the nature of chaos, not a problem with climate.

I shall continue to disagree that the average of an ensemble of different models represents a useful physical property. Hence, it has no validity in testing the validity of the models. If each model in the ensemble had the same trend then I’d be less sceptical of the methodology. But if they don’t each have the same trend then there is more than noise differentiating the models. So the average of wrong is wrong, except by chance. The conclusion is robust, however: the models are wrong.

I am seeing more and more articles where the Global Climate Models are failing to simulate real-world effects, in everything from ice-loss, to cloud cover to precipitation to temperature and humidity. IF the models cannot accurately model these effects, and these are the ONLY places where climate scientists can find the human “fingerprint” of AGW, then how the hell can there be any certainty whatsoever when the AGW hypothesis is based solely on model outputs?

The overwhelming and central fact in these GCMs is that they DO NOT model the actual climate accurately. What parts of the climate they do simulate, they simplify or simulate badly, and many other activities of climate are not simulated at all. Given that this is the case, it is obvious to anyone with an IQ over mud that (apologies for shouting, but this has to be stated loud and clear)… THE MODELS ARE WRONG!!!!

If the models are not including several atmospheric processes, then the models are incomplete. Another way of describing an incomplete model is to say that it is WRONG!

IF the models are incomplete, because these processes or activities of the climate are missing or so poorly understood that they are simplified or excluded altogether, then they cannot accurately model how these processes interact with other modelled processes. Therefore the MODELS ARE WRONG!

In Formula One motor racing, the teams use very complex model-driven simulators. Although incredibly complex and continuously updated with real-world data, even these models are often wrong and lead the teams into making changes to components which then fail to have the intended result in reality, on the track. The climate is several orders of magnitude more complex, with far more non-linear processes with unpredictable consequences for other processes. Not simulating these processes accurately, or at all, means that THE MODELS ARE WRONG!

The Climate modellers admit that their models are not complete. Yet they still expect us to base our belief and faith and trillions of dollars worth of policy and potentially massively restrictive legislation, on models WHICH ARE WRONG!

C’mon Bob. Skeptics justifiably criticized Hadley Center charts because they ran their smoothing function to the end point. Your chart should omit six years of smoothed data from both start and endpoint. It is ok to overlay the smoothed data to the raw data, but trends in the data outside the smoothed region may be overwhelmed by noise.

son of mulder says:
July 10, 2013 at 5:11 am
“I shall continue to disagree that the average of an ensemble of different models represents a useful physical property. Hence, it has no validity in testing the validity of the models. If each model in the ensemble had the same trend then I’d be less sceptical of the methodology. ”

The theory of anthropogenic global warming, if it can be evaluated AT ALL, must be evaluated by comparing what its proponents claim to be the best representation against the data.

If the proponents switch to saying that the best representation is at the moment Model XYZ (as one would expect in a normal science), then that must be the yardstick. But that is not what they say (for obvious reasons; and these reasons are not scientific but political; therefore the theory of CO2AGW is not a normal science but a branch of political science – though this doesn’t change the approach to invalidating it).

The last three initial cooling-phase events of the PDO seem to have been deep drought periods here in the southwestern United States, which caused large, mostly natural fires (“dry lightning” – lots of thunderstorms with little rain) in the forests of New Mexico. We saw this in the early 1900s, the 1950s and are seeing it again now. The iconic little bear that was dubbed Smokey was found singed in a small tree near Capitan, NM in 1951. There is a clear temperature and precipitation connection to NM relating to ENSO-driven events.

Are the models as simple as shown? Do they really lack the fundamental variability (and pattern) of the data?

Precipitation redistributes global energy in a huge way, even if the net result is zero in a stable environment. The impact of large swings of precipitation on other systems must be considerable. It would be like saying the Bay of Fundy’s 46′ tidal range is the same as the Mediterranean’s 6″, because over the course of the week both tides average out to nil.

The climate models diverge from reality because they are based on the incorrect assumption that noise is random and thus that average noise will converge to zero over reasonably short time scales. This assumption is not true for “chaotic” noise, as is the case with weather data. At all time scales less than infinity the average of chaos diverges from zero, which is why the ensemble mean fails to match reality. This is an inherent flaw in the design of the GCMs and the ensemble mean.

MThompson says: “C’mon Bob. Skeptics justifiably criticized Hadley Center charts because they ran their smoothing function to the end point. Your chart should omit six years of smoothed data from both start and endpoint. It is ok to overlay the smoothed data to the raw data, but trends in the data outside the smoothed region may be overwhelmed by noise.”

First, this isn’t Hadley Centre data. Second, the data and model outputs in Figures 2 and 3 aren’t smoothed. They present monthly values. Third, the trends are based on the monthly figures.

The spaghetti graph of the GCMs has much more value than the ensemble mean, because the upper and lower bounds of the spaghetti graph are telling us what to expect from natural variability. In effect, the climate models are saying that without any change in forcings, climate could do this, or it could do that.

So, while chaos invalidates the ensemble mean, it has no such effect on the spaghetti graph. Rather the spaghetti graph tells us that without any change in forcings, wild variations in both temperature and precipitation are possible due simply to natural variability. This has largely been overlooked by climate science.

The valuable part of the spaghetti graph is the lower and upper bounds on the spaghetti. This tells us what the models are predicting for natural variability. And the climate models are telling us that natural variability is HUGE, much larger than assumed. The ensemble mean however is statistical nonsense because the underlying data is chaotic.
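A toy sketch of the envelope idea (synthetic “runs”, not CMIP5 output): it is the spread across runs, not their mean, that bounds the variability the models allow.

```python
import numpy as np

# 30 synthetic drifting "runs" standing in for a spaghetti graph.
rng = np.random.default_rng(1)
runs = np.cumsum(rng.normal(0.0, 0.1, size=(30, 120)), axis=1)

ensemble_mean = runs.mean(axis=0)
lower, upper = runs.min(axis=0), runs.max(axis=0)   # spaghetti envelope
envelope_width = (upper - lower).mean()
# The envelope is far wider than the mean suggests: averaging hides
# the range of outcomes the runs actually produce.
```

Whether the CMIP5 spread really measures natural variability is disputed in this thread; the sketch only shows what the envelope computation is.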

son of mulder says: “I shall continue to disagree that the average of an ensemble of different models represents a useful physical property…”

I understand your complaint. However, the IPCC continues to present the model mean, and as long as they are going to use it, I will continue to use the model mean as an easy way to show, collectively, that the models show no skill at being able to simulate the recent past.

I especially liked, “So the average of wrong is wrong except by chance.”

I shall continue to disagree that the average of an ensemble of different models represents a useful physical property. Hence, it has no validity in testing the validity of the models. If each model in the ensemble had the same trend then I’d be less sceptical of the methodology. But if they don’t each have the same trend then there is more than noise differentiating the models. So the average of wrong is wrong, except by chance. The conclusion is robust, however: the models are wrong.

I agree with son of mulder. Multiple runs of one model, averaged together, will reduce noise; in that I agree with Mr. Schmidt. However, the reason for having more than one model is to test different concepts. If one model treats CO2 as having cataclysmic importance, and another treats CO2 as unimportant, averaging them together is not the same as running a third model that treats CO2 as having moderate significance. Averaging the results of different models just ensures that the result is wrong and complicates the search for the reason why.
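The point is easy to make concrete with two toy “models” (illustrative numbers only): their average follows a trajectory that neither model actually produces.

```python
import numpy as np

# Two toy "models" built on opposing assumptions (illustrative only).
t = np.arange(100, dtype=float)
model_high = 0.04 * t    # treats CO2 as having cataclysmic importance
model_low = 0.0 * t      # treats CO2 as unimportant
average = (model_high + model_low) / 2.0
# The average warms at 0.02 per step -- a path produced by neither
# of the two models being averaged.
```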

ferd berple says: “The spaghetti graph of the GCMs has much more value than the ensemble mean, because the upper and lower bounds of the spaghetti graph are telling us what to expect from natural variability.”

Climate models do not simulate natural variability. They create noise. They do not simulate natural coupled ocean-atmosphere processes – like ENSO, like the AMO, like the PDO.

While most people recognize 1) as a source of natural variability, the climate models are telling us that 2) is also a source of natural variability. What the GCMs are telling us is that if you start with two identical earths and hold all the forcings identical, the two earths are very unlikely to have the same climate. The climate may be similar, within the bounds set by the GCMs, but not the same.

This is why the ensemble mean varies from reality. Not because of errors or noise in the system, but because of the nature of the system itself. In effect, the climate models are telling us that for any given combination of forcings you do not get a single answer for temperature or precipitation; you get a range of answers, and this range of answers is the “predicted” forcing-independent natural variability of the system.

This is why the ensemble mean varies from reality. Not because of errors or noise in the system
+++++++++
PS: I’m not trying to say the GCMs don’t suffer from errors or noise, rather that even if they had zero errors and zero noise, there would still be a range of answers for each unique combination of forcings, and this range gives a prediction of the forcing-independent natural variability.

I don’t think it’s a good way to study anomalies globally, because each part of the world reacts differently to forcing (maybe because of a compensation rule). That could explain why models can’t answer definite questions: each part of the world has its own question and its own answer, and a single global answer is otherwise impossible (hoping for one global answer to climate change is not, in my view, a scientific method).

Paul Vaughan, in what way do you think these data would “constrain” the models? The subtle impact that evolving earth parameters have on insolation are obvious, but small. Is there some other impact these data have on the models I haven’t considered?

Bob says: “So why are you introducing smoothing to the discussion?”
Well, I guess I was misled by the chart title that states that the data was smoothed with a 13-month filter. (sic)

You seem a little touchy, Bob. I know a lot of people jump on you, and that’s why. Really, all I want to point out is that everyone who wants to present data should follow best practices. This is not a criticism of your work, merely your presentation.

MThompson says: “Well, I guess I was misled by the chart title that states that the data was smoothed with a 13-month filter. (sic)”

There was nothing misleading about my presentation of data and model outputs—or in the text of the post that accompanies them. If you were misled, MThompson, it was due to your own failure to grasp what was presented. That is, you misled yourself. There are 3 graphs in this post, MThompson. Figure 1 notes in its title block that the data have been smoothed with 13-month filters—not 6-year filters as you mentioned in your first comment. My intent in Figure 1 was to highlight the ENSO components in the two datasets. The use of a 6-year filter as you recommend would have suppressed the ENSO-related variability—or aren’t you aware of that? Figures 2 and 3 contain the trends. Do they state in their title blocks that the data have been smoothed, MThompson? No. I presented the monthly data, because trend analyses of smoothed data often provide different results than the raw data, and I did not want to present skewed trends to my readers.

MThompson says: “You seem a little touchy, Bob. I know a lot of people jump on you, and that’s why.”

I’m not touchy. I try to write concisely, especially when dealing with someone displaying troll-like behavior. And rarely do “a lot of people jump on” me. What you think you “know” about my responses to your nonsensical comments is obviously incorrect.

MThompson says: “Really, all I want to point out is that everyone who wants to present data should follow best practices. This is not a criticism of your work, merely your presentation.”

What is interesting is that the models predict increased precipitation, in line with positive water feedback, while reality shows reduced precipitation, which is in line with negative water feedback due to the increased partial pressure of CO2 reducing the amount of H2O in the atmosphere.

Yet not a single climate model uses negative water feedback, and not a single climate model matches reality. High school chemistry quite clearly taught that if atmospheric pressure remains the same and you increase the amount of CO2, then the amount of some other gas will be reduced, all else remaining the same. The most likely gas to be reduced is water vapor, because it exists naturally as a solid, liquid and gas – something no other atmospheric gas can claim.

fred;
Agree! The vast bulk of the atmosphere is composed of N2 and O2, non-radiative non-GHGs. They are unable to dispose of sensible heat except through evaporative loss from the top of the atmosphere. Only GHGs, especially H2O, can radiate energy to space. Hence, in their absence, the atmosphere would heat until it could “boil” away enough mass to counterbalance solar irradiation.

The strong anticorrelation between land and ocean rainfall is interesting.
It seems to exist in the unsmoothed data too.
Comparing the increases in ocean rainfall with UAH lower troposphere temperature it seems the ocean rainfall leads temperature rises by 3 to 6 months.
Is it possible to mask down the rainfall data to see if this is happening more specifically in the “cold tongue” region and the “NINO 3.4” region in the Pacific?
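The lead/lag question raised above can be checked with a simple lagged-correlation scan. Here is a sketch on synthetic series with a built-in 4-month lead, not the actual GPCP/UAH data:

```python
import numpy as np

# Synthetic series with a built-in 4-month lead (illustrative only).
rng = np.random.default_rng(7)
driver = np.convolve(rng.normal(0.0, 1.0, 420), np.ones(12) / 12, mode="same")
response = np.roll(driver, 4) + rng.normal(0.0, 0.05, 420)  # lags driver by 4

def lagged_corr(x, y, k):
    """Correlation of x against y when x is assumed to lead y by k samples."""
    if k > 0:
        return np.corrcoef(x[:-k], y[k:])[0, 1]
    if k < 0:
        return np.corrcoef(x[-k:], y[:k])[0, 1]
    return np.corrcoef(x, y)[0, 1]

def best_lag(x, y, max_lag=12):
    """Lag (in samples) that maximises the correlation; positive = x leads."""
    lags = list(range(-max_lag, max_lag + 1))
    return lags[int(np.argmax([lagged_corr(x, y, k) for k in lags]))]
```

Run on the real series, `best_lag` would give a first estimate of whether ocean rainfall leads the temperature record, and by how many months.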

Bob, n=13 smoothing filter should be constructed by averaging six data points to the left, six to the right and the index data point. This means that a smoothed value for the initial six and the final six observation points does not exist. This is not central to your thesis, but I care.

I do however agree with you that my personal attack (indicating that you seem a little touchy) was completely deserving of your retaliation impugning my motivation, education and intelligence. Thank you for putting me in my place.

Hello, I am new to this site. I am from the Netherlands, and the graphic discussed here is from our national meteorological institute. This institute is biased and needs money.
But the most interesting thing about the graphic is that it is consistent with our winters in the Netherlands.
Extremely cold winters occurred in 1985, 1987, 1993 and 2013 – at the lowest points of the land precipitation. So we can expect an even colder winter next, in 2014.
Still our national (left) government claims, with the help of national TV and the KNMI, that temperatures are above average and climate warming is a fact… only for us to have cold feet mid-summer now… and to pay hefty taxes on energy and such.
Beware, USA… do not let this happen to you.

Merrick (July 10, 2013 at 10:28 am) asked:
“Paul Vaughan, in what way do you think these data would “constrain” the models? The subtle impact that evolving earth parameters have on insolation are obvious, but small. Is there some other impact these data have on the models I haven’t considered?”

For one example, if the climate models are getting the evolution of seasonal wind fields right, the models will be able to accurately mimic the decadal volatility clustering of semiannual LOD. Model failure in this case would sharply highlight insufficient attention to temperature GRADIENTS.

Further advice:

“Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — Nikolay Sidorenkov

MThompson says: “Bob, n=13 smoothing filter should be constructed by averaging six data points to the left, six to the right and the index data point. This means that a smoothed value for the initial six and the final six observation points does not exist. This is not central to your thesis, but I care.”

Look closely at the graph in Figure 1, MThompson. The 13-month running-average filters are centered on the 7th month and the 6 starting data points and 6 ending data points are not shown. Your complaint is unwarranted.

Instead of confronting, it’s always best to ask, MThompson. Stating that my graphs are misleading doesn’t sit well with me, as you have seen.

Andre: You referred to a graph in your comment, but one wasn’t shown. If you tried to upload it directly with your comment, that will not work. If it’s a graph that’s already online, simply provide a link to it. Example is my Figure 1 above:

If you’ve created the graph yourself and it resides on your computer, then you should upload it to a picture-sharing website like TinyPic: http://tinypic.com/
They then provide an HTML address to the picture.

Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’).
========
Noise is the simplest example of chaos. It is chaos of 1 dimension. It is the 2 body problem in orbital mechanics. A single object orbiting 1 attractor. We can solve this mathematically and statistical theory is almost entirely based on this model of reality. The orbit has a mean and a variance.

Weather however is chaos of many dimensions. It is the N body problem in orbital mechanics. It is an object orbiting many attractors. For example, we can easily see that temperature orbits attractors with periods of 24 hours and 365.25 days. Many other strong attractors are hinted at in the temperature records.

We cannot solve this mathematically. Traditional statistics deals poorly with chaos of more than 1 dimension because the average orbit and variance are largely meaningless. As you increase the scale the answer does not converge on a single mean. Rather, there are many different means and averages, all operating with different orbital periods.

For example daily average temperature varies wildly over the period of a year, and over the cycle of ice ages. The average temperature of the earth changes as you increase the time scale, while in traditional statistics you would expect the average to converge as you increase the time scale.
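The convergence claim can be illustrated numerically: the running mean of white noise settles toward a fixed value, while the running mean of a random walk (a crude stand-in for a system wandering between attractors) keeps drifting. Synthetic data only:

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, 20_000)                 # white noise
walk = np.cumsum(rng.normal(0.0, 1.0, 20_000))       # random walk

def running_mean(x):
    """Mean of x[0..n] for every n: the 'average so far'."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

noise_drift = np.abs(running_mean(noise)[-1])        # settles near zero
walk_spread = running_mean(walk).max() - running_mean(walk).min()
# walk_spread stays large: the "average" of the walk never settles.
```

A random walk is not chaos in the strict sense, but it makes the same statistical point: for such series the sample mean does not converge as the record lengthens.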

We are only just beginning to create statistical models of reality to deal with this complexity – for example, reducing chaos to a stochastic process of a given order, and thus applying standard statistical theory. However, we have only begun to scratch the surface. This problem is not in any way unique to climate science. In quantum mechanics the Schrödinger equation gives us an exact solution for the hydrogen atom, but can only approximate reality for helium and heavier elements.