As a preview of the upcoming Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), due out in September 2013, this post takes a brief look at the multi-model ensemble mean of the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations of global surface temperature anomalies through the year 2100. The four new scenarios are discussed and shown. We’ll also compare CMIP5 and CMIP3 hindcasts of the 20th Century to see if there have been any improvements in how well climate models simulate the rates at which global surface temperatures warmed and cooled since 1901. For the observations in another comparison, we’ll use a weighted average of the Met Office’s new HADSST3 and CruTEM4 surface temperature datasets, approximating the HadCRUT4 data, which has yet to be released formally in an easy-to-use format.

The KNMI Climate Explorer Monthly CMIP5 scenario runs webpage was used for RCP global surface temperature hindcast and projection data. Keep in mind that it’s still a little early. As KNMI notes:

The collection here changes almost daily, it is not definitive by any means. The CMIP5 system itself is in flux at the moment.

But this post will give us a reasonable idea of the direction the researchers are taking the hindcasts and projections.

A REMINDER

Figure 1 is Figure SPM.5 from the Summary for Policymakers of Working Group 1 of the Intergovernmental Panel on Climate Change’s (IPCC’s) 4th Assessment Report (AR4). It shows hindcasts and projections of global surface temperatures for a number of scenarios. The scenarios are explained on page 18 of the linked Summary for Policymakers. Scenario A1B is commonly referenced. In fact, that is the only scenario provided as merged hindcast-projection data (the first 3 fields) at the Monthly CMIP3+ scenario runs webpage at the KNMI Climate Explorer. For a full-sized version of the IPCC’s Figure SPM.5, see here. As shown, for scenario A1B, the models are projecting a rise in surface temperatures (relative to the base years of 1980 to 1999) of about 2.8 deg C.

Figure 1

CMIP5 PROJECTIONS OF GLOBAL SURFACE TEMPERATURE ANOMALIES

The Lawrence Livermore National Laboratory (LLNL) Program for Climate Model Diagnosis and Intercomparison (PCMDI) maintains archives of the climate models used in the IPCC’s assessment reports. These archives are known as the Coupled Model Intercomparison Project (CMIP). The 3rd phase archive (CMIP3) served as the source of climate models for the IPCC AR4, and the 5th phase archive (CMIP5) is the source of models for the IPCC’s upcoming 5th Assessment Report (AR5).

It appears the IPCC will be presenting four scenarios in AR5, and those scenarios are called Representative Concentration Pathways, or RCPs. The World Meteorological Organization (WMO) writes on its Emissions Scenarios webpage:

The Representative Concentration Pathways (RCP) are based on selected scenarios from four modelling teams/models working on integrated assessment modelling, climate modelling, and modelling and analysis of impacts. The RCPs are not new, fully integrated scenarios (i.e., they are not a complete package of socioeconomic, emissions, and climate projections). They are consistent sets of projections of only the components of radiative forcing (the change in the balance between incoming and outgoing radiation to the atmosphere caused primarily by changes in atmospheric composition) that are meant to serve as input for climate modelling. Conceptually, the process begins with pathways of radiative forcing, not detailed socioeconomic narratives or scenarios. Central to the process is the concept that any single radiative forcing pathway can result from a diverse range of socioeconomic and technological development scenarios. Four RCPs were selected, defined and named according to their total radiative forcing in 2100 (see table below). Climate modellers will conduct new climate model experiments using the time series of emissions and concentrations associated with the four RCPs, as part of the preparatory phase for the development of new scenarios for the IPCC’s Fifth Assessment Report (expected to be completed in 2014) and beyond.

Table 1.1: Overview of Representative Concentration Pathways (RCPs)

RCP 8.5: Rising radiative forcing pathway leading to 8.5 W/m² in 2100.

RCP 6: Stabilization without overshoot pathway to 6 W/m² at stabilization after 2100.

RCP 4.5: Stabilization without overshoot pathway to 4.5 W/m² at stabilization after 2100.

RCP 3-PD: Peak in radiative forcing at ~3 W/m² before 2100, followed by a decline.

NOTE: RCP 3-PD (the trailing “2” in the WMO table is a footnote marker) is listed as “RCP 2.6” at the KNMI Climate Explorer Monthly CMIP5 scenario runs webpage, and will be referred to as RCP 2.6 in this post.

Further information about the individual RCPs can be found at the International Institute for Applied Systems Analysis (IIASA) webpage here.

Figure 2 compares the multi-model mean of the global surface temperature hindcasts/projections for the 4 RCPs, starting in 1861 and ending in 2100. (The use of the model mean was discussed at length in the post Part 2 – Do Observations and Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming?, under the heading of CLARIFICATION ON THE USE OF THE MODEL MEAN.) The base years are 1980 to 1999, the same as those used by the IPCC in AR4. Also listed in the title block are the numbers of models and ensemble members that make up the model mean as of this writing; as noted above, those numbers are subject to change. Based on the models that presently exist in the CMIP5 archive at the KNMI Climate Explorer, the IPCC’s projected rises in global surface temperature by the year 2100 in AR5 should range from about 1.3 deg C for RCP 2.6 to a whopping 4.4 deg C for RCP 8.5. At 2.7 deg C in 2100, RCP 6.0 projects about the same surface temperature warming as SRES A1B, and if memory serves, the SRES A1B forcing in 2100 was about 6.05 W/m², comparable to RCP 6.0.

Figure 2

Notice, however, that RCP 6.0 has received the least attention from the modelers, even though it’s about the same as SRES A1B. Based on the number of models at the KNMI Climate Explorer, RCP 6.0 has so far been simulated by only 13 models with a total of 28 ensemble members, while RCP 8.5 is getting the most interest, with 29 models and 59 ensemble members. Is the IPCC going to follow suit and spend most of its time discussing RCP 8.5 in AR5? The projected warming of RCP 8.5 appears to be in the neighborhood of the old SRES A1FI.

The model mean of the CMIP5 simulations of 20th Century global surface temperature anomalies for the four RCPs is shown in Figure 3. The data run from 1901 to 2012. The base years for anomalies, here and for the remainder of this post, are 1901 to 1950, which are the base years the IPCC used for its Figure 9.5 in AR4. All but RCP 6.0 are closely grouped; RCP 6.0 diverges from the others starting at about 1964. Is this caused by the limited number of models simulating RCP 6.0? It’s still early. The modeling groups have some time to submit models to CMIP5 for inclusion in AR5.

Figure 3

The model mean of the global surface temperature anomaly hindcasts of the 12 models used by the IPCC in Figure 9.5 cell a of AR4 has been added in Figure 4. The RCP hindcasts of global surface temperature anomalies appear to differ most from the AR4 hindcast during the 1960s and ’70s, as though the newer RCP-based models are exaggerating the impacts of the eruption of Mount Agung in 1963/64. Other than that period, the model mean of the newer RCP-based models appears to mimic the older model mean.

Clearly, the changes are not linear and can also be characterized as level prior to about 1915, a warming to about 1945, leveling out or even a slight decrease until the 1970s, and a fairly linear upward trend since then (Figure 3.6 and FAQ 3.1).

We have in past posts used HadCRUT3 land plus sea surface temperature anomalies, the same dataset presented by the IPCC in AR4 for comparisons to models, and have further clarified those warming and “flat temperature” periods. The years that marked the transitions were 1917, 1944, and 1976.

For the following four comparison graphs of CMIP3- and CMIP5-based global temperature anomaly hindcasts, we’ll use RCP 8.5, for the simple reason that it’s the scenario that was modeled most often and has the most ensemble members. And we’ll use the multi-model ensemble mean of the 12 models the IPCC used in their Figure 9.5 cell a.

Figures 5 through 8 compare global surface temperature anomaly hindcasts and linear trends of the CMIP3 (20C3M) and CMIP5 (RCP 8.5) multi-model means over the 20th Century (1901-2000). The data have been broken down into the two warming and two “flat temperature” periods. The linear trends of the CMIP3- and CMIP5-based models are reasonably close during the early “flat temperature” period (1901-1917), the early warming period (1917-1944), and the late warming period (1976-2000). Any changes in forcings used by the modelers during those periods do not appear to have had any major impact on the rates at which modeled global surface temperatures warmed. On the other hand, as shown in Figure 6, there is a significant difference in the trends during the mid-20th Century “flat temperature” period (1944-1976). The CMIP3 hindcast shows a slight positive trend during this period, while the CMIP5 (RCP 8.5) trend shows a moderate rate of cooling.
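The period-by-period comparisons above boil down to fitting a least-squares line to each sub-interval of an annual series and expressing the slope per decade. A minimal sketch, assuming a synthetic series standing in for the model-mean anomalies (the helper `trend_per_decade` is hypothetical, not code from KNMI or the CMIP archives):

```python
import numpy as np

def trend_per_decade(years, anoms, start, end):
    """Least-squares linear trend (deg C/decade) over [start, end]."""
    years = np.asarray(years)
    anoms = np.asarray(anoms, dtype=float)
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], anoms[mask], 1)[0]
    return 10.0 * slope_per_year

# Synthetic stand-in: warms at exactly 0.02 deg C/yr, i.e. 0.2 deg C/decade,
# so the fitted trend over any sub-period recovers that rate.
years = np.arange(1901, 2001)
anoms = 0.02 * (years - 1901)
print(trend_per_decade(years, anoms, 1944, 1976))
```

With real model-mean and observed series in place of the synthetic one, the same function yields the trends compared in Figures 5 through 8.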

We presented and discussed the recent updates of the Hadley Centre’s HADSST3 sea surface temperature anomaly dataset here, and introduced the recent updates to their CruTEM4 land surface temperature anomaly dataset here. Unfortunately, the Hadley Centre has not yet released its new HadCRUT4 land plus sea surface temperature data through its HadCRUT4 webpage in a form that’s convenient to use. We can, however, approximate the global HadCRUT4 data with a weighted average of the HADSST3 and CruTEM4 data, using the same land/sea weighting as the older HadCRUT3 data. To determine that weighting, I used annual HADSST2, CruTEM3, and HadCRUT3 data from 1901 through 2011, comparing the linear trend of a weighted average of the HADSST2 and CruTEM3 data to the trend of the HadCRUT3 data. The weighting determined was 28.92% land surface temperature and 71.08% sea surface temperature, and it has been used in the approximation of the HadCRUT4 data that follows.
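The weighted-average step can be sketched as follows. This is a minimal illustration rather than the actual processing: the weights are the ones derived above, but the input series are made-up stand-ins for the annual CruTEM4 and HADSST3 anomalies:

```python
import numpy as np

# Weights derived in the text by matching the trend of a weighted
# HADSST2/CruTEM3 average to the HadCRUT3 trend (1901-2011).
W_LAND = 0.2892  # CruTEM4 land surface temperature anomalies
W_SEA = 0.7108   # HADSST3 sea surface temperature anomalies

def approximate_hadcrut4(land_anoms, sea_anoms):
    """Approximate global HadCRUT4 anomalies as a fixed-weight
    blend of land and sea surface temperature anomalies."""
    land = np.asarray(land_anoms, dtype=float)
    sea = np.asarray(sea_anoms, dtype=float)
    return W_LAND * land + W_SEA * sea

# Illustrative (made-up) annual anomalies in deg C:
land = [0.10, 0.25, 0.40]
sea = [0.05, 0.15, 0.30]
print(approximate_hadcrut4(land, sea))
```

The actual HadCRUT4 blending done by the Hadley Centre is more involved (gridded, with coverage weighting); this fixed two-term blend is only the approximation used for the comparisons in this post.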

Note: The CruTEM4 data is available at the Hadley Centre’s webpage here, specifically the annual data here, and the HADSST3 data is available through the KNMI Climate Explorer here.

Figure 9 compares the approximated HadCRUT4 land plus sea surface temperature data to the 4 RCP-based hindcasts from 1901 to 2006. The end date of 2006 is dictated by the HADSST3 data, which (as of this writing) still has not been brought up to date by the Hadley Centre. The models appear capable of reproducing the rate at which global temperatures warmed during the late warming period of 1976 to 2006, but it looks like they are still not capable of reproducing the rates at which global temperature anomalies warmed and cooled before that. Let’s check.

Figure 9

We’ll again use the multi-model ensemble mean of the CMIP5-based RCP 8.5 global surface temperature hindcast available through the KNMI Climate Explorer, simply because that’s the scenario the modelers have simulated most. Figures 10 through 13 compare the linear trends of the model mean to the approximated HadCRUT4 global surface temperatures during the 2 warming periods and 2 “flat temperature” periods acknowledged by the IPCC. Starting with the late warming period (Figure 10), the models do a reasonable job of approximating the rate at which global surface temperatures warmed. But based on the model mean, the CMIP5-based hindcasts of the 20th Century are:

1. not able to simulate the rate at which global surface temperatures cooled from 1944 to 1976 (Figure 11),

2. incapable of simulating how quickly global surface temperatures warmed from 1917 to 1944 (Figure 12); the observations warmed at a rate more than 3 times faster than simulated by the models, and,

3. not capable of simulating the low rate at which global surface temperatures warmed from 1901 to 1917 (Figure 13).

Figure 10


Figure 11


Figure 12


Figure 13


CLOSING

This was a preview. The intent was to give an idea of the direction of the IPCC’s projections of future global surface temperatures and a glimpse at the hindcasts to see if there have been any improvements. According to the schedule listed in the IPCC/CMIP5 AR5 timetable, papers to be included in the IPCC’s 5th Assessment Report (AR5) are to be submitted by July 31, 2012. Therefore, for projections of future global temperatures, there may be a few models that have not yet made it into the CMIP5 archive at the KNMI Climate Explorer. The IPCC also could select specific models for its presentation of 20th Century global surface temperatures, as it did with AR4. But a good number of models and ensemble members already exist in the AR5 archive at the KNMI Climate Explorer; adding a few models should not alter the multi-model ensemble mean too much.

I won’t speculate on whether the IPCC intends to make RCP 8.5 the primary scenario in its discussions of future climate, but the modelers sure did seem enthusiastic about it, with its projection of a 4.4 deg C rise in global temperatures by 2100.

With respect to the simulations of the 20th Century, it appears the modelers did change some forcings during the mid-20th Century “flat temperature” period, in an effort to force the models to show more of a decrease in temperature between 1944 and 1976. Yet the models still have difficulty simulating the rates at which global surface temperatures warmed and cooled since 1901. Compared to the weighted average of HADSST3 and CruTEM4 data (used to approximate HadCRUT4 global surface temperature data), the models are still only able to simulate the rate at which global surface temperatures rose during the late 20th Century warming period of 1976 to 2006. They still cannot simulate the rates at which global surface temperatures warmed and cooled before 1976.

As illustrated and discussed in my book and in a number of posts over the past few months (see here, here, here, here, and here), for many reasons it is very difficult to believe the IPCC’s claim that most of the warming in the late 20th Century was caused by manmade greenhouse gases. One of those reasons: there have been two warming periods since 1901. As further illustrated in this post, the increases in manmade greenhouse gases and other forcings caused modeled global surface temperatures (the RCP 8.5-based multi-model mean of the CMIP5/AR5 climate models) to warm at a rate during the late warming period that is more than 3 times faster than during the early warming period. Yet the observed global surface temperatures during the late warming period, based on the approximation of HadCRUT4 data, warmed at a rate that was only 27% higher than during the early warming period.

And those who have read my book or my posts for the past three years understand that most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events. That further contradicts the IPCC’s claims about the anthropogenic cause of the warming since 1976.

Interesting. I don’t think any of the GCMs will ever get it right while they continue to make the assumption that CO2 has a major effect on temperature. Instead, they should be trying to get to grips with natural climate change by developing the tools to understand spatio-temporal chaos.

It has been several years since I posted a version of this explanation of why the accuracy of hindcasting past temperatures with the computer models is meaningless. I think I should resurrect my explanation here.

Each computer model is composed of dozens of mathematical equations representing known scientific laws, theories, and hypotheses. Each equation has one or more constants. The constants associated with known laws are very well defined. The constants associated with theories are generally accepted, but some of them may be off by a factor of 2 or more, maybe even an order of magnitude. As for the equations representing hypotheses, well, sometimes the hypotheses are just plain wrong. Then each of these equations has to be weighted against the others for use in the computer models, and that adds an additional variable (basically an educated guess) for each law, theory, and hypothesis. This is where the models are tweaked to mimic past climate measurements.

The SCIENTIFIC METHOD is: (1) Following years of academic study of the known physical laws and accepted theories, and after reviewing some data, come up with a hypothesis to explain the data. (2) Develop a plan to obtain and analyze new data. (3) Collect and analyze the data; this may even require new technology not previously available. (4) Determine whether the hypothesis is correct, needs refinement, or is wrong. Either way, new data are available for other researchers. (5) Submit the results, including the data, for peer review and publication.

The output of the computer models run out nearly 90 years forward is considered to be data, but it is not a measurement of a physical phenomenon. There is also no way to analyze this so-called data to determine whether any of the hypotheses in the models are correct, need refinement, or are wrong. Nor can this method indicate whether new hypotheses need to be generated and incorporated into the models. IT JUST IS NOT THE SCIENTIFIC METHOD.

The worst flaw in the AGW argument is the treatment of GCM computer-generated outputs as data, which are then used in follow-on hypotheses. For example: if temperature rises by X degrees in 50 years, then Y will be affected in such-and-such a way, resulting in Z. Then the next ‘scientist’ comes along and says, well, if Z happens, the effect on W will be a catastrophe. “I need (and deserve) more money to study the effects on W.” Hypotheses stacked on hypotheses, stacked on more hypotheses, all based on computer outputs that are not data, using a process that does not lend itself to proof by the SCIENTIFIC METHOD. Look at their results: IF, MIGHT, and COULD are used throughout their news-making results. And when one of the underlying hypotheses is proven incorrect, well, the public only remembers the doomsday results two or three iterations down the hypothesis train. The downstream hypotheses are not automatically thrown out and can even be used for more follow-on hypotheses.

The models have difficulty?
Not nearly the difficulty the modelers do.

What a pity taxpayers are funding so many layers and arms of sloppy science run amok and the useless busy work for countless bureaucrats and academics.
Add it all up and what does the public get out of it? Nothing.

Worse yet we are left to only imagine what those vast resources could have achieved had they been appropriated and utilized by honorable people.

But this is now and looking back. What of tomorrow? Is there no way to curb the waste and redirect public resources to where need and legitimacy can produce what producers of those resources prefer? Genuine and spectacular progress.

“…it appears the modelers did change some forcings during the mid-20th Century “flat temperature” period, in an effort to force the models to show more of a decrease in temperature between 1944 and 1976.” (emphasis added)

That’s, well, an interesting statement. Forcing estimates get updated because of better data regarding those forcings, not to modify model results. The changes seen are the results of both refined measurements and improved modeling of the physics.

Unless you have some support for that claim, I would have to consider it both unreasonable and a smear on the people doing the research.

Steven Mosher says: “for hindcasts you should know that all rcps have the same forcing. you should use then all.”

I’m using the RCP with the greatest number of models and ensemble members. Averaging all of the ensemble members from all of the RCPs would then cause the models that simulated all of the RCPs to carry more weight than the models that didn’t.

I don’t really see why you bother with this, Bob. The “measured” temperatures on which these models attempt to “calibrate” are just fictitious concoctions, particularly from 1999 onwards. Where are the “calibrations” of the lower atmospheric temp record? Let’s see how well they are simulated.

I’d have more confidence in remedies brewed up by a witchdoctor than these model “projections”.

The key element is the slope difference in the 1917-1944 period. The models hindcast less than a third of the observed warming when they don’t have CO2 driving the change. That one period alone is enough to invalidate the models and show a CO2 bias in the warming from the late ’70s and in future projections. When you remove the CO2 forcing from the models, they fail to show “natural” variation. If we assume (and it is possible based on this data) that 0.15 deg C per decade is natural, then we are adding 0.05 deg C per decade above the expected natural variation in the 1976-2006 period. That translates to a 0.5-0.6 deg C increase in 100 years over the expected “natural” increase. Or, of the 0.6 deg C warming in 1976-2006, only 0.15 deg C of that warming is likely caused by increased CO2.
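The comment’s arithmetic can be checked directly, assuming, as the comment does, a 0.6 deg C rise over 1976-2006 and a 0.15 deg C/decade natural rate:

```python
# The commenter's assumed figures (not measured values):
observed_rise = 0.6        # deg C over 1976-2006
years = 2006 - 1976        # 30 years, i.e. 3 decades
natural_rate = 0.15        # assumed natural rate, deg C/decade

observed_rate = observed_rise / (years / 10)  # deg C/decade
excess_rate = observed_rate - natural_rate    # rate above natural

print(round(observed_rate, 3))             # 0.2
print(round(excess_rate, 3))               # 0.05
print(round(excess_rate * years / 10, 3))  # 0.15 deg C over the period
print(round(excess_rate * 10, 3))          # 0.5 deg C per century
```

So, under those assumptions, 0.15 deg C of the 0.6 deg C rise is attributed to CO2, and the excess over natural extrapolates to about 0.5 deg C per century, as the comment states.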

These models are not proof of serious global warming, but actual evidence against it. There is no way around the fact that the early 20th century warming was 75% of the late 20th century warming trend, without CO2. No matter how you try to spin or just pretend this problem doesn’t exist, it won’t go away.

“most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events”

Absolutely right.

The next step is to ascertain how all that energy got into the oceans, both to fuel that period of strong El Niños AND to cause a continuing rise in ocean heat content despite those strong discharges of energy to the air.

And those who have read my book or my posts for the past three years understand that most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events.
====================================================
Bob, all of the rise in satellite global sea surface temperatures…..
….can be explained as the aftereffects of adjustments to the satellite outputs after the launch of Envisat

My overall impression is of “just another” mishmash of calculations based on the usual mutilated data; it’s certainly not much like a passable representation of 20th century temperatures.

As for that WMO introduction …

“The Representative Concentration Pathways (RCP) are based on selected scenarios from four modelling teams/models working on integrated assessment modelling, climate modelling, and modelling and analysis of impacts.”

… it gave me “model overload”. Five times in one sentence! Ye Gods and little fishes! Still, fair warning, I suppose.

the 5 runs of the Japan MRI model show trends ranging from 0.042 to 0.371 K/decade. . . .
In a synthetic experiment we show that at least 40 runs (of 20-yr length) are necessary to get convergence of the ‘cumulative ensemble mean’, and >20 runs of 40-yr length. . . .
1. The US-CCSP report shows major differences between observed temp trends and those from GH models. These disagreements are confirmed and extended by Douglass et al [in IJC 2007] and by NIPCC 2008. Claims of “consistency” between models and obs by Santer et al [in IJC 2008] are shown to be spurious.
2. IPCC-4 [2007] climate models use an insufficient number of runs to overcome “chaotic uncertainty”
3. We find no evidence in support of the surface warming trend claimed by IPCC-4 as evidence for AGW

“for many reasons, it is very difficult to believe the IPCC’s claim that most of the warming in the late 20th Century is caused by manmade greenhouse gases. One of the reasons: there were two warming periods since 1901.”

That’s a non sequitur. But in fact, despite common assertions, CO2 forcing was quite substantial during the early 20th Century. Here’s a plot of forcing and Hadcrut during that time.

In the AR4 SPM, the IPCC summarised this: “It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”

AR, AR , AR, AR5 splutter , clunk….
Hoax won’t restart? Shouldn’t be a problem. Just ignore the satellite temps, hold the aerosol button firmly down and pump the press release a few times… should restart easy. But don’t worry if it doesn’t, the route 20 sustainability bus to Rio should be along shortly. They should be able to give you a ride…

discussing how Hadley processing removes more than half the variation from the record and the circular logic being applied in “validating” these adjustments.

Bob, you are generously comparing the models to data that have already been adjusted to better fit the models. The models are then used to “validate” the adjustments, which, not surprisingly, “works”.

Would you have felt differently if you had not clipped the rest of the paragraph, or if I had used a semicolon instead of a period between “there were two warming periods since 1901” and “As further illustrated in this post…”?

And loads of questions, not specifically for Bob, but for the model-defenders and others:

(1) “HADSST3 data, which still (as of now) has not been brought up to date by the Hadley Centre.” Why the hell not? Have they got something better to do? And if we know (we do, don’t we) what data points they’re using, and if we know (ditto) how they’re putting them together to create the series, could this not be done by a retiree with an Excel spreadsheet?

(2) “Climate modellers will conduct new climate model experiments ” (from the IPCC). Will these “experiments” explain the early 20th century warming or the mid-20th century cooling? Rhetorical question, as you’ve made clear.

(3) What are the current figures for RCP radiative forcing pathways for the 20th Century in W/m²?
Exactly how realistic or outlandish is 8.5 W/m²?

(4) What are the estimates for climate sensitivity of the models? Are they roughly the same, or a wide spread? How much is this changed by adjusting the parameters in the same model and re-running?

(5) Why does AGW always come into play in the 1970s/80s in these scenarios, rather than before? And if, as in Nick Stokes’s citation from the IPCC (April 5, 2012 at 10:07 am), “it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records”, how do we explain the cooling from the ’40s to the ’70s?

PS: my personal econometric model predicts economic growth for the UK in 2100 will be 0.78539%. Now where’s that Nobel prize?

I do think you need to show the ensemble spreads as well as the mean. You might not expect the ensemble mean to match the observations because of natural climate variability – what is more important is whether the ensemble members reliably encompass the observations.
Ed.
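The check Ed proposes, whether the observations fall inside the ensemble envelope rather than on the ensemble mean, could be sketched like this (the ensemble members below are synthetic stand-ins for CMIP5 runs, and `obs_within_envelope` is a hypothetical helper, not anything from the KNMI tools):

```python
import numpy as np

def obs_within_envelope(ensemble, obs):
    """Fraction of time steps at which the observations fall inside
    the min-max envelope of the ensemble members.
    ensemble: 2-D array (n_members, n_years); obs: 1-D (n_years)."""
    ensemble = np.asarray(ensemble, dtype=float)
    obs = np.asarray(obs, dtype=float)
    lo = ensemble.min(axis=0)   # lower edge of the spread
    hi = ensemble.max(axis=0)   # upper edge of the spread
    inside = (obs >= lo) & (obs <= hi)
    return inside.mean()

# Synthetic example: 10 "runs" = a common trend plus run-to-run noise,
# observations = the trend itself, so they should sit inside the envelope
# at nearly every time step.
rng = np.random.default_rng(0)
obs = 0.01 * np.arange(50)
members = obs + rng.normal(0.0, 0.1, size=(10, 50))
print(obs_within_envelope(members, obs))
```

A fraction near 1 would say the observations are consistent with the spread even where they diverge from the mean; a fraction well below 1 would say the ensemble does not encompass them.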

Nick Stokes, you said: “But in fact, despite common assertions, CO2 forcing was quite substantial during the early 20th Century. Here’s a plot of forcing and Hadcrut during that time.”
-Thanks for pointing to this new plotter.
However, if you average ALL forcings from 1900 to 1940, they are approximately zero (-0.07 W/m²),
whilst the average of all forcings over the period 1960 to 2010 is ~0.75 W/m².
Despite this, the warming is not substantially different over the two periods, as Bob points out.
What is a non-sceptical explanation for this?

“I won’t speculate whether the IPCC intends to make RCP 8.5 its primary forcings for its discussions of future climate, but the modelers sure did seem enthusiastic about it”
It’s not science, per se. It is gravy train science. They are getting tingly with excitement like dogs hearing a can opener whirring.

Here is an appropriate video for “gravy train.” The table scene could be photo shopped with the heads of famous gravy train warmists:

Chas, here’s the corresponding plot of total forcing vs Hadcrut 3. It also rises in the early 20th century. It’s true that the later rise is steeper, but it is interrupted by down spikes from volcanoes, which were mostly absent in the earlier period.

“A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”

Latitude says: “Bob, all of the rise in satellite global sea surface temperatures…..
….can be explained as the aftereffects of adjustments to the satellite outputs after the launch of Envisat”

The AVHRR and AMSR sensors used for the Reynolds OI.v2 sea surface temperature data I was referring to are housed in NOAA satellites. Envisat is a European Space Agency satellite.
=============================
Bob, I’m aware of that..Envisat didn’t show what they thought it should show (the first 22 passes showed sea level/temps falling), so they used Jason/s as reference…
Once they finally got Envisat to show what they wanted it to show….they back tuned Jason to match it………..

…it’s all in the adjustments…..in the 2008 working papers

James has several posts about it on his blog….put it there, and asked to you read it at one time

If the data doesn’t match the output of the computer models, the data must be wrong and the satellite has to be adjusted until it matches the output of the computer. I see. Garbage out, garbage in, garbage out…a complete and self-contained recycling process in the best Green Tradition. Thanks for clearing that question up, Bob Tisdale. It explains a lot of things.
I just hope someone is keeping the real numbers noted down somewhere so that we can go back and use them again when the madness passes.

In this thread, commenters have advanced fanciful ideas regarding the process by which a model is statistically tested. Contrary to popular opinion, this process is not an IPCC-style “evaluation.”

In the actual process, the predicted outcomes of events are compared to the observed outcomes of the same events in a sampling of events that are drawn from the underlying statistical population. For the IPCC climate models, this process cannot take place because: a) the models make “projections” rather than the required predictions and b) the IPCC has not yet told us what the population is.

Models are optimised to reproduce, as best they can, the historic surface record from 1960-1990. Speculative “bucket” adjustments, which are applied to the actual surface records and reduce the variations in the data before that period by over 50%, are then “validated” by comparison to computer model hindcasts. Models optimised on such a short period almost by definition do not produce long-term variability, a point Bob is underlining here. So the models agree better with the “corrected” surface temps than with the real data.

This is taken to be a “validation” of the adjustments which then become the new “historic record” against which the models are tested and developed.

The authors of this methodology seem unable to see the circular logic. At least John Kennedy has not come back on that criticism…

Even after reducing the variability in the original SST data, the models are still unable to reproduce the long-term variability; this is Bob’s main point. They do not catch long-term variability because they have no mechanism to produce it. The feeble agreement they get on some individual runs is noise and random variation, not correct modelling.

In science you adjust your model to fit the data. In climate science you adjust the data to fit the model. That is the fundamental reason their models have failed so thoroughly since the end of the last century.

Ahhhhh!!!! so Bob is still trying the strategy of ignoring the values ( which agree very well) and emphasizing the gradients.

Let me explain why this is misleading. Gradients/trends/slopes are calculated from the differences in values. This means that trends are very sensitive to noise and random variation in the values. So it’s quite possible to select artificial short ranges in the time series that maximise the trend differences and thereby exaggerate the differences between the data sets.

People who want to make trend comparisons that are not misleading and which are valid will typically incorporate some least squares fitting process into calculation of the gradient. This will typically apply some convolution kernel of sufficient width to the data to suppress random noise.

LazyTeenager says:
April 6, 2012 at 12:41 am
“Ahhhhh!!!! so Bob is still trying the strategy of ignoring the values ( which agree very well) and emphasizing the gradients.

Let me explain why this is misleading. Gradients/trends/slopes are calculated from the differences in values. This means that trends are very sensitive to noise and random variation in the values. So it’s quite possible to select artificial short ranges in the time series that maximise the trend differences and thereby exaggerate the differences between the data sets.”

A linear trend is computed by fitting a line through an interval of a time series, minimizing the sum of the squares of the differences between the trend line and the data at each point. So all the data points in the interval exert an influence on the slope of the trend line.

Do we know other operators that share this property? Yes, for instance moving averages. What do we know about moving averages with regards to their frequency response? Yes, they are LOW PASS filters; meaning that they DAMPEN the high frequencies.
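The point that an OLS slope, like a moving average, is a linear operator acting on every value in the interval can be checked numerically. A minimal sketch with synthetic data (the series length, trend, and noise level are illustrative only, not taken from any dataset discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(30, dtype=float)                     # 30 equally spaced samples
series = 0.01 * t + rng.normal(0.0, 0.1, t.size)   # synthetic trend + noise

# OLS slope from a standard least-squares line fit
slope, intercept = np.polyfit(t, series, 1)

# The same slope written as an explicit weighted sum over ALL the points,
# showing that every value in the interval influences the trend estimate.
weights = (t - t.mean()) / ((t - t.mean()) ** 2).sum()
slope_as_weighted_sum = (weights * series).sum()

assert np.isclose(slope, slope_as_weighted_sum)
```

The weights grow linearly away from the centre of the interval, which is why the end points dominate a short trend and why trends over short, noisy ranges are so unstable.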

Further reading for the teenager. This is actually a nicely done page, even though it is from Wikipedia: http://en.wikipedia.org/wiki/Linear_regression
Of course, their funny attitude about everything shines through at the end:
“Environmental science
[icon] This section requires expansion.

Linear regression finds application in a wide range of environmental science applications. In Canada, the Environmental Effects Monitoring Program uses statistical analyses on fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem”

Yeah, we most definitely need more examples about how linear regression is used in environmental science. Spoil a perfectly good page with some politically correct drivel, uh, and maybe, we need a picture of an oiled seagull on the page about the Riemannian manifold. /sarc

And your closing comment of “I don’t believe Bob had done that [snip]”, with respect to the trend analysis, broadcasts your ignorance of the methods employed by the producer of the spreadsheet software (Excel) I use to create the graphs.

Someone making a comment on a blog usually studies a subject before making erroneous statements, unless that commenter is simply trying to mislead the readers, as you’ve tried with your comment.

Bob, I think the LazyTeenager’s attack does not show much understanding of signal processing or stats but he is not totally wrong.

Firstly, I have been encouraging you for years (well, it seems like it) to use a real filter instead of a running mean, and I have pointed out its crappy and misleading frequency response. It seems that despite the huge amount of time you put into all this, you are not prepared to get beyond clicking a button in Excel. Please work out how to apply a real filter (you can even do it in Excel if you really must). I’ve posted on your blog, so you have my email. I can send you an example of a filter in Excel if you wish.
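For readers curious what a “real filter” might look like, here is a minimal sketch; the Gaussian kernel is my assumption (one common choice), not necessarily the filter P. Solar would send, and all the numbers are illustrative:

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth x with a Gaussian kernel. Unlike a boxcar, its frequency
    response has no negative lobes, so it cannot invert peaks."""
    half = int(4 * sigma)                       # truncate kernel at 4 sigma
    n = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (n / sigma) ** 2)
    kernel /= kernel.sum()                      # normalise to unit gain at DC
    return np.convolve(x, kernel, mode="same")

# Synthetic "temperature" series: a slow oscillation plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 300)
clean = np.sin(t)
noisy = clean + rng.normal(0.0, 0.3, t.size)

smooth = gaussian_smooth(noisy, sigma=5)
```

Away from the edges, the smoothed series tracks the underlying oscillation closely while the noise is strongly suppressed; sigma and the 4-sigma truncation are tuning choices.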

Also, if you want to study rate of change, then do so directly by differentiating, not by sloppy averaging, LSQ, etc. If your data are continuous and equally spaced, all you need to do is take the difference of each successive pair of points.

Any difference in rate of change will then stand out as a vertical offset and won’t depend on your choice of period over which you calculate your slope. That would answer some lazy critics.
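As a sketch of what P. Solar is describing, here are two synthetic, equally spaced series whose warming rates differ after a breakpoint (the rates and breakpoint are made up for illustration):

```python
import numpy as np

t = np.arange(100)

# Synthetic series: "model" warms at a steady 0.02 per step;
# "obs" warms at 0.01 per step, then at 0.03 after step 50.
model = 0.02 * t
obs = np.where(t < 50, 0.01 * t, 0.01 * 50 + 0.03 * (t - 50))

# Rate of change by first differences: no trend period to choose.
d_model = np.diff(model)   # constant 0.02 everywhere
d_obs = np.diff(obs)       # 0.01 before the breakpoint, 0.03 after

# A change in warming rate now appears as a vertical offset between
# the two differenced series, visible at a glance.
```

In practice, one would lightly smooth the differenced series first, since differencing amplifies high-frequency noise.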

There’s plenty to criticise in these models and you are basically correct. I’d like to see you make a more convincing job of it.

Dirk: Do we know other operators that share this property? Yes, for instance moving averages. What do we know about moving averages with regards to their frequency response? Yes, they are LOW PASS filters; meaning that they DAMPEN the high frequencies.

Yes, an R-M is a low-pass filter; trouble is, it’s also a high-pass filter, as and when it feels like it.

Now look at the same data filtered with these two filters (done in excel ;) ).

And, yes, that really did start off from the same column in my spreadsheet, though you’d hardly believe it to look at the results.

Look at what happens to the running means in 1940 and 1960, for example. Now, if you’re going to say someone’s model does not match the data, you’d do well not to start by using a filter that turns a peak into a trough or bends it sideways.
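P. Solar’s claim that a running mean can flip peaks into troughs is not rhetoric: a boxcar’s frequency response (a Dirichlet kernel) has negative sidelobes, so oscillations at certain periods come out inverted. A minimal sketch with a synthetic sine (the 13-point window and 8-sample period are chosen to land in a negative lobe and are illustrative only):

```python
import numpy as np

# Pure oscillation with an 8-sample period
n = np.arange(400)
x = np.sin(2 * np.pi * n / 8)

# Centred 13-point running mean (boxcar)
w = 13
rm = np.convolve(x, np.ones(w) / w, mode="same")

# Project the smoothed series onto the original, away from the edges.
# A negative gain means the running mean has inverted the oscillation.
core = slice(50, 350)
gain = (rm[core] * x[core]).sum() / (x[core] ** 2).sum()
# Theory: gain = sin(pi*w/T) / (w*sin(pi/T)), which is negative for w=13, T=8
```

In general, a boxcar of width W inverts oscillations with periods between W/2 and W (its first negative lobe), which is exactly where a lot of interesting interannual variability sits for the windows climate bloggers typically use.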

It’s sad to see how many people with letters after their names make the same mistake as well as doing illegitimate OLS regression on scatter plots and getting totally aberrant values for climate sensitivity.

P. Solar: We’ve been through this before. I present data in fashions that are easily reproducible by laypersons so that they can duplicate and verify. A running-mean filter is commonly used in climate science, regardless of your preference.

If a reader wishes to use different methods, like using another method to determine linear trends, that’s fine. I’ve initiated that investigation.

Your link still doesn’t explain the warming up to 1940. However, I think it’s worse than that. I’m guessing that “ALL Forcings” includes obsolete solar data. I think Leif Svalgaard would argue about the change in solar activity in the early 20th century.

The Pacific Decadal Oscillation (PDO) is a likely explanation for the negative and positive deviation periods between the hindcasts and observations. Notice that the deviation periods last about 30 years, as does the PDO half-cycle, and their direction matches the PDO cycles. Since we have entered a period of negative PDO, the models will accordingly overestimate the warming over the 2007–2037 period.

climateprediction says: “The Pacific Decadal Oscillation (PDO) is a likely explanation for the negative and positive deviation periods between the hindcasts and observations…”

There is no mechanism through which the PDO can alter global surface temperatures. The PDO does NOT represent the sea surface temperature of the North Pacific, north of 20N. The PDO is actually inversely related to the sea surface temperature anomalies of the North Pacific.

P. Solar: We’ve been through this before. I present data in fashions that are easily reproducible by laypersons so that they can duplicate and verify. A running-mean filter is commonly used in climate science, regardless of your preference.

You probably do that because you are a layperson yourself.
That everyone can reproduce and “verify” a bad method hardly seems to be valid reasoning, especially because you don’t point out its shortcomings; you are just inviting others to copy your own mistakes. Last time you said it was “easy to understand”, an equally poor excuse. In fact, it is easy to *misunderstand*, because if you do not look at the frequency response (and most laypersons would not even know what one is), it is easy to imagine you are applying a valid low-pass filter.

You defend using that kind of filter to show that the work of others does not reproduce the troughs and peaks in the right places. Hardly credible.

This is not a case of personal preference, as you try to suggest. There are several filters you could choose to use if you could be bothered. How can you justify using a filter that distorts the data to the point of inverting peaks and troughs, as can be seen in the example plots I posted above, in order to criticise the work of others?

P. Solar says: “Just look at the 1970′s on this graph, the running mean actually gets the peaks and troughs 100% upside down !! http://i44.tinypic.com/351v6a1.png
“You defend using that kind of filter to show the work of others is not reproducing the troughs and peaks in the right places. Hardly credible.”

Your linked example does not show the raw data. Yet you somehow claim the troughs and peaks are not in the right places. Kinda tough to confirm your claims, P. Solar.

Bob Tisdale says….There is no mechanism through which the PDO can alter global surface temperatures. The PDO does NOT represent the sea surface temperature of the North Pacific, north of 20N. The PDO is actually inversely related to the sea surface temperature anomalies of the North Pacific.

With all due respect, I don’t accept any of your arguments. The underlying mechanisms for the PDO and ENSO are not well understood, but that is not proof that they don’t exist. ENSO doesn’t represent the sea surface temperatures north of 20N either, but it correlates well with global temperatures, and it correlates well with the PDO in that there is a high ratio of La Niñas to El Niños during cool PDO periods and vice versa. And for the sake of determining the effects on global temperatures, how can the fact that the PDO is inversely related to the sea surface temperatures of the North Pacific be any more significant than the fact that the PDO correlates closely with global temperatures?

That discussion includes this quotation from Kiehle’s 2007 paper, which is another formulation of your question:

“The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy. Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”

The discussion concludes with my statements, which say:

“Long after my paper about the Hadley GCM, in 2007 Kiehle (see reference in my above post) showed that all other climate models also ‘ran hot’ but by different amounts. And he showed that they each adopt the aerosol fix. But they each adopt a different amount of aerosol cooling to compensate for the different degree of ‘ran hot’ they each display.

This need for a unique amount of aerosol cooling in each climate model proves that at most only one (and probably none) of the models emulates the climate system of the real Earth (there is only one Earth).”

Simply put, the models each emulate a different (and unreal) climate system, so they indicate different reactions to the same input change to the climate system, and they are especially sensitive to changes in the projected ratio of anthropogenic aerosol and GHG emissions.

Hi! My question is not related to the content of your post, but to a figure mentioned. Can you tell me, where I can get the data of figure SPM 5 so that I can redraw it on my own? Xls or csv would do the job. Thanks in advance for your help! Felix