No Matter How the CMIP5 (IPCC AR5) Models Are Presented They Still Look Bad

UPDATE: I’ve added a comment to the end of the post about the use of 1990 as the start year.

# # #

After an initial look at how the IPCC elected to show their model-data comparison of global surface temperatures in Chapter 1, we’ll look at the CMIP5 models in a couple of different ways. And we’ll look at the usual misinformation coming from SkepticalScience.

Keep in mind, the models look best when surface temperatures are presented on a global land-plus-sea surface temperature basis. On the other hand, climate models cannot simulate, in any way, shape or form, sea surface temperatures or the coupled ocean-atmosphere processes that drive their warming and cooling.

# # #

There’s a big hubbub about the IPCC’s change in their presentation of the model-data comparison for global surface temperatures. See the comparison of before and after versions of Figure 1.4 from the IPCC’s 5th Assessment Report (my Figure 1). Steve McIntyre commented on the switch here. (Cross post at WattsUpWithThat here.) Judith Curry discussed it here. The switch was one of the topics in my post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming. And everyone’s favorite climate alarmist Dana Nuccitelli nonsensically proclaimed the models “much better than you think” in his posts here and here, as if that comparison of observed and modeled global surface temperature anomalies were a true indicator of model performance. (More on Dana’s second post later.)

Figure 1

Much of what’s presented in the IPCC’s Figure 1.4 is misdirection. The models presented from the IPCC’s 1st, 2nd and 3rd Assessment Reports are considered obsolete, so the only imaginable reason the IPCC included them was to complicate the graph, distracting the eye from the fact that the CMIP3/AR4 models performed poorly.

Regardless, what it boils down to is this: the climate scientists who prepared the draft of the IPCC AR5 presented the model-data comparison with the models and data aligned at 1990 (left-hand cell), and that version showed the global surface temperature data below the model ranges in recent years. Then, after the politicians met in Stockholm, that graph was replaced by the one in the right-hand cell. There they used the base years of 1961-1990 for the models and data, and they presented individual AR4 model outputs instead of a range. With all of those changes, the revised graph shows the data within the range of the models…but way down at the bottom edge with all of the models that showed the least amount of warming. Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

While that revised IPCC presentation is how most people will envision model performance, von Storch et al. (2013) found that the two most recent generations of climate models (CMIP3/IPCC AR4 and CMIP5/IPCC AR5) could NOT explain the cessation of warming.

Bottom line: If climate models can’t explain the hiatus in warming, they can’t be used to attribute the warming from 1975 to 1998/2000 to manmade greenhouse gases and their projections of future climate have no value.

WHAT ABOUT THE CMIP5/IPCC AR5 MODELS?

Based on von Storch et al. (2013), we would not expect the CMIP5 models to perform any better on a global basis. And they haven’t. See Figures 2 and 3. The graphs show the simulations of global surface temperatures. Included are the outputs of the 25 individual climate models stored in the CMIP5 archive, for the period of 1950 to 2035 (thin curves), and the mean of all of the models (thick red curve). Also illustrated is the average of the GISS LOTI, HADCRUT4 and NCDC global land plus sea surface temperatures from 1950 to 2012 (blue curve). In Figure 2, the models and data are presented as annual anomalies with the base years of 1961-1990, and in Figure 3, the models and data were zeroed at 1990.

Figure 2

# # #

Figure 3

Note how the models look worse with the base years of 1961-1990 than when they’ve been zeroed at 1990. Curious.
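For readers who want to see the mechanics, here is a minimal sketch of the two baseline choices, using made-up numbers standing in for the real datasets (the trend and noise values below are invented for illustration). Rebaselining any single series is just a constant vertical shift:

```python
import numpy as np

# Hypothetical annual global temperatures (degrees C), standing in for the
# average of the GISS LOTI, HADCRUT4 and NCDC series. Not real data.
years = np.arange(1950, 2013)
rng = np.random.default_rng(0)
temps = 14.0 + 0.012 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# Anomalies relative to the 1961-1990 mean (a 30-year baseline).
base_mask = (years >= 1961) & (years <= 1990)
anom_30yr = temps - temps[base_mask].mean()

# Anomalies "zeroed at 1990": subtract the single 1990 value instead.
anom_1990 = temps - temps[years == 1990][0]

# The two versions of the series differ only by a constant vertical offset.
offset = anom_30yr - anom_1990
assert np.allclose(offset, offset[0])
```

Because the models and the observations each receive their own constant offset, switching baselines slides the curves relative to one another, which is how the same models and data can look better or worse depending on the choice.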

NOTE: Every time I now look at a model-data comparison of global land plus sea surface temperatures, I’m reminded of the fact that the modelers had to double the observed rate of warming of sea surface temperatures over the past 31 years to get the modeled and observed land surface temperatures to align with one another. See my post Open Letter to the Honorable John Kerry U.S. Secretary of State. That’s an atrocious display of modeling skills.

Global mean surface temperature data are plotted not in absolute temperatures, but rather as anomalies, which are the difference between each data point and some reference temperature. That reference temperature is determined by the ‘baseline’ period; for example, if we want to compare today’s temperatures to those during the mid to late 20th century, our baseline period might be 1961–1990. For global surface temperatures, the baseline is usually calculated over a 30-year period in order to accurately reflect any long-term trends rather than being biased by short-term noise.

It appears that the draft version of Figure 1.4 did not use a 30-year baseline, but rather aligned the models and data to match at the year 1990. How do we know this is the case? Up to that date, 1990 was the hottest year on record, and remained the hottest on record until 1995. At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations. In the draft IPCC figure, that wasn’t the case – the models and data matched exactly in 1990, suggesting that they were likely baselined using just a single year.

Mistakes happen, especially in draft documents, and the IPCC report contributors subsequently corrected the error, now using 1961–1990 as the baseline. But Steve McIntyre just couldn’t seem to figure out why the data were shifted between the draft and draft final versions, even though Tamino had pointed out that the figure should be corrected 10 months prior. How did McIntyre explain the change?

Dana’s powers of observation are obviously lacking.

First, how do we know the IPCC “aligned the models and data to match at the year 1990”? Because the IPCC said they did. The text for the Second Order Draft discussing Figure 1.4 stated:

The projections are all scaled to give the same value for 1990.

So Dana Nuccitelli didn’t need to speculate about it.

Second, Figure 4 is a close-up view of the “corrected” version of the IPCC’s Figure 1.4, focusing on the models and data around 1990. I’ve added a fine line marking that year. And I’ve also altered the contrast and brightness of the image to bring out the model curves during that time. Contrary to the claims made by Nuccitelli, with the 1961-1990 base years, “the 1990 data point” WAS NOT “located toward the high end of the range of model simulations”.

Figure 4

“Mistakes happen?” That has got to be the most ridiculous comment Dana Nuccitelli has made to date. There was no mistake in the preparation of the original version of Figure 1.4. The author of that graph took special steps to make the models align with the data at 1990, and they aligned very nicely, converging to a pinpoint. And the IPCC stated in the text that the “projections are all scaled to give the same value for 1990.” There’s no mistake in that either.

The only mistakes have been Dana Nuccitelli’s misrepresentations of reality. Nothing new there.

# # #

UPDATE: As quoted above, Dana Nuccitelli noted (my boldface):

At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations.

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.

Tamino was simply playing games with data as Tamino likes to do, and Dana Nuccitelli bought it hook, line and sinker.

Or Dana Nuccitelli hasn’t yet learned that repeating bogus statements doesn’t make them any less bogus.

Reading the Tamino post he refers to, I think what he meant was that the draft version, starting from 1990, was the one that should have been aligned differently and thus treated an especially warm year as a normal one. Or am I reading it wrong?

This was all hashed out over at McIntyre’s site, in the comments section, where there are several people who seem to know what they are talking about. Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the trend line. The bottom line is that it was changed to a more logical starting point, whether or not you think the first graph was a mistake.

Now either you like or don’t like the spaghetti graph, that is personal taste, but the actual world temp is actually inside the model’s envelope, albeit on the low side. I don’t think that justifies the title of this post.

A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.

Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the trend line.
==
So they did it the first time to show the models were right…
…and changed the second one to show the models were right

They can’t both be right….and in that case they both show the models were wrong again

The Nuccitelli Principle 1: If the IPCC publishes something that deeply embarrasses the IPCC then some mistake happened in the IPCC.

Corollary 1: If some mistake happened in the IPCC and something deeply embarrassing to the IPCC was published then the IPCC is not responsible for the content of the deeply embarrassing thing that was published.

The Nuccitelli Principle 2: Mistakes happen.

Conclusion: The IPCC is not responsible for its deeply embarrassing publications.

Pippen,
Since you read the comments at CA, you must have seen my analogy:

“The soccer player launches the penalty kick and it misses the goal to the right by one foot. Tamino sprints along the end line with his measuring tape and discovers that the goal was actually placed three feet closer to the left corner of the field than the right. Now that the discrepancy has been rectified, we are being told that the proper thing to do is credit the kicker with the goal.”

Let’s see if we can fit your statement to the analogy:

“Using the [original location of the goal] as the ref was clearly a mistake [when the ball was kicked]; the new [location] corrects that by setting the [goal where it should have been]. The bottom line is that it was changed to a more logical [location], whether or not you think the first [kick missed the goal].

“Now either you [think it was a goal or you don’t], but the [kick was actually inside the envelope of where the goal should have been, so it should be credited as a goal].”

Pippen Kool – I agree about the 1990 versus the 30 year part of the discussion on McIntyre’s site. However, the professor from Duke pretty much destroys the spaghetti chart. And it isn’t personal taste.

A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.
>>>>>>>>>>>>
Great, good. Not only can the models not make PREDICTIONS, but the earth has stopped warming for the past couple of decades in spite of a continued increase in CO2, suggesting saturation of the greenhouse effect, or at least a major slowdown due to the logarithmic nature of the ‘Forcing’, allowing negative feedbacks to swamp the effect of CO2.

Geologists looking into the factors causing the descent into glaciation proclaim that CO2, instead of being a cause for alarm, is saving us from glaciation.

The latest IPCC report says not only that they cannot come up with a climate sensitivity but also that there is no increase in droughts, hurricanes, tornadoes, etc. Other reports show the world is greening. Agricultural crops have higher yields per acre.

The crisis has been called off, CO2 is saving the earth, let’s all go home and celebrate.

I’m sorry to appear confused, but it makes sense to fix the model to 1990, especially for FAR. Anything before this is hindcasting – i.e. not real – and used for initialisation. After 1990 is projection. The key is picking a long enough period to be the baseline, but essentially all that matters is that your model matches the real data at 1990. It doesn’t matter if that year was cold or hot – that’s the year you use.

The same applies for SAR, TAR and AR4. The data should only be presented for the projection part not the hindcast.

Personally I think that the first graph was fine. It showed enough detail and conveyed a clear enough message rather than the hodgepodge of the second. Adding more error and squiggles demonstrates that you know LESS than before – hardly congruent with the 95% certainty.

@Zek202, who said; “What happens to the models if the earth starts to cool again? Could the models account for that? Would the cooling be anthropogenic?”

Now those are excellent questions. If we could only get a response from the IPCC for the record and hold it accountable to the answers it gives, because global temperatures could very well decline for the next few decades. As far as I can see, the IPCC cannot accommodate any such cooling given the models it uses.

You know the temperature in 1990. You should zero the models to the known temperature in 1990.
Each model has an uncertainty range.
Each model is the result of hundreds of runs to get to the best performance.
There are now enough years to start tossing most of the models into the rubbish bin.
The IPCC should pick the model that comes closest to the actual data, and report the predicted climate sensitivity, aerosol forcings, etc for that model. I suspect the crisis is much less than we thought.

The rest is handwaving to maintain grant support for the modeling groups, and retain the high-end predictions, as silly as they are at this stage.
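The suggestion above – score each model against the observations and keep the closest – can be sketched in a few lines. Everything here is invented for illustration (a synthetic 25-member ensemble and synthetic observations), not the actual CMIP5 archive:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2013)

# Hypothetical observed anomalies: a modest warming trend plus noise.
obs = 0.012 * (years - 1990) + rng.normal(0.0, 0.08, years.size)

# Hypothetical 25-member ensemble, zeroed at 1990, each member with its own trend.
trends = rng.uniform(0.01, 0.04, 25)                  # degC/yr
models = trends[:, None] * (years - 1990)[None, :]    # shape (25, n_years)

# Root-mean-square error of each model run against the observations.
rmse = np.sqrt(((models - obs) ** 2).mean(axis=1))
best = int(np.argmin(rmse))
print(f"closest model: #{best}, trend {trends[best]:.3f} degC/yr, RMSE {rmse[best]:.3f}")
```

Reporting the climate sensitivity and forcings of the best-scoring member, as the comment proposes, would then just be a lookup on `best`.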

Another interesting DATA shift is apparent in the “Figure 1” side-by-side comparison. The 1990 FAR has a Temp Anomaly of almost 0.3 as the starting point in the AR4 graph, but the 1990 FAR anomaly starting point has been shifted to <0.2 in the AR5 Spaghetti Chart. Must be how they lowered the bar.

I have to disagree. The trick was eliminating the error bars on the observed data and zooming out on the scale of the graph. Omitting the error bars allows them to plot a rising mean trend line, but it would be obvious that there is no rising mean for the last 15 years if the error bars were added back. They are making the data points as inconspicuous as possible so that your eye only sees the trend line. And for some reason zooming out also gives you the impression that the trend line is right.

Someone should help out the IPCC by recoloring their graph for them. If it becomes known that the color scheme of a graph is essential to its acceptance, well, maybe they might have to add the error bars back themselves.

daniel says: “Reading the Tamino post he refers to, I think what he meant was that the draft version, starting from 1990, was the one that should have been aligned differently and thus treated an especially warm year as a normal one. Or am I reading it wrong?”

daniel, Tamino was playing games. Nothing new about that.

Also, there was nothing especially warm about 1990. It was an ENSO-neutral year; that is, there was no El Nino to make it especially warm. The reason it looked warm was that Mount Pinatubo erupted in 1991 and dropped surface temperatures for a few years.

Pippen Kool says: “This was all hashed out over at McIntyre’s site, in the comments section, where there are several people who seem to know what they are talking about. Using the 1990 temp as the ref was clearly a mistake in the first graph…”

If I pointed the tip of my pen on today’s temperature and drew a bunch of squiggly lines in the same general direction as the last 100 years, I would have a more accurate spaghetti-graph “projection” than 99% of the model runs.

Their giant swath of possible future predictions includes such a wide variety of possibilities that it’s like saying the temperature tomorrow will be between 0 and 100F. And then they still got it wrong.

The IPCC predictions, I mean projections, were explicit. OK, these don’t fit data so we’ll do a post hoc redefinition of the projections.

This is part of the shifting sands of post-normal science and would be ethically and intellectually unacceptable in other branches of science. Now that PNS is getting into real difficulty, let’s hope that we can retreat into traditional science.

Some say that the IPCC model ensembles make projections based on scenarios of CO2 emissions and therefore cannot be falsified or called predictions because they do not in any way resemble reality. Dead dog. Won’t bark. Dead horse. Stop beating it.

Statistically, there was nothing wrong with choosing 1990 as the base year. Nothing wrong with choosing the 61-90 average, either. But if changing the base year (or range) changes the forecast result significantly, that’s a statistical red flag.

From what I could tell, the IPCC simply increased the confidence range of the AR4 forecasts so that post-2010 average temps could fall within that range. Since these confidence ranges are not calculated statistically, the IPCC is certainly free to do this, but not without admitting that they are less confident in their modeling. Too bad they weren’t honest about that…

Now either you like or don’t like the spaghetti graph, that is personal taste, but the actual world temp is actually inside the model’s envelope, albeit on the low side. I don’t think that justifies the title of this post.

There is no global temperature. It’s an utterly meaningless statistical construct.

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

Our present reality over the past 70 years appears to me to lie within the noise cast by so many of these very sophisticated, quantitative models. From my experience of reservoir production modeling, which can be tweaked to provide a very large range of possible outcomes, the ones you tend to believe are the ones that fall out from first principles, with minimal assumptions. They are directionally correct with the least amount of forcing or curve fitting. From my experience, if the trend is wrong, it is time to go back and revisit your assumptions. What strikes me is that if someone (a public company, for example, with public shareholders) was paying for the directional accuracy of these climate models to predict the future physical and therefore financial behaviour of a producing asset, a lot of these scientific types would be out of business very quickly.

However, if we look at that graph, we note considerable differences in how the ranges of predictions from the various reports overlap compared to how they are shown in AR5.

It’s not “just as”; there is wholesale shifting of not only the observational data but also the individual reported projections.

It is pretty obvious that if you can find a logic that allows shifting all the data and projections up and down it is a trivial result that they overlap. It demonstrates nothing about the data but a lot about the revisionist nature of the IPCC.

Who was it said: “The future is certain, it is only the past that is unpredictable.”?

I fear the IPCC authors made the mistake with their earlier AR5 draft but are not letting on. If I take AR4 WG1 Fig 1.1 and overlay it on AR5 WG1 Fig 1.4, then the uncertainty bounds for the TAR temperature projections overlay reasonably closely. However, as pointed out above, the draft figure (now abandoned) for AR5, as annotated by Steve McIntyre, does not show the TAR uncertainty bounds as overlaying. So rather than a fudge in revising AR5, perhaps a sloppy author made a mistake in preparing the earlier Fig 1.4 of AR5, then fixed it for the current final draft. That said, I don’t excuse the use of the spaghetti plot – I take a somewhat uncharitable view that use of a completely different plot format may have been a ploy to hide an earlier error, and allow a bit of disinformation to circulate.

I find it very curious that IPCC authors (unlike accountants) feel no need at all to provide comparisons of results for the current time period versus the equivalent for the past time period. An accountant who changed formats, baselines, etc. and deliberately ignored past results/projections would be shot at dawn, professionally speaking. What a pity scientists can’t enforce similar standards.

I plotted the trend prior to the apparent slowdown beginning in 1998, and from 1972 so as not to overemphasise the 1990 anomaly.

To show the problem with centring on 1990, here is a straightforward adjustment centring the trend on 1990, but this time I’ll include all the years 1972 through 2012. (period chosen because the slope has strong statistical significance).

A better way would be to select a statistically significant long-term trend from observed data (e.g., from 1950 through 1999), and choose a year that lies on or close to that trend. If you baseline the model ensemble average to that year, you’d at least avoid the problem of biasing the results on a single-year anomaly that was warmer or cooler than average.

There is one clear fix in the new IPCC graph and that is the AR4 predictions. These were made after 2000, and if you look at the “before politicians” graph you see how well they track the data from 1990: the consequent downward trend and then the rise to 2000. Tamino had to leave AR4 out of his “re-alignment” for this reason. Both the AR4 and AR5 model predictions are above the data. The clever optical illusion in the new graph is to move down FAR, SAR and TAR and smudge everything out with bland colors so this contradiction is invisible.

Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker. Thus, to distinguish between predictions and projections is important.

By revising the chart to zero the models at 1990 it makes the warming before then look like a return to a normal rather than a dangerous shift from a previous normal. The IPCC has sacrificed the observed warming pre-1990 in order to protect the models from appearing to be falsified.

Is this something sceptics could exploit? We need to insist that the IPCC be consistent – they can say the warming pre-1990 is nothing exceptional and the models are still worthy of consideration *or* that the pre-1990 warming is the beginning of a man made climate trend and admit the models are not good enough. They cannot say both (but they will).

Bob,
In your depiction of temperature as average of GISS, HADCRUT4 and NCDC, the region around 2010 shows as higher than 1998. It does not show higher on, for example, RSS. There are reasons to expect a difference, as we know, but this is a rather critical difference when one comes to look at the hiatus.
I’m still left with an impression that the small positive slope upwards in the averaged data is, in part, due to adjustments +/- UHI and the difficulty of assessing it.
Therefore, I have a preference for the UAH or RSS data over surface-based observation, particularly because the satellite data has a better chance over the poles, Africa and South America.
If you could see in detail how the Aussie record is adjusted by the time the adjusters finish with it, I’d think you might have similar preferences.
So, do you have a strong reason to stick with the average?

Dana doesn’t like it when people question him or his orthodoxy. Almost every post of mine – currently on pre-mod at The G – gets deleted now, even the funny ones that take just a little dig at him or agw.

What’s happening here is that as one after another alarmist claims turn to rubble then the louder they squeal and shout. Diversion tactics. (“If the law *and* the evidence are against you – bang the table”) Hence the current advanced spate of ‘worse than we ever thought possible’ articles.

They’re losing the argument because the data isn’t falling their way, and they know they’re losing.

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

Models are not run baselined to recent temps, so you have to make a choice. My two cents about that choice is here.

barry says: “1990 was a warm year in all data sets. Here’s the HADCru record.”

Of course it was, barry. Let me explain the wiggles before and after 1990 in the instrument temperature record. 1990 was preceded by the strong 1988/89 La Nina and followed by the eruption of Mount Pinatubo. Therefore, 1990 stands out. But it was an ENSO-neutral year, and as a result, it was a prime year to start a model-data comparison, because it was NOT exceptionally warm in response to an El Nino.

It’s real easy, barry. This is the simple stuff. I’m not sure why it’s so hard to grasp.

“There are of course open questions yet to be answered by climate scientists – precisely how sensitive the climate is to the increased greenhouse effect, for example.”

– – – – – – –

barry,

You, of course, may wonder that.

I, on the other hand, wonder how any reasonably rational human being cannot see that there is little credibility in exclamations like this: “AGW is unambiguous in the scientifically documented observational record.”

I pity Nuccitelli; it is a difficult time to be an apprentice apologist trying to ‘rationalize’ an excuse for the IPCC’s publicly exposed integrity failure.

Comment from Jochem Marotzke of the Max Planck Institute in a presentation at the Royal Society about the IPCC report.

“As a result of the hiatus, explained Marotzke, the IPCC report’s chapter 11 revised the assessment of near-term warming downwards from the “raw” CMIP5 model range. It also included an additional 10% reduction because some models have a climate sensitivity that’s slightly too high.”

But it was an ENSO-neutral year, and as a result, it was a prime year to start a model-data comparison, because it was NOT exceptionally warm in response to an El Nino.

ENSO is not the only factor that accounts for interannual global temperatures. I’m not persuaded that we should baseline to the ENSO indices alone. Still think it’s better to determine a long-term temperature trend, and baseline by selecting a year that lies on the trend, which evens out all the wiggles in the long run, not just ENSO.

If, say, the above-the-trend warmth of 1990 was caused by massive, once-a-century solar flare activity, it would not be reasonable to use 1990. Seeing as we don’t know what caused 1990 to pop out above the trend, we are left to make a purely statistical decision. If ENSO is a vital consideration, then select a year that satisfies both requirements – it must be ENSO neutral and lie on the long-term trend line. That should not be hard to do if ENSO is overwhelmingly the principal driver of interannual fluctuations. ENSO indices are, after all, trendless over the long term – by design. And it also has the virtue of being less biased by other interannual influences.
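The two requirements described here – ENSO-neutral and lying on the long-term trend – are easy to combine mechanically. A minimal sketch with synthetic data (the ENSO flags and temperatures below are invented; in practice they would come from NOAA's Oceanic NINO Index and an observational dataset):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2000)

# Hypothetical annual anomalies: a long-term trend plus interannual noise.
temps = 0.01 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# Hypothetical ENSO-neutral flags (real ones would come from the ONI).
enso_neutral = rng.random(years.size) > 0.5

# Fit the long-term trend and measure each year's departure from it.
slope, intercept = np.polyfit(years, temps, 1)
departure = np.abs(temps - (slope * years + intercept))

# Candidate baseline years must be ENSO-neutral; pick the one nearest the trend.
candidates = np.where(enso_neutral)[0]
pick = candidates[np.argmin(departure[candidates])]
print(f"baseline year: {years[pick]}, departure from trend: {departure[pick]:.3f} degC")
```

The same filter generalizes to other interannual influences: mask out any year with a known excursion (volcanic, solar), then minimize the departure from the fitted trend over the years that remain.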

(I didn’t introduce Nuccitelli’s article here, nor would I have. I don’t think it’s a good article, but I took more exception to the slanted way in which it was introduced, as if Nuccitelli thinks the debate should be political. He’s saying the opposite. At the same time, Nuccitelli and SkS certainly have a political agenda. And ‘political’ is not referring to governments, but the political ideology of individuals.)

There should be at least four sets of graphs, each one depicting the modeled output for the 4 different model ensembles (FAR, SAR, TAR, and AR4), marking the hindcasting period and then changing colors to mark the beginning of the “projection” period. The range of runs should be shaded in. Plot the average and range of real observations and add them to the graph. Statistical error bars should be calculated and depicted for both models and real observations. If anomalies and robustness are important, then the climatological average should be more than 30 years – at least 50. These researchers shouldn’t be afraid of doing this. That they are speaks volumes about their own doubts.

Why four? There are 4 different investigations here, each with two parts: hindcast and projection periods. So there should be 4 separate graphs which clarify the two-phased experiments of each model ensemble. Why more than four? Because within the ensembles, it is possible that input parameter scenarios may be different, i.e., CO2 percent increase stays at zero, or increases by 1 percentage point each year, or increases by 2 percentage points each year, etc.

The way the current graph of either version is done leaves out important methodological information.

If the current “pause” is due to natural variation, then the forecasts for the next 20 years should show a much steeper increase than they did five years ago. That’s because we’ll soon have not only the reversal of the natural variation but also the cumulative effects of the CO2, no?

Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

Yes, they do. The graphic from the leaked report is 25 years long, and emphasises the recent apparent downturn. The approved graphic is 85 years long (40 more years of hindcast, 20 more of forecast), and therefore gives more context. As global climate change is a long-term (multi-decadal) phenomenon, the second graphic is more appropriate. Regardless of whether scientists or politicians changed it.

Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker.

Falsifiable predictions are a function of science, not policy-making. They are called projections because the policy makers wanted to know what might happen under different forcing scenarios. So they are given a series of ranges – CO2 increase at various different rates, or stabilising at a certain value. This provides more, not less information to policy makers. Commonly decision-makers on any issue at least want to know the ‘best case/worst case’ scenario to get an idea of the range. Individuals frequently weigh decisions on this basis for ordinary life stuff. We try to pick options that balance cost and outcome.

Thanks for giving me an opportunity to clarify. It is a fact that no events underlie the IPCC climate models. However, it is by counting events of various descriptions that one arrives at the entities which statisticians call “frequencies.” The ratio of two frequencies of particular descriptions is called a “relative frequency.” A relative frequency is the empirical counterpart of a probability. As there are no frequencies or relative frequencies, there are no probabilities. It is by comparison of probability values to relative frequency values that a model is falsified. Thus, the claims that are made by the IPCC climate models are not falsifiable. Also, as “information” is defined in terms of probabilities, “information” is not a concept for the IPCC climate models.

Predictions have a one-to-one relationship with events. As no events underlie the IPCC climate models, there can be no predictions from them. As there are no predictions, the methodology of the associated research cannot truthfully be said to be “scientific.”

I disagree that models are not falsifiable. But they are complex, and describe much more than a one to one relationship. A failure of a particular component of climate models (say, the replicability of cloud behaviour) only tells us that cloud modeling is poor (or falsified, if you want to express it in a binary way). Other components do well, like predicting the cooling of the stratosphere. Should I assume you are focussed exclusively on the evolution of global surface temperatures?

Most commenters in the mainstream (such as realclimate) agree that if something like the trajectory of surface temperatures deviated over a sufficient amount of time from the models, then the ability of models to predict surface temps would be falsified.

Predictions and events are not always a one to one relationship, especially not for modeling of complex systems exhibiting chaotic tendencies. Most modeling is probabilistic. There is usually a range given in the prediction. Falsifying occurs not when the real trajectory deviates from the central estimate, but when it consistently falls outside the range.

The envelope for an ensemble at a particular rate of CO2 rise is fairly broad, but not infinite. A year or two of temps outside the envelope would not falsify the models, but a decade of annual temperatures centred around the 0.3% probability range would falsify the models that had the same forcings trajectory as the real world.
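The envelope test described here can be sketched numerically. Below is a toy illustration, using an entirely synthetic ensemble (invented trend and noise values, not real CMIP output), of checking whether observations fall below a 5–95% model envelope for a sustained run of years:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 40 model runs of annual anomalies over 30 years,
# each a 0.02 deg/yr trend plus interannual noise (synthetic, not CMIP).
years = np.arange(1990, 2020)
runs = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, (40, years.size))

lo = np.percentile(runs, 5, axis=0)   # lower edge of the envelope
hi = np.percentile(runs, 95, axis=0)  # upper edge

# "Observations" that drift toward the cool side of the ensemble.
obs = 0.01 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

# One or two years outside the envelope is noise; a sustained run of
# years below the 5th percentile is the kind of signal discussed above.
below = obs < lo
longest = max((len(s) for s in "".join("x" if b else " " for b in below).split()), default=0)
print("years below envelope:", below.sum(), "longest consecutive run:", longest)
```

The design point is that falsification is framed as a duration question (how many consecutive years outside the range), not a single-year excursion.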

Seems to me that people get disgruntled that falsification hasn’t been conceded yet, based on the last few years lying near the bottom of the envelope. But they are too hasty. Time is an important component of climate model predictions/projections. On a related note, 5, 10, or 15 years of an apparently flat trend in global surface temperatures is not a falsification of AGW. Plenty of commenters in the debate aligned with the mainstream view (eg, Tamino) have stated what they think would be the conditions – how long with no global warming, or how many years outside the range – that would falsify predictions and put current understanding of AGW into serious doubt.

Regarding the oft-cited trend from 1998 – the huge el Nino anomaly – my own condition for falsifying understanding of the relationship between global temperature change and CO2 increase is this: 25 years is a fair length of time to get a statistically significant trend from surface data, so if global surface temperature has not increased by a statistically significant margin from 1998 to 2023, then the central estimates of the CO2/global temperature relationship have been falsified.

This is assuming that no freakish, non-CO2 events have an influence (this cuts both ways, whether a strong forcing event warms or cools the planet late in the trend), just the normal interannual fluctuations.
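The "statistically significant trend" criterion above amounts to a t-test on an OLS slope. A minimal sketch with made-up anomaly numbers (two-sided test, no autocorrelation correction, which real surface data would need):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1998, 2024)  # a 26-point window, 1998-2023

# Synthetic anomalies: a modest trend plus interannual noise (not real data).
anoms = 0.012 * (years - years[0]) + rng.normal(0.0, 0.09, years.size)

# linregress returns the slope, intercept, r-value, two-sided p-value,
# and the standard error of the slope.
res = stats.linregress(years, anoms)

# A p-value below 0.05 is the usual bar for "statistically significant";
# real temperature data would also need an autocorrelation correction.
print(f"slope = {res.slope:.4f} deg/yr, p = {res.pvalue:.4f}")
```

Autocorrelation in annual temperature data inflates apparent significance, which is one reason short windows are treated cautiously in this debate.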

Thanks for taking the time to reply. In the literature of climatology, “predict” and “prediction” are polysemic. In other words, they have more than one meaning. When a word changes meaning in the midst of an argument, this argument is an example of an “equivocation.” By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy. By drawing conclusions from equivocations, climatologists are repeatedly guilty of instances of the equivocation fallacy in making arguments about global warming. For details, please see my peer-reviewed article at http://wmbriggs.com/blog/?p=7923 .

The equivocation fallacy may be avoided through disambiguation of the terms of the language in which an argument is framed, such that each term of significance to the conclusion is monosemic (has a single meaning). When this is done in reference to arguments about global warming, logically valid conclusions emerge about the nature of the research that is described by the IPCC in its recent assessment reports. One such conclusion is that the methodology of this research was not truly scientific (ibid).

Many of the methodological shortcomings of global warming climatology stem from the absence of reference by the models to the events that underlie them. In the absence of these events it is not possible for one of these models to make a predictive inference. In particular, it is not possible for one of them to make an unconditional predictive inference, that is, a “prediction.” A predictive inference is an extrapolation from one observable state of nature to another; conventionally, the first of the two states is called the “condition” while the second is called the “outcome.” In a “prediction,” the condition is observed and the outcome is inferred.

In the falsification of a model, one or more predicted probability values belonging to outcomes are shown not to match observed relative frequency values of the same outcomes in a randomly selected sampling of the events. Absent these events, to falsify a model is obviously impossible.

By the way, events are the entities upon which probabilities are defined. Absent these events, there is no such thing as a probability. Mathematical statistics, which incorporates probability theory as a premise, is out the window.

Of course it is. What parts of the impacts of the eruptions of El Chichon and Mount Pinatubo don’t you understand?

Sedron L says: “If there was anything to Bob Tisdale’s book, it would have been put out by a real publisher, and not via a vanity press.”

As with climate science, you apparently have no grasp of book publishing nowadays. And because you have expressed no grasp of those topics, I will request that you not buy my ebooks.

Are you aware that Borders closed its bookstores and that Barnes and Noble has been closing book stores? Many supermarkets don’t carry books anymore. Even used book stores are closing. Why? Everyone’s buying ebooks.

Further to your lack of understanding, the primary costs for publishing my books are the hundreds of color illustrations. The costs (not sell price) for printing my book “Who Turned on the Heat?” were over $100.00. Because you won’t even part with $10.00 for an ebook, Sedron L, I can’t see you forking over $100+ for a paper edition.

Yes, agreement on the definition of terms is vital (as is context). Discussions like ours often lead to a semantic quagmire.

GCMs are sets of equations based on physics, parametrised processes and (for hindcasting) observed forcing indices. Please define ‘events’ – not in the general scope of knowledge, but specifically regarding climate.

Again: If there were anything to your book, it would have been published by a real publisher. Self-publication is easy. Publishers have standards.

I’m not buying your book because I have seen too many basic and trivial errors from you on this blog. Your work isn’t even peer reviewed — the minimum necessary to ensure basic standards of scholarship. Afraid to try and play in the big leagues?

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, 1990 was NOT “an especially hot year.” It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.

This is nonsense. If we truncate the temperature record at 1990 – before Pinatubo – we see that 1990 was the warmest year ever in the instrumental record in both GISS and HADCRUT.

The GISS & HadCRUT data have been shamelessly manipulated. Some years in the 1930s were warmer than 1990, not to mention lots of years between c. AD 950 & 1250, not covered by those “adjusted” figures.

barry says: “The 1990 temperature anomaly was well above the trend, no matter what statistically significant linear period you choose. You can’t wish that away by pointing at other indices or events.”

barry, you missed my point. The global temperature responses to the volcanic eruptions (El Chichon and Mount Pinatubo) shifted the trend line downward. If you were to volcano adjust the data, 1990 may not fall exactly on the trend line, but it is nowhere near the 0.1 deg offset chosen by Tamino.

Bob, if you run a regression up to 1990 – the ENSO-neutral year – you avoid the trend-flattening Pinatubo event, and still 1990 is above the trend line. By about 0.1 deg.

graph

(I even ran a trend from 1982 – the El Chichon explosion – to pre-Pinatubo, so that fiddling with data gave the volcanic effects the best chance of increasing the trend. 1990 was even warmer than the trend by that method).
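The check described here is just an OLS fit over a window ending at 1990, then a look at the 1990 residual. A minimal sketch with placeholder anomaly values (illustrative numbers, not actual GISS or HadCRUT data):

```python
import numpy as np

# Placeholder annual anomalies for 1975-1990 (invented for illustration,
# not real observations).
years = np.arange(1975, 1991)
anoms = np.array([-0.05, -0.12, 0.10, 0.03, 0.09, 0.18, 0.24, 0.06,
                  0.23, 0.04, 0.03, 0.11, 0.24, 0.30, 0.19, 0.37])

# Fit a linear trend over the period ending at 1990 (pre-Pinatubo).
slope, intercept = np.polyfit(years, anoms, 1)

# Residual of 1990 relative to the trend line: a positive value means
# the year sits above the trend, which is the point at issue here.
resid_1990 = anoms[-1] - (slope * 1990 + intercept)
print(f"trend: {slope:.4f} deg/yr, 1990 residual: {resid_1990:+.3f} deg")
```

Running the same fit from 1982 (post-El Chichon) instead of 1975 is a one-line change to `years`, which is the variant described in the comment above.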

If you were to volcano adjust the data, 1990 may not fall exactly on the trend line, but it is nowhere near the 0.1 deg offset chosen by Tamino.

If you’ve done that sufficiently to estimate the result, you could update your post or share the results here. Otherwise it’s guess-work.

Alternatively, adjust the temperature record by subtracting volcanos and ENSO and see what results.

But if you do that the temperatures go up in the latter part of the record and are no longer outside model results.

I’d be interested to see your results, Bob, for defluctuating the record of volcano effects – but do it over the whole record, so that the results are not skewed by other short-term fluctuations, or at least from 1950, so that we have a strongly significant trend period to work with. And as ENSO is a primary contributor to interannual global temperatures, subtracting that, too, from the temperature record would give a better approximation of the underlying warming trend, no?

The notion that one can “defluctuate” the global temperature time series from the volcano or ENSO effect by subtracting this effect from the global temperature is logically and scientifically flawed. In logic and in science, the most that can theoretically be accomplished is for an observable but unobserved state of nature to be inferred from an observed state of nature. Thus, for example, it is conceivable for the observable but unobserved state “time averaged over 30 years global temperature greater than the median” to be inferred from the observable state “time averaged over 30 years CO2 concentration greater than the median.” As neither the volcano nor ENSO effect is observable, neither effect can properly be subtracted from the global temperature in arriving at the defluctuated global temperature.

I’d be interested to see your results, Bob, for defluctuating the record of volcano effects – but do it over the whole record, so that the results are not skewed by other short-term fluctuations, or at least from 1950, so that we have a strongly significant trend period to work with.

There are explicit solar radiation “drops” from three different volcanoes. Two, of course, are greater than the first in Guam (southern hemisphere!), and all are “measured” at the Hawaii observatory: up across the equator.

But! To attempt to show ANY relationship between solar radiation and the earth’s climate or temperature history over time, you MUST include known volcano eruptions.

Now, HOW do you do that, and HOW MUCH does each eruption change the potential inbound solar radiation?

But you DO have to show those impacts in the temperature record, and you cannot excuse the temperature record (post-1996, for example!) or model failures by claiming volcanic eruptions that do NOT show up on a similar clarity measurement.

Removing noise from trends is a regular process in statistical analyses. Seasonal adjustment is a very common process, applied for understanding economic trends, short-term sea level trends and a host of other applications. While we don’t know what causes every fluctuation in global temperature, we know that strong el Ninos cause warm years, and volcanos and strong la Ninas cause cool years. Removing estimated noise from the trend brings us closer to what the signal actually is. Not perfectly, but better.
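The noise-removal step described here is commonly done as a multiple regression in the style of Foster & Rahmstorf (2011): regress temperature on time plus ENSO and volcanic indices, then subtract the fitted ENSO and volcanic terms while leaving the trend in place. A sketch with entirely synthetic series (the index values and coefficients below are invented for illustration, stand-ins for something like MEI and aerosol optical depth):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
t = np.arange(n, dtype=float)           # years since start of record

# Synthetic exogenous indices (invented stand-ins for real forcing indices).
enso = rng.normal(0.0, 1.0, n)
volc = np.zeros(n)
volc[10:14] = [0.15, 0.10, 0.05, 0.02]  # a Pinatubo-like aerosol pulse

# Synthetic "observed" temperature: trend + ENSO warming - volcanic cooling + noise.
temp = 0.017 * t + 0.10 * enso - 2.0 * volc + rng.normal(0.0, 0.05, n)

# Multiple regression: temp ~ const + t + enso + volc.
X = np.column_stack([np.ones(n), t, enso, volc])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Subtract the fitted ENSO and volcanic contributions; the trend stays in.
adjusted = temp - coef[2] * enso - coef[3] * volc
print("recovered trend: %.4f deg/yr" % coef[1])
```

The adjusted series approximates the underlying signal only as well as the indices capture the fluctuations, which is the caveat acknowledged above ("Not perfectly, but better").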

Volcanic effects are observable, both in the temperature record and from aloft, where satellites have observed the change in radiance through the atmosphere (there are posts on satellite-observed changes to radiative forcing from volcano emissions at this site). It is one of the corroborating features of modeling that includes volcanic forcing. Hansen’s 1988 model successfully predicted the amplitude and duration of a Pinatubo-like event (but not, of course, the timing, which is essentially random). Models that include the aerosol loading for Pinatubo in hindcasts all feature a dip very similar to what actually happened. ENSO has a number of corroborating indices, not just the excellent agreement with interannual temps when a strong ENSO event occurs. The observed data doesn’t perfectly capture the anomalies, but well enough to distinguish it from (and improve) the long-term signal.

This is implicit in Bob’s thesis, which hinges on volcanic and ENSO effects on the temperature record. Do you think his premises are flawed?

I hope Terry read your post, as you pointed out another observation of volcanic effects on global temperature from the ground.

I tend to agree with your thesis, but I would go further. If you want to isolate solar influence on global temperature, don’t just filter out volcano events, also filter out ENSO (and any other known influence).

You’ll end up with an approximation, of course, but it would be an improvement on no filtering at all.

Yes, if you use a short-term linear trend, the volcanic events could make it lower. But if you use a long-term, statistically significant trend, these effects will be barely noticeable. That is the method I first argued for – to remove the potential bias of a single year’s fluctuation, baseline according to a long-term average or trend.

But no matter which way you slice the observed temperature record, 1990 pops out over the trend.

But if you do remove ENSO and volcanic effects, you’d have to follow through on the exercise and compare the new filtered series with the models, which is probably a good idea in its own right if you want to compare recent trends over periods that are not statistically significant. I’d wager that the filtered series would now be statistically significant from 1996/7/8 for any data set.

Alternatively, ask Bob Tisdale, who argues in his article that global temps should be seen through the ENSO filter. If he reads the current conversation, perhaps he’ll explain how that can be done, answering your question.

Yes, “Various methods are described in the scientific literature” and that illustrates my point. The “various methods” each provide different results because nobody really understands the effect of ENSO on global temperature.

In my opinion Bob Tisdale provides a better understanding of ENSO than is available “in the scientific literature” but I doubt he would be willing to provide the quantification which you suggest.

Science starts from admitting what we don’t know. Climastrology assumes whatever it wants to ascribe instead of trying to replace ignorance with knowledge.