
How reliable are climate models?

What the science says...

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere." (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
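As a rough illustration of that smoothing, here is a minimal numpy sketch with made-up temperatures; `rolling_climate_mean` is a hypothetical helper for illustration, not code from any climate model:

```python
import numpy as np

def rolling_climate_mean(annual_temps, window=30):
    """Running mean over `window` years: rare extreme years are
    smoothed out, leaving the underlying climate trend."""
    kernel = np.ones(window) / window
    # mode="valid" keeps only windows fully inside the record
    return np.convolve(annual_temps, kernel, mode="valid")

# Made-up record: a slow warming trend, noise, and one extreme year
rng = np.random.default_rng(0)
years = np.arange(1900, 2020)
temps = 0.01 * (years - 1900) + rng.normal(0.0, 0.15, years.size)
temps[50] += 1.5  # a single extreme (but rare) spike

smoothed = rolling_climate_mean(temps)
# The 1.5 C spike shifts any 30-year average by at most 1.5/30 = 0.05 C
```

A single extreme year barely moves the 30-year average, which is exactly why climate is defined over such windows.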

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the rise in temperature over the last thirty years, while none of them are capable of explaining the rise in the past thirty years. CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Comments

ECS is "determined" via the adjustment of models to track past climate data

Not only that. You are ignoring my previous comment asserting that ECS is determined by multiple lines of evidence, namely various paleo studies. Check for example here. In your reference (some "skeptic" blog) to the method of ECS estimation we read:

The new lower result is mainly due to the stalling in observed global temperatures since 1998 despite rising CO2 levels [...] In this post I focus on ECS and simply assume that GCM models are a correct description of climate. I then use HADCRUT4 temperature data to try to pin down ECS. Unlike the Otto et al. paper I will avoid using OHC data and simply assume an e-folding ocean heat capacity delay of 15 years (also based on models) to reach equilibrium

(emphasis mine)

I stopped reading after that. If the author acknowledges that ocean heat capacity has a large impact on surface temperatures but then ignores OHC in his calculation of ECS, then he simply contradicts himself and undermines the validity of his calculations. And as we know, the multi-decadal ocean oscillations (ENSO, AMO) can and do influence short-term surface temperature records (such as since 1998), so surface temperature data (just 4% of total heat content) is largely irrelevant to the total radiative balance.

It's time for you to ditch such sources and move on to more reliable ones, if you want to discuss your point. Unless of course, you don't want to be taken seriously.

Both Razo and Winston2014 are playing a game I call "Trivia Pursuit." They posit trivial observations about climate models and expect other readers to pursue that trivia. They also gloss over or ignore the learned responses to their trivia provided by other readers. All things considered, they are both engaging in a form of concern trolling. They are both on the cusp of relinquishing their respective privilege of posting on the SkS comment threads.

Don't get me wrong, I'm not dissing climate models. They represent our best guess at future climate. I would argue that they are -not- unbiased, but still a decent guess.

But please quit presenting your playing around with models (which is what you are doing from an engineering perspective) as hard, accurate science. You would get run out of engineering in a minute claiming that you think a building will stand because you ran a few models and everything looked good.

And by the way, the topic of this thread is 'Are climate models accurate.' My comments apply. So please do not point me back to dated and sophomoric articles and say I should comment there. My comments here are completely relevant.

It is exactly this type of scholastic bullying that Climategate exposed, and a big part of the reason that climate science lost a huge amount of its credibility a few years back...

"Where is the mathematical proof that averages can be predicted in the climate model?"

This comment shows a lack of understanding of science. You can't prove lots of empirical truths in science, but that doesn't mean they are not true. Similar arguments could be made about much of statistical physics, and they would be equally poor arguments.

See my post on the other article for an example of a system that is obviously chaotic, but where common sense ought to be enough to show that its statistical properties are not chaotic.

I have a PhD in mathematics. Examples and common sense are very often wrong. I don't feel like having huge amounts of my tax dollars chasing other people's 'common sense'.

And nobody else does, which is why climate science is not more important.

I challenge your science to study the Verification and Validation methods used commonly in engineering. Climate science owes the public and the scientific community intense focus on this issue.

A proof even for a very simple model is unlikely. But I (and many, many people, scientists and not) have a philosophical difference with your community over the common sense argument. In math and engineering we call this 'hand waving'.

Validation:

Is your model actually correctly solving the equations it claims to? (I have worked with CESM and there has never been a focus on this. Honestly, the Fortran codes are so huge and spaghettified that this is a serious issue. Numerical error in integration, adaptive integration, and a posteriori error analysis would be apropos. At Sandia there was intense focus on this for the engineering codes.)

Verification:

Does the code and the model actually capture the physics? This is intensely difficult for climate, since there are a myriad of physical processes that are parameterised in these codes. No first principles. It is not clear whether these parameterizations remain relevant in future states.

It's a hard problem. Climate science owes the world some major V&V investment if it wants the answer to this forum's topic to be 'quite reliable'.

You are engaging in non-arguments and strict scholastic bullying. Just for the record, you will convince no one with that approach, only distance them further from engaging in your cause. Good luck. I will not respond to you after this.

nickels, climate models have been validated (using the proper definition of validation, after reversing your definitions) empirically, as is explained thoroughly in the original post at the top of this comment thread. (Be sure to read Intermediate tabbed pane, too, and the cited peer-reviewed publications.) Your challenge to the "science" (sic) "to study the Verification and Validation methods used commonly in engineering" is odd, because climate modelers in fact do use V&V methods commonly used in engineering. In contrast, you seem to believe erroneously that V&V in engineering relies heavily or even exclusively on mathematical proof. Your statement "You would get run out of engineering in a minute claiming that you think a building will stand because you ran a few models and everything looked good" is correct, but your implication that bridge designers instead use only mathematical proof to convince themselves that it will stand, is wildly wrong.

My job largely is V&V of spacecraft software and some hardware and certainly their interaction, of both software used on the ground to monitor and control spacecraft, and software that runs on the spacecraft itself. Mathematical proof is only a tiny portion of that V&V.

Sorry, but whether science is correct does not depend on your views of taxation (the causal relation should run in the other direction).

"I have a PhD in mathematics." and "I challenge your science to study the Verification and Validation methods used commonly in engineering."

I have a PhD in engineering; the methods used in climate models are used in computational fluid dynamics in a wide variety of engineering industries, for example aviation, motor racing, ship design. All without mathematical proof of the nature that you are asking for.

I spent my career building models of financial markets. The notion that a model is 'good' if it correctly predicts unseen data from the historical record is laughable (i.e. the model is tested on a rolling window of data to see if it accurately predicts the subsequent unseen period).

There are two problems, one well understood and one almost universally ignored. The first is that as new explanatory variables are added to the model to improve the forecast accuracy, the unreliability of the model increases. This can be calculated - and almost always means that in complex systems, simple models outperform as predictors even though they are less accurate when back tested. Any discussion of the models that does not discuss this trade off is nonsense. In markets this means that the 'best' models are only slightly better than random, but are reliably better - the key then is risk management. I believe that the same should apply to a complex system like climate. The uncertainty in a 'good' model will make it useless for predicting the future and only useful for risk management.
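That trade-off is easy to demonstrate with a toy example. The sketch below uses made-up data (the degree choices and noise level are purely illustrative, not taken from any real market or climate model): a many-parameter fit back-tests better on the "past" window but forecasts the held-out "future" window worse than a simple one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up noisy series: a simple linear trend plus noise
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.3, x.size)

# Fit on the first 40 points, score on the last 20 ("the future")
train, test = slice(0, 40), slice(40, 60)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

scores = {}
for degree in (1, 12):  # a simple model vs. a many-parameter one
    coeffs = np.polyfit(x[train], y[train], degree)
    fit = np.polyval(coeffs, x)
    scores[degree] = {"backtest": rmse(fit[train], y[train]),
                      "forecast": rmse(fit[test], y[test])}

# The degree-12 fit back-tests better but forecasts worse: extra
# parameters buy in-sample accuracy at the price of out-of-sample skill.
```

This is the standard overfitting picture; how strongly it applies to physics-constrained climate models, as opposed to purely statistical ones, is exactly the point debated further down this thread.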

The less common problem, ignored by scientists in many, many disciplines, is that knowing which models do not work is a hidden 'look ahead' that is the bane of quant researchers in financial markets. For example, when building a model of the stock market, it is very, very difficult to forget that it crashed in 1987. This knowledge influences the choices that model builders make - they just cannot help themselves. That is why so few people make money in systematic trading - it is not just a scientific, mathematical, statistical and computational challenge - it is philosophically and psychologically challenging. In markets it doesn't really matter - long live the deluded models with their artificial certainty! They represent profit opportunity for other participants. In building climate models we do not have this comfort.

For the record, I believe that the world is warming and that this will have consequences. I also believe that the models are laughably wrong and that their only reliable attribute is that they will continue to fail to predict the outcome at any useful level of accuracy once unleashed on truly unknown data (otherwise known as the future).

The sooner the debate moves on to how we manage the risk of a warming planet, the better.

Oh, by the way, it is also obvious that we cannot stop it warming by flying less or driving a Prius. This is not just an economic observation (though economics alone mean it will not happen) but also an obvious consequence of the prisoner's dilemma. Why should I stop flying if the Chinese are building a new coal-fired power station every week? I repeat: risk management - if it warms by more than X, what could/should we do? That is where the money and time should be spent.

Response:

[JH] You assert:

I also believe that the models are laughably wrong and that their only reliable attribute is that they will continue to fail to predict the outcome at any useful level of accuracy once unleashed on truly unknown data (otherwise known as the future).

nearlyman, there is a *big* difference between models used for financial prediction and climate models, which is that climate models are based on physics, rather than being statistical models that have been fit to the data. With statistical models, the more parameters you have in the model, the (exponentially) more data you need to estimate their parameters correctly (the "curse of dimensionality"). This is not the case with physics based models, where most of the parameters of the models are constrained by physics (i.e. we can perform separate experiments to characterise what different components of the model do).

However, if you really do believe the models are "laughably wrong", that suggests to me that perhaps you have been getting your information on the performance of models from the blogsphere, rather than from the journal papers (or even blog articles written by those who have read and understood the journal papers). If you would like to give a specific example of a model projection that is "laughably wrong" (as JH suggests), I am sure that there will be plenty of people here willing to discuss it with you. If you are unwilling to provide specifics, I suspect your posts will be viewed as trolling; this is intended as well meant advice.

It's interesting. I see people who do other forms of modeling coming from two different sides to diss climate modeling. One side comes from financial modeling where the modeling is purely statistical. The other side is from engineering modelers, who say that the physics can't be sufficiently constrained to return reliable data.

These are two completely contradictory positions, with both sides claiming to have a deep understanding of modeling.

All modeling is wrong. That's just a fact. The point of modeling is that it is instructive. It teaches you things that you otherwise could not understand in the absence of the models.

There is a further fundamental difference between financial models and models of physical systems.

Financial models are far more tightly coupled to the system they model. Indeed, the financial model is itself part of the system it models. That is, a model of some form (and probably many models of many forms) will have been developed and adopted as a guide to decision-making by those involved in the financial trading that is being modelled. Likewise any learning from the modelling about the workings of the financial system will also feed back into the workings of the system. Such coupling between model and system is probably seen as a problem by the financial modellers.

There is potential feedback from climate models into the climate system but here it is the difficulty in achieving that feedback which is seen as a problem (eg CO2 emissions have bad consequences => stop CO2 emissions).

Responding on this appropriate thread, to Donny's comment on an inappropriate thread:

Spencer followed up his claim that you linked with another claim, this time about "90 models", but likewise severely flawed. Hotwhopper clearly explained Spencer's biggest...um, "mistake"...of playing fast and loose with baselines. There is also the issue of Spencer falsely giving the impression that the RSS and UAH satellite trends are consistent.

On another thread, Donny asks:"Let me ask one more question of the accurate models. ... when will the surface temperatures begin to significantly rise again? What do they predict? Also there are so many of them. ... which one should we believe? "

Since no one else has, I will attempt a response.

The question implies a considerable misunderstanding of GCMs and their output. Let's start with some basics. Firstly, models are evaluated in terms of their skill. A skillful model gives more information than a naive heuristic (e.g. "climate will stay the same"). For all the faults of models (and modellers can quickly point to their deficiencies), they remain the best tools we have for predicting future climate. Even the incredibly simple Manabe model from 1975 managed to nail 2010 temperatures pretty well. Secondly, GCMs for all their usefulness are not the basis for AGW, nor are they the only way to estimate climate sensitivity to an increase in CO2. That can be done "bottom up" from pure physical consideration of feedbacks, or from empirical means. Whichever way, you end up with a climate sensitivity likely in the range of 2-4.5°C.

In terms of Donny's question, the next thing to understand is that models have no skill at decadal-level prediction of surface temperature (and many other associated parameters). Over short time intervals, surface temperature variability is dominated by ENSO. This is a chaotic ocean-atmosphere phenomenon which is extremely difficult to predict even a few months out. In the El Nino phase, the atmosphere (and thus the surface temperature) gets a huge boost from heat stored in the ocean. Over the last 15 years, however, La Nina or neutral conditions have predominated. Climate, however, is about 30-year averages, and these effects cancel out. Climate models are skillful at estimating future 30-year averages.

So what do they predict? Well, over a 30-year period, they predict the climate will be close to the ensemble mean. They predict that actual temperatures will follow a trace as variable as one of the grey lines on the graph at the bottom of the article. They do not predict an exact path. Rerun the same model with slightly different initialization and you get a different grey line. Do many runs on many models and you get that nest of grey lines which make up the model mean. I am not aware of evidence suggesting that any one of the 10 or so modelling groups is significantly more skillful than the others. The ensemble mean is the average of them all.
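A toy numpy sketch of that point, with invented numbers (the trend and noise levels are placeholders, not CMIP output): each "run" shares the same forced trend but has its own random internal variability, and averaging many runs cancels the unforced wiggles while leaving the forced signal:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ensemble: every run shares the same forced trend but carries its
# own random internal variability (a stand-in for ENSO-like noise)
years = np.arange(2000, 2030)
forced = 0.02 * (years - 2000)  # 0.2 C/decade, purely illustrative
runs = np.stack([forced + rng.normal(0.0, 0.1, years.size)
                 for _ in range(40)])

ensemble_mean = runs.mean(axis=0)
# A single run wiggles like one grey line on the plot; averaging 40 runs
# shrinks the unforced wiggle (roughly by sqrt(40)), leaving the trend.
```

This is why the real Earth, a single "run", is expected to wander around the ensemble mean rather than track it year by year.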

When will you get significantly more warming? When the next El Nino cycle happens. If the climate response is more muted than expected, then that will cause some examination of the models. The strength of the aerosol forcing remains an uncertainty, as does the precise strength of cloud feedbacks.

What is much easier to predict than surface temperature is total ocean heat content. However, we have only had detailed, accurate measurements since 2004. As long as OHC continues to climb (unlike the decline in the mid-20th century or after Mt Pinatubo), it can be expected that surface temperatures will also rapidly climb in an El Nino.

It is good that you brought this up. Many people think that the models do not accurately include many positive feedbacks like arctic carbon or sea bed methane. This means they systematically underestimate the expected warming, and things are worse than we think.

One only has to look at sea level rise, one of the worst long-term problems of AGW. Sea level rise runs at the very top of the model results. It is clear that future sea level rise is much more likely to run over the IPCC projections than under them.

Thank you for pointing out that the models are often too conservative.

"Many people think that the models do not accurately include many positive feedbacks like arctic carbon or sea bed methane. This means they systematically underestimate the expected warming and things are worse than we think."

No! That means that there are some aspects of model performance which, if cherry picked, indicate they underestimate expected warming. Equally, there are some other aspects of model performance which, if cherry picked, indicate they overestimate expected warming. In practice they tend to balance out, with the best evidence suggesting they currently overestimate warming by 15%. That evidence remains consistent with their actually underestimating warming (given uncertainties); but not (on a frequentist test) with their overestimating warming by more than 100% (ie, we can be very confident of at least half of model ensemble mean predicted warming going forward).

I will note that the models are underpredicting (by and large) slow feedbacks more than fast feedbacks. On that basis I suspect the current 15% overestimate of warming figure is misleading in the long term, and that the actual future warming beyond several decades will be closer to the model mean or above it rather than below that 15%.

I agree "that there are some aspects of model performance which, if cherry picked, indicate they underestimate expected warming. Equally there are some other aspects of model performance which, if cherry picked, indicate they overestimate expected warming. "

My comment was intended to illustrate the cherrypicking you mention for Shadow Dragon. Perhaps Shadow Dragon will come back and clarify what they are interested in. Hopefully their interest will be as even handed as your assessment is.

BojanD asked on another thread: "Since some of you know a thing or two about models, I would like to ask you a question that I just can't find an answer to. I'm looking for some credible explanation of why 1990 is the baseline year used to align model projections with measurements. Since misaligning them is one of the favorite contrarian tactics, I would like to have a technical description of why it is wrong. Thank you!"

(Somebody more knowledgeable than me--and there are lots of you--please correct any errors in my below explanation to BojanD.)

BojanD, choosing any baseline does not change the slopes of the trend curves. That's the first thing to note in a discussion of a fake skeptic's manipulation of baselines. Up is still up. Scary is still scary.

Choosing a baseline affects only the position of the curve on the y (temperature) axis. That is irrelevant if you are looking at only one curve. It is relevant only if you are comparing trend curves, and even then only if those curves are not identical. Real temperature data's trend curves never are identical, of course. Computing all the data using the same, single, baseline year in effect moves all the data up and down the y axis to make all those datasets and therefore trend curves intersect at that baseline year. Using as your baseline an average of several consecutive years does the same thing, but intersects the datasets and trend curves at the midpoint of that timespan.

Choosing a baseline period has the effect of choosing the time period in which the curves intersect. That has the effect of choosing which portions of the curves intersect.

The differences among the trend curves are minimized at the midpoint of the baseline period. The differences manifest most before and after that baseline period.

Imagine trend curves Observed and Model both have positive slopes, but Observed's slope is shallower than Model's. If you move the curves up and down the y axis (temperature) so they intersect at the far left of the x axis (years), then Observed will diverge below Model toward the right of the plot; Observed will be cooler than Model the closer to Now that you get. If instead you make them intersect at the far right end of the x axis, Observed will diverge above Model toward the left of the plot; Observed will be warmer than Model in the past, but approach and then match Model at Now.

How do you choose a fair timepoint at which the curves intersect? That depends on the exact question you want to ask. But almost regardless of the exact question you want to ask, it is inappropriate to choose any single year as the baseline period, because that year almost certainly will not be representative of the underlying, "true," population temperature even at that one-year moment in time. If your single chosen year lies above or below the true population temperature at that year, then you will be making the trend curves' intersection unrepresentative of the population data. You can't use even just a few years, because the true population temperature is obscured by short-term noise. You really should baseline on the temperature averaged across 30 years or so. That's why the definition of "climate" is based on 30 years.
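A minimal sketch of that point with made-up data (`rebaseline` is a hypothetical helper, not any standard library function): a single-year baseline inherits that year's noise, a 30-year baseline largely averages it away, and neither choice alters the trends:

```python
import numpy as np

def rebaseline(series, years, base_start, base_end):
    """Shift a series so its mean over [base_start, base_end] is zero."""
    mask = (years >= base_start) & (years <= base_end)
    return series - series[mask].mean()

# Made-up data: a smooth "model" curve and noisy "observations"
rng = np.random.default_rng(7)
years = np.arange(1970, 2015)
model = 0.02 * (years - 1970)
obs = model + rng.normal(0.0, 0.12, years.size)

# Single-year baseline: the vertical alignment hinges on 1990's noise
obs_1yr = rebaseline(obs, years, 1990, 1990)

# 30-year baseline centred on 1990: the year-to-year noise mostly cancels
obs_30yr = rebaseline(obs, years, 1976, 2005)
# Both versions have identical trends; only the vertical offset differs.
```

The two rebaselined series differ only by a constant, which is why a misleading plot can shift the apparent model-observation gap without touching any trend.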

HotWhopper showed how Roy Spencer used a short and cherry-picked baseline period to produce his infamous plot of models over-projecting temperature in recent years, here and then with a different flavor here.

BojanD: An example of incorrect baselining by using the single year 1990 was an IPCC AR5 report preliminary draft's plot of model projections versus observations. That error was pointed out by Tamino and explained by Dana here on SkepticalScience.

That particular error has a problem in addition to what I explained in my previous comment: Each model curve is a different run of a model, with different parameters; think of each curve as being from an alternate Earth. The mean curve of those curves inherently averages out the noise and so is representative of the population of all those curves--the Average Earth. But there is only one observed dataset--only one of the many alternate Earths. We can't create an average across all the alternate Earths' observed temperatures, because we've got only one Earth. We have a really sparse sample of the population of alternate Earths' observed temperatures, that we are comparing to a much larger sample of the population of alternate Earths' modeled temperatures.

We can't get more alternate Earths' observed temperatures. But at least we can get a better estimate of the population of observed temperature at that year (1990) by averaging across the 30 years centered on 1990, and using that average as our baseline.

Tom Dayton @778, your explanation is a good one except for the claim that "Using as your baseline an average of several consecutive years ... intersects the datasets and trend curves at the midpoint of that timespan". That will only occur if the various datasets only differ in slope. As the datasets also differ in the shape of the curves (due to differences in annual variation) they will not all intersect in the same year using a multi-year baseline, and different curves will likely intersect multiple times across the baseline period (and probably outside it as well). The average difference across the baseline period (specifically the Root Mean Squared Difference) will be minimized.

In response to BojanD, I would like to draw attention to two comments I made (in particular) in the discussion of the AR5 Fig 1.4. First:

"Use of a single year baseline means offsets can vary by 0.25 C over just a few years (CMIP3 comparison), while with the five year mean it can vary by 0.15 C (CMIP5 comparison). That is, choice of baselining interval with short baselines can make a difference equal to or greater than the projected decadal warming in the relative positions of observations to model ensemble. When you are only comparing trends or envelope over a decade or two, that is a large difference. It means the conclusion as to whether a model is falsified or not comes down largely to your choice of baseline, ie, a matter of convention."

Note the range of potential offsets here were calculated specifically for the model temperature comparison. For some data sets, with low annual variability choice of a single year baseline makes no practical difference.

"...On the contrary, a 1990 baseline makes the observations look warm. The emphasis on that point is so that my allies pick up on the fact. Intuitively, we would expect a 1990 baseline to cause the observations to look cool, for 1990 is a local high point in the observations. However, that is not the case, for though the observations are warm, the ensemble mean is warmer still relative to adjacent years. Thus, if anything, a 1990 baseline is favourable to a defence of the validity of models.

But it is still wrong.

It is wrong, basically, because you have to analyze the data as you motivated me to do to know whether it is favourable, unfavourable or neutral with regard to any position. The only way to avoid that necessity is to use a long (thirty year) baseline so that the baseline is robust with respect to time period used. Ideally, we should use the long baseline that minimizes the Root Mean Squared Error (RMSE) between the observation and the ensemble mean. By doing so, we ensure that any discrepancy between observations and ensemble are real, ie, are not artifacts of an ill chosen baseline."
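For a pure vertical shift, the RMSE-minimizing offset between observations and the ensemble mean is simply the mean difference between the two series. A small sketch with invented data (the function name is hypothetical) checks this against a brute-force search:

```python
import numpy as np

def rmse_minimizing_offset(obs, ensemble_mean):
    """Vertical offset that minimizes RMSE between observations and
    the ensemble mean; for a pure shift it is the mean difference."""
    return float(np.mean(obs - ensemble_mean))

# Made-up series: ensemble mean plus a known 0.3 offset and some noise
rng = np.random.default_rng(3)
ens = np.linspace(0.0, 0.8, 40)
obs = ens + 0.3 + rng.normal(0.0, 0.1, ens.size)

analytic = rmse_minimizing_offset(obs, ens)

# Brute-force: scan candidate offsets and keep the one with lowest MSE
grid = np.linspace(-1.0, 1.0, 20001)
brute = grid[int(np.argmin([np.mean((obs - ens - c) ** 2) for c in grid]))]
# analytic and brute agree to within the grid spacing (1e-4)
```

Baselining on such an offset, computed over a long window, ensures any remaining model-observation discrepancy is real rather than an artifact of the chosen baseline.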

I will note the comments in the thread on IPCC AR5 Fig 1.4 are well worth reading in general (particularly those from SkS authors and regulars) for those who want to understand the issues relating to baselining better.

@Tom & @Tom, very much appreciated. OK, I get it now. There's nothing absolute about 1990, but it is a very convenient year for a sniff test since it was by far the warmest year. And as @TD said (and Dana devoted a whole section to it), you can just forget about aligning stuff and look at the trends instead. Not sure how I missed Dana's article. Had I read it before, I wouldn't have had to ask the question.

Why is it that, of all the models presented by the IPCC in its last report (AR5 WG1 fig 9.8), all but one overshoot their estimate of the current temperature anomaly, by up to 0.6 °C?

(with a zoom of the upper right corner)

It may well be that the quality of the models doesn't enable a more exact reproduction of actual data.

That's life when historic series are scarce and imprecise, and when interactions are not well understood. More computers can't help.

However, what does this say about their ability to play out future scenarios and provide useful projections? Or to render a plausible equilibrium climate sensitivity?

Response:

[Rob P] - IIRC the CMIP5 simulations use historical data (hindcasts) up to 2005, and estimated forcings thereafter. The multi-model mean is the average of all the simulations and thus represents Earth without any natural variation. Natural variation (the IPO) has, however, had a hand in recent surface warming trends, as have light-scattering volcanic sulfate aerosols, a cooler-than-expected solar cycle, and perhaps industrial sulfate pollution too.

Allowing for all these things there is actually a remarkable match of the climate models with the recent slower rate of surface warming. And note too that many of these models have climate sensitivities of around 3°C. Time will tell if there's a problem, but there's no genuine reason to think so yet.

As the intermediate article above states: "Each individual climate model run has a random representation of these natural ocean cycles."

That is, they are trying to simulate a large source of the 'noise' in the climate signal. The ensemble mean of the models will, in some sense, reflect a scenario in which the short term noise has not pulled the temperature in either direction.

In reality, the noise of the last 15 years has pulled the signal downwards. On the flip side, if the noise had been a positive, rather than negative signal, we would most likely have witnessed temperatures that tracked above projections.
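That relationship between individual runs, the ensemble mean, and noise can be sketched with synthetic numbers (hypothetical trend and noise levels, not real model output): give many "runs" the same underlying trend but independent short-term noise, and the ensemble mean tracks the trend far more closely than any single run does.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2016)
signal = 0.02 * (years - 2000)   # hypothetical forced warming trend, deg C
n_runs = 40

# Each run gets its own realisation of short-term "ocean cycle" noise.
runs = signal + rng.normal(0.0, 0.1, (n_runs, years.size))
ensemble_mean = runs.mean(axis=0)

# Individual runs deviate from the underlying signal far more than
# the ensemble mean does, because averaging cancels the independent noise.
run_dev = float(np.abs(runs - signal).mean())
mean_dev = float(np.abs(ensemble_mean - signal).mean())
print(round(run_dev, 3), round(mean_dev, 3))
```

This is why observations (one noisy realisation, like any single run) should be expected to wander away from the smooth ensemble mean, in either direction.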

Long-time reader of SkS, but not a poster. I had the basics of AGW and modelling explained to me in the 90s by a crazy Swedish mathematician/scientist, and as a data modeler I took it to heart. My main academic field is Philosophy, but you gotta earn a buck as well I guess…

Question I have though is with regard to the IPCC year-by-year model results. As I understand it, most “skeptics” infer that the farther away observations stray from the ensemble mean, the more wrong you are. As if the ensemble boundaries are error bars of sorts.

But that’s far from being the case, of course. In fact, the models are arguably performing correctly today precisely when observations hug the lower boundaries, given the downward pressure of volcanic activity, short-term solar cycles and a net-negative pseudo-ENSO cycle since 1998.

Just wondering, then, if the IPCC should perhaps offer some qualified projection scenarios? Like explaining in 2000 that a possible future La Niña-dominated decade would in fact produce a flatter curve? It certainly would help explain things to my right-wing friends… and to the larger public as well. I understand that the ensemble mean does not include natural variability, but lots of people don't...

SDK @784, what you are looking for was in fact provided in the draft version of the recent IPCC report:

In this graph, the range of the projections is given as the range between the mean projections for two different but plausible business-as-usual scenarios. To that is appended the grey zone representing the reasonable range of annual variability due to short-term factors such as ENSO. The graph was amended in the final report, mostly because of a fake controversy (see here and here) generated by ignoring that fact (which was not sufficiently emphasized by defenders of climate science, myself included). The graph does have some flaws, including an inappropriate baselining on a single year and the fact that the grey zone, out of graphic necessity, is drawn from the upper or lower limit of all projections. Caution should therefore be used in presenting that graph, which should not be shown without disclaimers regarding its flaws and links to rebuttals of the trumped-up controversy.

For these reasons, I prefer my own graph which plots observations against all model runs for AR4:

Doing so allows the actual model variability to define the expected annual variability, thereby eliminating the false perception of smoothness sometimes generated by showing only ensemble means for projections. The test for those claiming the models failed to project current temperatures is to pick out the observations from among the projections. If they cannot do so easily, then the model projections have correctly captured both the trends (see below) and the range of annual variability.
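A crude version of that "pick out the observations" test can be sketched numerically (again with synthetic data, hypothetical trend and noise, not actual AR4 runs): generate many model runs with realistic year-to-year variability and check how often an observation series with similar variability falls inside the envelope of the runs.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1990, 2014)
trend = 0.018 * (years - 1990)   # hypothetical forced trend, deg C

# 30 hypothetical model runs and one "observation" series,
# all sharing the trend but with independent annual noise.
runs = trend + rng.normal(0, 0.12, (30, years.size))
obs = trend + rng.normal(0, 0.12, years.size)

# Fraction of years in which observations fall inside the run envelope.
inside = (obs >= runs.min(axis=0)) & (obs <= runs.max(axis=0))
print(f"{inside.mean():.0%} of years fall within the run envelope")
```

When the runs capture both the trend and the annual variability correctly, the observations sit inside the envelope almost everywhere and look like just another run, which is the point of the test.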

Thanks for the charts (from a fellow Tom who is also knee-deep into chart porn…)

Forget about the deniers; these charts are not even close to addressing the concerns of the fence-sitters either, at least not when we need to attach complex disclaimers to them. I mean, if you ask, say, 90%+ of the general population, they would have no idea what the ensemble mean actually, erm… means in the first place. Isn’t that kind of a “double catastrophe” when you add the battle of public relations? We’ve won on the science, but we're losing badly on communications and framing the issue. Unfortunately, science doesn’t always fit nicely on a bumper sticker.

Sure, I can provide charts with the short-term weather noise removed, but then I’ll always be accused of chart manipulation… “after the fact”.

What if I had a chart, e.g. from the early 2000s, showing that if the three main short-term natural fluctuations put a downward pressure on the GST in the coming years, it would produce a flattening of the curve? Further, why not just produce hundreds of those, with various combinations of premises, which can subsequently be checked “after the fact”? As appendices to the IPCC reports? Just asking, as it sure as hell would've made my life easier arguing the AGW case on other forums.

Just to add: how many in the general population would understand that today the model ensemble mean can be performing correctly precisely because the instrumental record strays away from it? And that if the observed data were closer to the mean, or even exactly on top of it, that would in some sense be more suspect?

Climate modeling is a political issue. No, it is not a Scientific issue, because it is not Scientific. Science relies upon direct observation – not extrapolation. Empiricism underpins Science. Modeling is an aid to Science. Theory is not Science. At best a theory is a guess at the consistency of Reality. Climate modeling is an exercise in Mathematics. Models are based on assumptions. Assumptions prove nothing.

Deon, in fact science is just as much extrapolation via theory into models, as it is observation. In fact, "direct observation" is only the first step. For example, you probably think that looking at a mercury column thermometer and noting the number that the column reaches is "direct observation" of temperature. But it's not. Instead your temperature observation relies on your model of the relationship of the mercury's height to the temperature. The very notion of temperature is theoretical. When an apple comes loose from a tree and you predict that it will fall to the ground, you are using a model. All those models are based on observations. All theory is generalized observation. Models are instantiations of theories.

Perhaps you think climate models are poor models because you think they are merely statistical models that assume the future will be the same as the past. But that is incorrect. Climate models are models of physical processes, whose elements are constructed to match empirical observations of fundamental physical phenomena such as how much water vapor air can hold at a given temperature. The models are then run so that all those individual elements interact, yielding the climate projections.

To learn about how climate models are built and run, click the links in the "Further Reading" green box below the original post on this page (right before the comments section).

Adding to Tom Dayton's excellent post @788, I want to note that a theory is of no practical use in science if it does not lead to mathematical predictions. Absent such predictions, comparison to observations can only be vague at best. And the way that theories lead to mathematical predictions is by being couched in mathematical models. Consequently, absent mathematical models, a theory can only give vague, qualitative predictions that constitute hand waving more than science. It is one of the outstanding features of climate science that its practitioners have taken such extensive efforts to couch their theories in mathematical models, and to check those models against observations across a wide range of observations (not just Global Mean Surface Temperature). Given the difficulties in doing so, due both to the complexity and to the chaotic nature of climate, it is a sterling effort.

Climate scientists are often criticized for making model-based predictions. But as the model is just the theory couched in the most mathematically precise way we are capable of at the moment (given budgetary constraints and limitations of computing power), what else could you base your predictions on? By definition, a prediction cannot be based on observations alone, because it goes beyond the current observations.

In fact, those who say we should not base predictions on models typically resort to predictions made with crude models that lack physics and are constrained to just one or two climate variables. In effect, they want climate science to remain at the hand-waving stage found around 11 pm in any bar room across the world.

This is not to say that models are perfect. They are not, and some of their imperfections may be due to flaws in the theory they embody. (Others are related to inaccurate initial or boundary conditions, and/or limitations of computational power.) But you can only find the flaws in a theory by putting it in mathematical form and comparing it to observations (i.e., by using models), so that is no reason not to use models. And it is reasonable to make our predictions based on the best currently available theory. The complete lack of climate models developed by "skeptics" means that, by default, that is the conventional theory accepted by the IPCC.

Tom Dayton @788 correctly notes that "temperature" is a theoretical construct, one that has proved invaluable in uniting a whole host of observations. I will merely note that, given that our senses are sound, vision, smell, taste and touch, even such objects as the sun, or other people (given our lack of ESP), are theoretical constructs for us. We just do not notice, because those constructs are so good at uniting disparate phenomena, and we are so used to them in practice, that unless we are of a philosophical bent, we don't think about it.

When I took on the task of learning to trade stocks and currency a few years ago, I was amazed at the number of 'models' that were 'back-tested' to be accurate. They tended to be somewhat accurate - until they failed.

Most scientists do not understand randomness and the role it plays in all aspects of our lives. We exist totally due to randomness. Weather - despite how much we think we understand the interactions of everything that affects weather - is totally random. We will never be able to predict, with certainty, the future of weather.

Many, if not all, of the so-called solutions to curb CO2 emissions, or to curb sunlight, or to convert to alternate energy sources - although they may be necessary - do not address the effects on many people in the middle income brackets and below. Some do not address the potential negative effects on our health.

I would like to see the data that scientists use to predict the negative effects on our environment, animals and humans. Polar bears are often used as an example, yet polar bears can survive without an ice-covered Arctic, as they do in parts of eastern Canada.

Also, it would be very eye-opening to see the source of the actual data, how it is interpreted, and what assumptions are included in their climate models. If we demand openness in government, we should also demand it on something as important as this.

Sangfroid... There's a huge difference between financial models and climate models. Climate models are based on physical processes. Financial models are subject to varying human behavior. Gavin Schmidt has a great TED Talk about climate models that you should listen to.

"Weather [...] is totally random."

If weather were totally random then you'd get snow in the summer and sweltering heat waves in the winter. Your statement is patently absurd.

With regards to the rest of your comment, all the information is there. You just have to dig in and start reading. Understand that the body of scientific research is produced by researchers who've dedicated their lives and careers to getting this stuff as accurate as possible.

Sangfroid @791, there is a major difference between stock market (or currency trading) models and climate models. The stock market models are entirely statistical. In contrast, the climate models encode well-established physical laws into mathematical representations of the atmosphere: laws such as conservation of energy, radiative transfer physics, Boyle's law, etc. Because we cannot represent the atmosphere molecule by molecule (or indeed, kilometer by kilometer), some of the laws are approximated based on empirical estimates of their effect in the real atmosphere. Consequently, when these models retrodict the temperature series without having been trained on that temperature series, that is a significant prediction.

The achievement is even more impressive in that the models do not predict just a single time series (again unlike stock market models). They predict temperature series for a variety of altitudes and ocean depths. They predict major atmospheric and ocean circulations (including ENSO-like effects). They predict precipitation changes, and changes in sea and land ice. They are not perfect at any of these - indeed they do not always agree among themselves - but they do so with accuracy very far above chance. This would not be possible if they did not get the fundamental processes right, and if they were not in the right ballpark for the subtler effects.

So, quite frankly, I consider your analogy to be on a par with somebody insisting that because a particular sum cannot be calculated in a reasonable time on an abacus, it cannot be calculated in much better time on a Cray XC40.

"Also, it would be very eye-opening to see the source of the actual data, how it is interpreted, and what assumptions are included in their climate models. If we demand openness in government, we should also demand it on something as important as this."

The assumption here is that the data are not available to the public. That assumption is false. For all climate models, the basic assumptions and fundamental equations are specified in the peer-reviewed literature, as for example for the GISS Model E. Updates are likewise detailed in the peer-reviewed literature, either in separate papers for major updates, or as part of the paper in which a minor update is first used. Further, for some models (notably the GISS Model E) full code is available for earlier versions, and considerable effort is made to make the code available to the public in a convenient format, as detailed here. Nor is NASA alone in granting this level of access to the code. Indeed, one climate model (CCSM 3.0) was explicitly designed to be available to the public, and is freely downloadable along with all relevant documentation.

So the problem is not the availability of code and assumptions for analysis. It is that so-called "skeptics" do not make use of the resources made available to them to run their own climate experiments, or to modify the models according to the principles they think hold. I presume that is because they are quite aware that the result, if they did, would be a very much poorer fit to the data than is provided by the standard models.

That answer is spot on. I will have to write that down. I think it's worth re-emphasizing that the models do not make a single set of predictions - for example, of global temperature. People who question models often talk as if they do, but these models actually make many different predictions about the atmosphere and climate. Each of these can be compared to observation, and the number of different sorts of predictions provides a fairly stringent test of the models.

"Most scientists do not understand randomness and the role it plays in all aspects of our lives"

Excuse me while I pick my jaw up off the floor!

Understanding of 'randomness' is absolutely central to science. I would back 100 scientists' understanding of statistics against that of 100 people from just about any other background - apart, perhaps, from pure statisticians.

So your comment "Weather - despite how much we think we understand the interactions of everything that affects weather - is totally random" betrays a deep limitation in your understanding of both weather and statistics.

Weather is an example of bounded randomness: a process that displays degrees of randomness, but within bounds imposed by non-random processes - primarily the conservation laws. Randomness alone, for example, could never produce high pressure systems over the entire planet; the Law of Conservation of Energy prohibits that.
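Bounded randomness is easy to illustrate numerically. In the sketch below (purely illustrative parameters, not a weather model), a "temperature anomaly" receives random daily shocks but is pulled back toward its seasonal norm, a stand-in for the physical constraints, while an unconstrained random walk receiving the same shocks drifts without limit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 3650

# Bounded randomness: random shocks plus a restoring pull toward the norm
# (an AR(1) process; the 0.9 damping factor is an arbitrary illustrative choice).
anomaly = np.zeros(n_days)
for t in range(1, n_days):
    anomaly[t] = 0.9 * anomaly[t - 1] + rng.normal(0, 1.0)

# Pure randomness: the same shocks with no restoring force.
walk = np.cumsum(rng.normal(0, 1.0, n_days))

# The bounded process stays within a few units of zero for the whole decade;
# the random walk's spread keeps growing.
print(round(float(anomaly.std()), 1), round(float(walk.std()), 1))
```

This is the difference between "weather is variable" and "weather is totally random": a totally random process would eventually wander to summer snow and winter heat waves, while the bounded one cannot stray far from its norms.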

Some models predict better than others, and failing to adequately account for emerging environmental factors of differing importance - for instance, the increasing importance of precipitation data - will likely, over time, degrade a model's predictive capacity.

Response:

[JH] You are once again skating on the thin ice of sloganeering which is prohibited by the SkS Comments Policy.

Please note that posting comments here at SkS is a privilege, not a right. This privilege can be rescinded if the posting individual treats adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.

Please take the time to review the policy and ensure future comments are in full compliance with it. Thanks for your understanding and compliance in this matter.