
How reliable are climate models?

What the science says...

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere." (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate to explain temperature variations prior to the last thirty years, but none of them can explain the rise over that period. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
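The hindcasting logic above can be illustrated with a deliberately toy sketch. The "observed" series and the three contributions below are entirely synthetic numbers invented for illustration (they are not real forcing data), but they show the shape of the argument: a model built from the known forcings only matches the record when the CO2 term is included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: a made-up "observed" temperature anomaly
# built from three contributions, standing in for real forcing histories.
years = np.arange(1900, 2001)
solar = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11)   # cyclic, no trend
volcanic = np.zeros(years.size)
volcanic[[63, 91]] = -0.4                               # brief cooling spikes
co2 = 0.006 * (years - 1900)                            # slow, accumulating warming
observed = solar + volcanic + co2 + rng.normal(0, 0.05, years.size)

def hindcast_error(include_co2: bool) -> float:
    """RMS misfit of a 'model' that sums the known contributions."""
    model = solar + volcanic + (co2 if include_co2 else 0.0)
    return float(np.sqrt(np.mean((observed - model) ** 2)))

# Without the CO2 term, the hindcast cannot reproduce the late-century rise.
assert hindcast_error(include_co2=True) < hindcast_error(include_co2=False)
```

Real attribution studies are vastly more sophisticated, but the structure of the test is the same: leave a forcing out and see whether the hindcast still fits.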

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Comments

It also looks like your friend is confusing physical models (used by climate scientists) with statistical models. However, if he wants a purely phenomenological approach, perhaps he should look at Benestad and Schmidt.

Why use temperature anomalies (departure from average) and not absolute temperature measurements?

Absolute estimates of global average surface temperature are difficult to compile for several reasons. Some regions have few temperature measurement stations (e.g., the Sahara Desert) and interpolation must be made over large, data-sparse regions. In mountainous areas, most observations come from the inhabited valleys, so the effect of elevation on a region’s average temperature must be considered as well. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.

Using reference values computed on smaller [more local] scales over the same time period establishes a baseline from which anomalies are calculated. This effectively normalizes the data so they can be compared and combined to more accurately represent temperature patterns with respect to what is normal for different places within a region.

For these reasons, large-area summaries incorporate anomalies, not the temperature itself. Anomalies more accurately describe climate variability over larger areas than absolute temperatures do, and they give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends.
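The valley-and-mountaintop example above can be made concrete with a small sketch. The station names, normals, and noise levels below are hypothetical numbers chosen for illustration: two stations with very different absolute temperatures both record the same regional cooling, and only the anomalies reveal it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stations: a warm valley and a cold mountain top.
# Both experience the same regional cooling of 1.0 C in the target month.
baseline_years = 30
valley_normal, summit_normal = 22.0, 6.0
valley_history = valley_normal + rng.normal(0, 0.8, baseline_years)
summit_history = summit_normal + rng.normal(0, 0.8, baseline_years)

valley_obs = valley_normal - 1.0
summit_obs = summit_normal - 1.0

# Anomaly = observation minus that station's own long-term average.
valley_anom = valley_obs - valley_history.mean()
summit_anom = summit_obs - summit_history.mean()

# The absolute temperatures differ by 16 C, yet both anomalies are ~ -1 C,
# so averaging the anomalies recovers the shared regional signal.
regional_anomaly = (valley_anom + summit_anom) / 2
assert abs(regional_anomaly + 1.0) < 0.5
```

Averaging the absolute readings (21.0 and 5.0) would say nothing useful about the region; averaging the anomalies immediately shows a month about 1 C below normal.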

I appreciate your response. In fact, I have already pointed out the fact that anomalies rather than absolute temperatures are used for the reasons stated by NOAA.

My problem was that I was not sufficiently confident of my facts regarding the models to say for certain that the raw data output does not appear in the form of absolute temperature. It certainly wouldn't make sense for it to do so given that the global temperature datasets are presented as anomalies, but I wanted to check up first.

Thanks,

Paul

Response:

[DB] Apologies; I didn't mean to imply that you hadn't. My intent was to provide you with a sourced, concise reference. Sphaerica gives some good links to resources on models here.

"Both flux-adjusted and non-flux-adjusted models produce a surprising variety of time-averaged global mean temperatures, from less than 12°C to over 16°C. Perhaps this quantity has not been the subject of as much attention as it deserves in model development and evaluation."

However, given that we're dealing with energy flux here, the appropriate unit is surely the Kelvin. In this context, all of the models get within 2K of the actual global mean, which appears to be around 287K - that's within 0.7%!! I'd say that's pretty remarkable given all the various features which are incorporated into the models. Indeed, if they model the response to greenhouse gases anything like that well I'm sure the scientists will be delighted!

"As far as multiple model runs and picking the middle as a result. Being the models do not do well with clouds, hydro, etc which do affect not only weather, but clmate as well, the outputs of the models should be in question."

Well, of course model outputs are in question - does the actual sensitivity to a doubling of CO2 lie at the low end or the high end of the 2.5-4C range that's constrained by a large body of scientific studies, including but not limited to research involving GCMs?

Just because the modeling of clouds is identified as an area where models don't do as well as one would like (because of restrictions on resolution; there are people who do very interesting work modeling clouds using high-resolution models on small slices of the atmosphere) doesn't mean that there is no constraint on the magnitude of cloud feedbacks.

You - and the denialsphere in general - say "cloud feedbacks aren't as well constrained as the radiative properties of CO2" (for instance), conclude "therefore, the magnitude of cloud feedbacks is not constrained at all", and furthermore argue that cloud feedbacks must be strongly negative, to the point of counterbalancing CO2 and water vapor forcing.

dhogaza:
The models do well with co2 because of the simple physics. However, there is a lot more to climate than just co2 levels.
The hydro cycle is critical.

Response:

[DB] Please provide peer-reviewed evidence that models do not deal adequately with the hydrological cycle. This is a climate science website; opinions are of no value without a scientific undercarriage to support them.

Climate models have this scientific undercarriage; your opinions do not.

In addition to their 'undercarriage,' models get better with time. People who run models learn from prior work. That seems to be a significant problem for the deniers - they just keep repeating the same old generic 'models are unreliable.'

For example, listed here are several publications from a NASA water cycle study group. These folks are addressing the very issues that Camburn is looking for: evaporation, clouds, soil moisture, etc.

But really: is there something likely to come out of this detail work that will undo the warming to date? That will undo the fact that forcing from atmospheric CO2 keeps rising? That these nonsensical objections (Warming paused! You can't be sure! There's no basis!) are just distractions from the real questions?

You've got to love the way uncertainties in parts of climate models get conflated with "models are unreliable", or "models do not have predictive ability".

Say it's mid-August in Melbourne, and the daytime temperature is a respectable (and close to average) 15C. Can I forecast the exact temperature two weeks from now? No. But I can say that it's likely that the average temperature during September will be a bit higher than 15C. Some days will be cooler, but it's very likely, though not certain, that most will be warmer. As for October, I can forecast that nearly all days will have a maximum temperature higher than 15C, and for November and December, it's unlikely that any day will be below 15C. I know this because the underlying forcing, not visible in a short timeseries with large variability, shows up over a longer period of time. The underlying forcing beats the variability every time. I know that October will be warmer than August, although not every October day will beat every August day. In the same manner, I can be very confident that the 2010s and the 2020s will be warmer than the 1990s and 2000s, even though not every later year will beat every earlier year. The models forecast this very well, along with a great many more complex factors. Some factors they handle less well, but claiming unreliability betrays an inability to understand the usefulness of models.

Is the model unreliable because it cannot pick out the exact variability due to noisy variations in the short term? If you're forecasting the weather two months ahead, yes, but if you're forecasting the climate, no.
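The Melbourne reasoning above can be sketched numerically. The seasonal cycle, noise level, and dates below are invented for illustration (a hypothetical southern-hemisphere station, not real data), but they capture the point: individual days are dominated by weather noise, while monthly averages are dominated by the persistent forcing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Melbourne-like daily temperatures: a smooth seasonal cycle
# (the persistent forcing) plus large day-to-day weather noise.
days = np.arange(365)
seasonal = 15 - 6 * np.cos(2 * np.pi * (days - 167) / 365)  # coolest mid-June
temps = seasonal + rng.normal(0, 3, days.size)

august, december = slice(212, 243), slice(334, 365)

# Any single day is dominated by noise: a spread of ~3 C around the cycle...
assert temps[august].std() > 2.0
# ...but the monthly averages are dominated by the forcing, so the
# climate-scale statement "December is warmer than August" is reliable.
assert temps[december].mean() > temps[august].mean() + 5.0
```

No amount of noise in the daily values changes the verdict on the monthly means; that is exactly the weather/climate distinction.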

I have recently heard an allegation from skeptics that you always get the same results from models, regardless of the information you put in, and that they must therefore be extremely unreliable. Anyone have an idea regarding what they are talking about?

peacetracker @415, models can be set up with forcings typical of the peak of the last "ice age" (the Last Glacial Maximum) and they will yield climate predictions featuring kilometer thick ice sheets over North America and Europe. They can be set up with forcings typical of the Paleocene-Eocene Thermal Maximum and will yield tropical water temperatures in Arctic seas. So not only do I not know what they are talking about, evidently if they claim climate models produce the same results regardless of input, neither do they.

skywatcher. The weather varies from day to day because of atmospheric pressure and wind blowing either hot or cold air from other areas of the planet on to the location that you're observing the weather from.

Now, if manmade pollution is the main factor that governs climate change, and natural forcing agents are a much lesser factor, then why can't you predict the global mean temperature next year, 5 years, 10 years, 15 years etc. with a reasonable degree of confidence?

"Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption."

I note that there is no hyperlink to any report which actually proves that a model was produced prior to an eruption taking place, or that the prediction was proved correct in terms of the volcanic eruption's effect on the global mean temperature.

Jdey123 - the reasons why models have little skill with decadal-level prediction are well understood. This may improve, but it has little to do with the skill of models designed to predict climate, not weather. You do understand the difference between a climate model and a weather model? I would note that models are very successful at predictions within their domain - e.g. look here (noting the papers cited both in making the prediction and observing it).

As to volcanoes - models respond to specific aerosol loadings at given altitudes and locations. Until a volcano erupts, you don't know what these will be. Instead, models use scenarios that put in volcanoes at the rate they are normally observed. If you look at any of the climate model predictions beyond the present, you will see downturn spikes in places (and they will be different for different models and for different runs of the same model). These are simulated volcanoes. They are not saying that there will be a volcano at this time and place, but if the scenario didn't include periodic volcanoes, then the temperatures would be too high. (A long span of very quiet volcanic activity is in effect a natural forcing.)
If you think that code is "fitted" to reproduce volcanic change, then you could take code from before an eruption, put the volcano into the scenario, and rerun. Glory awaits you if this doesn't match the published outputs from scientists doing the very same thing.

Ok, so my example including hyperlinks showing why stock market prediction is analogous to climate prediction and showing why extrapolating historical trends has been deleted. The post was on topic and scientific, so why has this been deleted?

Response:

[muon] This is not about the stock market. There are several threads dealing with the overall accuracy of past climate predictions - as well as the overall inaccuracy of predictions made by those in denial. You've been counseled multiple times on other threads to read, learn and follow the Comments Policy. As you were already told, posting on this forum is a privilege, not a right.

[DB] Ok, you have now had 4 comments deleted since this one was posted, all of which amount to moderation complaints, trolling and taunting. No more warnings. Zero.

Either adhere to the Comments Policy, a rule the vast majority of participants here have no difficulties whatsoever in adhering to, or you "choose to recuse yourself from this venue".

Re Dow. Well, actually I expect that the stock market does in fact respond to forcings, but there isn't a quantitative model to test.

Climate IS different. There is a quantitative model based on known physics, not a deduction based on observation of a trend. The models are not one-dimensional. They make a huge number of predictions on a wide variety of parameters with spatial and vertical structures. These predictions vary in robustness, but all amount to tests of the model. The evolving climate is a continuous test of these predictions.

[muon] This isn't about the weather, it's about the climate; you apparently do not know the difference. Anyone investing in the market must have a reasonable expectation that his or her investment will increase in value over a long enough term; that's climate. Day-to-day, week-to-week fluctuations: weather.

Bibliovermis, the thread is about whether the climate change model is reliable or not. Given that we have to wait until 2100 to prove whether it is or not, we have to examine the beliefs that this model is based upon. One of which is that you can extrapolate past history.

Response:

[DB] Let the reader note that Jdey123 found compliance with the Comments Policy too onerous a burden.

The reason the observed climate change is thought to be manmade is that it is consistent with the physics, not due to extrapolation from statistical correlations. The models are not statistical - they cannot behave differently than dictated by the physics of radiation, heat transfer, mass flow, etc. That physics is based on an enormous amount of experimental, observational and theoretical work built up over the years, and must be acknowledged. Given this physics and observed forcings (GHG, aerosols, solar), the only way to explain the recent global warming is via greenhouse gases. Moreover, given good input on forcings, the models do very well at predicting their consequences for climate past and present. It's that simple.

It's crazy to compare climate models to stock market models; they are apples and oranges. The rules governing the stock market are poorly understood and possibly malleable through time, depending on human behavior and perceptions. We can use complex statistical time-series analysis to analyse these patterns, but we cannot say for sure whether the rules governing the patterns we see now will not change in the future. It's a real and difficult challenge for that field - hats off to them for trying.

In physics, by contrast, the factors do not change through time. As long as you capture the key variables, you will do OK. And there are many, many well-established constraints that limit the range of possible solutions. In that sense, climate scientists have it easy! That's why Arrhenius, over 100 years ago, was able to estimate the CO2 climate sensitivity pretty well, and why models haven't deviated far from that number in the intervening century.

Also, it isn't so hard to understand the inability to predict greenhouse-driven changes over periods of less than 15 years. The signal from accumulating GHGs increases over time, while variation from natural sources does not. So naturally the effect of GHGs will be more obvious over longer time scales, when it is larger relative to background natural variation.
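That accumulation argument can be put in numbers. The trend and noise figures below are assumed, illustrative values (not from any dataset): a steady trend accumulates linearly while the noise amplitude stays fixed, so there is a predictable horizon beyond which the signal stands out.

```python
# Illustrative numbers only (assumed, not from the article): a steady
# forced trend of 0.02 C/yr against year-to-year natural variability
# with a standard deviation of 0.15 C.
TREND = 0.02       # C per year (accumulates with time)
NOISE_SD = 0.15    # C (does not accumulate)

def years_until_detectable(k_sigma: float = 2.0) -> int:
    """First year in which the accumulated trend exceeds k sigma of noise."""
    t = 0
    while TREND * t < k_sigma * NOISE_SD:
        t += 1
    return t

# With these assumed numbers, the signal needs roughly 15 years to clear
# a 2-sigma noise band - which is why sub-15-year predictions are hard
# while multi-decade trends are robust.
assert 14 <= years_until_detectable(2.0) <= 16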

Stephen Baines, the article to which these comments are attached says that the model is based on hindcasting. I thought GHGs were already large enough to be a significantly stronger forcing agent than natural sources. It's only deniers who claim otherwise.

mace@431, GHG may be the dominant forcing, but that doesn't mean that their effect on climate dominates unforced variability on short timescales (e.g. 15 years). GCMs are just approaching the point where decadal predictions are beginning to be interesting. There was a good article at RealClimate on this recently.

Let's be clear about what happens in the modelling process. There is the famous George Box statement: "Essentially, all models are wrong, but some are useful".
When you hindcast, you find the models capture some observations but not all. So what do you do to improve the model? In a physics model, you add more physics. Beyond bugs in the code, a failure in the model is physics not working. A lot of that has to do with simplifications necessary for the hardware of the time, so you choose the important stuff. In 1975, Broecker ("Are We on the Brink of a Pronounced Global Warming?") used Manabe's model to make a very good fist of predicting the 2010 temperature. However, the Manabe model was so primitive that it had little of use to say about a great many other parameters. Improving computer power allows better spatial and temporal resolutions, more direct physics calculations rather than parameterisations, etc. You will have no trouble finding things that the models still don't capture well - ask the modellers - but more and more of the important stuff goes in.

What doesn't happen in the process is tweaking numbers to fit a line. There are parametrizations made from empirical data - e.g. evaporation as a function of temperature, humidity and wind speed - but the fitting is done in terms of data on evaporation, temperature and wind speed, not by fiddling the function to achieve, say, a particular global temperature curve.
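The distinction above can be sketched as code. The bulk formula, coefficient, and measurements below are all hypothetical stand-ins: the point is that the parameterization's coefficient is fitted against process observations (evaporation measured alongside wind and humidity), and the global temperature curve never enters the fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bulk formula: evaporation ~ C * wind_speed * humidity_deficit.
# The coefficient C is constrained by process data, not by any temperature record.
true_C = 1.3e-3
wind = rng.uniform(1, 15, 200)            # m/s, simulated observations
deficit = rng.uniform(0.001, 0.01, 200)   # kg/kg specific-humidity deficit
evap = true_C * wind * deficit * (1 + rng.normal(0, 0.05, 200))

# Least-squares fit of C from the process observations alone.
x = wind * deficit
fitted_C = float(np.sum(x * evap) / np.sum(x * x))

# The fit recovers the physical coefficient to within a few percent.
assert abs(fitted_C - true_C) / true_C < 0.05
```

Once fitted this way, the parameterization is frozen and the model's global temperature is an output, not a tuning target.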

scaddenp, Hansen et al 1992 predicted a 0.5C drop and the observed drop was 0.3C (see http://paos.colorado.edu/~dcn/ATOC6020/papers/Soden_etal_727.pdf). The difference is usually attributed to the El Nino in 1992 (see fig 2a in Soden). I am not so sure, since that figure shows the model preceding the observed ENSO drop by about 6 months, and that is not explained.

Hansen has said in this paper that water vapour is the dominant greenhouse gas, rather than CO2 or methane. Can we conclude that, if the ice melts in Greenland, rather than the sea level increasing as many may expect, global warming will cause seawater to evaporate and hang in the atmosphere? Not sure if more cloudy conditions would cause the earth to cool, due to sunlight being unable to penetrate, or to warm, as clouds act like a blanket keeping the land warm. Any thoughts on this?

mace wrote: "Can we conclude, that if the ice melts in Greenland, rather than the sea level increasing as many may expect, the global warming will cause seawater to evaporate and hang in the atmosphere."

No.

A warmer Earth does mean more water vapor, but the increased atmospheric water vapor content is much smaller than the increase in liquid water due to ice melt. The planet would have to get very hot (cf. Venus) in order for that to stop being true.

As to cloud feedbacks... there has been a lot of research on the positive and negative feedback effects of clouds to which you allude. The exact net value is still uncertain, but it has been narrowed down to 'small'. That is, whatever the exact value, it isn't going to have a major impact on the climate compared to the more prominent factors: CO2 forcing, water vapor feedback, and ice albedo feedback.

Eric - I am not sure where you see 0.3 on Soden. It says ~0.5K (text above Fig1) and that seems to match Fig 2a as well. The GCM predictions are helpfully on the same graphs and seem to match my assessment of "very accurate".

"No one has created a general circulation model that can explain climate's behaviour over the past century without CO2 warming."

This isn't true, regardless of the veracity of Qing-Bin Lu's claim that CFCs actually more closely model global warming trends than CO2: it is a model that shows the trend without using CO2 as the driver.

CFCs are also, molecule for molecule, much more potent greenhouse gases than CO2, lending them higher credibility as the driver of global warming. From a scientific perspective, you need much less of them to cause a problem.

The graph of sea levels according to Jason-2 is out of date. This is the NASA site. Oddly, the Jason-2 site shows the change and drop starting in 2010, but I can't find a link to that at the moment.
http://climate.nasa.gov/keyIndicators/

I have now looked briefly at Kramm and Dlugi. One thing I noted is that large sections of the introductory material are more diatribe than discussion. More troubling to me, however, were sections like the following:

"The notion “global climate”, however, is a contradiction in terms. According to Monin and Shishkov, Schönwiese and Gerlich, the term “climate” is based on the Greek word “klima” which means inclination. It was coined by the Greek astronomer Hipparchus of Nicaea (190-120 BC) who divided the then known inhabited world into five latitudinal zones—two polar, two temperate and one tropical—according to the inclination of the incident sunbeams, in other words, the Sun’s elevation above the horizon. Alexander von Humboldt in his five-volume “Kosmos” (1845-1862) added to this “inclination” the effects of the underlying surface of ocean and land on the atmosphere."

Of course, it is obvious that in modern usage that climate does not mean "inclination" as in the angle of the sun. In fact, it currently means, as defined by the IPCC and WMO:

"Climate Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization. The relevant quantities are most often surface variables such as temperature, precipitation and wind. Climate in a wider sense is the state, including a statistical description, of the climate system. In various chapters in this report different averaging periods, such as a period of 20 years, are also used."

Now, patently it is possible to determine the mean and variability of temperature, precipitation, wind speed, frequency and types of extreme events for the Earth's surface just as it is possible to do so for some subpart of the Earth's surface, say Texas. It follows that the only way “global climate” can be "a contradiction in terms" is if, for example, the "the climate of Texas" is a contradiction in terms, or indeed, if "the climate of Houston" is a contradiction in terms.

As it happens, the Ancient Greek word "οἰκονομία", from which we derive the term "economics", means "household management". Kramm and Dlugi's argument that global climate is a contradiction in terms is as coherent as an argument that there is no such thing as the world economy because the world is not a household, and economics means household management. Such nonsense verbal arguments are a clear sign of pseudoscience, and their prominent presence in Kramm and Dlugi's paper shows that it is ideology, not science, that drives their work.

However, that is not the reason I am discussing their work on this thread (which would be off topic). Rather it is because of their critique of the WMO definition of the greenhouse effect.

In that critique they correctly develop a zero dimensional model of the global energy balance. They then proceed to criticize it because:

1) Surface storage of energy is not considered in the zero dimensional model;
2) The zero dimensional model assumes the entire Earth's surface has the same temperature;
3) The albedo used in the equation includes contributions to the Earth's total albedo from the atmosphere, and not just those from the surface only (I kid you not);
4) Comparing Te, the temperature predicted as required to maintain equilibrium, with Tns is inappropriate, because Te is the theoretically predicted temperature and Tns is the actually observed temperature. (Again, I kid you not!)
5) The observed mean surface temperature of the Moon is 31 K lower than that predicted for the Moon using the zero-dimensional model.

I note that all five objections are true. Some are bizarre when stated as objections, of course. For instance, it is always true of any prediction that the prediction is not the measurement. To present that as an objection, as Kramm and Dlugi do in their fourth point, is breathtaking, to say the least. It shows a gall not found even in creationists.

One objection, the fifth, does need a small comment. It is well known that surfaces with variable temperatures will radiate away more energy than similar surfaces with even temperatures, given that they have the same mean temperature. This is so well known that planetary scientists never use the zero-dimensional model employed by Kramm and Dlugi for planetary bodies known to have very large temperature differences at their surface (such as the Moon). Further, because of this it is also known that the estimates of the greenhouse effect obtained by zero-dimensional models are an underestimate of the full strength of the greenhouse effect, although still a good first approximation.
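Both points in the paragraph above, the zero-dimensional balance and the uneven-surface effect, fit in a few lines. The balance equation and constants are standard; the four-point "varying surface" is a made-up toy chosen only to exhibit the convexity of T^4.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # Earth's planetary (Bond) albedo, approximate

# Zero-dimensional balance: absorbed sunlight = emitted thermal radiation,
# sigma * Te^4 = S0 * (1 - albedo) / 4, assuming one uniform temperature.
Te = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
assert 253 < Te < 257   # ~255 K vs ~288 K observed: a ~33 K greenhouse effect

# Why the simple model overpredicts the Moon: T^4 is convex, so a surface
# with varying temperature radiates more than a uniform surface having the
# same mean temperature. Toy example with an invented 4-patch surface:
uniform = np.full(4, 255.0)
varying = np.array([155.0, 205.0, 305.0, 355.0])   # same 255 K mean
assert varying.mean() == uniform.mean()
assert (SIGMA * varying**4).mean() > (SIGMA * uniform**4).mean()
```

The second pair of assertions is the whole of the fifth objection: a body with huge surface temperature contrasts sheds more energy than the uniform-temperature idealisation, so the idealisation predicts too high a mean temperature for the Moon, and too low a greenhouse effect for Earth.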

And that is the point, really. Zero dimensional models are only intended to provide a first approximation. They make counter factual but convenient assumptions for simplicity knowing that they are not determining the exact effect. In this regard they are like other physics models that ignore friction, or wind resistance, or (as famously done by Newton) the extended nature of planetary bodies.

Of course, climate scientists do not rest on first approximations and zero-dimensional models. Instead, they develop more complex models which eliminate the simplifying assumptions used in zero-dimensional models. Coupled Ocean-Atmosphere Global Circulation Models (AOGCMs) include, for example, (1) heat storage and transport by atmosphere and ocean; (2) variable surface temperatures; and (3) surface-only albedo at the surface, with atmospheric contributions to albedo included in the modeled atmosphere. In other words, not one of Kramm and Dlugi's objections (that can be taken at all seriously) is an objection to AOGCMs.

That being the case, Kramm and Dlugi's argument logically devolves to this:

The predictions of AOGCMs are necessarily wrong because zero dimensional models are only first approximations.

Nothing more need be said to refute them, and having stated their argument, nothing could ever make me take them seriously again.

Curiously, the publisher of the Kramm and Dlugi article, Scientific Research Publishing, is an open-access (translation: pay to be published) set of journals with a curious reputation for (re)publishing old articles and for listing academics on its editorial boards, much to the surprise of said academics, who in some cases had agreed to be associated with different journals, and in others had not agreed to any relationship.

The publisher appears to be based in China, but details of the publisher, staff, etc., are very hard to come by.

While not E&E (with an editorial policy of posting papers just because they disagree with the consensus), I would consider SRP a not terribly reliable source...