Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice, and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can't tell you what the temperature will be on a specific day—that's weather forecasting. Climate is weather averaged out over time, usually 30 years. Trends are important because they smooth out single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can't wait 30 years to see whether a model is any good; instead, models are tested against the past, against what we know happened. If a model can correctly reproduce past trends from a given starting point, we can reasonably expect it to predict what might happen in the future.

So all models are first tested in a process called "hindcasting." The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record showed that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. The other known forcings adequately explain temperature variations prior to the last thirty years, but none of them can explain the temperature rise of the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modelers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being alarmist, may be conservative in the predictions they produce. For example, here's a graph of sea level rise:

Sea level change; tide gauge data are indicated in red and satellite data in blue. The grey band shows the projections of the IPCC Third Assessment report (Copenhagen Diagnosis 2009).

Here, the models have understated the problem: observations track along the upper range of the models' predictions. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits (uncertainties) because they are modeling chaotic systems. However, models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be continually refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Science says: While there are uncertainties with climate models, they successfully reproduce the past and have made predictions that have been subsequently confirmed by observations.

There are two major questions regarding climate models: can they accurately reproduce the past (hindcasting), and can they successfully predict the future? To answer the first question, here is a summary of the IPCC model results for surface temperature since the 1800s, both with and without man-made forcings. None of the models can reproduce recent warming without taking rising CO2 levels into account. No one has created a general circulation model that can explain climate's behavior over the past century without CO2 warming.

A common argument is "scientists can't even predict the weather next week, so how can they predict the climate years from now?" This betrays a misunderstanding of the difference between weather, which is chaotic and unpredictable, and climate, which is weather averaged out over time. While you can't predict with certainty whether a coin will land heads or tails, you can predict the statistical results of a large number of coin tosses. In weather terms, you can't predict the exact route a storm will take, but the average temperature and precipitation over the whole region are the same regardless of the route.
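The coin-toss analogy can be made concrete with a few lines of Python (a toy illustration, not a climate model): any single flip is unpredictable, but the long-run average converges on a predictable value.

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

def flip():
    """One coin toss: heads = 1, tails = 0. Individually unpredictable."""
    return random.randint(0, 1)

# A single toss is anyone's guess -- the "weather" of the coin.
# The average over many tosses converges on 0.5 -- its "climate".
n = 100_000
mean = sum(flip() for _ in range(n)) / n
print(f"average of {n:,} tosses: {mean:.3f}")  # very close to 0.5
```

The same statistical logic is why a model can project a 30-year temperature trend without being able to say whether a particular day will be rainy.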

There are various difficulties in predicting future climate. The behavior of the sun is difficult to predict. Short-term disturbances like El Niño or volcanic eruptions are difficult to model. Nevertheless, the major forcings that drive climate are well understood. In 1988, James Hansen projected future temperature trends (Hansen 1988). Those initial projections show good agreement with subsequent observations (Hansen 2006).

Hansen's Scenario B (described as the most likely option, and the one that most closely matched actual CO2 emission levels) shows close correlation with observed temperatures. Hansen overestimated future CO2 levels by 5 to 10%, so if his model were given the correct forcing levels, the match would be even closer. There are deviations from year to year, but this is to be expected: the chaotic nature of weather adds noise to the signal, but the overall trend is predictable.

When Mount Pinatubo erupted in 1991, it provided an opportunity to test how successfully models could predict the climate response to the sulfate aerosols injected into the atmosphere. The models accurately forecast the subsequent global cooling of about 0.5°C soon after the eruption. Furthermore, the radiative, water vapor and dynamical feedbacks included in the models were also quantitatively verified (Hansen 2007).

A common misconception is that climate models are biased towards exaggerating the effects of CO2. It bears mentioning that uncertainty can go either way. In fact, in a climate system with net positive feedback, uncertainty is skewed towards a stronger climate response (Roe 2007). For this reason, many of the IPCC predictions have subsequently been shown to underestimate the climate response. Satellite and tide-gauge measurements show that sea level is rising faster than IPCC predictions. The average rate of rise for 1993-2008 as measured from satellite is 3.4 millimetres per year, while the IPCC Third Assessment Report (TAR) projected a best estimate of 1.9 millimetres per year for the same period. Observations are tracking along the upper range of IPCC sea level projections (Copenhagen Diagnosis 2009).

Figure 4. Sea level change. Tide gauge data are indicated in red and satellite data in blue. The grey band shows the projections of the IPCC Third Assessment report (Copenhagen Diagnosis 2009).
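As a back-of-envelope check on the rates quoted above (using only the two per-year figures from the text; actual annual values vary), the cumulative divergence over 1993-2008 works out as follows:

```python
# Rates from the text: satellite-observed vs IPCC TAR best estimate,
# both averaged over 1993-2008.
observed_rate = 3.4   # mm per year (satellite altimetry)
projected_rate = 1.9  # mm per year (IPCC TAR best estimate)
years = 2008 - 1993   # 15-year span

observed_rise = observed_rate * years
projected_rise = projected_rate * years

print(f"observed:  {observed_rise:.1f} mm")   # -> observed:  51.0 mm
print(f"projected: {projected_rise:.1f} mm")  # -> projected: 28.5 mm
print(f"ratio: {observed_rate / projected_rate:.2f}x")  # -> ratio: 1.79x
```

So the observed rise over the period is nearly 80% larger than the TAR best estimate, consistent with observations tracking the upper edge of the projected range.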

Similarly, summertime melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models. The thickness of Arctic sea ice has also been on a steady decline over the last several decades.

Figure 5. Observed (red line) and modeled September Arctic sea ice extent in millions of square kilometers. Solid black line gives the average of 13 IPCC AR4 models while dashed black lines represent their range. The 2009 minimum has recently been calculated at 5.10 million km2, the third lowest year on record and still well below the IPCC worst case scenario (Copenhagen Diagnosis 2009).

Do we know enough to act?

Skeptics argue that we should wait until climate models are completely certain before we act on reducing CO2 emissions. If we waited for 100% certainty, we would never act. Models are in a constant state of development: they come to include more processes, rely on fewer approximations, and increase their resolution as computer power grows. The complex and non-linear nature of climate means there will always be a process of refinement and improvement. The main point is that we now know enough to act. Models have evolved to the point where they successfully predict long-term trends and are now developing the ability to predict more chaotic, short-term changes. Multiple lines of evidence, both modeled and empirical, tell us global temperatures will rise about 3°C with a doubling of CO2 (Knutti & Hegerl 2008).
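The 3°C-per-doubling figure can be turned into a rough calculator using the standard logarithmic dependence of equilibrium warming on CO2 concentration. This is a deliberately simplified sketch: it ignores lags, aerosols and other forcings, and the 280 ppm pre-industrial baseline is a standard reference value, not a number from the text.

```python
import math

def equilibrium_warming(c_now, c_pre=280.0, sensitivity=3.0):
    """Rough equilibrium warming (deg C) for a CO2 concentration c_now (ppm),
    relative to a pre-industrial baseline c_pre, using the logarithmic
    relationship: dT = sensitivity * log2(c_now / c_pre)."""
    return sensitivity * math.log2(c_now / c_pre)

# A full doubling (280 -> 560 ppm) gives the sensitivity by definition.
print(equilibrium_warming(560))          # -> 3.0
# Roughly today's level (~420 ppm) implies about 1.75 deg C at equilibrium.
print(round(equilibrium_warming(420), 2))  # -> 1.75
```

The logarithmic form is why each successive increment of CO2 adds slightly less warming than the last, and why sensitivity is conventionally quoted "per doubling."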

Models don't need to be exact in every respect to give us an accurate overall trend and its major effects - and we have that now. If you knew there were a 90% chance you'd be in a car crash, you wouldn't get in the car (or at the very least, you'd wear a seatbelt). The IPCC concludes, with a greater than 90% probability, that humans are causing global warming. To wait for 100% certainty before acting is recklessly irresponsible.

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."

Skeptical Science was founded by physicist John Cook in 2007 to explore what science has to say about global warming. In 2011, Skeptical Science won the Australian Museum Eureka Prize for the Advancement of Climate Change Knowledge. It is not affiliated with any organization, and is funded by contributions from readers.

John Cook is the Climate Change Communication Fellow for the Global Change Institute at the University of Queensland. He created and runs skepticalscience.com. His efforts have concentrated on making climate science accessible to the general public, releasing smartphone apps for the iPhone and Android phones. He has produced climate communication resources adopted by organizations such as NOAA and the U.S. Navy, and co-authored the book Climate Change Denial: Heads in the Sand with environmental scientist Haydn Washington.