Climate science and the public

How do climate models work?

T = [(1 − α)S / (4εσ)]^(1/4)

(T is temperature, α is the albedo, S is the incoming solar radiation, ε is the emissivity, and σ is the Stefan-Boltzmann constant)

An extremely simplified climate model, that is. It’s one line long, and is at the heart of every computer model of global warming. Using basic thermodynamics, it calculates the temperature of the Earth based on incoming sunlight and the reflectivity of the surface. The model is zero-dimensional, treating the Earth as a point mass at a fixed time. It doesn’t consider the greenhouse effect, ocean currents, nutrient cycles, volcanoes, or pollution.
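For the curious, this one-line model is simple enough to run yourself. Here's a minimal Python sketch (the values for S, α, and ε are illustrative textbook numbers, not code from any real climate model):

```python
# The one-line energy balance model: T = [(1 - alpha) * S / (4 * epsilon * sigma)]^(1/4)
# All input values are illustrative assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # incoming solar radiation at Earth's distance, W m^-2
ALPHA = 0.3        # planetary albedo (fraction of sunlight reflected)
EPSILON = 1.0      # emissivity (1.0 = perfect blackbody)

def equilibrium_temperature(S=S, alpha=ALPHA, epsilon=EPSILON):
    """Temperature at which absorbed sunlight balances emitted radiation."""
    return ((1 - alpha) * S / (4 * epsilon * SIGMA)) ** 0.25

print(round(equilibrium_temperature()))  # about 255 K
```

The answer comes out around 255 K, roughly 33 degrees colder than the observed global mean of about 288 K, which is exactly the gap the missing greenhouse effect would fill.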

If you fix these deficiencies, the model becomes more and more complex. You have to derive many variables from physical laws, and use empirical data to approximate certain values. You have to repeat the calculations over and over for different parts of the Earth. Eventually the model is too complex to solve using pencil, paper and a pocket calculator. It’s necessary to program the equations into a computer, and that’s what climate scientists have been doing ever since computers were invented.

A pixellated Earth

Today’s most sophisticated climate models are called GCMs, which stands for General Circulation Model or Global Climate Model, depending on who you talk to. On average, they are about 500 000 lines of computer code long, and mainly written in Fortran, a scientific programming language. Despite the huge jump in complexity, GCMs have much in common with the one-line climate model above: they’re just a lot of basic physics equations put together.

Computers are great for doing a lot of calculations very quickly, but they have a disadvantage: computers are discrete, while the real world is continuous. To understand the term “discrete”, think about a digital photo. It’s composed of a finite number of pixels, which you can see if you zoom in far enough. The existence of these indivisible pixels, with clear boundaries between them, makes digital photos discrete. But the real world doesn’t work this way. If you look at the subject of your photo with your own eyes, it’s not pixellated, no matter how close you get – even if you look at it through a microscope. The real world is continuous (unless you’re working at the quantum level!)

Similarly, the surface of the world isn’t actually split up into three-dimensional cells (you can think of them as cubes, even though they’re usually wedge-shaped) where every climate variable – temperature, pressure, precipitation, clouds – is exactly the same everywhere in that cell. Unfortunately, that’s how scientists have to represent the world in climate models, because that’s the only way computers work. The same strategy is used for the fourth dimension, time, with discrete “timesteps” in the model, indicating how often calculations are repeated.

It would be fine if the cells could be really tiny – like a high-resolution digital photo that looks continuous even though it’s discrete – but doing calculations on cells that small would take so much computer power that the model would run slower than real time. As it is, the cubes are on the order of 100 km wide in most GCMs, and timesteps are on the order of hours to minutes, depending on the calculation. That might seem huge, but it’s about as good as you can get on today’s supercomputers. Remember that doubling the resolution of the model won’t just double the running time – instead, the running time will increase by a factor of sixteen (one doubling for each dimension).
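The arithmetic behind that factor of sixteen is easy to check (the cell counts below are rough, assumed figures, not taken from any particular GCM):

```python
# Back-of-the-envelope cost scaling for refining model resolution.
# Halving the cell size multiplies the cell count in each of the three
# spatial dimensions, and also forces a proportionally shorter timestep
# (a stability requirement), hence one extra doubling for time.

def cost_factor(refinement):
    """Approximate increase in running time for a given grid refinement."""
    return refinement ** 4  # one doubling per dimension: x, y, z, and time

# Rough cell count for a 100 km grid with an assumed 30 vertical levels:
EARTH_SURFACE_KM2 = 5.1e8
cells = EARTH_SURFACE_KM2 / (100 * 100) * 30

print(f"{cells:.0f} cells")  # about 1.5 million
print(cost_factor(2))        # 16
```

So a 100 km model already juggles on the order of a million cells, and halving the cell width would make it sixteen times slower.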

Despite the seemingly enormous computer power available to us today, GCMs have always been limited by it. In fact, early computers were developed, in large part, to facilitate atmospheric models for weather and climate prediction.

Cracking the code

A climate model is actually a collection of models – typically an atmosphere model, an ocean model, a land model, and a sea ice model. Some GCMs split up the sub-models (let’s call them components) a bit differently, but that’s the most common arrangement.

Each component represents a staggering amount of complex, specialized processes. Here are just a few examples from the Community Earth System Model, developed at the National Center for Atmospheric Research in Boulder, Colorado:

Sea Ice: pollution trapped within the ice, melt ponds, the age of different parts of the ice

Each component is developed independently, and as a result, they are highly encapsulated (bundled separately in the source code). However, the real world is not encapsulated – the land and ocean and air are very interconnected. Some central code is necessary to tie everything together. This piece of code is called the coupler, and it has two main purposes:

Pass data between the components. This can get complicated if the components don’t all use the same grid (system of splitting the Earth up into cells).

Control the main loop, or “time stepping loop”, which tells the components to perform their calculations in a certain order, once per time step.
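As a rough illustration of those two purposes, here is a toy time-stepping loop in Python. The component names and the `step`/flux structure are hypothetical, not the architecture of any real coupler:

```python
# A toy sketch of a coupler's main loop (hypothetical structure,
# not taken from any real GCM).

class Component:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def step(self, inputs):
        """Advance one timestep; return fields for other components."""
        # A real component would solve physics equations here;
        # this stub just records what it received and emits a flux.
        self.state.update(inputs)
        return {f"{self.name}_flux": 0.0}

def run(components, n_steps):
    """The coupler: call each component in a fixed order once per
    timestep, passing along the fields produced so far."""
    fields = {}
    for _ in range(n_steps):
        for comp in components:  # fixed calling order
            fields.update(comp.step(fields))
    return fields

parts = [Component(n) for n in ("atmosphere", "ocean", "land", "sea_ice")]
result = run(parts, n_steps=3)
print(sorted(result))  # one flux field per component
```

In a real coupler, the `fields.update(...)` step is where the complicated part happens: data on one component's grid has to be interpolated (regridded) onto another's before it can be passed along.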

For example, take a look at the IPSL (Institut Pierre Simon Laplace) climate model architecture. In the diagram below, each bubble represents an encapsulated piece of code, and the number of lines in this code is roughly proportional to the bubble’s area. Arrows represent data transfer, and the colour of each arrow shows where the data originated:

We can see that IPSL’s major components are atmosphere, land, and ocean (which also contains sea ice). The atmosphere is the most complex model, and land is the least. While both the atmosphere and the ocean use the coupler for data transfer, the land model does not – it’s simpler just to connect it directly to the atmosphere, since it uses the same grid, and doesn’t have to share much data with any other component. Land-ocean interactions are limited to surface runoff and coastal erosion, which are passed through the atmosphere in this model.

You can see diagrams like this for seven different GCMs, as well as a comparison of their different approaches to software architecture, in this summary of my research.

Show time

When it’s time to run the model, you might expect that scientists initialize the components with data collected from the real world. Actually, it’s more convenient to “spin up” the model: start with a dark, stationary Earth, turn the Sun on, start the Earth spinning, and wait until the atmosphere and ocean settle down into equilibrium. The resulting data fits perfectly into the cells, and matches up really nicely with observations. It fits within the bounds of the real climate, and could easily pass for real weather.
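A zero-dimensional toy version of a spin-up takes only a few lines: start the planet cold, switch on the Sun, and integrate the energy imbalance forward until the temperature settles. The heat capacity and timestep below are illustrative assumptions:

```python
# A toy "spin-up": a dark, cold Earth relaxing toward equilibrium.
# Heat capacity and step size are illustrative assumptions.

SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED = 0.7 * 1361 / 4  # absorbed sunlight per unit area, W m^-2
C = 4.2e8                  # heat capacity of a ~100 m ocean mixed layer, J m^-2 K^-1
DT = 30 * 86400            # one-month timestep, s

T = 3.0                    # start nearly dark and frozen
for month in range(5000):
    net = ABSORBED - SIGMA * T ** 4  # energy imbalance, W m^-2
    T += DT * net / C                # warm (or cool) accordingly

print(round(T))  # settles near the ~255 K equilibrium
```

A real spin-up works on the same principle but takes centuries to millennia of simulated time, mostly because the deep ocean adjusts so slowly.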

Scientists feed input files into the model, which contain the values of certain parameters, particularly agents that can cause climate change. These include the concentration of greenhouse gases, the intensity of sunlight, the amount of deforestation, and volcanoes that should erupt during the simulation. It’s also possible to give the model a different map to change the arrangement of continents. Through these input files, it’s possible to recreate the climate from just about any period of the Earth’s lifespan: the Jurassic Period, the last Ice Age, the present day…and even what the future might look like, depending on what we do (or don’t do) about global warming.

The highest resolution GCMs, on the fastest supercomputers, can simulate about 1 year for every day of real time. If you’re willing to sacrifice some complexity and go down to a lower resolution, you can speed things up considerably, and simulate millennia of climate change in a reasonable amount of time. For this reason, it’s useful to have a hierarchy of climate models with varying degrees of complexity.

As the model runs, every cell outputs the values of different variables (such as atmospheric pressure, ocean salinity, or forest cover) into a file, once per time step. The model can average these variables based on space and time, and calculate changes in the data. When the model is finished running, visualization software converts the rows and columns of numbers into more digestible maps and graphs. For example, this model output shows temperature change over the next century, depending on how many greenhouse gases we emit:
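One simple example of this kind of post-processing is an area-weighted global mean, which has to account for grid cells shrinking towards the poles. A sketch, with an arbitrary grid spacing chosen for illustration:

```python
import math

# Area-weighted global mean: cells near the poles cover less area,
# so each latitude band is weighted by cos(latitude).

def global_mean(field, lats):
    """field[i] is the zonal-mean value at latitude lats[i] (degrees)."""
    weights = [math.cos(math.radians(lat)) for lat in lats]
    return sum(w * v for w, v in zip(weights, field)) / sum(weights)

lats = range(-85, 90, 10)         # band centres for 10-degree bands
uniform = [288.0] * len(lats)
print(global_mean(uniform, lats))  # 288.0: weighting doesn't bias a uniform field
```

Without the cosine weights, a naive average would count the tiny polar cells as heavily as the huge tropical ones, badly skewing the result.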

Predicting the past

So how do we know the models are working? Should we trust the predictions they make for the future? It’s not reasonable to wait for a hundred years to see if the predictions come true, so scientists have come up with a different test: tell the models to predict the past. For example, give the model the observed conditions of the year 1900, run it forward to 2000, and see if the climate it recreates matches up with observations from the real world.
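A hindcast comparison like this ultimately boils down to some kind of skill score. As a toy illustration (the numbers are made up, and real evaluations use many metrics and many runs, not one number), here is a root-mean-square error between modelled and observed temperature anomalies:

```python
# A toy hindcast score: RMSE between modelled and observed annual
# temperature anomalies. All numbers are invented for illustration.

def rmse(model, obs):
    """Root-mean-square error between two equal-length series."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

observed = [0.00, 0.05, 0.12, 0.20, 0.31]   # hypothetical anomalies, deg C
modelled = [0.02, 0.03, 0.15, 0.18, 0.33]

print(round(rmse(modelled, observed), 3))   # small error = good hindcast
```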

This 20th-century run is one of many standard tests to verify that a GCM can accurately mimic the real world. It’s also common to recreate the last ice age, and compare the output to data from ice cores. While GCMs can travel even further back in time – for example, to recreate the climate that dinosaurs experienced – proxy data is so sparse and uncertain that you can’t really test these simulations. In fact, much of the scientific knowledge about pre-Ice Age climates actually comes from models!

Climate models aren’t perfect, but they are doing remarkably well. They pass the tests of predicting the past, and go even further. For example, scientists don’t know what causes El Niño, a phenomenon in the Pacific Ocean that affects weather worldwide. There are some hypotheses on what oceanic conditions can lead to an El Niño event, but nobody knows what the actual trigger is. Consequently, there’s no way to program El Niños into a GCM. But they show up anyway – the models spontaneously generate their own El Niños, somehow using the basic principles of fluid dynamics to simulate a phenomenon that remains fundamentally mysterious to us.

In some areas, the models are having trouble. Certain wind currents are notoriously difficult to simulate, and calculating regional climates requires an unaffordably high resolution. Phenomena that scientists can’t yet quantify, like the processes by which glaciers melt, or the self-reinforcing cycles of thawing permafrost, are also poorly represented. However, not knowing everything about the climate doesn’t mean scientists know nothing. Incomplete knowledge does not imply nonexistent knowledge – you don’t need to understand calculus to be able to say with confidence that 9 x 3 = 27.

Also, history has shown us that when climate models make mistakes, they tend to be too stable, and underestimate the potential for abrupt changes. Take the Arctic sea ice: just a few years ago, GCMs were predicting it would completely melt around 2100. Now, the estimate has been revised to 2030, as the ice melts faster than anyone anticipated:

Answering the big questions

At the end of the day, GCMs are the best prediction tools we have. If they all agree on an outcome, it would be silly to bet against them. However, the big questions, like “Is human activity warming the planet?”, don’t even require a model. The only things you need to answer those questions are a few fundamental physics and chemistry equations that we’ve known for over a century.

You could take climate models right out of the picture, and the answer wouldn’t change. Scientists would still be telling us that the Earth is warming, humans are causing it, and the consequences will likely be severe – unless we take action to stop it.

36 Responses

Kate, welcome to the world of large-scale computer modelling! I have been doing large-scale computer modeling since the late 1970s, not of the earth’s atmosphere, mind you, but of mechanical systems. Over the years I have often been critical of those who would oversimplify their models in the interest of getting faster answers, or of those who would ignore or be unaware of critical effects or boundary conditions that could significantly bias their results. I even invented a descriptive acronym for the output from such programs, GIGOSIM, which stands for Garbage In, Garbage Out SIMulation.

Recently I had a heated argument with a university researcher who had written just such a program as you describe for predicting global warming 50, 100, … years in advance. He had listed all of his inputs to the model, and it was clear he had overlooked too many. For example, one cannot blindly use trends, let’s say, from 1800 to 2010 to predict what will happen 50 or 100 or 200 years from now, or what happened 50 or 100 or 200 years before 1800. We are currently in the grips of a fossil fuel blip on the world’s timeline; that blip will be gone 50 or 100 or 200 years from now, and it didn’t exist 50 or 100 or 200 years prior to 1800. Thus if we want to accurately predict what will happen in the future, or what has happened in the past, we must knowledgeably project the correct models, inputs, and boundary conditions to the appropriate time frame, and not naively use what we are currently experiencing.

For example, look at how fast our world population is growing. “Experts” are using our historical population growth rate to project future world populations of 8, 10, 12 and upward billion. They are ignoring this temporary energy spike, and that people survive on energy. Historically world population has been low, because access to energy has been low. Read “Topsoil and Civilization” to see how humans have destroyed much of their environment in the quest for more energy to increase their numbers. In the short course of five thousand years, mankind has turned the world’s Eden into desert, and has destroyed most of the world’s forests. What will technologically evolving countries like China, India, Brazil, Russia, and many other third-world countries do to the world in the coming decades?

If climate models don’t factor in the hundreds or thousands of changing and emerging effects, then they might just as well be relegated to the GIGOSIM garbage heap. I would also be wary of some GCMs written in Fortran, as they were probably developed back in the 1970s and 1980s, when Fortran was the norm. In the past twenty years or so, ANSI C and C++ and some other computer languages have taken the reins. Understand the assumptions and inputs to each model and be aware of their limitations and potential inaccuracies. Some of these models may look quite sophisticated and impressive, but sophisticated and impressive do not a non-GIGOSIM model make. Enjoy!

Roger, I disagree that the use of Fortran means that a model is out of date.

Most modellers use Fortran these days because it’s easier than going back and rewriting large models (and introducing new bugs in the process). And Fortran is still very fast, especially for the array-based programming required for large climate models. So lots of bleeding-edge models still use Fortran (despite the fact that it’s an absolute bitch to program in).

The line that climate models are used as garbage in, gospel out black boxes by climate scientists is a total strawman. There is a very concerted effort to try to constrain how sensitive models are to the assumptions that you put in, and we have good reasons for supposing that the models are doing a good job of modelling the climate.

P.S. I’ve heard of GIGO, as noted originally by Babbage, and made popular by IBM programmers in the ’50s and ’60s. I’ve never heard of GIGOSIM, and clearly Google hasn’t either.

Kate: Thanks for a nice human summary of climate models that I can point people to. I had a look at your poster at AGU but didn’t have time to come back and have a chat. Nice work! It’s nice to see the source of the diagrams that I’ve since seen proliferating in presentations here in Australia.

Good point about Fortran – although it’s an old and clunky language, climate modellers don’t really need features such as object orientation that are included in newer languages. The code would probably look very similar if it were written in C or Python.

Great to hear that our diagrams have made it as far as Australia! Is this at the modelling centre there? -Kate

Hmm. I’d say that Fortran is easier for a research scientist to learn and use than C, and the array operations are both fast and efficient. Newer versions of Fortran also have some object-oriented features. Overall, I think Fortran is often a better choice for researchers than C.

I’ve been waiting to see something like this. I am a skeptic, which simply means I’d like to see for myself how things like climate sensitivity are calculated. One often sees the Stefan-Boltzmann relation used to demonstrate 33 degrees of greenhouse warming but one rarely hears the caveats. This post is a big step in the right direction for me. I am a consumer of non-fiction popularizations of science. There is a healthy market for it. I hope to see more of climate science delved into and written about for the layperson. The debate has become polarized and politicized and has become needlessly snide with a lot of discussion of things that are beside the point. It would be great to read a piece sometime not on climate models but on modeling in general.

Thanks for stopping by, Hank. A few popular science books on this topic that I would recommend to you are A Vast Machine by Paul Edwards (about climate and weather modelling) and The Discovery of Global Warming by Spencer Weart (about climate science in general). -Kate

The conditions for the 20th century depend on forcings – what the sun was doing, the composition of the atmosphere, volcanoes – not the initial conditions of how warm or wet or cloudy each cell was on the particular Tuesday that the run started. Since this is a climate forecast, not a weather forecast, all the noise averages out. As long as you start with initial conditions that are within the bounds of observations, it works fine. It is a very interesting application of chaos theory and the underlying patterns of chaotic systems.

Thanks for your reply, but I’m even more unclear now. How is a model judged to match 20th century history? I understand that a particular distribution of temperature and wind on a particular day doesn’t matter, but I’m guessing you measure against things like: overall average global temp, average ocean salinity, amount of water stored on land as ice, average global humidity. (I’d be interested to know what the real list is).

So, how can you “spin up a model from dark earth” to match the values of a 20th century start? I guess you could “guide” your model as it boots up by adjusting various parameters (forcings) to get it close to the desired initial state. That seems like a trial and error process, like learning to drive a sports car through a difficult track or learning to play a difficult video game. (Not that there’s anything wrong with that… I’m just trying to understand how it’s done).

I think I understand that you specify the forcings during the 20th century such as sun activity, historic CO2 emissions, volcanic events. But that is during the run, after you’ve reached the starting condition.

Also, can you give any good places to see these kinds of comparisons of runs vs. history?

P.S. I’m blown away that the models reproduce El Nino. To me that says they are doing extremely well. I’d be interested in more info on that too.

Eric, here’s an example of 20th-century simulations vs. observations. In this graphic they are comparing average temperature, but that’s only one of many metrics (there’s also sea level rise, Arctic sea ice extent, patterns of precipitation change…). It is from chapter 8 of the IPCC AR4, which is all about model evaluation (click the image for a link). The black line is observations, the yellow lines are model runs, and the red line is the average of the model runs.

The project that compares all the models is called CMIP (Climate Model Intercomparison Project), and they are currently working on CMIP5. This was a big topic of discussion at AGU. If you dig around in their website you can find lots of documentation on what the standard experiments are, what metrics they compare to observations, which GCMs take part, and so on. The results of CMIP5 will be in the next IPCC report.

In that same chapter, I found some more information on spin ups that you might want to take a look at. It references this paper which describes the method that most GCMs use today. Unfortunately the full text is behind a paywall.

Eric, there is a “spin-up period” during which the model is run under the initial forcings for the period of study. It has to be long enough for the model to “settle”. Only after that, is the model driven by the time-varying forcings over the period of study.

About El Nino, yes that’s pretty spectacular isn’t it. I differ from Kate in that I wouldn’t say that the causes of El Nino are unknown; I would say there isn’t any cause, rather it’s an emergent phenomenon.

Actually let’s remember climate is a lot more than just global annual-average temperatures; reproducing El Nino, the ITCZ, the jet streams, realistic looking hurricanes, deserts in the right places; and of course what we humans have known as ‘climate’ since the dawn of time, the cycle of the seasons, the diurnal cycle, the dependence on latitude and (in the mountains) altitude… all of these can be handles on testing and judging how good models are!

Another thing to remember is that the “engine” inside a GCM is really no different from the one inside weather prediction software (I was told that in the case of the UK Met Office, it’s actually the same engine). Looking at TV weather predictions, it’s actually hard sometimes to spot the transitions between observations (weather radar, satellites) and model output. And they give five-day predictions nowadays, without blinking or blushing! When I was little, living in the Netherlands, being hit by an unpredicted one-day storm was not even unusual…

As the first commenter says, the climate models aren’t yet capable of producing reliable results, as climate science is in its infancy.

That is not what Roger said at all. He was commenting on the difficulty of projecting anthropogenic impacts into the future. The main source of uncertainty in the models is what people decide to do about climate change. Climate science is not in its infancy – in fact, it’s older than quantum mechanics: scientists have been studying the greenhouse effect since the 1800s. -Kate

There are some fundamental things that are not yet known, e.g. what the actual value of climate sensitivity is, what causes ENSO events, and whether they are a cause or an effect of temperature change.

Climate sensitivity is not in any way “fundamental”, it is simply a parameter that ties together thousands of processes in an attempt to estimate the constant of proportionality between forcing and temperature, simplifying back-of-the-envelope calculations. GCMs do not use climate sensitivity as any kind of input, they work from first principles. ENSO is an interesting phenomenon but has little to do with the long-term climate; being a 5-year cycle, its impacts on the atmosphere average out within decades. -Kate

The land-based temperature records only go back to 1850 or 1880, dependent on which dataset you prefer.

The satellite based temperature and solar activity records only date from 1979.

The satellite based land ice thickness and sea level records only from the last decade.

True, but proxy data from ice cores, sediments, and tree rings gives us data for hundreds of thousands of years of climate. The error bars are bigger, but the data is still useful. -Kate

You can’t say that a climate model is reliable if it can only model known data. Obviously if a climate model has 500,000 lines of code and you apply weighting factors and time lags to the known forcing agents, you can program something to mirror known data. A climate model is only proved reliable when it predicts the future and observations prove it to be accurate.

I think I know where you are mistaken here. The parameters which are inferred from data do not change from run to run – scientists don’t keep one set for the ice ages, one set for today’s climate, etc. There’s just one set of parameters that is calibrated to today’s climate, to account for small-scale processes that can’t be modelled explicitly. When scientists model previous climate changes, such as the ice ages or the Medieval Climate Anomaly, the only thing that changes are the inputs. The model itself is the same. In this way, predicting the past is as good as predicting the future for testing the skill of climate models. -Kate

The most famous climate model is obviously Hansen’s 1984 climate model. Obviously, climate science has moved on since then, but it is a fact that global warming became famous based on the predictions that Hansen presented to the US congress in 1988 based on this model. His predictions were way out.

This is patently false. Hansen gave three scenarios for temperature change: A, B, and C. Observations are tracking between B and C. Scenario A was too high, but he expected that – it was meant as an upper bound. You can read more about how this exercise worked in Hansen’s paper; also see the Skeptical Science rebuttal of this common myth. -Kate

The 2007 IPCC 4th report didn’t look at Hansen’s predictions but at an increasing number of climate models with an increasingly broad range of predictions about the global mean temperature. Again, the observed temperature is heading towards breaching the lower range of these predictions.

I have no idea where you’re getting this from. Please tell me which part of the AR4 you are looking at. Models can easily reproduce current observations – Ben Santer recently tested this rigorously and presented his findings at AGU. -Kate

The credibility of climate science is also affected by the stretched claims that scientists understand the cooling period which occurred in the 19th century or the flattening periods that occurred between 1940 and 1980, and from 1998 or 2003 (dependent on which dataset you choose) and now.

Being able to explain decadal variations in climate in no way undermines the field’s credibility – that seems like rather curious logic to me. I’m not sure what you mean by “the cooling period in the 19th century” – do you mean the Little Ice Age (more like the 17th-18th century)? That can easily be accounted for by negative forcings from solar activity and volcanoes. The mid-century cooling period is also easy to explain because of the heavy aerosol emissions after World War II.

It is arguable how much global temperatures have actually “flattened” over the past decade, but a lower rate of warming is entirely consistent with climate change. There are numerous negative forcings that could be playing a part – the lowest solar minimum on record, heavy aerosol emissions from developing countries such as China, and even the cumulative effects of many small volcanoes (this was an interesting hypothesis I heard at AGU). It will take a few years before scientists figure out how much each forcing contributed, but the underlying mechanisms causing global warming are still underway. -Kate

Some things that climate science needs to do to gain credibility: agree a unified baseline period, agree an averaged dataset, and start all graphs with their timescales at 1880, say. Otherwise it appears to me as an analyst that statistical tricks are being used to present whichever point of view the author wants the reader to see.

This is quite an accusation to make without any evidence or examples, particularly against an entire scientific field. Please check these things out before you go spreading them around based on speculation. Also, how would standardizing graphs and baselines have any effect? As long as the study is clear about what its baseline is, where its data comes from, and what time period is being examined, I think it’s very useful to examine climate change using a diversity of approaches and datasets. -Kate

It’s the black line (land and ocean) temperature which everybody uses. You can see that it’s been below scenario C (which was the highly optimistic scenario in which we stabilised the content of CO2 in the atmosphere by 2000) for every year since 1999, apart from 2010 when it matched it. It’s way below scenario B, which is the scenario that most closely matches emissions. Even if you allow for Hansen’s fudge of introducing the station data (red line) in his 2006 paper, he says in that paper that we should use an average of the red and black line, which would mean that at best the observed data matched Hansen’s scenario C.

You can see that by 2010, the observed data trend was below the average, 95% confidence line and 2011 has been a lot cooler, lower than the 2009 & 2007 but higher than 2008, so you can see that give it a couple more years on the current trend, it will break out of the lower end of what is a very broad range of predictions produced by the IPCC climate models.

Aerosol production after the 2nd world war would be in line with CO2 emissions.

The increase in CO2 emissions after the 2nd world war appears from the graph to have started around 1950, but the flattening trend started 10 years earlier.

The standardisation of baseline periods etc. would help a heck of a lot. It’s easy for both those who believe and don’t believe in the AGW hypothesis to cherry-pick data to support their point of view. If everybody worked off a consistent set of metrics and scales, it would be easy to correlate information from disparate sources, and it would be impossible to focus in on a particular period and explain the reasons for what’s going on then, and then focus on a different period and provide a completely contradictory explanation of what’s going on there.

I’m sure you’ll observe from the blogosphere the countless arguments as to whether 1998, 2005 or 2010 was the warmest year, and the resulting significance as to what state the climate is in. With a unified dataset, this argument would go. I’d suggest creating an average similar to the WoodForTrees index of the 3 land-based datasets, and starting them in 1880.

I work as an analyst, so I know that statistics can never tell you what’s true and what’s false, but can at best allow you to see differences between the past and present – and only if you keep the consistency. As things change in the real world, we typically produce “As was” information to keep things consistent and “As is” information to show things as they are now.

“You can see that by 2010, the observed data trend was below the average, 95% confidence line and 2011 has been a lot cooler”

What??? The 95% confidence interval is the GREY area. You seem to think it is the thin black line. That black line is the centre of the confidence interval. Temperatures are expected to be in the grey area 19 times out of 20, and they are indeed. If you take half the interval, i.e. draw another area centred on the black line but half as wide, then 2 out of 3 temperature points would be expected to fall inside it, and again, they pretty much do.

Rick, can you really say to yourself that climate models that have a range of 0.7 °C change in temperature over a time period as short as 10 years can be considered credible? If the current flattening trend continues for a few more years, even this incredibly broad range of results is going to be broken.

Now a challenge for you.

Here are the changes in temperature over the earth-based temperature record (1880 to now).

Could I have a logical and consistent argument as to how CO2 correlates with the temperature changes observed over the entire temperature record?

The explanations that I’ve heard so far include:-

1. Krakatoa – 1883 caused large scale aerosol production which cooled the planet for 30 years (1880-1910). Obviously, this explanation isn’t credible. No volcanic eruption has ever had an effect for such a long time. Particulates get washed out of the atmosphere much quicker than CO2.

2. 1910-1940 – No explanation yet.

3. 1940-1980. Flattening period was due to aerosol pollution due to heavy industrialisation after the second world war. There was an increase in industrialisation after WW2, but as well as aerosols increasing, CO2 also increased. As aerosols wash out of the atmosphere quicker than CO2, this explanation doesn’t work. Further, the flattening period started 10 years before heavy industrialisation.

4. 1980-2000. CO2 in the atmosphere only got to a sufficient level in 1980 to overcome the cooling force of the aerosols. This was a tripping point. Sounds potentially possible but then what caused the 1910-1940 warming period?

5. 2000 onwards. Solar lull and La Niñas have flattened the trend. Hansen reproduces the TSI graph just above the solar cycle in his recent analysis. He also says that temperature change is delayed by 18 months due to ocean thermal inertia. Firstly, I’d challenge that: sea temperature change normally lags land temperature change by only a month or two. However, if we accept Hansen’s view, then we shouldn’t have seen the effects of the solar lull until mid-2003. Hansen says that the TSI increase that we should have seen in mid-2011 was masked by a slightly time-lagged La Niña which occurred from mid-2010 to spring 2011. ENSO events remain an enigma.

It’s the black line (land and ocean) temperature which everybody uses. You can see that it’s been below scenario C…”

The figure you link to is an updated version of one presented by Hansen initially in 1988, and as far as I can tell updated by Hansen himself, so he clearly doesn’t feel he has anything to hide. You should read what the original paper says about the values:

“Surface air temperature change in a warming climate is slightly larger than the SST change (4), especially in regions of sea ice. Therefore, the best temperature observation for comparison with climate models probably falls between the meteorological station surface air analysis and the land–ocean temperature index”

Which is between the red and black lines and not on the black one as you suggest.

Whilst it is undoubtedly the case that the actual temperature values are lower than projection B, this projection was made in 1988, and Hansen would have been unable to predict the recent very low solar minimum or the recent increase in aerosol levels arising from the burgeoning Asian economies. Given these unknowns, the projection was pretty accurate.

Eventually, as occurred in Europe and North America in the 20th Century, people in Asia will get fed up breathing poor quality air and action will be taken to reduce aerosol levels. When this happens we can expect to see rapid warming as the cooling effect of these diminishes.

It is clear to me that you will not consider GCMs to be useful or credible until they are completely perfect in their projections – basically claiming that “until we know everything, we know nothing.” At the same time, you are blindly trusting a rather odd method that would fail numerous hindcasting tests: extending a 5-year trend line indefinitely into the future – and using that result to claim that the models are “broken”!

Please familiarize yourself with the comment policy in the sidebar, and come back if you have anything new or useful to add to the discussion. -Kate

Hello Kate. Thanks for the great “intro to climate modeling” article, and in general for your site. We need more such reasoned explanation supported by data. I wrote a similar type of article suitable for the layperson about just what global warming is, called “Global Warming in a Nutshell”, posted at http://milo-scientific.com/pers/essays/gw.php It aims to be a factual explanation backed by peer-reviewed research, although in the middle there is a bit of opinion on why there is so much irrational skepticism out there.

Have you heard of a site called Global Warming Art at http://www.globalwarmingart.com/ ? It’s an excellent source of graphics from peer-reviewed research, and seems like a great site for anyone writing about global warming from a scientific perspective.

Kate, I came to your site from skepticalscience.com, which has a similar policy of deleting posts that question the AGW hypothesis. I will continue my debates on more high-profile websites where the debate isn’t rigged. As a BSc student, you’re early on in your career. Science has never been as politicised as it is in this debate.

On the topic of comment moderation, I agree with Gavin Schmidt, one of the top climate modellers in the world, who justified the comment policy on RealClimate as follows:

“We choose to try and create a space for genuine conversation…This is an imperfect process, but the alternative is a free-for-all that quickly deteriorates into a food fight. There are plenty of places to indulge in that kind of crap. There are only a few places where it’s not and we are not embarrassed to try to make this site one of them.”

I think that the question isn’t about the temperature, but about Hansen’s paper saying that if we had BAU, warming would track closer to A. Well, we have had BAU, and warming ISN’T tracking the A warming track but is, as you say, running between B and C.

Dana’s response is pretty much the same thing with his response at #3 in the SkS post you link to.

“Also while actual temps are in the range of Scenario C, greenhouse gas emissions have not followed those in that particular projection. It makes more sense to focus on Scenario B, which has been very close to actual emissions, and then determine why the actual temp change has been lower (mainly the climate sensitivity factor difference).”

Hansen did note in his PNAS paper that the model they used had a slightly high climate sensitivity – around 4 °C. I wonder why this is – perhaps the indirect effect of aerosols was modelled poorly? That was the real frontier in those days. Anyway, it’s not like anyone uses that model any more, so it is hardly a representation of GCM skill today.

I found your discussion of the Arctic ice pack melting graph (area vs GCM modeling results) interesting. Also, your comments on the SkS site and here concerning the “simulation” of El Nino with GCM were revealing.

“It is a very interesting application of chaos theory and the underlying patterns of chaotic systems.”

Concerning the modeling of the Arctic ice pack, obviously some improvements are needed. The effect of “soot” from aerosols is affecting the melting rate of the Greenland glaciers. Perhaps the concentration of “soot” on the Arctic ice pack is accelerating the melt rate (the volume of ice is being reduced “exponentially”).

Would a time-dependent albedo be appropriate for the ice/sea model to better correct for “the indirect effect of aerosols?”

Is it surprising that many self-styled ‘open-minded skeptics’ are completely closed to the idea that climate models might have something to them? Oh well.

* * *

As for the more genuine skeptics who actually want to learn something, such as HankHenry, I think a good place to start would be to read up on probability modelling in general, which has applications in game theory, artificial intelligence, thermodynamics, etc.

And I recall — correct me if I’m wrong — that climate models, being hard to solve analytically, use a Monte Carlo approach to model uncertainty, so that’s another thing to read up on.
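To give the flavour of a Monte Carlo uncertainty estimate (a toy with made-up numbers, not anything a GCM does internally): sample an uncertain parameter many times, push each sample through the model, and read a confidence range off the resulting distribution. Here the "model" is trivial and the 2.0–4.5 sensitivity range is purely illustrative.

```python
import random

random.seed(1)

# Toy "model": warming per doubling of CO2 equals the climate
# sensitivity, here drawn from an illustrative 2.0-4.5 range.
def warming(sensitivity, doublings=1.0):
    return sensitivity * doublings

samples = sorted(warming(random.uniform(2.0, 4.5)) for _ in range(10000))
low, high = samples[250], samples[-251]  # central 95% of outcomes
print(low < 3.0 < high)
```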

Speaking of which, Kate, your blog post seems to say little about how climate models incorporate and handle uncertainty. Perhaps you can write something about it some day?

Curry & Webster (2011) completely pull apart climate models. The climate models in the IPCC AR4 report use the same techniques as weather forecasting and financial risk modelling. The models that supposedly prove that observed temperatures can only be modelled successfully if you include manmade forcing agents were actually produced by reverse engineering from the desired result.

Climate models are actually very different from weather models and economic models, because they calculate boundary conditions (the general shape of the system over time) rather than initial conditions (the exact path a situation will follow). It’s sort of like predicting the outcome of 100 dice rolls, rather than a single dice roll.
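The dice analogy is easy to demonstrate numerically: any single roll is unpredictable (an initial-condition problem), while the average of 100 rolls clusters tightly around 3.5 (a boundary-condition problem — the statistics of the system). A minimal sketch:

```python
import random

random.seed(0)

def roll():
    """One die roll: uniform on 1..6, individually unpredictable."""
    return random.randint(1, 6)

one = roll()  # could be anything from 1 to 6

# The mean of 100 rolls, repeated 1000 times, stays close to 3.5.
means = [sum(roll() for _ in range(100)) / 100 for _ in range(1000)]
spread = max(means) - min(means)
print(spread < 1.5)  # far narrower than the 1-to-6 range of single rolls
```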

Model results are not produced by reverse engineering. I have personally read through the code for seven GCMs and can tell you that this is not the case. You can download the code and see for yourself if you don’t believe me. -Kate

Timesteps vary between models; in general they’re limited by the grid resolution via the CFL condition for stability. In the high-res ocean models I work with the timesteps are usually 5-10 minutes; sea ice can go up to 1 hour. The incoming shortwave is calculated each timestep based on the zenith angle (a composition of sinusoids for latitude, longitude, and time), the solar constant, and a parameterisation of atmospheric optical thickness based on air temp, humidity, and cloud cover. For full details see section 3c of Parkinson and Washington 1979.
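The zenith-angle part of that calculation is standard spherical astronomy. Here is a toy version (not the Parkinson and Washington parameterisation itself: it approximates the declination with a single sinusoid and ignores atmospheric attenuation entirely):

```python
import math

S0 = 1361.0  # solar constant in W/m^2 (modern satellite-era value)

def cos_zenith(lat_deg, lon_deg, day_of_year, utc_hours):
    """Cosine of the solar zenith angle."""
    lat = math.radians(lat_deg)
    # Solar declination: +/- 23.44 degrees over the year,
    # approximated by a single sinusoid zeroed near the March equinox.
    decl = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 81) / 365.0)
    # Hour angle: zero at local solar noon.
    hour = math.radians(15.0 * (utc_hours - 12.0) + lon_deg)
    return (math.sin(lat) * math.sin(decl)
            + math.cos(lat) * math.cos(decl) * math.cos(hour))

def toa_shortwave(lat_deg, lon_deg, day, hour):
    """Top-of-atmosphere downward shortwave; zero at night."""
    return S0 * max(0.0, cos_zenith(lat_deg, lon_deg, day, hour))

print(toa_shortwave(0.0, 0.0, 81, 12.0))  # equator, equinox, noon: full S0
print(toa_shortwave(0.0, 0.0, 81, 0.0))   # midnight: zero
```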

I would be very surprised for an atmospheric model at CMIP5 resolution (about 1 degree) to be able to run at a 12-hour time step. That would violate the CFL stability condition for all but incredibly low resolutions. If I recall correctly, the radiation timesteps for some CMIP5 codes I looked at (NASA’s ModelE and NCAR’s CESM, both publicly available) were 30 minutes. I suppose a 12-hour timestep could be true for a simple atmosphere model, e.g. an energy–moisture balance model, but you can’t really call that a GCM. Do you have any other examples?
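The arithmetic behind that surprise is a back-of-envelope CFL estimate. With hypothetical but typical numbers (a 1-degree grid, and external gravity waves travelling at roughly 300 m/s), the largest stable explicit timestep comes out at a few minutes, nowhere near 12 hours:

```python
import math

EARTH_RADIUS = 6.371e6  # metres

def max_timestep(grid_deg, wave_speed):
    """Largest stable explicit timestep from the 1-D CFL condition dt <= dx / c."""
    dx = EARTH_RADIUS * math.radians(grid_deg)  # grid spacing at the equator
    return dx / wave_speed

dt = max_timestep(1.0, 300.0)  # ~1 degree grid, ~300 m/s gravity waves
print(round(dt / 60.0, 1))  # stable timestep in minutes: about 6
```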

About

Kaitlin Alexander is a PhD student in climate science at the University of New South Wales in Sydney, Australia. She became interested in climate science as a teenager on the Canadian Prairies, and increasingly began to notice the discrepancies between scientific and public knowledge on climate change. She started writing this blog at age sixteen to help address this gap in public understanding, and it slowly evolved into a record of her research as a young climate scientist.
