One of the biggest challenges in understanding climate change is that the timescales involved are far longer than most people are used to thinking about. Garvey points out that this makes climate change different from any other ethical question, because both the causes and consequences are smeared out across time and space:

“There is a sense in which my actions and the actions of my present fellows join with the past actions of my parents, grandparents and great-grandparents, and the effects resulting from our actions will still be felt hundreds, even thousands of years in the future. It is also true that we are, in a way, stuck with the present we have because of our past. The little actions I undertake which keep me warm and dry and fed are what they are partly because of choices made by people long dead. Even if I didn’t want to burn fossil fuels, I’m embedded in a culture set up to do so.” (Garvey, 2008, p60)

Part of the problem is that the physical climate system is slow to respond to our additional greenhouse gas emissions, and similarly slow to respond to reductions in emissions. The first part of this is core to a basic understanding of climate change, as it’s built into the idea of equilibrium climate sensitivity (roughly speaking, the expected temperature rise for each doubling of CO2 concentrations in the atmosphere). The extra heat that’s trapped by the additional greenhouse gases builds up over time, and the planet warms slowly, but because the oceans have such a large thermal mass, it takes decades for this warming process to complete.
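This thermal lag can be sketched with a minimal one-box energy balance model. All the parameter values below are illustrative placeholders, not numbers from any real climate model:

```python
# A minimal one-box energy balance model of thermal inertia.
# All parameter values are illustrative, not from any real model.
C = 30.0    # effective heat capacity of the ocean mixed layer (W·yr/m²/K)
lam = 1.2   # climate feedback parameter (W/m²/K)
F = 3.7     # radiative forcing from doubled CO2 (W/m²), held constant
dt = 0.1    # timestep (years)

T = 0.0                           # warming relative to pre-industrial (K)
temps = []
for _ in range(int(100 / dt)):    # simulate a century
    T += dt * (F - lam * T) / C   # dT/dt = (F - λT) / C
    temps.append(T)

equilibrium = F / lam             # the warming we're committed to
```

Even though the forcing is applied instantly, the simulated temperature takes many decades to approach its equilibrium value, because the ocean keeps soaking up most of the extra heat.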

Unfortunately, the second part, that the planet takes a long time to respond to reductions in emissions, is harder to explain, largely because of the common assumption that CO2 will behave like other pollutants, which wash out of the atmosphere fairly quickly once we stop emitting them. This assumption underlies much of the common wait-and-see response to climate change, as it gives rise to the myth that once we get serious about climate change (e.g. because we start to see major impacts), we can fix the problem fairly quickly. Unfortunately, this is not true at all, because CO2 is a long-lived greenhouse gas. About half of human CO2 emissions are absorbed by the oceans and soils, over a period of several decades. The remainder stays in the atmosphere. There are several natural processes that remove this remaining CO2, but they take thousands of years, which means that even with zero greenhouse gas emissions, we’re likely stuck with the consequences of life on a warmer planet for centuries.
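The two-timescale behaviour can be sketched as a toy decay curve. This is not a real carbon-cycle model; the two e-folding times below are rough illustrative values chosen to match the description above:

```python
import math

# Toy sketch of the airborne fraction of a CO2 pulse over time.
# Roughly half is absorbed on a decades scale; the rest takes millennia.
# Both timescales are illustrative, not fitted to carbon-cycle data.
def airborne_fraction(years_after_pulse, fast_tau=30.0, slow_tau=5000.0):
    return (0.5 * math.exp(-years_after_pulse / fast_tau) +
            0.5 * math.exp(-years_after_pulse / slow_tau))
```

A century after the pulse, about half of it is still in the atmosphere, and even after a thousand years a substantial fraction remains.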

So the physical climate system presents us with two forms of inertia: one that delays the warming due to greenhouse gas emissions, and one that delays the reduction in that warming in response to reduced emissions:

The thermal inertia of the planet’s surface (largely due to the oceans), by which the planet can keep absorbing extra heat for years before it makes a substantial difference to surface temperatures. (scale: decades)

The carbon cycle inertia by which CO2 is only removed from the atmosphere very slowly, and has a continued warming effect for as long as it’s there. (scale: decades to millennia)

But these are not the only forms of inertia that matter. There are also various kinds of inertia in the socio-economic system that slow down our response to climate change. For example, Davis et al. attempt to quantify the emissions from all the existing energy infrastructure (power plants, factories, cars, buildings, etc. that already exist and are in use), because even under the most optimistic scenario, it will take decades to replace all this infrastructure with clean energy alternatives. Here’s an example of their analysis, under the assumption that things we’ve already built will not be retired early. This assumption is reasonable because (1) it’s rare that we’re willing to bear the cost of premature retirement of infrastructure and (2) it’s going to be hard enough building new clean energy infrastructure fast enough to replace stuff that has worn out while meeting increasing demand.

Expected ongoing carbon dioxide emissions from existing infrastructure. Includes primary infrastructure only – i.e. infrastructure that directly releases CO2 (e.g. cars & trucks), but not infrastructure that encourages the continued production of devices that emit CO2. (e.g. the network of interstate highways in the US). From Davis et al, 2010.

So that gives us our third form of inertia:

Infrastructural inertia from existing energy infrastructure, as emissions of greenhouse gases will continue from everything we’ve built in the past, until it can be replaced. (scale: decades)

We’ve known about the threat of climate change for decades, and various governments and international negotiations have attempted to deal with it, and yet have made very little progress. This suggests there are more forms of inertia that we ought to be able to name and quantify. To do this, we need to look at the broader socio-economic system that ought to allow us as a society to respond to the threat of climate change. Here’s a schematic of that system, as a systems dynamics model:

The socio-geophysical system. Arrows labelled ‘+’ are positive influence links (“A rise in X tends to cause a rise in Y, and a fall in X tends to cause a fall in Y”). Arrows labelled ‘-‘ represent negative links, where a rise in X tends to cause a fall in Y, and vice versa. The arrow labelled with a tap (faucet) is an accumulation link: Y will continue to rise even while X is falling, until X reaches net zero.

Broadly speaking, decarbonization will require both changes in technology and changes in human behaviour. But before we can do that, we have to recognize and agree that there is a problem, develop an agreed set of coordinated actions to tackle it, and then implement the policy shifts and behaviour changes to get us there.

At first, this diagram looks promising: once we realise how serious climate change is, we’ll take the corresponding actions, and that will bring down emissions, solving the problem. In other words, the more carbon emissions go up, the more they should drive a societal response, which in turn (eventually) will reduce emissions again. But the diagram includes a subtle but important twist: the link from carbon emissions to atmospheric concentrations is an accumulation link. Even as emissions fall, the amount of greenhouse gases in the atmosphere continues to rise. The latter rise only stops when carbon emissions reach zero. Think of the tap on a bathtub – if you reduce the inflow of water, the level of water in the tub still rises, until you turn the tap off completely.
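The bathtub effect is easy to demonstrate with a few lines of code. The numbers below are illustrative, and natural carbon sinks are ignored, but the shape of the result is the point: the stock keeps rising as long as the inflow is positive, and only levels off at net zero.

```python
# Bathtub dynamics: the atmospheric stock keeps rising while emissions
# fall, and only stops rising once emissions reach zero.
# All quantities are illustrative; natural sinks are ignored.
stock = 850.0       # atmospheric carbon stock (GtC)
emissions = 10.0    # annual emissions (GtC/yr)
history = []
for year in range(40):
    emissions = max(0.0, emissions - 0.5)  # steady annual emission cuts
    stock += emissions                     # accumulation link: inflow adds up
    history.append(stock)
```

Plot `history` and you see the stock climbing throughout the entire period of falling emissions, flattening out only once emissions hit zero.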

Worse still, there are plenty more forms of inertia hidden in the diagram, because each of the causal links takes time to operate. I’ve given these additional sources of inertia names:

Sources of inertia in the socio-geophysical climate system

For example, there are forms of inertia that delay the impacts of increased temperatures, both on ecosystems and on human society. Most of the systems that are impacted by climate change can absorb smaller changes in the climate without much noticeable difference, but then reach a threshold beyond which they can no longer be sustained. I’ve characterized two forms of inertia here:

Natural variability (or “signal to noise”) inertia, which arises because initially, temperature increases due to climate change are much smaller than the internal variability of daily and seasonal weather patterns. Hence it takes a long time for the ‘signal’ of climate change to emerge from the noise of natural variability. (scale: decades)

Ecosystem resilience. We tend to think of resilience as a good thing – defined informally as the ability of a system to ‘bounce back’ after a shock. But resilience can also mask underlying changes that push a system closer and closer to a threshold beyond which it cannot recover. So this form of inertia acts by masking the effect of that change, sometimes until it’s too late to act. (scale: years to decades)

Then, once we identify the impacts of climate change (whether in advance or after the fact), it takes time for these to feed into the kind of public concern needed to build agreement on the need for action:

Societal resilience. Human society is very adaptable. When storms destroy our buildings, we just rebuild them a little stronger. When drought destroys our crops, we just invent new forms of irrigation. Just as with ecosystems, there is a limit to this kind of resilience, when subjected to a continual change. But our ability to shrug and get on with things causes a further delay in the development of public concern about climate change. (scale: decades?)

Denial. Perhaps even stronger than human resilience is our ability to fool ourselves into thinking that something bad is not happening, and to look for other explanations than the ones that best fit the evidence. Denial is a powerful form of inertia: it stops addicts from acknowledging they need to seek help to overcome addiction, and it stops all of us from acknowledging that we have a fossil fuel addiction, and need help to deal with it. (scale: decades to generations?)

Even then, public concern doesn’t immediately translate into effective action because of:

Individualism. A frequent response to discussions on climate change is to encourage people to make personal changes in their lives: change your lightbulbs, drive a little less, fly a little less. While these things are important in the process of personal discovery, by helping us understand our individual impact on the world, they are a form of voluntary action only available to the privileged, and hence do not constitute a systemic solution to climate change. When the systems we live in drive us towards certain consumption patterns, it takes a lot of time and effort to choose a low-carbon lifestyle. So the only way this scales is through collective political action: getting governments to change the regulations and price structures that shape what gets built and what we consume, and making governments and corporations accountable for cutting their greenhouse gas contributions. (scale: decades?)

When we get serious about the need for coordinated action, there are further forms of inertia that come into play:

Missing governance structures. We simply don’t have the kind of governance at either the national or international level that can put in place meaningful policy instruments to tackle climate change. The Kyoto process failed because the short term individual interests of the national governments who have the power to act always tend to outweigh the long term collective threat of climate change. The Paris agreement is woefully inadequate for the same reason. Similarly, national governments are hampered by the need to respond to special interest groups (especially large corporations), which means legislative change is a slow, painful process. (scale: decades!)

Bureaucracy. This hampers the implementation of new policy tools. It takes time to get legislation formulated and agreed, and it takes time to set up the necessary institutions to ensure it is implemented. (scale: years)

Social Resistance. People don’t like change, and some groups fight hard to resist changes that conflict with their own immediate interests. Every change in social norms is accompanied by pushback. And even when we welcome change and believe in it, we often slip back into old habits. (scale: years? generations?)

Finally, development and deployment of clean energy solutions experience a large number of delays:

R&D lag. It takes time to ramp up new research and development efforts, due to the lack of qualified personnel, the glacial speed at which research institutions such as universities operate, and the tendency, especially in academia, for researchers to keep working on what they’ve always worked on in the past, rather than addressing societally important issues. Research on climate solutions is inherently trans-disciplinary, and existing research institutions tend to be very bad at supporting work that crosses traditional boundaries. (scale: decades?)

Investment lag: A wholesale switch from fossil fuels to clean energy and energy efficiency will require huge upfront investment. Agencies that have the funding to enable this switch (governments, investment portfolio managers, venture capitalists) tend to be very risk averse, and so prefer things that they know offer a return on investment – e.g. more oil wells and pipelines rather than new cleantech alternatives. (scale: years to decades)

Diffusion of innovation: new technologies tend to take a long time to reach large scale deployment, following the classic s-shaped curve, with a small number of early adopters, and, if things go well, a steadily rising adoption curve, followed by a tailing off as laggards resist new technologies. Think about electric cars: while the technology has been available for years, they still only constitute less than 1% of new car sales today. Here’s a study that predicts this will rise to 35% by 2040. Think about that for a moment – if we follow the expected diffusion of innovation pattern, two thirds of new cars in 2040 will still have internal combustion engines. (scale: decades)
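The S-shaped curve can be sketched as a logistic function. The parameters below are illustrative, chosen only to roughly reproduce the numbers above (around 1% of sales in the mid-2010s, about 35% by 2040); they are not fitted to any real adoption data:

```python
import math

# Classic S-shaped diffusion of innovation, modelled as a logistic curve.
# midpoint and rate are illustrative, chosen to roughly match ~1% of new
# sales in 2016 and ~35% by 2040 -- not fitted to real data.
def adoption(year, midpoint=2043.7, rate=0.166):
    """Fraction of new sales captured by the new technology."""
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))
```

The curve makes the inertia visible: adoption crawls along near zero for years, and even once it takes off, a majority of sales still go to the old technology for another decade or more.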

All of these forms of inertia slow the process of dealing with climate change, allowing the warming to steadily increase while we figure out how to overcome them. So the key problem isn’t how to address climate change by switching from the current fossil fuel economy to a carbon-neutral one – we probably have all the technologies to do this today. The problem is how to do it fast enough. To stay below 2°C of warming, the world needs to cut greenhouse gas emissions by 50% by 2030, and achieve carbon neutrality in the second half of the century. We’ll have to find a way of overcoming many different types of inertia if we are to make it.

I’ve been exploring how Canada’s commitments to reduce greenhouse gas emissions stack up against reality, especially in the light of the government’s recent decision to stick with the emissions targets set by the previous administration.

Once upon a time, Canada was considered a world leader on climate and environmental issues. The Montreal Protocol on Substances that Deplete the Ozone Layer, signed in 1987, is widely regarded as the most successful international agreement on environmental protection ever. A year later, Canada hosted a conference on The Changing Atmosphere: Implications for Global Security, which helped put climate change on the international political agenda. This conference was one of the first to identify specific targets to avoid dangerous climate change, recommending a global reduction in greenhouse gas emissions of 20% by 2005. It didn’t happen.

It took another ten years before an international agreement to cut emissions was reached: the Kyoto Protocol in 1997. Hailed as a success at the time, it became clear over the ensuing years that with non-binding targets, the agreement was pretty much a sham. Under Kyoto, Canada agreed to cut emissions to 6% below 1990 levels by the 2008-2012 period. It didn’t happen.

At the Copenhagen talks in 2009, Canada proposed an even weaker goal: 17% below 2005 levels (which corresponds to 1.5% above 1990 levels) by 2020. Given that emissions have risen steadily since then, it probably won’t happen. By 2011, facing an embarrassing gap between its Kyoto targets and reality, the Harper administration formally withdrew from Kyoto – the only country ever to do so.

Last year, in preparation for the Paris talks, the Harper administration submitted a new commitment: 30% below 2005 levels by 2030. At first sight it seems better than previous goals. But it includes a large slice of expected international credits and carbon sequestered in wood products, as Canada incorporates Land Use, Land Use Change and Forestry (LULUCF) into its carbon accounting. In terms of actual cuts in greenhouse gas emissions, the target represents approximately 8% above 1990 levels.

The new government, elected in October 2015, trumpeted a renewed approach to climate change, arguing that Canada should be a world leader again. At the Paris talks in 2015, the Trudeau administration proudly supported both the UN’s commitment to keep global temperatures below 2°C of warming (compared to the pre-industrial average), and voiced strong support for an even tougher limit of 1.5°C. However, the government has chosen to stick with the Harper administration’s original Paris targets.

Here’s what all of this looks like – click for bigger version. Note: emissions data from Government of Canada; the Toronto 1988 target was never formally adopted, but was Liberal party policy in the early 90’s. Global 2°C pathway 2030 target from SEI; Emissions projection, LULUCF adjustment, and “fair” 2030 target from CAT.

Several things jump out at me from this chart. First, the complete failure to implement policies that would have allowed us to meet any of these targets. The dip in emissions from 2008-2010, which looked promising for a while, was due to the financial crisis and economic downturn, rather than any actual climate policy. Second, the similar slope of the line to each target, which represents the expected rate of decline from when the target was proposed to when it ought to be attained. At no point has there been any attempt to make up lost ground after each failed target. Finally, in terms of absolute greenhouse gas emissions, each target is worse than the previous ones. Shifting the baseline from 1990 to 2005 masks much of this, and shows that successive governments are more interested in optics than serious action on climate change.

At no point has Canada ever adopted science-based targets capable of delivering on its commitment to keep warming below 2°C.

At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK, in Paris, in Hamburg, Germany, and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to its climate model. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of the models have been under continuous development for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – how many errors there are per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual results you’ll have to read the paper):
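The metric itself is simple to state in code. The figures below are illustrative placeholders, not the actual data from our study:

```python
# Defect density: reported defects per thousand lines of code (KLOC).
# The numbers in the example call are illustrative, not our study's data.
def defect_density(defects, lines_of_code):
    return defects / (lines_of_code / 1000.0)

print(defect_density(25, 500_000))  # 0.05 defects per KLOC
```

Low defect densities like this are what you'd expect from heavily reviewed, safety-critical-style development, which is exactly the comparison the Space Shuttle benchmark provides.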

We know it’s very hard to build a large complex piece of software without making mistakes. Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?”. The question is “Is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you haven’t accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, or air, or ice is flowing into, or out of, each block, and in which directions. It calculates changes in temperature, density, humidity, and so on. And whether stuff such as dust, salt, and pollutants are passing through or accumulating in each block. We have to account for the sunlight passing down through the block during the day. Some of what’s in each block might filter some of the incoming sunlight, for example if there are clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in the block might trap some of that heat — for example clouds and greenhouse gases.

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:

Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on some of the fastest supercomputers for the run to complete. So the speed of the computer limits how small we can make the blocks.
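A back-of-envelope sketch shows why resolution is so expensive. Halving the horizontal grid spacing quadruples the number of columns, and stability constraints (the CFL condition) also force a smaller timestep, so each halving costs very roughly 8× as much compute. The cubic scaling below is that rough rule of thumb, not a measurement of any particular model:

```python
# Rough cost scaling for a grid-based model: halving the horizontal grid
# spacing quadruples the number of columns, and the CFL condition also
# forces a proportionally smaller timestep, so cost scales roughly as
# the cube of the refinement factor. Illustrative, not measured.
def relative_cost(spacing_km, baseline_km=87.0):
    refinement = baseline_km / spacing_km
    return refinement ** 3   # two horizontal dimensions + finer timestep

print(relative_cost(87.0), relative_cost(43.5))  # 1.0 8.0
```

So going from 87 km blocks to roughly 20 km blocks costs on the order of 64 times as much computer time, which is why faster supercomputers translate so directly into sharper models.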

Building models this way is remarkably successful. Here’s a video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds and, in orange, that’s where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequences of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way that modellers work is they spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say, I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and they are using the peer review process to convince their colleagues that what they have done is correct:

Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 25 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 25 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agree to release all the results of those tests to the public, on the internet, so that anyone who wanted to use any of that software can pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which there are 25 teams around the world trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):

I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:

If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. That’s like weather forecasting. Any errors in the initial conditions multiply up rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:

Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:

The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced over 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the Arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night-time temperatures would rise faster than daytime temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask a model what would happen if we stopped burning all fossil fuels tomorrow. The answer from the models is that the temperature of the planet stays at roughly whatever level it had reached when we stopped. If we wait twenty years and then stop, we’re stuck with whatever temperature we’re at by then for tens of thousands of years. You could also ask a model what happens if we dig up all known reserves of fossil fuels and burn them all at once, in one big party. Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model; the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction: they might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models simulate the past few decades, especially the last decade, you’ll see some of both. For example, the models have under-estimated how fast the Arctic sea ice has melted, and they have under-estimated how fast sea levels have risen over the last decade. On the other hand, they over-estimated the rate of warming at the surface of the planet, while under-estimating the rate of warming in the deep oceans; some of the warming ends up in a different place from where the models predicted. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.

If we keep on increasing our use of fossil fuels (finding more oil, building more pipelines, digging up more coal), we’ll follow the top path. That takes us to a planet that, by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we burn less fossil fuel than we did the previous year, until about mid-century, when we get down to zero emissions, and we invent some way to start removing carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s what the climate will do in response. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package, and it’s our job to take that knowledge and turn it into wisdom. And to decide which future we would like.

Yesterday I talked about three reinforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of sulphur compounds that end up as aerosols in the atmosphere, which seed the formation of clouds. Fewer clouds mean lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I haven’t included clouds in my diagrams yet, because clouds deserve special treatment, in part because they are involved in two major feedback loops that have opposite effects:

Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which of the effects dominates? (i.e. which loop is stronger?) Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour. But running such a model for the whole planet is not feasible with today’s computers.

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, warming will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

The lapse rate feedback – if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-‘ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words this new loop just strengthens the effect of the existing loop – for convenience we could just fold both paths into the one link.

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that all these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely narrowed, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops. So the net effect of these three loops is an amplifying effect on the warming.
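To see how loop strengths combine, here's a toy gain calculation (a Python sketch; the gain values are made-up illustrative numbers, not measured feedback strengths). If each loop returns a fraction g of any warming as additional warming, the total response to an initial warming dT0 is the geometric series dT0 * (1 + g + g^2 + ...), which converges to dT0 / (1 - g) as long as the net gain is below 1:

```python
# Toy feedback-gain arithmetic (not a cloud model). Reinforcing loops
# have positive gain, balancing loops negative gain; the net gain g
# amplifies an initial warming dT0 by a factor of 1 / (1 - g).
# All gain values below are made-up, for illustration only.

def equilibrium_warming(dT0, gains):
    g = sum(gains)   # net feedback gain (positive = net reinforcing)
    assert g < 1, "a net gain of 1 or more would mean runaway warming"
    return dT0 / (1.0 - g)

dT0 = 1.2            # initial (no-feedback) warming, degrees C
water_vapour = +0.5  # reinforcing: more warming -> more water vapour
cloud_albedo = -0.2  # balancing: more cloud -> more reflected sunlight
lapse_rate   = -0.1  # balancing: upper troposphere warms faster

# net gain = 0.2, so the feedbacks amplify the warming by 1/(1-0.2) = 1.25x
print(equilibrium_warming(dT0, [water_vapour, cloud_albedo, lapse_rate]))
```

This also shows why a balancing loop slows warming rather than reversing it: a negative gain shrinks the multiplier toward (but never below) the no-feedback response.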

We’re taking the kids to see their favourite band: Muse are playing in Toronto tonight. I’m hoping they play my favourite track:

I find this song fascinating, partly because of the weird mix of progressive rock and dubstep. But more for the lyrics:

All natural and technological processes proceed in such a way that the availability of the remaining energy decreases. In all energy exchanges, if no energy enters or leaves an isolated system, the entropy of that system increases. Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless. New energy cannot be created and high grade energy is destroyed. An economy based on endless growth is unsustainable. The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. In an isolated system, the entropy can only increase. A species set on endless growth is unsustainable.

This summarizes, perhaps a little too succinctly, the core of the critique of our current economy, first articulated clearly in 1972 by the Club of Rome in the Limits to Growth Study. Unfortunately, that study was widely dismissed by economists and policymakers. As Jorgen Randers points out in a 2012 paper, the criticism of the Limits to Growth study was largely based on misunderstandings, and the key lessons are absolutely crucial to understanding the state of the global economy today, and the trends that are likely over the next few decades. In a nutshell, humans exceeded the carrying capacity of the planet sometime in the latter part of the 20th century. We’re now in the overshoot portion, where it’s only possible to feed the world and provide energy for economic growth by consuming irreplaceable resources and using up environmental capital. This cannot be sustained.

These are Herman Daly’s three conditions for a sustainable economy. We cannot use renewable resources faster than they can be replenished.

We cannot generate wastes faster than they can be absorbed by the environment.

We cannot use up any non-renewable resource.

We can and do violate all of these conditions all the time. Indeed, modern economic growth is based on systematically violating all three of them, but especially #3, as we rely on cheap fossil fuel energy. But any system that violates these rules cannot be sustained indefinitely, unless it is also able to import resources and export wastes to other (external) systems. The key problem for the 21st century is that we’re now violating all three conditions on a global scale, and there are no longer other systems that we can rely on to provide a cushion – the planet as a whole is an isolated system. There are really only two paths forward: either we figure out how to re-structure the global economy to meet Daly’s three conditions, or we face a global collapse (for an understanding of the latter, see Graham Turner’s 2012 paper).

Last week, Damon Matthews from Concordia visited, and gave a guest CGCS lecture, “Cumulative Carbon and the Climate Mitigation Challenge”. The key idea he addressed in his talk is the question of “committed warming” – i.e. how much warming are we “owed” because of carbon emissions in the past (irrespective of what we do with emissions in the future). But before I get into the content of Damon’s talk, here’s a little background.

The question of ‘owed’ or ‘committed’ warming arises because we know it takes some time for the planet to warm up in response to an increase in greenhouse gases in the atmosphere. You can calculate a first approximation of how much it will warm up from a simple energy balance model (like the ones I posted about last month). However, to calculate how long it takes to warm up you need to account for the thermal mass of the oceans, which absorb most of the extra energy and hence slow the rate of warming of surface temperatures. For this you need more than a simple energy balance model.

You can do a very simple experiment with a Global Circulation Model, by setting CO2 concentrations at double their pre-industrial levels, and then leave them constant at this level, to see how long the earth takes to reach a new equilibrium temperature. Typically, this takes several decades, although the models differ on exactly how long. Here’s what it looks like if you try this with EdGCM (I ran it with doubled CO2 concentrations starting in 1958):

Of course, the concentrations would never instantaneously double like that, so a more common model experiment is to increase CO2 levels gradually, say by 1% per year (a little faster than they have risen in recent decades) until they reach double the pre-industrial concentrations (which takes approximately 70 years), and then leave them constant at that level. This particular experiment is a standard way of estimating the Transient Climate Response – the expected warming at the moment we first reach a doubling of CO2 – and is included in the CMIP5 experiments. In these model experiments, it typically takes a few decades more of warming until a new equilibrium point is reached, and the models indicate that the transient response is expected to be a little over half of the eventual equilibrium warming.
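This ramp experiment can be sketched too. The version below is again just an illustrative Python sketch: it adds a deep-ocean box to the energy balance, with round-number parameters of the kind fitted to GCMs in the literature (my assumptions, not any specific model's values), and compares the warming at the moment of doubling to the eventual equilibrium warming:

```python
# Two-box energy balance model (mixed layer + deep ocean) run through the
# standard "1% per year until doubling" experiment. All parameters are
# illustrative round numbers, not fitted to any particular GCM.
import math

lam = 1.2     # climate feedback parameter, W/m^2/K
gam = 0.9     # heat uptake coefficient of the deep ocean, W/m^2/K
C   = 7.3     # mixed-layer heat capacity, W*yr/m^2/K
Cd  = 106.0   # deep-ocean heat capacity, W*yr/m^2/K
F2x = 3.7     # radiative forcing from doubled CO2, W/m^2

T, Td, ratio = 0.0, 0.0, 1.0
transient = None
for year in range(300):
    if ratio < 2.0:
        ratio *= 1.01                   # +1% CO2 per year (~70 years)
    F = F2x * math.log2(ratio)          # forcing is logarithmic in CO2
    T  += (F - lam * T - gam * (T - Td)) / C   # surface layer (fast)
    Td += gam * (T - Td) / Cd                  # deep ocean (slow)
    if transient is None and ratio >= 2.0:
        transient = T                   # warming at the moment of doubling

# transient response as a fraction of equilibrium: a little over half
print(round(transient / (F2x / lam), 2))
```

The deep ocean is still cold at the moment of doubling, so it keeps soaking up heat and holds the transient response well below the equilibrium value.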

This leads to a (very rough) heuristic that as the planet warms, we’re always ‘owed’ almost as much warming again as we’ve already seen at any point, irrespective of future emissions, and it will take a few decades for all that ‘owed’ warming to materialize. But, as Damon argued in his talk, there are two problems with this heuristic. First, it confuses the issue when discussing the need for an immediate reduction in carbon emissions, because it suggests that no matter how fast we reduce them, the ‘owed’ warming means such reductions will make little difference to the expected warming in the next two decades. Second, and more importantly, the heuristic is wrong! How so? Read on!

For an initial analysis, we can view the climate problem just in terms of carbon dioxide, as the most important greenhouse gas. Increasing CO2 emissions leads to increasing CO2 concentrations in the atmosphere, which leads to temperature increases, which lead to climate impacts. And of course, there’s a feedback in the sense that our perceptions of the impacts (whether now or in the future) lead to changed climate policies that constrain CO2 emissions.

So, what happens if we were to stop all CO2 emissions instantly? The naive view is that temperatures would continue to rise, because of the ‘climate commitment’ – the ‘owed’ warming that I described above. However, most models show that the temperature stabilizes almost immediately. To understand why, we need to realize there are different ways of defining ‘climate commitment’:

Zero emissions commitment – How much warming do we get if we set CO2 emissions from human activities to be zero?

Constant composition commitment – How much warming do we get if we hold atmospheric concentrations constant? (in this case, we can still have some future CO2 emissions, as long as they balance the natural processes that remove CO2 from the atmosphere).

The difference between these two definitions is shown here. Note that in the zero emissions case, concentrations drop from an initial peak, and then settle down at a lower level:

The model experiments most people are familiar with are the constant composition experiments, in which there is continued warming. But in the zero emissions scenarios, there is almost no further warming. Why is this?

The relationship between carbon emissions and temperature change (the “Carbon Climate Response”) is complicated, because it depends on two factors, each of which is complicated by (different types of) inertia in the system:

Climate sensitivity – how much temperature changes in response to different levels of CO2 in the atmosphere. The temperature response is slowed down by the thermal inertia of the oceans, which means it takes several decades for the earth’s surface temperatures to respond fully to a change in CO2 concentrations.

Carbon sensitivity – how much concentrations of CO2 in the atmosphere change in response to different levels of carbon emissions. A significant fraction (roughly half) of our CO2 emissions is absorbed by the oceans, but this also takes time. We can think of this as “carbon cycle inertia” – the delay in uptake of the extra CO2, which also takes several decades. [Note: there is a second kind of carbon system inertia, by which it takes tens of thousands of years for the rest of the CO2 to be removed, via very slow geological processes such as rock weathering.]

It turns out that the two forms of inertia roughly balance out. The thermal inertia of the oceans slows the rate of warming, while the carbon cycle inertia accelerates it. Our naive view of the “owed” warming is based on an understanding of only one of these, the thermal inertia of the ocean, because much of the literature talks only about climate sensitivity, and ignores the question of carbon sensitivity.
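The near-cancellation can be illustrated with a toy coupled model (a Python sketch; every number is an illustrative assumption, and the carbon cycle here is a crude two-pool caricature: part of each year's emissions stays airborne effectively forever, and the rest is taken up by oceans and land over a few decades). Emissions run for a century and then stop dead; the ocean's thermal lag keeps pushing temperatures up while carbon uptake pulls the forcing down, and the two roughly cancel:

```python
# Toy zero-emissions experiment: a two-pool carbon model coupled to a
# two-box energy balance model. All parameter values are illustrative
# round numbers; the pool fractions are chosen so that roughly half of
# emissions remain airborne while emissions continue.
import math

lam, gam = 1.2, 0.9   # feedback / deep-ocean heat uptake, W/m^2/K
C, Cd = 7.3, 106.0    # mixed-layer / deep-ocean heat capacity, W*yr/m^2/K
F2x = 3.7             # forcing from doubled CO2, W/m^2

permanent, decaying = 0.0, 0.0   # atmospheric CO2 anomaly pools, ppm
T, Td = 0.0, 0.0
history = []
for year in range(300):
    emissions = 2.0 if year < 100 else 0.0    # ppm per year, then stop
    permanent += 0.3 * emissions              # stays for millennia
    decaying = decaying * math.exp(-1.0 / 30.0) + 0.7 * emissions
    co2 = 280.0 + permanent + decaying
    F = F2x * math.log2(co2 / 280.0)
    T  += (F - lam * T - gam * (T - Td)) / C  # surface warms fast
    Td += gam * (T - Td) / Cd                 # deep ocean warms slowly
    history.append(T)

# Temperature at years 100 (when emissions stop), 200, and 300:
# roughly flat, as the two forms of inertia offset each other.
print(round(history[99], 2), round(history[199], 2), round(history[299], 2))
```

In the constant-composition version of this experiment (hold co2 fixed at its year-100 value instead), the temperature keeps climbing for decades, which is exactly the difference between the two definitions of commitment described above.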

The fact that these two forms of inertia tend to balance leads to another interesting observation. The models all show an approximately linear response to cumulative emissions. For example, here are the CMIP3 models, used in the IPCC AR4 report (the average of the models, indicated by the arrow, is around 1.6C of warming per 1,000 gigatonnes of carbon):

The same relationship seems to hold for the CMIP5 models, many of which now include a dynamic carbon cycle:

This linear relationship isn’t determined by any physical properties of the climate system, and probably won’t hold in much warmer or cooler climates, nor when other feedback processes kick in. So we could say it’s a coincidental property of our current climate. However, it’s rather fortuitous for policy discussions.

Historically, we have emitted around 550 billion tonnes of carbon since the beginning of the industrial era, which gives us an expected temperature response of around 0.9°C. If we want to hold temperature rises to no more than 2°C of warming, total future emissions should not exceed a further 700 billion tonnes of carbon. In effect, this gives us a total worldwide carbon budget for the future. The hard policy question, of course, is then how to allocate this budget among the nations (or people) of the world in an equitable way.
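The budget arithmetic is worth making explicit (a quick Python sketch using the ~1.6°C per 1,000 GtC multi-model average mentioned above):

```python
# Carbon budget arithmetic, using the multi-model average response of
# ~1.6 C of warming per 1000 GtC of cumulative emissions.
CCR = 1.6 / 1000.0   # degrees C per GtC of cumulative carbon emissions
emitted = 550.0      # GtC emitted since the start of the industrial era
target = 2.0         # warming threshold, degrees C

warming_so_far = CCR * emitted             # ~0.9 C
remaining_budget = target / CCR - emitted  # ~700 GtC left to emit
print(round(warming_so_far, 2), round(remaining_budget))
```

Because the response is linear in cumulative emissions, the budget doesn't care *when* the carbon is emitted, only how much of it there is in total.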

[A few years ago, I blogged about a similar analysis, which says that cumulative carbon emissions should not exceed 1 trillion tonnes in total, ever. That calculation gives us a smaller future budget of less than 500 billion tonnes. That result came from analysis using the Hadley model, which has one of the higher slopes on the graphs above. Which number we use for a global target then might depend on which model we believe gives the most accurate projections, and perhaps how we also factor in the uncertainties. If the uncertainty range across models is accurate, then picking the average would give us a 50:50 chance of staying within the temperature threshold of 2°C. We might want better odds than this, and hence a smaller budget.]

In the National Academies report in 2011, the cumulative carbon budgets for each temperature threshold were given as follows (note the size of the uncertainty whiskers on each bar):

The idea that there is some additional warming owed, no matter what emissions pathway we follow, is incorrect. Zero future emissions means little to no future warming, so future warming depends entirely on future emissions. And while the idea of zero future emissions isn’t policy-relevant (because zero emissions is impossible, at least in the near future), it does have implications for how we discuss policy choices. In particular, it means the idea that CO2 emissions cuts will not have an effect on temperature change for several decades is also incorrect. Every tonne of CO2 emissions avoided has an immediate effect on reducing the temperature response.

Another source of confusion is the emissions scenarios used in the IPCC report. They don’t diverge significantly for the first few decades, largely because we’re unlikely (and to some extent unable) to make massive emissions reductions in the next 1-2 decades, because society is very slow to respond to the threat of climate change, and even when we do respond, the amount of existing energy infrastructure that has to be rebuilt is huge. In this sense, there is some inevitable future warming, but it comes from future emissions that we cannot or will not avoid. In other words, political, socio-economic and technological inertia are the primary causes of future climate warming, rather than any properties of the physical climate system.

I’ve been collecting examples of different types of climate model that students can use in the classroom to explore different aspects of climate science and climate policy. In the long run, I’d like to use these to make the teaching of climate literacy much more hands-on and discovery-based. My goal is to foster more critical thinking, by having students analyze the kinds of questions people ask about climate, and figure out how to put together good answers using a combination of existing data, data analysis tools, simple computational models, and more sophisticated simulations. And of course, learn how to critique the answers based on the uncertainties in the lines of evidence they have used.

Anyway, as a start, here’s a collection of runnable and not-so-runnable models, some of which I’ve used in the classroom:

Simple Energy Balance Models (for exploring the basic physics)

Zero-dimensional Energy Balance model from Wolfram. Allows you to adjust one parameter, the greenhouse effect, and explore the resulting equilibrium global temperature. Also serves to show off Wolfram’s Computable Document Format (CDF), which might be a neat way to share simple models with students.

A simple spreadsheet zero-dimensional energy balance model from Climateprediction.net. I like the idea of getting the students to do this in spreadsheets, because most of them already understand spreadsheets. This one has a parameter for heat capacity, so you can see how long it takes to reach a new equilibrium temperature.

One-dimensional Energy Balance model from Shodor. Calculates the equilibrium temperature for each latitude zone on the planet, allowing you to specify cloud and ice albedo, solar constant, longwave radiation loss, and starting temperatures for each zone. Not very usable, but a good illustration of what a 1-dimensional model might do. (Note: Shodor also has a great ecosystem sim with rabbits and wolves, and a disease transmission sim).

A one-layer energy-balance model developed by Michael Mann at Penn State, for use in his course on global warming. Allows you to alter different feedback factors (albedo, clouds, ice, water vapour), to test their effect on temperature and climate sensitivity.
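All of these energy balance models are built around the same core balance: absorbed solar radiation equals emitted infrared. A minimal version fits in a few lines (a Python sketch with a single crude greenhouse factor g, in the spirit of the Wolfram demo above):

```python
# The core balance behind all these models: absorbed solar radiation
# equals emitted infrared, with one crude greenhouse factor g
# (g = 0 is a bare rock with no atmosphere).
S = 1361.0        # solar constant, W/m^2
albedo = 0.3      # fraction of sunlight reflected away
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(g):
    absorbed = S * (1.0 - albedo) / 4.0   # averaged over the whole sphere
    return (absorbed / ((1.0 - g) * sigma)) ** 0.25

print(round(equilibrium_temperature(0.0)))   # ~255 K, well below freezing
print(round(equilibrium_temperature(0.4)))   # close to the observed ~288 K
```

The gap between those two numbers is the natural greenhouse effect, and turning the g knob is exactly what the Wolfram demo lets students do interactively.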

General Circulation Models (for studying earth system interactions)

EdGCM – an educational version of the NASA GISS general circulation model (well, an older version of it). EdGCM provides a simplified user interface for setting up model runs, but allows for some fairly sophisticated experiments. You typically need to let the model run overnight for a century-long simulation.

Integrated Assessment Models (for policy analysis)

C-Learn, a simple policy analysis tool from Climate Interactive. Allows you to specify emissions trajectories for three groups of nations, and explore the impact on global temperature. This is a simplified version of the C-ROADS model, which is used to analyze proposals during international climate treaty negotiations.

The Climate Challenge: Our Choices, also from Sterman’s team at MIT. This one looks fancier, but gives you less control over the simulation – you can just pick one of three emissions paths: increasing, stabilized or reducing. On the other hand, it’s very effective at demonstrating the point about emissions vs. concentrations.

And while we’re on systems dynamics, I ought to mention toolkits for building your own systems dynamics models, such as Stella from ISEE Systems (here’s an example of it used to teach the global carbon cycle).

Other Related Models

A Kaya Identity Calculator, from David Archer at U Chicago. The Kaya identity is a way of expressing the interaction between the key drivers of carbon emissions: population growth, economic growth, energy efficiency, and the carbon intensity of our energy supply. Archer’s model allows you to play with these numbers.

An Orbital Forcing Calculator, also from David Archer. This allows you to calculate what effect changes in the earth’s orbit and the wobble on its axis have on the solar energy that the earth receives, in any year in the past or future.
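The Kaya identity itself is simple enough to sketch directly (a Python sketch with illustrative round numbers, not current statistics):

```python
# A minimal Kaya identity calculation, in the spirit of Archer's
# calculator: emissions = population x affluence x energy intensity
# x carbon intensity. The four factors below are illustrative round
# numbers, not real statistics.
population = 7.0e9        # people
gdp_per_capita = 1.0e4    # dollars per person per year
energy_intensity = 7.0    # MJ of primary energy per dollar of GDP
carbon_intensity = 0.017  # kgC emitted per MJ of primary energy

energy_use = population * gdp_per_capita * energy_intensity   # MJ per year
emissions_GtC = energy_use * carbon_intensity / 1e12          # GtC per year
print(round(emissions_GtC, 1))   # in the right ballpark for recent decades
```

The value of the identity is that it decomposes emissions into four knobs, so students can see that cutting emissions means changing at least one of them.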

This term, I’m running my first year seminar course, “Climate Change: Software Science and Society” again. The outline has changed a little since last year, but the overall goals of the course are the same: to take a small, cross-disciplinary group of first year undergrads through some of the key ideas in climate modeling.

As last year, we’re running a course blog, and the first assignment is to write a blog post for it. Please feel free to comment on the students’ posts, but remember to keep your comments constructive!

Oh I do hate seeing blog posts with titles like “Easterbrook’s Wrong (Again)“. Luckily, it’s not me they’re talking about. It’s some other dude who, as far as I know, is completely unrelated to me. And that’s a damn good thing, as this Don Easterbrook appears to be a serial liar. Apparently he’s an emeritus geology prof from some university in the US. And because he’s ready to stand up and spout scientific sounding nonsense about “global cooling”, he gets invited to talk to journalists all the time. And his misinformation then gets duly repeated on blog threads all over the internet, despite the efforts of a small group of bloggers trying to clean up the mess:

In a way, this is another instance of the kind of denial of service attack I talked about last year. One retired professor fakes a few graphs, and they spread so widely over the internet that many good honest science bloggers have to stop what they’re doing, research the fraud, and expose it. And still they can’t stop the nonsense from spreading (just google “Don Easterbrook” to see how widely he’s quoted, usually in glowing terms).

William Connolly has written a detailed critique of our paper “Engineering the Software for Understanding Climate Change”, which follows on from a very interesting discussion about “Amateurish Supercomputing Codes?” in his previous post. One of the issues raised in that discussion is the reward structure in scientific labs for software engineers versus scientists. The funding in such labs is pretty much all devoted to “doing science” which invariably means publishable climate science research. People who devote time and effort to improving the engineering of the model code might get a pat on the back, but inevitably it’s under-rewarded because it doesn’t lead directly to publishable science. The net result is that all the labs I’ve visited so far (UK Met Office, NCAR, MPI-M) have too few software engineers working on the model code.

Which brings up another point. Even if these labs decided to devote more budget to the software engineering effort (and it’s not clear how easy it would be to do this, without re-educating funding agencies), where will they recruit the necessary talent? They could try bringing in software professionals who don’t yet have the domain expertise in climate science, and see what happens. I can’t see this working out well on a large scale. The more I work with climate scientists, the more I appreciate how much domain expertise it takes to understand the science requirements, and to develop climate code. The potential culture clash is huge: software professionals (especially seasoned ones) tend to be very opinionated about “the right way to build software”, and insensitive to contextual factors that might make their previous experiences inapplicable. I envision lots of the requirements that scientists care about most (e.g. the scientific validity of the models) getting trampled on in the process of “fixing” the engineering processes. Right now the trade-off between getting the science right versus having beautifully engineered models is tipped firmly in favour of the former. Tipping it the other way might be a huge mistake for scientific progress, and very few people seem to understand how to get both right simultaneously.

The only realistic alternative is to invest in training scientists to become good software developers. Greg Wilson is pretty much the only person around who is covering this need, but his software carpentry course is desperately underfunded. We’re going to need a lot more like this to fix things…

Last week, I ran a workshop for high school kids from across Toronto on “What can computer models tell us about climate change?“. I already posted some of the material I used: the history of our knowledge about climate change. Jorge, Jon and Val ran another workshop after mine, entitled “Climate change and the call to action: How you can make a difference“. They have already blogged their reflections: See Jon’s summary of the workshop plan, and reflections on how to do it better next time, and the need for better metaphors. I think both workshops could have done with being longer, for more discussion and reflection (we were scheduled only 75 minutes for each). But I enjoyed both workshops a lot, as I find it very useful for my own thinking to consider how to talk about climate change with kids, in this case mainly from grade 10 (≈15 years old).

The main idea I wanted to get across in my workshop was the role of computer models: what they are, and how we can use them to test out hypotheses about how the climate works. I really wanted to do some live experiments, but of course, this is a little hard when a typical climate simulation run takes weeks of processing time on a supercomputer. There are some tools that high school kids could play with in the classroom, but none of them are particularly easy to use, and of course, they all sacrifice resolution for ability to run on a desktop machine. Here are some that I’ve played with:

EdGCM – This is the most powerful of the bunch. It’s a full General Circulation Model (GCM), based on one of NASA’s models, and it supports many different types of experiment. The license isn’t cheap (personally, I think it ought to be free and open source, but I guess they need a rich sponsor for that), but I’ve been playing with the free 30-day license. A full century of simulation tied up my laptop for 24 hours, but I kinda liked that, as it’s a bit like how you have to wait for results on a full scale model too (it even got hot, and I had to think about how to cool it, again just like a real supercomputer…). I do like the way that the documentation guides you through the process of creating an experiment, and the idea of then ‘publishing’ the results of your experiment to a community website.

JCM – This is (as far as I can tell) a box model that allows you to experiment with the outcomes of various emissions scenarios, based on the IPCC projections, which means it’s simple enough to give interactive outputs. It’s free and open source, but a little cumbersome to use – the interface doesn’t offer enough guidance for novice users. It might work well in a workshop, with lots of structured guidance for how to use it, but I’m not convinced such a simplistic model offers much value over just showing some of the IPCC graphs and talking about them.

Climate Interactive (and the C-Roads model). C-ROADS is also a box model, but with the emissions of different countries/regions separated out, to allow exploration of the choices in international climate negotiations. I’ve played a little with C-ROADS, and found it frustrating because it ignores all the physics, and after all, my main goal in playing with climate models with kids is to explore how climate processes work, rather than the much narrower task of analyzing policy choices. It also seems to be hard to tell the difference between various different policy choices – even when I try to run it with extreme choices (cease all emissions next year vs. business as usual), the outputs are all of a similar shape (“it gets steadily warmer”). This may well be the correct output, but the overall message is a little unfortunate: whatever policy path we choose, the results look pretty similar. Showing the results of different policies as a set of graphs showing the warming response doesn’t seem very insightful; it would be better to explore different regional impacts, but for that we’re back to needing a full GCM.

CCCSN – the Canadian Climate Change Scenarios Network. This isn’t a model at all, but rather a front end to the IPCC climate simulation dataset. The web tool allows you to get the results from a number of experiments that were run for the IPCC assessments, selecting which model you want, which scenario you want, which output variables you want (temperature, precipitation, etc), and allows you to extract just a particular region, or the full global data. I think this is more useful than C-ROADS, because once you download the data, you can graph it in various ways, and explore how different regions are affected.

Some online models collected by David Archer, which I haven’t played with much, but which include some box models, some 1-dimensional models, and the outputs of NCAR’s GCM (which I think is one of the datasets included in CCCSN). Not much explanation is provided here though – you have to know what you’re doing…

John Sterman’s Bathtub simulation. Again, a box model (actually, a stocks-and-flows dynamics model), but this one is intended more to educate people about basic systems dynamics principles than to explore policy choices. So I already like it better than C-ROADS, except that I think the user interface could do with a serious make-over, and there’s way too much explanatory text – there must be a way to do this with more hands-on exploration and less exposition. It also suffers from a problem similar to C-ROADS: it allows you to control emissions pathways, and explore the result on atmospheric concentrations and hence temperature. But the problem is, we can’t control emissions directly – we have to put in place a set of policies and deploy alternative energy technologies to indirectly affect emissions. So either we’d want to run the model backwards (to ask what emissions pathway we’d have to follow to keep below a specific temperature threshold), or we’d want as inputs the things we can affect – technology deployments, government investment, cap and trade policies, energy efficiency strategies, etc.
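The stock-and-flow idea behind these box models is simple enough to sketch in a few lines of code. The following Python sketch is purely illustrative – the uptake fraction, emissions rate, and conversion factor are assumed round numbers, not calibrated values – but it shows why the CO2 “stock” responds so sluggishly even when the emissions “inflow” is cut to zero:

```python
# Toy "bathtub" box model: atmospheric CO2 is a stock; emissions flow in,
# and natural uptake drains out. All numbers are illustrative assumptions.

def simulate(years, emissions_gtc, uptake_fraction=0.01,
             initial_ppm=390.0, ppm_per_gtc=0.47):
    """Step the CO2 stock forward one year at a time.

    emissions_gtc: function mapping year -> emissions in GtC/year.
    uptake_fraction: crude proxy for ocean/land uptake of the excess stock.
    """
    excess = initial_ppm - 280.0   # stock above the pre-industrial level
    trajectory = []
    for year in range(years):
        inflow = emissions_gtc(year) * ppm_per_gtc
        outflow = excess * uptake_fraction
        excess += inflow - outflow
        trajectory.append(280.0 + excess)
    return trajectory

# Constant emissions vs. an abrupt halt to all emissions after 10 years:
constant = simulate(100, lambda y: 10.0)
zeroed = simulate(100, lambda y: 10.0 if y < 10 else 0.0)
```

Even in the zero-emissions run, the stock drains only slowly: the concentration stays well above the pre-industrial level for the rest of the century. That’s the shape of behaviour the bathtub exercise is trying to convey.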

None of these support the full range of experiments I’d like to explore in a kids’ workshop, but I think EdGCM is an excellent start, and access to the IPCC datasets via the CCCSN site might be handy. But I don’t like the models that focus just on how different emissions pathways affect global temperature change, because I don’t think these offer any useful learning opportunities about the science and about how scientists work.

I guess headlines like “An error found in one paragraph of a 3000 page IPCC report; climate science unaffected” wouldn’t sell many newspapers. And so instead, the papers spin out the story that a few mistakes undermine the whole IPCC process. As if newspapers never ever make mistakes. Well, of course, scientists are supposed to be much more careful than sloppy journalists, so “shock horror, those clever scientists made a mistake. Now we can’t trust them” plays well to certain audiences.

And yet there are bound to be errors; the key question is whether any of them impact any important results in the field. The error with the Himalayan glaciers in the Working Group II report is interesting because Working Group I got it right. And the erroneous paragraph in WGII quite clearly contradicts itself. Stupid mistake, that should be pretty obvious to anyone reading that paragraph carefully. There’s obviously room for improvement in the editing and review process. But does this tell us anything useful about the overall quality of the review process?

There are errors in just about every book, newspaper, and blog post I’ve ever read. People make mistakes. Editorial processes catch many of them. Some get through. But few of these things have the kind of systematic review that the IPCC reports went through. Indeed, as large, detailed, technical artifacts, with extensive expert review, the IPCC reports are much less like normal books, and much more like large software systems. So, how many errors get through a typical review process for software? Is the IPCC doing better than this?

Even the best software testing and review practices in the world let errors through. Some examples (expressed in number of faults experienced in operation, per thousand lines of code):

Worst military systems: 55 faults/KLoC

Best military systems: 5 faults/KLoC

Agile software development (XP): 1.4 faults/KLoC

The Apache web server (open source): 0.5 faults/KLoC

NASA Space shuttle: 0.1 faults/KLoC

Because of the extensive review processes, the shuttle flight software is purported to be the most expensive in the world, in terms of dollars per line of code. Yet still about 1 error every ten thousand lines of code gets through the review and testing process. Thankfully, none of those errors has ever caused a serious accident. When I worked for NASA on the Shuttle software verification in the 1990s, they were still getting reports of software anomalies with every shuttle flight, and releasing a software update every 18 months (this, for an operational vehicle that had been flying for two decades, with only 500,000 lines of flight code!).

The IPCC reports consist of around 3000 pages, with approaching 100 lines of text per page. Let’s assume I can equate a line of text with a line of code (which seems reasonable, when you look at the information density of each line in the IPCC reports) – that would make them as complex as a 300,000 line software system. If the IPCC review process is as thorough as NASA’s, then we should still expect around 30 significant errors to have made it through the review process. We’ve heard of two recently – does this mean we have to endure another 28 stories, spread out over the next few months, as the drone army of denialists toils through trying to find more mistakes? Actually, it’s probably worse than that…
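For what it’s worth, the back-of-envelope calculation above is easy to reproduce; here it is spelled out in Python, using the fault densities quoted earlier:

```python
# Treat the IPCC reports as a ~300,000-line artifact and apply the
# published fault densities (faults per thousand lines) quoted above.
pages = 3000
lines_per_page = 100
total_kloc = pages * lines_per_page / 1000   # = 300 "KLoC" of text

fault_densities = {
    "worst military systems": 55,
    "best military systems": 5,
    "agile development (XP)": 1.4,
    "Apache web server": 0.5,
    "NASA Space Shuttle": 0.1,
}

expected_errors = {name: rate * total_kloc
                   for name, rate in fault_densities.items()}
# Even at Shuttle-grade review, roughly 30 errors would remain.
```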

The IPCC writing, editing and review processes are carried out entirely by unpaid volunteers. They don’t have automated testing and static analysis tools to help – human reviewers are the only kind of review available. So they’re bound to do much worse than NASA’s flight software. I would expect there to be 100s of errors in the reports, even with the best possible review processes in the world. Somebody point me to a technical review process anywhere that can do better than this, and I’ll eat my hat. Now, what was the point of all those newspaper stories again? Oh, yes, sensationalism sells.

When I was visiting MPI-M earlier this month, I blogged about the difficulty of documenting climate models. The problem is particularly pertinent to questions of model validity and reproducibility, because the code itself is the result of a series of methodological choices by the climate scientists, which are entrenched in their design choices, and eventually become inscrutable. And when the code gets old, we lose access to these decisions. I suggested we need a kind of literate programming, which sprinkles the code among the relevant human representations (typically bits of physics, formulas, numerical algorithms, published papers), so that the emphasis is on explaining what the code does, rather than preparing it for a compiler to digest.

The problem with literate programming (at least in the way it was conceived) is that it requires programmers to give up using the program code as their organising principle, and maybe to give up traditional programming languages altogether. But there’s a much simpler way to achieve the same effect. It’s to provide an organising structure for existing programming languages and tools, but which mixes in non-code objects in an intuitive way. Imagine you had an infinitely large sheet of paper, and could zoom in and out, and scroll in any direction. Your chunks of code are laid out on the paper, in a spatial arrangement that means something to you, such that the layout helps you navigate. Bits of documentation, published papers, design notes, data files, parameterization schemes, etc can be placed on the sheet, near to the code that they are relevant to. When you zoom in on a chunk of code, the sheet becomes a code editor; when you zoom in on a set of math formulae, it becomes a LaTeX editor, and when you zoom in on a document it becomes a word processor.

Well, Code Canvas, a tool under development in Rob DeLine’s group at Microsoft Research, does most of this already. The code is laid out as though it was one big UML diagram, but as you zoom in you move fluidly into a code editor. The whole thing appeals to me because I’m a spatial thinker. Traditional IDEs drive me crazy, because they separate the navigation views from the code, and force me to jump from one pane to another to navigate. In the process, they hide the inherent structure of a large code base, and constrain me to see only a small chunk at a time. Which means these tools create an artificial separation between higher level views (e.g. UML diagrams) and the code itself, sidelining the diagrammatic representations. I really like the idea of moving seamlessly back and forth between the big picture views and actual chunks of code.

Code Canvas is still an early prototype, and doesn’t yet have the ability to mix in other forms of documentation (e.g. LaTeX) on the sheet (or at least not in any demo Microsoft are willing to show off), but the potential is there. I’d like to explore how to take an idea like this and customize it for scientific code development, where there is less of a strict separation of code and data than in other forms of programming, and where the link to published papers and draft reports is important. The infinitely zoomable paper could provide an intuitive unifying tool to bring all these different types of object together in one place, to be managed as a set. And the use of spatial memory to help navigate will be helpful when the set of things gets big.

I’m also interested in exploring the idea of using this metaphor for activities that don’t involve coding – for example complex decision-support for sustainability, where you need to move between spreadsheets, graphs & charts, models runs, and so on. I would lay out the basic decision task as a graph on the sheet, with sources of evidence connecting into the decision steps where they are needed. The sources of evidence could be text, graphs, spreadsheet models, live datafeeds, etc. And as you zoom in over each type of object, the sheet turns into the appropriate editor. As you zoom out, you get to see how the sources of evidence contribute to the decision-making task. Hmmm. Need a name for this idea. How about DecisionCanvas?

Well, my intention to liveblog from interesting sessions is blown – the network connection in the meeting rooms is hopeless. One day, some conference will figure out how to provide reliable internet…

Yesterday I attended an interesting session in the afternoon on climate services. Much of the discussion was building on work done at the third World Climate Conference (WCC-3) in August, which set out to develop a framework for provision of climate services. These would play a role akin to local, regional and global weather forecasting services, but focussing on risk management and adaptation planning for the impacts of climate change. Most important is the emphasis on combining observation and monitoring services with research and modeling services (both of which already exist) with a new climate services information system (I assume this would be distributed across multiple agencies across the world) and system of user interfaces to deliver the information in forms needed for different audiences. Rasmus at RealClimate discusses some of the scientific challenges.

My concern in reading the outcomes of the WCC is that it’s all focussed on a one-way flow of information, with insufficient attention to understanding who the different users would be, and what they really need. I needn’t have worried – the AGU session demonstrated that there are plenty of people focussing on exactly this issue. I got the impression that there’s a massive international effort quietly putting in place the risk management and planning tools needed for us to deal with the impacts of a rapidly changing climate, but which is completely ignored by a media still obsessed with the “is it happening?” pseudo-debate. The extent of this planning for expected impacts would make a much more compelling media story, and one that matters, on a local scale, to everyone.

Some highlights from the session:

Mark Svoboda from the National Drought Mitigation Center at the University of Nebraska, talking about drought planning in the US. He pointed out that drought tends to get ignored compared to other kinds of natural disasters (tornados, floods, hurricanes), presumably because it doesn’t happen within a daily news cycle. However, drought dwarfs the damage costs in the US from all other kinds of natural disasters except hurricanes. One problem is that population growth has been highest in the regions most subject to drought, especially the southwest US. The NDMC monitoring program includes the only repository of drought impacts. Their US drought monitor has been very successful, but the next generation of tools needs better sources of data on droughts, so they are working on adding a drought reporter, doing science outreach, working with kids, etc. Even more important is improving the drought planning process, hence a series of workshops on drought management tools.

Tony Busalacchi from the Earth System Science Interdisciplinary Center at the University of Maryland. Through a series of workshops in the CIRUN project, they’ve identified the need for forecasting tools, especially around risks such as sea level rise, and above all the need for actionable information, which no current service provides. Policymakers need a climate information system on scales of seasons to decades, tailorable to particular regions, and with the ability to explore “what-if” questions. Building this requires coupling models that have not been used together before, and synthesizing new datasets.

Robert Webb from NOAA, in Boulder, on experimental climate information services to support risk management. The key to risk assessment is understanding that it spans multiple timescales. Users of such services do not distinguish between weather and climate – they need to know about extreme weather events, and they need to know how such risks change over time. Climate change matters because of the impacts. Presenting the basic science and predictions of temperature change is irrelevant to most people – it’s the impacts that matter (his key quote: “It’s the impacts, stupid!”). Examples: water – droughts and floods, changes in snowpack, river stream flow, fire outlooks, and planning issues (urban, agriculture, health). He’s been working with the Climate Change and Western Water Group (CCAWWG) to develop a strategy on water management. How do we get people to plan and adapt? The key is to get them to think in terms of scenarios rather than deterministic forecasts.

Guy Brasseur from the German Climate Services Center, in Hamburg. Germany’s adaptation strategy was developed by the German federal government, which appears to be way ahead of the US agencies in developing climate services. Guy emphasized the need for seamless prediction – a uniform ensemble system that builds from climate monitoring of the recent past and present, and projects forward into the future, at different regional scales and timescales. Guy called for an Apollo-sized program to develop the infrastructure for this.

Kristen Averyt from the University of Colorado, talking about her “climate services machine” (I need to get hold of the image for this – it was very nice). She’s been running workshops on Colorado-specific services, with breakout sessions focussed on the impacts and utility of climate information. She presented some evaluations of the success of these workshops, including a climate literacy test they have developed. For example, at one workshop the attendees got 63% of the answers correct at the beginning (and the wrong answers tended to cluster, indicating some important misperceptions). I need to get hold of this test – it sounds interesting. Kristen’s main point was that these workshops play an important role in reaching out to people of all ages, including kids, and getting them to understand how climate change will affect them.

Overall, the main message of this session was that while there have been lots of advances in our understanding of climate, these are still not being used for planning and decision-making.