Weather and climate are different. Weather varies tremendously from day to day, week to week, and season to season. Climate, on the other hand, is the average weather over a period of years; it can be thought of as the boundary conditions on the variability of weather. We might get an extreme cold snap, or a heatwave, at a particular location, but our knowledge of the local climate tells us that these are unusual, temporary phenomena, and sooner or later things will return to normal. Forecasting the weather is therefore very different from forecasting changes in the climate. One is an initial value problem, and the other is a boundary value problem. Let me explain.

Good weather forecasts depend upon an accurate knowledge of the current state of the weather system. You gather as much data as you can about current temperatures, winds, clouds, etc., feed it all into a simulation model, and then run it forward to see what happens. This is hard because the weather is an incredibly complex system. The amount of information needed is huge: both the data and the models are incomplete and error-prone. Despite this, weather forecasting has come a long way over the past few decades. Through a daily process of generating forecasts, comparing them with what happened, and thinking about how to reduce errors, we now have incredibly accurate 1- and 3-day temperature forecasts. Accurate forecasts of rain, snow, and so on for a specific location are a little harder, because the rainfall may occur in a slightly different place (e.g., a few kilometers away) or at a slightly different time than the model forecasts, even if the overall amount of precipitation is right. Hence, daily forecasts give fairly precise temperatures, but put probabilistic values on things like rain (Probability of Precipitation, PoP), based on knowledge of the uncertainty factors in the forecast. The probabilities are known because we have a huge body of previous forecasts to compare with.
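The idea that the probabilities come from a verification archive can be sketched in a few lines: bin past forecasts by their stated PoP and count how often rain actually followed. (The data here is invented purely for illustration; real verification archives contain millions of forecast/outcome pairs.)

```python
from collections import defaultdict

# (forecast PoP, did it rain?) pairs from an imagined verification archive
history = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
           (0.3, False), (0.3, True), (0.3, False), (0.3, False)]

outcomes_by_pop = defaultdict(list)
for pop, rained in history:
    outcomes_by_pop[pop].append(rained)

# A well-calibrated forecaster's 70% PoP should verify as rain ~70% of the time
for pop, outcomes in sorted(outcomes_by_pop.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {pop:.0%} -> it rained {observed:.0%} of the time")
```

If the observed frequencies drift away from the stated probabilities, the forecasting centre recalibrates.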

The limit on useful weather forecasts seems to be about one week. There are inaccuracies and missing information in the inputs, and the models are only approximations of the real physical processes, so the whole process is error-prone. At first these errors tend to be localized, which means the short-term forecast (a few days) might be wrong in places, but is good enough in most of the region we’re interested in to be useful. But the longer we run the simulation, the more these errors multiply, until they dominate the computation. At this point, running the simulation for longer is useless. 1-day forecasts are much more accurate than 3-day forecasts, which are better than 5-day forecasts, and beyond that it’s not much better than guessing. However, steady improvements mean that 3-day forecasts are now as accurate as 2-day forecasts were a decade ago. Weather forecasting centres are very serious about reviewing the accuracy of their forecasts, and set themselves annual targets for accuracy improvements.

A number of things help in this process of steadily improving forecasting accuracy. Improvements to the models help, as we get better and better at simulating physical processes in the atmosphere and oceans. Advances in high performance computing help too – faster supercomputers mean we can run the models at a higher resolution, which means we get more detail about where exactly energy (heat) and mass (winds, waves) are moving. But all of these improvements are dwarfed by the improvements we get from better data gathering. If we had more accurate data on current conditions, and could get it into the models faster, we could get big improvements in the forecast quality. In other words, weather forecasting is an “initial value” problem. The biggest uncertainty is knowledge of the initial conditions.

One result of this is that weather forecasting centres (like the UK Met Office) can get an instant boost to forecasting accuracy whenever they upgrade to a faster supercomputer. This is because the weather forecast needs to be delivered to a customer (e.g. a newspaper or TV station) by a fixed deadline. If the models can be made to run faster, the start of the run can be delayed, giving the meteorologists more time to collect newer data on current conditions, and more time to process this data to correct for errors, and so on. For this reason, the national weather forecasting services around the world operate many of the world’s fastest supercomputers.

Hence weather forecasters are strongly biased towards data collection as the most important problem to tackle. They tend to regard computer models as useful, but of secondary importance to data gathering. Of course, I’m generalizing – developing the models is also a part of meteorology, and some meteorologists devote themselves to modeling, coming up with new numerical algorithms, faster implementations, and better ways of capturing the physics. It’s quite a specialized subfield.

Climate science has the opposite problem. Using pretty much the same model as for numerical weather prediction, climate scientists will run the model for years, decades or even centuries of simulation time. After the first few days of simulation, the similarity to any actual weather conditions disappears. But over the long term, day-to-day and season-to-season variability in the weather is constrained by the overall climate. We sometimes describe climate as “average weather over a long period”, but in reality it is the other way round – the climate constrains what kinds of weather we get.

For understanding climate, we no longer need to worry about the initial values; we have to worry about the boundary values. These are the conditions that constrain the climate over the long term: the amount of energy received from the sun, the amount of energy radiated back into space from the earth, the amount of energy absorbed or emitted by oceans and land surfaces, and so on. If we get these boundary conditions right, we can simulate the earth’s climate for centuries, no matter what the initial conditions are. The weather itself is a chaotic system, but it operates within boundaries that keep the long-term averages stable. Of course, a particularly weird choice of initial conditions will make the model behave strangely for a while, at the start of a simulation. But if the boundary conditions are right, eventually the simulation will settle down into a stable climate. (This effect is well known in chaos theory: the butterfly effect expresses the idea that the system is very sensitive to initial conditions, while attractors are what cause a chaotic system to exhibit a stable pattern over the long term.)

To handle this potential for initial instability, climate modellers create “spin-up” runs: pick some starting state, run the model for, say, 30 years of simulation until it has settled down to a stable climate, and then use the state at the end of the spin-up run as the starting point for science experiments. In other words, the starting state for a climate model doesn’t have to match real weather conditions at all; it just has to be a plausible state within the bounds of the particular climate conditions we’re simulating.
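The same Lorenz-63 toy system illustrates why spin-up works. This is only an analogy, not a climate model: two wildly different starting states are integrated forward, the spin-up portion is discarded, and the long-term statistics of what remains come out close to each other, because both trajectories have settled onto the same attractor.

```python
import numpy as np

def lorenz_run(state, n_steps, dt=0.01):
    """Integrate the Lorenz-63 system with forward Euler; return the trajectory."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        traj[i] = state
    return traj

spin_up, science = 2000, 20000       # steps to discard, steps to keep

long_term_means = []
for start in ([1.0, 1.0, 1.0], [-8.0, 7.0, 27.0]):    # wildly different starts
    traj = lorenz_run(np.array(start), spin_up + science)
    long_term_means.append(traj[spin_up:, 2].mean())  # a climate-like statistic

print(long_term_means)   # close, despite the very different starting states
```

The individual trajectories never resemble each other (that's the initial value problem), but the statistics do (that's the boundary value problem).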

To explore the role of these boundary values on climate, we need to know whether a particular combination of boundary conditions keeps the climate stable, or tends to change it. Conditions that tend to change it are known as forcings. But the impact of these forcings can be complicated to assess because of feedbacks: responses to the forcings that then tend to amplify or diminish the change. For example, increasing the input of solar energy to the earth would be a forcing. If this then led to more evaporation from the oceans, causing increased cloud cover, this could be a feedback, because clouds have a number of effects: they reflect more sunlight back into space (because they are whiter than the land and ocean surfaces they cover), and they trap more of the surface heat (because water vapour is a strong greenhouse gas). The first of these is a negative feedback (it reduces the surface warming from increased solar input) and the second is a positive feedback (it increases the surface warming by trapping heat). To determine the overall effect, we need to set the boundary conditions to match what we know from observational data (e.g. detailed measurements of solar input, measurements of greenhouse gases, etc.). Then we run the model and see what happens.
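The arithmetic of forcings and feedbacks can be sketched with a zero-dimensional energy balance calculation. The numbers below are illustrative assumptions, not measured values (though 3.7 W/m² is a commonly quoted forcing for doubled CO2):

```python
PLANCK = 3.2     # W/m^2 of extra outgoing radiation per K of warming (assumed)
FORCING = 3.7    # W/m^2, a commonly quoted forcing for doubled CO2

def equilibrium_warming(forcing, feedbacks=()):
    """Equilibrium warming dT solving: forcing = lambda_net * dT."""
    lambda_net = PLANCK - sum(feedbacks)  # positive feedbacks weaken the restoring response
    return forcing / lambda_net

baseline = equilibrium_warming(FORCING)                     # no feedbacks: ~1.2 K
amplified = equilibrium_warming(FORCING, feedbacks=[1.8])   # net positive feedback: more warming
damped = equilibrium_warming(FORCING, feedbacks=[-0.5])     # net negative feedback: less warming
print(baseline, amplified, damped)
```

The hard part, of course, is that in the real climate system the feedback strengths are not handed to us; they emerge from the interacting physics the model simulates.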

Observational data is again important, but this time for making sure we get the boundary values right, rather than the initial values. This means we need different kinds of data too – in particular, longer-term trends rather than instantaneous snapshots. But this time, errors in the data are dwarfed by errors in the model. If the algorithms are off by even a tiny amount, the simulation will drift over a long climate run until it no longer resembles the earth’s actual climate. For example, a tiny error in calculating where the mass of air leaving one grid square goes could mean we lose a tiny bit of mass on each time step. For a weather forecast, the error is so small we can ignore it. But over a century-long climate run, we might end up with no atmosphere left! So a basic test for climate models is that they conserve mass and energy over each timestep.
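A minimal sketch of such a conservation test, using a toy one-dimensional advection scheme on a periodic ring of grid cells (nothing like a real dynamical core, but the check has the same shape): because the scheme is written in flux form, whatever mass leaves one cell enters its neighbour, and the total is verified at every step.

```python
import numpy as np

n, c = 100, 0.4                      # grid cells; Courant number (must be <= 1)
q = np.zeros(n)
q[40:60] = 1.0                       # initial blob of "air mass"

total0 = q.sum()
for _ in range(1000):
    flux = c * q                     # upwind flux leaving each cell (wind > 0)
    q = q - flux + np.roll(flux, 1)  # flux form: what leaves a cell enters its neighbour
    assert abs(q.sum() - total0) < 1e-9 * total0   # the conservation check

print("mass conserved over 1000 steps")
```

Had we instead computed the update in a non-conservative form, tiny per-step losses would pass unnoticed in a week-long forecast but compound ruinously over a century.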

Climate models have also improved in accuracy steadily over the last few decades. We can now use the known forcings over the last century to obtain a simulation that tracks the temperature record amazingly well. These simulations demonstrate the point nicely. They don’t correspond to any actual weather, but show patterns in both small and large scale weather systems that mimic what the planet’s weather systems actually do over the year (look at August – see the daily bursts of rainfall in the Amazon, the gulf stream sending rain to the UK all summer long, and the cyclones forming off the coast of Japan by the middle of the month). And these patterns aren’t programmed into the model – it is all driven by sets of equations derived from the basic physics. This isn’t a weather forecast, because on any given day, the actual weather won’t look anything like this. But it is an accurate simulation of typical weather over time (i.e. climate). And, as was the case with weather forecasts, some bits are better than others – for example the Indian monsoons tend to be less well-captured than the North Atlantic Oscillation.

At first sight, numerical weather prediction and climate models look very similar. They model the same phenomena (e.g. how energy moves around the planet via airflows in the atmosphere and currents in the ocean), using the same computational techniques (e.g., three dimensional models of fluid flow on a rotating sphere). And quite often they use the same program code. But the problems are completely different: one is an initial value problem, and one is a boundary value problem.

Which also partly explains why a small minority of (mostly older, mostly male) meteorologists end up being climate change denialists. They fail to understand the difference between the two problems, and think that climate scientists are misusing the models. They know that the initial value problem puts serious limits on our ability to predict the weather, and assume the same limit must prevent the models being used for studying climate. Their experience tells them that weaknesses in our ability to get detailed, accurate, and up-to-date data about current conditions are the limiting factor for weather forecasting, and they assume this limitation must be true of climate simulations too.

Ultimately, such people tend to suffer from “senior scientist” syndrome: a lifetime of immersion in their field gives them tremendous expertise in that field, which in turn causes them to over-estimate how well their expertise transfers to a related field. They can become so heavily invested in a particular scientific paradigm that they fail to understand that a different approach is needed for different problem types. This isn’t the same as the Dunning-Kruger effect, because the people I’m talking about aren’t incompetent. So perhaps we need a new name. I’m going to call it the Dyson-effect, after one of its worst sufferers.

I should clarify that I’m certainly not stating that meteorologists in general suffer from this problem (the vast majority quite clearly don’t), nor am I claiming this is the only reason why a meteorologist might be skeptical of climate research. Nor am I claiming that any specific meteorologists (or physicists such as Dyson) don’t understand the difference between initial value and boundary value problems. However, I do think that some scientists’ ideological beliefs tend to bias them to be dismissive of climate science because they don’t like the societal implications, and the Dyson-effect disinclines them from finding out what climate science actually does.

I am, however, arguing that if more people understood this distinction between the two types of problem, we could get past silly soundbites about “we can’t even forecast the weather…” and “climate models are garbage in garbage out”, and have a serious conversation about how climate science works.


5 Comments

Wikipedia starts its description of Freeman Dyson’s long career with the following paragraph.

Although Dyson has won numerous scientific awards, he has never won a Nobel Prize, which has led Nobel physics laureate Steven Weinberg to state that the Nobel committee has “fleeced” Dyson. Dyson has said that “I think it’s almost true without exception if you want to win a Nobel Prize, you should have a long attention span, get hold of some deep and important problem and stay with it for 10 years. That wasn’t my style.”

So IMHO his style is not a good example of a person “so heavily invested in a particular scientific paradigm that they fail to understand that a different approach is needed for different problem types.”

But using the terminology anyway, my experience has been orthogonal to yours. People suffering from the Dyson-effect don’t appreciate that what may be a very difficult problem or concept in one field of study may actually be an easy problem or concept in another. There are numerous, often seemingly bizarre, examples. Such as the fact that finding the common modes of failure in a nuclear power plant is the same problem as finding the repeated phrases in a novel. So Google actually knows something about finding faults in nuclear power plants and phrases in novels, by knowing how to do internet searches.

Second comment. IMHO, an educated, unbiased person with no particular knowledge of the weather/climate PDEs would likely assume that both weather and climate are chaotic on all time scales. That a weather/climate model is neither an initial value nor a boundary value problem — but both. And that no level of precision of either the initial or boundary conditions would change the chaotic nature of weather or climate.

Therefore, I think the issue is to clearly explain why it is that even though the weather is chaotic, its statistics (the climate) are not. I have tried discussing this with fellow geeks, but have had little success. IMHO, it is harder than it looks. And more my fault due to lack of knowledge/skill than their fault by being biased.

Chaos is an interesting beast. While it’s clearly observed that weather is chaotic, it is far from obvious that climate is. One thing you might point to, George, is the fact that the orbits of planets are chaotic. Nevertheless, the earth has remained near 1 AU from the sun. Chaos can be ‘bounded’ — while you’re still out of luck in trying to predict the exact orbital parameters for the earth 10 million years from now (or into the past) you can be confident that they’ll be quite close to your expectations.
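A toy way to see this bounded-but-chaotic behaviour (a sketch with the logistic map, nothing to do with real orbital dynamics): at r = 4 the map is chaotic, so two almost identical starting points diverge to order one, yet no iterate ever escapes the interval [0, 1].

```python
def logistic(x, r=4.0):
    """One step of the logistic map: chaotic at r = 4, yet confined to [0, 1]."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-9        # two almost identical starting points
max_sep = 0.0
for _ in range(200):
    x, y = logistic(x), logistic(y)
    assert 0.0 <= x <= 1.0    # chaotic, but it never leaves the interval
    max_sep = max(max_sep, abs(x - y))

print(max_sep)                # the billionth-sized difference has grown large
```

Sensitive dependence on initial conditions and long-term boundedness coexist perfectly well.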

Dyson shares, I think, a similar issue in his approach to Richard Lindzen. Namely, in doing the work that they’re professionally best known for, they found ways to apply a simple analysis to what others thought was a complex problem. If you can manage to do this, you get a fair amount of fame. The error is to think that you can always find a simple solution to a complex problem. Some problems really are complex, and it is not a matter of being clever.

Radiative transfer, for instance, is unavoidably complex. You really do have to find the absorption characteristics of each gas, and you do have to muck around with doing the computations line by line (for each of a zillion lines) — there’s no clever way to avoid this. We can simplify and approximate, and do. But we know that we lose important things as we do so, and the simpler we get, the worse the results.

I think there’s a much simpler contribution to the observation of a group of mostly older meteorologists being in denial. If you go back to when they were in school, the orthodoxy was that CO2 wasn’t much of a player. They’re merely holding on to their early training. I find it amusing, or sad, that younger folks who are in denial turn to these people as if they were ‘mavericks’. The people who rebelled against orthodoxy were Manabe, Wetherald, Hansen, Schneider, Washington, and so on.

The reason I would probably not use the example of planetary orbits is because of the Interplanetary Transport Network. It seems that once I get in the vicinity of a Lagrange point and on the network, I can go to just about anywhere in (or even out of) the solar system for very little energy. The very definition of an “unbounded” trajectory.

It is my understanding that not only are the planetary orbits chaotic, they are non-ergodic too. So orbital statistics are often, but not always (!?), divergent. (You might be interested in the comments at the end of this blog post.)

Because, as you point out, the earth has long remained at about 1 AU from the sun. A good point. So I would like to be able to show, so to speak, that our particular climate is quite obviously “far” from one of earth’s climate “Lagrange points”. That is, to be able to show by some intuitive argument that a small disturbance (trace gas such as CO2 doubling, etc.) will only produce a “bounded” and predictable result in our climate’s trajectory. But I have no clue how to do that.

One thing I’d point to, in terms of climate’s degree of chaos, is the seasonal cycle. Weather-style chaos can fairly easily produce a mid-afternoon temperature that is below the late night/very early morning temperature – even though ordinarily the mid-afternoon is the high, and very early morning is the low for the day. I’ll put that up as a major chaos-induced divergence.

Yet, even though we’re confident of weather’s chaos, we are still also confident that summer will be warmer than winter. Although we’re sure that weather still happens, climate is either not chaotic in that sense, or is so seldom chaotic in that sense (of reversing a seasonal cycle) that we can generally ignore it.

One ‘place’ in the climate system that might be amenable to a more ‘unbounded’ trajectory is the thermohaline circulation. But … if that really is the thing at hand for the Younger Dryas cooling, we’re talking about a chaotic jump that occurred only once in the last 100,000 years. The natural system doesn’t seem to spend much time close to that ‘Lagrange point’, or, if it does, it’s a very small target.

On the other hand, that’s looking at the natural system. I’m less confident about the current system, strongly forced as it is by human activity. We haven’t had 387 (and rising) ppm CO2 in the atmosphere for millions of years, nor a deforested eastern North America, nor … quite a few things that the now 7 billion of us have done to the climate system.

Digressing some: I’ve put up a couple notes on theory of climate recently. Related enough to the topic at hand to mention, not so much as to light up a neon sign or even put the direct link here.

Dyson shares, I think, a similar issue in his approach to Richard Lindzen.

You are right about that, but you misunderstand their criticism. Their fundamental criticism is about model validation. It doesn’t matter if your model solves an IVP or a BVP, at some point you need to ‘close the loop’ between model predictions and observations of reality.

Although we’re sure that weather still happens, climate is either not chaotic in that sense, or is so seldom chaotic in that sense (of reversing a seasonal cycle) that we can generally ignore it.

I think that is probably an open research question. What attractor are we actually on?