
Tacit knowledge gets lost in translation with climate modeling

Reading this discussion, it's safe to say that any policy maker would come away with a pretty similar idea of what climate models can tell them, no matter which of us they talked to. I have a few quibbles that might be good for an after-work discussion at the pub, but they are small grievances in the greater scheme of things.

If we are all pretty much agreed about the conditional utility and limitations of climate models, why do the media still interpret climate model outputs as exact predictions? Why are large-scale projections discussed as though they were local forecasts? Or worse, why the (occasional) blind dismissal of anything bearing the taint of "modeling"? This problem is significantly more complex than a Bayesian analysis of the 22 contributing models to the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report.
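To give a sense of what such a Bayesian multi-model analysis involves, here is a minimal, purely illustrative sketch: each model in a small ensemble is weighted by how well its simulated historical trend matches an observed trend, and the weights combine the models' future projections. All numbers below are made up for illustration, and the five-model ensemble is a stand-in for the 22 AR4 models; this is not the IPCC's actual methodology.

```python
import numpy as np

# Illustrative Bayesian model averaging over a toy climate-model ensemble.
# All values are fabricated; a real analysis would use the AR4 archive.

# Each model's simulated historical warming trend (deg C per decade)
# and its projection of warming by 2100 (deg C).
hist_trends = np.array([0.15, 0.22, 0.18, 0.30, 0.12])
projections = np.array([2.1, 3.0, 2.5, 3.8, 1.9])

# Observed historical trend and its (assumed Gaussian) uncertainty.
obs_trend, obs_sigma = 0.19, 0.03

# Posterior weight for each model: Gaussian likelihood of the observed
# trend given the model's historical trend, with a flat prior over models.
log_like = -0.5 * ((hist_trends - obs_trend) / obs_sigma) ** 2
weights = np.exp(log_like - log_like.max())  # subtract max for stability
weights /= weights.sum()

# Weighted mean and spread of the ensemble's future projection.
mean = float(np.dot(weights, projections))
spread = float(np.sqrt(np.dot(weights, (projections - mean) ** 2)))
print(f"weighted projection: {mean:.2f} +/- {spread:.2f} deg C")
```

Even this toy version makes the point in the text: the statistical machinery is tractable, but deciding how such a weighted range should be communicated, and with what caveats, is the harder problem.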

I would argue that this problem is not fundamental to climate models, but is a symptom of something more general: how scientific information gets propagated beyond the academy. What we have discussed here can be broadly described as tacit knowledge--the everyday background assumptions that most practicing climate modelers share but that rarely get written down. It doesn't get into the technical literature because it's often assumed that readers know it already. It's not included in popular science summaries because it's too technical. It gets discussed over coffee, or in the lab, or in seminars, but that is a very limited audience. Unless policy makers or journalists specifically ask climate modelers about it, it's the kind of information that can easily slip through the cracks.

Shorn of this context, model results have an aura of exactitude that can be misleading. Reporting those results without the appropriate caveats can then provoke a backlash from those who know better, lending the whole field an aura of unreliability.

So, what should be done? Exercises like this discussion are useful, and should be referenced in the future. But there's really no substitute for engaging more directly with the people who need to know. This may not be as much fun as debugging a new routine (OK, that isn't much fun either), and it takes different skills than those usually found in a modeling center, i.e., communicating clearly at the appropriate technical level and with an appreciation of listeners' needs. Not all modelers need to have these skills, but we do need enough spokespersons and envoys to properly represent all of us.

Distribution of modeling information is expanding dramatically--especially through efforts by the IPCC and the freely available online archives of model outputs. The number of users and interested parties is therefore increasing all of the time. Consequently, modeling centers are spending greater amounts of time both packaging their output and explaining it.

In the same way that the public has become more adept at dealing with probabilistic weather forecasts, it will get more used to dealing with climate model outputs. For the time being, however, more hand-holding will be necessary.