The EU airspace shutdown provides yet another example of ignorance of the role of models in policy:

Computer Models Ruining EU?

Flawed computer models may have exaggerated the effects of an Icelandic volcano eruption that has grounded tens of thousands of flights, stranded hundreds of thousands of passengers and cost businesses hundreds of millions of euros. The computer models that guided decisions to impose a no-fly zone across most of Europe in recent days are based on incomplete science and limited data, according to European officials. As a result, they may have over-stated the risks to the public, needlessly grounding flights and damaging businesses. “It is a black box in certain areas,” Matthias Ruete, the EU’s director-general for mobility and transport, said on Monday, noting that many of the assumptions in the computer models were not backed by scientific evidence. European authorities were not sure about scientific questions, such as what concentration of ash was hazardous for jet engines, or at what rate ash fell from the sky, Mr. Ruete said. “It’s one of the elements where, as far as I know, we’re not quite clear about it,” he admitted. He also noted that early results of the 40-odd test flights conducted over the weekend by European airlines, such as KLM and Air France, suggested that the risk was less than the computer models had indicated. – Financial Times

Other venues picked up similar stories:

Also under scrutiny last night was the role played by an eight-man team at the Volcanic Ash Advisory Centre at Britain’s Meteorological Office. The European Commission said the unit started the chain of events that led to the unprecedented airspace shutdown based on a computer model rather than actual scientific data. – National Post

These reports miss a number of crucial points:

The decision to shut down the airspace was political, not scientific. Surely the Met Office team had input, but not the final word, and model results were only one input to the decision.

The distinction between computer models and “actual scientific data” is false. All measurements involve some kind of implicit model, required to interpret the result. The 40 test flights are meaningless without some statistical interpretation of sample size and so forth.
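
To see why the sample size matters, consider a back-of-the-envelope bound (my illustration, not a figure from the reports): if all 40 test flights land without incident, the classical "rule of three" still leaves a surprisingly large risk consistent with the data.

```python
# Illustrative only: suppose n independent test flights all land safely.
# With 0 events observed, the 95% upper confidence bound on the per-flight
# event probability p satisfies (1 - p)^n = 0.05, so p = 1 - 0.05**(1/n),
# roughly 3/n (the "rule of three").

n = 40  # test flights, all assumed incident-free
p_upper = 1 - 0.05 ** (1 / n)
print(f"95% upper bound on per-flight risk: {p_upper:.3f}")  # ~0.072
```

In other words, 40 clean flights cannot rule out a per-flight risk of about 7% at 95% confidence, which is orders of magnitude above any acceptable aviation safety threshold. The flights are informative, but only through a statistical model of what they can and cannot exclude.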

It’s not uncommon for models to demonstrate that data are wrong or misinterpreted.

The fact that not every relationship or parameter in a model can be backed up by a direct measurement does not mean that the model is unscientific.

Numerical measurements are not the only valid source of data; there are also laws of physics, and a subject matter expert’s guess is likely to be better than a politician’s.

Calibration of the aggregate result of a model provides indirect measurement of uncertain components.

Feedback structure may render some parameters insensitive and therefore unimportant.
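
A toy stock-and-flow model illustrates the point (my sketch, with made-up numbers, not anything from the ash dispersion model): in a first-order negative feedback loop, the steady state depends on the inflow and the loss rate, while the initial stock is rendered insensitive by the feedback.

```python
# Hypothetical first-order negative-feedback system:
#   d(stock)/dt = inflow - loss_rate * stock
# Equilibrium is inflow / loss_rate, regardless of the initial stock.

def simulate(stock0, inflow=10.0, loss_rate=0.5, dt=0.1, steps=2000):
    stock = stock0
    for _ in range(steps):
        stock += dt * (inflow - loss_rate * stock)  # Euler integration
    return stock

# Wildly different initial guesses converge to the same equilibrium,
# inflow / loss_rate = 20:
print(simulate(0.0), simulate(100.0))  # both ~20.0
```

So an "unmeasured" parameter like the initial stock can be unimportant to the behavior of interest, and sensitivity analysis, not a demand for a measurement of every input, is the right test.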

Good decisions sometimes lead to bad outcomes.

The reporters, and maybe also the director-general (covering his you-know-what), have neatly shifted blame, turning a problem in decision making under uncertainty into an anti-science witch hunt. What alternative to models do they suggest? Intuition? Prayer? Models are just a way of integrating knowledge in a formal, testable, shareable way. Sure, there are bad models, but unlike other bad ideas, their problems are at least easy to identify.

Nice post at the Marginal Damage link above. In particular, it makes two key points:

The airlines optimized their operations for short term performance, and didn’t insure themselves against significant service interruptions. A low-probability, high-consequence event has happened … and now they’re mad.

The private sector may be more keen to take risks than regulators or the public. For one thing, they’re fragmented – if there’s an accident, it might not be their plane. The returns of flying are local to a company, while the risk to the industry is more global.