All early warning systems have one inherent problem in common: To be of any use, they must amplify faint, unintelligible signals that cannot be reliably interpreted by laypersons, and translate these into a decision-making basis capable of being communicated or even into specific instructions for action.

...There is no lack of details - what is lacking is the constructive and integrated view. That cannot be achieved by a monologue of alternately alarming and reassuring statements and appraisals. What is needed is the interdisciplinary and - in the age of globalisation - increasingly intercultural "polylogue".

That's from the reinsurance giant Swiss Re's high-level overview, "The Risk Landscape of the Future". In reinsurance as in governments and markets, we're seeing more sophisticated risk analysis - and more options to mitigate those risks, at all levels from the individual to the international.

We've covered Swiss Re before, in relation to "The Great Warming" and event-based risk analysis. Other publications in the "Risk Perception" series discuss nanotechnology, terrorism, climate change, and even obesity. Shorter reports give an insurer's perspective on a variety of issues, from liability for genetic engineering to tsunami insurance.

The modeling that goes into understanding risks is fascinating work, right at the cutting edge of science. Traditional actuarial science fits probability distributions to past data and estimated future trends. Today's modelers may also use simulations - building a miniature world that represents their phenomenon, running it many times under different hypotheses, and examining the resulting distribution of outcomes.
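As a rough illustration of that simulation approach - a minimal sketch with made-up parameters, not any insurer's actual model - here is a toy catastrophe model: each simulated year draws a Poisson number of events (via exponential inter-arrival times) with lognormal severities, and the same world is re-run under a hypothesis that event frequency has risen:

```python
import random
import statistics

def simulate_annual_loss(event_rate, sev_mu, sev_sigma, rng):
    """One simulated year: a Poisson event count generated via exponential
    inter-arrival times, with a lognormal severity drawn per event."""
    t, total = 0.0, 0.0
    while True:
        t += rng.expovariate(event_rate)
        if t > 1.0:
            break
        total += rng.lognormvariate(sev_mu, sev_sigma)
    return total

def run_hypothesis(event_rate, n_years=10_000, seed=0):
    """Re-run the miniature world many times under one frequency hypothesis."""
    rng = random.Random(seed)
    losses = [simulate_annual_loss(event_rate, 0.0, 1.0, rng)
              for _ in range(n_years)]
    losses.sort()
    return {
        "mean": statistics.fmean(losses),
        "p99": losses[int(0.99 * n_years)],  # crude 99th-percentile annual loss
    }

baseline = run_hypothesis(event_rate=2.0)
elevated = run_hypothesis(event_rate=3.0)  # hypothetical: frequency up 50%
print(baseline, elevated)
```

Comparing the mean and the 99th-percentile loss across the two runs is the kind of output that turns a hypothesis ("storms are getting more frequent") into a number a underwriter or policymaker can act on.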

Without these tools, we would have little basis for making quantitative predictions about, or adaptations to, complex future risks. And many of the issues we deal with here at Worldchanging can be framed as positive responses to risks the world is not (yet) paying sufficient attention to - ranging from obvious environmental dangers to more subtle computer-related risks.

But no matter how sophisticated the math or modeling, all these approaches are only as good as their assumptions. Do price fluctuations follow a lognormal distribution? Is global warming independent of solar radiation patterns? Could a tsunami happen in the Atlantic basin? A model missing crucial possibilities is far worse than no model at all.
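The lognormal question in particular can be made concrete. In this sketch - purely illustrative, with an assumed "true" world - fluctuations actually follow a heavy-tailed Student-t distribution, a modeler fits a normal distribution to the same data, and the fitted model badly understates how often extreme moves occur:

```python
import math
import random
import statistics

rng = random.Random(42)
DF = 3  # assumed "true" world: heavy-tailed Student-t with 3 degrees of freedom

def t_sample():
    """Draw a Student-t variate: normal divided by sqrt(chi-square / df)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(DF / 2, 2)  # chi-square with DF degrees of freedom
    return z / math.sqrt(chi2 / DF)

data = [t_sample() for _ in range(50_000)]

# The modeler assumes normality and fits mean and standard deviation.
mu, sigma = statistics.fmean(data), statistics.pstdev(data)

threshold = mu + 4 * sigma  # a "4-sigma" move under the fitted model
empirical = sum(1 for x in data if x > threshold) / len(data)
normal_tail = 0.5 * math.erfc(4 / math.sqrt(2))  # model's predicted tail prob.

print(f"model predicts {normal_tail:.2e}, data shows {empirical:.2e}")
```

The empirical frequency of "4-sigma" moves comes out orders of magnitude above what the normal model predicts - exactly the kind of missing-tail-risk failure the paragraph above warns about.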

As modeling tools grow easier to use and more powerful, a positive alternative to a world of blinkered ideological subcultures is "debating through models" - making our assumptions explicit, and collaboratively working out the consequences of potential policies. (Watch this meme.)

Who else's core business relies on their ability to accept exposure to a variety of risks, stretching many decades into the future? Governments, of course. For example, many national health systems are starting to act seriously on preventive measures. That's been made possible by, essentially, math - reliable accounting, statistics, and modeling that make the added health people will enjoy in the future as tangible as the current benefit of treating today's illnesses.

There's a good argument to be made that civilization advances by sharing more and more of the downside risks we face. Once it's somebody's job to pay up in case disaster strikes, they have to understand what could cause that disaster, which likely leads to insight into preventing it in the first place - and hopefully to institutional incentives to act on that insight.

As a complement to governments and insurance, Robert Shiller suggests an intriguing market-insurance hybrid in The New Financial Order. His "macro markets" would hedge against risks to currencies, national incomes, home equity, and salary crashes - the kind of risks most people are inadvertently exposed to just by ordinary living.

Of course, these measures are only as good as our collective data, modeling capabilities, and assumptions - a point delightfully made in Fooled by Randomness by Nassim Nicholas Taleb. Being wedded to a wonky model of how the world works is all too common, especially as important issues become so far removed from our everyday experience that we must choose which expert or authority figure to believe.

Progress is being made on many fronts, not least through open models from widely-respected "social consortia" like the IPCC. In research as in reinsurance, the best people use every tool and technique to know more - and to clearly know what they don't know.

There's a recent paper from the WHO on avian influenza and rumour surveillance. I can't find it right now, but it was interesting: even if more than half of the rumours turn out to be wrong, it's still worthwhile to check them.

In epidemiologic surveillance - and, I would guess, in many other fields - this is called "sensitivity" (detecting a faint signal) and "specificity" (confirming that the signal is really caused by what you're monitoring).

Most likely you need two "devices" working in sequence: a sensitive one first, then a specific one. No single system can do it all, I guess.
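Under the simplifying assumption that the two stages err independently - and with purely hypothetical accuracy numbers - the arithmetic of chaining a sensitive screen and a specific confirmation looks like this:

```python
def sensitivity(tp, fn):
    """Fraction of true events the system detects (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-events the system correctly ignores (true negative rate)."""
    return tn / (tn + fp)

def serial(se1, sp1, se2, sp2):
    """Two tests in sequence; an alarm is raised only if BOTH fire.
    Assumes the two tests err independently (a simplification)."""
    se = se1 * se2                   # an event must pass both tests to be caught
    sp = 1 - (1 - sp1) * (1 - sp2)   # a negative from either test clears it
    return se, sp

# Hypothetical numbers: a very sensitive screen, then a very specific check.
se, sp = serial(0.99, 0.70, 0.90, 0.99)
print(f"combined sensitivity {se:.3f}, combined specificity {sp:.3f}")
```

The combined system loses a little sensitivity (0.99 × 0.90) but gains a great deal of specificity - which is why the sensitive, noisy stage goes first and the specific, expensive one second.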