I was interested to read Saltelli and Funtowicz’s article “When All Models Are Wrong”1, not least because several people sent it to me due to its title and mention of my blog2. The article criticised complex computer models used for policy making – including environmental models – and presented a checklist of criteria for improving their development and use.

As a researcher in uncertainty quantification for environmental models, I heartily agree we should be accountable, transparent, and critical of our own results and those of others. Open access journals — particularly those accepting technical topics (e.g. Geoscientific Model Development) and replications (e.g. PLOS One) — would seem key, as would routine archiving of preprints (e.g. arXiv.org) and of (ideally non-proprietary) code and datasets (e.g. FigShare.com). Yet academic promotion and funding structures directly or indirectly penalise these activities, even though the activities would improve the robustness of scientific findings. I also enjoyed the term “lamp-posting”: examining only the parts of models we find easiest to see.

However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful on its own, because the number of publications continues to rise too. Even the fraction of retraction notices is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny, e.g. RetractionWatch.com), or relevance (obsolete papers never being retracted). Nor is it currently possible to compare retraction notices reliably across disciplines. But in one study of scientific bias, measured by the fraction of null results, Geosciences and Environment/Ecology ranked second only to Space Science in their objectivity3. It is not clear we can assert there are “increasing problems with the reliability of scientific knowledge”.

There was also little acknowledgement of existing research on the question “Which of those uncertainties has the largest impact on the result?”: for example, the climate projections used for UK adaptation4. Much of this research goes beyond sensitivity analysis, one part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify this structural uncertainty, a modeller implicitly assumes that model errors can be tuned away. While that assumption is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models5,6.
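
For readers outside the field, a toy example may make “sensitivity analysis” concrete. The sketch below is my own illustration, not taken from the article or the studies cited: it estimates first-order Sobol indices (the share of output variance attributable to each input on its own) for the standard Ishigami test function, using a pick-freeze Monte Carlo estimator of the kind Saltelli himself helped popularise. In a real application the model would be an expensive simulator, usually replaced by a cheap statistical emulator.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    """Ishigami function: a standard sensitivity-analysis benchmark."""
    a, b = 7.0, 0.1
    return (np.sin(x[:, 0])
            + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
# Two independent input samples, each input uniform on (-pi, pi)
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
f_A, f_B = model(A), model(B)
var_y = np.var(np.concatenate([f_A, f_B]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # "pick-freeze": resample input i, freeze the rest
    # First-order Sobol index: fraction of output variance from input i alone
    S_i = np.mean(f_B * (model(AB) - f_A)) / var_y
    print(f"S_{i + 1} = {S_i:.2f}")  # expect roughly 0.31, 0.44, 0.00
```

Note that input 3 has a near-zero first-order index even though it matters through its interaction with input 1: exactly the kind of structure a partial, “lamp-posted” analysis would miss, and which total-order indices (not shown) are designed to expose.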

The authors make strong statements about the political motivations of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists simply be limited by resources (computing time, staffing, or project time) or, admittedly less satisfactorily, by statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience, modellers may already need tactful persuasion to detune carefully tuned models and thereby widen their uncertainty ranges; slinging accusations of political motivation would not help this process. Far better to argue the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlights where effort should be concentrated, we can be motivated by better model development. And by showing where we have been ‘surprised’ by too-small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.
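
To put a number on that “surprise”: if a quoted uncertainty range is systematically too narrow, nominal 95% intervals will be breached far more often than the advertised one time in twenty. A minimal sketch of my own, assuming Gaussian forecast errors and a stated standard deviation half the true one:

```python
import numpy as np

rng = np.random.default_rng(0)

true_sd = 1.0    # actual spread of forecast errors
stated_sd = 0.5  # overconfident: quoted uncertainty is half the real spread

# Simulated forecast errors (truth minus prediction)
errors = rng.normal(0.0, true_sd, size=100_000)

# Half-width of a nominal 95% interval under the stated uncertainty
half_width = 1.96 * stated_sd

coverage = np.mean(np.abs(errors) <= half_width)
print(f"nominal coverage 95%, actual coverage {coverage:.0%}")
# Prints roughly 67%: the "95%" range fails about one time in three.
```

Each such breach erodes trust in the next projection; honestly wide ranges, though less headline-friendly, last longer.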

About Tamsin Edwards

Tamsin Edwards, PhD, is a Lecturer in Environmental Sciences at The Open University. She uses computer models to study climate change, the impacts it has on sea level and the environment, and how confident we can be in our knowledge of the past and our predictions of the future.

