
Whether environmental modellers are wrong

[This is a comment invited by Issues in Science and Technology as a reply to the article “When All Models Are Wrong” in their Winter 2014 issue. The article is not online there but has been archived by thefreelibrary.com. My comment will appear in the Spring 2014 issue.]

I was interested to read Saltelli and Funtowicz’s article “When All Models Are Wrong”1, not least because several people sent it to me due to its title and mention of my blog2. The article criticised complex computer models used for policy making – including environmental models – and presented a checklist of criteria for improving their development and use.

As a researcher in uncertainty quantification for environmental models, I heartily agree we should be accountable, transparent, and critical of our own results and those of others. Open access journals — particularly those accepting technical topics (e.g. Geoscientific Model Development) and replications (e.g. PLOS One) — would seem key, as would routine archiving of preprints (e.g. arXiv.org) and (ideally non-proprietary) code and datasets (e.g. FigShare.com). Yet academic promotion and funding criteria directly or indirectly penalise these activities, even though such activities would improve the robustness of scientific findings. I also enjoyed the term “lamp-posting”: examining only the parts of models we find easiest to see.

However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful. Even the retraction rate (the fraction of published papers retracted) is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny, e.g. RetractionWatch.com), or relevance (obsolete papers not retracted). It is not currently possible to reliably compare retraction notices across disciplines. But in one study of scientific bias, measured by the fraction of null results, Geosciences and Environment/Ecology were ranked second only to Space Science in their objectivity3. It is not clear we can assert there are “increasing problems with the reliability of scientific knowledge”.

There was also little acknowledgement of existing research on the question “Which of those uncertainties has the largest impact on the result?”: for example, the climate projections used in UK adaptation4. Much of this research goes beyond sensitivity analysis, part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify structural uncertainty, a modeller implicitly assumes that such errors can be tuned away. While this is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models5,6.
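To make that concrete, below is a minimal, purely illustrative Python sketch (mine, not the authors’) of the implausibility measure used in history matching6, in which the structural (model discrepancy) variance appears explicitly alongside observational uncertainty; dropping that term is precisely the implicit assumption that errors can be tuned away. All names and numbers are hypothetical.

```python
# Illustrative sketch only: hypothetical values, not from any real model.
import numpy as np

def implausibility(sim_output, observation, obs_var, discrepancy_var, emulator_var=0.0):
    """Standardised distance between model output and an observation.

    The discrepancy_var term represents structural (model) error; setting it
    to zero is the implicit "errors can be tuned away" assumption.
    """
    total_var = obs_var + discrepancy_var + emulator_var
    return np.abs(sim_output - observation) / np.sqrt(total_var)

# Three candidate parameter settings; those with implausibility above a
# conventional threshold (often 3) would be ruled out.
sim_outputs = np.array([14.2, 15.1, 17.8])   # model output at each setting
obs, obs_var, disc_var = 15.0, 0.2, 0.3      # made-up observation and variances
print(implausibility(sim_outputs, obs, obs_var, disc_var) > 3.0)
```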

The authors make strong statements about the political motivation of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists be limited by resources (computing, person, or project time) or, admittedly less satisfactorily, statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience modellers might already need tactful persuasion to detune carefully tuned models, and consequently increase uncertainty ranges; slinging accusations of political motivation would not help this process. Far better to argue the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlight where effort should be concentrated, we can be motivated by better model development. And by showing where we have been ‘surprised’ by too-small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.
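As a toy illustration of how sensitivity analysis highlights where effort should be concentrated, the sketch below (my own, not from the article) estimates first-order Sobol’ indices for a made-up three-parameter function standing in for an expensive simulator; in practice one would typically use an established package (e.g. SALib) and an emulator of the model rather than the model itself.

```python
# Toy variance-based sensitivity analysis: first-order Sobol' indices
# estimated with the Saltelli sampling scheme. The "model" is made up.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical simulator: parameters 0 and 1 drive most of the output
    # variance, parameter 2 contributes almost nothing.
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.01 * x[:, 2]

n, d = 10_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = toy_model(A), toy_model(B)
total_var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # resample only parameter i
    S_i = np.mean(fB * (toy_model(ABi) - fA)) / total_var   # first-order estimator
    print(f"parameter {i}: first-order index ~ {S_i:.2f}")
```

A large first-order index flags a parameter whose uncertainty dominates the output, and hence where observational or tuning effort is best spent; small indices flag parameters that can safely be fixed.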

With thanks to Richard Van Noorden, Ed Yong, Ivan Oransky and Tony O’Hagan.

 

[1] Saltelli and Funtowicz (2014): When All Models Are Wrong, Issues in Science and Technology, Winter 2014.

[2] “All Models Are Wrong”, now hosted by PLOS at https://blogs.plos.org/models.

[3] Fanelli (2010): “Positive” Results Increase Down the Hierarchy of the Sciences, PLoS ONE 5(4): e10068.

[4] “Contributions to uncertainty in the UKCP09 projections”, Appendix 2.4 in: Murphy et al. (2009): UK Climate Projections Science Report: Climate change projections. Met Office Hadley Centre, Exeter. Available from http://ukclimateprojections.defra.gov.uk

[5] Rougier et al. (2013): Second-Order Exchangeability Analysis for Multimodel Ensembles, J. Am. Stat. Assoc. 108(503), 852-863.

[6] Williamson et al. (2013): History matching for exploring and reducing climate model parameter space using observations and a large perturbed physics ensemble, Climate Dynamics 41(7-8), 1703-1729.
