Disease models need room for randomness, paper urges

Speed read

Leaving space for uncertainty led to a closer match with Ebola’s progress

Forecasting is needed to guide spending on treatments and vaccinations

But uncertainty risks deterring policymakers from strong, early action

Models used to manage disease outbreaks such as the ongoing Ebola epidemic must make greater allowances for uncertainty, according to a paper published today by the Royal Society in the United Kingdom.

Using stochastic models to correct for some uncertainties, the researchers were able to improve existing models of the Ebola outbreak in West Africa in 2014. Their work reduced biases in the data, and accounted for ambiguity and vagueness in predictions of the outbreak’s progress, the paper says.

“We will be better off with forecasts that are both more accurate and provide reliable statements of uncertainty.”

Ben Bolker, McMaster University

“Attention to uncertainty helps one prepare rationally for a range of likely scenarios,” says author Aaron A. King, who researches evolutionary biology and mathematics at the University of Michigan, United States. “If followed, our proposals will help to reduce the chance that we make large mistakes.”

Around the world, health organisations use statistical models to predict the spread of disease, with the results guiding spending decisions on treatments, vaccinations and even hospital beds. The paper, published in Proceedings B, says the ever-higher expectations placed on models increase the importance of being able to accurately quantify data and deal with associated uncertainty.

One model used to track Ebola, called SEIR, divides a population into compartments: people who are susceptible to the disease, those who are exposed but not yet infectious, those who are infectious and those who have recovered. The paper describes such models as "deterministic" because, for a given set of parameters, they produce a single fixed trajectory; treating uncertain quantities as fixed in this way can easily lead to large errors when the models are fitted to raw data.
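A deterministic SEIR model of the kind described above can be written as a simple system of flows between compartments. The sketch below is purely illustrative: the transition rates (beta, sigma, gamma) are hypothetical values, not parameters fitted to the 2014 outbreak.

```python
def seir_deterministic(s, e, i, r, beta, sigma, gamma, dt, steps):
    """Integrate a basic SEIR model with Euler steps.

    Compartments are fractions of the population:
    s = susceptible, e = exposed (not yet infectious),
    i = infectious, r = recovered.
    """
    for _ in range(steps):
        new_exposed = beta * s * i * dt   # S -> E: new infections
        new_infectious = sigma * e * dt   # E -> I: incubation ends
        new_recovered = gamma * i * dt    # I -> R: recovery or removal
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r

# Illustrative run: 1% initially infectious, hypothetical rates.
s, e, i, r = seir_deterministic(0.99, 0.0, 0.01, 0.0,
                                beta=0.4, sigma=0.2, gamma=0.15,
                                dt=0.1, steps=1000)
```

Because the model is deterministic, rerunning it with the same inputs always yields exactly the same epidemic curve, which is the property the paper's authors criticise.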

The authors applied stochastic tools, which leave room for uncertainties, to the SEIR model of the Ebola epidemic in Guinea, Liberia and Sierra Leone. They found that these models more reliably reflected the actual progression of the disease and its associated uncertainties. While imperfect, they helped the researchers better understand the gaps in their predictions.

[Figure: Data on the Ebola outbreak in Liberia and Sierra Leone show the unpredictability of disease spread during a pandemic. Credit: King et al, Proceedings of the Royal Society B]
The authors urge epidemic researchers to make their models more reliable by leaving more room for random events and predicting different scenarios to reflect uncertainty around disease progression. They found that, despite being more advanced, the stochastic models could quickly incorporate and apply existing data.
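A stochastic SEIR model of the general kind the authors advocate can be sketched by replacing the fixed flows with random draws, so that each run produces a different plausible trajectory. This is a minimal illustration, not the paper's actual method; the population size, rates and seeds are all hypothetical.

```python
import random

def seir_stochastic(s, e, i, r, beta, sigma, gamma, dt, steps, rng):
    """One stochastic realisation of an SEIR epidemic.

    Compartments are integer counts; each individual transitions
    between compartments with a per-step probability, so repeated
    runs trace out a range of possible outcomes.
    """
    n = s + e + i + r
    for _ in range(steps):
        # Probability a susceptible escapes infection by all i infectious people.
        p_infect = 1.0 - (1.0 - beta * dt / n) ** i
        new_e = sum(rng.random() < p_infect for _ in range(s))
        new_i = sum(rng.random() < sigma * dt for _ in range(e))
        new_r = sum(rng.random() < gamma * dt for _ in range(i))
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
    return s, e, i, r

# Several realisations with different seeds give a spread of outcomes,
# which is the "room for random events" the paper calls for.
finals = [seir_stochastic(990, 0, 10, 0, beta=0.4, sigma=0.2,
                          gamma=0.15, dt=0.1, steps=500,
                          rng=random.Random(seed))
          for seed in range(5)]
```

Summarising many such realisations (for example, reporting a range of final epidemic sizes rather than one number) is one simple way to present forecasts with an honest statement of uncertainty.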

But Ben Bolker, a mathematician, statistician and biologist at McMaster University, Canada, says uncertainty can deter policymakers from strong, early action.

“If modellers had expressed great uncertainty in their [Ebola] forecasts, the response might have been that: ‘We didn’t know enough to make it worth acting’,” he says. “That said, I believe we will be better off with forecasts that are both more accurate and provide reliable statements of uncertainty.”

To address concerns about how well models reflect reality, the paper's authors say disease models should always be compared over time with an outbreak's actual course. "We are troubled that screening for lack of model fit is not a completely standard part of modelling protocol," they conclude.