The paper “Computational Models that Matter During a Global Pandemic Outbreak: A Call to Action” by Squazzoni et al. (2020) is a valuable contribution to the ongoing self-reflection in the social simulation community regarding the role of ABM in the broader social-scientific enterprise. In this paper the authors try to assess the potential capacity of ABM to provide policy makers with a tool allowing them to predict the evolution of the pandemic and the effects of alternative policy responses. Their conclusions suggest a role for computational modelling during the pandemic, but also have implications regarding the position of ABM within the scientific and policy arenas, and its added value relative to other methodologies of scientific inquiry.

We agree with the authors that ABM has an important (and urgent) role to play to help policy makers to take more informed decisions, provided that the models are based on reliable and robust theories of human behaviour and social interaction. However, following in the footsteps of Joshua Epstein (2008), we claim that the importance and relevance of ABM goes beyond the capacity of the models to make point predictions (i.e. in the form of ‘There will be X infections/deaths in Y days time’). We propose that the ability of ABM to develop, inform, and test relevant theory is of particular relevance during this global crisis.

This does not mean that additional data allowing for the models’ calibration and validation are not important, as they can certainly help reduce the uncertainty associated with the models’ outputs, but in our view they are not essential to what agent-based models have to offer. With that in mind, the lack of these data should not prevent the ABM community from participating in the mass mobilization of the scientific community, which is working at unprecedented speed to develop models to inform the vital policy decisions being taken during this pandemic.

As we argue in a recent position paper (Silverman et al. 2020), it is precisely when we have limited data, or no data at all, that simulations provide greater value than traditional methodologies like statistical inference; indeed, the less data we have, the more important the role that agent-based (and other computational) simulations have to play. Computational models provide a way to say something about the evolution of complex systems by delimiting the set of possible outcomes through the constraints imposed by the theoretical framework encoded in the model. When we find ourselves in new situations such as the Covid-19 pandemic, where the data (i.e., our past experience) cannot give us any clue regarding the future evolution of the system, theories become the only tool we have to make educated guesses about what could (and could not) possibly happen. Models of complex systems typically have hundreds, if not thousands, of parameters, many of which have unknown values, and some of which have values we cannot know. If we waited for the data needed to make point predictions, we would never have a say in the policy arena; and if such data were available, other methodologies would probably serve the purpose better than computational models. Delimiting and quantifying the uncertainty associated with future scenarios in the face of limited data is where computational models can make a vital contribution, as they can give policy-makers useful information for risk management.

By no means are we saying that the development and effective deployment of computational models is without challenges. But we claim that the main challenge lies in the identification and inclusion of sound behavioural theories, as the outputs we get will depend upon the reliability of our models’ theoretical input. Identifying such theories is a significant challenge, requiring theoretical contributions from a number of different fields, ranging from epidemiology and urban studies to sociology and economics.

Further, putting scholars from those disciplines into the same room will not be sufficient; we must create a multidisciplinary community of people sharing the same conceptual framework, an endeavour that takes a lot of dedication, perseverance and, crucially, time. The lack of such multidisciplinary research groups strongly limits the ABM community’s capacity to develop an effective computational model of the pandemic, and we hope that at least this crisis will prove that developing such a community is necessary to improve our capacity for a timely response to the next one.

In relation to this challenge, we are aiming to develop and support a global community of agent-based modellers focused on population health concerns, via the PHASE Network project funded by the UK Prevention Research Partnership. We urge readers to join the network via our website at https://phasenetwork.org/, and help us build a multidisciplinary health modelling community that can contribute to global efforts in improving health both during and after the Covid-19 pandemic.

We must also remember that the current crisis is very unlikely to be over quickly, and its longer-term effects on society will be substantial. At the time of writing more than 80 separate groups and institutions are working to develop a vaccine for the coronavirus, but even with such concerted efforts there are no guarantees that one will be found. As Kissler et al. (2020) have shown, even if the virus appears to abate, further waves of infection could arise years afterwards. Because of the resources and time it takes to develop theoretically sound computational models, in our view this methodology is better suited to address these longer-term questions of how society can reorganize itself to increase resilience against future pandemics – and here the ability of computational models to implement and test behavioural theories is of paramount importance. The questions that must be asked in the years to come are numerous and profound: How can the world of work change to be more robust to future crises and global shut-downs? Can welfare policies like universal basic income help prevent widespread economic devastation in future crises? How must our health and care systems evolve to better protect the most vulnerable in society?

We propose that computational models can make a particularly valuable contribution in this area. At the present time there is ample evidence of the disastrous effects of delayed or insufficient policy responses to a pandemic. Economic projections already suggest we are due to enter a post-pandemic collapse to rival the Great Depression. We can, and should, begin to develop theories and models about how we may adjust society for the post-Covid world. Models could be valuable tools for testing and developing ambitious socio-economic policy ideas in silico, in order to prepare for this new reality.

To conclude, in principle we share with the authors of the paper the belief that computational models have an important role to play in informing policy makers during crises (such as pandemics). However, we wish to place the emphasis on the need for sound and robust theoretical frameworks ready to be included in these models, rather than on the existence and availability of data. In practice, the lack of such frameworks is the more critical obstacle to ensuring that the computational modelling community can make a useful contribution during this pandemic.

Abstract

We respond to the recent JASSS article on COVID-19 and computational modelling. We disagree with the authors on one major point and note the lack of discussion of a second one. We believe that COVID-19 cannot be predicted numerically, and attempting to make decisions based on such predictions will cost human lives. Furthermore, we note that the original article only briefly comments on uncertainty. We urge those attempting to model COVID-19 for decision support to acknowledge the deep uncertainties surrounding the pandemic, and to employ Decision Making under Deep Uncertainty methods such as exploratory modelling, global sensitivity analysis, and robust decision-making in their analysis to account for these uncertainties.

Introduction

We read the recent article in the Journal of Artificial Societies and Social Simulation on predictive COVID-19 modelling (Squazzoni et al. 2020) with great interest. We agree with the authors on many general points, such as the need for rigorous and transparent modelling and documentation. However, we were dismayed that the authors focused solely on how to make predictive simulation models of COVID-19 without first discussing whether making such models is appropriate under the current circumstances. We believe this question is of greater importance, and that the answer will likely disappoint many in the community. We also note that the original piece does not engage substantively with methods of modelling and model analysis specifically designed for making time-critical decisions under uncertainty.

We respond to the call issued by the Review of Artificial Societies and Social Simulation for responses and opinions on predictive modelling of COVID-19. In doing so, we go above and beyond the recent RofASSS contribution by de Matos Fernandes & Keijzer (2020)—rather than saying that definite “predictions” should be replaced by probabilistic “expectations”, we contend that no probabilities whatsoever should be applied when modelling systems as uncertain as a global pandemic. This is presented in the first section. In the second section, we discuss how those with legitimate need for predictive epidemic modelling should approach their task, and which tools might be beneficial in the current context. In the last section, we summarize our opinions and issue our own challenges to the community.

To Model or Not to Model COVID-19, That Is the Question

The recent call attempts to lay out a path for using simulation modelling to forecast the COVID-19 epidemic. However, there is no critical reflection on the question of whether modelling is the appropriate tool for this, under the current circumstances. The authors argue that with sufficient methodological rigour, high-quality data and interdisciplinary collaboration, complex outcomes (such as the COVID-19 epidemic) can be predicted well and quickly enough to provide actionable decision support.

Computational modelling is difficult in the best of times. Even models with seemingly simple structure can have emergent behaviour rendering them perfectly random (Wolfram 1983) or Turing complete (Cook 2004). Attempting to draw any kind of conclusions from a simulation model, especially in the life-and-death context of pandemic decision making, must be done carefully and with respect for uncertainty. If, for whatever reason, this cannot be done, then modelling is not the right tool to answer the question at hand (Thompson & Smith 2019). The numerical nature of models is seductive, but must be employed wisely to avoid “useless arithmetic” (Pilkey-Jarvis & Pilkey 2008) or statistical fallacies (Benessia et al. 2016).

Trying to skilfully predict how the COVID-19 outbreak will evolve regionally or globally is a fool’s errand. Epistemic uncertainties about key parameters and processes describing the disease abound. Human behaviour is changing in response to the outbreak. Research and development burgeon in many sciences with presently unknowable results. Anyone claiming to know where the world will be in even a few weeks is at best delusional. Uncertainty is aggravated by the problem of equifinality (Oreskes et al. 1994). For any simulation model of COVID-19, there will be a set of model parametrizations that has a similar quality of fit with the available data. Much of this is acknowledged by Squazzoni et al. (2020), yet inexplicably they still call for developing probabilistic forecasts of the outbreak using empirically validated models. We instead contend that “about these matters, there is no scientific basis on which to form any calculable probability” (Keynes 1937), and that validation should be based on usefulness in aiding time-urgent decision-making, rather than predictive accuracy (Pielke 2003). However, the capacity for such policy-oriented modelling must be built between pandemics, not during them (Rivers et al. 2019).
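The equifinality problem can be made concrete with a toy example of our own construction (illustrative numbers only, not taken from any of the cited models): two SIR parametrizations with the same early exponential growth rate fit early case counts almost identically, yet imply very different epidemics.

```python
def sir_epidemic(beta, gamma, days, n=1_000_000, i0=10):
    """Discrete-time SIR model; returns the daily infectious counts."""
    s, i, r = n - i0, i0, 0
    trajectory = []
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        trajectory.append(i)
    return trajectory

# Two parametrizations with the same early growth rate (beta - gamma = 0.3)
# but very different basic reproduction numbers (R0 = beta / gamma).
a = sir_epidemic(beta=0.5, gamma=0.2, days=200)   # R0 = 2.5
b = sir_epidemic(beta=0.4, gamma=0.1, days=200)   # R0 = 4.0

# In the first weeks the two runs are nearly indistinguishable...
early_gap = max(abs(x - y) / y for x, y in zip(a[:20], b[:20]))
# ...yet their epidemic peaks differ substantially.
peak_ratio = max(b) / max(a)
```

Early case data alone cannot distinguish the two parametrizations, yet acting on the wrong one would badly misjudge peak healthcare demand.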

This call to abstain from predicting COVID-19 does not imply that the broader community should refrain from modelling completely. The illustrative power of simple models has been amply demonstrated in various media outlets. We do urge modellers not to frame their work as predictive (e.g. “How Pandemics Can End”, rather than “How COVID-19 Will End”), and to use watermarks where possible to indicate that the shown work is not predictive. There is also ample opportunity to use simulation modelling to solve ancillary problems. For example, established transport and logistics models could be adapted to ensure supply of critical healthcare equipment is timely and efficient. Similarly, agri-food models could explore how to secure food production and distribution under labour shortages. These can be vital, though less sensational, contributions of simulation modelling to the ongoing crisis.

Deep uncertainty arises when experts do not know, or cannot agree on, the appropriate model of the system, the probability distributions over its key inputs, or how to value the possible outcomes. All three conditions are present in the case of the COVID-19 pandemic. To give a brief example of each, we know very little about asymptomatic infections, whether a vaccine will ever become available, and whether the socio-psychological and economic impacts of a “flattened curve” future are bearable (and by whom). The field of Decision Making under Deep Uncertainty has been working on problems of a similar nature for many years already, and has developed a variety of tools to analyse such problems (Marchau et al. 2019). These methods may be beneficial for designing COVID-19 policies with simulation models—if, as discussed previously, this is appropriate. In the following, we present three such methods and their potential value for COVID-19 decision support: exploratory modelling, global sensitivity analysis, and robust decision-making.

Exploratory modelling (Bankes 1993) is a conceptual approach to using simulation models for policy analysis. It emerged in response to the question of how models that cannot be empirically validated can still be used to inform planning and decision-making (Hodges 1991, Hodges & Dewar 1992). Instead of consolidating increasing amounts of knowledge into “the” model of a system, exploratory modelling advocates using wide uncertainty ranges for unknown parameters to generate a large ensemble of plausible futures, with no predictive or probabilistic power attached or implied a priori (Shortridge & Zaitchik 2018). This ensemble may represent a variety of assumptions, theories, and system structures. It could even be generated using a multitude of models (Page 2018; Smaldino 2017) and metrics (Manheim 2018). By reasoning across such an ensemble, insights agnostic to specific assumptions may be reached, sidestepping a priori biases that are inherent in only examining a simple set of scenarios, as do the COVID-19 policy models observed by the authors. Reasoning across such limited sets obscures policy-relevant futures which emerge as hybrids of pre-specified positive and negative narratives (Lamontagne et al. 2018). In the context of the COVID-19 pandemic, exploratory modelling could be used to contrast a variety of assumptions about disease transmission mechanisms (e.g., the role of schools, children, or asymptomatic cases in the speed of the outbreak), reinfection potential, or adherence to social distancing norms. Many ESSA members are already familiar with such methods—NetLogo’s BehaviorSpace function is a prime example. The Exploratory Modelling & Analysis Workbench (Kwakkel 2017) provides a similar, platform-agnostic functionality by means of a Python interface.
We encourage all modellers to embrace such tools, and to be honest about which parameters and structural assumptions are uncertain, how uncertain they are, and how this affects the inferences that can and cannot be made based on the results from the model.
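The core of the idea can be sketched in a few lines of plain Python (the EMA Workbench provides this properly and at scale; the toy model and all parameter ranges below are our own illustrative assumptions, not calibrated values):

```python
import random

def sir_peak(beta, gamma, adherence, days=365, n=1_000_000, i0=10):
    """Peak infectious count of a discrete-time SIR run; `adherence`
    scales transmission down to mimic social-distancing compliance."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * (1 - adherence) * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

random.seed(42)
# Wide, probability-free ranges for the unknowns: each draw is one
# plausible future, with no likelihood attached or implied.
ensemble = [
    sir_peak(beta=random.uniform(0.2, 0.8),
             gamma=random.uniform(0.05, 0.25),
             adherence=random.uniform(0.0, 0.6))
    for _ in range(500)
]

# Reason across the whole ensemble, not about "the" forecast.
best, worst = min(ensemble), max(ensemble)
```

The analyst's question then becomes which conclusions hold across the whole range between `best` and `worst`, rather than what "the" model predicts.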

Global sensitivity analysis (Saltelli 2004) is a method for studying both the individual importance of uncertain parameters and their interactions, as reflected in the outputs of a simulation model. Many simulation modellers are already familiar with local sensitivity analysis, where parameters are varied one at a time to ascertain their individual effect on model output. This is insufficient for studying parameter interactions in non-linear systems (Saltelli et al. 2019; ten Broeke et al. 2016). In global sensitivity analysis, combinations of parameters are varied and studied simultaneously, illuminating their joint or interaction effects. This is critical for the rigorous study of complex system models, where parameters may have unexpected, non-linear interactions. In the context of the COVID-19 epidemic, we have seen at least two public health agencies perform local sensitivity analysis over small parameter ranges, which may blind decision makers to worst-case futures (Siegenfeld & Bar-Yam 2020). Global sensitivity analysis might reveal how different assumptions for e.g. duration of Intensive Care (IC) and age-related case severity may interact to create a “perfect storm” of IC need. A collection of global sensitivity analysis methods has been encoded for Python in the SALib package (Herman & Usher 2018), and how to use these with NetLogo is illustrated in Jaxa-Rozen & Kwakkel (2018).
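The point about interactions can be shown with a deliberately simple sketch (a toy model with made-up numbers, not output from any agency's analysis; SALib computes proper variance-based Sobol indices for real models):

```python
def icu_demand(stay_days, severity, admissions_per_day=100):
    """Toy model: steady-state intensive-care beds needed, via
    Little's law (beds = arrival rate into IC * mean length of stay)."""
    return admissions_per_day * severity * stay_days

# Baseline assumptions and plausible ranges (illustrative numbers only).
base = dict(stay_days=8, severity=0.05)
lo = dict(stay_days=4, severity=0.02)
hi = dict(stay_days=16, severity=0.12)

# One-at-a-time ("local") analysis: vary each parameter alone.
oat_worst = max(
    icu_demand(hi["stay_days"], base["severity"]),
    icu_demand(base["stay_days"], hi["severity"]),
)

# Global analysis: vary the parameters jointly.
joint_worst = max(
    icu_demand(d, s)
    for d in (lo["stay_days"], base["stay_days"], hi["stay_days"])
    for s in (lo["severity"], base["severity"], hi["severity"])
)
```

Varying one parameter at a time finds a worst case of 96 beds; varying both jointly reveals the "perfect storm" of 192 beds, twice as bad, which a local analysis never sees.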

Robust Decision Making (RDM) (Lempert et al. 2006) is a general analytic method for designing policies which are robust across uncertainties—they perform well regardless of which future actually materializes. Policies are designed by iteratively stress-testing them across ensembles of plausible futures representing different assumptions, theories, and input parameter combinations. This represents a departure from established, probabilistic risk management approaches, which are inappropriate for fat-tailed processes such as pandemics (Norman et al. 2020). More recently, RDM has been extended to Dynamic Adaptive Policy Pathways (DAPP) (Kwakkel et al. 2015) by incorporating adaptive policies conditioned on specific triggers or signposts identified in exploratory modeling runs. In the context of the COVID-19 epidemic, DAPP might be used to design policies which can adapt as the situation develops (Hamarat et al. 2012)—possibly representing a transparent and verifiable approach to implementing the “hammer and dance” epidemic suppression method which has been widely discussed in popular media. Thinking in terms of pathways conditional on how the outbreak evolves is also a more realistic way of preparing for the dance: the virus, not any human plan, determines the timeline. All we can do is specify the conditions under which certain types of action will be taken.
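A minimal sketch of the minimax-regret logic often used in RDM follows; the policies, futures, and harm scores are all hypothetical, chosen only to show the mechanics:

```python
# Candidate policies scored across an ensemble of plausible futures
# (all numbers hypothetical). Each cell is the harm of a policy
# should that future materialize.
outcomes = {
    #                futures:  mild   resurgent  prolonged
    "early lockdown":         [30,    40,        70],
    "late lockdown":          [20,    90,        95],
    "adaptive triggers":      [25,    45,        60],
}

def minimax_regret(outcomes):
    """Pick the policy whose worst-case regret across futures is
    smallest. Regret = how much worse a policy does than the best
    policy would have done, had we known which future was coming."""
    n_futures = len(next(iter(outcomes.values())))
    best_per_future = [min(row[f] for row in outcomes.values())
                       for f in range(n_futures)]
    worst_regret = {
        policy: max(row[f] - best_per_future[f] for f in range(n_futures))
        for policy, row in outcomes.items()
    }
    return min(worst_regret, key=worst_regret.get)

robust_choice = minimax_regret(outcomes)
```

Note that "late lockdown" is best in the mild future but disastrous elsewhere; the robust choice is the one that is never far from optimal, whichever future arrives.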

Conclusions: Please Don’t. If You Must, Use Deep Uncertainty Methods.

We have raised two points of importance which are not discussed in a recent article on COVID-19 predictive modelling in JASSS. In particular, we have proposed that the question of whether such models should be created must precede any discussion of how to do so. We found that complex outcomes such as epidemics cannot reliably be predicted using simulation models, as there are numerous uncertainties that significantly affect possible future system states. However, models may still be useful in times of crisis, if created and used appropriately. Furthermore, we have noted that there exists an entire field of study focusing on Decision Making under Deep Uncertainty, and that model analysis methods for situations like this already exist. We have briefly highlighted three methods—exploratory modelling, global sensitivity analysis, and robust decision-making—and given examples for how they might be used in the present context.

Stemming from these two points, we issue our own challenges to the ESSA modelling community and the field of systems simulation in general:

COVID-19 prediction distancing challenge: Do not attempt to predict the COVID-19 epidemic.

Models are pivotal to battle the current COVID-19 crisis. In their call to action, Squazzoni et al. (2020) convincingly put forward how social simulation researchers could and should respond in the short run by posing three challenges for the community, among which is a COVID-19 prediction challenge. Although Squazzoni et al. (2020) stress the importance of transparent communication of model assumptions and conditions, we question the liberal use of the word ‘prediction’ for the outcomes of the broad arsenal of models used by our and other modelling communities to mitigate the COVID-19 crisis. We provide four key arguments for using expectations derived from scenarios when explaining our models to a wider, possibly non-academic audience.

The current COVID-19 crisis necessitates that we implement life-changing policies that, to a large extent, build upon predictions from complex, quickly adapted, and sometimes poorly understood models. Examples of models spurring the news to produce catchphrase headlines are abundant (e.g. the Imperial College model, AceMod (the Australian Census-based Epidemic Model), IndiaSIM, and IHME). And even though most of these models will be useful to assess the comparative effectiveness of interventions in our aim to ‘flatten the curve’, the predictions that disseminate to news media are those of total cases or the timing of the inflection point.

The current focus on predictive epidemiological and behavioural models brings back an important discussion about prediction in social systems. “[T]here is a lot of pressure for social scientists to predict” (Edmonds, Polhill & Hales, 2019), and we might add ‘especially nowadays’. But forecasting in human systems is often tricky (Hofman, Sharma & Watts, 2017). Approaches that take well-understood theories and simple mechanisms often fail to grasp the complexity of social systems, yet models that rely on complex, supervised machine-learning approaches may offer misleading levels of confidence (as was elegantly shown recently by Salganik et al., 2020). COVID-19 models appear to be no exception: a recent review concluded that “[…] their performance estimates are likely to be optimistic and misleading” (Wynants et al., 2020, p. 9). Squazzoni et al. (2020: paragraph 3.3) describe these pitfalls too. In the crisis at hand, it may even be counter-productive to rely on complex models that combine well-understood mechanisms with many uncertain parameters (Elsenbroich & Badham, 2020).

Considering the level of confidence we can have in predictive models in general, we believe there is an issue with the way predictions are communicated by the community. Scientists often use ‘prediction’ to refer to some outcome of a (statistical) model where they ‘predict’ aspects of the data that are already known, but momentarily set aside. Edmonds et al. (2019: paragraph 2.4) state that “[b]y ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model”. Predictive accuracy, in this case, can then be computed later on, by comparing the prediction to the truth. Scientists know that when talking about predictions of their models, they do not claim to generalize to situations outside the narrow scope of their study sample or their artificial society. We are not predicting the future, and would not claim we could. However, this is wildly different from how ‘prediction’ is commonly understood: as an estimate of something unknown in the future. Now that our models quickly disseminate to the general public, we need to be careful with the way we talk about their outcomes.

Predictions in the COVID-19 crisis will remain imperfect. In the current virus outbreak, society cannot afford to wait for models of interventions to be falsified against empirical data. As the virus continues to spread rapidly, our only option is to rely on models as a basis for policy, ceteris paribus. And it is precisely here – at ‘ceteris paribus’ – where the term ‘prediction’ misses the mark. All things will not be equal tomorrow, the next day, or the day after that (Van Bavel et al. [2020] note numerous topics that affect managing the COVID-19 pandemic and its impact on society). Policies around the globe are constantly being tweaked, and people’s behaviour changes dramatically as a consequence (Google, 2020). Relying on predictions too much may give a false sense of security.

We propose to avoid using the word ‘prediction’ too much and talk about scenarios or expectations instead where possible. We identify four reasons why you should avoid talking about prediction right now:

Not everyone is acquainted with noise and emergence. Computational Social Scientists generally understand the effects of noise in social systems (Squazzoni et al., 2020: paragraph 1.8). Small behavioural irregularities can be reinforced in complex systems fostering unexpected outcomes. Yet, scientists not acquainted with studying complex social systems may be unfamiliar with the principles we have internalized by now, and put over-confidence in the median outputs of volatile models that enter the scientific sphere as predictions.

Predictions do not convey uncertainty. The general public is usually unacquainted with esoteric academic concepts. For instance, a flatten-the-curve figure is generally built on a mean or median approximation, often neglecting the variability across different scenarios. Yet there are numerous other possible outcomes, building on different parameter values. We fear that when we state a prediction to a non-specialist public, they will expect that outcome to occur for certain. If we forecast a sunny day, but there is rain, people are upset. Talking about scenarios, expectations, and mechanisms may prevent confusion and opposition when the forecast does not come to pass.

It’s a model, not a reality. The previous argument feeds into the third notion: Be honest about what you model. A model is a model. Even the most richly calibrated model is a model. That is not to say that such models are not informative (we reiterate: models are not a shot in the dark). Still, richly calibrated models based on poor data may be more misleading than less calibrated models (Elsenbroich & Badham, 2020). Empirically calibrated models may provide more confidence at face value, but it lies in the nature of complex systems that small measurement errors in the input data may lead to big deviations in outputs. Models present a scenario for our theoretical reasoning with a given set of parameter values. We can update a model with empirical data to increase reliability but it remains a scenario about a future state given an (often expansive) set of assumptions (recently beautifully visualized by Koerth, Bronner, & Mithani, 2020).

Stop predicting, start communicating. Communication is pivotal during a crisis. An abundance of research shows that communicating clearly and honestly is best practice during a crisis and generally reassures the public (e.g., Seeger, 2006). Squazzoni et al. (2020) call for transparent communication by stating that “[t]he limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed”. We are united in our aim to avert the COVID-19 crisis but should be careful that overconfidence does not erode society’s trust in science. Stating unequivocally that we hope – based on expectations – to avert a crisis by implementing some policy does not preclude altering our course of action when an updated scenario about the future may require us to do so. Modellers should communicate clearly to policy-makers and the general public that this is the role of computational models that are being updated daily.

Squazzoni et al. (2020) set out the agenda for our community in the coming months and it is an important one. Let’s hope that the expectations from the scenarios in our well-informed models will not fall on deaf ears.

It is natural to want to help in a crisis (Squazzoni et al. 2020), but it is important to do something that is actually useful rather than just ‘adding to the noise’. Usefully modelling disease spread within complex societies is not easy to do – which essentially means there are two options:

Model it in a fairly abstract manner to explore ideas and mechanisms, but without the empirical grounding and validation needed to reliably support policy making.

Model it in an empirically testable manner with a view to answering some specific questions and possibly inform policy in a useful manner.

Which route one takes depends on the modelling purpose one has in mind (Edmonds et al. 2019). Both routes are legitimate as long as one is clear about what each can and cannot do. The dangers come when there is confusion – taking the first route whilst giving policy actors the impression one is taking the second risks deceiving people and giving false confidence (Edmonds & Adoha 2019; Elsenbroich & Badham 2020). Here I discuss only the second, empirically ambitious route.

Some of the questions that policy-makers might want to ask include: what might happen if we close the urban parks, allow children of a specific range of ages to go to school one day a week, cancel 75% of the intercity trains, allow people to visit beauty spots or sick relatives in hospital, or test people as they recover and give them a certificate allowing them to return to work?

To understand what might happen in these scenarios would require an agent-based model in which agents make the kind of mundane, everyday decisions of where to go and whom to meet, such that the patterns and outputs of the model are consistent with known data (possibly following the ‘Pattern-Oriented Modelling’ of Grimm & Railsback 2012). Such a model is currently lacking, and building one would require:

A long-term, iterative development (Bithell 2018), with many cycles of model development followed by empirical comparison and data collection. This means that this kind of model might be more useful for the next epidemic rather than the current one.

A collective approach rather than one based on individual modellers. No single person can understand all of a very complex model – there are bound to be small errors, and programmed mechanisms will subtly interact with others. As Siebers & Venkatesan (2020) pointed out, this means collaborating with people from other disciplines (which always takes time to make work), but it also means an open approach where lots of modellers routinely inspect, replicate, pull apart, critique and play with other modellers’ work – without anyone getting upset or feeling criticised. This involves an institutional and normative embedding of good modelling practice (as discussed in Squazzoni et al. 2020) but also requires a change in attitude – from individual to collective achievement.

Both are necessary if we are to build the modelling infrastructure that may allow us to model policy options for the next epidemic. We will need to start now if we are to be ready because it will not be easy.
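The kind of model described above – agents making mundane daily decisions whose co-location drives transmission – could start from a skeleton like the following. This is our own sketch; the places, rates, and population size are purely illustrative, and a real model would need empirically grounded routines and contact structures.

```python
import random

def run_abm(n_agents=500, days=30, p_transmit=0.01, seed=1):
    """Skeleton agent-based model: each day, agents choose a place
    from a daily routine; infection may pass between an infectious
    and a susceptible agent who chose the same place. All numbers
    are illustrative, not calibrated."""
    rng = random.Random(seed)
    places = ["home", "work", "school", "shop", "park"]
    infected = [False] * n_agents
    for k in rng.sample(range(n_agents), 5):     # seed infections
        infected[k] = True
    daily_new = []           # an output pattern to hold against data
    for _ in range(days):
        where = [rng.choice(places) for _ in range(n_agents)]
        new_cases = []
        for i in range(n_agents):
            if infected[i]:
                continue
            # infectious agents at the same place today
            contacts = sum(1 for j in range(n_agents)
                           if infected[j] and where[j] == where[i])
            if rng.random() < 1 - (1 - p_transmit) ** contacts:
                new_cases.append(i)
        for i in new_cases:  # apply infections after the day ends
            infected[i] = True
        daily_new.append(len(new_cases))
    return sum(infected) / n_agents, daily_new

attack_rate, epidemic_curve = run_abm()
```

In a pattern-oriented workflow, outputs such as `epidemic_curve` and the final attack rate are the quantities one would iteratively compare against empirical data, refining agents' decision rules until multiple observed patterns are reproduced at once.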

Understanding a situation is the precondition for making good decisions. In the extraordinary current situation of a global pandemic, the lack of consensus about a good decision path is evident in the variety of government measures in different countries, in analyses of decisions made, and in debates on how the future will look. What is also clear is how little we understand the situation and the impact of policy choices. We are faced with the complexity of social systems, our ability to only ever partially understand them, and the political pressure to make decisions on partial information.

The JASSS call to arms (Squazzoni et al. 2020) points out the necessity for the ABM community to produce relevant models for this kind of emergency situation. Whilst we wholly agree with the sentiment that ABM can contribute to the debate and to decision making, we would also like to point out some of the potential pitfalls inherent in the misapplication and misinterpretation of ABM.

Small change, big difference: Given the complexity of the real world, some aspects will be better understood than others. Trying to produce one very large model encompassing several different aspects might be counter-productive, as we would mix well-understood aspects with highly hypothetical knowledge. It might be better to have different, smaller models (of the epidemic, the economy, human behaviour, etc.), each of which can be taken with its own level of validation and veracity, and be developed by modellers with subject-matter understanding, theoretical knowledge and familiarity with the relevant data.

Carving up complex systems: If separate models are developed, then we are necessarily making decisions about the boundaries of our models. For a complex system any carving up can separate interactions that are important, for example the way in which fear of the epidemic can drive protective behaviour thereby reducing contacts and limiting the spread. While it is tempting to think that a “bigger model”, a more encompassing one, is necessarily a better carving up of the system because it eliminates these boundaries, in fact it simply moves them inside the model and hides them.

Policy decisions are moral decisions: The decision about the right course to take is a decision for the policy maker, made with all the competing interests and interdependencies of different aspects of the situation in mind. Scientists are there to provide the best information for understanding a situation, and models can be used to understand the consequences of different courses of action and the uncertainties associated with them. Models can be used to inform policy decisions, but they must not obscure the fact that a moral choice has to be made.

Delaying a decision is making a decision to do nothing: Like any other policy option, a decision to maintain the status quo while gathering further information has its own consequences. The Call to Action (paragraph 1.6) refers to public pressure for immediate responses, but this underplays the pressure arising from other sources. It is important to recognise the logical fallacy: “We must do something. This is something. Therefore we must do this.” However, if there are options available that are clearly better than doing nothing, then it is equally illogical to do nothing.

Instead of trying to compete with existing epidemiological models, ABM could focus on the things it is really good at:

Understanding uncertainty in complex systems resulting from heterogeneity, social influence, and feedback. For the case at hand this means not building another model of epidemic spread (there are excellent SEIR models doing that), but exploring how heterogeneity in the infected population (such as in contact patterns or personal behaviour in response to infection) can influence the spread. Other possibilities include social effects, such as how fear might spread and influence behaviours like panic buying or compliance with the lockdown.
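The effect of contact heterogeneity mentioned above can be sketched with a minimal, illustrative two-group SIR model. All parameter values below (contact rates, transmission probability, recovery rate) are assumptions for illustration only, not calibrated estimates:

```python
# Minimal discrete-time SIR with two contact groups and proportionate mixing.
# All parameters are illustrative assumptions, not fitted to any real epidemic.

def two_group_sir(contacts, sizes, p=0.05, gamma=0.2, days=365, i0=10):
    """Return the final attack rate (fraction of the population ever infected)."""
    total = sum(sizes)
    S = [n - i0 * n / total for n in sizes]   # seed infections proportionally
    I = [i0 * n / total for n in sizes]
    R = [0.0] * len(sizes)
    for _ in range(days):
        # Contact-weighted prevalence (proportionate mixing assumption)
        denom = sum(c * n for c, n in zip(contacts, sizes))
        weighted_prev = sum(c * i for c, i in zip(contacts, I)) / denom
        new_inf = [min(p * c * weighted_prev * s, s)
                   for c, s in zip(contacts, S)]
        rec = [gamma * i for i in I]
        S = [s - ni for s, ni in zip(S, new_inf)]
        I = [i + ni - r for i, ni, r in zip(I, new_inf, rec)]
        R = [r0 + r for r0, r in zip(R, rec)]
    return sum(R) / total

# Same mean contact rate (10/day) in both runs, but different spread across groups:
homogeneous = two_group_sir([10.0, 10.0], [50_000, 50_000])
heterogeneous = two_group_sir([4.0, 16.0], [50_000, 50_000])
```

With identical mean contact rates, the heterogeneous population typically shows faster early growth but a smaller final attack rate, because high-contact individuals are infected early and the most effective spreaders are depleted first. This is exactly the kind of qualitative insight a homogeneous compartmental model cannot produce.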

Build models for the pieces that are missing and couple these to the pieces that exist, thereby enriching the debate about the consequences of policy options by making those connections clear.
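One way such a coupling can be sketched: a standard SIR core whose contact rate is modulated by a simple behavioural feedback in which rising prevalence drives fear and contact reduction. The feedback form `c0 / (1 + k * prevalence)` and all parameter values are hypothetical assumptions chosen for illustration:

```python
# SIR with a hypothetical behavioural feedback: contacts fall as prevalence rises.

def sir_with_behaviour(feedback_k, c0=10.0, p=0.05, gamma=0.2,
                       days=365, n=100_000, i0=10):
    """Return peak prevalence (maximum fraction simultaneously infected)."""
    S, I, R = float(n - i0), float(i0), 0.0
    peak = 0.0
    for _ in range(days):
        c = c0 / (1.0 + feedback_k * I / n)   # fear-driven contact reduction (assumed form)
        new_inf = min(p * c * (I / n) * S, S)
        rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
        peak = max(peak, I / n)
    return peak

no_feedback = sir_with_behaviour(0.0)      # epidemiological core on its own
with_feedback = sir_with_behaviour(50.0)   # core coupled to a behavioural module
```

Even this toy coupling shows how a behavioural module can flatten the epidemic curve relative to the epidemiological core alone, making the connection between the two pieces, and its policy relevance, explicit.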

Visualise and communicate counterintuitive and difficult-to-understand developments. Right now people are struggling to understand exponential growth, the dynamics of social distancing, the consequences of an overwhelmed health system, and the delays between actions and their consequences. It is well established that such fundamentals of systems thinking are difficult (Booth Sweeney & Sterman, https://doi.org/10.1002/sdr.198). Models such as the simple ones in the Washington Post, or less abstract ones like the routine daily-activity model of Vermeulen et al. (2020), do a wonderful job at this, allowing people to understand how their individual behaviour will contribute to the spread or containment of a pandemic.

Highlight missing data and inform future collection. This unfolding pandemic is being constantly assessed using highly compromised data; e.g. reported infection rates in countries are largely determined by how much testing is done. Death rates might be the most comparable figures, but even there we have reporting delays and omissions. Trying to build models is one way to identify what needs to be known to properly evaluate the consequences of policy options.

The problem we are faced with in this pandemic is one of complexity, not one of ABM, and we must ensure we are honouring the complexity rather than just paying lip service to it. We agree that model transparency, open data collection and interdisciplinary research are important, and want to ensure that all scientific knowledge is used in the best possible way to ensure a positive outcome of this global crisis.

But it is also important to consider the comparative advantage of agent-based modellers. Yes, we have considerable commitment to, and expertise in, open code and data. But so do many other disciplines. Health information is routinely collected in national surveys and administrative datasets, and governments have a great deal of established expertise in health data management. Of course, our individual skills in coding models, data visualisation, and relevant theoretical knowledge can be offered to individual projects as required. But we believe our institutional response should focus on activities where other disciplines are less well equipped: applying systems thinking to understand and communicate the consequences of uncertainty and complexity.

The JASSS position paper ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ (Squazzoni et al 2020) calls on the scientific community to improve the transparency, access, and rigour of their models. A topic that we think is equally important, and should be part of this list, is the quest for more interdisciplinarity: scientific communities working together to tackle the difficult job of understanding the complex situation we are currently in, and thereby being able to give advice.

The modelling/simulation community in the UK (and more broadly) tends to work in silos. The two big communities that we have been exposed to are the epidemiological modelling community and the social simulation community. They do not usually collaborate with each other, despite working on very similar problems and using similar methods (e.g. agent-based modelling). They publish in different journals, use different software, attend different conferences, and sometimes even use different terminology to refer to the same concepts.

The UK pandemic response strategy (Gov.UK 2020) is guided by advice from the Scientific Advisory Group for Emergencies (SAGE), which in turn comprises three independent expert groups: SPI-M (epidemic modellers), SPI-B (experts in behaviour change from psychology, anthropology and history), and NERVTAG (clinicians, epidemiologists, virologists and other experts). Of these, modelling from member SPI-M institutions has played an important role in informing the UK government’s response to the ongoing pandemic (e.g. Ferguson et al 2020). Current members of SPI-M belong to what could be considered the ‘epidemic modelling community’. Their models tend to be heavily data-dependent, which is justifiable given that most of their modelling focuses on viral transmission parameters. However, this emphasis on empirical data can sometimes lead them to omit behaviour change, or to model it in a highly stylised fashion, although more examples of epidemic-behaviour models have appeared in the recent epidemiological literature (e.g. Verelst et al 2016; Durham et al 2012; van Boven et al 2008; Venkatesan et al 2019). Yet, among the modelling work informing the current response to the ongoing pandemic, computational models of behaviour change are conspicuously missing. This, from what we have seen, is where the ‘social simulation’ community can contribute its expertise and modelling methodologies in a very valuable way. A good resource for epidemiologists wanting to find out more about the wide spectrum of modelling ideas is the Social Simulation Conference proceedings and programmes (e.g. SSC2019 2019). Unfortunately, the public health community, including policymakers, is either unaware of these modelling ideas or unsure of how they are relevant.

As pointed out in a recent article (Adam 2020), one important concern with how behaviour change has been modelled in the SPI-M COVID-19 models is the assumption that changes in contact rates resulting from a lockdown in the UK and the USA will mimic those obtained from surveys performed in China, which is unlikely to be valid given the large political and cultural differences between these societies. For the immediate COVID-19 response, requiring cross-disciplinary validation of all models that feed into policy may be a valuable step towards more credible models.

Effective collaboration between academic communities relies on there being a degree of familiarity, and trust, with each other’s work, and much of this will need to be built up during inter-pandemic periods (i.e. “peace time”). In the long term, publishing and presenting in each other’s journals and conferences (i.e. giving other academic communities the opportunity to peer-review a piece of modelling work) could help foster a more collaborative environment, ensuring that we are in a much better position to leverage all available expertise during a future emergency. We should aim to take the best from across modelling communities and work together to come up with hybrid modelling solutions that provide insight by delivering statistics as well as narratives (Moss 2020). Working in silos is both unhelpful and inefficient.

I totally share the view on the importance of DATA. What we need is data-driven models, and the reference to weather forecasting and data assimilation is very appropriate. This probably implies the establishment of a center for epidemic forecasting, similar to Reading in the UK or Météo-France in Toulouse. The persistence of such an institution in “normal times” would be hard to justify, but its operation could be organised like the military reserve.

Let me stress three points.

Models are needed not only by national policy makers but by a wide range of decision makers, such as hospitals and even households. These meso-scale units face hard problems of supply: hospitals have to manage supplies of materials, consumables and personnel to meet hard-to-predict demand from patients. The same holds true for households: e.g. how to plan errands in view of the dynamics of the epidemic? Supply chain issues also exist for firms, including the chains delivering consumables to hospitals. Hence the importance of data made available by a center for epidemic forecasting.

The JASSS call (Squazzoni et al. 2020) stresses the importance of DATA, but does not provide many clues about how to get it. One can hope that some institutions will provide it, but my limited experience is that you have to dig for it. “Do It Yourself” is a leitmotiv of the Big Data industry. I am thinking of processing patient records to build models of the disease, or private diaries and tweets to model individual behaviour. One then needs collaboration from the NLP (Natural Language Processing) community.

The public and even the media have a very poor understanding of dynamical systems and of exponential growth. We have known since Kahneman’s book Thinking, Fast and Slow (2011) that we have a hard time reasoning about probabilities, for instance, and this also applies to dynamics and exponentials. We face situations that mandate different actions at different stages of the epidemic, such as doing errands or, for town dwellers, moving to the countryside. The issue is even more difficult for firms, which have to manage employment. Simple models and results from experimental cognitive science should be brought to journalists and the general public concerning these issues, in the style of Kahneman if possible.
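A tiny arithmetic illustration of why exponential growth defeats linear intuition (the case count and doubling time are purely hypothetical numbers):

```python
# Doubling-time arithmetic: a hypothetical 100 cases doubling every 3 days.

def cases_after(days, initial=100, doubling_days=3.0):
    """Cases after a given number of days of unchecked exponential growth."""
    return initial * 2 ** (days / doubling_days)

# Linear intuition suggests a month of growth from 100 cases stays "in the hundreds";
# exponential growth says otherwise: ten doublings in 30 days.
month = cases_after(30)  # 100 * 2**10 = 102,400 cases
```

This is the kind of simple, concrete calculation that models aimed at journalists and the general public need to make vivid.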