
Last month, the SB7.0 conference attracted around 800 synthetic biology experts from all around the world to Singapore. I was attending as part of the SB7.0 biosecurity fellowship, together with 30 other early-career synthetic biologists and biosecurity researchers. The main goal of the conference was to start a dialogue on biosecurity policies geared specifically towards synthetic biology.

Biosafety refers to the protection of humans and facilities from unintentional exposure to biological agents and waste; it has also traditionally encompassed GMO regulations.

Biosecurity is the protection of biological agents from loss, theft, or intentional misuse.

Although biosafety and biosecurity are often used somewhat interchangeably, in the remainder of this blog I focus on biosecurity, as it mainly involves the human component of policy making.

During the conference, Gigi Gronvall from the Center for Health Security illustrated a prime example of biosecurity from a 2010 WHO report on the Variola virus, the smallpox pathogen: “nobody anticipated that […] advances in genome sequencing and gene synthesis would render substantial portions of [Variola virus] accessible to anyone with an internet connection and access to a DNA synthesizer. That “anyone” could even be a well‐intentioned researcher, unfamiliar with smallpox and lacking an appreciation of the special rules that govern access to [Variola virus] genes.”

The take-home lesson? What might not look like a security issue now may soon become a threat!

Biorisks are most likely terrorism- or nation-state-driven

What are the most likely sources of biorisk? According to Crystal Watson, the following risks demand scrutiny:

Naturally occurring strains (e.g., the recent Ebola outbreak)

Accidental release (e.g., the 1979 accidental release of anthrax spores from the Sverdlovsk-19 military research facility in the USSR)

Terrorism (e.g., the 2001 anthrax-spore contaminated letters in the US)

State bioweapons (e.g., the US biological warfare program ultimately renounced by President Nixon)

From a biosecurity perspective, it is worth asking which of these risks is most imminent. Watson and colleagues recently published a perspective in Science that describes the actors and organizations that pose a bioweapons threat. It reports the results of a Delphi study of 59 experts with backgrounds ranging broadly from the biological and non-biological sciences, medicine, public health, and national security to political science, foreign policy and international affairs, economics, history, and law.

Although the results varied considerably, terrorism was rated as the most likely source of biothreats because of the “rapid technological advances in the biosciences, ease of acquiring pathogens, democratization of bioscience knowledge, information about a nonstate actors’ intent, and the demonstration of the chaos surrounding the Ebola epidemic in West Africa in 2014”. Nation-state actors, by contrast, were considered the more likely source of sophisticated bioweapons, given the “technological complexities of developing a bioweapon, the difficulty in obtaining pathogens, and ethical and/or cultural barriers to using biological weapons.”

According to the expert panel, some threats are particularly likely to impact society:

This list essentially covers everything that has been weaponized — only fungi, prions, and synthetic pathogens were not predicted to become weaponized in the next decade.

Now that the threats are defined, how can they be countered? One of the safeguards already in place is the Australia Group, “an informal forum of countries which, through the harmonisation of export controls, seeks to ensure that exports do not contribute to the development of chemical or biological weapons.” The group develops international norms and procedures to strengthen export controls in service of chemical and biological nonproliferation. However, as Piers Millett from biosecu.re pointed out, these tools do not on their own adequately address our current needs for assessing and managing risks. For example, under the Australia Group guidelines you need an export license to export the Ebola virus itself, or a sample of prepared Ebola RNA, but you do not need one to download the sequence of its genome. In other words, access restriction is an inadequate biosecurity failsafe.

Biosecurity is directly related to the challenge of dual-use research: the same work that creates a risk can also provide the insights to mitigate it. A particularly illustrative example is the recent synthesis of the horsepox virus, which belongs to the same viral genus as smallpox but is apparently extinct in nature. Last year, the lab of virologist David Evans at the University of Alberta in Canada reconstituted the virus. Synthesizing and stitching together almost 200 kb of DNA is not exceptionally challenging today; it simply had not been attempted before for this family of viruses.

But why did Evans and his team set out to synthesize the horsepox virus in the first place? There were several motivating objectives:

the development of a new smallpox vaccine

the potential use of the horsepox virus as a carrier to target tumors

a proof-of-concept for synthesizing extinct viruses using ‘mail-order DNA.’

Evans broadly defended his actions in a recent Science article: “Have I increased the risk by showing how to do this? I don’t know. Maybe yes. But the reality is that the risk was always there. The world just needs to accept the fact that you can do this and now we have to figure out what is the best strategy for dealing with that.” Tom Inglesby from the Center for Health Security reasoned that the proof-of-concept argument does not justify the research, as “creating new risks to show that these risks are real is the wrong path.”

How easily could the horsepox synthesis study be misused? Evans notes that his group did “provide sufficient details so that someone knowledgeable could follow what we did, but not a detailed recipe.” Unfortunately, no international regulations control this kind of research, and many scholars argue it is now time to start discussing it at a global level.

Paul Keim from Northern Arizona University has proposed a permit system for researchers who want to recreate an extinct virus, and Nicholas Evans from the University of Massachusetts suggests that the WHO create a sharing mechanism that obliges any member state to inform the organization when a researcher plans to synthesize viruses related to smallpox. Both options are well-intentioned, but anyone can already order a second-hand DNA synthesizer on eBay, and countless pathogenic DNA sequences are readily available, so neither proposal would contribute much to biosecurity on its own. Still, while these rules would increase the amount of red tape for researchers, they would also help develop norms and cultural expectations around acceptable practice in the life sciences. The bottom line, which is not novel but very much worth restating, is that scientists should constantly be aware of what they create as well as the associated risks.

The future of synthetic biology and biosecurity

Synthetic biology has only recently been recognized as a mature subject in the context of biological risk assessment, and the core focus has been infectious diseases. The main idea, to build resilience and a readiness to respond, was reiterated by several speakers at the SB7.0 conference. For example, Reshma Shetty, co-founder of Ginkgo Bioworks, noted that in cybersecurity, we did not really think much about security issues until computers were already ubiquitous. In the case of biosecurity, we are already dependent on biology [with respect to food, health, etc.], but we can still develop biosecurity strategies before synthetic biology becomes ubiquitous. There is an opportunity to act now and put norms and practices in place while the community is still relatively small.

Another remark from Shetty was also on point: “We are getting better at engineering biology, so that also means that we can use this technology to engineer preventative or response mechanisms.” For example, we used to stockpile countermeasures such as vaccines. With biotechnological advances, it is now possible to move to a rapid-response model, in which public health initiatives detect threats as they emerge and custom countermeasures are then developed, in part through synthetic biology. Shetty envisioned that foundries, with next-generation sequencing and synthesis capabilities, will play a key role in such rapid responses. Governments should be prepared to support and enable such foundries to rapidly manufacture vaccines for smallpox or any other communicable disease on demand. While it is not clear that the details of these processes and the countermeasures themselves can be made public without losing their effectiveness, the communication and decision-making processes should be transparent.

Elizabeth Cameron, Senior Director for Global Biological Policy and Programs at the Nuclear Threat Initiative, similarly warned that “if scientists are not taking care of biosecurity now, other people will start taking care of it, and they most likely will start preventing researchers from doing good science.” Matt Watson offered a shrewd historical parallel: “one reason we as a species survived the Cold War was that nuclear scientists—on both sides of the Iron Curtain—went into government and advised policymakers about the nature of the threat they faced. It’s imperative for our collective security that biologists do the same.”

In other words, it is time to start having these serious discussions about imminently needed biosecurity measures at events such as SB7.0.