Risk, research, and life as a membrane-bound organism


In Eurasia, smallpox was undoubtedly a killer. It came and went in waves for ages, changing the course of empires and countries. 30% of those infected with the disease died from it. This is astonishingly high mortality for a disease – worse than botulism, Lassa fever, tularemia, the Spanish flu, Legionnaires' disease, and SARS.

In the Americas, smallpox was a rampaging monster.

When it first appeared in Hispaniola in 1518, it spread 150 miles in four months and killed 30-50% of people – not just of those infected, but of the entire population1. It's said to have infected a quarter of the population of the Aztec Empire within two weeks, killing half of those2, and setting the stage for another disease to kill many more3.

Then, alongside other diseases and warfare, it contributed to 84% of the Incan Empire dying4.

Among the people who sometimes traded at the Hudson's Bay Company's Cumberland House on the Saskatchewan River in 1781 and 1782, 95% seem to have died. Of them, the U'Basquiau (also called, I believe, the Basquia Cree people) were entirely wiped out5.

Over time, smallpox killed 90% of the Mandan tribe, along with 80% of people in the Columbia River region, 67% of the Omahas, and half of the Piegan tribe and of the Huron and Iroquois Confederations6.

Here are some estimates of death rates between ~1605 and 1650 in various Northeastern American groups, during a period of severe smallpox epidemics. Particularly astonishing figures are highlighted (highlighting mine).

Figure adapted from European contact and Indian depopulation in the Northeast: The timing of the first epidemics7

Most of our truly deadly diseases either don't move quickly or aren't very contagious. Rabies, prion diseases, and primary amoebic meningoencephalitis have more or less 100% fatality rates. So do trypanosomiasis (African sleeping sickness) and HIV, when untreated.

When we look at the impact of smallpox in the Americas, we see extremely fast death rates that are worse than the worst forms of Ebola.

What happened?

In short, probably a total lack of previous exposure to smallpox and the other pathogenic European diseases, combined with cultural responses that helped the pathogen spread. The fact that smallpox was intentionally spread by Europeans in some cases probably contributed, but I’m not sure how much.

Virgin soil

Smallpox and its relatives in the orthopoxvirus genus – monkeypox, cowpox, horsepox, and alastrim (smallpox's milder variant) – had been established in Eurasia and Africa for centuries. Exposure to one would give some immune protection against the others. Variolation, a cruder predecessor of vaccination, was also sometimes practiced.

Between these and the frequent waves of outbreaks, a typical European adult would have survived some kind of direct exposure to smallpox-like antigens in the past, and would have the protection of antibodies against them, preventing future sickness. As children, they would also have had the indirect protection of maternal antibodies1.

In the Americas, everyone was exposed to the most virulent form of the disease with no defenses. This is called a “virgin soil epidemic”.

In this case, epidemics would stampede through occasionally – ferociously, but infrequently enough that any given tribe couldn't build up antibody protection between waves, and maternal protection didn't develop. Many groups were devastated repeatedly by smallpox outbreaks over decades, as well as by other European diseases: the Cocoliztli epidemics3, measles, influenza, typhoid fever, and others7.

In virgin soil epidemics, including these ones, disease strikes all ages: children and babies, the elderly, and strong young adults6. This sort of indiscriminate attack on all age groups is a known sign in animal populations that a disease is extremely lethal8. In humans, it also grinds the gears of society to a halt.

When so much of the population of a village was too sick to move, not only was there nobody to tend crops or hunt – setting the stage for scarcity and starvation – but there was nobody to fetch water. Dehydration is suspected as a major cause of death, especially in children1,6. Very sick mothers would also be unable to nurse infants6.

Other factors that probably contributed:

Cultural factors

Native Americans had some concept of disease transmission – some people would run away when smallpox arrived in their village, possibly carrying and spreading the germ7. They would also steer clear of other tribes that had it. That said, many people lived in communal or large family dwellings, and didn't quarantine the sick in private areas. They continued to sleep alongside and spend time with contagious people6.

In addition, pre-colonization Native American measures against disease were probably somewhat effective against pre-colonization diseases, but tended to be ineffective or harmful for European ones. Sweat baths, for instance, could have spread the disease and wouldn't have helped9. Transmission could also have occurred during funerals10.

Looking at combinations of the above factors, death rates of 70% and up are not entirely surprising.

Use as a bioweapon

Colonizers repeatedly used smallpox as an early form of biowarfare against Native Americans, knowing that they were more susceptible. This included, at times, intentionally withholding vaccines. Smallpox also spreads rapidly on its own, so I'm not sure how much this contributed to the overall extreme death toll, although it certainly resulted in tremendous loss of life.

Probably not responsible:

Genetics. A lack of immunological diversity, or some other genetic susceptibility, has been cited as a possible reason for the extreme mortality rate. This might be particularly expected in South America, because of the serial founder effect – in which a small number of people move away from their home community and start their own, repeated over and over again, all the way across Beringia and down North America, into South America9.

That said, this theory is considered unlikely today1. For one, the immune systems of native peoples of the Americas respond to vaccines much like those of Europeans10. For another, groups in the Americas also had unusually high mortality from other European diseases (influenza, measles, etc.), but this mortality decreased relatively quickly after first exposure – too quickly for genetic change to explain the improvement10.

Some have also proposed general malnutrition, which would weaken the immune system and make it harder to fight off smallpox. This doesn’t seem to have been a factor1. Scarce food was a fact of life in many Native American groups, but then again, the same was true for European peasants, who still didn’t suffer as much from smallpox.

Africa

Smallpox has a long history in parts of Africa – the earliest known instance of smallpox infection comes from Egyptian mummies2, and frequent European contact over the centuries spread the disease to the regions they interacted with. Various groups in North, East, and West Africa developed their own variolation techniques11.

However, when the disease was introduced to areas where it hadn't existed before, we saw death rates as astounding as those in the Americas: one source describes mortality rates of 80% among the Griqua people of South Africa. Less quantitatively, it describes how several Khoikhoi ("Hottentot") tribes were "wiped out" by the disease, how some tribes in northern Kenya were "almost exterminated", and how parts of the eastern Congo River basin became "completely depopulated"2.

This makes it sound like smallpox acted similarly in unexposed people in Africa. It also lends another piece of evidence against the genetic predisposition hypothesis – that the disease would act similarly on groups so geographically removed.

Wikipedia also tells me that smallpox was comparably deadly when it was first introduced to various Australasian islands, but I haven’t looked into this further.

Extra

When smallpox arrived in India around 400 AD, it spurred the creation of Shitala, the Hindu goddess of (both causing and curing) smallpox. She is normally depicted on a donkey, carrying a broom for either spreading germs or sweeping out a house, and a bowl of either smallpox germs or of cool water.

The last set of images on this page also seems to be a depiction of the goddess, and captures something altogether different – something darker and more visceral.

Finally, this blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

References

Riley, J. C. (2010). Smallpox and American Indians revisited. Journal of the History of Medicine and Allied Sciences, 65(4), 445-477.

One study on a German nature reserve found insect biomass (e.g., kilograms of insects you’d catch in a net) has declined 75% over the last 27 years. Here’s a good summary that answered some questions I had about the study itself.

Another review study found that, globally, invertebrate (mostly insect) abundance has declined 35% over the last 40 years.

This is an honest question, and I want an answer. (Readers will know I take catastrophic possibilities very seriously.) Insects are among the most numerous animals on earth and central to our ecosystems, food chains, etcetera. 35%+ lower populations are the kind of thing where, if you'd asked me to guess the result in advance, I would have expected marked effects on ecosystems. With 75% declines – if the German study reflects the rest of the world to any degree – I would have predicted literal global catastrophe.
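(A rough back-of-envelope – my own arithmetic, not a figure from either study: a 75% decline over 27 years corresponds to roughly a 5% compounding loss per year, which is small enough to be invisible in any single year.)

```python
# Back-of-envelope: what constant annual decline produces a 75% drop in 27 years?
remaining = 0.25  # fraction of insect biomass left after the study period
years = 27
annual_loss = 1 - remaining ** (1 / years)
print(f"{annual_loss:.1%} per year")  # ~5.0% per year
```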

Yet these declines have apparently been going on consistently for decades, and the biosphere, while not exactly doing great, hasn't literally exploded.

So what’s the deal? Any ideas?

Speculation/answers welcome in the comments. Try to convey how confident you are and what your sources are, if you refer to any.

(If your answer is "the biosphere has exploded already", can you explain how, and why that hasn't changed trends in things like global crop production or human population growth? I believe, and think most other readers will agree, that various parts of ecosystems worldwide are obviously being degraded, but not to the degree that I would expect from drastic global declines in insect numbers (especially compared to other well-understood factors like carbon dioxide emissions or deforestation). If you have reason to think otherwise, let me know.)

Sidenote: I was going to append this with a similar question about the decline in ocean phytoplankton levels I’d heard about – the news that populations of phytoplankton, the little guys that feed the ocean food chain and make most of the oxygen on earth, have decreased 40% since 1950.

But a better dataset, collected over 80 years with consistent methods, suggests that phytoplankton have actually increased over time. There’s speculation that the appearance of decrease in the other study may have been because they switched measurement methods partway through. An apocalypse for another day! Or hopefully, no other day, ever.

Also, this blog has a Patreon. If you like my work, consider incentivizing me to make more!

Early September brought Seattle what were to be some of the hottest days of the summer. For weeks, people had been turning on fans, escaping to cooler places to spend the day, and buying out air conditioners (which most of the city didn't own). I cowered in my room with an AC unit on loan from a friend lodged in the window, only going out walking when the sun had set.

That week, Eastern Washington was burning. It does that every summer. But this year, a lot of Eastern Washington was burning. Say it with me – 2017 was one of the worst fire years on record. That week, the ash from the fires drifted over Seattle. You smelled smoke everywhere in the city. The sky was gray. At sunrise and sunset, the sun was blood-red. One day, gray ash drifted down from the sky, the exact size of snowflakes. It dusted the cars and kept falling through the afternoon.

That day, people said the weather was downright apocalyptic. They weren’t entirely wrong.

Many people aren’t clear on what exactly a nuclear winter is. The mechanic is straightforward. When cities burn in the extreme heat of a nuclear blast – and we do mean cities, plural, most nuclear exchange scenarios involve multiple strikes for strategic reasons – they turn into soot, and the soot floats up. If enough ash from burned cities reaches the stratosphere, the upper layer of the atmosphere, it stays there for a long time. The ash clouds blot out the sun, cool the earth, and choke off the growth of crops. Within weeks, agriculture grinds to a halt.

There’s a lot of uncertainty over nuclear winter. But by one estimate, the detonation of less than 1% of the world’s nuclear arsenal – a fairly small war – could drop the temperatures by five degrees Celsius, and warm up slowly again over twenty years. The ozone layer would thin. Less rain would fall. Two billion people would starve.

On Tuesday and Wednesday that week, the temperature was predicted to reach over 100 degrees. It didn't. The particulates in the air blocked enough of the sun's heat that it barely hit the 90s. Pedestrians didn't quite breathe easier, but did sweat less. Our own tiny, toy-model taste of a nuclear apocalypse.

I’d been feeling strange for the last few weeks, unrelatedly, and sitting at my desk for hours, my mind did a lot of wandering. I hoped things would be looking up – I’d just gotten back from an exciting conference with good friends, and also from seeing the solar eclipse.

I’d made the pilgrimage with friends. We drove for hours, east across the mountains the week before they burned. We crossed the Colombia River into Oregon, and finally, drove up a winding dirt road to a wide clearing with a small pond. I studied for the GRE in the shadows of dry pines. We played tug-of-war with the crayfish and watched the mayflies dance above the pond. The morning of, the sun climbed in the sky, and I had never appreciated how invisible the new moon is, or how much light the sun puts out – even when it was half-gone, we still had to peer through black plastic glasses to confirm that something had changed. But soon, it became impossible not to notice.

I kept thinking about what state I would have been thrown into if I hadn't known the mechanism of an eclipse – how deep the pits of my spiritual terror could go. Whether it would be limited by biology or belief. As it is, it was only sort of spiritually terrifying, in a good way. The part of my brain that knew what was happening had spread that knowledge to all the other parts as well, so I could run around in excitement and really appreciate the chill in the air, the eerie distortion of shadows into slivers, and finally, the moon sealing off the sun.

The solar corona.

The sunset-colored horizon all around the rim of the sky.

Stars at midday.

We left after the daylight returned, but while the moon was still easing away, eager to beat the crowds back to the city. I thought about the mayflies in the pond, and their brief lives – the only adults in hundreds of generations to see the sun, see the stars, and then see the sun again.

I thought something might shake loose in my brain. Things should have been looking up, but the adventures had scarcely touched the inertia. Oh, right, I had also been thinking a lot about the end of the world.

I wonder about the mental health of people who work in existential risk. I think it must vary. I know people who are terrified on a very emotional and immediate level, and I know people who clearly believe it’s bad but don’t get anxious over it, and aren’t inclined to start. I can’t blame them. I used to be more of the former, and now I’m not sure if it’s eased up or if I’m just not thinking about things that viscerally scare me anymore. I’m not sure the existential terror can tip you towards something you weren’t predisposed to. In my case, I don’t think the mental fog was from it. But the backdrop of existential horror certainly lent it an interesting flavor.

It’s late October now. I’ve pulled out the flannel and the space heater and the second blanket for the bed. When I went jogging, my hands got numb. I don’t mind – I like autumn, I like the descent into winter, heralded by rain and red leaves and darkness, and the trappings of horror and then of the harvest. Peaches in the summertime, apples in the fall. The seasons have a comforting rhyme to them.

That strange inertia hasn’t quite lifted, but I’m working on it. Meanwhile, the world continues to cant sideways. When we arranged the Stanislav Petrov day party in Seattle this year, to celebrate the day a single man decided not to start World War 3, I wondered if we should ease up on the “nuclear war is a terrifying prospect” theme we had laid on last year. I thought that had probably been on people’s minds already.

So geopolitical tensions are rising, and have been rising. The hemisphere gets colder. Not quite out of nostalgia, my mind keeps flickering back to last month, to not-quite-a-hundred-degrees Seattle, to the red sun.

There’s a beautiful quote from Albert Camus: “In the midst of winter, I found there was, within me, an invincible summer.” That Tuesday, like the momentary pass of the moon over the sun in mid-day, in the height of summer, I saw the shadow of a nuclear winter.

There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as a major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. Three other major events fit the pattern as well – the Great Oxygenation Event, the End-Ediacaran extinction, and the Anthropocene/Quaternary extinction.

Let’s look at four of them. The first actually occurs right before this graph starts.

I decided not to discuss the Great Oxygenation Event in the talk itself, but it's also an example – photosynthetic cyanobacteria evolved and started pumping oxygen into the atmosphere, which, after filling up oxygen sinks in rocks, flooded into the air and poisoned many of the anaerobes, leading to the "oxygen die-off" and the "rusting of the earth." I excluded it because A) it wasn't about multicellular life, which, let's face it, is much more relevant and interesting, and B) I believe it happened over such a long span of time as to be not worth considering on the same scale as the others.

(I was going to jokingly call these "animal x-risks", but figured that might confuse people about what the point of the talk was.)

The End-Ediacaran extinction

We don’t know much about Precambrian life, but it’s known as the “Garden of Ediacara” and seems to have been a peaceful time.

The Ediacaran sea floor was covered in a mat of algae and bacteria, and "critters" – some were definitely animals; others, we're not sure – ate or lived on the mats. There were tunneling worms, limpets, some polyps, and the sand-filled curiosities termed "vendozoans", which may have been single enormous cells like today's xenophyophores, with the sand giving them structural support. The fiercest animal was described as a "soft limpet" that ate microbes. They don't seem to have had predators, and this period is sometimes known as the "Garden of Ediacara". (1)

Around 542 million years ago, something happens – the Cambrian explosion. In a geologically brief window of about 5 million years, a huge variety of animals evolve.

Molluscs, trilobites and other arthropods, a creative variety of worms eventually including the delightful Hallucigenia, and sponges exploded into the Cambrian. They’re faster and smarter than anything that’s ever existed. The peaceful Ediacaran critters are either outcompeted or gobbled up, and vanish from the fossil record. The first shelled animals indicate that predation had arrived, and that the gates of the Garden of Ediacara had closed forever.

The end-Devonian extinction

Jump forward some 165 million years, to roughly 375 million years ago – 50% of genera go extinct. Marine species suffered the most in this event, probably due to anoxia.

There’s an unexpected possible culprit – plants around this time made a few evolutionary leaps that began the first forests. Suddenly a lot of trees pumping oxygen into the air lead to global cooling, and large amounts of soil lead to nutrient-rich runoff, which lead to widespread marine anoxia which decimates the ocean.

We do know that there were a series of extinction events, so forests were probably only a partial cause. The longer climate trend around the extinction was global warming, so the yo-yoing temperature (general warming, plus cooling from plants) likely contributed to the extinction. (2) It's strange to think that the land before 375 million years ago didn't have much in the way of soil – major root structures helped wear rock away into soil. Plus, once you have some soil, and once the first trees die and contribute their nutrients, you get more soil and more plants – a positive feedback loop.

The specific trifecta of innovations that let forests take over land: significant root structures, complex vascular systems, and seeds. Plants before this were small, lichen-like, and had to reproduce in water. (3)

The Permian-Triassic extinction

96% of marine species go extinct. Most of this happens in a 20,000 year window, which is nothing in geologic time. This is the largest and most sudden prehistoric extinction known.

The cause of this one was confusing for a long time. We know the earth got warmer, or maybe cooler, and that volcanoes were going off, but the timing didn’t quite match up.

Volcanoes were going off for much longer than the extinction, and it looks like die-offs were happening faster than we'd expect from increasing volcanism or standard climate change cycles. (4) One theory points out that the die-offs line up with exponential or super-exponential growth – the kind you'd get from a replicating microbe. Remember high school biology?

One theory implicates Methanosarcina, an archaeon that evolved a chemical process for turning organic carbon into methane around the same time. Remember those volcanoes? They were spewing enormous amounts of nickel – an important co-factor for that process.

Methanosarcina, image from Nature

(Methanosarcina appears to have gotten the gene from a cellulose-digesting bacterium – definitely a neat trick. (5))

The theory goes that Methanosarcina picked up its new pathway, and flooded the atmosphere with methane, which raised the surface temperature of the oceans to 45 degrees Celsius and killed most life. (2)

This report is fairly recent, and it's certainly unique, so I don't want to claim that it's definitely confirmed, or as solid as, say, the Chicxulub impact theory. That said, at the time of this writing, the cause of the Permian-Triassic extinction is unclear, and the methanogen theory doesn't seem to have been seriously criticized or debunked.

Quaternary and Anthropocene extinctions

Finally, I’m going to combine the Quaternary and Anthropocene events. They don’t show up on this chart because the data’s still coming in, but you know the story – maybe you’re an ice-age megafauna, or rainforest amphibian, and you are having a perfectly fine time, until these pretentious monkeys just walk out of the Rift Valley, and turn you into a steak or a corn farm.

Art by Heinrich Harder.

Because of humans, since 1900, extinctions have been happening at about a thousand times the background rate.

(Looking at the original chart, you might notice that the “background” number of extinctions appears to be declining over time – what’s with that? Probably nothing cosmic – more recent species are just more likely to survive to the present day.)
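(To make "a thousand times the background rate" concrete, here's a hedged sketch in the units ecologists use – extinctions per million species-years, or E/MSY. The background rate and species count below are rough, commonly cited outside figures, not numbers from this talk:

```python
# Hedged illustration: converting "1000x background" into species lost per year.
# Inputs are rough, commonly cited figures (my assumptions, not from this post).
background_rate = 1.0      # extinctions per million species-years (E/MSY)
multiplier = 1000          # "a thousand times the background rate"
described_species = 1.5e6  # roughly the number of species science has described

per_year = background_rate * multiplier * described_species / 1e6
print(per_year)  # ~1500 described species lost per year, under these assumptions
```

Under those assumptions, that's on the order of a thousand described species lost per year.)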

Impacts from evolutionary innovation

You can probably see a common thread by now. These extinctions were caused – at least in part – by natural selection stumbling upon an unusually successful strategy. Changing external conditions, like nickel from volcanoes or other climate change, might contribute by giving an edge to a new adaptation.

In some cases, something evolved that directly outcompeted the others – biotic replacement.

In others, something evolved that changed the atmosphere.

I’m going to throw in one more – that any time a species goes extinct due to a new disease, that’s also an evolutionary innovation. Now, as far as we can tell, this is extremely rare in nature, but possible. (7)

Are humans at risk from this?

From natural risk? It seems unlikely. These events are rare and can take on the order of thousands of years or more to unfold, at which point we’d likely be able to do something about it.

That is, as far as we know – the fossil record is spotty. As far as I can tell, we were able to pin the worst of the Permian-Triassic extinction down to 20,000 years only because that's the finest resolution the fossil beds formed at the time allow. It might actually have been quicker.

Even determining if an extinction has happened or not, or if the rock just happened to become less good at holding fossils, is a struggle. I liked this paper not really for the details of extinction events (I don’t think the “mass extinctions are periodic” idea is used these days), but for the nitty gritty details of how to pull detailed data out of rocks.

That said, for calibrating your understanding, it seems possible that extinctions from evolutionary innovation are more common than mass extinctions involving asteroids (only one mass extinction has been solidly attributed to an asteroid: the Chicxulub impact that ended the reign of dinosaurs.) That’s not to say large asteroid impacts (bolides) don’t cause smaller extinctions – but one source estimated the bolide:extinction ratio to be 175:1. (2)

Plus, having a brain matters, and I think I can say it’s really unlikely that a better predator (or a new kind of plant) is going to evolve without us noticing. There are some parallels here with, say, artificial intelligence risk, but I think the connection is tenuous enough that it might not be useful.

If we learn that such an event is happening, it’s not clear what we’d do – it depends on specifics.

Synthetic biology

But consider synthetic biology – the thing where we design new organisms and see what happens. As capabilities expand, should we worry about lab escapes on an existential scale? I mean, it has happened in nature.

Evolution has spent billions of years trying to design better and better replicators. And yet, evolutionary innovation catastrophes are still pretty rare.

That said, people have a couple of advantages:

We can do things on purpose. (I mean, a human working on this might not be trying to make a catastrophic geoweapon – but they might still be trying to make a really good replicator.)

We can come up with entirely new things. When natural selection innovates, every incremental step on the way to the final result has to be an improvement on what came before. It's like trying to build a footbridge where, at every single step of construction, it has to support more weight than before. We don't have those constraints – we can just design a bridge, then build it, then have people walk across it. We can design biological systems that nobody has seen before.
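(A toy sketch of that difference – nothing from the talk, just an illustration. Natural selection behaves like a greedy search that only keeps strictly-better steps, so it strands on the nearest local peak of a fitness landscape; a designer can jump straight across a valley to a better design:

```python
import math
import random

def greedy_search(fitness, x, steps=10000):
    """Like natural selection: a change is only kept if it improves fitness."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)  # small incremental variation
        if fitness(candidate) > fitness(x):        # every step must be an improvement
            x = candidate
    return x

# A landscape with a low peak near x=1 and a much higher peak near x=4,
# separated by a valley that no single improving step can cross.
def fitness(x):
    return math.exp(-(x - 1) ** 2) + 5 * math.exp(-(x - 4) ** 2)

print(greedy_search(fitness, 0.0))  # stalls near x=1; a designer would just pick x=4
```

The greedy search gets stuck on the low peak forever; the designer picks the high one in a single move.)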

The question of whether we can design organisms more effective than evolution's is still open, and it's crucial for telling us how concerned we should be about synthetic organisms in the environment.

People are concerned about synthetic biology and the risk of organisms “escaping” from a lab, industrial setting, or medical setting into the environment, and perhaps persisting or causing local damage. They just don’t seem to be worried on an existential level. I’m not sure if they should be, but it seems like the possibility is worth considering.

For instance, a company once almost released large quantities of an engineered bacterium that turned out to produce ethanol in soil – in large enough quantities to kill all plants in a lab microcosm. We don't appear to have reason to think it would have outcompeted other soil biota and actually caused an existential or even a local catastrophe, but it was caught at the last minute, and the implications are clearly troubling. (9)

(See Natural Die-offs of Large Mammals: Implications for Conservation. I'm pretty sure I've seen at least a couple of other sources mention this, but can't find them right now. I had chytridiomycosis in mind as well. This seems like an important research topic, and obviously has some implications for, say, biological existential risk.)

I spent a memorable college summer – and much of the next quarter – trying to run a particular experiment involving infecting cultured tissue cells with bacteria and bacteriophage. The experiment itself was pretty interesting, and I thought the underpinnings were both useful and exciting. To prepare, all I had to do was manage to get some tissue culture up and running. Nobody else at the college was doing tissue culture, and the only lab technician who had experience with it was out that summer.

No matter, right? We had equipment, and a little money for supplies, and some frozen cell lines to thaw. Even though neither I, nor the student helping me, nor my professor, had done tissue culture before, we had the internet, and even some additional help once a week from a student who did tissue culture professionally. Labs all around the world do tissue culture every day, and have for decades. Cakewalk.

Five months later, the entire project had basically stalled. The tissue cells were growing slower and slower, we hadn’t been able to successfully use them for experiments, our frozen backup stocks were rapidly dwindling and of questionable quality, and I was out of ideas on how to troubleshoot any of the myriad things that could have been going wrong. Was it the media? The cells? The environment? Was something contaminated? If so, what? Was the temperature wrong? The timing? I threw up my hands and went back to the phage lab downstairs, mentally retiring to a life of growing E. coli at slightly above room temperature.

It was especially frustrating, because this was just tissue culture. It's a fundamental of modern biology. It's not an unsolved problem. It was just benchwork being hard to figure out without hands-on expertise. All I can say is: if any disgruntled lone wolves trying to start bioterrorism programs in their basements are also stuck between the third PDF from 1970 about freezing cells with a minimal setup, and losing their fourth batch of cells because they gently tapped the container until it was cloudy – but not cloudy enough – it'd be completely predictable if they gave up their evil plans right there and started volunteering in soup kitchens instead.

The book – Barriers to Bioweapons, by Sonia Ben Ouagrham-Gormley – was obscure enough that it wasn't at the library, but at the low cost of ending up on every watchlist ever, I got it from Amazon, and can ultimately recommend it. I think it's a well-researched and interesting counter to common intuitions about biological weapons, and it changed my mind about some of those.

As I put it in an earlier post: "For all the attention drawn by biological weapons, they are, for now, rare. […] This should paint the picture of an uneasy world. It certainly does to me. If you buy arguments about why risk from bioweapons is important to consider, given that they kill far fewer people than many other threats, then this also suggests that we're in an unusually fortunate place right now – one where the threat is deep and getting deeper, but nobody is actively under attack."

Barriers to Bioweapons argues that actually, we’re not all living on borrowed time – that there are real organizational and expertise challenges to successfully creating bioweapons. She then discusses specific historical programs, and their implications for biosecurity in the future.

The importance of knowledge transfer

The first part of the book discusses in detail how tacit knowledge spreads, and how scientific progress is actually accomplished in an organization. I was fascinated by how much research exists here, for science especially – I could imagine finding some of this content in a very evidence-driven book on managing businesses, but I wouldn’t have thought I could find the same for, e.g., how switching locations tends to make research much harder to replicate because available equipment and supplies have changed just slightly, or that researchers at Harvard Medical School publish better, more-frequently-cited articles when they and their co-authors work in the same building.

Basically, this book claims – and I'm inclined to agree – that spreading knowledge about specific techniques is really, really hard. What makes a particular thing work is often a series of unusual tricks, the result of trial and error, that never makes it into the methods section of a journal article. (The hashtag #OverlyHonestMethods captures this better than I could.)

Measurements were stabilized by slapping the god damn analyzer 8 times like an old tv. #OverlyhonestMethods

All of that tacit knowledge is promoted by organizational structures and stored in people, so the movement and interaction of people is crucial in sharing knowledge. Huge problems arise when that knowledge is lost. The book describes the Department of Energy replacing nuclear weapons parts in the late 1990s, and realizing that they no longer knew how to make a particular foam crucial to thermonuclear warheads, that their documentation for the foam’s production was insufficient, and that anyone who had done it before was long retired. They had to spend nine years and 70 million dollars inventing a substitute for a single component.

Every now and then when reading this, I was tempted to think “Oh come on, it can’t be that hard.” And then I remembered tissue culture.

The thing that went wrong that summer was a lack of tacit knowledge. Tacit knowledge is very, very slow to build: you can either build it laboriously from scratch, or learn from someone who already has it. Bioweapon programs tend to fail because their organizations neither retain nor effectively share tacit knowledge, and so their hoped-for scientific innovations take extremely long and often never materialize. If you can't solve the problems that your field has already solved, you're never going to be able to solve new ones.

For a book on why bioweapons programs have historically failed, this section seems like it would be awkwardly useful reading for scientists, or anyone else trying to build communities that can effectively research and solve problems together. Incentives and cross-pollination are crucial; projects with multiple phases should have those phases integrated vertically; tacit knowledge stored in brains matters.

Specific programs

In the second part of the book, Ouagrham-Gormley discusses specific bioweapons programs – American, Soviet, Iraqi, South African, and that of the Aum Shinrikyo cult – why they failed at one or more of these levels, and why we might expect future programs to go the same way. It's true that all of these programs failed to yield much in the way of military results, despite enormous expenditures of resources and personnel, and while I haven't fact-checked the section, I'm tempted to buy her conclusions.

Secrecy can be lethal to complicated programs. Because of secrecy constraints:

Higher-level managers or governments have to put more faith in lower-level managers and their results, letting them steal or redirect resources

Sites are small and geographically isolated from each other

Scientists can’t talk about their work with colleagues in other divisions

Collaboration is limited, especially internationally

Facilities are more inclined to try to be self-sufficient, leading to extra delays

Maintaining secrecy is costly

Destroying research or moving to avoid raids or inspections sets back progress

An exclusive focus on results means that researchers are incentivized to make up results to avoid harsh punishments

Supervisors are also incentivized to make up results, which works, because their supervisors don’t understand what they’re doing

Feedback only goes down the hierarchy, suggestions from staff aren’t passed up

Working in strict settings is unrewarding and demoralizes staff

Promotion is based on political favor, not expertise, and reduces quality of research

Power struggles between staff reduce ability to cooperate

Sometimes cases are more subtle. The US bioweapons program ran from roughly 1943 to 1969, and didn't totally fall prey to these problems – researchers and staff met at Fort Detrick across different levels and cross-pollinated knowledge with relative freedom. Crucially, it was secret but legal, as it operated before the Biological Weapons Convention (BWC) existed, and it could therefore afford to maintain a certain degree of openness in its dealings with the outside world.

Its open status was highly unusual. Nonetheless, while it achieved a surprising amount, the US program still failed to produce a working weapon after 27 years. It was closed in 1969, a few years before the US ratified the BWC itself.

Ouagrham-Gormley says this failure was mostly due to a lack of collaboration between scientists and the military, shifting infrastructure early on, and diffuse organization. The scientists at Fort Detrick made impressive research progress, producing dozens of vaccines as well as research tools like formaldehyde decontamination, negative air pressure in pathogen labs, and the laminar flow hood now used ubiquitously for biological work in labs across the world.

[A laminar flow hood – used for, among other things, tissue culture. Public domain image by TimVickers.]

But research and weaponization are two different things, and military and scientific applications rarely met. The program was never considered a priority by the military. In fact, its leadership (responsibilities and funding decisions) was ambiguously spread across about a dozen government agencies, and it was reorganized and re-funded sporadically depending on what wars were going on at the time. Uncertainty and a lack of coordination ultimately led the program nowhere. It was amusing to learn that the same issue plaguing biodefense in the US today was also responsible for sinking bioweapons research decades ago.

Ouagrham-Gormley discussed the Japanese Aum Shinrikyo cult's large bioweapons efforts, but didn't discuss Japan's military bioweapon program, Unit 731, which ran from 1932 to 1945 and included testing numerous agents on Chinese civilians, as well as a variety of attacks on Chinese cities. While the experiments conducted are among the most horrific war crimes known, its wartime use was varied – the release of bombs containing bubonic-plague-infected fleas, as well as other human, livestock, and crop diseases – and killed between 200,000 and 600,000 people. Unless I'm very wrong, this makes it the largest modern bioweapon attack. Further attacks were planned, including on the US, but the program was ended and its evidence destroyed when Japan surrendered in World War II.

I haven’t looked into the case too much, but it’s interesting because that program appears to have had an unusually high death toll (for a bioweapon program). As far as I can tell, some factors were: the program having general government approval and lots of resources, stable leadership, a main location, and its constant testing of weapons on enemy civilians, which added to the death toll – they didn’t wait as long to develop weapons that were perfect, and gathered data on early tests, without much concern for secrecy. This program predated the others, which might have been a factor in its ability to test weapons on civilian populations (even though the program was technically forbidden by the 1925 Germ Warfare provision of the Geneva Conventions).

Ramifications for the future

One interesting takeaway is that covertness has a substantial cost – forcing a program to “go underground” is a huge impediment to progress. This suggests that the Biological Weapons Convention, which has been criticized for being toothless and lacking provisions for enforcement, is actually already doing very useful work – by forcing programs to be covert at all. Of course, Ouagrham-Gormley recommends adding those provisions anyways, as well as checks on signatory nations – like random inspections – that more effectively add to the cost of maintaining secrecy for any potential efforts. I agree.

In fact, it’s working already. Consider:

In weapons programs, expertise is crucial – in manufacturing, in the relevant organisms, and in bioweapons themselves.

The Biological Weapons Convention has been active since 1975. The huge Soviet bioweapon program continued secretly, but was shrinking by the late 1980s, and was officially acknowledged and ended in 1992.

While the problem hasn’t disappeared since then, new experts in bioweapon creation are very rare.

People working on bioweapons before 1975 are mostly already retired.

As a result, that tacit knowledge transfer is being cut off. A new state that wanted to pick up bioweapons would have to start from scratch. The entire field has been set back by decades, and for once, that statement is a triumph.

Another takeaway is that the dominant message, from the government and elsewhere, about the perils of bioweapons needs to change. Groups from Japan's Unit 731 to al-Qaeda have started bioweapon programs because they learned that the enemy was scared that they would. This suggests that the meme "bioweapons are cheap, easy, and dangerous" is actively harmful to biodefense. Aside from that, as demonstrated by the rest of the book, it's not true. And because it encourages groups to make bioweapons, we should perhaps stop spreading it.

(Granted, the book also relays an anecdote from Shoko Asahara, the head of the Aum Shinrikyo cult, who after its bioterrorism project failed "speculat[ed] that U.S. assessments of the risk of biological terrorism were designed to mislead terrorist groups into pursuing such weapons." So maybe there's something there, but I strongly suspect that any such design was inadvertent and not worth relying on.)

I’m overall fairly convinced by the message of the book, that bioweapons programs are complicated and difficult, that merely getting a hold of a dangerous agent is the least of the problems of a theoretical bioweapons program, and that small actors are unlikely to be able to effectively pull this off now.

I think Ouagrham-Gormley and I disagree most on the dangers of biotechnology. This isn’t discussed much in the book, but when she references it towards the end, she calls it “the so-called biotechnology revolution” and describes the difficulty and hidden years of work that have gone into feats of synthetic biology, like synthesizing poliovirus in 2002.

It makes sense that the early syntheses of viruses, and other microbiological works of magic, would be incredibly difficult and take years of expertise. The same was true of, say, early genome sequencing, which took thousands of hours of hand-aligning individual base pairs. But it turns out that being able to sequence genomes is kind of useful, and now…

How low are those tacit knowledge barriers? How low will they be? There are obvious reasons to not necessarily publish all of these results, but somebody ought to keep track.

Ouagrham-Gormley does stress, I think accurately, that getting hold of a pathogen is a small part of the problem. In the past, I've made the argument that biodefense is critical because "the smallpox genome is online and you can just download it" – which, don't get me wrong, still isn't reassuring – but that particular example isn't immediately a global catastrophe. The US and the Soviet Union tried weaponizing smallpox, and it's not terribly easy. (Imagine that you, you in particular, are evil, and have just been handed a sample of smallpox. What are you going to do with it? …Start some tissue culture?)

…But maybe this will become less of a barrier in the future, too. Genetic engineering might create pathogens more suited for bioweapons than extant diseases. They might be well-tailored enough not to require dispersal via the clunky, harsh munitions that have stymied past efforts to turn delicate microbes into weapons. Obviously, natural pandemics happen without those – but could human alteration give a pathogen that much of an advantage over the countless pathogens randomly churned out by humans and animals daily? We don't know.

The book states: “In the bioweapons field, unless future technologies can render biomaterials behavior predictable and controllable… the role of expertise and its socio-organizational context will remain critically important barriers to bioweapons development.”

Which seems like the crux – I agree with that statement, but predictable and controllable biomaterials are exactly what synthetic biology is trying to achieve, and we need to pay a lot of attention to how these factors will change in the future. Biosafety needs to be adaptable.

At least, biodefense in the future of cheap DNA synthesis will probably still have a little more going for it than ad campaigns like this.

[Photomicrograph of Bacillus anthracis, the anthrax bacterium, in human tissue. From the CDC, 1976.]

For all the attention drawn by biological weapons, they are, for now, rare. Countries with bioweapon programs started during World War 2 or the Cold War have apparently dismantled them – or at least claim to have – after the 1972 international Biological Weapons Convention. The largest modern bioweapon attack on US soil was in 1984, when an Oregon cult sprayed salmonella on salad bars in the hopes of getting people too sick to vote in a local election. 750 people were sickened, and nobody died. In 2001, anthrax spores were mailed to news media offices and two US senators, killing 5 and injuring 17.

A few countries are suspected of having violated the Biological Weapons Convention, and may have secret active programs. A couple of terrorist groups were found to have planned attacks, but not carried them out. Biotechnology is expanding rapidly; the price of and know-how required for printing genomes, editing genes, and accessing information are dropping. An increasingly globalized world makes it easier to swap everything from information to defensive strategies to pathogens themselves.

This should paint the picture of an uneasy world. It certainly does to me. If you buy arguments about why risk from bioweapons is important to consider, given that they kill far fewer people than many other threats, then this also suggests that we’re in an unusually fortunate place right now – one where the threat is deep and getting deeper, but nobody is actively under attack. It seems like an extraordinarily good time to prepare.

The Blue Ribbon Study Panel on Biodefense is a group of experts working on US biodefense policy. I heard about them via the grant they won from Open Philanthropy Project/Good Ventures in 2015. Open Philanthropy Project suggests them as a potentially high-impact organization for improving pandemic preparedness.

Philanthropy isn’t an obvious fit for biodefense – large-scale biodefense is mostly handled in governments. The Blue Ribbon Study Panel was funded because of its apparent influence to policy (and because OPP suspected it wouldn’t get funded without their grant, which allowed the panel to issue its major policy recommendation.)

I wrote this because the panel’s descriptions of current biodefense measures in the US seemed comprehensive and accurate. What follows is my attempt to summarize the panel’s view. I haven’t necessarily looked into each claim, but they’re accurate as far as I can tell. The actual paper is also interspersed with some very good-sounding policy recommendations, which I won’t cover in depth.

What the Blue Ribbon Study Panel found

China, Iran, North Korea, Russia, and Syria (as assessed by the Department of Defense) seem to be failing to comply with the Biological Weapons Convention. Partially destroyed or buried weapons are accessible to new state programs. Weapons take less time and fewer resources to create – whether by terrorists, small states, domestic militias, or lone wolves. Synthetic biology is expanding. Natural pandemics and emerging diseases are spreading more frequently. Escapes from laboratories are also a risk.

This presents an enormous challenge that the US has not yet measured up to. Previous commissions on the matter have continually expressed concern, and those concerns have never been fully addressed.

Currently, responsibility for one aspect or another of biodefense is spread between literally dozens of government agencies, acting without centralized coordination. In the recent past, this has led to agencies tripping over each other trying to mount appropriate responses to threats, and it’s very unclear what the response would be or who would take charge of it in a more massive or threatening pandemic, or in the case of bioterrorism.

(One example comes from the 2013-15 Ebola outbreak, when the CDC took it upon itself to issue guidelines to hospitals for personal protective equipment (PPE) requirements for preparing for Ebola. But the CDC isn’t usually responsible for PPE requirements, OSHA is – and the CDC didn’t consult with them when issuing their recommendations. They ended up issuing guidelines that were hard to follow, poorly distributed, and not appropriate for many hospitals.)

Also, funding and support for pandemic preparedness programs are on the decline, even though most experts agree that the threat is growing.

The paper recommends producing a unified strategy, a central authority, and a unified budget on biodefense.

Areas in need of more focus and coordination

A recurring theme in the Blue Ribbon Study Panel’s analysis:

The government is currently paying at least some attention to a particular topic, but not very much; it's not well-funded, and efforts are scattered across several agencies that aren't coordinating with each other.

This despite all biodefense experts saying “this topic is hugely important to successful biodefense and we need to put way more effort into it.”

Some of these topics:

“One Health” focuses

One Health is the concept that animal, human, and environmental health are all inseparably linked.

60% of emerging diseases are zoonotic (they occur in both humans and animals), as are all extant diseases classified as threats by the DHS (i.e., all but smallpox).

Despite this, environmental and animal health are significantly more underfunded and poorly tracked than public health.

Decontamination and remediation after biological incidents

This is kind of the purview of OSHA, the EPA, and FEMA. OSHA is good and already has experience in some limited environments. The EPA has lots of pre-existing data and experience, but is not equipped to work quickly. FEMA is good at working quickly, but usually isn’t at the table in remediation policy discussions. The EPA currently does some of this coordination, but isn’t required to.

A comprehensive and modern threat warning system

Existing systems are slow and sometimes outdated – e.g., the DHS BioWatch program, which searches for some airborne pathogens in some major cities, and hasn't been technologically upgraded since 2003.

A better system could become aware of threats in hours, rather than days.

This is particularly true for crop and animal data, especially livestock.

Cybersecurity with regard to pathogen and biotechnology information

Much pathogen and biotechnology data is swapped around government, industry, or academic circles on the cloud or on unsecured servers.

Department of Defense and civilian collaboration

Attribution of a specific biological threat

A hard problem studied in theory by the National Biodefense Analysis and Countermeasures Center, but which other agencies don't necessarily cooperate with in practice.

Medical Countermeasure development

A few major players in research on responding to biological threats: BARDA, PHEMCE, and NIAID. Project Bioshield is a congressional act that funds medical countermeasures (MCMs, e.g., vaccine stockpiles or prophylactic drugs), mostly through BARDA.

These agencies’ funding for the development of MCM goes mostly to early R&D – discovering new possible treatments, countermeasures, etc. Advanced R&D in bringing those newfound options to a usable state, however, is by far the more lengthy and expensive part of the process, and receives much less funding. Compare industry’s 50% of money on advanced development, to the government’s 10-30%. PHEMCE is trying to correct this. Rapid point-of-care diagnostics are especially underexplored.

The government typically hasn’t used innovative or high-risk/high-reward strategies the way the private sector has, but biodefense requires some amount of urgency and risk-taking. Even if the problem were well-understood (it’s not), the response under the current regime wouldn’t be clear.

The government has managed to produce viable MCMs quickly at times, as in Operation Desert Storm or the 2014 Ebola outbreak (when three vaccines and one therapeutic were pushed from very early stages to clinical development in less than three months.)

Certainly, the government isn't the same as private industry – the "surge model" of MCM development wouldn't be effective for a business, but it has been a successful strategy for the government in the past. MCM development is commercially risky, and the federal government is the only actor that can incentivize it.

That said, BARDA has efficiently partnered with industry in the past, pushing twelve new MCMs into available use for six billion dollars. Normally, bringing a single drug to the commercial market costs over two billion. Twelve MCMs are far from enough, but this proves that the partnership model is feasible. Project Bioshield nonetheless faces low funding, which is confusing given its relative success, bipartisan support, and a sustained threat.
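(To spell out the comparison – my own arithmetic from the two figures above, not the panel's:

```python
# Rough comparison implied by the paragraph above (my arithmetic, not the panel's).
barda_total = 6e9    # dollars spent through the BARDA partnerships
mcms_produced = 12
typical_cost = 2e9   # typical cost to bring one drug to the commercial market

print(barda_total / mcms_produced)         # ~$500M per MCM via partnership
print(mcms_produced * typical_cost / 1e9)  # ~$24B at the typical per-drug cost
```

By these rough numbers, the partnership model delivered MCMs at roughly a quarter of the typical per-drug cost.)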

Other notes from the panel

Research suggests that in the event of a catastrophic pandemic, emergency service providers are especially at risk, and only likely to help respond if they believe that they and their families are sufficiently protected – e.g. with vaccines, personal protective equipment, or other responses. EMS providers only have these now for, say, the flu and HIV, and not rarer diseases (with different protective equipment needs) that could be used in an attack. Since much bioterrorism knowledge is classified, it would also be difficult to get it into the hands of EMS providers. This is also true for hospital preparedness.

The Strategic National Stockpile is the nation’s stockpile of medical countermeasures (MCM) to biological threats. Existing MCM response architecture doesn’t have centralized leadership, goals, funding, coordination, or imagination for non-standard possible scenarios, which is, well, an issue. There aren’t clinical guidelines for MCM use from the CDC, and there isn’t a solid way to deliver them to anyone who might need them. On the plus side, a few places like New York City have demonstrated that their EMS providers can effectively distribute MCMs.

The Select Agent Program (SAP) is the primary federal tool to prevent misuse of pathogens and toxins. It only names agents; it doesn't fully address risks, approaches, ensuring that standards are met, or its own transparency. Synthetic biology has also expanded since the program's creation, and the SAP hasn't been updated in response. Its actual ability to improve security is also in doubt.

The Biological Weapons Convention and biorisk across the globe

International law meets federal policy in the 1972 Biological Weapons Convention, in which 178 signatory nations agreed never to acquire or retain microbial or other biological agents or toxins as weapons. A major shortcoming of the convention is that it lacks a verification system and clear protocols for distinguishing peaceful from non-peaceful possession of biological agents. The 5 signatory nations mentioned at the top of this section are in fact suspected of violating the convention.

Emerging diseases, especially zoonoses, often come from developing countries – especially their urban areas – which tend to lack strong human and animal health infrastructure. The US has the potential to assist the WHO and OIE with public health resources for resource-strapped areas.

About the report

For the solutions proposed by the Blue Ribbon Study Panel, you can read the entire report, or you could ask me for my 25-page summary (which is, admittedly, not much of a summary.) The short version is that they propose a unified strategy and budget addressing all of the above specific issues, put in a well-organized structure under the ultimate control of the office of the Vice President. They made 46 specific policy recommendations.

The Zika pandemic happened, and the response continued to lack coordination in the same ways the Blue Ribbon Study Panel described for past events.

Al-Qaeda and ISIL have both been found with plans and materials to create and use bioweapons.

The 2015 Federal Select Agent Program annual report described 233 occupational exposures or releases of select agents or toxins from laboratories, demonstrating that biocontainment needs improvement.

The US attended the 8th Biological Weapons Convention (BWC) Review Conference in November 2016. The ambassador attending, Robert Wood, wrote a report criticizing the Convention nations for failing to come to strong consensus or create solid strategies.

As of a December 2016 follow-up report, 2 of the 46 specific recommendations were completed (both of them involving giving full funding to pre-existing projects), and partial progress was made on only 17 of the 46.

The Senate bill is interesting in itself, and it suggests a possible anti-biorisk action if you live in the US: helping to get it passed. The biodefense strategy bill appears to be a step toward filling a major gap in the US’ biodefense plan, and I can’t see major negative externalities from it. The straightforward next action is contacting your senators and asking them to support the bill.

We live in a rather pleasant time in history where biotechnology is blossoming, and people in general don’t appear to be using it for weapons. If the rest of human existence can carry on like this, that would be great. In case it doesn’t, we’re going to need back-up strategies.

Here, I investigate some up-and-coming biological innovations with a lot of potential to help us out. I kept a guiding question in mind: will biosecurity ever be a solved problem?

If today’s meat humans are ever replaced entirely with uploads or cyborg bodies, biosecurity will be solved then. Up until then, it’s unclear. Parasites have existed since the dawn of life – we’re not aware of any organism that doesn’t have them. With engineered diseases and engineered defenses, we’ve left the billions-of-years-old arms race for a newer and faster-paced one, and we don’t know where an equilibrium will fall yet. Still, since the arrival of germ theory, our species has found a couple of broad-spectrum medicines that have significantly reduced the threat from disease: antibiotics and vaccines.

What technologies are emerging now that might fill the same role in the future?

Phage therapy

What it is: Viruses that attack and kill bacteria.

What it works against: Bacteria.

How it works: Bacteriophages are bacteria-specific viruses that have been around since, as far as we can tell, the dawn of life. They occur in nature in enormous variety – it’s estimated that for every bacterium on the planet, there are ten phages. If you get a concentrated stock of bacteriophages specific to a given bacterium, they will precisely target and eliminate that strain, leaving other bacteria intact. They’re used therapeutically in humans in several countries, and are extremely safe.

Biosecurity applications: It’s hard to imagine even a cleverly engineered bacterium that’s immune to all phages. Maybe if you engineered a bacterium with novel surface proteins, it wouldn’t have phages for a short window at first – but wait a while, and I’m sure they’ll come. No bacterium in nature, as far as we’re aware, is free of phages. Phages have been doing this for a very, very long time. Phage therapy is not approved for wide use in the US, but it has been established as safe and quite effective. A small dose of phages can have a powerful impact on an infection.

Current constraints: Lack of research. There is very little precedent for using phages in the US, although this may change as researchers hunt for alternatives to increasingly ineffective antibiotics.

Choosing the correct phage for therapeutics is something of an art form, and phage therapy tends to work better against some kinds of infections than others. Also, bacteria will evolve resistance to specific phages over time – but once that happens, you can just find new phages.

DRACO

What it is: An engineered antiviral protein (the name stands for Double-stranded RNA Activated Caspase Oligomerizer) that selectively kills virus-infected cells.

What it works against: Viruses. (Specifically, double-stranded RNA, single-stranded RNA, and double-stranded DNA viruses (dsRNA, ssRNA, and dsDNA) – which covers most human viruses.)

How it works: Cells infected with dsDNA, dsRNA, or ssRNA viruses all produce long stretches of double-stranded RNA at some point during viral replication. Human cells make dsRNA occasionally, but it’s quickly cleaved into handy little chunks by the enzyme Dicer. These short dsRNAs then go about regulating gene expression in the cell. (Dicer also cuts up incoming long dsRNA from viruses.)

DRACO is a fusion of several proteins that, in concert, goes a step further than Dicer. It has two crucial components:

A protein domain that recognizes and binds the long dsRNA produced by viruses

A protein domain that triggers apoptosis when several bound DRACO molecules come together
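To make that two-part logic concrete, here’s a toy sketch in Python. This is not real molecular biology – DRACO is a protein acting on molecules, not a program – and the length threshold and data layout are illustrative assumptions, but it captures the decision DRACO effectively implements inside a cell:

```python
# Toy model of DRACO's two-domain logic. Everything here is illustrative.

LONG_DSRNA_THRESHOLD = 30  # healthy cells' dsRNA is Dicer-cleaved into ~21-23 bp pieces

def cell_contains_long_dsrna(rna_molecules):
    """Domain 1 (detection): does any dsRNA exceed the length
    characteristic of viral replication?"""
    return any(m["double_stranded"] and m["length"] > LONG_DSRNA_THRESHOLD
               for m in rna_molecules)

def draco(rna_molecules):
    """Domain 2 (effector): trigger apoptosis only in cells flagged by domain 1."""
    if cell_contains_long_dsrna(rna_molecules):
        return "apoptosis"  # infected cell self-destructs, halting viral replication
    return "no action"      # healthy cell is left alone

healthy = [{"double_stranded": True, "length": 22}]    # short, Dicer-processed fragments
infected = [{"double_stranded": True, "length": 5000}]  # long viral replication intermediate

print(draco(healthy))   # -> no action
print(draco(infected))  # -> apoptosis
```

The key selectivity is by length: since Dicer keeps a healthy cell’s dsRNA short, only infected cells trip the trigger.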

Biosecurity applications: The viral sequences it recognizes are pretty broad, and presumably it wouldn’t be hard to generate additional recognition sequences for arbitrary sequences found in any target virus.

Current constraints: Delivering engineered proteins intracellularly is a very new technology. We don’t know how well it works in practice.

DRACO, specifically, is extremely new. It hasn’t actually been tested in humans yet, and may encounter major problems in being scaled up. It might also be relatively easy for viruses to evolve means of evading DRACO. I doubt it would be trivial for a virus to avoid using long stretches of dsRNA altogether, but a virus could evolve away from the targeted sequences (less concerning, since new targeting sequences could be used), inactivate some part of the protein (more concerning), or modify its RNA in some way to evade the protein. And even if resistance is unlikely to evolve on its own, it’s possible to engineer resistant viruses.

On a meta level, DRACO’s inventor made headlines when his NIH research grant ran out and he turned to Kickstarter to fund his research. Lack of funding could end this research in the cradle. On a more meta level, if other institutions aren’t leaping to fund DRACO research, experts in the field may not see much potential in it.

Programmable RNA vaccines

What it is: RNA-based vaccines that could, in theory, be created from nothing but the genetic code of a pathogen.

What it works against: Just about anything with protein on its outside (viruses, bacteria, parasites, potentially tumors).

How it works: An RNA sequence is made that codes for some viral, bacterial, or other protein. Once the RNA is inside a cell, the cell translates it and expresses the protein. Since it’s not a standard host protein, the immune system recognizes and attacks it, effectively creating a vaccine for that molecule.

The idea for this technology has been around for 30-odd years, but the MIT team behind this work was the first to package the RNA in a branched, virus-shaped structure called a dendrimer (which can actually enter and function in the cell).
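The central step – pathogen gene in, antigen-coding mRNA out – is simple enough to sketch. The sequence below is made up, the codon table is a tiny subset, and real mRNA vaccine design also involves codon optimization, untranslated regions, capping, and a delivery vehicle; this is just the core idea:

```python
# Toy sketch: from a pathogen gene to vaccine mRNA to the antigen a cell expresses.

CODON_TABLE = {
    "AUG": "M", "GCU": "A", "UUU": "F", "GGA": "G", "UAA": "*",  # tiny subset
}

def transcribe(coding_dna):
    """DNA coding strand -> mRNA (T becomes U)."""
    return coding_dna.replace("T", "U")

def translate(mrna):
    """Read codons until a stop codon; return the peptide the cell would express."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i+3]]
        if amino_acid == "*":
            break
        peptide.append(amino_acid)
    return "".join(peptide)

antigen_gene = "ATGGCTTTTGGATAA"         # hypothetical pathogen surface-protein gene
vaccine_mrna = transcribe(antigen_gene)   # 'AUGGCUUUUGGAUAA' - what the dendrimer delivers
print(translate(vaccine_mrna))            # 'MAFG' - the foreign protein the immune system sees
```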

Biosecurity applications: Sequencing a pathogen’s genome should be quite cheap and quick once you get a sample of it. An associate professor claims that vaccines could be produced “in only seven days.”

Current constraints: Very new technology. May not actually work in practice like it claims to. Might be expensive to produce a lot of it at once, like you would need for a major outbreak.

Chemical antivirals

What it is: Compounds that are especially effective at destroying viruses at some point in their replication process, and can be taken like other drugs.

What it works against: Viruses

How it works: Conventional antivirals are generally tested and targeted against specific viruses.

The class of drugs called thiazolides – particularly nitazoxanide – is effective against not only a variety of viruses, but also a variety of parasites, both helminthic (worms) and protozoan (protists like Cryptosporidium and Giardia). Thiazolides are also effective against bacteria, both gram-positive and gram-negative (including tuberculosis and Clostridium difficile). And nitazoxanide is incredibly safe. This apparent wonderdrug appears to disrupt the creation of new viral particles within the infected cell.

There are others, too. For instance, beta-defensin P9 is a promising peptide that appears to be active against a variety of respiratory viruses.

Biosecurity applications: Something that can treat a wide variety of viruses is a powerful tool against possible threats. It doesn’t have to be tailored to a particular virus – you can try it out and go.

That said, using a single compound drastically increases the odds that a virus will evolve resistance. In current antiviral treatments, patients are usually hit with a cocktail of antivirals with different mechanisms of action, to reduce the chance of a virus developing resistance to any of them.
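A back-of-envelope calculation shows why cocktails work so well. The mutation probability and virion count below are invented for illustration; the point is just that resisting several independent drugs at once requires multiplying small probabilities:

```python
# Why cocktails suppress resistance. All numbers are illustrative assumptions.
mutation_prob = 1e-6   # chance one replication yields a mutation resisting one drug
virions = 1e10         # new viral genomes produced over the course of an infection

for drugs in (1, 2, 3):
    per_virion = mutation_prob ** drugs        # must resist every drug simultaneously
    expected_resistant = virions * per_virion  # expected number of resistant virions
    print(f"{drugs} drug(s): ~{expected_resistant:.0e} resistant virions expected")

# 1 drug(s): ~1e+04 -> resistance essentially guaranteed
# 2 drug(s): ~1e-02 -> unlikely
# 3 drug(s): ~1e-08 -> effectively impossible without sequential evolution
```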

The space of new antivirals seems promising, but they won’t solve viruses any more than antibiotics have solved bacterial infections – which is to say, they might help a lot, but they will need careful shepherding and combination with other tactics to avoid a crisis of resistance. Viruses tend to evolve more quickly than bacteria, so resistance will emerge much faster.

Gene drives

What it is: Genetically altering organisms to spread a certain gene ridiculously fast – such as a gene that drives a species to extinction, or renders it unable to carry a certain pathogen.

Biosecurity applications: Gene drives have been in the news lately, and they’re a very exciting technology – not just for treating some of the most deadly diseases in the world. To see their applications for biosecurity, we have to look beyond standard images of viruses and bacteria. One possible class of bioweapon is a fast-reproducing animal – an insect or even a mouse, possibly genetically altered, which is released into agricultural land as a pest, then decimates food resources and causes famine.

Another is the release of pre-infected vectors. This has already been done: Japan’s infamous Unit 731 used hollow shells to disperse fleas carrying bubonic plague over Chinese villages. Once you have an instance of the pest or vector, you can sequence its genome, design a genetic modification, and insert the modification along with the gene drive sequences. This can either wipe the pest out or make it unable to carry the disease.

Current constraints: A gene drive hasn’t actually been released into the wild yet. It may be relatively easy for organisms to evolve strategies around a gene drive, or for the drive genes to spread beyond the target population. And by the time a single gene drive – say, against malaria – is released, it will probably have been studied deeply for safety (both for direct effects on humans and for not catastrophically altering the environment) in that particular case. The idea of a gene drive released on short notice is, well, a little scary. We’ve never done this before.

Additionally, there are currently a lot of objections to and fears around gene drives in society, and the idea of modifying ecosystems and things that might come into contact with people isn’t popular. Given the enormous potential good of gene drives, we need to be very careful to avoid provoking public backlash against them.

Finding the right modification to make an organism unable to carry a pathogen may be complicated and take quite a while.

Gene drives act on the pest’s time, not yours. Depending on the generation time of the organism, it may be quite a while before you can A) rear enough of the modified organism to release productively, and B) wait while the organism reproduces and spreads the modified gene through enough of the population to have an effect.
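To get a feel for the spread step, here’s a minimal deterministic sketch of a homing gene drive, assuming random (Hardy-Weinberg) mating and a single homing-efficiency parameter. The release fraction and efficiency are illustrative, and real drives face fitness costs, resistance alleles, and population structure that this ignores:

```python
# Toy model of a homing gene drive spreading through a population.

def generations_to_spread(p0, homing_efficiency, threshold=0.99, max_gens=10_000):
    """Generations until the drive allele frequency passes `threshold`.

    In drive/wild-type heterozygotes, homing converts the wild-type allele
    with probability e, so heterozygotes transmit the drive allele with
    probability (1 + e) / 2 instead of the Mendelian 1/2.
    """
    p, gens = p0, 0
    while p < threshold and gens < max_gens:
        hetero = 2 * p * (1 - p)  # Hardy-Weinberg heterozygote frequency
        p = p * p + hetero * (1 + homing_efficiency) / 2
        gens += 1
    return gens if p >= threshold else None

print(generations_to_spread(0.001, homing_efficiency=0.95))  # 13 generations from a 0.1% release
print(generations_to_spread(0.001, homing_efficiency=0.0))   # None: a neutral Mendelian allele stays rare
```

A dozen-odd generations is fast for a mosquito (months) and very slow for a large vertebrate pest (decades) – which is the “pest’s time, not yours” point above.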

Therapeutic antibodies

What it is: Concentrated stocks of antibodies similar to the ones produced in your own body, specific to a given pathogen.

What it works against: Most pathogens, some toxins, cancers.

How it works: Antibodies are proteins produced by B-cells as part of the adaptive immune system. Part of the protein attaches to a specific molecule that identifies a virus, bacterium, toxin, etc. The rest of the molecule acts as a ‘tag’, showing other cells in the adaptive immune system that the tagged thing needs to be dealt with (lysed, phagocytosed, disposed of, etc.).

Biosecurity applications: Antibodies can be found and used therapeutically against a huge variety of things. The response is effectively the same as your body’s own – as though you’d been vaccinated against the pathogen or toxin in question – but it can be successfully administered after exposure.

Current constraints: Currently, while therapeutic antibodies are used in a few cases like snake venom and tumors, they’re extremely expensive. Snake antivenom is taken from the blood serum of cows and horses, while more finicky monoclonal therapeutics are grown in tissue culture. Raising entire animals for small amounts of serum is pricey, as are the nutrients used for tissue culture.

One possible answer is engineering bacteria or yeast to produce antibodies. These could grow antibodies faster, cheaper, and more reliably than cell culture. This is under investigation – E. coli can’t glycosylate proteins correctly, but that ability can be added with genetic engineering, and anyway, yeasts can already do it. The promise of cheap antibody therapy is very exciting, and more basic research in cell biology will get us there faster.

[This post has also been published on the Global Risk Research Network, a group blog for discussing risks to humanity. Take a look if you’d like more excellent articles on global catastrophic risk.]

Several times in evolutionary history, the arrival of an innovative new evolutionary strategy has led to a mass extinction, followed by a restructuring of biota and new dominant life forms. This may pose an unlikely but possible global catastrophic risk in the future, in which a spontaneous evolutionary strategy (like a new biochemical pathway or feeding strategy) becomes wildly successful and leads to extreme climate change and die-offs. This is also known as the ‘biotic replacement’ hypothesis of extinction events.

1. Biotic replacement in past extinctions

2. Is this still a possible risk?

3. Risk factors from climate change and synthetic biology

4. The shape of the risk

5. What next?

Identifying specific causes of mass extinction events may be difficult, especially since mass extinctions tend to be quickly followed by the expansion of previously less successful species into new niches. A specific evolutionary advantage might be considered the cause when either no other major physical disruptions (asteroids, volcanoes, etc.) were occurring, or when our record of such events doesn’t fully explain the extinctions.

1. Biotic replacement in past extinctions

There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. I outline these below, as well as four other extinction events.

Great oxygenation event

Cyanobacteria became the first microbes to produce oxygen (O2) as a waste product, and began forming colonies 200 million years before the extinction event. O2 was absorbed into dissolved iron or organic matter, and the die-off began when these naturally occurring oxygen sinks became saturated, and toxic oxygen began to fill the atmosphere.

The event was followed by die-offs, massive climate change, permanent alteration of the earth’s atmosphere, and eventually the rise of aerobic organisms.

End-Ediacaran extinction

The Ediacaran period was filled with a variety of large, autotrophic, sessile organisms of somewhat unknown heritage, known today mostly from fossil evidence. Recent evidence suggests that one explanation for their disappearance is the evolution of animals, able to move quickly and re-shape ecosystems. This resulted in the extinction of the Ediacaran biota, and was followed by the Cambrian explosion, in which animal life spread and diversified rapidly.

Late Devonian extinction

Both modern plant seeds and the modern plant vascular system developed in this period. Land plants grew significantly as a result, now able to transport water and nutrients higher more efficiently – with maximum heights rising from 30 cm to 30 m. Two things would have followed:

The increase in soil produced more weathering of rocks, which released ionic nutrients into rivers. The higher nutrient levels would have increased plant growth, and then plant death, in the oceans, resulting in mass anoxia.

With more carbon locked up in plant matter, lower atmospheric carbon dioxide would have cooled the planet.

Permian-Triassic extinction

96% of marine species and 70% of land vertebrate species went extinct. 57% of families and 83% of genera became extinct.

One hypothesis explaining the Permian-Triassic extinction posits that an anaerobic methanogenic archaeon, Methanosarcina, developed a new metabolic pathway allowing it to metabolize acetate into methane, leading to exponential reproduction and the consumption of vast amounts of oceanic carbon. Volcanic activity around the same time would have released large amounts of nickel, a crucial but rare cofactor in Methanosarcina’s enzymatic pathway.

Quaternary and Holocene extinction events

The evolution of human intelligence and human civilization has led to mass climate alteration by humans. Particular adaptations within human society (i.e., agriculture and the use of fossil fuels) could be considered here, but in terms of this hypothesis, the evolution of human intelligence and civilization is the driving evolutionary innovation.

Minor extinction events

Any single species that goes extinct due to a new disease can be said to have become extinct due to another organism’s innovative adaptation. These cases are less well described as “biotic replacement”, because the new pathogen won’t be able to replace its extinct hosts, but an evolutionary event still caused the extinction. A new disease may also attack the sole or primary food source of an organism, leading to its extinction indirectly.

2. Is this still a possible risk?

It seems unlikely that all possible disruptive evolutionary strategies have already happened. Disruptive new strategies are rare – while billions of new mutations arise every day, any new gene must meet stringent criteria in order to spread: it must actually be expressed, be passed on to progeny, immediately convey a strong fitness benefit to its bearer, preserve any vital function of the old version of the gene, and be supported by the organism’s other genes and environment – and its bearer must avoid being killed by random chance before it has the chance to reproduce. For instance, an unusually efficient new metabolic pathway isn’t going to succeed if it’s in a non-reproducing cell, if its byproducts are toxic to the host organism, if its host can’t access the food required for the process, or if its host happens to be born during a drought and starves to death anyway.

Environmental conditions that make a pathway more or less likely to be ridiculously successful, meanwhile, are constantly changing. Given the rarity of ridiculously successful genes, it seems foolhardy to believe that evolution up till now has already picked all the low-hanging fruit.

How worried should we be? Probably not very. The major extinction events listed above seem to be spaced 100-200 million years apart, suggesting a 1-in-100,000,000 chance of such an event beginning in any given year. For comparison, NASA estimates that asteroids large enough to cause major extinction events strike the earth every 50-100 million years. These threats are possibly of the same order of magnitude.

(This number requires a few caveats: it’s a high estimate, assuming that evolutionary advantages were a major factor in all cases. An advantage that “starts” in one year may also take millions of years to alter the biosphere or climate catastrophically. And once in 100 million years is an average – there’s no reason to believe that disruptive evolutionary events, or asteroid strikes for that matter, occur at regular intervals.)
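For concreteness, here is the arithmetic behind that figure, treating each year as an independent trial (itself an assumption, per the caveats above):

```python
# Converting "once every ~100 million years" into probabilities.
recurrence_years = 100e6           # assumed average interval between such events
annual_prob = 1 / recurrence_years
print(annual_prob)                 # 1e-08, the 1-in-100,000,000 figure above

# Chance of at least one event over a horizon of n years:
for horizon in (1_000, 1_000_000, 100_000_000):
    p = 1 - (1 - annual_prob) ** horizon
    print(f"{horizon:>11,} years: {p:.2e}")
# ~1e-05 over a millennium, ~1e-02 over a million years, ~0.63 over 100 My.
```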

On a smaller scale, entire species are occasionally wiped out by a single disease. This is more likely to happen when species are already stressed or in decline. Data on how often this happens, or what fraction of extinctions are caused by a novel disease, is hard to find.

3. Risk factors from climate change and synthetic biology

Two risk factors are worth noting which may increase the odds of a biotic replacement event – climate change and synthetic biology.

Historically, catastrophic evolutionary innovation seems to follow other massive climate disruption – as in the Permian-Triassic hypothesis, where the innovation followed volcanic eruptions. A change in conditions may select for innovative new strategies that quickly take over and produce much more disruption than the instigating geological event.

While the specific nature of the next disruptive evolutionary innovation may be nigh-impossible to predict, this suggests that we should give more credence to environmental alteration as a threat – via climate change, volcanic eruptions, or asteroids – since changing environments will select for disruptive new alleles (or resurface preserved strategies). This means that a minor catastrophic event could snowball into a globally catastrophic or existential threat.

The other emerging source of alleles as-of-yet unseen in the environment comes from synthetic biology, as scientists are increasingly capable of combining genes from distinct organisms and designing new molecular pathways. While genes crossing between wildly different organisms is not unheard of in nature, the increased rate at which this is being done in the laboratory, and the fact that an intentional hand is selecting for viability and novelty (rather than natural selection and random chance), both imply some cause for alarm.

A synthetic organism designed for a specific purpose may disperse from its intended environment and spread widely. This is probably an especial risk for organisms using completely synthetic, novel pathways unlikely to have evolved in nature, rather than previously evolved genes – otherwise, the naturally occurring genes would probably already have seized the low-hanging evolutionary fruit and expanded into the available niches.

4. The shape of the risk

How does this risk compare to other existential risks? It is not especially likely to occur, as described in Part 2. The precise shape or cause of the risk is harder to determine than, say, an asteroid strike. Also, as opposed to asteroid strikes or nuclear wars, which have immediate catastrophic effects, evolutionary innovations involve significant time delays.

Historically, two time delays appear to be relevant:

Time for the evolution to become widespread

Presumably, this is quicker in organisms that disperse and reproduce more quickly. E.g., it could be fairly quick for an oceanic bacterium with a short generation cycle, but slow for a species like ours – compare the 180,000 years between the first appearance of modern humans and their eventual spread to the Americas.

Time between the organism’s dispersal and the induction of a catastrophe

E.g., during the great oxygenation event, it took 200 million years from the evolution of oxygen-producing cyanobacteria for the available oxygen sinks to fill up and a crisis to occur. (At least some of this time was the period required for cyanobacteria to diversify and become commonplace.)

During the Azolla event, Azolla ferns accumulated for 800,000 years, causing steady climate change. The modern threat from anthropogenic global warming is on a much steeper curve.

What are the actual threats to life?

Climate change

The great oxygenation event and the Permian-Triassic extinction hypothesis involve the dispersal of a microbe that induces rapid, extreme climate change.

Other events such as volcanoes erupting may change the environment such that a new strategy becomes especially successful, as in the Permian-Triassic extinction event.

Faster, stronger, cleverer predation

The Ediacaran extinction event and the Holocene extinction event involved the dispersal of an unprecedentedly capable predator – animals and humans, respectively.

This seems unlikely to be a current risk. The risk from runaway artificial intelligence somewhat resembles this concern.

Death from disease

Any event in which a novel disease causes a species to go extinct has a direct impact. Additionally, a disease might cause one or more major food sources to go extinct (for humans or animals.)

Globalization and global trade have increased the risk of a novel disease spreading worldwide. This also mirrors current concerns over engineered bioweapons.

5. What next?

Disruptive evolutionary innovation is problematic in that there don’t appear to be clear ways of preventing it – evolution has been indiscriminately optimizing away for billions of years, and we don’t appear to be especially able to stop it. Building civilization-sustaining infrastructure that is more robust to a variety of climate change scenarios may increase our odds of surviving such a catastrophe. Additionally, any such disruptive event is likely to happen over a long period of time, meaning that we could likely mitigate or prepare for the worst effects. However, evolutionary innovation hasn’t been explored or studied as an existential risk, and more research is needed to clarify the magnitude of the threat, or which – if any – interventions are possible or reasonable to study now.

Questions for further study:

How common are extinction events due to disruptive evolutionary innovation?

What factors make these evolution events more likely?

How often do species go extinct due to single disease outbreaks?

Can small-scale models help us improve our understanding of the likelihood of global warming inducing “runaway” scenarios involving microbial evolution?

What man-made environmental changes could potentially lead to runaway microbial evolution?