As part of that meeting, I am presenting public comment on the ethics of the deliberative process. A copy of the handout I provided to the members of the NSABB—updated to correct a couple of typographical errors—is available here.

You can also view the webcast of my comments live. I am not sure when I’ll be speaking—the public comment sessions are planned for 2:00pm-2:30pm, and again at 3:30pm-3:50pm. However, if you want to watch me give comment (or the rest of the meeting) the webcast is available here.

[Update: someone at The Atlantic confirmed for me that this was not so much their article as one run “as part of our partnership with the site Defense One.” Defense One is part of the Atlantic Media group, which owns both publications. Given that Tucker is the science editor for Defense One—where the piece was first published—it isn’t totally clear to me who edited his work for content, other than… himself? Transparency and accountability, anyone?]

Patrick Tucker has a piece in The Atlantic titled “The Next Manhattan Project.” It concerns the current dual-use gain-of-function saga—now the so-called deliberative process about biosafety. It is, in short, a piece of ahistorical fiction. Here’s why—or rather, here is one list of reasons why.

1) “In January 2012, a team of researchers from the Netherlands and the University of Wisconsin published a paper in the journal Science about airborne transmission of H5N1 influenza, or bird flu, in ferrets.”

False. It was two papers: one in Nature by University of Wisconsin–Madison researchers; one in Science by Dutch researchers. When a writer for The Atlantic can’t Google something that happened three years ago, you can bet the previous century is going to be a challenge.

2) Eschewing the history behind current events: “[the 2012 paper (should be papers)] changed the way the United States and nations around the world approached manmade biological threats.”

False. The controversy—which started in 2011, not 2012—was a continuation of a debate, by then a decade old, about what is now called dual-use research of concern. That debate started in 2001, when a team of Australian researchers published work describing the creation of (in VERY simplistic terms) a super-poxvirus. There was a CIA report, and an NAS committee. Oh, and does anyone remember Amerithrax?

3) “it solved the riddle of how H5N1 became airborne in humans.”

False. Hilariously, the standard defense of the 2012 studies (remember, The Atlantic: plural) is that they don’t show how H5N1 can transmit via aerosolized respiratory droplets in humans. Vincent Racaniello commonly sums this up as “ferrets are not people.” There’s a complexity to animal models that doesn’t lend itself to those kinds of easy conclusions. Solving that riddle wasn’t the end result of these papers (or the papers that followed), and it certainly wasn’t the intent of the researchers.

4) Eschewing the reasons behind the Manhattan Project.

The Manhattan Project has a complex history. A group of independent, politically minded—largely emigre—scientists; a world on the edge of war; a novel and particular scientific discovery with a potentially catastrophic outcome; and a belligerent power (well, powers—the Japanese and Russians had programs, in addition to the Nazis) the scientists had good reason to suspect was pursuing said technology.

The 2012 story has almost no parallel with these contexts—much less an organized, clearly defined set of ends, or a unilateral mandate with which to achieve those ends. The existential threat in the background of the Manhattan Project is absent here—there is no Nazi power. If we truly considered H5N1 highly pathogenic avian influenza an existential threat, our public health systems and scientific endeavors would look totally different.

5) Misrepresenting the classified complex.

Although the classified complex is the single comparison Tucker draws between the 2012 studies (plural) and the Manhattan Project, he doesn’t discuss it in anything more than a passing comment. He boils the entire conversation down to “but now the Internet makes classifying things hard.”

Never mind that the classified community was remarkably successful at its job, to the point where it invented ways to create information sharing within an environment of total secrecy. The classified community continues to do its work today—just because we don’t pay much attention to Los Alamos, Oak Ridge, or Lawrence Livermore doesn’t mean they don’t exist.

Tucker also misses some of the human factors that would actually make his claims interesting. Between Fuchs and the Rosenbergs, ye olde security could be compromised in much the same way as it is today: too much trust of the wrong people, and a bit of carelessness inside the confines of a community that thinks itself insulated. If anything, the current debate about dual-use is more about misplaced trust and overconfidence than it is about nukes.

***

These are only five of the many problems with Tucker’s article. What bothers me most is that the headline grants a legitimacy to one perspective on the current debate that simply isn’t warranted. These scientists aren’t racing against the clock to avert a catastrophe—and if they are, their methods are questionable at best. The current debate is far more nuanced, and far less certain, than the conversation that went down on Long Island in 1939. And that’s saying something, because the debate then was pretty damned nuanced.

What would the Next Manhattan Project really look like? Lock the best minds in biology in a series of laboratories across the country—or world, that’s cool too. Give them at least $26 billion. And give them charge of creating a cheap, easily deployable, universal flu vaccine.

That’d be great. Or, at least, it’d be much better than The Atlantic’s piece from yesterday.

As Marc Lipsitch states on the Cambridge Working Group site, the CWG reflects a consensus; my personal views do not reflect the views of the group. When you build a consensus, you often don’t end up with everything you wanted. When a group of very different people forms around a common issue, the outcomes that get devised are heavily moderated by the competing priorities and backgrounds of the participants. Sometimes that leads to stagnation.[1] Other times, it leads to a more reasonable and practical set of priorities.

In the case of the Cambridge Working Group, in which I participated as a founding member last month, our Consensus Statement on the Creation of Potential Pandemic Pathogens (PPPs) was the product of deliberation on the types of steps the eighteen founding members could agree on. For those of you who are just arriving, PPP studies involve the creation of a novel pathogen that could, if released, cause a disease pandemic. In my line of work, PPP studies are a type of “gain of function” study, and are associated with dual-use research—scientific research that can be used to benefit or harm humanity. When it comes to PPP studies, the CWG stated one ultimate goal:

Experiments involving the creation of potential pandemic pathogens should be curtailed until there has been a quantitative, objective and credible assessment of the risks, potential benefits, and opportunities for risk mitigation, as well as comparison against safer experimental approaches.

And one proximate goal in the pursuit of that ultimate goal:

A modern version of the Asilomar process, which engaged scientists in proposing rules to manage research on recombinant DNA, could be a starting point to identify the best approaches to achieve the global public health goals of defeating pandemic disease and assuring the highest level of safety.

In short, we want to ask a question: what are the risks and benefits of PPP studies? To ask that question, we want to convene a meeting. And though we’ve no ability to stop them, we’d really like it if scientists could just, I don’t know, not make any new and improved strains of influenza before we have that meeting. Simple, right? Well, I thought so. Which is why I was surprised when a colleague said this:

@neva9257 @rocza I hope you realize that if you shut BSL3/4 labs there will be no scientists trained to study/treat/find cures for Ebola — NewProf1 (@newprof1) August 5, 2014

Wait what?!

Hyperbole is Not Helping

NewProf is right: *if* we shut down (all) BSL-3/4 labs, there would be nowhere (safe) for people to work on dangerous pathogens like Ebola, or to train new people to do the same. The only problem is that no one—that I know of—is saying that.

First: the CWG statement says nothing about shutting down laboratories. As a consensus statement, it is necessarily limited by pragmatic considerations. The CWG calls for a risk assessment. It calls for collecting data. That data collection is focused on PPP studies, and primarily in the context of influenza research. Even if the CWG were looking at Ebola, PPP studies would (I really, really hope) be a very small subset of Ebola research. Of course, NewProf is not concerned only about individual research, but about whole labs:

@neva9257 @rocza That risk assessment would shut down labs. I know its not just you, just wanted to you realize what the CWG would lead to — NewProf1 (@newprof1) August 5, 2014

That is, NewProf claims that a CWG-inspired risk assessment would lead to labs shutting down, which would lead to there being “no scientists trained to study/treat/find cures for Ebola.” But that’s equally ludicrous. A risk assessment of a small set of experiments would be unlikely to shut down an entire field—and if it did, that would be a really bad thing. The risk of that bad thing would—ought to—be something that informs the risk-benefit analysis of research in the life sciences. Regulation that unduly limits the progress of genuinely (or even plausibly) beneficial research, without providing any additional benefit, would be bad regulation.

Grind Your Axe on Your Own Time

What is most frustrating, however, is how mercenary the whole thing feels. If you are concerned about the Ebola virus, you should be concerned that the public health effort to stem the tide of the virus in West Africa is failing. That a combination of poverty, civil unrest, environmental degradation, failing healthcare, traditional practices, and a (historically justified) mistrust of Western healthcare workers is again the perfect breeding ground for the Ebola virus. You shouldn’t be concerned about a risk-benefit analysis that has been advocated for a particular subset of scientific experiments—with a focus on influenza—that may or may not lead to some outcome in the future. Dual-use research and the Ebola virus, right now, have very little to do with each other. If there comes a time when researchers decide they want to augment the already fearsome pathology caused by the virus with, say, a new and improved transmission mechanism, we should definitely have a discussion about that. That, I think it is uncontroversial to say, would probably be a very bad idea.

A Personal View of Moving Forward

I’ve spent the last few days talking about Ebola, primarily on Twitter (and on other platforms whenever someone asks). I’ve not had a lot of time to talk about the CWG’s statement, or my views on the types of questions we need to ask in putting together a comprehensive picture of the risks and benefits posed by PPP studies. So here are a few thoughts, because it is apparently weighing on people’s minds quite heavily.

I don’t know how many high-containment labs are needed to study the things we need to study in order to improve public health. I know Richard Ebright, in the recent Congressional subcommittee hearing on the CDC anthrax lab “incident,” mentioned a figure of 50, but I don’t know the basis on which he made that claim. As such, I, personally, wouldn’t back such a number without more information.

I do know that the question of the risks and benefits of PPP studies—and other dual-use research—has been a decade in the making. The purported benefits to health and welfare of gain-of-function research, time and again, fail to meet scrutiny. Something needs to happen. The next step is an empirical, multi-disciplinary analysis of the benefits and risks of the research. It has to be empirical because we need to ground policy in rigorous evidence. It has to be multi-disciplinary because, first, the question itself can’t be answered by one group; and second, the values into which we are inquiring cover more than one set of interests.

That, as I understand it, is what the CWG is moving towards. That’s certainly why I put my name to the Consensus Statement. I’m coming into that risk-assessment process looking for an answer, not presuming one. I’m not looking to undermine any single field of research wholesale. And frankly, I find the use of the current tragedy in West Africa as an argumentative tool pretty distasteful.

The twists and turns of consensus-building are playing out on a grand scale at the current experts meeting of the Biological and Toxins Weapons Convention in Geneva. My colleagues are participating as academics and members of NGOs at the meeting, and you can follow them at #BWCMX. And yes, I’m terribly sad to not be there. Next time, folks. ↩

Late in the chat, Marissa Evans expressed a desire to know some more about bioethics and bioterror, and I offered to post some links to engaging books on the topic.

The big problem is that there aren’t that many books that specifically deal with bioterrorism and bioethics. There are a lot of amazing books in peace studies, political science, international relations, history, and sociology on bioterrorism. Insofar as these fields intersect with—and practice—bioethics, they are excellent things to read. But a bioethics-specific, bioterror-centric book is a lot rarer.

As such, the readings provided are those that ground the reader in issues that are important to understanding bioterrorism from a bioethical perspective. These include ethical issues involving national security and scientific research, dangerous experiments with vulnerable populations, and the ethics of defending against the threat of bioterror.

The Plutonium Files: America’s Secret Medical Experiments in the Cold War. If you read one book on the way that national security, science, and vulnerable people do not mix, read Eileen Welsome’s account of the so-called “Human Radiation Experiments.” Read about dozens of experiments pursued on African Americans, pregnant women, children with disabilities, and more, in the name of understanding the biological properties of plutonium—the fuel behind atomic bombs. All done behind the great screen of the Atomic Energy Act, because of plutonium’s status as the key to atomic weapons.

Undue Risk: Secret State Experiments on Humans. A book by my current boss, Jonathan D. Moreno, that covers some of the pivotal moments in state experimentation on human beings. The majority of the cases Moreno covers are those pursued in the interests of national security. Particularly in the context of the Cold War, there was a perceived urgent need to marshal basic science in aid of national security. What happened behind the curtain of classification in the name of that security, however, was grim.

This list could be very long, but if I were to pick out a selection of books that I consider essential to my work, these would be among the top of the list.

As an addendum, an argument emerged on the back of the NSTNS chat about whether science is “good.” That’s a huge topic, but it is a really important one for anyone interested in Science, Technology, Engineering, and Mathematics and their intersection with politics and power. As I stated yesterday on Twitter, however, understanding whether “science is good” requires understanding what the “science” bit means. That’s not altogether straightforward.

Giving a recommendation on that issue involves stepping into a large and relatively bitter professional battle. Nonetheless, my first recommendation is always Philip Kitcher’s Science, Truth, and Democracy. Kitcher carefully constructs a model of how agents interact with scientific methods and tools, and in doing so identifies how we should make ethical judgements about scientific research. I don’t think he gets everything right, but that’s kind of a given in philosophy.

So, thousands of pages of reading. You’re welcome, Internet. There will be a test on Monday.

I’ll update later with a link to a Storify that I believe is currently being built around the event. ↩

This is a fantastic addition to the dual-use debate. Too often, stock answers given for the benefits of dual-use are put forward without sustained analysis: things like “will help us make new vaccines,” “will help us with disease surveillance,” or “will raise awareness.” Lipsitch and Galvani have drawn up a roadmap of challenges that advocates of gain-of-function studies—specifically those that deal with influenza—must confront in order to justify the public health benefit of their work. We should hold researchers and funding agencies accountable to this kind of burden of proof when it comes to dual-use research.

Lipsitch and Galvani’s response is also important because it critically addresses the narrative that Fouchier and Kawaoka have woven around their research. This narrative has been bolstered by the researchers’ expertise in virology, but doesn’t meet the standards of biosecurity, science policy, public health, or bioethics analysis. It’s good to see Lipsitch and Galvani push back, and point to inconsistencies in the type of authority that Fouchier and Kawaoka wield.

UPDATE 06/19/14, 16:32: as I posted this, it occurred to me that the diagram Lipsitch and Galvani provide, while useful, is incomplete. That is, Lipsitch and Galvani have—correctly, I believe—illustrated the problems to which dual-use advocates must respond in the domain the authors occupy. These are challenges in fields like virology, biology, and epidemiology.

There are other challenges, however, that we could add to this diagram—public health and bioethical, for a start. It’d be a great, interdisciplinary activity to visualize a more complete ecosystem of challenges that face dual-use research, with an eye to presenting avenues forward that address multiple and conflicting perspectives.

The original title for this piece was “How not to critique in bioethics,” but Kelly pointed out that this episode of TWiV is a case study in how not to go about critiquing anything.

Last Monday I was drawn into a conversation/angry rant about an article by Lynn C. Klotz and Edward J. Sylvester that appeared in the Bulletin of the Atomic Scientists…in 2012. After briefly forgetting one of the cardinal rules of the internet—check the date stamp—I realized the error of my ways, and started to inquire with my fellow ranters, in particular Matt Freiman, about why a 2012 article suddenly had virologists up in arms.

Turns out that the Bulletin article was cited by a study on dual-use authored by Marc Lipsitch and Alison P. Galvani; a study that was the subject of a recent post of mine. The Bulletin article draws from a working paper in which the authors provide an estimate of the number of laboratory accidents involving dangerous pathogens we should expect as a function of hours spent in the laboratory. Lipsitch and Galvani use this figure in their analysis of potential pandemic pathogens (PPPs).
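The arithmetic behind this kind of estimate is simple expected-value reasoning, and a toy version is easy to sketch. The numbers below are placeholders chosen purely for illustration—they are not the figures from the Klotz/Sylvester working paper or from Lipsitch and Galvani’s analysis:

```python
# Toy model of expected laboratory accidents, assuming a constant,
# independent per-lab-year probability of an accident. All inputs are
# illustrative placeholders, not the working paper's estimates.

def expected_accidents(p_per_lab_year: float, labs: int, years: float) -> float:
    """Expected number of accidents across `labs` labs over `years` years."""
    return p_per_lab_year * labs * years

def prob_at_least_one(p_per_lab_year: float, labs: int, years: float) -> float:
    """Probability of at least one accident under the same assumptions."""
    return 1 - (1 - p_per_lab_year) ** (labs * years)

# A hypothetical 0.2% chance per lab-year, across 40 labs, over 10 years:
print(expected_accidents(0.002, 40, 10))            # ≈ 0.8 expected accidents
print(round(prob_at_least_one(0.002, 40, 10), 2))   # ≈ 0.55
```

The point such models make is that even small per-lab-year probabilities accumulate quickly once multiplied across many laboratories and many years—which is exactly why the “accidents as a function of hours in the lab” framing matters to a risk-benefit analysis.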

I’d started writing a blow-by-blow account of the entire segment, but that quickly mushroomed into 5,000-odd words. There is simply too much to talk about—all of it bad. So there’s a draft of a paper on the importance of good science communication on my desk now, which I’ll submit to a journal in the near future. Instead, I’m going to pick up just one particular aspect of the segment that I feel demonstrates the character of TWiV’s critique.

“It’s a bad opinion; that’s my view.”

Despommier, at 58:30 of the podcast, takes issue with this sentence in the PLoS Medicine paper:

The H1N1 influenza strain responsible for significant morbidity and mortality around the world from 1977 to 2009 is thought to have originated from a laboratory accident.

The problem, according to Despommier, is that “thought to have originated” apparently sounds so vague as to be meaningless. This leads to a rousing pile-on conversation in which Despommier claims that he could have just as easily claimed that the 1977 flu came from Middle East Respiratory Syndrome because “he thought it”; he also claims that on the basis of this sentence alone he’d have rejected the article from publication. Finally, he dismisses the citation given in the article as unreliable because it is a review article,[1] and “you can say anything in a review article.”

At the same time, Dove notes that “when you’re on the editorial board of the journal you can avoid [having your paper rejected].” The implication here is that Lipsitch, as a member of the editorial board of PLoS Medicine, must have used that position to get his article to print despite the alleged inaccuracy that has Despommier so riled up. Racaniello notes that “[statements like this are] often done in this opinion–” before being interrupted by Despommier. It’s a common theme throughout the podcast, though, that Lipsitch and Galvani’s article is mere “opinion,” and thus invalid.

Facts first

If he’d done his homework, Despommier would have noted that the review article cited by Lipsitch and Galvani doesn’t mention a lab. What it does say is:

There is no evidence for integration of influenza genetic material into the host genome, leaving the most likely explanation that in 1977 the H1N1 virus was reintroduced to humans from a frozen source.[2]

So Lipsitch and Galvani do make an apparent leap from “frozen source” to “lab freezer.” Despommier doesn’t pick that up. If he had, however, it would have given us pause about whether or not it is a valid move to jump from “frozen source” to “laboratory freezer.”

Not a long pause, however; other sources argue that the source of the 1977 strain is likely to have been a laboratory.[3] The alternative—that the virus survived in Siberian lake ice—was put forward in a 2006 paper (note: after the publication of the review article used by Lipsitch and Galvani), but that paper was found to be methodologically flawed.[4] Laboratory release remains the most plausible answer to date.

The belief that the 1977 flu originated from frozen laboratory sources is widely held. Even Racaniello—at least in 2009—held this view. Racaniello argued that of multiple theories about the origin of the 1977 virus, “only one was compelling”:

…it is possible that the 1950 H1N1 influenza virus was truly frozen in nature or elsewhere and that such a strain was only recently introduced into man.

The suggestion is clear: the virus was frozen in a laboratory freezer since 1950, and was released, either by intent or accident, in 1977. This possibility has been denied by Chinese and Russian scientists, but remains to this day the only scientifically plausible explanation.

So no, there is no smoking gun that confirms, with absolutely unwavering certainty, that the 1977 flu emerged from a lab. But there is evidence: this is far from an “opinion,” and far from simply making up a story for the sake of an argument. Lipsitch and Galvani were right to write “…it is thought,” because a plausible answer doesn’t amount to unshakeable proof—but their claim stands on the existing literature.

Science and policy

The idea that Lipsitch and Galvani’s piece is somehow merely “opinion” is a hallmark of the discussion in TWiV. Never mind that the piece was an externally peer-reviewed, noncommissioned piece of work.[5] As far as TWiV is concerned, it seems that if it isn’t Science, it doesn’t count. Everything else is mere opinion.

But that isn’t how ethics, or policy, works. In ethics we construct arguments, argue about the interpretation of facts and values, and use that to guide action. With rare exception, few believe that we can draw conclusions about what we ought to do straight from an experiment.

In policy, we have to set regulations and guidelines with the information at hand—a policy that waits for unshakeable proof is a policy that never makes it to committee. Is there some question about the true nature of the 1977 flu, or the risk of outbreaks resulting from lapses in BSL-3 laboratory safety? You bet there is. We should continue to do research on these issues. We also have to make a decision, and the level of certainty the TWiV hosts seem to desire isn’t plausible.

Authority and Responsibility

This podcast was irresponsible. The hosts, in their haste to pan Lipsitch and Galvani’s work, overstated their case and then some. Dove also accused Lipsitch of research misconduct. I’m not sure what the rest of the editors at PLoS Medicine think of the claim—passive aggressive as it was—that one of their colleagues may have corrupted the review process, but I’d love to find out.

The podcast is also deeply unethical, because of the power in the platform. Racaniello, in 2010, wrote:

Who listens to TWiV? Five to ten thousand people download each episode, including high school, college, and graduate students, medical students, post-docs, professors in many fields, information technology professionals, health care physicians, nurses, emergency medicine technicians, and nonprofessionals: sanitation workers, painters, and laborers from all over the world.[6]

What that number looks like in 2014, I have no idea. I do know, however, that a 5,000–10,000 person listenership, from a decorated virologist and his equally prestigious colleagues, is a pretty decent haul. That doesn’t include, mind you, the people who read Racaniello’s blog, articles, or textbook; who listen to the other podcasts in the TWiV family, or follow the other hosts in other fora.

These people have authority, by virtue of their positions, affiliations, exposure, and followings. The hosts of TWiV have failed to discharge their authority with any kind of responsibility.[7] I know the TWiV format is designed to be “informal,” but there’s a marked difference between being informal, and being unprofessional.

Scientists should—must—be part of the conversation about dual-use, as with other important ethical and scientific issues. Nothing here is intended to suggest otherwise. Scientists do, however, have to exercise their speech and conduct responsibly. This podcast should stand as an example of what not to do.

Final Notes

I want to finish with a comment on two acts that don’t feature in Despommier’s comments and what followed, but are absolutely vital to note. The first is that during the podcast, the paper by Lipsitch and Galvani is frequently referred to as “his” paper. Not “their” paper. Apparently recognizing the second—female—author isn’t a priority for the hosts or guests.

Also, Dove and others have used Do Not Link (“link without improving ‘their’ search engine position”) on the TWiV website for both the paper by Lipsitch and Galvani, and the supporting materials. So not only do the hosts and guests of the show feel that the paper is without merit; they believe it to the point that they’d deny the authors—and the journal—traffic. Personally, I think that’s obscenely petty, but I’ll leave that for a later post.

Science needs critique to function. Critique can be heated—justifiably so. But it also needs to be accurate. This podcast is a textbook example of how not to mount a critique.

Terrence McCoy has an article in the Washington Post‘s “Morning Mix” on the 1918-like flu virus gain-of-function study. It provides a bit of extra information beyond the coverage at the Guardian, and is worth a read.

The article, unfortunately, has a terrible headline. A “we need an award for Bad Bioethics Headlines” headline.

The headline reads “Was it ‘crazy’ for this scientist to re-create a bird flu virus that killed 50 million people?” There are some glaring errors, or misinformation, embedded in this headline; errors that, unfortunately, aren’t explicitly dealt with in the content of the article. And the errors, to a certain extent, undercut the seriousness of the work done.

1) Nothing was “re-created.” The 1918 strain of H1N1 influenza was already recreated using reverse genetics—in 2005. That work is also widely considered dual-use research of concern.

The work performed by Kawaoka and his team, however, is not a recreation in the traditional sense—the sense we mean when we talk about piecing together a poliovirus, or synthesizing Spanish flu. Rather, this new research involved piecing together a “1918-like” virus—one whose proteins differ by a few amino acids from the one that emerged almost a century ago—using segments of avian influenza. This wasn’t a recreation; it was creation, pure and simple.

2) The influenza pandemic of 1918 wasn’t really “bird flu” in the conventional sense. Sure, it is likely that the 1918 strain emerged from an avian (and swine) reservoir at some point, but that’s because all 18 known types of hemagglutinin (the “H” in H1N1) and all 11 known types of neuraminidase (the “N” in H1N1) can survive in birds. So all flu has something to do with birds.
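As a quick sanity check on the nomenclature, those counts mean the HxNy naming scheme alone admits 18 × 11 = 198 possible subtype labels:

```python
# Enumerate every possible HxNy subtype label, given the 18 known
# hemagglutinin (H) types and 11 known neuraminidase (N) types.
subtypes = [f"H{h}N{n}" for h in range(1, 19) for n in range(1, 12)]

print(len(subtypes))                # 198
print(subtypes[0], subtypes[-1])    # H1N1 H18N11
```

Not every combination circulates in nature, of course—the point is only that the naming grid is anchored in birds, where all of these H and N types can persist.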

We tend to label viruses as “bird” or “pig” viruses when we’re talking about the most common host.

But that’s definitely not what we typically mean when we talk about “bird flu.” “Avian influenza” is used to describe influenza viruses that arise predominantly or exclusively in birds. What makes H5N1 or H7N9 scary is that they are viruses that predominantly occur in birds but are crossing over to humans. Fortunately, H5N1 hasn’t been terribly successful at this, and H7N9—while more successful—is not at pandemic levels. Yet.

The “recreated” virus isn’t an avian influenza virus in the same way. The influenza that served as its template is ostensibly “human,” or at least “mammalian,” flu—one that, to the best of our knowledge, came about from avian and swine viruses combining to make a human-transmissible superbug. The parts from which this novel strain was stitched together, however, are bits of bird flu.

The point Kawaoka has been trying to sell the world on is that he wanted to find out—for the good of us all, he has claimed—if something like 1918-influenza could emerge from H5N1. Apparently, the answer is yes.

These points all matter because they bear heavily on the “why?!” of this story. This virus didn’t come out of nature; it was made in a lab. And it wasn’t recreated, but outright engineered. As with other gain-of-function studies, proponents like to say that this work will raise awareness about pandemic preparedness, disease surveillance, and so on. I’ve voiced my skepticism about this before.

It also matters because—as the study notes—the virus created isn’t merely as harmful as 1918 influenza; it is more harmful. Kawaoka’s paper notes that the virus his team created is more pathogenic than either an authentic avian influenza virus or the 1918 influenza pandemic strain. We’re not dealing with Spanish influenza: this is a human-created, mammalian-transmissible strain of flu that outperforms the “Mother of All Pandemics” in trials. That’s scary. And while they were only testing the strains on three ferrets at a time—not enough to tell us how this would affect humans in an outbreak—it is certainly enough to give us pause.