This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech.

In the wake of the COVID-19 pandemic, false information has spread widely, leaving confusion and a big question: What can social media platforms, individuals, and governments do about it?

That question was central to “Confronting Viral Disinformation,” the latest web event in Future Tense’s yearlong Free Speech Project series.

Provably false information falls into two broad buckets: misinformation and disinformation. The difference, said Nathaniel Gleicher, head of cybersecurity policy for Facebook, lies in intent. With misinformation, “you don’t know the intent of the actor behind it,” he said, whereas with disinformation, the actor “intended to spread it to deceive.”

In a nutshell, the challenge is that while the Russian Internet Research Agency, a scammer trying to make money, and your hapless great-uncle Bill may all be spreading deceptive information, they represent different problems that require different responses, Gleicher said.

In the context of COVID-19, Gleicher explained, Facebook is grappling with two broad categories of misinformation and disinformation. The first, which Facebook is most focused on, is content that is “provably false, … has been flagged by a global health expert like the WHO and could lead to imminent harm”—for example, the idea that drinking bleach cures coronavirus. That kind of information, once identified, is immediately taken down.

The second category of misinformation and disinformation is more complex, Gleicher said, “because part of it is just people trying to figure out what is happening.” The content in this category is speculative and fuzzy, dealing with questions like the implications and source of the virus and its spread, and what governments are or aren’t doing in its wake. A key part of tackling this second category is ensuring that accurate, authoritative voices are amplified and heard, providing a trusted counterweight to the myths and conspiracy theories. Think the World Health Organization, public health professionals, and our friend and savior Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

But such trusted experts also can be targets for bad actors online.

“My worry is that those very important public health officials who have our attention—and should have our attention—will be beset by cyber mobs trying to chase them offline, to discredit them and to silence them,” said Danielle Citron, a professor at Boston University School of Law, a MacArthur fellow, and the author of Hate Crimes in Cyberspace.

And there is an ongoing risk that deceptive information online will wreak havoc offline.

“I worry about situations where suddenly everyone starts saying, ‘Don’t go to Hospital X, but … go to Hospital Y because they have this kind of testing or this kind of access to certain kinds of facilities,’ ” when such claims are actually not true, said Jennifer Daskal, the faculty director of the Tech, Law, & Security Program at American University Washington College of Law, who moderated the online discussion.

Gleicher said that dealing with that kind of situation is deeply challenging, particularly because information is shifting so quickly, and “something might actually be true one moment and untrue the next.” It may, after all, be true that Hospital Y has key supplies. But once large numbers of people show up, that is no longer the case. And monitoring accuracy in real time poses obvious challenges.

You might look at Facebook’s fight against false information on its platforms in the context of an “attacker-defender” model, Gleicher said.

“An old insight from military strategy is that defenders tend to win when they can control the terrain, and attackers tend to win when defenders don’t,” he said. “The communications mediums we are using, in a very fundamental way, are the terrain of this conversation.”

Structural changes to platforms that provide context (for example, how many times a message was forwarded, or which country a page is being managed from) can shift the terrain to our advantage.

But private companies can’t be the only ones shifting the terrain to defeat misinformation and disinformation. The panelists agreed that governments also have a part to play in mandating more transparency and restricting things that amount to forged communications.

“There’s all sorts of ways we can use digital technologies to create forgeries, to show people doing and saying things that they never did and said in ways that cause … cognizable harms, economic, reputational, emotional harm,” said Citron, who is working with lawmakers to develop legal frameworks to address such digital forgeries. That’s a complex process, Citron explained, because in drafting those laws, you have to “be really narrow and careful.”

Regulation, agreed Daskal, needs to be “incredibly specific and incredibly clear, because when we start getting into the realm of things like misinformation or hate speech … there’s a real risk of over-inclusiveness … and that has some significant and pretty critical chilling effects.”