When, how and why do we report on rumors and fabricated content?

We at First Draft have been asking ourselves this question since the French election, when we had to make difficult decisions about what information to publicly debunk for CrossCheck. We became worried that – in cases where rumours, misleading articles or fabricated visuals were confined to niche communities – addressing the content might actually help to spread it further.

As Alice Marwick and Rebecca Lewis noted in their 2017 report, Media Manipulation and Disinformation Online, “[F]or manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.” BuzzFeed’s Ryan Broderick seemed to confirm our concerns when, on the weekend of the #MacronLeaks trend, he tweeted that 4channers were celebrating news stories about the leaks as a “form of engagement.”

We have since faced the same challenges in the UK and German elections. Our work convinced us that journalists, fact-checkers and civil society urgently need to discuss when, how and why we report on examples of mis- and dis-information and the automated campaigns often used to promote them. Of particular importance is defining a “tipping point” at which mis- and dis-information becomes beneficial to address. We offer 10 questions below to spark such a discussion.

Before that, though, it’s worth briefly mentioning the other ways that coverage can go wrong. Many research studies examine how corrections can be counterproductive by ingraining falsehoods in memory or making them more familiar. Ultimately, the impact of a correction depends on complex interactions between factors like subject, format and audience ideology.

Reports of disinformation campaigns, amplified through the use of bots and cyborgs, can also be problematic. Experiments suggest that conspiracy-like stories can inspire feelings of powerlessness and lead people to report a lower likelihood of engaging politically. Moreover, descriptions of how bots and cyborgs were found give their operators the opportunity to change strategies and better evade detection. In a month awash with revelations about Russia’s involvement in the US election, it’s more important than ever to discuss the implications of reporting on these kinds of activities.

Since the French election, First Draft has moved away from CrossCheck’s public-facing model: we now primarily distribute our findings via email to newsroom subscribers. Our election teams focus on stories that are predicted (by NewsWhip’s “Predicted Interactions” algorithm) to be shared widely. We have also commissioned research on the effectiveness of the CrossCheck debunks and are awaiting its results to evaluate our methods.

Without further ado, here are some questions our work has inspired:

Who is my audience?
Are they likely to have seen a particular piece of mis- or dis-information already? If not, what are the consequences of bringing it to the attention of a wider audience?

When should we publish stories about mis- and dis-information?
How much traffic should a piece of mis- and dis-information have before we address it? In other words, what is the “tipping point,” and how do we measure it? On Twitter, for example, do we check whether a hashtag made it to a country’s top 10 trending topics?
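One crude way to operationalize a tipping point is a simple threshold rule. The sketch below is purely illustrative: the metric (share counts relative to the size of the audience being monitored) and the threshold value are assumptions for discussion, not a validated editorial rule.

```python
def past_tipping_point(share_count: int,
                       audience_size: int,
                       threshold_ratio: float = 0.001) -> bool:
    """Flag content for public debunking once its shares exceed a fixed
    fraction of the monitored audience.

    Both the ratio and the choice of raw share counts as the metric are
    illustrative assumptions; a real newsroom rule would likely combine
    several signals (trending rank, cross-platform spread, who is sharing).
    """
    return share_count / audience_size >= threshold_ratio

# e.g. 5,000 shares within an audience of 1,000,000 crosses a 0.1% threshold
```

A rule like this is easy to measure consistently across stories, but it treats all shares as equal – which is exactly what the questions below about *who* sees content call into doubt.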

How do we think about the impact of mis- and dis-information, particularly on Twitter?
Do we care about how many people see the content? Or do we care about who sees the content? In particular, is Twitter important in virtue of the number of people who use it, or is it important because certain groups, like news organizations and politicians, use it? How do our answers to these questions change how we evaluate the impact of information?

How do we isolate human interactions in a computationally affordable manner?
When we talk about the “reach” of a piece of content, we should be referring to how many humans saw it. Yet, identifying the number of humans who saw a piece of information can be difficult and computationally expensive. What algorithms might be devised to calculate human reach (at least on Twitter) in a timely and inexpensive way?
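As a starting point for that discussion, here is a minimal sketch of a cheap human-reach estimate: discard accounts that look automated using a couple of coarse heuristics, then sum the followers of the rest. Every threshold and field name here is a hypothetical assumption; real bot detection relies on far richer features, and follower counts are only an upper bound on impressions, not a measurement of them.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int          # follower count at time of engagement
    tweets_per_day: float   # average posting rate
    account_age_days: int   # days since account creation

def looks_automated(a: Account) -> bool:
    # Hypothetical heuristics: an extreme posting rate, or a brand-new
    # account that is already highly active. Cheap to compute, but crude.
    return a.tweets_per_day > 100 or (
        a.account_age_days < 7 and a.tweets_per_day > 50
    )

def estimated_human_reach(engaging_accounts: list[Account]) -> int:
    # Sum followers of accounts that do not look automated -- a rough
    # upper bound on human impressions, computable from profile metadata
    # alone, without any per-tweet network analysis.
    return sum(a.followers for a in engaging_accounts
               if not looks_automated(a))
```

The design choice worth debating is the trade-off this makes: per-account heuristics over profile metadata are timely and inexpensive, but they will misclassify sophisticated bots and unusually prolific humans alike.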

For those of us whose primary goal is to stop mis- and dis-information, what strategies of distribution beyond publishing might we consider?
Should we target accounts that have engaged with problematic content with direct messages, to decrease our chances of perpetuating the falsehood? Should we be using Facebook ads that target certain groups? Is this even the role of news organizations or non-profits like First Draft?

How do we write our corrections?
How can we use research from the fields of psychology and communication to maximize the positive impact of our corrections and minimize chances of blowback?

Why do we report on attempts at manufactured amplification?
Are we putting the popularity of artificially boosted content into perspective? Are we trying to make people aware of bots so that they’ll be more vigilant? Are we trying to encourage platforms or government to take action against mis- and dis-information?

Who should be talking about manufactured amplification?
News organizations aren’t in a position to do work that won’t be published. So, given that it may sometimes be counterproductive to publish about bot networks, should news organizations be investigating them?

Where do the responsibilities of journalists end and the responsibilities of the intelligence community start?
The monitoring and active debunking of disinformation currently falls uncomfortably across different sectors. We’re seeing more disinformation monitoring initiatives emerge outside journalism, such as the Hamilton 68 dashboard, which was co-created by current and former counterterrorism analysts. What role should journalists have in actively combating attempts to influence public opinion in another country?

How should we write about attempts at manufactured amplification?
Should we focus on debunking the messages of automated campaigns (fact-checking), or do we focus on the actors behind them (source-checking)? Do we do both? How might we show our investigations are credible without informing bot operators or perpetuating the content they were boosting?

The questions above, though fundamental, are not easy to answer. Nor is it simple to decide what to do once we have answers to these questions. Perhaps we need new ethical policies for reporting on these topics. Maybe newsrooms should coordinate about handling large-scale attempts to manipulate the media, such as strategically timed leaks.

Organizations covering mis- and dis-information need to discuss these issues, and it’s clear that those conversations should include academic researchers, some of whom have been studying corrections and disinformation for decades. This is too important to get wrong.