Should we consider fake news another form of (not particularly effective) political persuasion — or something more dangerous?

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Most forms of political persuasion seem to have little effect at all.” In a New York Times Upshot post titled “Fake news and bots may be worrisome, but their political power is overblown,” Dartmouth’s Brendan Nyhan writes that it isn’t that easy to change people’s votes in an election. When we’re trying to evaluate “claims about vast persuasion effects from dubious online content,” Nyhan writes, we should actually be looking at three things: 1) How many people actually saw the material; 2) Whether the people exposed are persuadable/swing voters; and 3) What share of all the news they viewed was bogus.

There is a lot more to learn about "fake news" and bots, but existing research suggests that persuasion is difficult in any format – TV and online ads, political campaigns, etc. Misinformation and polarization effects are likely to be greater threats. https://t.co/dNMXCQmXeD

But: What if “persuadability” isn’t the right metric to look at? That’s the argument from information warfare expert Molly McKew, who specializes in U.S.–Russia relations. Read her whole thread in response to Nyhan’s piece, but here are some excerpts:

The content was video, visual, memetic, & text elements contributing to narrative themes, conspiracies, character attacks. It wasn't sponsored by a candidate or PAC, so it was absent the label that allows people to reject or accept its source as easily. This difference matters /3

The question is not "how many people looked at X misinformation website". It is "what is the idea/narrative that was on X website that made it into mainstream media, influencers, verified accts, etc". Not a numeric evaluation, but a "mainstreaming" one — /12

“There aren’t good tools to evaluate the impact of shadow campaigns,” she writes.

Nyhan and McKew then went back and forth on the point:

They aren't speculative when it can be documented that narrative moves from specific introduction points through other accounts etc into mainstream usage and acceptance. Individuals/individual pieces of content ultimately matter less than the movement of narrative thru a system.

I agree. But it matters what research is designed to measure. Social media companies won't produce data that will allow assessment at individual level. Instead of going bottom up, looking at narrative that gained amplification then gauging penetration rate can be more effective.

If you’re interested in more by McKew, here’s a recent piece she wrote for Politico, “How Twitter bots and Trump fans made #releasethememo go viral,” on how misinformation leaks into the public consciousness. “Information and psychological operations being conducted on social media — often mischaracterized by the dismissive label ‘fake news’ — are not just about information, but about changing behavior,” she writes. “And they can be surprisingly effective.”

Russia’s disinformation campaign relied on mainstream media sources. This fits in well with the above: New research from Jonathan Albright, the Tow Center for Digital Journalism’s research director, written up by The Washington Post’s Craig Timberg, shows that during the 2016 presidential election, of more than 36,000 tweets sent by Russian accounts, “obscure or foreign news sources played a comparatively minor role, suggesting that the discussion of ‘fake news’ during the campaign has been somewhat miscast.” Instead, the Russian accounts curated mainstream news to achieve their ends:

Some well-chronicled hoaxes reached large audiences. But Russian-controlled Twitter accounts, Albright said, were far more likely to share stories produced by widely read sources of American news and political commentary. The stories themselves were generally factually accurate, but the Russian accounts carefully curated the overall flow to highlight themes and developments that bolstered Republican Donald Trump and undermined his Democratic rival Hillary Clinton.

In a Medium post, Albright documents the sources that the troll tweets linked to.

Sure, Breitbart ranks first, but it’s followed by a long list of what many would argue are credible — if not mainstream — news organizations, as well as a surprising number of local and regional news outlets.

Another result from this analysis is the effect of “regional” troll accounts, aka the fake accounts with a city or region name in the handle (e.g., HoustonTopNews, DailySanFran, OnlineCleveland), which showed a pattern of systematically re-broadcasting local news outlets’ stories…

Trolls are using real news — and in particular local news — to drive reactionary news coverage, set the daily news agenda, and target local journalists and community influencers to follow certain stories.

In separate tweets, Albright noted, “Twitter isn’t the platform where you reach ‘Americans.’ Twitter is the place where you mislead the 1-2% of susceptible journalists, policymakers, techies, and opinion leaders.”

I DM’d Albright to ask him a little more about this. “Twitter has little effect on regular citizens outside of helping to set the daily news agenda. I have most of the data needed to prove this, just need to get to formally presenting it and writing it up,” he told me. “Facebook and especially Instagram, and of course YouTube, reach more people more often, and almost certainly have a greater impact on their political and daily understanding of the world.”

Reality apathy. Charlie Warzel writes in BuzzFeed about Aviv Ovadya — chief technologist at the Center for Social Media Responsibility at the University of Michigan and a Knight News innovation fellow at the Tow Center for Digital Journalism at Columbia — who warns of a terrifying possible future of disinformation that includes, for instance,

“polity simulation,” [a] dystopian combination of political botnets and astroturfing, where political movements are manipulated by fake grassroots campaigns. In Ovadya’s envisioning, increasingly believable AI-powered bots will be able to effectively compete with real humans for legislator and regulator attention because it will be too difficult to tell the difference. Building upon previous iterations, where public discourse is manipulated, it may soon be possible to directly jam congressional switchboards with heartfelt, believable algorithmically-generated pleas. Similarly, Senators’ inboxes could be flooded with messages from constituents that were cobbled together by machine-learning programs working off stitched-together content culled from text, audio, and social media profiles….

[It all] can lead to something Ovadya calls “reality apathy”: Beset by a torrent of constant misinformation, people simply start to give up. Ovadya is quick to remind us that this is common in areas where information is poor and thus assumed to be incorrect. The big difference, Ovadya notes, is the adoption of apathy to a developed society like ours. The outcome, he fears, is not good. “People stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.”

In his newsletter, Warzel followed up with some of the right-wing response he got to the piece. He also writes about the Florida school shootings this week: “It’s hard to glance at the internet during our biggest national tragedies and feel like the current ecosystem is healthy. The anger, the people using both real news and misinformation solely to score political points, the visibility the current system of coverage gives to shooters, the hoaxes, the way the platforms goad people to chime in and report without the facts. It’s all broken.”