The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

France > U.K. > U.S. … at least when it comes to not sharing junk news, according to a memo from the Oxford Internet Institute’s Project on Computational Propaganda. The researchers measured “bot activity and junk news” on Twitter in the final week of campaigning before the U.K. general election (May 27–June 2) and found that junk news (defined as “misleading, deceptive or incorrect information purporting to be real news about politics, economics or culture”) made up 11.4 percent of content shared…

compared to 12.6% during the first UK sampling period, 12.5% in Germany and 5.1% and 7.6% respectively in the two election rounds in France. We also found that UK users were not sharing as much junk news in their political conversations as US users in the lead up to the 2016 elections, where the level of junk news shared was significantly higher. In the days leading up to the US election, we did a close study of junk news consumption among Michigan voters [previously covered in this column] and found users were sharing as much junk news as professional news content at around 33% of total content each.

Substantive differences between the qualities of political conversations are evident in other ways. In the US sample, 33.5% of relevant links being shared led to professional news content. In Germany this was 55.3%, and in France this was between 49.4% and 57% of relevant links across both election rounds. Similarly, in the current UK-based study we show that 53.6% of relevant links being shared led to professional news content.

So, um, maybe these European countries are learning from Americans’ mistakes? Adding insult to injury for those of us on this side of the Atlantic, “we are also able to show that individuals discussing politics over social media in the European countries sampled tend to share more high quality information sources than U.S. users,” the researchers write.

“A partisan divide in the reception of fact-checking.” A new report from the Duke Reporters’ Lab by Rebecca Iannucci and Bill Adair, written up by Poynter’s Alexios Mantzarlis, gives us more evidence of something we already knew: Conservative sites are way more likely to call fact-checking biased.

The Duke report notes that while fact-checkers take pains to assert their independence:

— Liberal websites were far more likely to cite fact-checks to make their points than conservative sites were.
— Conservative sites were much more likely to criticize fact-checks and to allege partisan bias.
— When student researchers categorized the tone of mentions, we found liberal sites made most of the positive references while the negative references came primarily from the right.
— Conservative sites made the most critical comments about fact-checking, occasionally using quotation marks (“fact-checking”) to imply it wasn’t legitimate. One even likened a fact-checker to a Bangkok prostitute.

“We believe the dramatic difference in how fact-checkers are portrayed shows they need to strengthen their outreach to conservative journalists and, particularly, to conservative audiences,” Iannucci and Adair write. “The fact checkers need to understand the reasons for the partisan divide and find ways to broaden the acceptance of their work.” They don’t, however, offer solutions for how fact-checkers can do this. Poynter’s Mantzarlis has a couple of broad ideas.

Fact-checkers will need to look at what motivates conservative distrust and systemically track how readers on both sides react to conclusions that go against their party affiliation. Rigorous research should go beyond simple tallies and develop metrics that can help detect and evaluate bias in fact-checking. Conservative media critics may want to consider whether calling fact-checkers prostitutes risks further undermining the capacity of building a public discourse on shared facts.

But coming up with specific ideas seems…very hard.

On (1), I can foresee fact-checkers establishing reader panels that could provide interesting data. Can't really speak for con media!

Being around other people, even virtually, seems to make us less vigilant fact-checkers. Eight experiments suggest that people are less likely to verify statements when they perceive the presence of others, even absent direct social interaction or feedback. The notion of perceived social presence draws from the literature on Social Impact Theory and social facilitation, which has examined the influence of noninteractive others — whose “mere presence” may be real, implied, or otherwise imagined — on individual behavior.

Why is this? It doesn’t seem to be “diffusion of responsibility” or “social loafing” (i.e., everyone’s waiting for someone else to call it out): In one experiment, people were actually paid for each statement they fact-checked, and still “participants flagged, or fact-checked, fewer statements in the group compared to the alone condition.” The researchers also didn’t find “a consistent effect of social presence on the number of statements identified as true.” But:

Our data provide some evidence for the third route—reduced vigilance—and suggest that social contexts may impede fact-checking by, at least in part, lowering people’s guards in an almost instinctual fashion. These contexts can take the form of platforms that are inherently social (e.g., Facebook) or can be cued by features of online environments such as “likes” or “shares” that a message receives.

From “the shadowy reaches of the internet” to FoxNews.com. The New York Times’ Neil MacFarquhar and Andrew Rossback tracked how a fake story about a Russian attack on a U.S. ship spread from a Russian “opinion piece, apparently meant to be satirical” to FoxNews.com. The original fake piece was written in 2014 and traveled to Facebook, Russian TV, British tabloids The Sun and The Daily Star, and finally to FoxNews.com (it was taken down after the Times ran this story).