3 unanswered questions about Facebook hoaxes and threats

Another day, another high-profile outrage spreading virally on Facebook. This time it involves a frequent subject here, Alex Jones of Infowars, who yesterday went on a rant in which he tiptoed right up to the line of calling for violence against special counsel Robert Mueller. Charlie Warzel has the details in BuzzFeed:

On his Monday afternoon show, Jones issued a prolonged rant against special counsel Robert Mueller, accusing him of raping children and overseeing their rape, and then pantomiming shooting the former FBI director. The show was streamed live on Jones’ personal, verified Facebook page, which has nearly 1.7 million likes.

In the clip, Jones baselessly accused Mueller of having sex with children. “They’d let Mueller rape kids in front of people, which he did,” he said on the show.

Facebook told Warzel the rant did not amount to a credible threat of violence, and left the post up. It had about 46,000 views as of this morning.

Later in the day, Facebook held a previously scheduled conference call with reporters to discuss its work on misinformation and elections. Five executives who work on issues including News Feed integrity, security policy, and elections laid out what they’re doing to improve the service. There were no major new announcements, but the question-and-answer period that followed gave reporters a chance to ask about the Infowars issue.

“We know people don’t want to see false information at the top of their News Feed,” said Tessa Lyons, the head of News Feed Integrity. Lyons went on to say that the company believes it has a responsibility to limit the distribution of hoaxes. And, in cases where those hoaxes have created an imminent threat of harm, Facebook — as of last week, and in just two countries — will remove them from the platform.

Both pieces are worth reading, even if Fischer’s in particular comes across as rather pessimistic. (“Facebook may not be able to do much more than it has already tried, unless it makes a drastic change that would impact its business and long-term vision.”)

While we wait for a more comprehensive solution, I’d settle for Facebook answering some questions that never quite found answers on today’s call:

What data can Facebook share on the subject of misinformation seeing reduced distribution after being labeled as false? The company likes to say that posts get 80 percent fewer views on average, but it would be helpful to see numbers for specific pages. Infowars, for example.

Fact-checkers say it takes an average of three days before they are able to label a Facebook post as false. Haven’t most posts already gotten the majority of their lifetime views at that point? Doesn’t that make the strategy of “reduced distribution” significantly less effective?

Finally, a question from my boss, Nilay Patel. By what standard does Facebook say Jones’ rant against Mueller did not represent a “credible threat of violence”? When courts make such judgments, Nilay notes, they do so by outlining their reasoning and citing the relevant precedents.

“If Facebook wants to run a legal system,” he says, “it should do that too.”

In another of its periodic efforts to persuade the Chinese government to let it open up shop there, Facebook is trying to launch a “startup accelerator” in the country, funded with $30 million. But it’s not exactly clear what’s happening, report Paul Mozur and Sheera Frenkel:

Yet late Tuesday, in a sign of possible complications, the corporate registration was taken down from the Chinese government website, and some references to the new subsidiary appeared to be censored on social media in the country.

The moves indicated how complicated it remains for Facebook to navigate China, where it has been blocked for almost 10 years. If the subsidiary is allowed to proceed, it will be a toe in the water here for the Silicon Valley company. Facebook said it wanted to use the subsidiary to coordinate with Chinese developers in the closely censored market.

My colleague Colin Lecher interviews one of the authors of the Communications Decency Act, and its world-changing Section 230.

WYDEN: We thought it was going to be helpful. We never realized it was going to be the linchpin to generating investment in social media. We envisioned that the law would be both a sword and a shield. A shield so that you could have this opportunity, for particularly small and enterprising operations to secure capital, and then a sword [by allowing them to moderate without facing liability over the practice], which said you’ve got to police your platforms. And what was clear during the 2016 election and the succeeding events surrounding Facebook, is that technology companies used one part of what we envisioned, the shield, but really sat on their hands with respect to the sword, and wouldn’t police their platforms.

WhatsApp is doing lots of outreach to public officials in India amid the current crisis of mob violence, reports Venkat Ananth, who says it’s linked to the delayed effort to get payments approved on the app:

The team includes public policy manager Ben Supple; senior director of customer operations Komal Lahiri; and WhatsApp India communications manager Pragya Misra Mehrishi. They are now expected to meet key government officials from MeitY from Monday, sources say.

“The intense outreach efforts is essentially linked to WhatsApp wanting to protect its payments play in India,” says a Delhi-based public policy professional, who did not want to be named as he is not authorised to speak to the media. “It (WhatsApp) is really worried about Google’s efforts with Tez and the gap that will only widen if the government delays grant of permission.”

It’s not just India: WhatsApp is causing problems around the world, report Elizabeth Dwoskin and Annie Gowen:

Messaging platforms have hosted disinformation campaigns in at least 10 countries this year, according to a report by the Computational Propaganda Project at Oxford University. WhatsApp was the main platform for disinformation in seven of those nations, including Brazil, India, Pakistan, Zimbabwe and Mexico. Other messaging apps that have hosted disinformation include Telegram in Iran, WeChat in China and Line in Thailand.

“In the U.S., the disinformation debate is about the Facebook news feed, but globally, it’s all about closed messaging apps,” said Claire Wardle, executive director of First Draft, a nonprofit news literacy and fact-checking organization affiliated with Harvard University’s John F. Kennedy School of Government.

Facebook wouldn’t cop to Russian interference in the current midterm election campaigns. But Homeland Security found that Russian hackers have infiltrated the control rooms of US electrical utilities. “They said the campaign likely is continuing,” Rebecca Smith reports.

Facebook has signed a new, legally binding agreement with the state of Washington agreeing to remove advertisers’ ability to exclude races, religions, sexual orientations, and other protected classes in certain ad-targeting sectors, my colleague Nick Statt reports.

Here’s a research paper that would seem to support Mark Zuckerberg’s controversial statement last week that people who share fake news typically believe it is true:

People do not share fake news stories solely to spread factual information, nor because they are “duped” by powerful partisan media. Their worldviews are shaped by their social positions and their deep beliefs, which are often both partisan and polarized. Problematic information is often simply one step further on a continuum with mainstream partisan news or even well-known politicians. We must understand “fake news” as part of a larger media ecosystem. That does not mean that we should ignore platforms; we must scrutinize the ways in which algorithms and ad systems promote or incentivize problematic content, and the frequency with which extremist content is surfaced. Finally, while media literacy and fact-checking efforts are very well-intentioned, they may not be the best solutions, given the highly-polarized, mistrustful political climate of the United States.

Lots of kids signed up for Twitter before they turned 13. Twitter is hunting them down and locking them out of their accounts, even though they are of age now, reports my colleague Shoshana Wodinsky:

“For a couple of years, I couldn’t actually update my birth year on Twitter. If I tried to select my correct year, 1996, it just would be grayed out,” said Maxwell, a 22-year-old Twitter devotee, who found himself suspended last week. “On Wednesday, I checked again and noticed I could select 1996, but as soon as I saved the change, my account locked.” Though Maxwell has appealed repeatedly, he’s still locked out of the platform — at least for now.

The head of Snapchat Spectacles is the latest to leave Snap, Alex Heath reports:

Randall, who was Snap Inc.’s vice president of hardware, told employees recently he was leaving to start his own company, according to a memo that was obtained by Cheddar and confirmed by a Snap spokesperson.

Russian bots are actively promoting the hashtag #WalkAway, which supposedly is used by Democrats who have left the party to become Republicans. It turns out that many of the supposed former Democrats depicted in the campaign’s imagery were stock photos bought from Shutterstock.

When Facebook moves into its new offices in Mountain View this fall, a signature Silicon Valley perk will be missing — there won’t be a corporate cafeteria with free food for about 2,000 employees.

In an unusual move, the city barred companies from fully subsidizing meals inside the offices, which are part of the Village at San Antonio Center project, in an effort to promote nearby retailers. The project-specific requirement passed in 2014, attracting little notice because the offices were years away from opening.

When Facebook reports Q2 earnings on Wednesday, analysts are expecting — you guessed it — yet another great quarter.

“Despite all the negative headlines, we believe ad revenue should continue to drive very healthy growth,” wrote SunTrust’s Youssef Squali. Analysts think Facebook revenue will grow 43 percent over the same quarter one year ago.

We don’t have psychological studies directly looking at the ability of AI-faked video to implant false memories. But researchers have been studying the malleability of our memories for decades.

Here’s what they know: The human mind is incredibly susceptible to forming false memories. And that tendency can be kicked into overdrive on the internet, where false ideas spread like viruses among like-minded people. Which means the AI-enhanced forgeries on the horizon will only make planting false memories even easier.

Alexis Madrigal talks to Siva Vaidhyanathan, author of a new book called Antisocial Media, about whether Facebook is blinded by data. (It is, Vaidhyanathan says.)

Behaviorism is embedded in Facebook. They’ve been clear about this. Facebook is constantly tweaking its algorithms to try to dial up our positive emotional states, otherwise known as happiness. That’s one of the reasons that they measure happiness to the best of their ability, or so they think. It’s one reason that they’ve run mood changing studies (that they got into trouble for). This is the kind of social engineering that they want to engage in. It’s one of the reasons that they are trying to turn up the dial on the hedonic meter on the whole species. And that lets them ignore the edge cases, and those edge cases can be millions of people. People in Myanmar and Kenya. Women who are stalked and harassed through Facebook and have to rely on a clunky reporting system. The edge cases fall away and only recently has Facebook faced the sort of public scrutiny that has encouraged the company to take these problems seriously.

The information was available four or five years ago, longer in some cases. And they did nothing. But again, when you’re looking at that hedonic meter on your screen and you are seeing that the general happiness of Facebook users might be edging up, you can feel really good about the work you do every day and ignore the horrors on the margins.

A conservative publisher put together a “satirical” fake interview with New York congressional candidate Alexandria Ocasio-Cortez, splicing video of her taken from another interview together with questions designed to make her look stupid. It is a viral hit, and many people think it is real. “Without the disclaimer, it’s indistinguishable from an awkward attempt at smearing a political opponent,” my colleague Adi Robertson reports:

That distinction matters to Facebook, which protects satire while demoting (but not deleting) “false news” in the News Feed. Facebook reiterated to The Verge that “we do offer satire on Facebook, as long as it’s not violating one of our community standards policies,” like hate speech. The call is left to Facebook’s fact-checkers, who can add written context or a “satire” label if a post is sufficiently confusing — one might have been applied to CRTV’s post if it hadn’t carried a disclaimer. (We don’t know whether CRTV was contacted by Facebook about the post, although we’ve reached out for clarification.)

But Facebook has acknowledged that “satire” can also be a bad-faith cover for serious misinformation attempts, and the distinction basically boils down to a poster’s intentions, which are irrelevant for people who are simply scrolling down the News Feed. Infowars founder Alex Jones has called himself a performance artist playing a character, and it’s not a leap to imagine Infowars or others making “satirical” conspiracy videos attacking school shooting survivors and claiming Facebook can’t censor them.