A free-speech social network disappears from the internet

It was an awful weekend of hate-fueled violence, ugly rhetoric, and worrisome retreats from our democratic ideals. Today I’m focused on two ways of framing what we’re seeing, from the United States to Brazil. While neither offers any comfort, they do give helpful names to phenomena I expect will be with us for a long while.

The first is stochastic terrorism: “The use of mass, public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.” I encountered the idea in a Friday thread from data scientist Emily Gorcenski, who used it to tie together four recent attacks.

In her thread, Gorcenski argues that various right-wing conspiracy theories and frauds, amplified both through mainstream and social media, have resulted in a growing number of cases where men snap and commit violence. “Right-wing media is a gradient pushing rightwards, toward violence and oppression,” she wrote. “One of the symptoms of this is that you are basically guaranteed to generate random terrorists. Like popcorn kernels popping.”

On Saturday, another kernel popped. Robert A. Bowers, the suspect in a shooting at a synagogue that left 11 people dead, was steeped in online conspiracy culture. He posted frequently to Gab, a Twitter clone that emphasizes free speech and has become a favored social network among white nationalists. Julie Turkewitz and Kevin Roose described his hateful views in the New York Times:

After opening an account on it in January, he had shared a stream of anti-Jewish slurs and conspiracy theories. It was on Gab where he found a like-minded community, reposting messages from Nazi supporters.

“Jews are the children of Satan,” read Mr. Bowers’s biography.

Bowers is in custody — his life was saved by Jewish doctors and nurses — and presumably will never go free again. Gab’s life, however, may be imperiled. Two payment processors, PayPal and Stripe, de-platformed the site, as did its cloud host, Joyent. The site went down on Monday after its domain registrar, GoDaddy, told it to find another one. Its founder posted defiant messages on Twitter and elsewhere promising it would survive.

Gab hosts a variety of deeply upsetting content, and to its supporters, that’s the point. Free speech is a right, their reasoning goes, and it ought to be exercised. Certainly it seems wrong to suggest that Gab or any other single platform “caused” Bowers to act. Hatred, after all, is an ecosystem. But his action came amid a concerted effort to focus attention on a caravan of migrants coming to the United States in search of refuge.

In his final post on Gab, Bowers wrote: “I can’t sit by and watch my people get slaughtered. Screw your optics. I’m going in.”

The individual act was random. But it had become statistically probable thanks to the rise of anti-immigrant rhetoric across all manner of media. And I fear we will see far more of it before the current fever breaks.

The second concept I’m thinking about today is democratic recession. The idea, which is roughly a decade old, is that democracy is in retreat around the globe. The Economist covered it in January:

The tenth edition of the Economist Intelligence Unit’s Democracy Index suggests that this unwelcome trend remains firmly in place. The index, which comprises 60 indicators across five broad categories—electoral process and pluralism, functioning of government, political participation, democratic political culture and civil liberties—concludes that less than 5% of the world’s population currently lives in a “full democracy”. Nearly a third live under authoritarian rule, with a large share of those in China. Overall, 89 of the 167 countries assessed in 2017 received lower scores than they had the year before.

In January, The Economist considered Brazil a “flawed democracy.” But after this weekend, the country may undergo a precipitous decline in democratic freedoms. As expected, far-right candidate Jair Bolsonaro, who speaks approvingly of the country’s previous military dictatorship, handily won election over his leftist rival.

In the best piece I read today, BuzzFeed’s Ryan Broderick — who was in Brazil for the election — puts Bolsonaro’s election into the context of the internet and social platforms. Broderick focuses on the symbiosis between internet media, which excels at promoting a sense of perpetual crisis and outrage, and far-right leaders who promise a return to normalcy.

Typically, large right-wing news channels or conservative tabloids will then take these stories going viral on Facebook and repackage them for older, mainstream audiences. Depending on your country’s media landscape, the far-right trolls and influencers may try to hijack this social-media-to-newspaper-to-television pipeline. Which then creates more content to screenshot, meme, and share. It’s a feedback loop.

Populist leaders and the legions of influencers riding their wave know they can create filter bubbles inside of platforms like Facebook or YouTube that promise a safer time, one that never existed in the first place, before the protests, the violence, the cascading crises, and endless news cycles. Donald Trump wants to Make America Great Again; Bolsonaro wants to bring back Brazil’s military dictatorship; Shinzo Abe wants to recapture Japan’s imperial past; Germany’s AfD performed the best with older East German voters longing for the days of authoritarianism. All of these leaders promise to close borders, to make things safe. Which will, of course, usually exacerbate the problems they’re promising to disappear. Another feedback loop.

A third feedback loop, of course, is between a social media ecosystem promoting a sense of perpetual crisis and outrage, and the random-but-statistically-probable production of domestic terrorists.

Perhaps the global rise of authoritarians and the rise of big tech platforms are merely correlated, and no causation can be proved. But I increasingly wonder whether we would benefit if tech companies assumed that some level of causation was real — and, assuming that it is, what they might do about it.

You don’t have to go to Gab to see hateful posts. Sheera Frenkel, Mike Isaac, and Kate Conger report on how the past week’s domestic terror attacks played out in once-happier places, most notably Instagram:

On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.

A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”

Just before the synagogue attack took place on Saturday, David Ingram posted this story about an alarming rise in attacks on Jews on social platforms:

Samuel Woolley, a social media researcher who worked on the study, analyzed more than 7 million tweets from August and September and found an array of attacks, also often linked to Soros. About a third of the attacks on Jews came from automated accounts known as “bots,” he said.

“It’s really spiking during this election,” Woolley, director of the Digital Intelligence Laboratory, which studies the intersection of technology and society, said in a telephone interview. “We’re seeing what we think is an attempt to silence conversations in the Jewish community.”

Dana Priest, James Jacoby and Anya Bourg report that Ukraine’s experience with information warfare offered an early — and unheeded — warning to Facebook:

To get Zuckerberg’s attention, the president posted a question for a town hall meeting at Facebook’s Silicon Valley headquarters. There, a moderator read it aloud.

“Mark, will you establish a Facebook office in Ukraine?” the moderator said, chuckling, according to a video of the assembly. The room of young employees rippled with laughter. But the government’s suggestion was serious: It believed that a Kiev office, staffed with people familiar with Ukraine’s political situation, could help solve Facebook’s high-level ignorance about Russian information warfare.

Natasha Lomas reports on the EU’s latest move to put pressure on Facebook over data privacy:

MEPs are urging the company to allow European Union bodies to carry out a full audit to assess data protection and security of users’ personal data, following the scandal in which the data of 87 million Facebook users was improperly obtained and misused.

In the resolution, adopted today, they have also recommended Facebook make additional changes to combat election interference — asserting the company has not just breached the trust of European users “but indeed EU law”.

The “Crush Cruz” page on Facebook first appeared on September 12th. Since then the person or people behind the page have spent almost $6,000 on dozens of Facebook ads, which doesn’t sound like much, but according to Facebook data reviewed by CNN Business, the page could have reached more than a million Texans. Facebook took away the page’s ability to run political ads in its current form after CNN Business inquired about it.

More than 187 million people are expected to cast their votes when the country goes to the polls on April 17. With six months of campaigning left, a deluge of political and social narratives — true and false — are being distributed to shape voters’ views.

In an attempt to stem that flow, Indonesia’s Ministry of Communications has established a ‘war room,’ where a surveillance team of 70 engineers monitor social media traffic and other online platforms 24 hours a day. When Bloomberg visited on Wednesday, more than a dozen engineers were keeping a close eye on posts about an incident in West Java on Oct. 22, in which a flag bearing an Islamic creed was burned, prompting outrage across the country.

Benjamin Wofford investigates the security of the US election system and comes away extremely concerned:

The country’s election vulnerability falls into three broad camps: 1) the targeting of individual campaigns, which are susceptible to email theft and other meddling; 2) the hacking of our national discourse, or “information operations,” which are the propaganda efforts designed to sow discord; and perhaps most dangerously, 3) the technology itself that underlies the country’s election infrastructure.

In the past two years, federal and state officials have scrambled to harden a system that is almost perfectly vulnerable to the kinds of meddling and mischief on offer from Russian (or other) adversaries. One reason for this vulnerability: The basic configuration of American elections dates to 1890 — a chaotic ritual designed, literally, for another century.

Tony Romm looks at the influx of Silicon Valley money into Democratic campaigns. (There’s also a perfunctory “this will lead to more claims of bias from Republicans” angle to this story that I find somewhat overstated and largely irrelevant, given that Republicans were saying this before 2018 and will continue saying it until the heat death of the universe.)

Many of these newly awakened tech workers are motivated by Trump’s controversial policies on issues including immigration, and they’re focused on closing what they perceive to be an innovation gap with the GOP, two years after Trump effectively tapped Facebook, Twitter and other data-heavy tools on his road to victory. One outgrowth of the Valley’s efforts, a service called MobilizeAmerica, has helped Baer find potential supporters in Florida’s 18th District, a chunk of the state about the size of Rhode Island. The tool helped the campaign knock on more than 2,000 doors during a campaign event held a month before Election Day, aides said.

“After the 2016 election, I think we saw a number of individuals in the tech space, in Silicon Valley and also around the country, frankly saying they wanted to use technology for good,” said Baer, who stands to become Florida’s first openly lesbian representative in Congress if she wins. “And because of that, we’ve seen a proliferation of new tools.”

Kevin Roose wades through tweets and Facebook posts to chart the alleged mail bomber’s devolution into a fringe conspiracy theorist and terrorist. Something appeared to change for him in 2016.

But before Mr. Sayoc’s accounts were taken down, The New York Times archived their contents. And a closer study of his online activity reveals the evolution of a political identity built on a foundation of false news and misinformation, and steeped in the insular culture of the right-wing media. For years, these platforms captured Mr. Sayoc’s attention with a steady flow of outrage and hyperpartisan clickbait and gave him a public venue to declare his allegiance to Mr. Trump and his antipathy for the president’s enemies.

On social media, none of this behavior is particularly out of the ordinary. In fact, to many of his followers, Mr. Sayoc may have appeared to be just one of many partisan keyboard warriors working through their rage.

Here’s a poorly sourced and likely false story about Twitter eliminating one of its core engagement mechanics — and “soon,” to boot! A normal company would deny this, but Twitter CEO Jack Dorsey has said the company is “rethinking everything,” which could presumably include a measure this drastic. Here’s a case where I find Twitter’s embrace of “transparency” awkward — its practical effect is to create more confusion and uncertainty, in ways that damage trust in the company. (Shannon Liao has more on why the like button is probably not going anywhere.)

Here’s a good product criticism of Facebook Groups from Sarah Zhang. Facebook makes it weirdly hard to reach people even in small groups that have assembled for the express purpose of making it easy to get in touch:

Other moderators have noticed how Facebook’s algorithm shapes the discussion in groups. Posts in a group, not unlike the newsfeed, are sorted algorithmically by default. “If you click on the group, it tends to be the most popular content, but it’s not the most relevant,” says Dana Lewis, a member of several diabetes groups. For example, according to Lewis, the algorithm might keep showing a post whose question has been answered. And it might de-prioritize posts from new members that don’t get much engagement—ensuring they get even less engagement in a form of algorithmic ghosting. It’s not exactly a friendly welcome to a support group. “I don’t think Facebook has done a good enough job,” says Lewis. “They have a lot of room to improve.” A Facebook spokesperson noted that members can choose to see most-recent posts first, and admins see posts in their approval queue in reverse chronological order.

On a recent Friday, Kristen O’Hara got a major promotion: to become Snap Inc.’s chief business officer. Chief Executive Officer Evan Spiegel made it official by alerting her direct reports, according to people familiar with the matter.

Two days later, he changed his mind, rescinded the offer and hired Jeremi Gorman, who oversaw ad sales at Amazon.com Inc. The switch was jarring for Snap’s sales division, as O’Hara was well-liked, according to people familiar with the matter. Now she’s gone.

Jonas Parello-Plesner writes about the time that China used LinkedIn in an effort to recruit him as a spy:

Back in 2011-2012, I was asked to connect over LinkedIn by a handsome Chinese woman representing a recruitment company, DRHR, in China. I accepted. She had LinkedIn connections to well-seasoned China scholars, which lowered my alertness. Back then I had just started a book project on how Chinese companies manage risk in fragile environments around the globe, so I was interested in connecting with Chinese companies through DRHR. Initially not much came of the connection.

On a later trip to Beijing, she suggested an opportunity to meet. My LinkedIn contact never showed but claimed she had important business in Hangzhou. Instead, at the St. Regis, a five-star hotel where foreign delegations often congregate, I was greeted by three inconspicuous Chinese men. They vaguely presented themselves as representatives of a Chinese state-sponsored think tank, but never provided me with business cards. In China, this is as awkward and unusual as being naked in a meeting. I soon understood that they worked to recruit Westerners on behalf of the Chinese party-state.

Nellie Bowles reports that parents who work at the big tech platforms are more likely to restrict screen time for their children:

Ms. Stecher, 37, and her husband, [Facebook engineer] Rushabh Doshi, researched screen time and came to a simple conclusion: they wanted almost none of it in their house. Their daughters, ages 5 and 3, have no screen time “budget,” no regular hours they are allowed to be on screens. The only time a screen can be used is during the travel portion of a long car ride (the four-hour drive to Tahoe counts) or during a plane trip.

Retweets prey on users’ worst instincts. They delude Twitter users into thinking that they’re contributing to thoughtful discourse by endlessly amplifying other people’s points—the digital equivalent of shouting “yeah, what they said” in the midst of an argument. And because Twitter doesn’t allow for editing tweets, information that goes viral via retweets is also more likely to be false or exaggerated. According to MIT research published in the journal Science, Twitter users retweet fake news almost twice as much as real news. Other Twitter users, desperate for validation, endlessly retweet their own tweets, spamming followers with duplicate information.

That’s why I think Netflix could be a great acquirer for Snap. They’re both video entertainment companies at the vanguard of cultural relevance, yet have no overlap in products. Netflix already showed its appreciation for Snapchat’s innovation by adopting a Stories-like vertical video clip format for discovering and previewing what you could watch. The two could partner to promote Netflix Originals and subscriptions inside of Snapchat. Netflix could teach Snap how to win at exclusive content while gaining a place to distribute video that’s under 20 minutes long.

The alleged Florida mail bomber had sent threatening tweets to a variety of people. Some of those people reported them as threats, and Twitter, as is its wont, told the harassment victims that they had not, in fact, been harassed. Then the man allegedly sent a bunch of mail bombs and Twitter decided that, actually, those tweets were threats after all.

Twitter’s defining characteristic as a company, to my mind, is a kind of generalized haplessness. Waiting until someone sends a bomb to admit that yeah, OK, that tweet was a threat represents a new low in negligence. And yet the company doesn’t seem to be treating this episode as a crisis.