Why Facebook needs a Supreme Court for content moderation

What belongs on Facebook? It’s a central question in our current reckoning over social media, and given the vastness of the company’s platform, it can be exceedingly difficult to answer. “Fake news is not your friend,” the company says — but you can still post as much of it as you want. Alex Jones’ conspiracy theories, which inspired years of harassment against the parents of Sandy Hook shooting victims, were fine until they suddenly weren’t. Everyone seems to agree that terrorism does not belong on Facebook, though there’s still more of it there than you might expect.

But imagine you could start from scratch. What would you rule in, and what would you rule out? That’s the frame of this new episode of Radiolab, which chronicles the evolution of Facebook’s content policy from a single sheet of paper into 27 pages of comically specific rules about nudity, sex, violence, and more.

The full hour-long podcast is well worth your time. It examines three moderation debates, of escalating seriousness. The first is about when it is appropriate to show breastfeeding — an area in which Facebook has gradually become more liberal.

The second is about when you can criticize what the law calls a protected class of people — a gender, or a religion, for example. This is an area where Facebook has generally gotten more conservative. At one time, criticism of “white men” was prohibited — both words there denote protected categories — while criticism of “black children” was not. The reasoning was that “children” is a non-protected class, and you can say anything about a non-protected class, as Facebook has no way of knowing whether their race has anything to do with your antipathy.

“If the rule is that any time a protected class is mentioned it could be hate speech, what you are doing at that point is opening up just about every comment that’s ever made about anyone on Facebook to potentially be hate speech,” producer Simon Adler says on the show.

This policy has since been changed, and black children are now protected from the worst forms of hate speech. “We know that no matter where we draw this line, there are going to be some outcomes that people don’t like,” Monika Bickert, Facebook’s head of product and counterterrorism policy, told Adler. “There are always going to be casualties. That’s why we continue to change the policies.”

The third debate is the one I found most compelling. It’s a tale of two content moderation decisions, made six months apart in 2013. The first came after the Boston Marathon bombing, when images of bombing victims were posted on Facebook. At the time, the company’s policy on carnage was “no insides on the outside” — which photos from the bombing clearly violated. Adler’s anonymous former moderators told him that after some debate, an unknown Facebook executive said the pictures should remain, because they were newsworthy.

Six months later, Facebook faced a similar dilemma in Mexico, where the government and the cartels were locked in a bloody conflict. Users began posting a video of a woman being beheaded — a particularly newsworthy video, given that the government had been publicly denying reports of cartel violence. But in this case, another unnamed executive called for the video to come down. The decision led to departures on the moderation team, a former moderator says:

I think it was a mistake. Because I felt like, why do we have these rules in place in the first place? And it’s not the only reason, but it’s decisions like that that precipitated me leaving.

Five years later, the company has tasked itself with making decisions like these at a global scale. It vastly expanded — and this year made public — the community guidelines by which it makes these decisions. And it committed to hiring 20,000 new employees to work on safety and security. Adler puts it this way:

Essentially what Facebook is trying to do is take the First Amendment, this high-minded principle of American law, and turn it into an engineering manual that can be executed every four seconds, for any piece of content happening anywhere on the globe.

He then cuts to a former moderator in the Philippines. Her colleagues would frequently approve content without really studying it, she says, in protest of the relatively low rate of pay — about $2.50 an hour when she worked there. She also largely relied on her gut, erring on the side of removing even innocent nudity. “If it’s going to disturb the young audience, then it should not be there,” she says.

What to make of all this? Radiolab ends on an uncharacteristically bleak note: “I think they will inevitably fail, but they have to try, and I think we should all be rooting for them,” Adler says.

But this sentiment assumes Facebook’s system of content moderation will never evolve beyond its policy handbook. In fact, the company has already given us at least two ideas for how it might change.

One, Facebook could expand the avenues that users have to appeal moderation decisions. It started to do this in April, as I reported at the time:

Now users will be able to request that the company review takedowns of content they posted personally. If your post is taken down, you’ll be notified on Facebook with an option to “request review.” Facebook will review your request within 24 hours, it says, and if it decides it has made a mistake, it will restore the post and notify you. By the end of this year, if you have reported a post but been told it does not violate the community standards, you’ll be able to request a review for that as well.

Two, Facebook could introduce independent review of those decisions — the idea that gives this piece its title. As Mark Zuckerberg put it in an interview earlier this year:

Over the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

What Facebook is describing with these ideas is something like a system of justice — and there are very few things it is working on that I find more fascinating. For all the reasons laid out by Radiolab, a perfect content moderation regime likely is too much to hope for. But Facebook could build and support institutions that help it balance competing notions of free speech and a safe community. Ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.

On Friday I wrote about how Google’s developing plans to re-enter China could trigger a crisis at the company. Later in the day, based on interviews with employees, Ellen Huet shed valuable new light on the entire story. Among the key insights here: a lot of Googlers wish they had never abandoned China in the first place; CEO Sundar Pichai believes Google entering China could have an unspecified “positive impact” on the country; and cofounder Sergey Brin — who led the charge to leave the country initially — is basically neutral now. This is also worth contemplating:

Now, the business case for engaging with China has grown, while the issue of censorship online has become more nuanced, according to the person. Germany has strong anti-hate speech rules, Thailand limits what can be said about its royal family online, and Europe has a right-to-be-forgotten law that lets people ask Google to remove old information about them from search results. To free-speech purists, these are also undesirable forms of online censorship, the person noted.

Late on Friday, Reuters reported that the FBI is trying to compel Facebook to let it listen to voice conversations that took place on its Messenger app, as part of a criminal probe. My colleague Russell Brandom says Facebook will likely have a harder time fighting this than Apple did in 2016, when it successfully resisted similar pressure after the San Bernardino shooting:

There are crucial differences in this new case, and most of them are unfavorable to Facebook. While San Bernardino used a novel legal argument against a hardened device, Facebook’s case uses a well-tested legal procedure against a protocol that wasn’t built with this attack in mind. Not all encryption is the same, and every indication is that Facebook’s Messenger encryption simply wasn’t designed to maintain privacy in the face of a court-compelled wiretap. As a result, Facebook is facing a much tougher legal fight with a much less predictable result.

David Ingram reports that academic access to Facebook data for research purposes will be restricted to content posted after January 1st, 2017 — after the 2016 election period that many researchers hoped to study.

Facebook shut down the English-language page of Telesur, an organ of the Venezuelan state media, and it’s not clear why, Sam Biddle reports.

In an emailed statement to The Intercept, a company spokesperson said, “The Page was temporarily unpublished to protect it after we detected suspicious activity.” The term “suspicious activity” does not appear in Facebook’s terms of service. The spokesperson would not explain what “suspicious activity” was observed on Telesur’s page, or define the term, or explain why it was initially blamed on rule-breaking by Telesur and then on technical problems on the social network’s end.

Personality conflicts inevitably happen in almost any workplace, including those of feel-good activists. Ross’s erasure from the lore of the law’s passage isn’t necessarily nefarious or a deliberate attempt to avoid giving a woman credit for her accomplishments. Those involved may have genuinely felt like she didn’t need to be mentioned because she didn’t support the compromise they’d made and wasn’t going to be part of the group moving forward.

“Mary Ross was an important part of the campaign team when we were all working full steam ahead to pass a ballot measure,” Robin Swanson, the campaign consultant for the group, told me by email. “Roles shifted when she made it clear she did not support a legislative compromise because she felt it wouldn’t go far enough.”

The European Union is considering tough new laws that would force tech companies like Facebook and YouTube to delete terrorist propaganda from their platforms within 60 minutes or face fines. Note that tech companies are already reeling from a similar German law — and that one gives them 24 full hours.

Mike Caulfield looks at how Pinterest’s recommendation engine has been a boon to Qanon and other conspiracy theorists:

The UI-driven decontextualization that drove Facebook’s news crisis is actually worse here. Looking at a board, I have no idea why I am seeing these various bits of information at all, or any indication where they come from.

Facebook minimized provenance in the UI to disastrous results. Pinterest has completely stripped it. What could go wrong?

Jack Dorsey’s Look Busy 2018 tour stopped by CNN’s Reliable Sources over the weekend. In the interview, he acknowledged a “left-leaning” bias among Twitter employees, said that proactively moderating Twitter would be too expensive, and promised that the company is rethinking how it displays likes and retweets. If you work in communications at Twitter and want to walk me through the company strategy here, I am all ears.

Linda Kinstler has a long piece on the history of TripAdvisor, and how it, too, had no plan to deal with success. The site is beset by fake reviews and attacks from businesses that want bad reviews taken down. Worth reading through the prism of other platforms’ similar struggles:

On 1 November 2017, an investigation by Raquel Rutledge, a journalist at the Milwaukee Journal Sentinel, found that TripAdvisor had a habit of deleting posts detailing sexual assaults and other violent crimes on the grounds that they violated the family-friendly policy, contained secondhand information or hearsay, or were deemed “off topic” by site moderators. “There’s no way to know how many negative reviews are withheld by TripAdvisor; how many true, terrifying experiences never get told; or for site users to know that much of what they see has been specifically selected and crafted to encourage them to spend,” Rutledge wrote.

On 7 November, TripAdvisor’s market value crashed by $1bn when its stock price dropped from $39 to $30 per share, its worst-ever day on the stock market. A couple of weeks later, the US Federal Trade Commission opened an ongoing investigation into the company’s business practices. “For a long time, [companies] could claim that their role was largely proactive, that all they had to do was put safeguards in place to reduce the risks of bad things happening,” says Botsman. “We’ve seen a massive pendulum swing – it’s now their responsibility when things go wrong. This is a whole new era of corporate accountability.”

One of my absolute favorite genres of content is “dating is a nightmare,” and Madison Malone Kircher has an absolute classic for us here:

The summer of scam has a new hero, and her name is Natasha Aponte. What did Ms. Aponte do to warrant this title? She used Tinder to con dozens of men into believing they were meeting her for a one-on-one date in Union Square. When the men arrived, they discovered that instead of a date … they’d be competing against each other to win it.

Twitter is losing users and bleeding money, which means it’s time to invest in… *squints at notes* …broadcasting high school football games:

“Nationally ranked teams” from California, Nevada, Indiana, Georgia, and Florida will be part of the series, which will start on September 7th and finish on November 9th. TechCrunch notes that NFL games have been popular on the site, and that this is the first time that high school games will be streamed in this fashion. The games will be available on @adidasFballUS on both mobile and desktop devices, and will be accompanied by a Twitter timeline with additional coverage and tweets.

The NYU School of Medicine is giving Facebook an anonymous data set of 10,000 MRI exams in hopes that Facebook’s AI team can create a speedier version of the test, Matt McFarland notes. Please enjoy the (unintentional?) shade thrown here by the head of Facebook’s AI research group (emphasis mine):

Facebook started talking to NYU about the project last year because its AI team wanted to work on something *with real-world benefits* even as it performs basic research, said Larry Zitnick of the company’s Artificial Intelligence Research group. It plans to open-source any findings in the hope that sharing the data will encourage others to expand upon its work.

LinkedIn is giving approved researchers access to anonymized data to help them study the economy, Jeremy Kahn reports:

The initiative, called the LinkedIn Economic Graph Program, is an expansion of an earlier collaboration with outside economics researchers that the company created in 2015. That effort resulted in several path-breaking findings, the company said.

For example, researchers from the World Economic Forum used LinkedIn’s data to explore the gender gap. Jessica Jeffers, an assistant professor of finance at the University of Chicago, used LinkedIn data to examine the impact of non-compete agreements, determining that they hurt new firms and entrepreneurship.

Issie Lapowsky reports on SurfSafe, a browser extension created by two UC Berkeley undergrads that helps find the origin of images on the internet. It’s useful for figuring out if something that is being presented as new is actually from another time or context — or is simply a hoax. Browser extensions are usually DOA, but can be useful in inspiring actual browser features. So, let’s CC the Chrome, Safari, and Firefox teams here.

Kia Kokalitcheva writes about the relaunch of Islands, a college-focused social network that mimics aspects of Facebook, Snapchat, and Slack:

In the new version of Islands, users will be able to join and create group chat rooms on their campus, have a profile page that includes their Snapchat and Instagram handles, see other students who are nearby (within about 1 mile of them), and view a directory of students in their school who have signed up for the app.

Currently, 5-25% of students on active campuses are using Islands, according to Isenberg, and each user invites two others. At the end of this past spring semester, Islands’ users were sending thousands of messages per day, and Isenberg predicts that when the app rolls out to every U.S. college, users will be sending 2 million messages every day.

I read this story by my colleague Dami Lee and just screamed “why?!” the whole time. Say hello to the opposite of time well spent:

Now Giphy is announcing that it’s refreshing its homepage to prominently feature Stories, which will be curated by an editorial team. Stories will be centered around the day’s trending subjects, told through GIFs. One story will be published every hour, curated by categories of Entertainment, Sports, and Reactions.

Maya Kosoff says Twitter will never live down telling conservatives that it’s “left-leaning.” Dorsey’s words were not particularly well chosen, but (1) they are probably basically true, and (2) conservatives were going to say that whether or not he ever did. That said, all of this is true:

Dorsey has spent much of the summer attempting to head off this type of criticism. In June, the Twitter C.E.O. dined at the upscale Georgetown restaurant Cafe Milano with a group that included White House communications adviser Mercedes Schlapp and Fox News commentator Guy Benson, in what quickly devolved into an airing of grievances. His most recent media tour began on Sean Hannity’s radio show, where he sought to reassure listeners that Twitter would not “shadow ban” them. Conservatives praised his transparency, and Hannity himself has since claimed to have a direct line of communication with Dorsey. But Dorsey should have known his time in the right-wing sun would be short-lived; the likes of Hannity and Jones have proven over and over again that they will never let up on the social-media giant, even when Twitter appears to skew explicitly in their favor. In admitting to “left-leaning” bias, and promising to stamp it out when enforcing rules, Dorsey effectively handed conservatives more ammunition, perpetuating the cycle that forces him to continually tiptoe around the right.

This is, in general, a strategy we should all expect to see more and more of in the world. It is, I would argue, the aggressively technologically correct strategy to run for the future. Don’t prevent leaks or try to lock down everything. Just build self-serving networks of people or bots to put out enough false information to obscure reality.

If you are a private person, don’t try to avoid having a social media profile. Instead try to have many fake ones, all sharing contradictory information about “you.”