Why Facebook Keeps Stepping In It

The social network can defend hateful ideas on its platform, or it can purge them. It keeps blundering because it doesn’t want to do either.

Facebook CEO Mark Zuckerberg testifies before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill, April 11, 2018. Chip Somodevilla/Getty Images

For years, Facebook has walked a path paved in gold but flanked by quicksand. This week, the company stepped in it—deep. And it isn’t getting back out with its boots clean.

The gold is the treasure that Facebook accrues in its role as an automated online clearinghouse where all kinds of people post all kinds of stuff, spend time reading each other’s stuff, and encounter ads tailored to the stuff they like and do. To collect it, Facebook just has to keep people coming back and keep showing them ads. That’s harder than it sounds, but Facebook is very good at it. It’s what the company’s highly skilled workforce has been trained to do.

The quicksand is political controversy. To maintain its status as the dominant social network, Facebook must avoid alienating large swaths of its user base, either by taking a stand they find abhorrent or by failing to take a stand they view as obligatory. Facebook understands this, too. It's why, for most of the company's history, its moderators focused on removing material such as pornography, terrorist propaganda, and blatant hate speech that most people view as repellent regardless of their politics. As for political opinions, Facebook adopted a laissez-faire posture, presenting itself as an ideologically neutral platform for all ideas—even hateful ones, like those of the alt-right.

Conveniently, Facebook’s news-feed algorithm tended to show people a lot of ideas that they were already inclined to agree with. So, with some exceptions, liberals found the social network to be a pretty liberal place, while conservatives found it a suitably conservative environment. If you encountered views you didn’t like, all you had to do was ignore them, and Facebook’s software would learn to keep them out of your sight. Facebook got to walk the high road to riches even as its users grew more strident.

This week's controversy isn't just about Infowars or Holocaust denial, though. The big picture is that the company is now facing intense pressure to choose between defending those who purvey the ugliest of political views and purging them.

Suffice it to say, neither of those paths is paved with gold.

So how did Facebook find itself in this predicament? It’s tempting to point to the company’s latest PR gaffes. Perhaps Facebook officials should have known better than to hold an on-the-record briefing with journalists to trumpet their accomplishments fighting fake news at a time when the likes of Infowars are still flourishing on the platform. Surely Zuckerberg should have known better than to defend Holocaust deniers, even in the limited sense of defending their right to be wrong.

To blame Facebook’s newfound transparency for this mess, however, is to get things backward. Sure, Facebook may have skirted major trouble for years by keeping its officials aloof from the press and the public and by letting the software speak for it. But the tension between promoting free speech and airing offensive views was always there, and it only mounted the longer Facebook denied it.

For years, there were critics who pointed out that Facebook’s software contained hidden biases of its own—that its news-feed rankings privileged sensation over nuance, emotion over reason, hoaxes over debunkings, and in-group solidarity over broad-mindedness. Facebook waved them away. Whether it did so out of naïveté or cynical self-interest, the effect was to keep the engagement and the ad dollars flowing with minimal human intervention.

That is, until recently. The pivotal role that Facebook played in the 2016 U.S. presidential campaign and the Brexit referendum—votes that bitterly divided the populaces of two of its most influential markets, the United States and Britain—opened it to a level of political scrutiny it had never anticipated.

For outraged Americans and Europeans, the neutrality that Facebook had long claimed suddenly rang hollow. If Facebook was so neutral, why did it seem to work so much better for Trump and Leave than it did for Clinton and Remain? What role did its personalization algorithms play in polarizing and radicalizing people, or in stoking racist attitudes? Why was its news-feed software delivering fake news and hoaxes to larger audiences than it did legitimate news stories? And what’s with all that creepy collection of people’s data, anyway—did that help get Trump elected too?

The façade cracked in other parts of the world, too. In Myanmar, Facebook reportedly became a hotbed of anti-Muslim fervor and was blamed for sowing violence against the Rohingya minority. The company has faced similar accusations in Sri Lanka, Indonesia, and elsewhere.

These are the kinds of messes from which the company can’t extricate itself just by writing more code or by boosting its engagement metrics. They require explicit value judgments, made by humans on the basis of principles. And they’re inherently political.

Facebook began coming to grips with this in late 2016, when it agreed under heavy pressure to tackle fake news and misinformation. For the first time, it accepted some responsibility to adjudicate between truth and lies on its platform. Its editorial commitments deepened when it promised Congress that it would address foreign election meddling, and deepened again in January 2018, when it announced a plan to promote “broadly trusted” news sources at the expense of less reputable outlets.

At every step, Facebook has tried to keep the veneer of objectivity intact, even as it waded deeper into murky value judgments. Rather than try to spot fake news, it partnered with third-party fact-checking organizations. It framed election meddling as a problem of authenticity and transparency, rather than one of politics or national loyalties. Its plan for identifying widely trusted news sources—a thorny task even for academics and professional media critics—was simply to survey Facebook users.

There are valid reasons for Facebook to prefer this approach. Tech companies and social networks are typically guided by terms of service that can be applied equally and without prejudice to all users. When they start banning people based on subjective judgments about the quality of their ideas, they’re opening the door to being held responsible for everything anyone posts. And the punishment for acknowledging human editorial judgment is real: When it was revealed in 2016 that Facebook employed journalists to edit its trending-news section, and that those journalists tended toward liberal views, the outrage from the right was swift and intense. Facebook fired the journalists and trained engineers to code some rudimentary editing features instead. The results were awful, but at least Facebook had insulated itself from claims of partisanship.

But the balancing act keeps getting trickier. On Wednesday afternoon, Facebook announced a new policy under which it will take down posts that contain misinformation that could plausibly lead to violence. That’s a reasonable standard, but one that is going to be very hard to enforce without a lot of subjective judgments—and, inevitably, outcry from those who feel they’re being unjustly censored.

Now that Facebook has shown it’s willing to get its hands dirty, the calls for it to take responsibility will only increase. The company is likely to incur one controversy after another as it muddles its way through the morass of trade-offs between free speech and accountability.

The good news is that’s the way it should be. As my colleague April Glaser astutely noted, the root of Facebook’s problems is its sheer size. If it hadn’t come to dominate the flow of news so thoroughly, it might never have had to answer for all the bad news sources it has amplified. If its business model weren’t so reliant on its ubiquity, it could afford to offend some of the people some of the time.

Facebook is mired in this swamp not because of any recent decision or gaffe, but because the business of determining what news people see is inherently messy. Media companies face tough editorial decisions every day; it’s what they’re built for. Facebook is starting to face them too, on a grand scale. And the fact that this wasn’t what Facebook was built for is an excuse that no longer flies.

Facebook was lucky that its road to riches ran straight and narrow for so long. But now it’s twisty and treacherous, and there’s no going back. It’s going to be messy from here on out.