Should Facebook Simply Stop Moderating Its Platform?

Kalev Leetaru, Contributor
I write about the broad intersection of data and society. Opinions expressed by Forbes Contributors are their own.

Mark Zuckerberg speaks during an event at the company's headquarters (David Paul Morris/Bloomberg)

As fresh Guardian disclosures about Facebook's moderation policies continue to trickle out, it has become ever clearer that the company's attempt to create a single universal standard for online speech, applied to the entire planet, is failing. Far from the finely manicured walled garden it touts to the world, Facebook has become an overgrown backlot backing up to a dark forest filled with unspeakable horrors: hate speech, misogyny, racism, terrorist recruiting, suicide and even murder. Does Facebook stand a chance of turning the tide against criminal and toxic use of its platform, or should it simply abandon moderation entirely and instead adopt the model that has served the open web for the past quarter century?

One of the most striking aspects of Facebook's moderation policies, like those of its former news feed, is the picture they paint of a company pushing headfirst into extraordinarily complex areas without even the most cursory understanding of how nuanced and difficult the tasks it has set itself really are.

For a company of Facebook's resources and influence to launch a news feed that relies exclusively on a heavily Western-biased list of news outlets to capture global events, and to require that an event appear in multiple Western outlets before it can be published to the feed, suggests the company built a feature that helps drive the attention of nearly a quarter of the world's population without talking to a single media scholar or anyone who has spent time outside the West.

Similarly, its moderation policies display a striking naivete, suggesting the company spent little time learning from the long history of mass-scale human review or talking to the many companies and efforts performing such review today. Had it done so, it would have quickly learned that the fuzzy, vague and arbitrary guidelines handed to its human reviewers, coupled with the extreme time pressure and stress those reviewers work under, could only lead to the kinds of high-profile controversies in which it now finds itself embroiled. (The company has steadfastly declined to comment on what outside expertise it drew upon in developing either system.)

Indeed, as one Chinese firm put it, “If Facebook were in China, it would have to hire at least 20,000 reviewers for videos alone.” Facebook historically had just 4,500 reviewers across its entire platform and, even after its most recent hiring binge, will still have just 7,500 to review the output of nearly two billion people. To offer some sense of scale, four years ago that combined output already included a firehose of more than 350 million new photos uploaded every single day.
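
A back-of-the-envelope calculation (illustrative only, using that dated photo figure and generously assuming the work is split evenly, with no videos, text posts or livestreams to review) shows just how hopeless those numbers are:

```python
# Back-of-the-envelope only: real review loads are not split evenly and
# cover far more than photos, so the true picture is even worse.
photos_per_day = 350_000_000   # the four-year-old daily photo figure
reviewers = 7_500              # headcount after the announced hiring binge

per_reviewer = photos_per_day / reviewers      # about 46,667 photos each day
seconds_each = 24 * 60 * 60 / per_reviewer     # about 1.85 seconds per photo

print(f"{per_reviewer:,.0f} photos per reviewer per day, "
      f"{seconds_each:.2f} seconds per photo")
```

Even working around the clock with no breaks, each reviewer would have less than two seconds per photo, before a single video or written post enters the queue.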

Add to this mix an incredibly rich, diverse and ever-changing patchwork of laws, regulations and societal norms around acceptable speech: what constitutes ordinary daily speech in one country might bring a death sentence in another. According to The Guardian, Facebook has evolved to enforce national laws only where it believes it will suffer legal jeopardy if it does not. In countries where Holocaust denial material is banned by law, the company's guidelines allegedly state that such laws should not be enforced, and that material banned by law should be allowed to remain, in direct contravention of national law, unless “we face the risk of getting blocked in a country or a legal risk.”

In short, according to its own official guidelines obtained by The Guardian, Facebook has decreed that it will enforce national laws regarding illegal content only when it itself faces legal jeopardy; if a country does not sue or block Facebook for refusing to enforce its laws, those laws will be ignored. Such a stance may well encourage the European Union and individual countries to take more aggressive legal action against the company in the future to compel it to enforce their laws.

In an emailed statement, a company spokesperson said that Facebook prevents access to Holocaust denial content in eight countries, but declined to comment further on The Guardian's report or on how strictly those bans are enforced, other than to state that “our policy and legal teams are currently looking at our obligations in respect of Holocaust Denial.”

Ironically, for a company that has focused so much public attention on the empowerment of women, its official moderator guidelines explicitly sanction a vast array of misogynistic and violent statements against women, including helpful advice on how to beat or murder women who step out of line or speak out of turn. Misogynistic groups that share nude photographs of women with captions like “what is the biggest whale that you have harpooned” are allegedly entirely permitted, with Facebook allegedly responding to takedown requests by stating that the posts do not violate its Community Standards. The company did not respond to a request for comment, but according to The Guardian it deals with more than 54,000 reports of revenge pornography per month.

One of the most important things to remember about Facebook's moderation efforts is that its reviewer teams are not scouring the platform 24/7, proactively hunting through those hundreds of millions of uploaded photographs in real time for anything questionable. Instead, the review process described in the guidelines obtained by The Guardian is typically triggered only when a Facebook user takes explicit action to flag a piece of content as illegal or extremely harmful to themselves or others.
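
A crude sketch (hypothetical names and structure, not Facebook's actual systems) makes the reactive nature of that pipeline concrete: content goes live immediately, and a human reviewer sees it only if someone reports it.

```python
from collections import deque

# Hypothetical sketch of a reactive moderation pipeline: posts go live
# immediately, and nothing reaches a human reviewer until a user
# explicitly flags it. Illustrative only.

content_store: dict[int, str] = {}   # post_id -> content, live on the site
review_queue: deque = deque()        # (post_id, reason) awaiting human review

def publish(post_id: int, content: str) -> None:
    """Publishing involves no proactive review; the post is simply live."""
    content_store[post_id] = content

def flag(post_id: int, reason: str) -> None:
    """Only an explicit user report places a post before a reviewer."""
    if post_id in content_store:
        review_queue.append((post_id, reason))

publish(1, "a threatening post")
# Visible worldwide from this moment on; reviewed only because...
flag(1, "threatening language")      # ...someone took the time to report it.
```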

Imagine the woman who encounters a post reading “little girl needs to keep to herself before daddy breaks her face .. unless you stop [complaining] I’ll have to cut your tongue out … I hope someone kills you,” which also offers the helpful advice that “to snap a [woman’s] neck, make sure to apply all your pressure to the middle of her throat.” She feels physically threatened, yet when she flags the post, Facebook's reviewers respond that it does not violate the rules, and so it remains accessible to the world, free to be shared widely.

If a woman finds a statement of violence so distressing and alarming that she takes the time to report it, only to have Facebook sanction the post as completely allowable, because a reviewer (perhaps a man from an entirely different culture) spent less than ten seconds glancing at it and was not bothered, and because the official corporate guidelines expressly permit such statements, what does that do to her well-being?

Yet this is a perfect example of the inherent tension in attempting to enforce a universal global standard of acceptable societal norms across two billion people spanning every culture, background, upbringing and value set. The world is an incredibly diverse place, filled with mutually exclusive societal standards that make it all but impossible to arrive at a single standard that works for everyone.

Indeed, every time Facebook becomes embroiled in yet another moderation scandal, it simply reemphasizes what an impossible task the company has set itself and how it cannot meet everyone's needs with a single set of policies.

This raises the question: should Facebook moderate its platform at all?

For more than 20 years the web has operated quite successfully as a globally distributed patchwork of rules and norms. A website in Thailand is banned from criticizing the monarchy, while one in the United States can freely and openly mock the king with no legal repercussions. A blog in China cannot mention the 1989 events of Tiananmen Square, while one in the UK can. Wikileaks' website has managed to survive even while hosting classified material that is still technically illegal in the United States.

From its founding, the web was built on a decentralized model that enforced no central rules or regulations, leaving moderation and censorship to individual nations. Any computer server anywhere in the world could be connected to the Internet if its host country permitted, even if it contained content banned elsewhere in the world. A server hosting a site documenting the Tiananmen Square events could be connected to the web in the United States and be instantly accessible anywhere in the world. The government of China could block that server within its borders via its Great Firewall, preventing its own citizens from accessing it, but it could not force the United States to disconnect the server, or block access to it for American citizens, for violating Chinese content rules.

In short, instead of forming a United Nations-like central committee to set global standards for what content the web would permit, and forcing all ISPs worldwide to enforce that standard by disconnecting offending computers, the early web was built as a series of content-neutral pipes connecting the world's countries, with each country setting its own rules about what could be connected to the Internet within its borders.

On the one hand, this means hate and violence can exist freely on the web: a nationalist hate group's website praising violence against immigrants can be accessed by anyone anywhere, so long as it is hosted in a country where it breaks no laws. On the other, such a site is likely to be marginalized, known only to its narrow community of members. If the site edges toward actively organizing violence or recruiting for criminal activity, as ISIS-affiliated sites have done, at some point it will likely cross the line of lawful speech in its host country and have its domain seized and/or its operators prosecuted if they actively participated in and encouraged such speech.

What if Facebook were to abruptly reverse course and abandon its attempt to define a universal standard for acceptable speech applied to all the world’s billions of people and instead adopt the model that has so successfully served the web since its founding more than two decades ago?

Under this model, the same technical and legal mechanisms used to regulate the Internet at large in each country would be used to control what content can be posted to or consumed from Facebook in that country. A Thai user located in Thailand would be subject to the same restrictions as any Thai website, while an American citizen in the US would enjoy the same freedoms as an American posting to a personal blog hosted by a US company in a US data center.
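
As a thought experiment, here is a minimal sketch of what that per-country model might look like in code, with an entirely illustrative rule table standing in for each country's actual laws:

```python
# Hypothetical sketch of per-country filtering, mirroring the open web's
# model: each country defines its own restrictions, and a post is blocked
# only for viewers in countries whose laws it violates. The rule table
# below is illustrative, not a real legal inventory.

COUNTRY_RESTRICTIONS: dict[str, set[str]] = {
    "TH": {"lese_majeste"},       # criticism of the monarchy is illegal
    "CN": {"tiananmen_1989"},     # discussion of the 1989 events is banned
    "DE": {"holocaust_denial"},   # banned under national law
    "US": set(),                  # no category-based bans
}

def visible_to(post_categories: set[str], viewer_country: str) -> bool:
    """A post is visible unless it falls into a category restricted by
    the viewer's country; unknown countries default to unrestricted,
    just as on the open web."""
    restricted = COUNTRY_RESTRICTIONS.get(viewer_country, set())
    return not (post_categories & restricted)

# The same post criticizing a monarchy is visible to an American viewer
# but blocked for a viewer inside Thailand:
assert visible_to({"lese_majeste"}, "US")
assert not visible_to({"lese_majeste"}, "TH")
```

The crucial property is that there is no global verdict: the same post is simultaneously legal in one place and blocked in another, exactly as on the open web.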

Perhaps the greatest challenge with this model is that the web is essentially a massive library in which you have to seek out the content you want, while Facebook’s model of a “social network” forcibly places you into contact with material from all over, some of which is of great interest to you and some of which may be deeply offensive or physically threatening to you. That fringe nationalist website will likely never see the light of day beyond its small user community, while on Facebook it can spread freely and confront many for whom it can cause great emotional harm.

Could Facebook abandon its global moderation model and instead adopt new self-moderation tools that afford its users the benefits of a globally connected social network with the self-determination of the open web? Just as a random internet surfer can consciously decide to seek out that nationalist site, whether to laud it or attack it, or remain blissfully ignorant of its existence, one could imagine a Facebook in which each user has access to a rich and incredibly detailed array of moderation tools for precisely defining the kind of content they wish to see. The company, meanwhile, would focus its efforts on blocking content defined as unlawful under the laws of each country, applying the kind of fine-grained, geographically aware targeting that mimics the model that has served the web since its modern founding. Indeed, Zuckerberg himself has hinted at such an idea.
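
Concretely, one could picture those self-moderation tools as a user-chosen filter layered on top of the per-country legal check sketched above: geography decides what is unlawful, while the individual decides what is unwanted. The category names and the feed_filter helper below are hypothetical illustrations, not Facebook's design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: user-defined moderation layered on top of the
# per-country visible_to() check from the earlier sketch. The platform
# blocks only what is unlawful for the viewer; the user decides the rest.

@dataclass
class UserPreferences:
    country: str
    blocked_categories: set[str] = field(default_factory=set)

def feed_filter(posts, prefs: UserPreferences):
    """Yield only posts that are lawful in the user's country AND not
    excluded by the user's own self-moderation choices."""
    for categories, text in posts:
        if not visible_to(categories, prefs.country):
            continue   # unlawful for this viewer: the platform's job
        if categories & prefs.blocked_categories:
            continue   # lawful but unwanted: the user's own choice
        yield text

# A US user who opts never to see nationalist content:
prefs = UserPreferences("US", {"nationalist"})
posts = [({"nationalist"}, "recruiting post"), (set(), "family photo")]
print(list(feed_filter(posts, prefs)))   # -> ['family photo']
```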

Would this solve Facebook’s moderation troubles? Or would it instead strengthen and polarize the filter bubble such that it splits the world apart instead of bringing it together?

At the very least, it is clear that Facebook's current model, an army of human reviewers enforcing a single universal standard, developed in secret, that applies to every human on earth, is destined for failure: the world is simply too diverse, filled with myriad mutually exclusive beliefs and norms that cannot be reconciled into one set of standards and forced upon a planet to be dragged kicking and screaming into a utopia of perfect harmony. Would the web's decentralized model, which has served it so successfully for nearly a quarter century, work against Facebook's backdrop of trying to bring the people of the earth together? Is Facebook's vision of bringing very different perspectives into contact with one another achievable (essentially the internet equivalent of promoting that nationalist website to the broader public to increase its visibility)? Or is the notion of bringing the entire world together an impossible dream, as illustrated by Facebook's impossibly complex moderation guidelines, which try to mediate and referee the strife that inevitably results when conflicting norms and views are forced into contact?

In the end there are no easy solutions. But perhaps if Facebook spent a bit more time listening to and learning from others, and hosting open dialogues with its global user community, instead of blindly charging forward in the belief that technology will solve all the world's ills, we might have a better chance of bringing the world together rather than pushing it further apart.