Don’t Force Web Platforms to Silence Innocent People

The U.S. House Judiciary Committee held a hearing this week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms respond to extremism and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people.

But there’s a lot platforms can do right now, starting with more transparency and visibility into platforms’ moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people—especially those trying to document or respond to violence—aren’t also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators.

Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it’s also essential for those working to study and fight hate groups. As the Anti-Defamation League’s Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. […] We don’t have enough information and they don’t share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices with human rights and civil liberties principles. The Principles ask platforms

to be honest with the public about how many posts and accounts they remove,

to give notice to users who’ve had something removed about what was removed, and under what rule, and

to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: “We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize.”

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than was intended. As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam in 2017, “A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all.”
