Don’t Force Web Platforms to Silence Innocent People

The U.S. House Judiciary Committee held a hearing this week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms should respond to extremist content and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people.

But there’s a lot platforms can do right now, starting with more transparency and visibility into platforms’ moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people—especially those trying to document or respond to violence—aren’t also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators.

Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it’s also essential for those working to study and fight hate groups. As the Anti-Defamation League’s Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. […] We don’t have enough information and they don’t share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices with human rights and civil liberties principles. The Principles ask platforms

to be honest with the public about how many posts and accounts they remove,

to give notice to users who’ve had something removed about what was removed, and under what rule, and

to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: “We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize.”

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than intended. As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam in 2017, “A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all.”
