Our starting principle is simple: Under the First Amendment, social media platforms and other online intermediaries have the right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against them or flooded across our newsfeeds. We want our elections free from manipulation and for the speech of women and marginalized communities not to be silenced by harassment.

But we won’t make the Internet fairer or safer by pushing platforms into ever more aggressive efforts to police online speech. When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.

Indeed, for every high-profile case of despicable content being taken down, there are many, many more stories of people in marginalized communities, already targets of persecution and violence, whose own speech gets silenced. The powerless struggle to be heard in the first place; social media can and should help change that reality, not reinforce it.

That’s why we must remain vigilant when platforms decide to filter content. We are worried about how platforms are responding to new pressures to filter the content on their services, not because there’s a slippery slope from judicious moderation to active censorship, but because we are already far down that slope.

To avoid slipping further, and maybe even reverse course, we’ve outlined steps platforms can take to help protect and nurture online free speech.

For its part, rather than instituting more mandates for filtering or speech removal, Congress should defend safe harbors, protect anonymous speech, encourage platforms to be open about their takedown rules and to follow a consistent, fair, and transparent process, and avoid promulgating any new intermediary requirements that might have unintended consequences for online speech.

EFF was invited to participate in this hearing and we were initially interested. However, before we confirmed our participation, the hearing shifted in a different direction. We look forward to engaging in further discussions with policymakers and the platforms themselves.

Related Updates

Today, EU negotiators in Strasbourg struggled to craft the final language of the Copyright in the Digital Single Market Directive, in their last possible meeting for 2019. They failed, thanks in large part to the Directive’s two most controversial clauses: Article 11, which requires paid licenses for linking to news...

Social media platforms such as Facebook and Twitter provide an opportunity for everyone to have a voice on the Internet, to communicate with friends, post their views, and comment on movies or the president. However, the fact that they provide a broad, open platform for speech doesn’t automatically mean they...

EFF, as part of a coalition of over sixty other human rights groups led by Human Rights Watch and Amnesty International, still has questions for Sundar Pichai, Google’s CEO. Leaks and rumors continue to spread from Google about “Project Dragonfly,” a secretive plan to create a...

Facebook just quietly adopted a policy that could push thousands of innocent people off of the platform. The new “sexual solicitation” rules forbid pornography and other explicit sexual content (which was already functionally banned under a different policy), but they don’t stop there: they also ban...

Social media platform Tumblr has announced a ban on so-called “adult content,” a move made, it seems, in reaction to Tumblr’s app being removed from the Apple app store. But while making the app more available is in theory good for Tumblr users, in practice what’s about to...

California is still trying to gag websites from sharing true, publicly available, newsworthy information about actors. While this effort is aimed at the admirable goal of fighting age discrimination in Hollywood, the law unconstitutionally punishes publishers of truthful, newsworthy information and denies the public important information it needs to fully...

The New York Times published a blockbuster story about Facebook that exposed how the company used so-called “smear merchants” to attack organizations critical of the platform. The story was shocking on a number of levels, revealing that Facebook’s hired guns stooped to dog-whistling, anti-Semitic attacks aimed...

We’ve taken Internet service companies and platforms like Facebook, Twitter, and YouTube to task for bad content moderation practices that remove speech and silence voices that deserve to be heard. We’ve catalogued their awful decisions. We’ve written about their ambiguous policies, inconsistent enforcement, and...

San Francisco—The Electronic Frontier Foundation (EFF) and more than 70 human and digital rights groups called on Mark Zuckerberg today to add real transparency and accountability to Facebook’s content removal process. Specifically, the groups demand that Facebook clearly explain how much content it removes, both rightly...