The bill targets Section 230, the law that shields online platforms, services, and users from liability for most speech created by others. Section 230 protects intermediaries from liability both when they choose to edit, curate, or moderate speech and when they choose not to. Without Section 230, social media would not exist in its current form—the risks of liability would be too great given the sheer volume of user speech those services publish—and neither would the thousands of websites and apps that host users’ speech and media.

Under the bill, platforms over a certain size—30 million active users in the U.S. or 300 million worldwide—would lose their immunity under Section 230. In order to regain its immunity, a company would have to pay the Federal Trade Commission for an audit to prove “by clear and convincing evidence” that it doesn’t moderate users’ posts “in a manner that is biased against a political party, political candidate, or political viewpoint.”

It’s foolish to assume that anyone could objectively judge a platform’s “bias,” but particularly dangerous to put a government agency in charge of making those judgments.

It might be tempting to dismiss the bill’s danger given that it limits its scope to very large platforms. But therein lies one of the bill’s most insidious features. Google, Facebook, and Twitter would never have climbed to dominance without Section 230. This bill could effectively set a ceiling on the success of any future competitor. Once again, members of Congress have attempted to punish social media platforms by introducing a bill that would only reinforce those companies’ dominance. Don’t forget that the last time Congress undermined Section 230, large tech companies cheered it on.

Don’t Let the Government Decide What Bias Is

Sen. Hawley’s bill is clearly unconstitutional. A government agency can’t punish any person or company because of its political viewpoints, or because it favors certain political speech over others. And decisions about what speech to carry or remove are inherently political.

What does “in a manner that is biased against a political party, political candidate, or political viewpoint” mean, exactly? Would platforms be forced to host propaganda from hate groups and punished for doing anything to let users hide posts from the KKK that express its political viewpoints? Would a site catering to certain religious beliefs be forced to accommodate conflicting beliefs?

What about large platforms where users intentionally opt into partisan moderation decisions? For example, would Facebook be required to close private groups that leftist activists use to organize and share information, or instruct the administrators of those groups to let right-wing activists join too? Would Reddit have to delete r/The_Donald, the massively popular forum exclusively for fans of the current U.S. president?

The bill provides no guidance on any of these questions. In practice, the FTC would have broad license to enforce its own view on which platform moderation practices constitute bias. The commissioners’ enforcement decisions would almost certainly reflect the priorities of the party that nominated them. Since the bill requires that a supermajority of commissioners agree to grant a platform immunity, any two of the five FTC commissioners could decide together to withhold immunity from a platform.

That’s the problem: this bill would let the government make decisions about whose speech stays online—something the government simply cannot do under the U.S. Constitution. To see how a government might attempt to push the FTC to focus only on certain types of bias or censorship, consider President Trump’s relentless focus on perceived anti-conservative bias on social media. Before supporting the bill, conservatives in Congress may want to consider how it might be used by future administrations.

As we have argued in several recent amicus briefs, Internet users are best served by the existence of both moderated and unmoderated platforms—both those that are open forums for all speech and those that are tailored to certain interests, audiences, and user sensibilities. This bill threatens the existence of the latter.

Section 230 Doesn’t—and Shouldn’t—Preclude Platform Moderation

Sen. Hawley’s bill comes after a long campaign of misinformation about how Section 230 works. A few members of Congress—including Sen. Hawley—have repeatedly claimed that under current law, platforms must make a choice between their right under the First Amendment to moderate speech and the liability protections that they enjoy under Section 230. In truth, no such choice exists. Under the First Amendment, platforms have the right to moderate their online platforms however they like; Section 230 additionally shields them from most types of liability for their users’ activity. It’s not one or the other. It’s both.

Indeed, one of Congress’ motivations for passing Section 230 was to remove the legal obstacles that discouraged platforms from filtering out certain types of speech (at the time, Congress was focused on sexual material in particular). In two important early cases over Internet speech, courts allowed civil defamation claims to proceed against Prodigy but not against CompuServe. Because Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users’ posts even if it lacked knowledge of their contents.

Reps. Chris Cox and Ron Wyden realized in 1995 that this precedent would hamstring the nascent practice of online moderation. That’s why they introduced the Internet Freedom and Family Empowerment Act, which we now know as Section 230.

Hawley’s bill would bring us closer to that pre-230 Internet, punishing online platforms when they take measures to protect their users, including efforts to minimize the impacts of harassment and abuse—the very sorts of efforts that Section 230 was intended to preserve. While platforms often fail in such measures—and frequently silence innocent people in the process—giving the government discretion to shut down those efforts is not the solution.

Section 230 plays a crucial, historic role in protecting free speech and association online. That includes the right to participate in online communities organized around certain political viewpoints. It’s impossible to enforce an objective standard of “neutrality” on social media—giving government license to do so would pose a huge threat to speech online.
