If you care about free speech and innovation, this should worry you. Currently, if someone posts a defamatory comment or shares extremist material on Twitter, they alone (and not Twitter) are liable for legal damages or criminal prosecution. Twitter may of course choose to ban extremist accounts for violating its Terms of Service, but that decision lies with Twitter alone. If the law changes, Twitter could be held liable if it fails to rapidly take down the offending content.

This may seem like a small change, but it would have massive implications for the internet ecosystem.

David Post, a professor specialising in internet law, makes the case that an obscure provision (Section 230) of the Telecommunications Act of 1996 has been essential to the growth of platforms like Facebook, Twitter, YouTube and Tumblr.

The provision “immunizes all online ‘content intermediaries’ from a vast range of legal liability that could have been imposed upon them, under pre-1996 law, for unlawful or tortious content provided by their users — liability for libel, defamation, infliction of emotional distress, commercial disparagement, distribution of sexually explicit material, threats or any other causes of action that impose liability on those who, though not the source themselves of the offending content, act to ‘publish’ or ‘distribute’ it.”

He argues that treating web firms as platforms and not publishers “created a trillion or so dollars of value”. Imagine if Facebook, Tumblr, Twitter and YouTube could be sued or fined whenever a user posted extremist, racist, or defamatory material. “The potential liability that would arise from allowing users to freely exchange information with one another, at this scale, would have been astronomical”. With those risks, it’s easy to imagine venture capitalists passing up the chance to invest in Facebook, Twitter or YouTube at an early stage.

Eric Goldman, another online law professor, argues that treating online platforms as publishers would reduce competition and entrench the major players. Under the current law, “new entrants can challenge the marketplace leaders without having to match the incumbents’ editorial investments or incurring fatal liability risks.”

Beyond the effect on new entrants, there’s a real risk that platforms would restrict the free flow of ideas by over-enforcing their rules on extremist and defamatory content. We have already seen multiple cases of platforms overreacting and banning users for seemingly mild violations. For instance, the comedian Marcia Belsky was banned from Facebook for 30 days for saying “men are scum” in response to death and rape threats. Unlike pornographic content, which can be identified algorithmically, identifying hate speech, threats and defamation depends on context. If the potential liability is high and policing abuse is labour-intensive, firms may be incentivised to shoot first and ask questions later. That could have a chilling effect on free speech.