Intermediary Liability

Whether and when communications platforms like Google, Twitter, and Facebook are liable for their users’ online activities is one of the key factors that affect innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations, and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.

In the United States, the core doctrines of Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.

To contribute to this important policy debate, CIS studies international approaches to intermediaries’ obligations, liabilities, immunities, and safe harbors concerning users’ copyright infringement, defamation, hate speech, and other unlawful activity; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that protect innovation, freedom of expression, privacy, and other user rights.

Joan Barata is an international expert in freedom of expression, freedom of information, and media regulation. As a scholar, he has spoken and conducted extensive research in these areas, working and collaborating with universities and academic centers from Asia to Africa to the Americas, authoring papers, articles, and books, and addressing specialized parliamentary committees.

Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.

Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School, where he previously served as the Intermediary Liability Fellow. He is also a Senior Lecturer and Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin.

"If you are distressed by anything external, the pain is not due to the thing itself, but to your estimate of it; and this you have the power to revoke at any moment." ~ Marcus Aurelius

The last post in this series observed that optimal policy thinking aims at allowing people sufficient control over the technologies they use so that they may apply their own capacities and, in that process, find meaning. This post explores that point further, in particular how an emphasis on technology, rather than on people, falls short of that aim.

In response to a global backlash in the wake of Brexit and the 2016 US presidential election, dominant tech companies are scrambling to stave off increased governmental regulation of their information handling practices. It is an attractive strategy for them to cut deals with regulators whereby they agree to follow privately negotiated rules in lieu of command-and-control regulation. With respect to content moderation, this form of hybrid public-private regulation could undermine First Amendment limits on state action that are designed to protect individual citizens from official censorship. This post explores the role of anti-piracy voluntary agreements in normalizing hybrid public-private speech regulation on the Internet.

The European Parliament recently took the final vote on the new Audiovisual Media Services Directive (AVMSD). The text, which can be considered final, now awaits only the formal approval of the Council. The AVMSD will become the first legally binding instrument to impose new and extensive responsibilities for content regulation on privately owned Internet platforms, which will be required to establish and apply detailed rules in areas such as hate speech, child pornography, the protection of children’s development, and the prevention of terrorism.

Over the course of the last decade, in response to significant pressure from the US government and other governments, service providers have assumed private obligations to regulate online content that have no basis in public law. For US tech companies, a robust regime of "voluntary agreements" to resolve content-related disputes has grown up on the margins of the Digital Millennium Copyright Act (DMCA) and the Communications Decency Act (CDA). For the most part, this regime has been built for the benefit of intellectual property rightholders attempting to control online piracy and counterfeiting beyond the territorial limits of the United States and without recourse to judicial process.

Lately, politicians and news sources have been repeating a persistent myth about, of all things, technology law. The myth concerns a provision of the 1996 Communications Decency Act, generally known as Section 230 or CDA 230.

In recent years, lawmakers around the world have proposed a lot of new intermediary liability (IL) laws. Many have been miscalibrated – risking serious collateral damage without necessarily using the best means to advance lawmakers’ goals. That shouldn’t be a surprise. IL isn’t like tax law or farm subsidies. Lawmakers, particularly in the United States, haven’t thought much about IL in decades.

Tighter regulation of social media and other online services is now under discussion in several European countries, as well as in the UK, where the government has released a white paper outlining its proposed approach to tackling online harm.

The Internet was going to set us all free. At least, that is what U.S. policy makers, pundits, and scholars believed in the 2000s. The Internet would undermine authoritarian rulers by reducing the government’s stranglehold on debate, helping oppressed people realize how much they all hated their government, and simply making it easier and cheaper to organize protests.

""The number of net intermediaries acting as gatekeepers has increased," since GoDaddy booted Daily Stormer, said Daphne Keller, who studies platforms' legal responsibilities at the Stanford Center for Internet and Society. "Suddenly the domain registrars are sitting in judgment on content and speech," joining the usual players around free speech such as Google, Facebook and Twitter."

""Legally, they don't have any responsibility around this, unless it's a federal crime [such as child pornography] or intellectual property," Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society, told CNN Tech."

"“Part of what made the Internet always great and the reason why it’s blossomed is because it was always decentralized and not subject to heavy-handed regulations,” says Omer Tene, vice president of research and education at International Association of Privacy Professionals. "The concern is that the Internet will be splintered into islands.”"

"The coalition has gathered 570,000 signatures urging Facebook to acknowledge discriminatory censorship exists on its platform, that it harbors white supremacist pages even though it says it forbids hate speech in all forms, and that black and Muslim communities are especially in danger because the hate ­directed against them translates into violence in the streets, said Malkia Cyril, a Black Lives Matter activist in Oakland, Calif., who was part of a group that first met with Facebook about their concerns in 2014."

Friday, May 24
How Should Free Speech Principles Apply to the Content Policy of Internet Platforms?
• Danielle Citron, University of Maryland Carey School of Law
• Niall Ferguson, Stanford University
• Mary Anne Franks, University of Miami Law School
• Eugene Volokh, UCLA Law School
Moderator: Nate Persily, Stanford Law School

The question of what responsibility Internet platforms should bear for the content their users post has been the subject of debate around the world, as politicians, regulators, and the broader public navigate policy choices to combat harmful speech that carry implications for freedom of expression, online harms, competition, and innovation.

The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules seem onerous and insane, creating huge liability for internet platforms that fail to do the impossible.

Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?