Intermediary Liability


Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors that affects innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.

In the United States, the core doctrines of section 230 of the Communications Decency Act and section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.

To contribute to this important policy debate, CIS studies international approaches to intermediary obligations concerning users’ copyright infringement, defamation, hate speech, and other vicarious liabilities, immunities, or safe harbors; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that will protect innovation, freedom of expression, privacy, and other user rights.

Joan Barata is an international expert in freedom of expression, freedom of information, and media regulation. As a scholar, he has spoken widely and conducted extensive research in these areas, working and collaborating with various universities and academic centers from Asia to Africa and the Americas, authoring papers, articles, and books, and addressing specialized parliamentary committees.

Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.

Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability Fellow at Stanford CIS. He is also a Senior Lecturer and Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin.

David G. Post, reviewing what the original Law and Borders paper got right (and what it got wrong), noted that the central dilemma it had identified—the conflict between an a-territorial global network and an international legal system with territoriality at its core—had certainly proved to be a profoundly challenging one. He suggested that the failure (thus far) to make much headway on these problems of “governance on the Internet” (in Bertrand de la Chapelle’s phrase) may be pushing these problems “upward,” to the institutions (e.g., ICANN) concerned with “governance of the Internet,” as they face increasing pressure to leverage their control over critical infrastructure to exercise greater control over online content and conduct.

The essay below serves as an introduction to the Stanford Center for Internet and Society's Law, Borders, and Speech Conference Proceedings Volume. The conference brought together experts from around the world to discuss conflicting national laws governing online speech, and how courts, Internet platforms, and public interest advocates should respond to increasing demands for these laws to be enforced on the global Internet.

The Law, Borders, and Speech conference at Stanford’s Center for Internet and Society asked the important question: Which countries’ laws and values will govern Internet users’ online behavior, including their free expression rights? The conference used the landmark article written in 1996 by David G. Post and David R. Johnson to examine whether twenty years on their conclusions still held true. Post and Johnson had concluded that “[t]he rise of the global computer network is destroying the link between geographical location and: (1) the power of local governments to assert control over online behavior; (2) the effects of online behavior on individuals or things; (3) the legitimacy of the efforts of a local sovereign to enforce rules applicable to global phenomena; and (4) the ability of physical location to give notice of which sets of rules apply.” They proposed that national law must be reconciled with self-regulatory processes emerging from the network itself.


When Facebook started 15 years ago, it didn’t set out to adjudicate the speech rights of 2.2 billion people. Twitter never asked to decide which of the 500 million tweets posted each day are jokes and which are hate speech. YouTube’s early mission wasn’t to determine if a video shot on someone’s phone is harmless speculation, dangerous conspiracy theory, or information warfare by a foreign government. Content platforms set out to get rid of expression’s gatekeepers, not become them.

This essay closely examines the effect on free-expression rights when platforms such as Facebook or YouTube silence their users’ speech. The first part describes the often messy blend of government and private power behind many content removals, and discusses how the combination undermines users’ rights to challenge state action. The second part explores the legal minefield for users—or potentially, legislators—claiming a right to speak on major platforms.

Hollywood writers could not have scripted it better. Merely a month before the implementation date of the General Data Protection Regulation (GDPR) in May this year, a data protection scandal roils the world. A whistleblower reveals the leakage of personal data from Facebook through Cambridge Analytica to malevolent actors aiming to influence the U.S. presidential elections. What could possibly better illustrate the crucial role of GDPR in an age where data drives not only marketing and online commerce but also fateful issues for democracy and world peace?

Prevention of terrorism is undeniably an important and legitimate aim in many countries of the world. Over the last several years, the European Union (EU) institutions, and the European Commission (EC) in particular, have shown growing concern regarding the potential use of online intermediary platforms to disseminate illegal content, particularly content of a terrorist nature, based on the assumption that such content can reasonably increase the danger of new terrorist attacks being committed on European soil.


People would be allowed to use pseudonyms when posting online, but platforms could be forced to hand over users’ private information to third parties, including private persons, seeking prosecution for defamation or other crimes.

“The chilling effect for freedom of speech is real,” said Thomas Lohninger.

“What’s not so clear yet is whether G.D.P.R. has had an effect on privacy and on corporate data practices,” said Omer Tene, vice president and chief knowledge officer at the International Association of Privacy Professionals. “Has the underlying business model of the internet changed? Is consumer privacy better? I think those questions are very much still open.”

“If you want to be more skeptical, the question is does all this activity actually deliver more privacy?” said Omer Tene, vice president at the International Association of Privacy Professionals, an industry trade body. “Ostensibly the goal isn’t just to mobilize compliance and regulatory efforts, complaints, and notifications but to actually result in better privacy for individuals on the ground. I think the jury is still out on that. It’s not clear at year end that corporate data practices are different or have changed.”

Stanford CIS brings together scholars, academics, legislators, students, programmers, security researchers, and scientists to study the interaction of new technologies and the law, and to examine how the synergy between the two can either promote or harm public goods like free speech, innovation, privacy, public commons, diversity, and scientific inquiry.

Recent data-related cases, such as the Google Spain case and the dispute between Microsoft and the US Government over access to data held by Microsoft in Ireland, highlight flaws in current thinking on jurisdiction in both private and public international law.


The question of what responsibility Internet platforms should bear for the user-posted content they host has been the subject of debate around the world, as politicians, regulators, and the broader public seek to navigate policy choices, aimed at combating harmful speech, that have implications for freedom of expression, online harms, competition, and innovation.

The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules are onerous and unworkable, creating huge liability for internet platforms that fail to do the impossible.

Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?