Intermediary Liability

Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors that affects innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.

In the United States, the core doctrines of section 230 of the Communications Decency Act and section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.

To contribute to this important policy debate, CIS studies international approaches to intermediary liabilities, immunities, and safe harbors concerning users’ copyright infringement, defamation, hate speech, and other conduct; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that protect innovation, freedom of expression, privacy, and other user rights.

Joan Barata is an international expert in freedom of expression, freedom of information, and media regulation. As a scholar, he has spoken widely and conducted extensive research in these areas, working and collaborating with various universities and academic centers from Asia to Africa and the Americas, authoring papers, articles, and books, and addressing specialized parliamentary committees.

Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.

Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability Fellow with Stanford CIS. He is also a Senior Lecturer and Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin.

The Fourth Circuit has issued its decision in BMG v. Cox. In case you haven’t been following the ins and outs of the suit, BMG sued Cox in 2014, alleging that the broadband provider was secondarily liable for its subscribers’ infringing file-sharing activity. In 2015, the trial court held that Cox was ineligible as a matter of law for the safe harbor in section 512(a) of the DMCA because it had failed to reasonably implement a policy for terminating the accounts of repeat infringers, as required by section 512(i). In 2016, a jury returned a $25M verdict for BMG, finding Cox liable for willful contributory infringement but not for vicarious infringement. Following the trial, Cox appealed both the safe harbor eligibility determination and the court’s jury instructions concerning the elements of contributory infringement. In a mixed result for Cox, the Fourth Circuit last week affirmed the trial court’s holding that Cox was ineligible for safe harbor, but remanded the case for retrial because the judge’s instructions to the jury understated the intent requirement for contributory infringement in a way that could have affected the jury’s verdict.

This piece is excerpted from the Law, Borders, and Speech Conference Proceedings Volume, where it appears as an appendix. The terminology it explains is relevant for intermediary liability and content regulation issues generally - not only issues that arise in the jurisdiction or conflict-of-law context. The full conference Proceedings Volume contains other relevant resources, and is Creative Commons licensed.

This panel considered issues of national jurisdiction in relation to Internet platforms’ voluntary content removal policies. These policies, typically set forth in Community Guidelines (CGs) or similar documents, prohibit content based on the platforms’ own rules or values—regardless of whether the content violates any law.

Popularity doesn't equal truth. And yet Facebook's recent proposal to rank the trustworthiness of news sources based on popularity is loosely equating truth with popularity. In so doing, Facebook may be putting form over function.

When Facebook started 15 years ago, it didn’t set out to adjudicate the speech rights of 2.2 billion people. Twitter never asked to decide which of the 500 million tweets posted each day are jokes and which are hate speech. YouTube’s early mission wasn’t to determine if a video shot on someone’s phone is harmless speculation, dangerous conspiracy theory, or information warfare by a foreign government. Content platforms set out to get rid of expression’s gatekeepers, not become them.

This essay closely examines the effect on free-expression rights when platforms such as Facebook or YouTube silence their users’ speech. The first part describes the often messy blend of government and private power behind many content removals, and discusses how the combination undermines users’ rights to challenge state action. The second part explores the legal minefield for users—or potentially, legislators—claiming a right to speak on major platforms.

Hollywood writers could not have scripted it better. Merely a month before the implementation date of the General Data Protection Regulation (GDPR) in May this year, a data protection scandal roils the world. A whistleblower reveals the leakage of personal data from Facebook through Cambridge Analytica to malevolent actors aiming to influence the U.S. presidential election. What could possibly better illustrate the crucial role of the GDPR in an age where data drives not only marketing and online commerce but also fateful issues for democracy and world peace?

Prevention of terrorism is undeniably an important and legitimate aim in many countries of the world. In recent years, the European Union (EU) institutions, and the Commission (EC) in particular, have shown a growing concern regarding the potential use of online intermediary platforms for the dissemination of illegal content, particularly content of a terrorist nature, based on the assumption that such content can reasonably increase the danger of new terrorist attacks being committed on European soil.

"The bottom line of the case is that its legal merits barely matter, because the point is political theater," Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society, told The Hill.

Ultimately, the use case for purely AI-driven content moderation is fairly narrow, says Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society, because nuanced decisions are too complex to outsource to machines.

“If context does not matter at all, you can give it to a machine,” she told me. “But, if context does matter, which is the case for most things that are about newsworthy events, nobody has a piece of software that can replace humans.”

“I don’t think it’s an impossible task. It’s a hard task, and it depends on the defaults we want to live with,” said Danielle Citron, a University of Maryland law professor specializing in online free speech and privacy issues.

That could mean delays and filters to inspect content that was possibly violent or showing non-consensual sex, Citron said.

“There’s been a lot of confusion in [the] industry, and ambiguity in regulatory interpretation, concerning the adapting of online content distribution and ads to GDPR,” said Omer Tene, VP and chief knowledge officer with the International Association of Privacy Professionals (IAPP). “In this case, the Dutch DPA expressed a restrictive reading. In other cases, other DPAs applied the legislative language more liberally.”

When you give sites and services information about yourself, where does it go? Who else will get hold of it, and what will they use it for? The recent revelations about Cambridge Analytica's acquisition of data about tens of millions of Facebook users without their knowledge or consent have prompted renewed interest in how data about us gets shared, sold, used, and misused -- well beyond what we ever expected. Join us for a SLATA/CIS lunchtime conversation with three experts from Stanford’s Center for Internet and Society as we discuss the legal and policy implications of the Cambridge Analytica scandal and responses from Congress and courts. How can we prevent this from happening again? What new problems might we create through poorly-crafted legal responses?

Ads are the lifeblood of the web -- but the legal challenges have never been greater. On May 25, Europe's privacy regime is overhauled for the first time in 20 years, and publishers, advertisers and ad tech companies alike are confused about what it all means. Struan Robertson, a product counsel working in Google's ads business for the past seven years, gives his perspective on the legal challenges facing the industry.

Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes.

The question of what responsibility Internet platforms should bear for the content their users post has been debated around the world, as politicians, regulators, and the broader public navigate policy choices to combat harmful speech, choices that carry implications for freedom of expression, online harms, competition, and innovation.

The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules seem onerous and insane, creating huge liability for internet platforms that fail to do the impossible.

Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?