Intermediary Liability


Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors that affect innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.

In the United States, the core doctrines of section 230 of the Communications Decency Act and section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.

To contribute to this important policy debate, CIS studies international approaches to intermediary obligations concerning users’ copyright infringement, defamation, hate speech or other vicarious liabilities, immunities, or safe harbors; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that will protect innovation, freedom of expression, privacy and other user rights.

Joan Barata is an international expert in freedom of expression, freedom of information and media regulation. As a scholar, he has spoken and conducted extensive research in these areas, working and collaborating with various universities and academic centers from Asia to Africa and the Americas, authoring papers, articles and books, and addressing specialized parliamentary committees.

Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.

Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability fellow with Stanford CIS. He is also a Senior Lecturer and Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin.

The law and legal professional ethics require of counsel a duty of candor in the practice of law. This includes a duty to not knowingly make false statements of fact, and to not offer evidence the lawyer knows to be false. These principles are considered essential to maintaining both substantive fairness for participants in the process, and trust in the integrity of the process for those outside of it.

Users of information tools in public contexts are not, of course, subject to the same duties. And publication of false information is generally protected by the First Amendment, unless it falls into one of the defined exceptions. I’m doubtful a law against publication of false information would be sustained.

It is, however, perfectly acceptable for most information technology platforms to adopt such a policy and seek to enforce it as best they can. That is, platforms could create and enforce rules against publication of information known to be false. A recent publication from the NYU Stern Center for Business and Human Rights contends platforms should do so. This post concurs: subject to some limitations, private platforms can and should take a position that use of their services to intentionally or carelessly spread false information violates terms of service.

In the name of “brand safety,” advertisers these days are working hard to better control where their ads appear online. Programmatic advertising with real-time bidding automates the process of online ad buying and ad placement to such an extent that the entire process takes place in the time it takes a web page to load. The process is highly efficient, but a significant downside is that ads sometimes appear alongside controversial content with which an advertiser would rather not be associated. Online pornography is the classic example, but other strains of extreme content—e.g., hate speech, conspiracism, and incitement-to-terrorism—have more recently come into focus for advertisers as threats to brand reputation.

The security of our news and media information systems matters as much as the security of personal and commercial information systems. "Information warfare" shows that harms can arise even when there is no unauthorized access, when tools are used as intended, and when there is no compromise of user privacy settings. In both cybersecurity and news/media security, the threats are asymmetric, the tools are readily available and usable for many purposes, and attacks are easily disguised as benign activity.

Policymakers increasingly ask Internet platforms like Facebook to “take responsibility” for material posted by their users. Mark Zuckerberg and other tech leaders seem willing to do so. That is in part a good development. Platforms are uniquely positioned to reduce harmful content online. But deputizing them to police users’ speech in the modern public square can also have serious unintended consequences. This piece reviews existing laws and current pressures to expand intermediaries’ liability for user-generated content.

On Friday, the European Union’s General Data Protection Regulation (GDPR) goes into effect. Many articles have been written complaining that the regulation is ambiguous, confusing and difficult to implement.


"Some cyberlaw experts fear a ruling against Grindr will put the creativity of the internet as we know it at risk. They say that requiring platforms to more closely monitor users would give an advantage to tech giants like Facebook, Twitter, and Google while hindering smaller startups with niche audiences, including Grindr. It would be more expensive to start new businesses online because of the cost of hiring watchdogs, said Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union."

“For a reform of this scope and magnitude, it’s only expected that several months will pass before enforcement comes into focus,” said Omer Tene, VP and chief knowledge officer at the International Association of Privacy Professionals. “2018 wasn’t even a full year for GDPR.”

“Ultimately, regulators and courts will have to decide what is the right balance between individuals’ privacy concerns and businesses’ interest to pursue data-driven innovation,” said Omer Tene, VP and chief knowledge officer at the International Association of Privacy Professionals.


This session will explore jurisdiction in an increasingly post-Westphalian paradigm. Ideas of territorial jurisdiction are being tested (and brought into question) in an increasingly connected online world.

What and whom should the Internet forget? Who should decide? Can search engines be trusted as the guardians and censors of the online world? How are individual interests best protected on the Internet? How can we strike the balance between privacy and freedom of expression online? Has privacy won the day in the Internet information age?

After a lengthy legislative process, the GDPR is finally ready. As the most significant overhaul of data privacy laws in Europe in twenty years, it will have a profound impact on Silicon Valley technology companies offering online services in Europe. The recently announced Privacy Shield will affect most US organisations that receive personal information from Europe.

This talk will introduce the basic concepts of European data protection and privacy law and explore the fundamental differences between the European and US approaches, including by examining recent events (e.g., the Schrems case, the EU/US Privacy Shield, and the General Data Protection Regulation).


"Half the time it's, 'Oh no, Facebook didn't take something down, and we think that's terrible; they should have taken it down,'" says Daphne Keller, a law professor at Stanford University. "And the other half of the time is, 'Oh no! Facebook took something down and we wish they hadn't.'"

Rebecca Tushnet, professor at Georgetown University law school, and Andrea Matwyshyn, Professor of Law at Northeastern University, discuss one lawsuit against Google, Facebook and Twitter, brought by the families of the victims of the Pulse nightclub shooting in Orlando, and another suit against Google for unlawfully censoring its workers. They speak with June Grasso on Bloomberg Radio’s "Bloomberg Law."

Full episode of "Bloomberg West." Guests include Daphne Keller, director of intermediary liability at the Center for Internet and Society at Stanford Law School, David Kirkpatrick, Techonomy's chief executive officer, Radu Rusu, chief executive officer and co-founder of Fyusion, Crawford Del Prete, IDC's chief research officer, and Daniel Apai, assistant professor at The University of Arizona.