Intermediary Liability


Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors that affects innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.

In the United States, the core doctrines of Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.

To contribute to this important policy debate, CIS studies international approaches to intermediary obligations concerning users’ copyright infringement, defamation, hate speech, and other vicarious liabilities, immunities, or safe harbors; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that will protect innovation, freedom of expression, privacy, and other user rights.

Joan Barata is an international expert in freedom of expression, freedom of information, and media regulation. As a scholar, he has spoken and conducted extensive research in these areas, working and collaborating with various universities and academic centers from Asia to Africa and America, authoring papers, articles, and books, and addressing specialized parliamentary committees.

Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.

Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability Fellow with Stanford CIS. He is also a Senior Lecturer and Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin.


"If you are distressed by anything external, the pain is not due to the thing itself, but to your estimate of it; and this you have the power to revoke at any moment." ~ Marcus Aurelius

The last post in this series observed that optimal policy thinking aims at allowing people sufficient control over technologies so that they may use them to apply their own capacities and, in that process, find meaning. This post explores that point further; in particular, how emphasis on technology, rather than people, falls short of that aim.

In response to a global backlash in the wake of Brexit and the 2016 US presidential election, dominant tech companies are scrambling to stave off increased governmental regulation of their information handling practices. It is an attractive strategy for them to cut deals with regulators whereby they agree to follow privately negotiated rules in lieu of command-and-control regulation. With respect to content moderation, this form of hybrid public-private regulation could undermine First Amendment limits on state action that are designed to protect individual citizens from official censorship. This post explores the role of anti-piracy voluntary agreements in normalizing hybrid public-private speech regulation on the Internet.

The European Parliament recently took the final vote on the new Audiovisual Media Services Directive (AVMSD). The text, which can be considered final, now awaits only the formal approval of the Council. The AVMSD will become the first legally binding instrument to impose new, and extensive, responsibilities for content regulation on privately owned Internet platforms. These platforms will be required to establish and apply detailed rules in areas such as hate speech, child pornography, the protection of children’s development, and the prevention of terrorism.

Over the course of the last decade, in response to significant pressure from the US government and other governments, service providers have assumed private obligations to regulate online content that have no basis in public law. For US tech companies, a robust regime of "voluntary agreements" to resolve content-related disputes has grown up on the margins of the Digital Millennium Copyright Act (DMCA) and the Communications Decency Act (CDA). For the most part, this regime has been built for the benefit of intellectual property rightholders attempting to control online piracy and counterfeiting beyond the territorial limits of the United States and without recourse to judicial process.


Americans have long been ignoring European data protection law, but it has not been ignoring us. Last year’s so-called “right to be forgotten” case from the EU’s highest court let people remove links about themselves from Google’s search results — and regulators insist that the links must disappear from U.S. search results, too.

In this work, I discuss the tension between gift and market economy throughout the history of creativity. For millennia, the production of creative artifacts has lain at the intersection between gift and market economy. From the time of Pindar and Simonides – until Romanticism commenced a process leading to the complete commodification of creative artifacts – market exchange models ran parallel to gift exchange. From Roman amicitia to the medieval and Renaissance belief that “scientia donum dei est, unde vendi non potest,” creativity has been repeatedly construed as a gift.

The recent leak of a secret chapter of the Trans-Pacific Partnership’s Investor-State Dispute Settlement system (ISDS) is getting many people on both the left and the right upset. Left-wingers don’t like a system in which corporations can push back against government regulations. Right-wingers don’t like a system where U.N.-affiliated tribunals can overturn U.S. law.

Within the context of the Centre for Copyright and New Business Models in the Creative Economy (CREATe) research scope, this literature review investigates the current trends, advantages, disadvantages, problems and solutions, opportunities and barriers in Open Access Publishing (OAP), and in particular Open Access (OA) academic publishing. This study is intended to scope and evaluate current theory and practice concerning models for OAP and engage with intellectual, legal and economic perspectives on OAP.

"“When lawmakers create new rules that have never been tested by courts – like Australia's new law or the rules proposed in the UK's White Paper – and then tell platforms to enforce them, we can only expect that a broad swathe of perfectly legal speech is going to disappear,” said Daphne Keller, director of intermediary liability at the Stanford Centre for Internet and Society.

"The issue highlights the pressure on many internet platforms to attract customers by presenting a critical mass of listings to demonstrate scale, says Daphne Keller, director of intermediary liability at Stanford Law School’s Center for Internet and Society. She added that inactive or false listings don’t produce a good customer experience either. “You don’t want to have a bunch of listings in there that turn out to be dead ends,” Ms. Keller said. A Care.com spokeswoman declined to comment on Ms. Keller’s assessment."

"Calls for tighter content moderation policies have not come without concern. Some lawyers, including Annemarie Bridy, professor of law and affiliate scholar at Stanford University Center for Internet and Society, said tightly regulating speech on platforms can lead to over-censorship, or confusion about where to draw the line.

"“Its role in enabling a certain kind of technical innovation is unambiguous,” says Daphne Keller at Stanford Law School’s Center for Internet and Society. “It made it possible for investors to get behind companies who were in the business of transmitting so much speech and information that they couldn't possibly assess it all and figure what was legal or illegal.”

When you give sites and services information about yourself, where does it go? Who else will get hold of it, and what will they use it for? The recent revelations about Cambridge Analytica's acquisition of data about tens of millions of Facebook users without their knowledge or consent have prompted renewed interest in how data about us gets shared, sold, used, and misused -- well beyond what we ever expected. Join us for a SLATA/CIS lunchtime conversation with three experts from Stanford’s Center for Internet and Society as we discuss the legal and policy implications of the Cambridge Analytica scandal and responses from Congress and courts. How can we prevent this from happening again? What new problems might we create through poorly-crafted legal responses?

Ads are the lifeblood of the web -- but the legal challenges have never been greater. On May 25, Europe's privacy regime is overhauled for the first time in 20 years, and publishers, advertisers and ad tech companies alike are confused about what it all means. Struan Robertson, a product counsel working in Google's ads business for the past seven years, gives his perspective on the legal challenges facing the industry.

Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes.


The question of what responsibility should lie with Internet platforms for the content they host that is posted by their users has been the subject of debate around the world. Politicians, regulators, and the broader public seek to navigate policy choices to combat harmful speech, choices that have implications for freedom of expression, online harms, competition, and innovation.

The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules seem onerous and insane, creating huge liability for internet platforms that fail to do the impossible.

Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?