Frequently asked questions on internet intermediary liability


By (APC), May 2014

1. What is an “internet intermediary”?

An internet intermediary is an entity that provides services enabling people to use the internet. The many different kinds of internet intermediaries fall into two broad categories: “conduits” and “hosts”. “Conduits” are technical providers of internet access or transmission services. Conduits do not interfere with the content they transmit, other than the automatic, intermediate or transient storage needed for transmission. “Hosts” are providers of content services – for instance, online platforms and storage services.

Some examples of internet intermediaries, in no particular order, include:

Internet service providers (ISPs) – companies that provide internet services such as email. Many internet access providers (IAPs), as well as network operators, are also ISPs, and the two terms are often used interchangeably.

2. What is “internet intermediary liability”?

“Internet intermediary liability” means the legal responsibility (“liability”) of intermediaries for illegal or harmful activities performed by users through their services. “Liability” means that intermediaries have an obligation to prevent unlawful or harmful activity by users of their services. Failure to do so may result in legal orders compelling the intermediary to act, or may expose the intermediary to civil or criminal legal action.

Intermediary liability occurs where governments or private litigants can hold technological intermediaries such as ISPs and websites liable for unlawful or harmful content created by users of those services. Intermediary liability can occur in a vast array of circumstances, around a multitude of issues including: copyright infringements, digital piracy, trademark disputes, network management, spamming and phishing, “cybercrime”, defamation, hate speech, child pornography, “illegal content”, offensive but legal content, censorship, broadcasting and telecommunications laws and regulations, and privacy protection.

In performing these roles, intermediaries cannot reasonably be expected to be aware of all the content transmitted, stored or referenced on their networks, which changes constantly, automatically and rapidly. Because of this, some argue that intermediaries should not be held liable for content on their networks created by third parties.

In general, intermediary liability is known as “secondary” or “indirect” liability because it does not relate directly to the intermediary’s own conduct – for example, if an intermediary spies on its own users, in violation of its legal obligations regarding the interception and security of communications. However, in some cases, such as copyright and privacy, laws blur this distinction by adopting a broad definition of “direct” infringement that can cover even the mere processing of illegal content. This has created unintended consequences: for example, the executives of a social network in Italy were held criminally liable for the uploading by its users of a video containing sensitive personal data without the consent of the interested party.

3. What are the main models of internet intermediary liability?

There are two main models:

“Generalist”: In this model, intermediary liability is judged according to the general rules of civil and criminal law. Under this model, which applies in most countries, intermediaries can be liable for content either because they directly contributed to the illegal activity (contributory liability) or because they indirectly contributed, in that they had the ability to control it and derived a direct financial benefit from not doing so (vicarious liability). This generalist model applies in many African countries, as well as in parts of South America, including Argentina and Peru.

“Safe harbour”: In this model, intermediaries are given a legally safe place (a safe harbour) – provided their actions stay within it, they will not be liable for user actions. This immunity from liability is subject to conditions, which can be very detailed and stringent. The conditions may be limited to one specific area of law, e.g. copyright or trademark (referred to as a “vertical” safe harbour), or designed to deal with different types of activities and liability across different areas of law (referred to as a “horizontal” safe harbour).

The existence of strong safe harbours is considered a strategic factor supporting the emergence of innovative services: it provides intermediaries with the sufficient legal certainty to conduct a wide range of activities, free from the threat of potential liability and the chilling effect of potential litigation. However, there are also concerns that overly broad safe harbours make it more difficult for others to uphold their human rights online.

4. What are the ways that internet intermediaries are involved in regulating content online?

There are five main ways that internet intermediaries are involved in regulating content online:

“Notice and takedown”: This requires intermediaries to remove content that is deemed illegal, once they have notice of it.

“Notice and notice”: This requires intermediaries to notify the creator of content deemed illegal before proceeding to any takedown.

“Notice and disconnection” (or, if no disconnection is foreseen, “graduated response”): This requires intermediaries (so far, only ISPs) to impose a series of sanctions on repeat infringers. The sanctions escalate progressively as alleged infringements are repeated and may, in extreme cases, culminate in the termination or degradation of services for those particular users.

“Filtering and monitoring”: This requires intermediaries to take measures to prevent the repetition of violations, including facilitating the identification of users, and identifying, removing or blocking illegal material.

“Contract regulation”: This enables intermediaries to regulate content through their own contractual terms and conditions (commonly known as “Terms of Service” or “ToS”). ToS create private regimes of content regulation that are self-enforceable, operating independently from the applicable public law framework. In the field of copyright, ToS are increasingly used to implement agreements between the content industry and ISPs, whereby ISPs adopt so-called “voluntary measures” to deter infringements. These are often an integral part of graduated response regimes.

5. How are users affected by internet intermediary liability?

Internet users may be affected by internet intermediary liability in both positive and negative ways. On one hand, the quality and variety of products or services available to them may be restricted or more expensive if there is a lack of competition and innovation in the intermediary market because intermediaries are unwilling to risk liability for service innovation. On the other hand, extending a law enforcement role to intermediaries poses risks to the rights to freedom of speech, privacy and due process, especially if intermediaries adopt restrictive terms and conditions on content and more intrusive procedures for the management of content in their spaces. In addition, users’ human rights are also at risk if intermediaries do not take down content that violates human rights, but the legal system cannot offer prompt and effective remedies against the violation of individual rights. This is especially so where the private contractual regimes established by the intermediaries are inadequate.

6. Do current international best practices offer examples of safeguards in the design of intermediary liability regimes?

Yes. Our research identified the need for safeguards to prevent the misuse of notice and takedown procedures by complainants, including recommending sanctions for misrepresentation, and judicial oversight for requests for access to personal data, with limited exceptions. While best practices can be identified, these examples relate mostly to a narrow range of rights-related complaints (predominantly intellectual property rights protection). More work is needed to determine best practices for safeguarding other areas, such as intermediary liability for violence against women online.

The research also identified the dangers of abuse of “safe harbours” by intermediaries and states. For this reason, it is recommended that lawmakers incorporate “good faith” requirements as a condition for intermediaries to receive immunity, and define some limits to the space within which self-regulation can occur.