The European Commission recently released a Communication on Tackling Illegal Content Online. It concludes that platforms have a responsibility to develop filtering technologies to identify illegal content ranging from copyright infringement to hate speech.

In some cases, it said, “fully automated deletion or suspension of content” with no human review is appropriate.

This recommendation seems to rest on dangerously flawed assumptions about both available filtering technology and real-world notice and takedown. I wrote about problems with the first assumption here. Algorithms cannot assess what speech is legal (and platform employees don’t do a great job either). Mandatory filters would lead to removal of lawful, important expression.

The Commission’s second assumption is that notice and takedown procedural protections, like counter-notice from users whose content was deleted, can correct for over-removal. But counter-notice doesn’t fix most over-removal problems, even under current laws. It certainly could not offset the avalanche of improper removals that would result from the Commission’s plan.

Annemarie Bridy and I discussed the efficacy of counter-notice in a 2016 filing with the US Copyright Office. Our submission addressed counter-notice under the US Digital Millennium Copyright Act, which includes detailed counter-notice provisions. Relevant parts of our submission are reproduced below. A key takeaway is that while improper or questionable notices are common (one older study found 31% of claims questionable, another found 47%), reported rates of counter-notice were typically below 1%. That’s over 30 legally dubious notices for every one counter-notice.

Another study, which came out after we finished our filing, offers more up-to-date numbers. Researchers found questionable notices to Google web search in 28% of cases, of which 4% simply identified the wrong content. (Google later reported that a stunning 99.95% of web search notices are for pages that were never in web search in the first place.) Researchers also said that, outside of the major platforms, most smaller OSPs reported taking down content “even when they are uncertain about the strength of the underlying claim.” But many of them, including those processing thousands of removals a year, never got counter-notices at all.

Similar data is not available, to the best of my knowledge, for removals based on non-copyright claims like defamation or hate speech. Anecdotally, though, I have heard that counter-notice for such removals in Europe is rare, even among Internet users who appear to have good legal arguments in support of their expression.

The Commission’s proposal, if enacted into law, would represent a drastic shift in Internet information policy. As I’ll discuss in other posts, it would entrench the economic positions of the current major platforms, make life difficult or impossible for smaller ones, and erode the distinctions between big platforms and the state. It would encourage or mandate content removals using filters that are guaranteed to misapply the law. That is not a change to make without an incredibly compelling reason, including a strong factual basis. Real-world research, like that reproduced here, is essential.

From Annemarie Bridy and Daphne Keller’s submission to the U.S. Copyright Office 512 Study, April 1, 2016:

Question 16. How effective is the counter-notification process for addressing false and mistaken assertions of infringement?

We have not seen studies or significant public data on this question, though there will be useful information in the study just published by Urban et al.[1] Based on our own experience and discussion with other practitioners, we believe that it is rare for users to file counter-notices. Counter-notices certainly appear to be far less common than the improper removals that they are intended to counteract.

A handful of companies track counter-notices in their transparency reports. These companies don’t appear to aggregate the data over time and, in some cases, they track it in non-parallel categories, making comparison difficult. For example:

Twitter’s transparency report includes counter-notices, though seemingly only for tweets, not for Vine or Periscope material. Twitter’s latest semiannual report says that 56,971 tweets were withheld pursuant to DMCA notice, and the company received counter-notices for 65 of them (0.11%), all of which led to the restoration of the targeted content.[2]

Tumblr’s June 2015 report states that of the 77,357 posts that were removed pursuant to takedown notices, 0.08% were restored using the counter-notice process.[3] Tumblr received additional counter-notices that it did not honor, though it is unclear whether those counter-notices were rejected based on substantive copyright problems or formal noncompliance with section 512(g).

GitHub reports some counter-notices, but counts them in the same category as retracted DMCA notices, reporting 17 counter-notices or retractions out of 258 notices, each of which may have identified numerous alleged infringements.[4]

Automattic reports, for its latest data set, that “less than 0.6% of the DMCA notices we received were later subject to a counter-notice; of those cases, we’re aware of further action being taken by the original DMCA complainant only once.”[5]

These tiny percentages are dwarfed by the portion of dubious DMCA removal requests that researchers have identified. (See studies reported in Appendix B.) Even if the studies are off by an order of magnitude in their estimates, the number of potentially mistaken or malicious notices still vastly exceeds the number of counter-notices.[6]

Importantly, the companies issuing detailed transparency reports may be unusual among small intermediaries in their commitment to protecting users and offering them a chance to counter-notice. It is unclear whether the thousands of other companies that have registered DMCA agents with the Copyright Office assume similar costs and inconveniences to provide a viable counter-notice process.

The ineffectiveness of the DMCA counter-notice process may be attributable to a number of causes:

Intermediaries do not have significant incentive to bother notifying a user when her content has been removed based on a DMCA notice. For starters, the section 512(g) counter-notification process refers only to 512(c) hosting providers, not to other OSPs (e.g., search engines). Even for hosts, the only reason to offer counter-notice is to avoid liability to a user for improperly removing that user’s lawful content. But few believe that hosts face any meaningful risk of such liability, regardless of section 512(g). “Wrongful removal” claims against intermediaries in US courts have consistently failed, based on contractual and other defenses.[7]

It is not clear how many users actually receive notice that their content has been removed. Not all hosting services tie user-generated content to user email addresses or other contact information. For those that do, users may supply email addresses from temporary or old and unmonitored accounts. It is accordingly difficult to know how many users really find out when their posts have been removed.

The counter-notice process is intimidating. Consent to jurisdiction, which is a required element of a counter-notice under section 512(g)(3)(D), is a meaningful legal concession, and is particularly problematic for users who do not reside in the United States. In addition, the section 512(g)(3)(C) statement under penalty of perjury, while analogous to the similar requirement for the notifier, is likely to be far more intimidating to individual users responding without benefit of counsel. And the cost of error for a user who is mistaken about her copyright defenses is much higher than the cost of error for a copyright owner who is mistaken about her claims.

Collectively, these factors constitute a meaningful deterrent to counter-notice. The point we make here is not that Congress lacked the intent or policy basis for establishing the detailed hurdles for counter-notifiers in section 512(g). The problem is that, because counter-notice has not been an effective corrective for wrongful notices, section 512(g) alone cannot adequately protect Internet users from having their legal speech removed. For that reason, the other procedural protections for users in section 512, such as form-of-notice requirements and declarations of good faith by copyright owners, play a more important role than Congress may have foreseen. Robust interpretations and enforcement of those protections by the courts and the Copyright Office are critical to maintain the DMCA’s carefully structured balance. A more detailed discussion of these other protections is included above in response to Question 12.

In (weak) defense of section 512(g), the transparency and expectation of procedural fairness created by the counter-notice process may be acting as a deterrent for some bad faith removal requests. It is possible, however, that the value of counter-notice is far exceeded by the value of public transparency about particular removals, such as those posted through Lumen or noted by the OSP on the page from which content has been removed. This transparency allows the identification of erroneous DMCA notices to be crowd-sourced across interested individuals online. To our knowledge there are no public datasets that would allow us to test this hypothesis.

Question 17. How efficient or burdensome is the counter-notification process for users and service providers? Is it a workable solution over the long run?

Our response to Question 16 above addresses the burden for users. For OSPs, providing a counter-notice process at any kind of scale is unavoidably somewhat burdensome. That said, requiring such a process is appropriate given the impact of wrongful removals on individual speakers. As stated above in response to Questions 1 and 12, the DMCA’s counter-notice provision is a key to the balance Congress intended to establish through the legislation. It functions as a visible and concrete, even if largely symbolic, acknowledgment of the importance of users’ expressive rights in the digital environment. Its inclusion brings some aspects of due process to the extrajudicial DMCA removal system.

As an operational matter, the counter-notice process requires a commitment of personnel and internal tracking systems—whether ad hoc or specially built by the company. Tracking multiple communications with copyright owners and counter-notice providers can be difficult in practice, since parties may send multiple communications by different channels, without including consistent identifying information. At small scale, this is a manageable problem. At large scale, it requires bespoke internal tools, significant time commitment by employees, or both.

Removing and restoring content also likely require time commitment from engineers, whose time may be, in the company’s eyes, more valuably spent developing the company’s products. Lawyers who easily persuade their clients to dedicate engineering resources to DMCA removals may have a harder time explaining why restoration on counter-notice is a high priority. This is largely because the value of the safe harbors for OSPs lies more or less exclusively in their limitation of liability vis-à-vis copyright owners, not users. Designing and implementing a counter-notice process may be particularly burdensome for small companies, since they are unlikely to have built any dedicated tools for DMCA compliance.

As discussed in our response to Question 16 above, restoration on counter-notice alone is not adequate to correct for the pattern of over-removal under the DMCA. That said, it is an important element of online speakers’ procedural protections.

[6] This calculation assumes that the rate of counter-notice for the data sets discussed in Appendix B is similar to the rates reported in the transparency data discussed above. We see no reason to expect otherwise.