Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

I don’t write any of this to defend the tweet: it was odious, unpresidential, and betrays an animus towards the press that is terrifying to see in any government official – and especially the Chief Executive of the United States of America. But inappropriate, disgraceful, and disturbing though it is, it was still just speech, and calls to suppress speech are always alarming regardless of who is asking for it to be suppressed or why.

Some have tried to defend these calls by arguing that suppressing speech is ok when it is not the government doing the suppressing. But the reason official censorship is problematic is because it drives away the dissenting voices democracy depends on hearing. Which is not to say that all ideas are worth hearing or critical to self-government; the point is that protecting opposing voices in general is what allows the meritorious ones to be able to speak out against the powerful. There is no way to split the baby so that only some minority expression gets protected: either all of it must be, or none of it will be. If only some of it is, then the person who has the power to decide which will be protected and which will not has the power to decide badly.

Consider how Trump himself would use that power. Given, as we see in his tweet, how much he wants to marginalize voices that speak against him, we need to make sure this protection remains as strong as possible, even if it means that he, too, gets the benefit of it. There simply is no way to punish one man’s speech, no matter how troubling it may be, without opening the door to better speech similarly being suppressed.

As a private platform, Twitter may of course choose to delete this or any other Trump tweet (or any tweet or Twitter account at all) for any reason. We’ve argued before that private platforms have the right to police their services however they choose. But we have also seen how when speech is eliminated from a forum, the forum is often much poorer for it. Deciding to suppress speech is not something we should be too quick to encourage, or demand. Not even when the speech is provocative and threatening, because so much important, valid, and necessary speech can so easily be labeled that way. As Justice Holmes noted, “Every idea is an incitement.” In other words, it’s easy to justify suppressing all sorts of speech, including valid and important speech, if any viewpoint aggressively at odds with any other can be eliminated because of the challenge it presents. Courts have therefore found that speech, even speech promoting the use of force or lawlessness, may only be censored when “such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Given that even a KKK rally was found not to meet this description, these requirements for likely imminence of harm are steep hurdles that Trump’s tweet is unlikely to clear.

The truth may well be, as many fear, that Trump would actually like people to beat up journalists. It may also be true that he has some bad actors among his followers who are eager to do so. But even if people do assault journalists, it won’t be because of this tweet. It will be because Trump, as president, supports the idea. He’ll support it whether or not this tweet is deleted. After all, it’s not as though deleting the tweet will make him change his view. And it’s that view that’s the real problem to focus on here.

Because Trump has far more powerful means at his disposal to act upon his antipathy towards the media than his Twitter account affords. In fact, better that he should tweet his drivel rather than act on this malevolence in a way that actually does do direct violence to our free press. Especially because, in an administration so lacking in transparency, his tweets at least help let us know that this animus lurks within. Armed with this knowledge we can now be better positioned to defend those critical interests his presidency so threatens. Painful though it is to see his awful tweets, ignorance on this point would in no way have been bliss.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter, one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by the terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend.

The problem is, like the First Amendment itself, speech protections only work as speech protections when they always work. When exemptions can be found here and there, none of these protections remain effective, and the speech of those who were counting on them is chilled, because no one can be sure whether the speech will ultimately be protected. In the case of Section 230, that chilling arises because if platforms cannot be sure whether they will be protected from liability in their users’ speech, then they will have to assume they are not. Suddenly they will have to make all the censoring choices with respect to their users’ content that Section 230 was designed to prevent, just to avoid the specter of potentially crippling liability.

One of the points we emphasized in our brief was how such an outcome flouts what Congress intended when it passed Section 230. As we said then, and will say again as many times as we need to, the point of Section 230 is to encourage the most beneficial online speech and also minimize the worst speech. To see how this dual-purpose intent plays out we need to look at the statute as a whole, beyond the part of it that usually gets the most attention, Subsection (c)(1), which is about how platforms are immune from liability manifest in their users’ speech. There is another, equally important part of the statute, at Subsection (c)(2), that immunizes platforms from liability when they take steps to minimize harmful online content on their systems. This subsection rarely gets attention, but it’s important not to overlook, especially as people look at the effect of the first subsection and worry that it might encourage too much “bad” speech. Congress anticipated this problem and built in a remedy as part of a balanced approach to encourage the most good speech and least bad speech. The problem with now holding online services liable for bad uses of their platforms is that it distorts this balance, and in distorting this balance undermines both these goals.

We used the cases of Barnes v. Yahoo and Doe 14 v. Internet Brands to illustrate this point. Both are cases where the Ninth Circuit did make exemptions and found Section 230 not to apply to certain negative uses of Internet platforms. For instance, in Barnes Section 230 was actually found to apply to the part of the claim directly relating to the speech in question, which was a good result, but the lawsuit also included a promissory estoppel claim, and the Court decided that because that claim was not directly related to liability arising from content it could go forward. The problem there was that Yahoo had separately promised to take down certain content, and so the Court found it potentially liable for not having lived up to its promise. But as we pointed out, the effect of the Barnes case is that platforms now never promise to take content down. Even though Congress intended for Section 230 to help Internet platforms perform a hygiene function that keeps the Internet free of the worst content, by discouraging platforms from going the extra mile the decision has instead had the opposite effect from the one Congress intended. That’s why courts should not continue to find reasons to limit Section 230’s applicability. Even if they think they have good reason to find one, that very justification itself will be better advanced when Section 230’s protection can be most robust.

We also pointed out that in terms of the other policy goal behind Section 230, to encourage more online speech, divining exemptions from Section 230’s coverage would undermine that goal as well. In this case the plaintiffs want providers to have to deny terrorists the use of their platforms. As a separate amicus brief by the Internet Association explained, platforms actually want to keep terrorists off and go to great lengths to try to do so. But as the saying goes, “One man’s terrorist is another man’s freedom fighter.” In other words, deciding who to label a terrorist can often be a difficult thing to do, as well as an extremely political decision to make. It’s certainly beyond the ken of an “intermediary” to determine — especially a smaller, less capitalized, or potentially even individual one. (Have you ever had people comment on one of your Facebook posts? Congratulations! You are an intermediary, and Section 230 applies to you too.)

Even if the rule were that a platform had to check prospective users’ names against a government list, there are significant constitutional concerns, particularly regarding the right to speak anonymously and the prohibition against prior restraint, that arise from having to make these sorts of registration denial decisions this way. There are also often significant constitutional problems with how these lists are made at all. As the amicus brief by EFF and CDT also argued, we can’t create a system where the statutory protection platforms depend on to be able to foster online free speech is conditioned on coercing platforms to undermine it.

We often talk about how protecting online speech requires protecting platforms, like with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp let users post anonymously: often that’s the only way they will feel comfortable posting reviews candid enough to be useful to the people who depend on those sites to help them make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so that they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills the speech the First Amendment is designed to foster generally, by making the critical anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California State appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and California State Constitution required platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

More on the First Amendment problems with DMCA Section 512
http://www.digitalagedefense.org/wp/2017/02/23/more-on-the-first-amendment-problems-with-dmca-section-512/
Thu, 23 Feb 2017 17:11:52 +0000

Over at Techdirt there’s a write-up of the latest comment I submitted on behalf of the Copia Institute as part of the Copyright Office’s study on the operation of Section 512 of the Digital Millennium Copyright Act. As we’ve told the Copyright Office before, that operation has had a huge impact on online free speech. (Those comments have also been cross-posted here.)

In some ways this impact is good: providing platforms with protection from liability in their users’ content means that they can be available to facilitate that content and speech. But all too often and in all too many ways the practical impact on free speech has been a negative one, with speech being much more vulnerable to censorship via takedown notice than it ever would have been if the person objecting to it (even for copyright-related reasons) had to go to court to get an injunction to take it down. Not only is the speech itself more vulnerable than it should be, but the protection the platforms depend on ends up being more vulnerable as well because platforms must risk it every time they refuse to act on a takedown notice, no matter how invalid that notice may be.

Our earlier comment pointed out in some detail how the current operation of the DMCA has been running afoul of the protections the First Amendment is supposed to afford speech, and in this second round of comments we’ve highlighted some further deficiencies. In particular, we reminded the Copyright Office of the problems with “prior restraint,” which the First Amendment also prohibits. Prior restraint is what happens when speech is punished before there has been any adjudication to prove that it deserves to be punished. The reason the First Amendment prohibits prior restraint is that it does no good to punish speech, such as by removing it, only to discover later that the First Amendment would otherwise have protected it: once the speech has been removed, the damage will already have been done.

Making sure that legitimate speech cannot be removed is why we normally require the courts to carefully adjudicate whether removal can be ordered before it is allowed. But with the DMCA there is no such judicial check: people can send demands for all sorts of content to be removed, even content that isn’t actually infringing, because there is little to deter them so long as Section 512(f) continues to have no teeth. Instead platforms are forced to treat every takedown notice as a legitimate demand, regardless of whether it is or not. Not only does this mean they need to delete the content but, in the wake of some recent cases, it seems they also must hold each allegation against their user, regardless of whether it was valid, and then cut that user off from their services once the user has accrued too many such accusations, again regardless of whether they were valid.

Tech policy in the time of Trump (cross-post)
http://www.digitalagedefense.org/wp/2016/12/17/tech-policy-in-the-time-of-trump/
Sat, 17 Dec 2016 18:41:56 +0000

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, they will inevitably touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

The problem here is that our previous decades of relative political stability have allowed attitudes to become a bit too casual about the importance of free speech as an escape valve against tyranny. But now that the need to speak out is so critical for so many, perhaps we will all be a little less glib about it.

One area where we need to be less glib is copyright. While I would not be surprised to see Trump do something damaging in this space (probably in furtherance of Trump TV), copyright policy has always cut across party lines, and saner policy has in the past had the support of several GOP members of Congress, some of whom may still be in office. The silver lining here is that now that the need to preserve free speech is so apparent, it may become easier to point out how copyright policy interferes with it. For instance, because President Trump, or anyone supporting him in government or otherwise, can so easily cause criticism of him to be disappeared simply by sending a takedown notice, or have people cut off from their online services with mere allegations of infringement (as they effectively could right now thanks to recent jurisprudence on DMCA Section 512(i)), opposing voices are extremely vulnerable. As the opposition party, Democrats in particular need to start realizing how IP rights in general (copyright, but also trademark and other quasi-IP monopolies like publicity rights) have been providing censors with enormous leverage over other people’s speech. Now that these levers can be used against them and their constituencies, perhaps they will be more likely to see the problem and finally push back against it (or at least stop actively trying to make the situation even worse).

Mass surveillance/encryption. The problem with the policy debates on mass surveillance to date is that they have tended to get bogged down by the assumption that the government was inherently good, and that all the spying it did was in furtherance of protecting its people. Until now many of those who disagreed with that assumption have largely been marginalized. Now, however, it appears that millions of people will have serious doubts about the motivations of the chief executive. It is therefore going to be much harder for surveillance advocates to push the “trust us” argument when the incoming government has already indicated its strong desire to punish its internal enemies. Libertarians were already alarmed by the power of the surveillance state, and more Democrats may start seeing things their way pretty soon. The opportunity here is that there is now a new framing to help people see what a significant constitutional violation and danger this surveillance represents.

Encryption raises the same issues, and, as with mass surveillance, members of Congress may also soon come to the painful realization of how important it is for them and the public to have robust, workable, non-backdoored encryption available. After all, as we saw with Nixon, it is not unprecedented for a President to spy on his political adversaries. But this time Trump can leverage the NSA to do it.

Net Neutrality/Intermediary immunity. There are (at least) two other policy areas where the importance of continuing to protect free speech principles remains evident. Regarding net neutrality, there’s little reason to believe Trump will have anything positive to contribute along these lines, unless he decides it is to his business advantage. But what has also become apparent from this election is the tremendous damage consolidated mass media can cause to democracy. Politics is too important to be left to just a few outlets to tell us about, yet without net neutrality that’s the situation we will be left with.

The danger posed by homogeneous media is also why bolstering the protection of internet intermediaries is so important. Their protection is what helps ensure that a diversity of voices can be heard. The unfortunate reality is that there will likely be a lot of calls by people unhappy with this election and its fallout to limit those voices, particularly those whose message is most divisive, and with them also the platforms that facilitate their speech. But it will be important to hold fast to the intermediary-shielding principles that have to date largely protected platforms from liability in their users’ content. It’s only by leaving them free to operate without fear of liability that they are most able to voluntarily refuse the most awful content and be available for the most good. Neither is the case if the government effectively takes that decision away from them with the threat of punitive law, particularly when that law will inevitably reflect the government’s own agenda regarding what it considers to be worthwhile content or not.

Internet governance. With regard to Internet governance, at least the TPP appears to be dead and with it its speech-chilling provisions. Trump claims to detest free trade treaties, and in this regard his presidency may be helpful for innovation policy, which has been poorly served by US trade representatives trying to bind the United States into secretly negotiated international trade agreements that undermine key American liberties by imposing crippling limitations and liability on tech businesses and other platforms. On the other hand, from time to time international accords are helpful and even necessary for technology businesses to continue to thrive, innovate, and employ people worldwide. (See, e.g., the former Safe Harbor rules.) Unfortunately Trump’s presidency appears to have precipitated a loss of credibility on the world stage, creating a situation where it seems unlikely that other countries will be as inclined to yield to American leadership on any further issues affecting tech policy (or any policy in general) as they may have been in the past.

The bigger concern with respect to Internet governance, however, is whether tech policy advocates from America will be taken seriously in the future if we go back on previous promises developed in thorough processes involving all stakeholders. It was already challenging enough to convince other countries that they should do things our way, particularly with respect to free speech principles and the like, but at least when we used to tell the world, “Do it our way, because this is how we’ve safely preserved our democracy for 200 years,” people elsewhere (however reluctantly) used to listen. But now people around the world are starting to have some serious doubts about our commitment to internet freedom and connectivity for all. So we will need to tweak our message to one that has more traction.

Our message to the world now is that recent events have made it all the more important to actively preserve those key American values, particularly with respect to free speech, because they are all that stands between freedom and disaster. Now is no time to start shackling technology, or the speech it enables, with external controls imposed by other nations to limit it. Not only can the potential benevolence of these attempts not be presumed, but we are now facing a situation where it is all the more important to ensure that we have the tools to enable dissenting viewpoints to foment viable political movements sufficient to counter the threat posed by the powerful. This pushback cannot happen if other governments insist on hobbling the Internet’s essential ability to broker these connections and ideas. It needs to remain free in order for all of us to be as well.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g). As explained in Section III.B this mechanism is not effective for restoring wrongfully removed content and is little used. But it is worth taking a moment here to further explore the First Amendment harms wrought upon both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2] Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically. Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech was subject to legal challenge. The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse this speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5] Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone chooses to assert an infringement claim, no matter how illegitimate or untested that claim may be. Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose. The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests. Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegation that never need be tested in a court of law. The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-a-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they are insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements. A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit. But at least one service provider lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in court in a manner consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process. These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

[5] See McIntyre, 514 U.S. at 341-42 (1995) (“The decision in favor of anonymity may be motivated by fear of economic or official retaliation, by concern about social ostracism, or merely by a desire to preserve as much of one’s privacy as possible. Whatever the motivation may be, at least in the field of literary endeavor, the interest in having anonymous works enter the marketplace of ideas unquestionably outweighs any public interest in requiring disclosure as a condition of entry.”).

[6] Compare Fed. R. Civ. P. 45(a)(1)(A)(ii) (“Every subpoena must … state the title of the action and its civil-action number.”), with 17 U.S.C. § 512(h) (lacking any similar requirement or other mention that the subpoena be predicated on a commenced civil action). Note that many jurisdictions explicitly forbid pre-litigation discovery. See, e.g., Cal. Code of Civ. Proc. 2035.010(b) (“One shall not employ the procedures of this chapter for the purpose … of identifying those who might be made parties to an action not yet filed.”). Many jurisdictions further require careful testing of a plaintiff’s claims before stripping Internet speakers of their anonymity. See, e.g., Krinsky v. Doe, 72 Cal.Rptr.3d 231, 241-246 (discussing standards for determining whether a plaintiff can be allowed to unmask an anonymous speaker).

[9] The abusive practices of many extortionate copyright plaintiffs illustrate why judicial oversight is required before Internet users are forced to be stripped of their privacy protection. See, e.g., AF Holdings, LLC v. Does 1-1058, 752 F. 3d 990, 992 (D.C. Cir. 2014) (describing the affairs of copyright plaintiffs who built a business on demanding money from people they discovered via subpoenas to pay settlements to avoid litigation, despite the putative plaintiffs not having a valid copyright to sue upon).

[11] Id. The court in this case also required the service provider to terminate users regardless of the impact on those users of being forced to exist in the modern world without broadband Internet connectivity. Even to the extent that this holding was drawn from a fair reading of the statute, while perhaps in the 20th century the consequences of losing Internet access were negligible, in the 21st century we know they are not. There may not be many other options for broadband access available to terminated users, and the cumbersome nature of the DMCA, combined with expansive theories of secondary liability, does little to encourage investment by new market entrants.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern. Invalid takedown notices are most certainly a problem,[1] and the reason is that the system causes them to be a problem. As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one, because takedown notice senders can simply point to content they want removed and use the threat of liability as the gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), notice senders can do this without fear of judicial oversight.[2] But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability. Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3] The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or how to use them. Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored. All of these complications are significant deterrents to users being able to effectively defend their own content, content that will already have been censored (these measures only allow content to be restored after the censorship damage has been done).[4] Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk reviewing and rejecting.[5] Given the enormity of this risk, however, provider review cannot remain the sole stopgap measure to keep this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.” Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing. The entire notion that a service provider could be liable simply for knowing where information resides stretches U.S. copyright law beyond recognition. That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6] Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not. As discussed in Section II.B above, there is no way for the service provider to definitively know.

[2] See Rossi v. Motion Picture Association of America, 391 F.3d 1000, 1004 (9th Cir. 2004) (finding that “the ‘good faith belief’ requirement in subsection 512(c)(3)(A)(v) encompasses a subjective, rather than objective standard.”). With regard to Question #28, this standard is a very low bar for a takedown notice sender to hurdle, and it has made effective redress for people whose speech has been wrongfully removed all the more elusive.

[4] The problem of takedown abuse is particularly acute during campaign seasons, when politically-motivated takedown requests can suppress the most effective and cheapest means of communicating political messages for which timeliness is of the essence. See Center for Democracy and Technology, Campaign Takedown Troubles: How Meritless Copyright Claims Threaten Online Political Speech (Sept. 2010), https://www.cdt.org/files/pdfs/copyright_takedowns.pdf.

[5] The “takedown-and-staydown” regimes contemplated by Question #10 would only exacerbate the effects of this censorship.

[6] In other words, sharing a link to content is not the same thing as making a copy of that content.

Question #1 asks whether Section 512 safe harbors are working as intended, and Question #5 asks the related question of whether the right balance has been struck between copyright owners and online service providers. To the extent that service providers have been insulated from the costs associated with liability for their users’ content, the DMCA, with its safe harbors, has been a good thing. But the protection is all too often too complicated to achieve, too expensive to assert, or otherwise too illusory for service providers to be adequately protected.

Relatedly, Question #2 asks whether courts have properly construed the entities and activities covered by the safe harbor, and the answer is not always. But the problem here is not just that they have sometimes gotten it wrong but that there is too often the possibility for them to get it wrong. Whereas under Section 230 questions of intermediary liability for illegality in user-supplied content are relatively straightforward – was the intermediary the party that produced the content? If not, then it is not liable – when the alleged illegality in others’ content relates to potential copyright infringement, the test becomes a labyrinthine minefield that the service provider may need to endure costly litigation to navigate. Not only is ultimate liability expensive but even the process of ensuring that it won’t face that liability can be crippling.[1] Service providers, and investors in service providers, need a way to minimize and manage the legal risk and associated costs arising from their provision of online services, but given the complexity of the current language[2] outlining the requirements for safe harbors they can rarely be so confidently assured.

[1] See, e.g., Dmitry Shapiro, UNCENSORED – A personal experience with DMCA, The World Wide Water Cooler (Jan. 18, 2012), available at https://web.archive.org/web/20120119032819/http://minglewing.com/w/sopa-pipa/4f15f882e2c68903d2000004/uncensored-a-personal-experience-with-dmca-umg (“UMG scoffed at their responsibilities to notify us of infringement and refused to send us a single DMCA take down notice. They believed that the DMCA didn’t apply. They were not interested in making sure their content was taken down, but rather that Veoh was taken down! As you can imagine the lawsuit dramatically impacted our ability to operate the company. The financial drain of millions of dollars going to litigation took away our power to compete, countless hours of executive’s time was spent in dealing with various responsibilities of litigation, and employee morale was deeply impacted with a constant threat of shutdown.”)

[2] While Section 230 requires only about 800 words to articulate its protection for service providers, with the nearly 200 cited in Section II.A merely setting forth the policy purpose of providing this protection, the DMCA is nearly five times as long, at over 4100 words.

Veoh was a video hosting service akin to YouTube that was found to be eligible for the DMCA safe harbor.[1] Unfortunately this finding was reached after years of litigation had already driven the company into bankruptcy and forced it to lay off its staff.[2] Meanwhile SeeqPod was a search engine that helped people (including potential consumers) find multimedia content out on the Internet, but it, too, was driven into bankruptcy by litigation, taking with it an important tool to help people discover creative works.[3]

History is littered with examples like the ones above of innovative new businesses being driven out of existence, their innovation and investment chilled, by litigation completely untethered from the principles underpinning copyright law. Copyright law exists solely to “promote the progress of science and the useful arts.” Yet all too frequently it has had the exact opposite effect.

The DMCA has the potential to be a crucial equalizer, but it can only play that role when the economic value of what these service providers deliver is weighed by policymakers at least as heavily as the complaints of incumbent interests whose previous business models may have become unworkable in light of digital technology. Service providers are economic engines employing innumerable people, directly and indirectly, and driving innovation forward while they deliver a world of information to each and every Internet user. We know economic harm is done to them, and to anyone, creators and consumers alike, who would have benefited from their services, when they are not protected.

But what needs careful scrutiny and testing, with reviewable data and auditable methodology, are economic arguments predicated on the assumption that every digital copy of every copyrighted work transmitted online without the explicit permission of a copyright holder represents a financial loss. It is quite a leap to assume that every instance (or even most instances) of people consuming “pirated” copyrighted works is an instance in which they would otherwise have paid the creator. For example, it tends to presume that people have unlimited amounts of money to spend on unlimited numbers of copyrighted works, and it also ignores the fact that some works may only be consumable at a price point of $0, which is something that institutions like libraries and over-the-air radio have long enabled, to the betterment of creators and the public beneficiaries of creative works alike. Furthermore, even in instances when people would be willing to pay for access to a work, copyright owners may not be offering it at any price, nor are they necessarily equitably sharing the revenues derived from creative works with the actual creators whose efforts that remuneration is supposed to reward.[4]

The DMCA does not adjust to reflect situations like these, nor does it incentivize copyright holders to correct their own self-induced market failures. On the contrary: it allows them to deprive the public of access to their works and to threaten the service providers enabling that access with extinction if they do not assist in disabling it. None of these outcomes are consistent with the goals and purpose of copyright in general, and care must be taken not to allow the DMCA to be a law that ensures them.

[4] As a general matter, economic considerations on the “rights holder” side should be framed from the perspective of creators in general. The economic interest of copyright owners and the economic interest of creators may not necessarily be the same: copyright owners may only profit from a specific work, but creators can benefit from general markets for their works, which many online service providers help them grow. It is ultimately the latter interest that copyright on the whole is intended to serve in order to keep creators incentivized to create.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute. Whereas Section 230 provides a firm immunity to service providers for potential liability in user-supplied content,[1] the DMCA conditions its protection.[2] And that condition is censorship. The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, it does so, through the notice-and-takedown system, at the expense of the very speech they exist to facilitate.

In a world without the DMCA, someone who wanted to enjoin content would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before a court would compel its removal. Thanks to the DMCA, however, they are spared both their procedural and their pleading burdens. In order to cause content to be disappeared from the Internet, all anyone needs to do is send a takedown notice that merely points to content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for the sender if they get it wrong or don’t bother.[4] Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation of them impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because their “error” may deny them the safe harbor and put them on the hook for infringement liability.[6] Although there is both judicial and statutory recognition that service providers are not in a position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in a position to police for invalid takedowns. Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship, the censorship of an Internet user’s speech predicated on mere allegation. Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent impermissible prior restraint on speech that has not yet been determined to be wrongful.[9] To the extent that the DMCA causes these critical speech protections to be circumvented, it is consequently only questionably constitutional. For the DMCA to be constitutionally valid it must retain, in its drafting and interpretation, ample protection to see that these important speech protections are not ignored.

[1]47 U.S.C. § 230(c)(1) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”).

[2] Section 230 immunizes service providers for most instances of potential liability that may arise in user-supplied content. However, it does not generally provide immunity for liability related to infringement of intellectual property. 47 U.S.C. § 230(e)(2). For situations where the alleged liability is for an infringement of copyright, the DMCA becomes the only statute operating to shield service providers from liability in their users’ content.

[5] Copyright infringement is, after all, often contextual, and a finding may hinge on the existence of license agreements or analysis of how a work was used, which the service provider will be ill-equipped to properly evaluate.

[6] It can also be financially debilitating to even have to litigate the question of infringement liability. Note, for instance, the amount of litigation Google had to face as a result of denying Ms. Garcia’s takedown notice to YouTube on the basis that it lacked a valid copyright claim, even just at the preliminary injunction stage. Garcia v. Google, 786 F.3d 733, 738-39 (9th Cir. 2015). Also note that Google was ultimately vindicated. Id. at 747.

Congress in the 1990s may not have been able to predict the growth of the Internet, but it could see the direction it was taking and the value it had the potential to deliver. We see this recognition first baked into the statutory language of 47 U.S.C. Section 230 (“Section 230”), a 1996 statute that provides unequivocal immunity for service providers that intermediate content from other users:

Congress finds the following: [that t]he rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens[;[1] that t]hese services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops[;[2] that t]he Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity[;[3] that t]he Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation[;[4] and that i]ncreasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.[5]

It was therefore the policy of the United States to, among other things, “promote the continued development of the Internet and other interactive computer services and other interactive media”[6] and “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”[7]

As the Notice of Inquiry soliciting comment for this study noted,[8] Congress was still of the same view about the importance of the Internet two years later when it passed the DMCA explicitly to help “foster the continued development of electronic commerce and the growth of the Internet.”[9] As per an accompanying Senate Report, “The ‘Digital Millennium Copyright Act of 1998’ is designed to facilitate the robust development and world-wide expansion of electronic commerce, communications, research, development, and education in the digital age.”[10] As the Report continued, Congress was going to achieve this end by protecting intermediaries, observing that, “[B]y limiting the liability of service providers, the DMCA ensures that the efficiency of the Internet will continue to improve and that the variety and quality of services on the Internet will continue to expand.”[11]

At no time since then has Congress fundamentally changed its view on the value of the Internet. Nor should it. In these nearly twenty years we have seen countless businesses and jobs added, innumerable examples of pioneering technology developed, myriad previously unimaginable markets created (including many for those in the arts and sciences to exploit economically), and enormous value returned to the economy. By protecting online service providers we have changed the world and brought the democratic promise of information and knowledge sharing to bear. It is therefore absolutely critical that we not create law that interferes with this promise. If anything, we should take this opportunity to reduce the costly friction that the more inapt portions of the existing law have been imposing instead.

To get started, here is an edited compilation of the sections that provide an overview of the argument. Sections discussing each aspect of that argument will follow.

We file this comment to drive home the point that for the Internet to be the marketplace of ideas Congress anticipated, and indeed sought for it to be, when it legislated in 1998, it is essential for these businesses to retain durable and reliable protection from liability arising from user-generated content. Furthermore, as long as Congress is taking the opportunity to study how the existing safe harbor has been functioning, we would flag several areas where it could be made to function better in light of these policy goals, as well as areas where it should be changed to make it as protective of speech as the Constitution requires.

With respect to this study [which invited comment via responses to 30 questions], just as history is written by the victors, records are written by those asking the questions. The hazard is that questions tend to presume answers, even when the answers that they elicit may not necessarily be the answers that are most illuminating.

While there is specific input that can be proffered with respect to various parts of the statute, it would not do the inquiry justice to remain focused on statutory minutiae. The DMCA is ostensibly designed to confront a specific policy problem. It is fair, reasonable, and indeed necessary to ensure that this problem is well-defined and well-understood before determining whether, and to what extent, the DMCA is an appropriate or appropriately calibrated solution to it.

Ultimately, however, it is not possible to have a valid copyright law that in any part is inconsistent with the Progress Clause or the First Amendment. To the extent that the DMCA protects intermediaries, and with them the speech they foster, it is consistent with both of these constitutional precepts and limitations. To the extent, however, that the DMCA subverts due process or otherwise compromises the First Amendment rights of either Internet users or of service providers themselves to use and develop forums for information exchange on the Internet, it is not. The statutory infirmities that have been leading to the latter outcome should therefore be corrected, to make the DMCA’s protections for intermediaries and the speech they foster as durable as this important policy interest requires.
Read more:

Issues pertinent to all responses to the asked questions:
Section II.A – Congress protected intermediaries for a reason
Section II.B – The DMCA functions as a system of extra-judicial censorship
Section II.C – The assumptions of economic harm underpinning the DMCA must be carefully examined

Comments arising from specific questions:
Section III.A – On the general effectiveness of the Safe Harbors
Section III.B – Issues with the notice-and-takedown process
Section III.C – First Amendment issues with counter-notifications and repeat infringer policies (and more)

New Decision In Dancing Baby DMCA Takedown Case — And Everything Is Still A Mess (cross-post)
http://www.digitalagedefense.org/wp/2016/03/20/new-decision-in-dancing-baby-dmca-takedown-case-and-everything-is-still-a-mess-cross-post/
Sun, 20 Mar 2016 14:30:21 +0000

The following originally appeared on Techdirt.

I got very excited yesterday when I saw a court system alert that there was a new decision out in the appeal of Lenz v. Universal. This was the Dancing Baby case where a toddler rocking out to a Prince song was seen as such an affront to Prince’s exclusive rights in his songs that his agent Universal Music felt it necessary to send a DMCA takedown notice to YouTube to have the video removed. Heaven forbid people share videos of their babies dancing to unlicensed music.

Of course, they shouldn’t need licenses, because videos like this one clearly make fair use of the music at issue. So Stephanie Lenz, whose video this was, through her lawyers at the EFF, sued Universal under Section 512(f) of the DMCA for having wrongfully caused her video to be taken down.

Last year, the Ninth Circuit heard the case on appeal and then in September issued a decision that generally pleased no one. Both Universal and Lenz petitioned for the Ninth Circuit to reconsider the decision en banc. En banc review was particularly important because the decision suggested that the panel felt hamstrung by the Ninth Circuit’s earlier decision in Rossi v. MPAA, a decision which had the effect of making it functionally impossible for people whose content had been wrongfully taken down to ever successfully sue the parties who had caused that to happen.

Although the updated language exorcises some unhelpful, under-litigated ideas suggesting that automated takedown systems could be a “valid and good faith” way of processing takedowns while considering fair use, the new, amended decision does little to remedy any of the more serious underlying problems from the last version. The one bright spot from before fortunately remains: the Ninth Circuit has now made clear that fair use is something that takedown notice senders must consider before sending their notices. But as for what happens when they don’t, or what happens when they get it wrong, that part is still a confusing mess. The reissued decision doubles down on the contention from Rossi that a takedown notice sender must have just a subjectively reasonable belief – not an objectively reasonable one – that the content in question is infringing. And, according to the majority of the three-judge panel (there was a dissent), it is for a jury to decide whether that belief was reasonable.

The fear from September remains that there is no real deterrent to people sending wrongful takedown notices that cause legitimate, non-infringing speech to be removed from the Internet. It is expensive and impractical to sue to be compensated for the harm this censorship causes, and having to do it before a jury, under a subjective standard that is extremely difficult to meet, makes doing so even more unrealistic.

It’s possible that the Ninth Circuit may actually see the plaintiff as having been vindicated here; after all, she may still go to a jury and be awarded damages to compensate her, potentially even for the attorneys’ fees expended in fighting this fight. But note that the issue of whether she is due anything, and, if so, how much, has not yet been fully litigated, despite this case having been going on since 2007! Not everyone whose content is removed is as tenacious as Ms. Lenz or her EFF counsel, and not everyone can even begin to fight the fight when their content is unjustly removed.

Furthermore, sometimes the value in having speech posted on the Internet comes from having it posted *then*. No amount of compensation can truly make up for the effect of the censorship on a speaker’s right to be heard when he or she wanted to be heard. Consider, as we are in the thick of election season, what happens when election-related speech is taken down shortly before a vote. As was pointed out in several amicus briefs in support of the en banc rehearing, including one I filed on behalf of the Organization for Transformative Works and Public Knowledge, such DMCA-enabled censorship has happened before.

Suing won’t solve that problem, but at least the threat of a lawsuit might make someone think twice before sending a wrongful takedown notice. But if a lawsuit isn’t a realistic possibility then that deterrence won’t happen. What the parties supporting the plaintiff have been worried about is that the DMCA allows for an unprecedented form of censorship we would not normally allow. Think about it: if there were no DMCA then people who wanted content removed from the Internet would have to file well-pleaded and well-substantiated lawsuits articulating why the content in question was so wrongful that an injunction compelling its removal was justified in the face of any defense. In other words, without the DMCA, the question of fair use would get considered, and it would get considered by a judge.

But thanks to the DMCA, would-be censors can save the time, cost, and burden of having to make sure they got the fair use question right before causing content to be removed – and very likely with a complete lack of judicial oversight to hold them to account if they didn’t. No judge may ever scrutinize their decision to ensure that they didn’t abuse the shortcut to censorship the DMCA affords them. Instead, Thursday’s decision only further ensures that this sort of abuse will continue unabated.

In the original ruling the dissent had the better argument. Remember the Saturday Night Live sketch, the dissent asked, where Jane Curtin sits at the counter and orders a Coke? In response John Belushi’s character bellows at her, “No Coke. Pepsi.” That’s what Amazon did, the dissent (and Amazon) argued, when people went to its site and searched for a specific kind of watch made by Multi Time Machine.

Amazon doesn’t sell watches made by MTM, so it couldn’t return any results for that watch. Instead Amazon essentially responded to the query as if to say, “It looks like you are interested in buying a nice watch. How about these nice watches?” and listed in the search results the luxury watches it did sell.

If we stop to think about it, surely this sort of exchange happens all the time in the offline world between customer and merchant. If a person walks into a store looking to buy a luxury watch the store doesn’t sell, it’s perfectly reasonable for the store to show the customer the luxury watches it does sell instead. This scenario reflects a perfectly competitive watch market in action, where a consumer is free to either leave the store and keep searching for what it originally wanted or consider other more readily-available options. It is not the sort of situation that trademark law bars or was ever intended to bar.

What the Lanham Act prohibits is the store, whether online or offline, making it seem like the watches it could sell were MTM watches, when they in fact were not. If the customer had its heart set on an MTM watch then it’s not ok to pass off a non-MTM watch as MTM just to capture that sale. But that’s not what Amazon did. When Amazon listed the watches it did have it listed their manufacturers, none of whom were MTM.

But that’s what really bothered MTM. It does not want Amazon (or any other vendor) to be able to sell a different luxury watch when it cannot meet the customer demand for an MTM one. Under MTM’s legal theory, if a customer looks for an MTM watch and can’t get one, then the store needs to rebuff the inquiry. It wouldn’t be fair, MTM argues, for another vendor to be able to make a sale to a consumer who had only been enticed to search for a luxury watch based on MTM’s branding.

But of course it’s fair. That’s competition. MTM has no entitlement to sell its watch to anyone, regardless of how much anyone was originally interested in buying one. The Lanham Act does not and should not serve as a barrier to that sort of competition, but under the original panel ruling at the Ninth Circuit that’s basically what the court allowed it to do.

The panel did this by contorting trademark law in ways it was not supposed to be bent, conflating the type of consumer confusion it does prohibit with a different type of confusion it does not. The type of confusion the Lanham Act does prohibit is confusion as to source. In other words, would the consumer be confused about who made the product they were buying? If John Belushi had no Coke to serve Jane Curtain and yet simply slipped her a non-Coke cola, she might be confused about what she was drinking. But she wasn’t confused; he told her it was Pepsi, just as Amazon told customers searching for MTM’s watches what companies made the watches that Amazon could sell them.

The court instead had to strain its imagination to concoct hypothetical situations where a consumer might still be confused. Maybe, the court speculated, consumers would be confused and think that there was a business relationship between MTM and the brands of watches Amazon listed. Of course maybe they also still believe Santa Claus makes all watches up at the North Pole… Findings of trademark infringement are often predicated on whether “a moron in a hurry” would be confused. But that test asks whether that person would be confused as to who made the product, not confused about a business relationship between companies. And, as the dissent pointed out, trademark infringement can’t be found just because some dolt was eager to draw unfounded conclusions. The majority’s fear is purely hypothetical, as well as arbitrary. While it worried that people could draw erroneous conclusions about the business relationships between the players in the watch business, it did not seem equally concerned that people would ever think that Coca-Cola and Pepsi-Cola had merged. The majority’s decision offers no obvious basis for knowing which sorts of imagined business associations would be likely to confuse customers and which would not, and that’s a problem.

Note that the Ninth Circuit did not find that Amazon had infringed MTM’s trademark. But its decision said that because it believed people might potentially be confused about whether there was a relationship between MTM and the watch brands Amazon did sell, the question of whether that confusion amounted to trademark infringement was a question for a jury. But even if a jury might ultimately decide that Amazon’s practice did not amount to infringement, such a finding is hardly doing Amazon, or any other similarly situated intermediary, any favors if they have to first be exposed to a costly trial before being exonerated. Under prior precedent the way Amazon displayed search results was perfectly acceptable for avoiding any trademark claims predicated on confusion as to product source. For the Ninth Circuit to now expand the coverage of trademark law to incorporate other types of confusion, particularly speculative confusion arising from the individual biases of a trial judge, is extremely chilling for those businesses who need to know up front whether the way they run their business risks liability or not, and now can’t.

Earlier this week the Ninth Circuit heard oral arguments in the appeal of Lenz v. Universal. This was the case where Stephanie Lenz sued Universal because Universal had sent YouTube a takedown notice demanding it delete the home movie she had posted of her toddler dancing, simply because music by Prince was audible in the background. It’s a case whose resolution has been pending since 2007, despite the fact that it involves the interpretation of a fundamental part of the DMCA’s operation.

The portion of the DMCA at issue in this case is Section 512 of the copyright statute, which the DMCA added in 1998 along with Section 1201. As with Section 1201, Section 512 reflects a certain naivete by Congress in thinking any part of the DMCA was a good idea, rather than the innovation-choking and speech-chilling mess it has turned out to be. But looking at the statutory language you can kind of see how Congress thought it was all going to work, what with the internal checks and balances they put into the DMCA to prevent it from being abused. Unfortunately, while even as intended there are some severe shortcomings to how this balance was conceptualized, what’s worse is how it has not even been working as designed.

One such problem is with the content takedown system incorporated into Section 512. The point of Section 512 is to make it possible for intermediaries to host the rich universe of online content that users depend on them to host. It does this by shifting the burden of having to police users’ content for potential copyright infringement from these intermediaries to copyright owners, who are better positioned to do it. Without this shift more online speech would likely be chilled, either because the fear of being held liable for hosting users’ infringing content would prompt intermediaries to over-censor legitimate content, or because the possibility of being held liable for user content would make being an Internet intermediary hosting it too crushingly high a risk to attempt at all.

Copyright owners often grumble about having the policing be their responsibility, but these complaints ignore the awesome power they get in return: by merely sending a takedown notice they are able, without any litigation or court order or third-party review, to cause online speech to be removed from the Internet. It is an awesome power, and it is one that Congress required them to use responsibly. That’s why the DMCA includes Section 512(f), as a mechanism to hold wayward parties accountable when they wield this power unjustifiably.

Unfortunately this is a section of the statute that has lost much of its bite. A 2004 decision by the Ninth Circuit, Rossi v. MPAA, read into the statute a certain degree of equivocation about what the “good faith” requirement of a takedown notice actually demanded. Nonetheless, the statute on its face still requires that a valid takedown notice include a statement that the party sending it has “a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law.” (emphasis added)

The big question in this case is what the “or the law” part means in terms of making a takedown notice legitimate. No one is disputing that the notice that took down the dancing baby video was authorized by the agent in charge of administering the rights to Prince’s music (at the hearing we learned that this is no longer Universal Music, but it was back then). But copyright is always contextual. In other words, just because someone uses (e.g., by posting to the Internet) a copyrighted work does not mean they have automatically infringed that work’s copyright. There may well be circumstances enabling that use, like a license (including a statutory or compulsory license), or fair use.

Whether the “or the law” part included authorization pursuant to fair use is what a significant part of the hearing addressed. Universal said that it didn’t, arguing that fair use was only an affirmative defense. By “affirmative defense” Universal meant that fair use was just something you could argue as a defense to being accused of copyright infringement in a lawsuit but not something that existed more integrally as part of copyright law itself. As such, Universal argued, it was not necessary to consider it when sending a takedown notice claiming that the use in question was not authorized.

The EFF, arguing for Lenz, disagreed, contending that the articulation of fair use in the statute, at 17 U.S.C. § 107, makes fair use more than just a defense; rather, it is a statutory limitation constraining the scope of the copyright owner’s exclusive rights and just as much a part of the law as the parts enumerating those rights. As a result, the EFF argued, a copyright owner sending a takedown notice always has to consider whether the rights the notice seeks to vindicate are at all constrained by the sort of use being made of the work. If the copyright owner doesn’t do that, then it could be subject to the sanctions of 512(f).

Although one can never reliably read the tea leaves of an oral argument, the judges did not seem to buy Universal’s argument that fair use was just an affirmative defense. They seemed more persuaded by the EFF’s position that it was enough of a part of the copyright statute for at least some consideration of it to be required for a takedown notice to be valid. But then the court became concerned with the question of how much consideration was needed. After all, as Universal suggested (and EFF disagreed with), there may even be some question about whether the use of Prince’s music in the dancing baby video was itself fair. Fair use is a very squishy thing, always dependent on the particular context of a particular use of a copyrighted work. Often it takes massive amounts of litigation to determine whether a use was fair, so the judges spent a lot of time questioning both parties about what a copyright owner (or its agent), if the statute requires them to consider fair use, must actually do on that front in order to not run afoul of the law’s requirements when sending takedown notices.

Universal argued that because it (and other similarly situated copyright holders) needed to send millions of takedown notices it would simply be too burdensome to have to consider fair use for each and every one of them. To this the EFF suggested that tools may be available to help triage the likely contenders needing closer analysis, but something else the EFF said I think drives the point home more aptly.

The DMCA also includes a “put back” process, at Section 512(g), so that Internet users whose content has wrongfully been removed can have it replaced. Universal argued that this process should be enough to deal with any wrongful takedowns, as it allows for wrongfully removed content to be replaced. (Universal also argued that this “put back” notice was necessary to give the copyright holder notice that fair use might be an issue to consider.) But if this were the case then why have a Section 512(f) in the statute at all? There is nothing in the statute that suggests that a “put back” notice needs to happen for Section 512(f) to be able to operate. Furthermore, although the record in this case was unfortunately poor as to what percentage of removed content was ever put back pursuant to 512(g) put back notices, as the EFF noted, even if it were a very small percentage of removed content, a small percentage of millions of instances suggests that quite a bit of non-infringing content is still getting removed.

Moreover, there is no reason to suspect that the content that has been restored in response to these put back notices represents the entire universe of wrongfully removed content. There is little basis to presume that everyone else who had their content removed simply shrugged it off as a fair cop. Because a put back notice can conspicuously put a user in the line of fire of a copyright owner, many users might not have wanted to invite that trouble. Also, as the EFF observed, the DMCA takedown system is fairly labyrinthine and often needs the assistance of counsel to help navigate it. This form of support is likely not available to most, and even in the case of Ms. Lenz it did not readily result in her home video of her kid dancing being restored.

Ultimately Universal is arguing that this outcome is ok: despite this harm to legitimate speech, copyright owners should nonetheless be entitled to cause millions and millions of instances of user-generated content to disappear from the Internet with very little effort, inconvenience, or oversight on their part. But it’s an argument that fails to recognize just what a privilege the takedown system represents. It is a huge shortcut, giving private parties the extraordinary power to be censors over Internet content without the trouble and expense of a lawsuit to first determine whether their rights have truly been infringed. With the DMCA copyright owners become judge, jury, and executioner over other people’s speech all on their own, and when they decide to sentence content for disappearance they get to use the takedown notice as the gun to the head of the intermediary to force it to do the deed.

Universal spent a lot of time arguing that the DMCA was intended to be this sort of shortcut in order to be a “rapid response” system to online infringements. But the “rapid response” the DMCA offers is that copyright owners don’t first have to go to court. Nothing in the statute suggests copyright owners are entitled to a response so rapid that they are excused from exercising the appropriate care a valid takedown notice requires – or that even a lawsuit would require. As Universal would have it, they get to be censors over other people’s speech without any of the risk normally involved if they had to use the courts to vindicate their rights. Note that nothing in the DMCA precludes a copyright owner from suing an Internet user who has infringed its copyright. But with a lawsuit comes the risk that a copyright owner might have to pay the fees and costs of the defendant should their claims of infringement be found unmeritorious (including because the targeted use was fair). According to Universal, however, copyright owners should face no similar consequence should the claims underpinning their takedown notices be similarly specious. Copyright owners should simply be able to cause content to be deleted at will, with no risk of any penalty to them for being wrong.

But that’s not what the statute says. As was also argued at the hearing, Section 512(f) creates the penalty necessary to deter wrongful takedowns, because without one, all the risk of the takedown system would be borne by those whose free speech rights (both to speak freely and to freely consume what others have said) are undermined by copyright owners’ glib censorship. As the saying goes, with great power comes great responsibility, and it hardly misconstrues Congress’s intent, or the express language of the statute, to demand that copyright owners carefully exercise that responsibility before letting their takedown notices fly, and to sanction them when they don’t.

And that’s why so much outrage is warranted when bullies try to strip speakers of their anonymity simply because they don’t like what these people have to say, and why it’s even more outrageous when these bullies are able to do so. If anonymity is so fragile that speakers can be so easily unmasked, fewer people will be willing to say the important things that need to be said, and we all will suffer for the silence.

We’ve seen on these blog pages examples of both government and private bullies making specious attacks on the free speech rights of their critics, often by using subpoenas, both civil and criminal, to try to unmask them. But we’ve also seen another kind of attempt to identify Internet speakers, and it’s one we’ll see a lot more of if the proposal ICANN is currently considering is put into place.

In that case the critic had selected a domain incorporating Carreon’s name in order to best get his point about Carreon’s thuggery across, which the First Amendment and federal trademark law allowed him to do. When he registered the domain name he also paid extra to avail himself of the registrar’s proxy WHOIS service in order to maintain his anonymity by keeping his identifying details hidden – a service that up to now domain registrars have been permitted to offer. Unfortunately, the registrar immediately caved to Carreon’s pressure and disclosed the critic’s identifying information, thereby eviscerating the privacy protection the critic expected to have, and depended on, for his commentary.

It was a denial of anonymity that never should have happened, but under the new ICANN proposal this sort of exposure of speakers’ identifying information will only happen more often as ICANN seeks to make the privacy protections of the WHOIS proxy service less available and more flimsy, particularly in cases where IP owners dislike the speech taking place at that domain.

It is a proposal that is extraordinarily glib about its consequences for any Internet speaker preferring not to be dependent on another domain host for their online speech. First, it naively pre-supposes that the identifying information of a domain name holder would only ever be used for litigation purposes, when we sadly already know that this presumption is misplaced. As this letter to ICANN points out (linked to from the independently expressive domain name “icann.wtf”), people objecting to others’ speech often use identifying information about Internet speakers to enable campaigns of harassment against them, sometimes even with threats to life and limb (for example, by “swatting”).

Secondly, it pre-supposes that, even if this identifying information were to be used solely for litigation purposes, a lawsuit is a negligible thing for a speaker to find itself on the receiving end of, when of course it is not. In the case of Carreon’s critic he was fortunate to be able to secure pro bono counsel, but not everyone can, and having to pay for representation can often be ruinously expensive.

Thirdly, it pre-supposes that there is somehow an IP-related exemption to the First Amendment, when there most certainly is not. Speech is speech and it is all protected by the First Amendment. Attempts to carve out exemptions from its protections for speech that somehow implicates IP should not be tolerated, particularly when the consequences to discourse are just as damaging when speech is chilled by IP owners as when it is chilled by anyone else seeking to suppress what people may say.

It is important to hold fast on this all-speech-is-protectable principle especially because, fourthly, just because an IP owner may object to certain speech does not magically make that objection valid. Remember, Carreon’s critic was ultimately vindicated, but only after he had lost his anonymity, which Carreon was way too easily able to destroy. In fact, Carreon’s example shows how, in light of this potential for abuse, we should actually be strengthening the ability of intermediaries to resist demands to unmask their users, not making them more vulnerable to this pressure, as ICANN currently proposes.

Also, continue to watch how this issue develops and be prepared to let appropriate U.S. government representatives know that they need to ensure that all of the First Amendment protections online discourse depends on, including the right to speak anonymously, remain protected, particularly in instances like this one when they are under such direct threat. Sadly this is not the only example of how online free speech is under fire, but that’s a subject for other blog posts another day . . . .

Google v. Garcia oral argument summary
http://www.digitalagedefense.org/wp/2015/02/21/google-v-garcia-oral-argument-summary/
Sat, 21 Feb 2015 18:33:42 +0000

Back in December I traveled to Pasadena to observe the oral argument in the en banc appeal of Google v. Garcia, a case I filed an amicus brief in on behalf of Techdirt and the Organization for Transformative Works. (Actually, I ultimately wrote two briefs, one in support of the en banc appeal being granted and one as part of the appeal once it was.) After the hearing I wrote a synopsis of the arguments raised during the appeal on Techdirt (originally titled, “Celine Dion And Human Cannonballs“), which I’m now cross-posting here:

Background

Garcia v. Google. If it weren’t for the Monkey Selfie, this case would have been the topic most on the lips of copyright and Internet lawyers this year. The facts here, of course, are much less humorous: Garcia, an actress, was allegedly duped by a filmmaker into appearing in the movie eventually titled “Innocence of Muslims,” which turned out to be an anti-Muslim cinematic screed. A lot of people were offended, and some channeled their outrage into threats against her. Garcia sued the filmmaker for the harm she believes he caused her, but that’s not the issue here.

What is at issue is why this case has turned into such a mess, because what she really wants is for the movie to go away. So she also sued Google to make it go away – or at least to have the court order Google to remove it from YouTube. The thing is, though, courts aren’t supposed to be able to simply order content to be deleted, and for some very good reasons. We have laws (notably Section 230) that insulate intermediaries from takedown orders, because ordering content to be taken offline means ordering content to be censored.

However, as those who have read Techdirt for any length of time know, American law seems to have a “censorship is bad except when it comes to copyrighted content” exception. Intermediaries are not insulated from demands to take down content when the person asking for its removal can claim that the reason it needs to be removed is because it violates their copyright.

But even then there are some limits on the injunctive power of a court to order content to be removed, particularly at the preliminary injunction stage, which, believe it or not given everything that’s followed, is only as far as her case had gotten. Generally speaking, preliminary injunctions are only issued when there is a likelihood that the party seeking the injunction will ultimately win the case, as well as a likelihood of irreparable harm to it if the court does not issue an injunction right away, before there has been a chance to evaluate the lawsuit on its merits. The district court considering Garcia’s request for a preliminary injunction rejected it on both counts. It didn’t appear Garcia had a valid copyright to sue Google for infringing, and even if she might have, there was no need to issue an injunction before the court had a chance to fully consider the question.

And that would have been the end of it, except the Ninth Circuit, in a three-judge panel led by Judge Kozinski, decided otherwise, first finding that she had a copyright interest and then using it as the basis to issue a broad injunction to Google ordering the film’s removal from YouTube (the injunction was later dialed back somewhat, but it still remained quite expansive). Which is what caused all this consternation, because if Judge Kozinski were right about her having a legitimate copyright claim, it would stand to change copyright law from how we understood Congress to have crafted it, as well as set the stage for even more efforts to censor online content.

It was not actually necessary to be at the hearing to follow along given that it was also streamed (and tweeted…). As it was, one judge out of the 11-judge panel, Judge Berzon, participated remotely. But there are always certain intangibles that can only be experienced in person, like seeing what appeared to be some representative of the defendant filmmaker distribute nicely xeroxed packets of propaganda advertising the book of the script for his “Innocence of Muslims” film to everyone in the gallery before the hearing began…

As for the hearing itself, it took about an hour and essentially ended up focusing on these two questions: whether Garcia could have a copyright interest in the 5 seconds she appeared in the final film, and whether the preliminary injunction was appropriate. But the unusual procedural posture of the case caused the two questions to frequently blur together.

Garcia’s lawyer argued first and opened with, “Cindy Lee Garcia is an ordinary woman surviving under extraordinary circumstances.” She then went on to spell out some of the awful threats Garcia had received, but then the judges quickly jumped in to ask how those threats bear on the preliminary injunction standard. (Note: I frequently refer to the “court” generally, rather than identify the judges specifically, although I did note some of Judge Kozinski’s lines of inquiry due to his particular effect on this case earlier.) Garcia argued that because some of these threats were death threats, that supported the argument that without the injunction she was facing irreparable harm. That may be so, the court then asked, but the possibility of irreparable harm was only one factor considered by the district court. To get her injunction there had to be a threat of irreparable harm as well as a likelihood that she would win on her copyright claim against Google. How was the district court wrong when it decided she had no copyright claim to prevail on?

One issue for Garcia (which the court kept coming back to in various respects) is that she had expressly disclaimed having a copyright in the final movie as a joint author. It’s an argument that comes up from time to time when people who worked on larger productions try to claim partial ownership in the final product on the strength of their contributions to it. As courts, including the Ninth Circuit, have considered the question they generally have looked to the intention of the parties at the outset that all the “contributions be merged into inseparable or interdependent parts of a unitary whole.” But Garcia wasn’t arguing that she now owned a piece of the final “Innocence of Muslims” film; she was arguing that she owned a copyright in her performance made during the 3.5 days of filming.

The court worried about the implications of her argument. What was to keep everyone who made a cameo in a Lord of the Rings battle scene from also claiming a copyright interest in their performance? Garcia’s answer seemed to get at the heart of her copyright claim. In the court’s example everyone knew what the deal was when they worked on the movie. They had agreed, expressly or impliedly, that their performances be captured as part of the whole. But for Garcia, she never consented to ending up in what turned out to be the “Innocence of Muslims” film. The filmmaker had duped her into agreeing to appear in one sort of movie but then used her performance in something completely different. This deception unwound the agreement to subordinate her performance into the whole and allowed her to retain her copyright in the individual contribution.

The court seemed skeptical about this theory, for a number of reasons. For one, where was the work? While on the one hand it often seems like everything is copyrightable these days, copyright’s applicability is extremely technical. It requires an (a) original (b) work of (c) authorship that is (d) fixed in a tangible medium. As Google also later argued, she hadn’t made out all of these elements in attesting to the copyrightability of her individual performance made over those 3.5 days. (There is also the issue that, of her 3.5 days’ worth of performance, only 5 seconds ever made it into the film.)

The court also worried about what the impact of her theory would be. If her retaining a copyright in her performance hinged on the deception, then what was to stop any actor from claiming fraud or mistake in order to claim a copyright in their performances as well? This question was particularly relevant for Google’s position, which was argued next. Could all these people then issue takedown notices to intermediaries? As Judge McKeown noted, it would put intermediaries “at risk for thousands, millions of claims made after the fact.” Would all of them have to act to remove this content lest they end up like Google and find themselves on the receiving end of a lawsuit?

In her rebuttal Garcia argued yes. The DMCA (or “free pass card,” as she referred to it) protected intermediaries by getting them out of the dispute between the party who posted content and the party claiming copyright in it. As long as the intermediary deleted the content as soon as it got notice, it could then let the parties fight it out. Google says taking down content is easy, she argued. We’re not asking them to do something hard.

As Google argued during its turn, however, the implications of Garcia’s argument are chilling (particularly, as we argued in our brief, for intermediaries who are not as large or well-funded as Google and for whom taking down content may well be much harder than she described for Google). If all it takes is a claim of fraud to claim a copyright interest, Google argued, it “fragments” copyright and makes every intermediary vulnerable. They can’t adjudge the merits of every copyright claim. Allowing these sorts of claims, especially if they could be predicated on but five seconds of material, would “overload the takedown system.” Intermediaries would simply have to delete content in order to protect themselves, and that would lead to the censorship of myriad protectable speech.

Other Arguments

Google made one other main point during its argument, targeting the preliminary injunction the Ninth Circuit had issued, similar to how the EFF had questioned it in its amicus brief. The appeals court had enjoined speech, and as such there was a question of whether that was permissible under other standards governing injunctions. Garcia argued that it was, saying that there was a difference between the standard governing a mandatory injunction, which requires someone to do something, and the standard governing a prohibitory injunction, which restricts someone from doing something in the future. This was a prohibitory injunction, she argued, because all the panel had done was restrict further infringement. Google argued otherwise.

When it changes the status quo, it’s a mandatory injunction. Here there was speech, but as a result of the injunction speech was removed. That makes it look like a mandatory injunction and thus requires a much stronger showing than Garcia could provide that it was warranted. After all, as Judge Thomas noted, “Is there anyone in the world who doesn’t know your client is associated with this movie by now?” The damage has already been done, the “toothpaste out of the tube,” as Google put it, and there is nothing to be accomplished by censoring the movie now.

The court also tested Google on its argument against Garcia having a copyright, and this discussion led to the examples cited in the title, the first of which involved poor Celine Dion, who kept being called upon to test various theories. Why does she get a copyright in her singing performance included in Titanic, Judge Kozinski wondered, but not Laurence Olivier for his performance in a film? To which Google answered that when Celine Dion recorded her song the intention was always that the performance be a standalone work then also included in the larger one, whereas for Olivier there was never an intention that his performance ever be considered some individually copyrightable work.

In her rebuttal Garcia took issue with the Celine Dion example. If she had been singing on the bow of the ship, intending for her appearance doing that to become part of the movie, it would have been one thing. But it would be another thing if the filmmakers, having captured her performance, then distributed the clip of it to pornographers to be put in their movies. Garcia’s argument is that something similar had happened here, where a performance she had allowed in one context got used in another that she hadn’t. The question, though, is not whether the law would recognize this injury but whether copyright is the law that does. There are other laws that recognize rights that might be implicated, such as those establishing rights of publicity.

Interestingly a right of publicity case led to another detour by Judge Kozinski to test the contours of Google’s argument, and that raised the second example in the headline. Google had argued that there was no precedent “that a 5 second performance is a separate copyright work.” Judge Kozinski countered by citing Zacchini v. Scripps-Howard Broadcasting, where 15 seconds of Zacchini’s performance had been broadcast on local TV. Because Zacchini was a human cannonball, however, those 15 seconds constituted a significant part of his performance. The Supreme Court found that the rebroadcast may well have caused him an injury the law recognized. But while the Zacchini case stands for the proposition that there can be something to protect in very short performances, it doesn’t stand for the proposition that they are necessarily protected by copyright.

This is an important distinction, because violations of rights of publicity are governed by state law, and intermediaries are insulated from injunctions ordering the removal of content reflecting these injuries by Section 230. The Garcia v. Google case has been about forcing these sorts of injuries to be evaluated through the lens of copyright solely to avoid the bar prohibiting these injunctions, and if this sort of Section 230 end-run is allowed to work here, as Google (and many amici) argued, it will enable all sorts of censorious mischief.

(Note: Judge Kozinski also spent some time exploring the impact of the Beijing AVP treaty on the case at hand, but I will leave it to others to explore the potential implications of this argument, as they are worthy of their own post.) Update 2/21/15: see this post by Margot Kaminski on this topic.

Garcia v. Google amicus brief
http://www.digitalagedefense.org/wp/2014/04/16/garcia-v-google-amicus-brief/
Wed, 16 Apr 2014 16:50:04 +0000

On Monday I filed an amicus brief in a case sometimes referred to as “Garcia v. Google.” The case is really Garcia v. Nakoula, with Garcia being an actress who was duped by the defendant into appearing in a film he was making – a film that, unbeknownst to her, turned out to be an anti-Islam screed that led to her life being threatened by many who were not happy with its message and who sought to hold her accountable for it.

There’s little question that Nakoula wronged her, and likely in a way that the law would recognize. Holding him accountable is therefore uncontroversial. But Garcia didn’t just want to hold him accountable; Garcia wanted all evidence of this film removed from the world, and so she sued Google/YouTube too in an attempt to make it comply with her wish.

Garcia is obviously a sympathetic victim, but no law exists to allow her the remedy she sought. In fact, there are laws actively preventing it, such as 47 USC Section 230 and the Digital Millennium Copyright Act (DMCA), and, believe it or not, that’s actually a good thing! Even though it may, in cases like these, seem like a bad thing because it means bad content can linger online if the intermediary hosting it can’t be forced to delete it, such a rule helps preserve the Internet as a healthy, robust forum for online discourse. It’s really an all-or-nothing proposition: you can’t make case-by-case incursions on intermediaries’ statutory protection against having to take down “bad” content without chilling their ability to host good content too.

And yet that is what happened in this case when Garcia sought a preliminary injunction to force Google to delete all copies of the film from YouTube (and prevent any new copies from being uploaded). Not at the district court, which denied her request, but at the Ninth Circuit Court of Appeals earlier this year, when two out of three judges on the appeals panel chose to ignore the statutes precluding such an order and granted it against Google anyway.

Google has now petitioned for the Ninth Circuit to review this decision, and a few days ago nearly a dozen third parties weighed in with amicus briefs to persuade the court to revisit it. Most focused on the method by which the court reached its decision (i.e., by finding for Garcia a copyright interest in the film unsupported by the copyright statute). I, however, filed one on behalf of two intermediaries, Floor64 Inc. (a/k/a Techdirt.com) and the Organization for Transformative Works, both of which depend on the statutory protection that should have prevented the court’s order. My brief argued that by granting the injunction in contravention of the laws precluding it, the court has undermined these and other intermediaries’ future ability to host any user-generated content. As the saying goes, bad facts make bad law, and tempted though the court may have been in this case with these facts, if its order is allowed to stand the court will have made very bad law indeed.

What would the Internet be without its intermediaries? Nothing, that’s what. Intermediaries are what carry, store, and serve every speck of information that makes up the Internet. Every cat picture, every YouTube comment, every Wikipedia article. Every streamed video, every customer review, every online archive. Every blog post, every tweet, every Facebook status. Every e-business, every search engine, every cloud service. No part of what we have come to know as the Internet exists without some site, server, or system intermediating that content so that we all can access it.

The reason Section 230 has been so helpful in allowing the Internet to thrive and become this increasingly rich resource is that by relieving intermediaries of liability for the content passing through their systems it has allowed for much more, and much more diverse, content to take root on them than there would have been had intermediaries felt it necessary to police every byte that passed through their systems out of the fear that if they didn’t, and the wrong bit got through, an expensive lawsuit could be just around the corner. Because of that fear, even if those bits and bytes did not actually comprise anything illegal intermediaries would still be tempted to over-censor or even outright prohibit scads of content, no matter how valuable that content might actually be.

But Section 230 has some limits in its protection for intermediaries, and around those edges we see the Internet’s incredible marketplace of ideas begin to corrode. One of the notable exceptions to Section 230’s protection for intermediaries arises when the content in question is accused of violating “intellectual property” law. To the extent that “intellectual property” includes copyright, however, another law steps in to offer intermediaries some cover. Section 512 of the Copyright Act, often referred to as the DMCA, provides intermediaries some protection if they comply with several criteria – although not nearly as effectively as Section 230. Whereas Section 230 provides protection from liability automatically, without the intermediary needing to do anything (while nonetheless encouraging them to edit out less-desirable content by also immunizing them for doing that), the DMCA instead requires intermediaries to disable access to allegedly infringing content as soon as they have been given notice of it.

Happily, recent case law on the DMCA has reaffirmed that intermediaries still do not have to actively police for potentially infringing content, which is particularly important for any site that hosts content for hundreds, if not thousands or millions, of users. Indeed, copyright permissions are contextual and the intermediary is never going to be properly equipped to know whether content on its systems was put there with permission or without. Instead the burden to police for copyright infringement remains with the copyright owner, who is much more likely to know.

But even under these regimes significant amounts of legitimate content still end up censored from the Internet, and as things stand it may only get worse. While Section 230 also exempts from its coverage content that violates federal criminal law, state attorneys general have been lobbying to extend that exemption to each of the fifty states’ own criminal laws as well, meaning that intermediaries would be at risk not just for civil liability but also potentially for criminal liability under myriad legal theories as a result of content appearing on their systems. Moreover, imperfect though the DMCA currently is, through bills like SOPA and trade agreements like the TPP even the limited protection the DMCA offers intermediaries stands to be severely diluted.

All of these proposed changes would further chill the vibrant exchange of ideas intermediaries have heretofore felt free to enable. Those of us who have come to know, love, and depend on the Internet need to stem this legal tide and even reverse it. We need to keep intermediaries out of the line of fire so that they can continue to provide the digital public forums we all depend on. Which does not mean that any specific content itself needs to be immunized – indeed, if content is defamatory, infringing, or otherwise violative of any legitimate law it can well be targeted for legal sanction. But it’s important to only hold liable the party directly responsible for such content. To do otherwise is as foolhardy as shooting the messenger who has carried a message someone else has sent. If a few too many intermediaries find themselves facing such dire consequences for delivering others’ content, soon none will be left willing to deliver any more.

A law requiring deletion
http://www.digitalagedefense.org/wp/2013/09/29/a-law-requiring-deletion/
Sun, 29 Sep 2013 19:45:20 +0000

This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of how they are supposed to comply with it.

But such discussion should be moot. This law is precluded by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post some content to a website that might be illegal in some way under state law, that state law can’t make the website hosting that content itself be liable for it (nor can that state law make the website delete it). But that’s what this law proposes to do at its essence: make websites liable for content others have posted to them.

Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record” thanks to the ease with which the Internet collects and indefinitely stores the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration of how best to ameliorate those consequences, if it is even possible to do so, without injuring the important free speech principles and values the Internet also supports. This law offers no such solution.