Meet the New Governors, Same as the Old Governors

Enrique Armijo is the Associate Dean for Academic Affairs and an Associate Professor of Law at the Elon University School of Law.

It is a category error to assume that an old paradigm is obsolete simply because of the emergence, or even the dominance, of a new one. Although the title of Kate Klonick’s thoughtful essay sets Facebook and New York Times Co. v. Sullivan against each other by inserting a “v.” in between them, one upshot of her piece, and indeed of much of her other important work in this area,1 is that the First Amendment continues to play a critical role in resolving disputes about what people should be able to say online. What Klonick describes isn’t a face-off between social media content moderation and First Amendment law. It’s more like the story of a child who thought herself equipped by her parents to understand the world but then finds herself in a novel setting—a semester abroad, a rave—where some of the rules she was taught don’t seem to help.

To unpack some of the free expression-related issues raised by social media, it is useful to separate the content moderation practices Klonick discusses into two related but distinct types: what can be said about whom, and who can say it. With respect to the first type, the influence of the First Amendment still reigns, and for a good reason. It still works. To paraphrase Judge Easterbrook’s early critique of the law of cyberspace, general First Amendment rules have proven themselves adaptable to the “specialized endeavors” of social media,2 even if the role of applying those rules has largely shifted from courts to moderators. The converse is also true. We should be careful not to let that which we perceive as special about the social media context overwhelm the soundness, wisdom, and relevance of the general rules.

With respect to the second type of practice, however, the First Amendment has not been nearly as helpful for resolving content moderation problems. And if social media has largely failed to become the engine for social change and political discourse that many hoped it would, the influence of First Amendment ideology on questions concerning who can speak online—and in particular its valorization of anonymous speech—is one reason why.

Limited-Purpose Public Figures and Social Media

As Klonick writes, in Gertz v. Robert Welch, Inc.3 the U.S. Supreme Court extended its actual malice doctrine to defamation plaintiffs who have come to be known as “limited-purpose public figures”—otherwise private people who have voluntarily inserted themselves into controversies and thus become the subject of public discussion. The Court concluded that these people should, like public officials, have to show actual malice in defamation suits relating to the controversies of which they are a part, because they assume the risk of being talked about negatively and even falsely when they enter public debates “in order to influence the resolution of the issues involved.”4 The Court also responded to a potential unfairness in its ruling, insofar as it might sweep in people who do not choose to be a public figure. As Klonick notes, the Court said that “[h]ypothetically, it may be possible for someone to become a public figure through no purposeful action of his own, but the instances of truly involuntary public figures must be exceedingly rare.”5

Klonick is absolutely correct that the internet has put the lie to the Court’s suggestion in Gertz that someone can involuntarily enter public debate only in the rarest of cases. By making public so much of day-to-day life that was formerly private, social media has been used to thrust publicity upon many individuals through no fault of their own. As Klonick also notes, there is a significant First Amendment risk in permitting the limited-purpose public figure doctrine to take online notoriety into account. To take proper account of this risk, however, it is helpful to consider the case not of Alex from Target but of a different Alex: Alex Jones.

Earlier this year, several parents of Sandy Hook Elementary School children sued Alex Jones for defamation, pointing to Jones’s statements falsely implicating the parents in faking the shooting and deaths of their children.6 In his response to the suit, Jones has argued that the plaintiff-parents are limited-purpose public figures—that they have been discussed as part of, and have inserted themselves into, the larger controversy around gun rights in the United States—and that under Gertz they should therefore have to prove that his statements about them were made with actual malice. It is true that some Sandy Hook parents (though not the plaintiffs who have sued Jones) became vocal participants in the gun-control movement in the wake of the tragedy and that others have organized online to try to prevent future attacks. But Jones and similar defendants should not be able to expand the bounds of the controversies that they themselves create so as to raise the burden of proof for those implicated in the controversies who sue them for reputational harm. Making such individuals prove actual malice in defamation cases gets the First Amendment backward. It will encourage individuals to take the tragedies that happen to them and swallow them silently—to not get active, to not connect with others who have similar purposes, to not share in sorrow and attempt to make change.

No one would have volunteered for the kind of attention that the Sandy Hook parents have received, and no one would argue that the controversy of which they became a part was not widely discussed, particularly online. But if a court were to “dispatch with the ‘voluntary’ and ‘involuntary’ concepts altogether,” as Klonick proposes, and go on to find that the parents were public figures because of that attention alone, then future parents might not speak out at all, which would do significant harm to the marketplace of ideas that the First Amendment is intended to promote.

Courts should therefore continue to ask whether, consistent with Gertz, a plaintiff in a defamation case has acted voluntarily—exercising her will to undertake “a course of action that invites attention.”7 This inquiry should not turn on whether a speech platform has itself facilitated discussion of that person. After all, Alex from Target voluntarily appeared on Ellen. But one should not be transformed into a public figure through no affirmative, purposeful act of one’s own. The longstanding law of defamation gets this right.

Judicial doctrine has another public figure rule that remains relevant online: A defamation defendant cannot cause a plaintiff to become a public figure by dint of the statements that gave rise to the claim.8 The relevant controversy, in other words, “must have existed prior to the publication of the defamatory statement.”9 This too provides a useful heuristic for content moderation. Klonick describes a policy in which Facebook uses Google News to determine whether an individual is a public figure when deciding whether to take down bullying speech about or directed at that person. The case law suggests that if the results of that search include only stories about the complained-of bullying itself, then the victim of the bullying is a private person. Looking at the sheer number of Google News hits or at whether an individual is being discussed widely on social media obscures, rather than clarifies, these questions. Good old-fashioned First Amendment law does the job much better.

The same is true with regard to “newsworthy” yet harmful content generally. The “more protection for speech about issues and groups, less protection for speech about specific individuals” decision rule that Klonick appears to recommend (my paraphrase, not her words) loosely tracks the development of legal rules around group libel and hate speech in the years since the Supreme Court’s 1952 decision in Beauharnais v. Illinois.10 In recent decades, the federal appellate courts have concluded that cases such as Sullivan have “so washed away the foundations of Beauharnais that it [cannot] be considered authoritative.”11 Accordingly, First Amendment doctrine already calls for general statements about groups to receive more protection than statements about particular individuals. Both Facebook’s “Community Standards” and Klonick’s proposed intervention point content moderation decisions regarding takedowns in the same direction as that doctrine. And even before the First Amendment entered the tort law picture at all, many state courts faced with claims of privacy violations or intentional infliction of emotional distress took into account a public interest- and newsworthiness-based “privilege to interfere”12 with the legal interests those torts protect—the very same considerations that social media companies take into account when deciding what to take down or keep up.

The challenge for Klonick’s decision rule, or for Facebook or Twitter in applying it, is that the hard cases are not those in which offending content is either about a specific individual or about a matter that is “generally” newsworthy. The hard cases are those in which the offending content is about a specific individual who is herself newsworthy. In such a case, it is perfectly legitimate to ask—again, as current doctrine does—whether the individual is herself a primary source of or reason for that newsworthiness and, if so, to count that fact as relevant with respect to that individual’s burden of proof if she sues the source of the content for defamation. This is the question the Supreme Court asked with respect to Elmer Gertz, and it is the question courts should ask with respect to Alex from Target or a Sandy Hook parent.13 It is also a relevant question with respect to whether content about those individuals should be taken down by a social media moderator.

So, existing law is well equipped to handle public figure questions, even in the age of online oversharing. But content moderation policies concerning who may speak present an entirely different set of challenges. And the problem with these policies is not too little First Amendment, but rather too much.

For Every One Mrs. McIntyre, a Thousand Trolls

The Supreme Court has forcefully and consistently held that the right to express oneself anonymously is protected by the First Amendment. In the 1995 case McIntyre v. Ohio Elections Commission,14 the Court declared that the right of Margaret McIntyre to express her opposition to a proposed school tax levy without putting her name to that opposition was rooted in a free speech tradition older than the republic. The speaker’s “decision in favor of anonymity may be motivated by fear of economic or official retaliation,” noted the Court, or “by concern about social ostracism . . . or merely by a desire to preserve as much of one’s privacy as possible.”15

Individuals certainly do use Twitter’s ability to speak pseudonymously to express themselves in ways that would cause them harm if the expression were associated with their actual identities.16 But verbal harassment, hate speech, doxxing, death threats, revenge pornography, and the like have all been turbocharged online by that same functionality. And the targets of that kind of expressive conduct are often the equivalent of the Jehovah’s Witnesses in the seminal First Amendment cases of the late 1930s and 1940s, or the socialists in the 1920s: members of politically unpopular, historically subordinated groups.17 It thus seems clear that social media companies have overlearned the lesson of the benefits of anonymous speech, and the lesson has come at a frightening cost.

Social science research bears out the commonsense conclusion that platforms that permit speech from anonymous, fake-name, and sham accounts are less civil than those that don’t. In one study, political scientist Ian Rowe compared reader comments on a Washington Post story made on the site itself, which permitted anonymous speech, to those made in response to the article’s posting on Facebook, which has a real-name policy. The anonymous comments were more uncivil, more disinhibited, and contained more ad hominem attacks against other commenters.18 Anonymity, at least as a First Amendment-informed design principle for communications networks, tends to result in a degraded expressive environment, not an improved one.

Although it might be a marginally more civil place for political discourse than Twitter, Facebook is not free from blame. While the platform requires real names, its identity-verification policies are easy to circumvent. As we now know, this can facilitate not only harassing and offensive speech but also election interference by foreign states,19 the dissemination of false propaganda,20 and, in the case of Myanmar’s Rohingya minority, literal genocide.21

Value judgments about forum quality are certainly relative, and each of us decides for ourselves how much ideals such as civility and trustworthiness are worth. But it bears remembering that the First Amendment is itself a significant impediment to government interventions that aim to improve deliberation and mitigate social harms on social media. Many believe that the content moderation policies of social media platforms, however self-serving or misguided, are themselves constitutionally protected speech. The First Amendment, consequently, is both a cause of the infection and an antibody that fights off several possible cures. No one should pretend that the First Amendment lights the path forward for many of the most significant problems facing online content moderation.

If we want to build a better speech space online, either the Governors or the Governed will have to lead the way. And if the Governors won’t act, it may be time to withdraw our consent to be governed by them.22

7 McDowell v. Paiewonsky, 769 F.2d 942, 949 (3d Cir. 1985); see also, e.g., Schultz v. Reader’s Digest Ass’n, 468 F. Supp. 551, 559 (E.D. Mich. 1979) (concluding there is no such thing as an involuntary public figure, given that the limited public figure category is confined to those who have thrust themselves into the vortex of a controversy); Chafoulias v. Peterson, 668 N.W.2d 642, 653 (Minn. 2003) (“‘The proper question is not whether the plaintiff volunteered for the publicity but whether the plaintiff volunteered for an activity out of which publicity would foreseeably arise.’” (quoting 1 Rodney A. Smolla, Law of Defamation § 2:32 (2d ed. 2002))).

12 David A. Anderson, Torts, Speech, and Contracts, 75 Tex. L. Rev. 1499, 1512 (1997); see also Alan E. Garfield, Promises of Silence: Contract Law and Freedom of Speech, 83 Cornell L. Rev. 261, 320–21 (1998) (“[E]ven before the First Amendment was invoked in private-facts cases, the common law recognized a First Amendment-like defense to the tort: no liability arises if the information disclosed is of legitimate concern to the public.” (internal quotation marks omitted)).

13 Although a court might not ask the same question about Charlottesville victim Heather Heyer based on the longstanding rule that postmortem defamation is not actionable, a social media content moderator could certainly consider these issues when deciding whether to take down the post about Heyer that Klonick describes at the outset of her essay.

17 See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media 24 (2018) (“For years Twitter ha[s] been criticized for allowing a culture of harassment to fester largely unchecked on its service, particularly targeting women, but also the LGBTQ community, racial and ethnic minorities, participants of various subcultures, and public figures.”).