Following the 2016 U.S. presidential election, “fake news”
has dominated popular dialogue and is increasingly perceived as a unique threat
to an informed democracy. Despite its widespread use, the term eludes common definition.1 One frequent refrain is that
fake news—construed as propaganda, misinformation, or conspiracy theories—has
always existed,2 and therefore requires no
new consideration. In some ways this is true: tabloids have long hawked alien
baby photos and Elvis sightings. When we agonize over the fake news phenomenon,
though, we are not talking about these kinds of fabricated stories.

Instead, what we are really
focusing on is why we have been suddenly inundated by false
information—purposefully deployed—that spreads so quickly and persuades so
effectively. This is a different conception of fake news, and it presents a
question about how information operates at scale in the internet era. And yet,
too often we analyze the problem of fake news by focusing on individual
instances,3 not systemic features of the
information economy. We compound the problem by telling ourselves idealistic,
unrealistic stories about how truth emerges from online discussion. This
theoretical incoherence tracks traditional First Amendment theories, but leaves
both users and social media platforms ill-equipped to deal with rapidly
evolving problems like fake news.

This rupture gives us an excellent opportunity to reexamine
whether existing First Amendment theories adequately explain the digital public
sphere. This Essay proceeds in three Parts: Part I briefly outlines how social
media platforms have relied piecemeal on three discrete theories justifying the
First Amendment—the marketplace of ideas, autonomy and liberty, and
collectivist views—and why that reliance leaves platforms ill-equipped to
tackle a problem like fake news. Part II then takes a descriptive look at
several features that better describe the system of speech online, and how the
manipulation of each feature affects the problem of misinformation. Finally,
Part III concludes with the recommendation that we must build a realistic
theory—based on observations as well as interdisciplinary insights—to explain
the governance of the private companies that maintain our public sphere in the
internet era.

I. Moving Beyond the Marketplace

As a doctrinal matter, the First Amendment restricts
government censorship, but as a social matter, it signifies even more.4 As colloquially invoked, the
“First Amendment” channels a set of commonly held values that are foundational
to our social practices around free speech. When, for example, individuals
incorrectly identify criticism as “violating First Amendment rights,” they actually
seek to articulate a set of values crucial to the public sphere, including the
ability to express and share views in society.5
The First Amendment shapes how we imagine desirable and undesirable speech. So
conceived, it becomes clear that our courts are not the only place where the
First Amendment comes to life.

One implication of this understanding is that First Amendment
theory casts a long shadow, which even private communications platforms6—like Facebook, Twitter, and
YouTube—cannot escape. Internet law scholar Kate Klonick deftly illustrates how
these three private platforms should be understood as self-regulating private
entities, governing speech through content moderation policies:

A common theme exists in all three of these
platforms’ histories: American lawyers trained and acculturated in First
Amendment law oversaw the development of company content moderation policy.
Though they might not have “directly imported First Amendment doctrine,” the normative
background in free speech had a direct impact on how they structured their
policies.7

But First Amendment thinking comes in several
flavors. Which of these visions of the First Amendment have platforms embraced?

A. Existing First
Amendment Theories

Three First Amendment theories predominate: the marketplace
of ideas, autonomy, and collectivist theories. However, as this Section
demonstrates, none of these fully captures online speech.

One option is the talismanic “marketplace of ideas.”
Recognized as the “theory of our Constitution,” the
marketplace metaphor imagines that robust engagement with a panoply of ideas
yields the discovery of truth—eventually.8 “More speech” should be the corrective to bad
speech like falsehoods.9
This vision predictably tilts away from regulation, on the logic that intervention
would harm the marketplace’s natural and dynamic progression.10 That progression involves
ideas “competing” in the marketplace, a conception with two fundamental
shortcomings, each relevant in an era of too much available information: What
happens when individuals do not interact with contrary ideas because they are
easy to avoid? And what happens when ideas are not heard at all because there
are too many?

The marketplace also does not neatly address questions of
power, newly relevant in the internet era. The marketplace metaphor sprang
forth at a time when the power to reach the general population through “more
speech” was confined to a fairly homogenous, powerful few. Individuals may have
had their own fiefdoms of information—a pulpit, a pamphlet—but communicating to
the masses was unattainable to most. Accordingly, the marketplace never needed
to address power differentials when only the powerful had the technology to
speak at scale. The internet, and particularly social media platforms, have radically
improved the capabilities of many to speak, but the marketplace theory has not
adjusted. For example, how might the marketplace theory address powerful
speakers who drown out other voices, like Saudi Arabian “cyber troops” who
flood Twitter posts critical of the regime with unrelated content and hashtags
to obscure the offending post?11 As adopted by the platforms, the
marketplace theory offers no answer.
Put differently, the marketplace-as-platform theory only erects a building;
there are no rules for how to behave once inside. This theory yields little
helpful insight for a problem like fake news or other undesirable speech.

A second, related vision
explains First Amendment values through the lens of individual liberty.12 What
counts here is only the “fundamental rule” that “a speaker has the autonomy to
choose the content of his own message” because speech is a necessary exercise
of personal agency.13 All that matters is that one can express herself.
Naturally, this theory also creates a strong presumption against centralized
interference with speech.14 While certainly enticing—and conveniently
neutral for social media platforms interested in building a large user
base—this theory is piecemeal. Focusing only
on the self-expressive rights of the singular speaker offers no consideration
of whether that speech is actually heard. It posits no process through which
truth emerges from cacophony. In fact, it is not clear that fake news, as an
articulation of one’s self-expression, would even register as a problem under
this theory.

Third, and far less fashionable, is the idea that the First
Amendment exists to promote a system of political engagement.15 This “collectivist,” or
republican, vision of the First Amendment considers more fully the rights of
citizens to receive information as
well as the rights of speakers to express themselves. Practically and
historically, this has meant a focus on improving democratic deliberation: for
example, requiring that broadcasters present controversial issues of public
importance in a balanced way, or targeting media oligopolies that could bias
the populace. This theory devotes proactive attention to the full system of
speech.16

The republican theory, which accounts for both listeners and
speakers, offers an appealingly complete approach. The decreased costs of creating, sharing, and broadcasting information online mean that everyone can be both a listener and a speaker, often simultaneously, and so a system-oriented
focus seems appropriate. But the collectivist vision, like the marketplace and
autonomy approaches, is still cramped in its own way. The internet—replete with
scatological jokes and Prince cover songs—involves much more than political
deliberation.17 And so any theory of speech
that focuses only on political outcomes will fail because it cannot fully
capture what actually happens on the
internet.

B. Which First
Amendment Vision Best Explains Online Speech?

Online speech platforms—bound neither by doctrine nor by any underlying theory—have, in practice, fused all three of these visions together.

At their inception, many platforms echoed a libertarian,
content-neutral ethos in keeping with the marketplace and autonomy theories.
For example, Twitter long ago declared itself to be the “free speech wing of
the free speech party,” shying away from policing user content except in
limited and extreme circumstances.18 Reddit similarly positions
itself as a “free speech site with very few exceptions,” allowing communities
to determine their own approaches to offensive content.19 Mark Zuckerberg’s argument that
“Facebook is in the business of letting people share stuff they are interested
in” presents an autonomy argument if ever there were one.20 In the wake of the 2015
Charlie Hebdo attacks in Paris, Zuckerberg specifically vowed not to bow to
demands to censor Facebook,21 and made a similar commitment in 2016, when he assured American conservative leaders that Facebook was “a platform for all ideas.”22 Taken at face value, these positions leave platforms little recourse in response to undesirable speech like hate speech or fake news.

Platforms have also, however, long invoked the language of
engagement, albeit not political engagement. Platforms have long governed speech
through reference to their community or through user guidelines that prohibit
certain undesirable, but not illegal, behavior.23 For example, Reddit, which otherwise claims a laissez-faire approach, collaborates on moderation with a number of specific communities—including r/The_Donald, a subcommunity that
vehemently and virulently supports the forty-fifth President of the United
States.24 YouTube prohibits the
posting of pornography; at Facebook, community standards ban the posting of
content that promotes self-injury or suicide.25 None of these content policies stems from altruism. If users dislike the culture of a platform, they
will leave and the platform will lose. For exactly that reason, platforms have
taken measures—of varying efficacy—to police spam and harassment,26 and in doing so to build a
culture most amenable to mass engagement.

Ultimately, ambiguity serves neither the platforms nor their
users. To users, hazy platform philosophy obscures any meaningful understanding
of how platforms decide what is acceptable. Many wondered, in the wake of a
recent leak, how Facebook’s elaborate internal content moderation rules could justify deleting hate speech against white men while allowing hate speech against black children to remain online.27
To platforms, philosophical indeterminacy over speech theories means there are
few guiding stars to help navigate high-profile and rapidly evolving problems
like fake news.

Moreover, even taking existing First Amendment theories
separately, the fake news phenomenon illustrates how each theory fails to
account for conspicuous phenomena that affect online speech. The marketplace
theory, for example, fails to account for how a flood of easily accessible speech from many actors might undermine the central presumption that competition among ideas yields truth; the autonomy theory ignores that individuals are both speakers and listeners online; and the republican
theory, in focusing only on political exchanges, casts aside much of the
internet. All fail to account for how speech flows at a global and systemic
scale, possibly because such an exercise would have been arduous if not impossible
before social media platforms turned ephemeral words into indexed data.

These previously ephemeral interactions are now accessible at a level of granularity that can enable new theories about how speech works at a global, systemic level. What insights might emerge if we focused on
system-level operation, looking at the system from a descriptive standpoint? In
the next Section, I will identify several systemic features of online speech,
with a particular focus on how they are manipulated to produce fake news.

II. What Does the System Tell Us About Fake News?

As the notable First Amendment and internet scholar Jack
Balkin cautioned in 2004, “in studying the Internet, to ask ‘What is genuinely
new here?’ is to ask the wrong question.”28 What matters instead is how
digital technologies change the social conditions in which people speak, and
whether that makes more salient what has already been present to some degree.29 By focusing on what online
platforms make uniquely relevant, we can discern social conditions that
influence online speech, both desirable and not.

Below, I offer five newly conspicuous features that shape the
ecosystem of speech online. The manipulation of each feature exacerbates the fake news problem, but, importantly, none of these features is visible—or addressable—under the marketplace, autonomy, or collectivist views of the First Amendment.

A. Filters

An obvious feature of online speech is that there is far too
much of it to consume. Letting a thousand flowers bloom has an unexpected
consequence: the necessity of information filters.30

The networked, searchable nature of the internet yields two
interrelated types of filters. The first is what one might call a “manual
filter,” or an explicit filter, like search terms or Twitter hashtags. These
can prompt misinformation: for example, if one searches “Obama birthplace,” one
will receive very different results than if one searches “Obama birth
certificate fake.” Manual filters can also include humans who curate what is
accessible on social media, like content moderators.31

Less visible are implicit filters—for example, algorithms that automatically track your behavior or that adapt based on how you manually filter. Such filters explain how platforms decide what content to serve an
individual user, with an eye towards maximizing that user’s attention to the
platform. Ev Williams, co-founder of Twitter, describes this process as
follows: if you glance at a car crash, the internet interprets your glance as a desire for car crashes and accordingly attempts to supply more car crashes to you in the future.32 Engaging with a fake
article about Hillary Clinton’s health, for example, will supply more such
content to your feed through the algorithmic filter.
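To make that feedback loop concrete, consider a minimal sketch of an engagement-driven implicit filter. Everything here—the ImplicitFilter class, the topic labels, the scoring rule—is invented for illustration and does not describe any platform’s actual code.

```python
from collections import defaultdict

class ImplicitFilter:
    """Toy engagement-based recommender, invented for illustration."""

    def __init__(self):
        # Learned affinity between one user and each topic of content.
        self.affinity = defaultdict(float)

    def record_engagement(self, topic, weight=1.0):
        # A click, share, or lingering view all register as "interest."
        self.affinity[topic] += weight

    def rank_feed(self, candidates):
        # Order candidate (topic, headline) pairs by learned affinity,
        # optimizing for predicted attention rather than for accuracy.
        ranked = sorted(candidates, key=lambda c: self.affinity[c[0]], reverse=True)
        return [headline for _, headline in ranked]

feed = ImplicitFilter()
feed.record_engagement("candidate-health-rumor")  # one glance at the "car crash"
candidates = [
    ("city-council", "City council passes annual budget"),
    ("candidate-health-rumor", "SHOCKING new claims about candidate's health"),
]
print(feed.rank_feed(candidates))  # the rumor now outranks the real news
```

The sketch captures only the loop that matters for fake news: a single engagement raises a topic’s score, and the ranking step then surfaces more of that topic, indifferent to whether it is true.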

That suggested content, sourced through the implicit filter,
might also become more extreme. Clicking on the Facebook page designated for
the Republican National Convention, as BuzzFeed reporter Ryan Broderick learned,
led the “Suggested Pages” feature to recommend white power memes, a Vladimir
Putin fan page, and an article from a neo-Nazi website.33 It is this algorithmic
pulling to the poles, rooted in a benign effort to keep users engaged, that unearths
fake news otherwise relegated to the fringe.

B. Communities

Information filters, like the ones described above, have
always existed in some form. We have always needed help in making sense of vast
amounts of information. Before there were algorithms or hashtags, there were
communities: office break rooms, schools, religious institutions, and media
organizations are all types of community filters. What the internet has changed, however, is that digital communities can easily transcend the barriers of physical geography. The internet is organized in part by communities of interest, and
information can thus be consumed within and
produced by communities of distant but like-minded members. Both sides of this
coin matter, especially for fake news.

Those focused on information consumption have long observed
that filters can feed insular “echo chambers,” further reinforced by
algorithmic filtering.34
Even if you are the only person you personally know who believes that President
Barack Obama was secretly Kenyan-born, you can easily find like-minded friends
online.

Notably, individuals also easily produce information, shared in online communities built around
affinity, political ideology, hobbies, and more. At its best, this capability
helps to remedy the historic shortcomings of traditional media: as danah boyd
points out, traditional media outlets often did not cover stories like the protests in Ferguson, Missouri, in 2014, the Dakota Access Pipeline protests, or the disappearances of young black women until far too late.35 At its worst, the
capability to produce one’s own news can cultivate a distrust of vaccines or
nurture rumors about a president’s true birthplace. Through developing their
own narratives, these communities create their own methods to produce, arrange,
discount, or ignore new facts.36
So, even though a television anchor might present you with a visual of Obama’s
American birth certificate, your online community—composed of members you
trust—can present to you alternative and potentially more persuasive
perspectives on that certificate.37

Taken together, these features create a bottom-up dynamic for developing trust, rather than vesting trust in top-down, traditional institutions.38 In turn, that allows
communities to make their own cloistered and potentially questionable decisions
about how to determine truth—an ideal environment to normalize and reinforce
false beliefs.

C. Amplification

The amplification principle explains how misinformation cycles through filters and permeates communities, propelled by the cheap, ubiquitous, and anonymous infrastructure of the internet. Amplification
happens in two stages: first, when fringe ideas percolate in remote corners of
the internet, and second, when those ideas seep into mainstream media.

Take, for example, the story of Seth Rich, a Democratic National Committee staffer tragically murdered in what Washington, D.C. police maintain was a botched robbery. With little fanfare, WikiLeaks alluded to a connection between his death and his possibly having leaked to the organization.39 Weeks later, however, a local
television affiliate in D.C. reported that a private investigator was looking
into whether the murder was related to Rich allegedly providing email hacks of
the Democratic National Committee to WikiLeaks.40 Message boards on 4chan,
8chan, and Reddit grasped at these straws, launching their own vigilante
investigations and further inquiries.41 This is the first stage of
amplification.

The second stage begins when those with a louder bullhorn
observe the sheer volume of discussion, and the topic—true or not—becomes
newsworthy in its own right. In the case of Rich, this happened when a number
of prominent and well-networked individuals on Twitter circulated the
conspiracy to their hundreds of thousands of followers using the hashtag
#SethRich. That drew the attention of Fox News and its pundits, whose followers number in the millions, and in turn Breitbart
and Drudge Report, which seed
hundreds of blogs and outlets.42

The amplification dynamic matters for fake news in two ways.
First, it reveals how online information filters are particularly prone to
manipulation—for example, by getting a hashtag to trend on Twitter, or by
seeding posts on message boards—through engineering the perception that a
particular story is worth amplifying. Second, the two-tier amplification
dynamic uniquely fuels perceptions of what is true and what is false.
Psychologists tell us that listeners perceive information not only logically,
but through a number of “peripheral cues” which signal whether information
should be trusted. Cues can include whether the speaker is reliable (why trust
in the source of information matters),43 a listener’s prior beliefs
(why one’s chosen communities matter),44 and, most notably, the
familiarity of a given proposition (why one’s information sources matter).45 The latter point is crucial
here: individuals are more likely to view repeated statements as true.
(Advertising subsists on this premise: of course you will purchase the
detergent you have seen before.)

Imagine, then, how many times a listener might absorb tidbits
of the Seth Rich story: on talk radio on the way to work, through water cooler
chat with a Reddit-obsessed co-worker, scrolling through Facebook, a scan of
one’s blogs, a group text, pundit shows promoting the conspiracy, or on the
local television’s evening news debunking it. Appearing on that many platforms will, psychological research informs us, command attention and
persuade. Even when something is as demonstrably bankrupt as the Seth Rich
conspiracy, the false headline will be rated as more accurate than unfamiliar
but truthful news.46

D. Speed

The staggering pace of sharing, and how it influences
amplification, is particularly critical for understanding the spread of fake
news.

Platforms are designed for fast, frictionless sharing. This
function accelerates the amplification cycle explained above, but also targets
it for maximum persuasion at each step. For example—before it was effectively
obliterated from the internet47—a
popular neo-Nazi blog called The Daily
Stormer hosted a weekly “Memetic Monday,”48 where users posted dozens of image
macros—the basis of memes—to be shared on Facebook, Twitter, Reddit, and other
platforms.49 Witty and eye-catching, if
frequently appalling, macros like these allow for rapid experimentation with talking points and the planting of ideas. Such efforts were responsible for spreading
misinformation about French President Emmanuel Macron before the 2017 election.50 This experimental factory
is called “shitposting,”51
and the fast, frictionless sharing across platforms is the machinery that helps
the factory distribute at scale. Before social media platforms, this type of
experimentation would have been phenomenally slow, or required
resource-intensive focus groups.

Memes are a convenient way to package this information for
distribution: they are easily digestible, nuance-free, scroll-friendly, and
replete with community-reinforcing inside jokes. Automated accounts known as “bots”—whether directed by governments52 or by people like the “Trumpbot overlord” named “Microchip”—are also often credited with circulating
misinformation, because of how well they can trick algorithmic filters by
exaggerating a story’s importance.53

Bots, however, are not the only ones to blame for rapid
distribution. Almost sixty percent of readers
share links on social media without even reading the underlying content.54 Sharing on platforms is not only an exercise in communicating rational thought, but also a way of signaling ideological and emotional affinity.

This explains, in part, why
responses debunking fake news do not travel as quickly. For example, if one clicks on a story because one is already ideologically inclined to believe it, one has less interest in the debunking—which likely means that the
debunking would not even surface on one’s feed in the first instance. It also
explains why certain false ideas are so persistent: they are designed, in an
effective and real-time laboratory, to be precisely that way.

E. Profit Incentives

Social media platforms make fake news uniquely lucrative.
Advertising exchanges compensate publishers on the basis of clicks, whatever the article, creating an incentive to generate as much content as possible with as little effort as possible. Fake news, sensational and wholly fabricated, fits these straightforward economic incentives. This yields everything from Macedonian
teenagers concocting stories about the American election55 to make-your-own-fake-news generators used to falsely claim that specific Indian restaurants in London had been caught selling human meat.56 These types of websites,
particularly those that are hyperpartisan and thus primed to attract attention,
have exploded in popularity: a BuzzFeed News study found that over one hundred
new pro-Trump digital outlets were created in 2016.57

There are two noteworthy elements to this uptick. First, the
mechanics of advertising on these platforms facilitate the distribution of
fake content: there is no need for a printing press, delivery trucks, or access
to airtime. Cheap distribution means more money, only strengthening the
incentive. Second, platforms render the appearance of advertisements and actual
news almost identical.58
This further muddies the water between what is financially motivated and what
is not.
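A back-of-the-envelope comparison makes the asymmetry plain. The figures below are invented for this sketch—real advertising rates and editorial costs vary widely—but the structure of the incentive is the point: because exchanges pay per click regardless of accuracy, the fabricated story wins on margin.

```python
# Hypothetical click-funded publishing economics; all numbers assumed.
views = 100_000              # assumed viral reach for a single story
rpm = 2.00                   # assumed ad revenue per 1,000 views (USD)
cost_fabricated = 0          # no reporting, sourcing, or fact-checking
cost_reported = 500          # assumed cost of an actual reported story

revenue = views / 1_000 * rpm
print(f"fabricated: ${revenue - cost_fabricated:.2f}")  # fabricated: $200.00
print(f"reported:   ${revenue - cost_reported:.2f}")    # reported:   $-300.00
```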

III. Toward a More Robust Theory

Thinking in terms of the full system of speech—that is,
considering filters, communities, amplification, speed, and profit
incentives—gives us a far more detailed portrait of how misinformation
flourishes online. It also provides a blueprint for what platforms are doing to
curb fake news, all of which would make little sense under the more traditional
theories described in Part I.

For example, platforms have exercised their ubiquitous
filtering capabilities to target fake news. Google recently retooled its search
engine to try to prevent conspiracy and hoax sites from appearing in its top
results,59 while YouTube decided that flagged videos containing controversial religious or supremacist content will be placed in a limited state in which they cannot be suggested or recommended to other users, monetized, or given comments or likes.60 And Facebook has partnered
with fact-checkers to flag conspiracies, hoaxes, and fake news; flagged
articles are less likely to surface on users’ news feeds.61 These tweaks, at least
conceptually, should influence the algorithmic filters that yield information.

Similarly, Facebook has overtly recognized that speed and
amplification can contribute to misinformation. It now deprioritizes links that
are aggressively shared by suspected spammers, on the theory that these links
“tend to include low quality content such as clickbait, sensationalism, and
misinformation.”62 Facebook is also launching
features that push users to think twice before sharing a story, by juxtaposing
their link with other selected “Related Articles.”63 Twitter specifically
targets bots, looking for those that may game its system to artificially raise
the profile of misinformation and conspiracies.64

Recognizing the profit element, Google and Facebook have both
barred fake news websites from using their respective advertising programs.65 Facebook has also eliminated the ability to spoof the domains of real publications in order to profit from those who click through to the underlying sites, which are replete with ads.66 This may speak to
profit-oriented fake news, but not to propaganda and misinformation that is
fueled by nonfinancial incentives.67

These systemic features can also help us interrogate concepts
whose definitions have long been assumed. Take, for example, the concept of
censorship. Traditionally, and in the speaker-focused marketplace and autonomy
theories, censorship evokes something very specific: blocking the articulation
of speech. As prominent sociologist Zeynep Tufekci argues, however, censorship now
operates via information glut—that is, drowning out speech instead of stopping
it at the outset.68
As with the Saudi Arabian example referenced above, Tufekci points to the army
of internet trolls deployed by the Chinese and Russian governments to distract
from critical stories and to wear down dissenters through the manipulation of
platforms.69 If platforms are the
epicenters of this new censorship, misinformation is the method: the point of
censorship by disinformation is to destroy attention as a key resource.70 What results, Tufekci explains,
is a “frayed, incoherent, and polarized public sphere that can be hostile to
dissent.”71 This all becomes visible
when information filters are taken into account.

It would be easy to conclude that platforms—best positioned
to address the aforementioned features—should alone shoulder the burden to
prevent fake news. But asking private platforms to exercise unilateral,
unchecked control to censor is precarious.72 Few factors would constrain
possible abuses. For example, Jonathan Zittrain raises the possibility of
Facebook manipulating its end-users by using political affiliation to alter
voting outcomes—something that could be impervious to liability as protected
political speech.73
No meaningful accountability mechanism exists for these platforms aside from
public outcry, which relies on intermediaries to divine what platforms are
actually doing. And yet, the other extreme—a content-neutral and hands-off
approach—offers empty guidance in the face of organized fake news or other
forms of manipulation.

Instead, we must collectively build a theory that accounts
for these shifting sands, one that provides workable ideals rooted in reality.
Scaffolding for that theory can be found in what Balkin has termed the
“democratic culture” theory, which seeks to ensure that each individual can
meaningfully participate in the production and distribution of culture.74 A focus on culture, not
politics, does more than remedy the central gap of the collectivist view while
maintaining its system-wide focus. It also helps us expand our focus beyond
legal theory to relevant disciplines like social psychology, sociology,
anthropology, and cognitive science. For example, once we understand
amplification as a relevant concept, we should account for the psychology of
how people actually come to believe what is true—not only through rational
deliberation, but also by using familiarity and in-group dynamics as a proxy
for truth. Building on this frame will require more meaningful information from
the platforms themselves.

A clear theory is more important now than ever. For one, a functioning theory can bridge the widening gap between what a platform permits and what the public expects. Practically, an overarching
theory can also help navigate evolving social norms. Platforms make policy
decisions based on contemporary norms: for example, until recently choosing to target and take down accounts linked to foreign terrorists but not those linked
to white nationalists and neo-Nazis, even though both types of organizations
perpetuate fake news domestically.75 We have to understand how
definitions of tricky and dynamic concepts, like fake news, are created,
culturally contingent, and capable of evolution. Finally, and crucially, we
need a theory to help direct and hold accountable the automated systems that
increasingly govern speech online. These systems will embed cultural norms into
their design, and enforce them through implicit filters we cannot see. Only
with a cohesive theory can we begin to resolve the central conundrum
confronting social media platforms: they are private companies that have built
vast systems sustaining the global, networked public square, which is the root
of both their extraordinary value and their damnation.

Nabiha
Syed is an assistant general counsel at BuzzFeed, a visiting fellow at Yale Law
School, and a non-resident fellow at Stanford Law School. All of my gratitude
goes to Kate Klonick, Sabeel Rahman, Sushila Rao, Noorain Khan, Azmat Khan,
Emily Graff, Alex Georgieff, Sara Yasin, Smitha Khorana, and the staff of the Yale
Law Journal Forum for their incisive
comments and endless patience, and to Nana Menya Ayensu, for everything always,
but especially the coffee.


4. See generally Jack M. Balkin, The First Amendment Is an Information Policy, 41 Hofstra L. Rev. 1 (2012) (analyzing the connection between the First Amendment as a governmental constraint and as posing requirements on an infrastructure of free expression).

5. Cf. Jack M. Balkin, Commentary, Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society, 79 N.Y.U. L. Rev. 1, 26-28 (2004) (arguing that “[f]reedom of speech is becoming a generalized right against economic regulation of the information industries”).

6. Since they do not implicate government action, private communications platforms like Facebook, Twitter, Reddit, and YouTube are not as clearly bound by First Amendment doctrine as their predecessors might have been. To the contrary, these platforms enjoy broad immunity from liability based on the user-generated messages, photographs, and videos that populate their pages: Section 230 of the Communications Decency Act has long given them wide berth to construct their platforms as they please. 47 U.S.C. § 230 (2012). The purpose of this grant of immunity was both to encourage platforms to be “Good Samaritans” that take an active role in removing offensive content and to avoid the free speech problems of collateral censorship. See Zeran v. Am. Online, Inc., 129 F.3d 327, 330-31 (4th Cir. 1997) (discussing the purposes of intermediary immunity in Section 230 as not only to incentivize platforms to remove indecent content, but also to protect the free speech of platform users).

8. “[W]hen men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market . . . .” Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting); see also United States v. Alvarez, 567 U.S. 709, 728 (2012) (plurality opinion) (describing Justice Holmes’ quotation from Abrams v. United States as “the theory of our Constitution,” and concluding that our “[s]ociety has the right and civic duty to engage in open, dynamic, rational discourse”); Citizens Against Rent Control v. City of Berkeley, 454 U.S. 290, 295 (1981) (“The Court has long viewed the First Amendment as protecting a marketplace for the clash of different views and conflicting ideas. That concept has been stated and restated almost since the Constitution was drafted.”); Red Lion Broad. Co. v. FCC, 395 U.S. 367, 390 (1969) (“It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail . . . .”).

9. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“[T]he remedy to be applied is more speech, not enforced silence. Only an emergency can justify repression.”).

10. See Davis v. FEC, 554 U.S. 724, 755-56 (2008) (Stevens, J., concurring in part and dissenting in part) (“It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail.” (quoting Red Lion, 395 U.S. at 390)).

12. See, e.g., C. Edwin Baker, Autonomy and Free Speech, 27 Const. Comment. 251, 259 (2011) (asserting that the “most appealing” theory of the First Amendment regards “the constitutional status of free speech as required respect for a person’s autonomy in her speech choices”). But see Owen M. Fiss, Why the State?, 100 Harv. L. Rev. 781, 785 (1987) (arguing that the First Amendment protects autonomy as a means of encouraging public debate, rather than as an end in itself).

13. Hurley v. Irish-American Gay, Lesbian, and Bisexual Group, 515 U.S. 557, 573 (1995). In contrast, the First Amendment provides minimal protection for the autonomy interests of a speaker who is engaged in commercial speech. Such speakers do not engage in a form of self-expression when they provide the public with information about their products and services. See C. Edwin Baker, Scope of the First Amendment Freedom of Speech, 25 UCLA L. Rev. 964, 996 (1978); Martin H. Redish, The Value of Free Speech, 130 U. Pa. L. Rev. 591, 593 (1982); David A. J. Richards, Free Speech and Obscenity Law: Toward a Moral Theory of the First Amendment, 123 U. Pa. L. Rev. 45, 62 (1974).

14. See Fiss, supra note 12, at 785.

15. See, e.g., Robert Post, Reconciling Theory and Doctrine in First Amendment Jurisprudence, 88 Calif. L. Rev. 2353, 2362 (2000) (stating that “[t]he democratic theory of the First Amendment . . . protects speech insofar as it is required by the practice of self-government”).

17. Balkin, supra note 5, at 34 (“The populist nature of freedom of speech, its creativity, its interactivity, its importance for community and self-formation, all suggest that a theory of freedom of speech centered around government and democratic deliberation about public issues is far too limited.”).

30. While there has always been competition for attention, in some fashion, the lowered cost of distribution and the abundance of information to be distributed alter the stakes of the competition. See J.M. Balkin, Media Filters, the V-Chip, and the Foundations of Broadcast Regulation, 45 Duke L.J. 1131, 1132 (1996) (“In the Information Age, the informational filter, not information itself, is king.”).

43. The classic study is Robert K. Merton et al., Mass Persuasion: The Social Psychology of a War Bond Drive (1946); see also Richard E. Petty & John T. Cacioppo, Attitudes and Persuasion: Classic and Contemporary Approaches (1996) (identifying major approaches to attitude and belief change).

44. For important analysis of the factors that influence which ideas are accepted and which are not, see generally Chip Heath & Dan Heath, Made to Stick: Why Some Ideas Survive and Others Die (2008) (discussing how recipient understanding and memory of ideas are improved when such ideas are conveyed according to six factors); and Dan M. Kahan & Donald Braman, Cultural Cognition and Public Policy, 24 Yale L. & Pol’y Rev. 149, 149-60 (2006) (arguing that multiple cultural factors strongly influence one’s acceptance of ideas); see also Jared Wadley, New Study Analyzes Why People Are Resistant to Correcting Misinformation, Offers Solutions, University of Michigan—Michigan News (Sept. 20, 2012), http://home.isr.umich.edu/sampler/new-study-analyzes-resistance-to-correcting-misinformation [http://perma.cc/54LG-7XKF] (examining the factors that allow the perpetuation of misinformation).

45. The dominant account of this “illusory truth effect” is that familiarity increases the ease with which statements are processed (i.e., processing fluency), which in turn is used heuristically to infer accuracy. Lynn Hasher et al., Frequency and the Conference of Referential Validity, 16 J. Verbal Learning & Verbal Behav. 107 (1977); see also Ian Maynard Begg et al., Dissociation of Processes in Belief: Source Recollection, Statement Familiarity, and the Illusion of Truth, 121 J. Exp. Psychol.: Gen. 446 (1992) (reporting on experiments that concern the effect of repetition on the perceived truth of a statement).

58. Craig Silverman et al., In Spite of the Crackdown, Fake News Publishers Are Still Earning Money from Major Ad Networks, BuzzFeed News (Apr. 4, 2017, 9:05 AM), http://www.buzzfeed.com/craigsilverman/fake-news-real-ads [http://perma.cc/6G38-885M] (noting that “content-recommendation ad units, which provide ads made to look like real news headlines, were by far the most common ad format on the sites reviewed”).

74. Balkin, supra note 5, at 4; see also id. at 33 (“A ‘democratic’ culture, then, means much more than democracy as a form of self-governance. It means democracy as a form of social life in which unjust barriers of rank and privilege are dissolved, and in which ordinary people gain a greater say over the institutions and practices that shape them and their futures.”).