Against Cyber-utopians

In my last article, I presented the primary argument for Internet regulation that Cass Sunstein makes in his book Republic.com: In no public domain can absolute freedom of speech be rationally or constitutionally defended, nor, contrary to popular belief, has it ever been. With the ice broken, I want to explore the dangerous potential of the Internet and the primary reasons why total Internet freedom is problematic: namely, extremism and defamation.

Independent researcher Evgeny Morozov raises many of these questions in his work, noting the benefits and costs of a system of user-generated content. Wikipedia, for example, has generated more reliable information than could ever have been possible without user contributions. But by the same token, extremist bloggers and conspiracy theorists have generated almost as much misinformation. Morozov notes that the current shift to social search, in which user-generated content from social networks appears in search results, will only allow further dissemination of misinformation.

To show just how volatile user-generated content is, take the June 2011 example of Sarah Palin describing Paul Revere as ringing bells at the Old North Church to alert the British that America was declaring its independence. That's not quite how it actually went. Within minutes of network TV picking up on her error, scores of devoted Palinites independently edited the Wikipedia page for Paul Revere and altered it so that Palin was right, or at least less wrong [1]. In this way, people from both sides of the political spectrum regularly contribute falsehoods to politically motivated websites and corrupt sites of user-generated content such as Wikipedia.

In his book The Net Delusion, Morozov dispels the myth of "cyber-utopianism" that has long dominated in the West, and notably in the Obama administration: the belief that the Internet is an unquestionably liberating tool for the non-democratic world, one that favors the global oppressed over their oppressors. This view was championed by former hippies caught up in the "starry-eyed digital fervor of the 1990s" who thought the Internet could deliver what the sixties couldn't in bolstering democratic participation (xiii). What they got wrong, Morozov says, was to assume that "if it works in Seattle, it must also work in Shanghai," leading U.S. leaders to unilaterally condemn Chinese web censorship as oppressive.

For Morozov, the Internet is not that simple. As much as it is useful in uniting oppressed peoples, it can also be used to strengthen what he calls the trifecta of authoritarianism: surveillance, censorship, and propaganda. In addition to biased media outlets online, recent cases have shown that a social media footprint can implicate one more easily than a secret police search. Even in the Arab Spring, in which social media was said to have had a remarkable effect, Morozov finds what he calls "slacktivism." Liking a Facebook page isn't real political activism, but it makes you feel active just the same.

In the American Internet sphere, the intuitive, utopian idea that open knowledge would result in the dissemination of truth simply proved false. Morozov asks readers to see for themselves: just Google issues like global warming, 9/11, or vaccines and autism. The results are shocking. An overwhelming majority of the search results on these issues are entirely non-factual, non-scientific, and politically biased: exactly the backward ignorance that "cyber-utopians" believe the Internet has the potential to overcome. In an article in Slate, Morozov acknowledges the implausibility of Sunstein's idea of legal mandates (which Sunstein himself later reconsidered in Republic.com 2.0) but urges search engines to take responsibility for misinforming users [2].

Just as Google displays a warning banner with information on suicide hotlines when one searches for information on how to commit suicide, the search engine could flag content riddled with conspiracy theories as contested. Would this really lead to the censoring "Ministry of Truth" I discussed in my last article? I doubt it. Advising caution is distinct from censorship in that it encourages rather than discourages critical thinking about the arguments presented. The goal is to enrich the discourse of opinions in a constructive way, not to stifle it by censoring material in accordance with a liberal political agenda.

In the edited volume The Offensive Internet: Privacy, Speech, Reputation, Saul Levmore and Martha C. Nussbaum, professors at the University of Chicago Law School, consider the Internet's capacity for harm that the "cyber-utopian" camp so often ignores. As they examine the legal issues at play in cases of Internet defamation, there emerges a barrage of arguments for the regulatory defense of identity against the offensive, malicious few who threaten the Internet. They quickly disturb the utopian view of the Internet as a stable forum of democratic progress. As Cass Sunstein notes in his contribution, "Believing False Rumors," the libertarian ideal that truth will triumph in the marketplace of ideas is demonstrably incorrect.

Here's the situation: Years ago, several female law students at Chicago Law were photoshopped into pornographic content posted on a site purportedly about law school admissions. The site (created by the women's own classmates) identified these women by name, such that this content came up first when one Google-searched the students. When it came time for hiring after graduation, the women didn't get jobs, presumably because employers were suspicious of the web content. The women were defamed, humiliated, and had their professional reputations permanently tainted, through no fault of their own. What should we do about this? Whom should we blame? This is the problem of Internet anonymity, and nobody is immune to it.

Though each victim has ownership over her own personal and professional life, such defamation is protected under current law, so the victim is helpless and the defamer enjoys relative impunity. Yet copyright-infringing material is exempt from this protection and must be taken down by law. We thus have a clear legal precedent for Internet regulation that companies like Google currently comply with. Taking the next step constitutes safety, not censorship. Levmore explicitly calls for the repeal of Section 230 of the 1996 Communications Decency Act, which shields Internet providers from liability for content their users post and thereby entrenched online anonymity. If the law were repealed, Levmore envisions a world in which Internet providers are non-anonymous and held responsible for libel just like print media providers. Levmore's goal is "respectability" on the Internet, something that can't really happen in a system of anonymity that gives license to "low-level speech." The editors repeatedly note the common misconception that the First Amendment is absolute and permits all forms of speech. In reality, it was never intended to provide total license. For example:

Regulation of speech is uncontroversially constitutional with respect to threats, bribery, defamatory statements, fighting words, fraud, copyright, plagiarism, and more… The First Amendment, properly understood, does not protect these forms of speech (6).

The Offensive Internet suggests additional reforms such as a "reputation reporting act," under which a person defamed online would have the right to know whether the lies affected their search for employment, just as current laws provide for credit reports and criminal records. The problem is that one's online reputation is one's primary reputation in this age, yet that reputation can still be entirely out of one's own control. Given its anonymous nature, the Internet will probably always be an offensive space. However, the least we can do is regulate content in economically constructive ways, maximizing social benefits and minimizing harmful costs, as we have in all other public domains.

Jon Catlin

Jon Catlin is a first-year at the University of Chicago studying great books and the humanities. He’s primarily interested in philosophy as it relates to happiness, Holocaust studies, religion, human rights, and other ethical questions. Jon spends his time exploring libraries, teaching young people philosophy, and taking long jogs on the Chicago lakeshore.

Attribution

Thank God for internet anonymity; otherwise I might not be comfortable speaking my mind here. Preface: my intention is to be critical, not offensive.

You're arguing that search engines ought to be flagging content that is "controversial" — but by whose standards? Since conspiracy theories seem to be what is bothering you, do we let a denial from the government suffice? There was plenty of deliberate government misinformation about the Gulf of Tonkin incident; what ended up being revealed as truth was for years denounced as conspiracy! I'm not saying the same will happen with 9/11 or the Holocaust or anything like that, but when you take power away from the reader and give it to another, not impartial (and who could ever be impartial?) entity, you are begging for abuse. What is the effect on democracy, learning, trust, etc., when a corporation can stigmatize content with these "red flags"? If I'm the tobacco industry in the mid-20th century, how difficult would it be for me to talk to the handful of companies that run the biggest search engines and "persuade" them to flag medical research pertaining to cigarettes as controversial? You're simply consolidating the power to determine truth in fewer hands, and these hands don't have as much incentive to "find the truth" as an individual usually does.

The problem is that, whether it is print media, the internet, or word of mouth, most human beings do not approach information with a healthy level of skepticism! Yes, there is garbage all over the internet — but what I consider crap, others consider great. I want to be able to continue to make those determinations for myself. Also, to even compare Wikipedia with a bunch of truthers or radical bloggers is an absurd red herring… let's compare the traffic that Wikipedia draws to an American Neo-Nazi website, kay? How long did those Paul Revere edits last, hmm? There is an easy fix to the problem. Treat internet speech like other speech: if it is criminal, as the edited pornography seems to be, the government alerts the owner of the domain to the lawbreaking and forces them to remove the content. It is not very difficult, and it happens all the time. No new laws need to be written, nor do old ones need to be removed. A few memos from the OLC and maybe a new division at the DOJ should do it just fine. Also, I'm not sure what the intended tone is, but this reads much more like journalism than an independent essay (which I don't find problematic, per se; I'm just not sure it's what you're going for). I'm disappointed that studying philosophy at UChicago seems to mean regurgitating the thoughts of the institution's professors, though. I'll hope for more rigorous, critical analysis in your next piece!
