The Net Delusion: The Dark Side of Internet Freedom

"The revolution will be Twittered!" declared journalist Andrew Sullivan after protests erupted in Iran in June 2009. Yet for all the talk about the democratizing power of the Internet, regimes in Iran and China are as stable and repressive as ever. In fact, authoritarian governments are effectively using the Internet to suppress free speech, hone their surveillance techniques, disseminate cutting-edge propaganda, and pacify their populations with digital entertainment. Could the recent Western obsession with promoting democracy by digital means backfire?
In this spirited book, journalist and social commentator Evgeny Morozov shows that by falling for the supposedly democratizing nature of the Internet, Western do-gooders may have missed how it also entrenches dictators, threatens dissidents, and makes it harder—not easier—to promote democracy. Buzzwords like "21st-century statecraft" sound good in PowerPoint presentations, but the reality is that "digital diplomacy" requires just as much oversight and consideration as any other kind of diplomacy.
Marshaling compelling evidence, Morozov shows why we must stop thinking of the Internet and social media as inherently liberating and why ambitious and seemingly noble initiatives like the promotion of "Internet freedom" might have disastrous implications for the future of democracy as a whole.


chapter ten
Making History (More Than a Browser Menu)
In 1996, when a group of high-profile digerati took to the pages of Wired magazine and proclaimed that the “public square of the past” was being replaced by the Internet, a technology that “enables average citizens to participate in national discourse, publish a newspaper, distribute an electronic pamphlet to the world ... while simultaneously protecting their privacy,” many historians must have giggled. From the railways, which Karl Marx believed would dissolve India’s caste system, to television, that greatest liberator of the masses, there has hardly appeared a technology that wasn’t praised for its ability to raise the level of public debate, introduce more transparency into politics, reduce nationalism, and transport us to the mythical global village. In virtually all cases, such high hopes were crushed by the brutal forces of politics, culture, and economics. Technologies, it seems, tend to overpromise and under-deliver, at least on their initial promises.
This is not to suggest that such inventions didn’t have any influence on public life or democracy. On the contrary, they often mattered far more than what their proponents could anticipate. But those effects
were often antithetical to the objectives their inventors were originally pursuing. Technologies that were supposed to empower the individual strengthened the dominance of giant corporations, while technologies that were supposed to boost democratic participation produced a population of couch potatoes. Nor is this to suggest that such technologies never had the potential to improve the political culture or make governance more transparent; their potential was immense. Nevertheless, in most cases it was squandered, as the utopian claims invariably attached to those technologies confused policymakers, preventing them from taking the right steps to make good on those early promises of progress.
By touting the uniqueness of the Internet, most technology gurus reveal their own historical ignorance, for the rhetoric that accompanied predictions about earlier technologies was usually every bit as sublime as today’s quasi-religious discourse about the power of the Internet. Even a cursory look at the history of technology reveals just how quickly public opinion could move from professing an uncritical admiration of certain technologies to eagerly bashing everything they stand for. But acknowledging that criticism of technology is as old as its worship should not lead policymakers to conclude that attempts to minimize the adverse effects of technology on society (and vice versa) are futile. Instead, policymakers need to acquaint themselves with the history of technology so as to judge when the overhyped claims about technology’s potential may need more scrutiny—if only to ensure that at least half of them get realized.
And history does contain plenty of interesting lessons. The telegraph was the first technology predicted to transform the world into a global village. An 1858 editorial in New Englander proclaimed: “The telegraph binds together by a vital cord all the nations of the earth.... It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for an exchange of thought between all the nations of the earth.” Speaking in 1868, Edward Thornton, the British ambassador to the United States, hailed the telegraph as “the nerve of international life, transmitting knowledge of events, removing causes of misunderstanding, and promoting peace and harmony throughout the world.” The Bulletin of the American Geographical and
Statistical Society believed it to be an “extension of knowledge, civilization and truth” that catered to “the highest and dearest interest of the human race.” Before long, the public saw the telegraph’s downside. Those who hailed its power to help find fugitive criminals soon had to concede that it could also be used to spread false alarms and be used by the criminals themselves. Perhaps it was a sense of bitter disappointment that prompted the Charleston Courier to conclude, just two years after the first American telegraph lines were successfully installed, that “the sooner the [telegraph] posts are taken down the better,” while the New Orleans Commercial Times expressed its “most fervent wish that the telegraph may never approach us any nearer than it is at present.”
The brevity of the telegraph’s messages didn’t sit well with many literary intellectuals either; it may have opened access to more sources of information, but it also made public discourse much shallower. More than a century before similar charges would be leveled against Twitter, the cultural elites of Victorian Britain were getting concerned about the trivialization of public discourse under an avalanche of fast news and “snippets.” In 1889, the Spectator, one of the empire’s finest publications, chided the telegraph for causing “a vast diffusion of what is called ‘news,’ the recording of every event, and especially of every crime, everywhere without perceptible interval of time. The constant diffusion of statements in snippets ... must in the end, one would think, deteriorate the intelligence of all to whom the telegraph appeals.”
The global village that the telegraph built was not without its flaws and exploitations. At least one contemporary observer of Britain’s colonial expansion into India noted that “the unity of feeling and of action which constitutes imperialism would scarcely have been possible without the telegraph.” Thomas Misa, a historian of technology at the University of Minnesota, notes that “telegraph lines were so important for imperial communication that in India they were built in advance of railway lines.” Many other technological innovations beyond the telegraph contributed to this expansionism. Utopian accounts of technology’s liberating role in human history rarely acknowledge the fact that it was the discovery of quinine, which helped to fight malaria, reducing the risk of endemic tropical disease, that eliminated one major barrier to
colonialism, or that the invention of printing helped to forge a common Spanish identity and pushed the Spaniards to colonize Latin America.
When the telegraph failed to produce the desired social effects, everyone’s attention turned to the airplane. Joseph Corn describes the collective exaltation that surrounded the advent of the airplane in his 2002 book The Winged Gospel. According to Corn, in the 1920s and much of the 1930s most people “expected the airplane to foster democracy, equality, and freedom, to improve public taste and spread culture, to purge the world of war and violence; and even to give rise to a new kind of human being.” One observer at the time, apparently oblivious to the economic forces of global capitalism, mused that airplanes opened up “the realm of absolute liberty; no tracks, no franchises, no need of thousands of employees to add to the cost,” while in 1915 the editor of Flying magazine—the Wired of its day—enthusiastically proclaimed that the First World War had to be “the last great war in history,” because “in less than another decade,” the airplane would have eliminated the factors responsible for wars and ushered in a “new period in human relations” (apparently, Adolf Hitler was not a subscriber to Flying). As much as one could speak of utopian airplane-centrism of the 1910s, this was it.
But it was the invention of radio that produced the greatest number of unfulfilled expectations. Its pioneers did their share to overhype the democratization potential of their invention. Guglielmo Marconi, one of the fathers of this revolutionary technology, believed that “the coming of the wireless era will make war impossible, because it will make war ridiculous.” Gerald Swope, president of General Electric Company, one of the biggest commercial backers of radio at the time, was equally upbeat in 1921, hailing the technology as “a means for general and perpetual peace on earth.” Neither Marconi nor Swope could have foreseen that seven decades later two local radio stations would use the airwaves to heighten ethnic tensions, spread messages of hatred, and help fuel the Rwandan genocide.
When Twitter’s founders proclaim their site to be a “triumph of humanity,” as they did in 2009, the public should save its applause until assessing the possibility of a Twitter-fueled genocide sweeping through
some distant foreign land, thousands of miles away from the Bay Area. Then and now, such declarations of technology’s benign omnipotence have been nothing more than poorly veiled attempts at creating a favorable regulatory climate—and who would dare to regulate humanity’s triumph? But in the earliest stages of its history, radio was also seen as a way to educate the public about politics and raise the level of political discourse; it was widely expected to force politicians to carefully plan their speeches. In the early 1920s, the New Republic applauded radio’s political effects, for the invention “has found a way to dispense with political middlemen” and even “has restored the demos on which republican government is founded.”
Not surprisingly, radio was seen as superior to the previous medium of political communications, the newspaper. As one editorial writer put it in 1924: “Let a legislator commit himself to some policy that is obviously senseless, and the editorial writers must first proclaim his imbecility to the community. But let the radiophone in the legislative halls of the future flash his absurdities into space and a whole state hears them at once.” Just the way today’s politicians are told to fear their “Macaca moment,” politicians of yesteryear were told to fear their “radio moment.” Like the Internet today, radio was believed to be changing the nature of political relations between citizens and their governments. In 1928, Collier’s magazine declared that “the radio properly used will do more for popular government than have most of the wars for freedom and self-government,” adding that “the radio makes politics personal and interesting and therefore important.” But it didn’t take long for the public mood to sour again. By 1930 even the initially optimistic New Republic reached the verdict that “broadly speaking, the radio in America is going to waste.” In 1942 Paul Lazarsfeld, a prominent communications scholar at Columbia University, concluded that “by and large, radio has so far been a conservative force in American life and has produced but few elements of social progress.”
The disappointment was caused by a number of factors, not least the dubious uses to which the technology was put by governments. As Asa Briggs and Peter Burke point out in their comprehensive A Social History of the Media, “the ‘age of radio’ was not only the age of Roosevelt
and Churchill but also that of Hitler, Mussolini and Stalin.” That so many dictators profited so much from radio dampened the nearly universal enthusiasm for the medium, while its commercialization by big business alienated those who hoped it would make the public conversation more serious. It’s not hard to guess Lazarsfeld’s reaction to the era of Rush Limbaugh.
Radio’s fading democratizing potential did not preclude a new generation of pundits, scholars, and entrepreneurs from making equally overblown claims about television. From the 1920s onward, Orrin Dunlap, one of the first television and radio critics for the New York Times, was making an argument already familiar to those who studied the history of the telegraph, the airplane, or the radio. “Television,” wrote Dunlap, without even a shade of doubt, “will usher in a new era of friendly intercourse between the nations of the earth,” while “current conceptions of foreign countries will be changed.” David Sarnoff, head of the Radio Corporation of America, believed that another global village was in the making: “When television has fulfilled its ultimate destiny ... with this may come ... a new sense of freedom, and ... a finer and broader understanding between all the peoples of the world.”
Lee De Forest, the famed American inventor, held high hopes for the educational potential of television, believing that it could even reduce the number of traffic incidents. “Can we imagine,” he asked in 1928, “a more potent means for teaching the public the art of careful driving safety upon our highways than a weekly talk by some earnest police traffic officer, illustrated with diagrams and photographs?” That such programs never really made it to mainstream American television is unfortunate—especially in an era when drivers are texting their way to accidents and even airplane pilots work on their laptops mid-flight—but it is not the limitations of technology that are to blame. Rather, it was the limitations of the political, cultural, and regulatory discourse of the time that soon turned much of American television into, as the chair of the Federal Communications Commission, Newton Minow, put it in 1961, a “vast wasteland.”
Like radio before it, television was expected to radically transform the politics of the time. In 1932 Theodore Roosevelt Jr., the son of the
late president and then governor-general of the Philippines, predicted that TV would “stir the nation to a lively interest in those who are directing its policies and in the policies themselves,” which would result in a “more intelligent, more concerted action from an electorate; the people will think more for themselves and less simply at the direction of local members of the political machines.” Thomas Dewey, a prominent Republican who ran against Franklin Delano Roosevelt and Harry Truman in the 1940s, compared television to an X-ray, predicting that “it should make a constructive advance in political campaigning.” Anyone watching American television during an election season would be forgiven for disagreeing with Dewey’s optimism.
Such enthusiasm about television carried the day until very recently. In 1978, Daniel Boorstin, one of the most famous American historians of the twentieth century, lauded television’s power to “disband armies, to cashier presidents, to create a whole new democratic world—democratic in ways never before imagined, even in America.” Boorstin wrote these words when many political scientists and policymakers were still awaiting the triumph of “teledemocracy,” in which citizens would use television to not only observe but also directly participate in politics. (The hope that new technology could enable more public participation in politics predates television; back in 1940 Buckminster Fuller, the controversial American inventor and architect, was already lauding the virtues of “telephone democracy,” which could enable “voting by telephone on all prominent questions before Congress.”)
In hindsight, the science-fiction writer Ray Bradbury was closer to the truth in 1953 than Boorstin ever was in 1978. “The television,” wrote Bradbury, “is that insidious beast, that Medusa which freezes a billion people to stone every night, staring fixedly, that Siren which called and sang and promised so much and gave, after all, so little.”
The advent of the computer set off another utopian craze. A 1950 article in the Saturday Evening Post claimed that “thinking machines will bring a healthier, happier civilization than any known heretofore.” We are still living through some of that craze’s most ridiculous predictions. And while it’s easy to be right in hindsight, one needs to remember that there was nothing predetermined about the direction in which radio
and television advanced in the last century. The British made a key strategic decision to prioritize public broadcasting and created a behemoth known as the British Broadcasting Corporation; the Americans, for a number of cultural and business reasons, took a more laissez-faire approach. One could debate the merits of either strategy, but it seems undeniable that the American media landscape could have looked very different today, especially if the utopian ideologies promoted by those with a stake in the business were scrutinized a bit more closely.
While it’s tempting to forget everything we’ve learned from history and treat the Internet as an entirely new beast, we should remember that this is how earlier generations must have felt as well. They, too, were tempted to disregard the bitter lessons of previous disappointments and assume that a brave new world had arrived. More often than not, this temptation precluded them from making the right regulatory decisions about new technologies. After all, it’s hard to regulate divinity. The irony of the Internet is that while it never delivered on the uber-utopian promises of a world without nationalism or extremism, it still delivered more than even the most radical optimists could have ever wished for. The risk here is that given the relative successes of this young technology, some may assume that it would be best to leave it alone rather than subject it to regulation of any kind. This is a misguided view. The recognition of the revolutionary nature of a technology is a poor excuse not to regulate it. Smart regulation, if anything, is a first sign that society is serious about the technology in question and believes that it is here to stay; that it is eager to think through the consequences; and that it wants to find ways to unleash and harvest its revolutionary potential.
No society has ever got such regulatory frameworks right by looking only at technology’s bright sides and refusing to investigate how its uses may also produce effects harmful to society. The problem with cyber-optimism is that it simply doesn’t provide useful intellectual grounds for regulation of any sort. If everything is so rosy, why even bother with regulation? Such an objection might have been valid in the early 1990s, when access to the Internet was limited to academics, who couldn’t possibly foresee why anyone would want to send spam. But as access to the Internet has been democratized, it has become obvious that self-regulation
will not always be feasible given such a diverse set of users and uses.
Technology’s Double Life
If there is an overarching theme to modern technology, it is that it defies the expectations of its creators, taking on functions and roles that were never intended at creation. David Noble, a prolific historian of modern technology, makes this point forcefully in his 1984 book Forces of Production. “Technology,” writes Noble, “leads a double life, one which conforms to the intentions of designers and interests of power and another which contradicts them—proceeding behind the backs of their architects to yield unintended consequences and unintended possibilities.” Even Ithiel de Sola Pool, that naïve believer in the power of information to undermine authoritarianism, was aware that technology alone is not enough to create desired political outcomes, writing that “technology shapes the structure of the battle but not every outcome.”
Not surprisingly, futurists often get it wrong. George Wise, a historian associated with General Electric, examined fifteen hundred technology predictions made between 1890 and 1940 by engineers, historians, and other scientists. One-third of those predictions came true, even if somewhat vaguely. The remaining two-thirds either proved false or remained too ambiguous to judge.
From a policy perspective, the lesson to be learned from the history of technology and the numerous attempts to foretell it is that few modern technologies are stable enough—in their design, in their applications, in their appeal to the public—to provide for flawless policy planning. This is particularly the case at the early stages of a technology’s life cycle. Anyone working on a “radio freedom” policy in the 1920s would have been greatly surprised by the developments—many of them negative—of the 1930s. The problem with today’s Internet is that it makes a rather poor companion to a policy planner. Too many stakeholders are involved, from national governments to transnational organizations like ICANN and from the United Nations to users of Internet services; certain technical parts of its architecture may change if
it runs out of addresses; malign forces like spammers and cyber-criminals are constantly creating innovations of their own. Predicting the future of the Internet is a process marked by far greater complexity than predicting the future of television because the Web is a technology that can be put to so many different uses at such a cheap price.
It is this essential unpredictability that should make one extremely suspicious of ambitious yet utterly ambiguous policy initiatives like Internet freedom, which demand a degree of stability and maturity that the Internet simply doesn’t have, even as their advocates make normative claims about what the Internet should look like, as if they already knew how to solve all of its problems. But an unruly tool in the hands of overconfident people is a recipe for disaster. It would be far more productive to assume that the Internet is highly unstable; that trying to rebuild one’s policies around a tool that is so complex and capricious is not going to work; and that instead of trying to solve what may essentially be unsolvable global problems, one would be well-advised to start on a somewhat smaller scale, at which one could still grasp, if not fully master, the connections between the tool and its environment.
But such caution may suit only the intellectuals. Despite the inevitable uncertainty surrounding technology, policymakers need to make decisions, and technology plays a growing role in all of them. Predictions about how technology might work are thus inevitable, or paralysis would ensue. The best policymakers can do is to understand why so many people get them wrong so often and then try to create mechanisms and procedures that could effectively weed out excessive hype from the decision-making process.
The biggest problem with most predictions about technology is that they are invariably made based on how the world works today rather than on how it will work tomorrow. But the world, as we know, doesn’t stand still: Politics, economics, and culture constantly reshape the environment that technologies were supposed to transform, preferably in accordance with our predictions. Politics, economics, and culture also profoundly reshape technologies themselves. Some, like radio, become cheap and ubiquitous; others, like the airplane, become expensive and
available only to a select few. Furthermore, as new technologies come along, some older ones become obsolete (fax machines) or find new uses (TVs as props for playing games on your Wii).
Paradoxically, technologies meant to alleviate a particular problem may actually make it worse. As Ruth Schwartz Cowan, a historian of science at the University of Pennsylvania, shows in her book More Work for Mother, after 1870 homemakers ended up working longer hours even though more and more household activities were mechanized. (Cowan notes that in 1950 the American housewife produced singlehandedly what her counterpart needed a staff of three or four to produce just a century earlier.) Who could have predicted that the development of “labor-saving devices” would end up increasing the burden of housework for most women?
Similarly, the introduction of computers into the workforce failed to produce expected productivity gains (Tetris was, perhaps, part of some secret Soviet plot to halt the capitalist economy). The Nobel Prize-winning economist Robert Solow quipped that “one can see the computer age everywhere but not in the productivity statistics!” Part of the problem in predicting the exact economic and social effects of a technology lies in the uncertainty associated with the scale on which such a technology would be used. The first automobiles were heralded as technologies that could make cities cleaner by liberating them of horse manure. The by-products of the internal combustion engine may be more palatable than manure, but given the ubiquity of automobiles in today’s world, they have solved one problem only by making another one—pollution—much worse. In other words, the future uses of a particular technology can often be described by that old adage “It’s the economy, stupid.”
William Galston, a former adviser to President Clinton and a scholar of public policy at the Brookings Institution, has offered a powerful example of how we tend to underestimate the power of economic forces in conditioning the social impact of technologies. Imagine, he says, a hypothetical academic conference about the social effects of television convened in the early 1950s. The consensus at the conference would almost
certainly be that television was poised to strengthen community ties and multiply social capital. Television sets were scarce and expensive, and neighbors had to share and visit each other’s houses. Enter today’s academic conferences about television, and participants are likely to deplore the pervasive “bedroom culture,” whereby the availability of multiple televisions in just one home is perceived as eroding ties within families, not just ties within neighborhoods.
Another reason why the future of a given technology is so hard to predict is that the disappearance of one set of intermediaries is often accompanied by the emergence of other intermediaries. As James Carey, a media scholar at Columbia University, observed, “as one set of borders, one set of social structures is taken down, another set of borders is erected. It is easier for us to see the borders going down.” We rarely notice the new ones being created. In 1914 Popular Mechanics thought that the age of governments was over, announcing that wireless telegraphy allowed “the private citizen to communicate across great distance without the aid of either the government or a corporation.” Only fifteen years later, however, a handful of corporations dominated the field of radio communication, even while the public still maintained some illusions that radio was a free and decentralized medium. (The fact that radios were getting cheaper only contributed to those illusions.)
Similarly, while today’s Internet gurus are trying to convince us that the age of “free” is upon us, it almost certainly is not. All those free videos of cats that receive millions of hits on YouTube are stored on powerful server centers that cost millions of dollars to run, usually in electricity bills alone. Those hidden costs will sooner or later produce environmental problems that will make us painfully aware of how expensive such technologies really are. Back in 1990, who could have foreseen that Greenpeace would one day be issuing a lengthy report about the environmental consequences of cloud computing, with some scientists conducting multiyear studies about the impact of email spam on climate change? The fact that we cannot yet calculate all the costs of a given technology—whether financial, moral, or environmental ones—does not mean that it comes free.
No Logic for Old Men
Another recurring feature of modern technology that has been overlooked by many of its boosters is that the emergence of new technologies, no matter how revolutionary their circuitry might be, does not automatically dissolve old practices and traditions. Back in the 1950s, anyone arguing that television would strengthen existing religious institutions was inviting ridicule. And yet, a few decades later, it was television that Pat Robertson and a horde of other televangelists had to thank for their powerful social platform. Who today would bet that the Internet will undermine organized religion?
In fact, as one can currently observe with the revival of nationalism and religion on the Web, new technologies often entrench old practices and make them more widespread. Claude Fischer, who studied how Americans adopted the telephone in the late nineteenth and early twentieth centuries in his book America Calling, observes that it was primarily used to “widen and deepen ... existing social patterns rather than to alter them.” Instead of imagining the telephone as a tool that impelled people to embrace modernity, Fischer proposed that we think of it as “a tool modern people have used to various ends, including perhaps the maintenance, even enhancement, of past practices.” For the Internet to play a constructive role in ridding the world of prejudice and hatred, it needs to be accompanied by an extremely ambitious set of social and political reforms; in their absence, social ills may only get worse. In other words, whatever the internal logic of the technology at hand, it’s usually malleable by the logic of society at large. “While each communication technology does have its own individual properties, especially regarding which of the human senses it privileges and which ones it ignores,” writes Susan Douglas, a scholar of communications at the University of Michigan, “the economic and political system in which the device is embedded almost always trumps technological possibilities and imperatives.”
And yet this rarely prevents an army of technology experts from claiming that they have cracked that logic and understood what radio, television, or the Internet is all about; the social forces surrounding it are thus
deemed mostly irrelevant and can be easily disregarded. Marshall McLuhan, the first pop philosopher, believed that television had a logic: Unlike print, it urges viewers to fill in the gaps in what it is they’re seeing, stimulates more senses, and, overall, nudges us closer to the original tribal condition (a new equilibrium that McLuhan clearly favored). The problem is that while McLuhan was chasing the inner logic of television, he might have missed how it could be appropriated by corporate America and produce social effects much more obvious (and uglier) than changes in some obscure sense-ratios that McLuhan so meticulously calculated for each medium.
Things get worse in the international context. The “logic” that the scholars and policymakers supposedly have access to is simply an interpretation of what a particular technology is capable of doing given a particular set of circumstances. Joseph Goebbels, who put radio to masterful propaganda use in Hitler’s Germany, saw its logic in very different terms than, say, Marconi.
Thus, knowing everything about a given technology still tells us little about how exactly it will shape a complex modern society. Economist William Schaniel shares this view, cautioning us that “the analytic focus of a technology transfer should be on the adopting culture and not on the materials being transferred,” simply because, while “new technology does create change,” this change is not “preordained by the technology adopted.” Instead, writes Schaniel, “the adopted technology is adapted by the adopting society to their social processes.” When gunpowder was brought to Europe from Asia, Europeans did not concurrently adopt Asian rules and beliefs about it. The adopted gunpowder was adapted by European civilizations according to their own values and traditions.
The Internet is no gunpowder; it’s considerably more complex and multidimensional. But this only adds urgency to our quest to understand the societies it is supposed to “reshape” or “democratize.” Reshape them it may, but what is of utmost interest to policymakers is the direction in which this reshaping would proceed. The only way for them to understand it is to resist technological determinism and embark on a careful analysis of nontechnological forces that constitute the environments
they seek to understand or transform. It may make sense to think about technologies as embodying a certain logic at an early stage of their deployment, but as they mature, their logic usually gives way to more powerful social forces.
The inability to see that the logic of technology, as much as one could say it exists, varies from context to context partly explains the Western failure to grasp the importance of the Internet to authoritarian regimes. Not having a good theory of the internal political and social logic of those regimes, Western observers assume that the dictators and their cronies can’t find a regime-strengthening use for the Internet, because under the conditions of Western liberal democracies—and those are the only conditions these observers understand—the Internet has been weakening the state and decentralizing power. Instead of burrowing further into the supposed logic of the Internet, Western do-gooders would be well-advised to get a more refined picture of the political and social logic of authoritarianism under the conditions of globalization. If policymakers lack a good theoretical account of what makes those societies tick, no amount of Internet-theorizing will allow them to formulate effective policies for using the Internet to promote democracy.
Is There History After Twitter?
It’s tempting to see technology as some kind of a missing link that can help us make sense of otherwise unrelated events known as human history. Why search for more complex reasons if the establishment of democratic forms of government in Europe could be explained by the invention of the printing press? As the economic historian Robert Heilbroner observed in 1994, “history as contingency is a prospect that is more than the human spirit can bear.”
Technological determinism—the belief that certain technologies are bound to produce certain social, cultural, and political effects—is attractive precisely because “it creates powerful scenarios, clear stories, and because it accords with the dominant experience in the West,” write Steve Graham and Simon Marvin, two scholars of urban geography. Forcing a link between the role that photocopiers and fax machines
played in Eastern Europe in 1989 and the role that Twitter played in Iran in 2009 creates a heart-wrenching but also extremely coherent narrative that rests on the widespread belief, rooted in Enlightenment ideals, in the emancipatory power of information, knowledge, and, above all, ideas. It’s far easier to explain recent history by assuming that communism dropped dead the moment Soviet citizens understood that there were no queues in Western supermarkets than to search for truth in some lengthy and obscure reports on the USSR’s trade balance.
It is for this reason that determinism—whether of the social variety, positing the end of history, or of the political variety, positing the end of authoritarianism—is an intellectually impoverished, lazy way to study the past, understand the present, and predict the future. Bryan Pfaffenberger, an anthropologist at the University of Virginia, believes that the reason so many of us fall for deterministic scenarios is that they present the easiest way out. “Assuming technological determinism,” writes Pfaffenberger, “is much easier than conducting a fully contextual study in which people are shown to be the active appropriators, rather than the passive victims, of transferred technology.”
But it’s not only history that suffers from determinism; ethics doesn’t fare much better. If technology’s march is unstoppable and unidirectional, as a horde of technology gurus keep convincing the public from the pages of technology magazines, it then seems pointless to stand in its way. If radio, television, or the Internet is poised to usher in a new age of democracy and universal human rights, there is little role for us humans to play. However, to argue that a once-widespread practice like lobotomy was simply a result of inevitable technological forces is to let its advocates off the hook. Technological determinism thus obscures the roles and responsibilities of human decision makers, either absolving them of well-deserved blame or minimizing the role of their significant interventions. As Arthur Melzer, a political scientist at Michigan State University, points out, “to the extent that we view ourselves as helpless pawns of an overarching and immovable force, we may renounce the moral and political responsibility that, in fact, is crucial for the good exercise of what power over technology we do possess.”
By adopting a deterministic stance, we are less likely to subject technology—and those who make a living from it—to the full bouquet of ethical questions normal for democracy. Should Google be required to encrypt all documents uploaded to its Google Docs service? Should Facebook be allowed to continue making more of its users’ data public? Should Twitter be invited to high-profile gatherings of the U.S. government without first signing up with the Global Network Initiative? While many such questions are already being raised, it’s not so hard to imagine a future in which they are raised less often, particularly in offices that need to be asking them the most.
Throughout history, new technologies have almost always empowered and disempowered particular political and social groups, sometimes simultaneously—a fact that is too easy to forget under the sway of technological determinism. Needless to say, such ethical amnesia is rarely in the interests of the disempowered. Robert Pippin, a philosopher at the University of Chicago, argues that society’s fascination with the technological at the expense of the moral reaches a point where “what ought to be understood as contingent, one option among others, open to political discussion is instead falsely understood as necessary; what serves particular interest is seen without reflection, as of universal interest; what ought to be a part is experienced as a whole.” When Facebook’s executives justify their assault on privacy by claiming that this is where society is heading anyway, they are making exactly the kind of claim that should be subject to moral and political—not just technological—scrutiny. It’s by appealing to such deterministic narratives that Facebook manages to obscure its own role in the process.
Abbe Mowshowitz, professor of computer science at the City College of New York, compares the computer to a seed and concrete historical circumstances to the ground in which the seed is to be planted: “The right combination of seed, ground and cultivation is required to promote the growth of desirable plants and to eliminate weeds. Unfortunately, the seeds of computer applications are contaminated with those of weeds; the ground is often ill-prepared; and our methods of cultivation are highly imperfect.” One can’t fault Mowshowitz for misreading the history of technology, but there is a more optimistic way
to understand what he said: We, the cultivators, can actually intervene in all three stages, and it’s up to us to define the terms on which we choose to do so.
The price for not intervening could be quite high. Back in 1974, Raymond Williams, the British cultural critic, was already warning us that technological determinism inevitably produces a certain social and cultural determinism that “ratifies the society and culture we now have, and especially its most powerful internal directions.” Williams worried that placing technology at the center of our intellectual analysis is bound to make us view what we have traditionally understood as a problem of politics, with its complex and uneasy questions of ethics and morality, as instead a problem of technology, either eliminating or obfuscating all the unresolved philosophical dilemmas. “If the medium—whether print or television—is the cause,” wrote Williams in his best-selling Television: Technology and Cultural Form, “all other causes, all that men ordinarily see as history, are at once reduced to effects.” For Williams, it was not the end of history that technology was ushering in; it was the end of historical thinking. And with the end of historical thinking, the questions of justice lose much of their significance as well.
Williams went further in his criticism, arguing that technological determinism also prevents us from acknowledging what is political about technology itself (the kind of practices and outcomes it tends to favor), as its more immediately observable features usually occupy the lion’s share of the public’s attention, making it difficult to assess its other, more pernicious features. “What are elsewhere seen as effects, and as such subject to social, cultural, psychological and moral questioning,” wrote Williams, “are excluded as irrelevant by comparison with the direct physiological and therefore ‘psychic’ effects of the media as such.” In other words, it’s far easier to criticize the Internet for making us stupid than it is to provide a coherent moral critique of its impact on democratic citizenship. And under the barrage of ahistorical blurbs about the Internet’s liberating potential, even posing such moral questions may seem too contrarian. Considering how the world reacted to Iran’s
Twitter Revolution, it’s hard not to appreciate the prescience of Williams’s words. Instead of talking about religious, demographic, and cultural forces that were creating protest sentiment in the country, all we cared about was Twitter’s prominent role in organizing the protests and its resilience in the face of censorship.
Similarly, when many Western observers got carried away discussing the implications of Egypt’s Facebook Revolution in April 2008—when thousands of young Egyptians were mobilized via the Internet to express their solidarity with the textile workers who were on strike in the poor industrial city of Mahalla—few bothered to ask what it was the workers actually wanted. As it turns out, they were protesting extremely low wages at their factory. It was primarily a protest about labor issues, which was successfully linked to a broader anti-Mubarak constitutional reform campaign. Once, for various reasons, the labor component of the protests fizzled, other attempts at a Facebook revolution—the kind with consequences in the physical world—failed to resonate, even though they attracted hundreds of thousands of supporters online. As was to be expected, most reports in the Western media focused on Facebook rather than on labor issues or demands on Mubarak to end the emergency rule in force in Egypt since 1981. This is yet another powerful reminder that by focusing on technologies, as opposed to the social and political forces that surround them, one may be drawn to wrong conclusions. As long as such protests continue to be seen predominantly through the lens of the technology through which they were organized—rather than, say, through the demands and motivation of the protesters—little good will come of Western policies, no matter how well-intentioned.
What is, therefore, most dangerous about succumbing to technological determinism is that it hinders our awareness of the social and the political, presenting it as the technological instead. Technology as a Kantian category of understanding the world may simply be too expansionist and monopolistic, subsuming anything that has not yet been properly understood and categorized, regardless of whether its roots and nature are technological. (This is what the German philosopher
Martin Heidegger meant when he said that “the essence of technology is by no means anything technological.”) Since technology, like gas, will fill in any conceptual space provided, Leo Marx, professor emeritus at the Massachusetts Institute of Technology, describes it as a “hazardous concept” that may “stifle and obfuscate analytic thinking.” He notes, “Because of its peculiar susceptibility to reification, to being endowed with the magical power of an autonomous entity, technology is a major contributant to that gathering sense ... of political impotence. The popularity of the belief that technology is the primary force shaping the postmodern world is a measure of our ... neglect of moral and political standards, in making decisive choices about the direction of society.”
The neglect of moral and political standards that Leo Marx is warning about is on full display in the sudden urge to promote Internet freedom without articulating how exactly it fits the rest of the democracy-promotion agenda. Hoping that the Internet may liberate the Egyptians or the Azeris from authoritarian oppression is no good excuse to continue covertly supporting the very sources of that oppression. To her credit, Hillary Clinton avoided falling for technological determinism in her Internet freedom speech, saying that “while it’s clear that the spread of these [information] technologies is transforming our world, it is still unclear how that transformation will affect the human rights and welfare of much of the world’s population.” On second reading, however, this seems like a very strange statement to make. If it’s not clear how such technologies will affect human rights, what is the point of promoting them? Is it just because there is little clarity as to what Internet freedom means and does? Such confusion in the ranks of policymakers is only poised to increase, since they are formulating policies around a highly ambiguous concept.
Leo Marx suggests that the way to address the hazards of the concept of technology is to rethink whether it is still worth putting it at the center of any intellectual inquiry, let alone a theory of action. The more we learn about technology, the less it makes sense to focus on it alone, in isolation from other factors. Or as Marx himself puts it, “the paradoxical result of ever greater knowledge and understanding of technology
is to cast doubt on the rationale for making ‘technology,’ with its unusually obscure boundaries, the focus of a discrete field of specialized historical (or other disciplinary) scholarship.” In other words, it’s not clear what it is we gain by treating technology as a historical actor in its own right, for it usually hides more about society, politics, and power than it reveals.
As far as the Internet is concerned, scholarship has so far moved in the opposite direction. Academic centers dedicated to the study of the Internet—the intellectual bulwarks of Internet-centrism—keep proliferating on university campuses and, in the process, contribute to its further reification and decontextualization. That virtually any newspaper or magazine today boasts of interviews with “Internet gurus” is a rather troubling sign, for however deep their knowledge of the architecture of the Internet and its diverse and playful culture, it doesn’t make up for their inadequate understanding of how societies, let alone non-Western societies, function. It’s a sign of how deeply Internet-centrism has corrupted the public discourse that people who have a rather cursory knowledge of modern Iran have become the go-to sources on Iran’s Twitter Revolution, as if a close look at all Iran-related tweets could somehow open a larger window on the politics of this extremely complicated country than the careful scholarly study of its history.
Why Technologies Are Never Neutral
If technological determinism is dangerous, so is its opposite: a bland refusal to see that certain technologies, by their very constitution, are more likely to produce certain social and political outcomes than other technologies, once embedded into enabling social environments. In fact, there is no misconception more banal, ubiquitous, and profoundly misleading than “technology is neutral.” It all depends, we are often told, on how one decides to use a certain tool: A knife can be used to kill somebody, but it can also be used to carve wood.
The neutrality of technology is a deep-rooted theme in the intellectual history of Western civilization. Boccaccio raised some interesting
questions about it in The Decameron back in the mid-fourteenth century. “Who doesn’t know what a boon wine is to the healthy ... and how dangerous to the sick? Are we to say, then, that wine is bad simply because it is injurious to the fevered? ... Weapons safeguard the welfare of those who desire to live in peace; nevertheless, they often shed blood, not through any evil inherent in them, but through the wickedness of the men who use them to unworthy ends.”
The neutrality of the Internet is frequently invoked in the context of democratization as well. “Technology is merely a tool, open to both noble and nefarious purposes. Just as radio and TV could be vehicles of information pluralism and rational debate, so they could also be commandeered by totalitarian regimes for fanatical mobilization and total state control,” writes the Hoover Institution’s Larry Diamond. Neutrality-speak crept into Hillary Clinton’s Internet freedom speech as well, when she noted that “just as steel can be used to build hospitals or machine guns and nuclear energy can power a city or destroy it, modern information networks and the technologies they support can be harnessed for good or ill.” The most interesting thing about Clinton’s analogy between the Internet and nuclear energy is that it suggests that there needs to be more, not less, oversight and control over the Internet. No one seriously advocates that nuclear plants should be run as their proprietors wish; the notion of “nuclear freedom” as a means of liberating the world sounds rather absurd.
Product designers like to think of tools as having certain perceived qualities. Usually called “affordances,” these qualities suggest—rather than dictate—how tools are to be used. A chair may have the affordance for sitting, but it may also have the affordance for breaking a window; it all depends on who is looking and why. The fact that a given technology has multiple affordances and is open to multiple uses, though, does not obviate the need to closely examine its ethical constitution, compare the effects of its socially beneficial uses with those of its socially harmful uses, estimate which uses are most likely to prevail, and, finally, decide whether any mitigating laws and policies should be established to amplify or dampen some of the ensuing effects. On paper, nuclear technology is beautiful, complex, safe, and brilliantly designed;
in reality, it has one peculiar “affordance” that most societies cannot afford, or at least they cannot afford it without significant safeguards.
Similarly, the reason most schools ban their students from carrying knives is that doing so could lead to bloodshed. That we do not know how exactly knives will be used in the hands of young people in every particular situation is not a strong enough reason to allow them; knowing how they can be misused, on the other hand, even if the chance of misuse is small, provides us with enough information to craft a restrictive policy. Thus, most societies want to avoid some of the affordances of knives (such as their ability to hurt people) in certain contexts (such as schools).
The main problem with the “technology is neutral” thesis, therefore, is its complete uselessness for the purposes of policymaking. It may offer a useful starting point for some academic work in design, but it simply doesn’t provide any foundation for sensible policymaking, which is often all about finding the right balance between competing goods in particular contexts. If technology is neutral and its social effects are unknowable—it all depends on who uses it and when—it appears that policymakers and citizens can do painfully little about controlling it. The misuses of some simple technologies, however, are so widespread and easy to grasp that their undesirability in certain contexts is nothing short of obvious; it’s hard to imagine anyone making the case that knives are merely tools, open to both noble and nefarious purposes, at a PTA meeting. But when it comes to more complex technologies—and especially the Internet, with its plethora of applications—their conditional undesirability becomes far less obvious, save, perhaps, for highly sensitive issues (e.g., children gaining access to online pornography).
The view that technology is neutral leaves policymakers with little to do but scrutinize the social forces around technologies, not technologies themselves. Some might say that when it comes to the co-optation of the Internet by repressive regimes, one shouldn’t blame the Internet but only the dictators. This is not a responsible view either. Even those who argue that the logic of technology is malleable by the logic of society that adopts it don’t propose to stop paying attention to the former. Iran’s police may continue monitoring social networking sites forever,
but it’s easy to imagine a world where Facebook offers better data protection to its users, thus making it harder for the police to learn more about Iranians on Facebook. Likewise, it’s easy to imagine a world where Facebook doesn’t change how much user data it discloses to the public without first soliciting explicit permission from the user.
Thus, one can believe that authoritarian regimes will continue being avid users of the Internet, but one can make it hard for them to do so. The way forward is to clearly scrutinize both the logic of technology and the logic of society that adopts it; under no circumstances should we be giving technologies—whether it’s the Internet or mobile phones—a free pass on ethics. All too often the design of technologies simply conceals the ideologies and political agendas of their creators. This alone is a good enough reason to pay closer attention to whom they are most likely to benefit and hurt. That technologies may fail to achieve the objectives their proponents intended should not distract us from analyzing the desirability of those original agendas. The Internet is no exception. The mash-up ethos of Web 2.0, whereby new applications can be easily built out of old ones, is just more proof that the Internet excels at generating affordances. There is nothing about it suggesting that all such affordances would be conducive to democratization. Each of them has to be evaluated on its own terms, not lumped under some mythical “tool neutrality.” Instead, we should be closely examining which of the newly created affordances are likely to have democracy-enhancing qualities and which are likely to have democracy-suppressing qualities. Only then will we be able to know which affordances we need to support and which ones we need to counter.
It’s inevitable that in many contexts, some of the affordances of the Web, like the ability to remain anonymous while posting sensitive information, could be interpreted both ways, for example, positively as a means of avoiding government censorship but also negatively as a means of producing effective propaganda or launching cyber-attacks. There will never be an easy solution to such predicaments. But then this is also the kind of complex issue that, instead of being glossed over
or assumed to be immutable, should be addressed by democratic deliberation. Democracies run into such issues all the time. What seems undeniable, however, is that refusing to even think in terms of affordances and positing “tool neutrality” instead is not a particularly effective way to rein in some of technology’s excesses.
chapter eleven
The Wicked Fix
[image: 012]
In 1966 the University of Chicago Magazine published a brief but extremely provocative essay by Alvin Weinberg, a prominent physicist and head of Oak Ridge National Laboratory, once an important part of the Manhattan Project. Titled “Can Technology Replace Social Engineering?” the essay, best described as an engineer’s cri de coeur, argued that “profound and infinitely complicated social problems” can be circumvented and reduced to simpler technological problems. The latter, in turn, can be solved by applying “quick technological fixes” to them, fixes that are “within the grasp of modern technology, and which would either eliminate the original social problem without requiring a change in the individual’s social attitudes, or would so alter the problem as to make its resolution more feasible.”
One of the reasons the essay received so much attention was that Weinberg’s ultimate technological fix—the one that could end all wars—was the hydrogen bomb. As it “greatly increases the provocation that would precipitate large-scale war,” he argued, the Soviets would recognize its destructive power and hold considerably less militarist attitudes as a result. This was an interesting argument to make
in 1966, and the essay still has relevance today. Weinberg’s fascination with “technological fixes” was largely the product of an engineer’s frustration with the other, invariably less tractable, and more controversial alternative of the day: social engineering. Social engineers, as opposed to technologists, tried to influence popular attitudes and social behavior of citizens through what nontechnologists refer to as “policy” but what Weinberg described as “social devices”: education, regulation, and a complicated mix of behavioral incentives.
Given that technology could help accomplish the same objectives more effectively, Weinberg believed that social engineering was too expensive and risky. It also helped that “technological fixes” required no profound changes in human behavior and were thus more reliable. If people are given to bouts of excessive drinking, Weinberg’s preferred response would be not to organize a public campaign to caution them to drink responsibly or impose heavier fines for drunk driving but to design a pill that would help to dampen the influence of the alcohol. Human nature was corrupt, and Weinberg’s solution was to simply accept this and work around it. Weinberg was under no illusion that he was eliminating the root causes of the problem; he knew that technological fixes can’t do that. All technology could do was to mitigate the social consequences of that problem, “to provide the social engineer broader options, to make intractable social problems less intractable ... and [to] buy time—that precious commodity that converts social revolution into acceptable social evolution.” It was a pragmatic approach of a pragmatic man.
Upon publication, Weinberg’s essay launched a heated debate between technologists and social engineers. This debate is still raging today, in part because Google, founded by a duo of extremely ambitious engineers on a crusade to “organize the world’s information and make it universally accessible and useful,” has put the production of technological fixes on something of an industrial scale. Make the world’s knowledge available to everyone? Take photos of all streets in the world? How about feeding the world’s books into a scanner and dealing with the consequences later? Name a problem that has to do with information, and Google is already on top of it.
Why the Ultimate Technological Fix Is Online
It’s not all Google’s fault. There is something about the Internet and its do-it-yourself ethos that invites an endless production of quick fixes, bringing to mind the mathematician John von Neumann’s insightful observation that “technological possibilities are irresistible to man. If man can go to the moon, he will. If he can control the climate, he will” (even though on that last point, von Neumann may have been a bit off). With the Internet, it seems, everything is irresistible, if only because everything is within easy grasp. It’s the Internet, not nuclear power, that is widely seen as the ultimate technological fix to all of humanity’s problems. It won’t solve them, but it could make them less visible or less painful.
As the Internet makes technological fixes cheaper, the temptation to apply them even more aggressively and indiscriminately also grows. And the easier it is to implement them, the harder it is for internal critics to argue that such fixes should not be tried at all. In most organizations—especially in times of profound technological change—low cost is usually a strong enough reason to try something, even if it makes little strategic sense at the time. When technology promises so much and demands so little, the urge to find a quick fix is, indeed, irresistible. Policymakers are not immune to such temptations either. When it’s so easy and cheap to start a social networking site for activists in some authoritarian country, a common gut reaction is usually “It should be done.” That the risks of cramming the personal details of all dissidents onto one website and revealing the connections among them may outweigh the benefits of providing activists with a cheaper mode of communication only becomes a concern retroactively. In most cases, if it can be done, it will be done. URLs will be bought, sites will be set up, activists will be imprisoned, and damning press releases will be issued. Likewise, given the undeniable mobilization advantages of the mobile phone, one may start singing its praises before realizing that it has also provided the secret police with a unique way to track and even predict where protests may break out.
The problem with most technological fixes is that they come with costs unknown even to their fiercest advocates. Historian of science
Lisa Rosner argues that “technological fixes, because they attack symptoms but don’t root out causes, have unforeseen and deleterious side effects that may be worse than the social problem they were intended to solve.” It’s hard to disagree, even more so in the case of the Internet. When digital activism is presented as the new platform for campaigning and organizing, one begins to wonder whether its side effects—further disengagement between traditional oppositional forces who practice real politics, no matter how risky and boring, and the younger generation, passionate about campaigning on Facebook and Twitter—would outweigh the benefits of cheaper and leaner communications. If the hidden costs of digital activism include the loss of coherence, morality, or even sustainability of the opposition movement, it may not be a solution worth pursuing.
Another problem with technological fixes is that they usually rely on extremely sophisticated solutions that cannot be easily understood by laypeople. The claims of their advocates are, thus, almost impenetrable to external scrutiny, while their ambitious promise—the elimination of some deeply entrenched social ill—makes such scrutiny, even if it is possible, hard to mount. Not surprisingly, the dangerous fascination with solving previously intractable social problems with the help of technology allows vested interests to disguise what essentially amounts to advertising for their commercial products in the language of freedom and liberation. It’s not by coincidence that those who are most vocal in proclaiming that the most burning problems of Internet freedom can be solved by breaking a number of firewalls happen to be the same people who develop and sell the technologies needed to break them. Obviously they have no incentive to point out that one needs to be fighting other, nontechnological problems or to disclose problems with their own technologies. The founders of Haystack rarely bothered to highlight the flaws in their own software—let alone disclose that it was still in the testing stage—and the media never bothered to ask. As the Haystack fiasco so clearly illustrates, even being able to ask the right technological questions requires a good grasp of the sociopolitical context in which a given technology is supposed to be used.
This points to another commonly overlooked problem: Our growing commitment to the instruments we use to implement “technological fixes” for what may be important global problems greatly restrains our ability to criticize those who own the rights to those fixes. Every new article or book about a Twitter Revolution is not a triumph of humanity; it is a triumph of Twitter’s marketing department. In fact, Silicon Valley’s marketing geniuses may have a strong interest in misleading the public about the similarity between the Cold War and today: The Voice of America and Radio Free Europe still enjoy a lot of goodwill with policymakers, and having Twitter and Facebook be seen as their digital equivalents doesn’t hurt their publicity.
What We Talk About When We Talk About Code
Perhaps most disturbingly, reframing social problems as a series of technological problems distracts policymakers from tackling problems that are nontechnological in nature and cannot be reframed. As the media keep trumpeting the role that mobile phones have played in fueling economic growth in Africa, policymakers cannot afford to forget that innovation by itself will not rid African nations of the culture of pervasive corruption. Such an achievement will require a great deal of political will. In its absence, even the fanciest technology would go to waste. The funds for the computerization of Sudan would remain unspent, and computers would remain untouched, as long as many of the region’s politicians are “more used to carrying AK-47s and staging ambushes than typing on laptops,” as a writer for the Financial Times so aptly put it.
On the contrary, when we introduce a multipurpose technology like a mobile phone into such settings, it can often have side effects that only aggravate existing social problems. Who could have predicted that, learning of the multiple money transfer opportunities offered by mobile banking, corrupt Kenyan police officers would demand that drivers now pay their bribes with much-easier-to-conceal transfers of air time rather than cash? In the absence of strong political and social institutions, technology may only precipitate the collapse of state power, but
it is easy to lose sight of such real-world dynamics when one is enthralled by the supposed brilliance of a technological fix. Policymakers who succumb to that enthrallment risk falling into the unthinking admiration of technology as panacea that the British architect Cedric Price once ridiculed by asking, “Technology is the answer, but what was the question?”
When technological fixes fail, their proponents are usually quick to suggest another, more effective technological fix as a remedy—and fight fire with fire. That is, they want to fight technology’s problems with even more technology. This explains why we fight climate change by driving cars that are more fuel-efficient and protect ourselves from Internet surveillance by relying on tools that encrypt our messages and conceal our identity. Often this only aggravates the situation, as it precludes a more rational and comprehensive discussion about the root causes of a problem, pushing us to deal with highly visible and inconsequential symptoms that can be cured on the cheap instead. This creates a never-ending and extremely expensive cat-and-mouse game in which, as the problem gets worse, the public is forced to fund even newer, more powerful tools to address it. Thus we avoid the search for a more effective nontechnological solution that, while being more expensive (politically or financially) in the short term, could end the problem once and for all. We should resist this temptation to fix technology’s excesses by applying even more technology to them.
How, for example, do most Western governments and foundations choose to fight Internet censorship by authoritarian governments? Usually by funding and promoting technology that helps circumvent it. This may be an appropriate solution for some countries—think, for example, of North Korea, where Western governments have very little diplomatic and political leverage—but this is not necessarily the best approach to handle countries that are nominally Western allies.
In such cases, a nearly exclusive focus on fighting censorship with anticensorship tools distracts policymakers from addressing the root causes of censorship, which most often have to do with excessive restrictions that oppressive governments place on free speech. The easy availability of circumvention technology should not preclude policymakers from more ambitious—and ultimately more effective—ways
of engagement. Otherwise, both Western and authoritarian governments get a free pass. Democratic leaders pretend that they are once again heroically destroying the Berlin Wall, while their authoritarian counterparts are happy to play along, for they have found other effective ways to control the Internet.
In an ideal world, the Western campaign to end Internet censorship in Tunisia or Kazakhstan would primarily revolve around exerting political pressure on their West-friendly authoritarian rulers and would deal with the offline world of newspapers and magazines as well. In many of these countries, muzzling journalists would continue to be the dominant tactic of suppressing dissent until, at the very least, more of their citizens get online and start using the Internet for more than just email or chatting with their relatives abroad. Allowing a handful of bloggers in Tajikistan to circumvent the government’s system of Internet controls means little when the vast majority of the population get their news from radio and television.
Except for his ruminations about hydrogen bombs and war, Weinberg did not discuss how technological fixes might affect foreign policy. Nevertheless, one can still trace how a tendency to frame foreign policy problems in terms of technological fixes has affected Western thinking about authoritarian rule and the role that the Internet can play in undermining it. One of the most peculiar features of Weinberg’s argument was his belief that the easy availability of clear-cut technological solutions can help policymakers better grasp and identify the problems they face. “The [social] problems are, in a way, harder to identify just because their solutions are never clear-cut,” wrote Weinberg. “By contrast, the availability of a crisp and beautiful technological solution often helps focus on the problem to which the new technology is the solution.”
In other words, just because policymakers have “a crisp and beautiful technological solution” to break through firewalls, they tend to believe that the problem they need to solve is, indeed, that of breaking firewalls, while often this is not the case at all. Similarly, just because the Internet—that ultimate technological fix—can help mobilize people around certain causes, it is tempting to conceptualize the problem in
terms of mobilization as well. This is one of those situations in which the unique features of technological fixes prevent policymakers from discovering the multiple hidden dimensions of the challenge, leading them to identify and solve problems that are easily solvable rather than those that require immediate attention.
Many calls to apply technological fixes to complex social problems smack of the promotion of technology for technology’s own sake—a technological fetishism of an extreme variety—which policymakers should resist. Otherwise, they run the risk of prescribing their favorite medicine based only on a few common symptoms, without even bothering to offer a diagnosis. Just as it is irresponsible to prescribe cough medicine for someone who has cancer, so it is irresponsible to apply more technology to social and political problems that are not technological in nature.
Taming the Wicked Authoritarianism
The growing supply of technological and even social fixes presupposes that the problem of authoritarianism can be fixed. But what if it is simply an unsolvable problem to begin with? To ask this question is not to suggest that there will always be evil and dictators in the world; rather, it is to question whether, from a policy-planning perspective, one can ever find the right mix of policies and incentives that could be described as a “solution” and could then be applied in completely different environments.
In 1972, Horst Rittel and Melvin Webber, two influential design theorists at the University of California at Berkeley, published an essay with the unpromising title of “Dilemmas in a General Theory of Planning.” The essay, which quickly became a seminal text in the theory of planning, argued that, with the passing of the industrial era, the modern planner’s traditional focus on efficiency—performing specific tasks with low inputs of resources—has been replaced by a focus on outputs, entrapping the planner in an almost never-ending ethical investigation of whether the produced outputs were socially desirable. But the growing complexity of modern societies made such investigations difficult to conduct. As planners began to “see social processes as the links tying
open systems into large and interconnected networks of systems, such that outputs from one become inputs to others,” they were no longer certain of “where and how [to] intervene even if [they] do happen to know what aims [they] seek.” In a sense, the sheer complexity of the modern world has led to planning paralysis, as the very solutions to older problems inevitably create problems of their own. This was a depressing thought.
Nevertheless, Rittel and Webber proposed that instead of glossing over the growing inefficiency of both technological and social fixes, planners—and policymakers more generally—should confront this gloomy reality and acknowledge that no amount of careful planning would resolve many of the problems they were seeking to tackle. To better understand the odds of success, they proposed to distinguish between “wicked” and “tame” problems. Tame or benign problems can be precisely defined, and one can easily tell when such problems have been solved. The solutions may be expensive but are not impossible and, given the right mix of resources, can usually be found. Designing a car that burns less fuel and attempting to accomplish checkmate in five moves in chess are good examples of typical tame problems.
Wicked problems, on the other hand, are more intellectually challenging. They are hard to define—in fact, they cannot be defined until a solution has been found. But they also have no stopping rule, so it’s hard to know when that has happened. Furthermore, every wicked problem can be considered a symptom of another, “higher-level” problem and thus should be tackled on the highest possible level, for “if ... the problem is attacked on too low a level, then success of resolution may result in making things worse, because it may become more difficult to deal with the higher problems.”
Solutions to such problems are never true or false, as they are in chess, but rather good or bad. As such, there could never be a single “best” solution to a wicked problem, as “goodness” is too contentious a term to satisfy everyone. Worse, there is no immediate or ultimate test for the effectiveness of such solutions, as their side effects may take time to surface. In addition, any such solution is also a one-shot operation. Since there is no opportunity to learn by trial and error, every
trial counts. Unlike a lost chess game, which is seldom consequential for other games or non-chess-players, a failed solution to a wicked problem has long-term and largely unpredictable implications far beyond its original context. Every solution, as the authors put it, “leaves traces that cannot be undone.”
The essay contained more than a taxonomy of various planning problems. It also contained a valuable moral prescription: Rittel and Webber thought that the task of the planner was not to abandon the fight in disillusionment but to acknowledge its challenges and find ways to distinguish between tame and wicked problems, not least because it was “morally objectionable for the planner to treat a wicked problem as though it were a tame one.” They argued that the planner, unlike the scientist, has no right to be wrong: “In the world of planning ... the aim is not to find the truth, but to improve some characteristic of the world where people live. Planners are liable for the consequences of the actions they generate.” It’s a formidable moral imperative.
Even though Rittel and Webber wrote the essay with highly technical domestic policies in mind, anyone concerned with the future of democracy promotion and foreign policy in general would do well to heed their advice. Modern authoritarianism, by its very constitution, is a wicked, not a tame, problem. It cannot be “solved” or “engineered away” by a few lines of genius computer code or a stunning iPhone app. The greatest obstacle that Internet-centric initiatives like Internet freedom pose to this fight is that they misrepresent uber-wicked problems as tame ones. They thus allow policymakers to forget that the very act of choosing one solution over another is pregnant with political repercussions; it is not a mere chess game they are playing. But while it is hard to deny that wicked problems defy easy solutions, it doesn’t mean that some solutions wouldn’t be more effective (or at least less destructive) than others.
From this perspective, a “war on authoritarianism”—or its younger digital sibling, a “war for Internet freedom”—is as misguided as a “war on terror.” Not only does such terminology mask the wicked nature of many problems associated with authoritarianism, concealing a myriad of complex connections between them, it suggests—falsely—that such a war can be won if only enough resources are mobilized. Such aggrandizement
is of little help to a policy planner, who instead should be trying to grasp how exactly particular wicked problems relate to their context and what may be done to isolate and tackle them while controlling for side effects. The overall push, thus, is away from the grandiose and the rhetorical—qualities inherent in highly ambiguous terms like “Internet freedom”—and toward the minuscule and the concrete.
Assuming that wicked problems lumped under the banner of Internet freedom could be reduced to tame ones won’t help either. Western policymakers can certainly work to undermine the information trinity of authoritarianism—propaganda, censorship, and surveillance—but they should not lose sight of the fact that all of them are so tightly interrelated that by fighting one pillar, they may end up strengthening the other two. And even their perception of this trinity may simply be a product of their own cognitive limitations, with their minds portraying the pillars they can fight rather than the pillars they should fight.
Furthermore, it’s highly doubtful that wicked problems can ever be resolved on a global scale; some local accomplishments—preferably not only of the rhetorical variety—are all a policymaker can hope for. To build on the famous distinction drawn by the Austrian philosopher Karl Popper, policymakers should not, as a general rule, preoccupy themselves with utopian social engineering—ambitious, ambiguous, and often highly abstract attempts to remake the world according to some grand plan—but rather settle for piecemeal social engineering. This approach might be less ambitious but often more effective; by operating on a smaller scale, policymakers can still stay aware of the complexity of the real world and can better anticipate and mitigate the unintended consequences.
Prophecies Versus Profits
Technological fetishism and a constant demand for technological fixes inevitably breed demand for technological expertise. Technological experts, as clever as they may be on matters concerning technology, are rarely familiar with the complex social and political context in which the solutions they propose are to be implemented.
Nevertheless, whenever nontechnological problems are viewed through the lens of technology, it’s technological experts who get the last word. They design solutions that are often more complex than the problems they were trying to solve, while their effectiveness is often impossible to evaluate, as multiple solutions are being tried at once and their individual contributions are often hard to verify. Even the experts themselves have no full control over those technologies, for they trigger effects that could not have been anticipated. Still, this doesn’t prevent the inventors from claiming their technologies behave according to a plan. It is hard to disagree with John Searle, an American philosopher at the University of California at Berkeley, when he writes that “the two worst things that experts can do when explaining ... technology to the general public are first to give the readers the impression that they understand something they do not understand, and second to give the impression that a theory has been established as true when it has not.”
Chances are that the technological visionaries we count on to guide us into a brighter digital future may excel at solving the wrong kind of problems. Their proposed solutions are technological by definition, for it’s only by touting the benefits of technology that these visionaries have become publicly essential (or as the writer Chuck Klosterman poignantly remarked, “the degree to which anyone values the Internet is proportional to how valuable the Internet makes that person”). Since the only hammer such visionaries have is the Internet, it’s not surprising that every possible social and political problem is presented as an online nail.
Thus, most digital visionaries see the Web as a Swiss army knife ready for any job at hand. They rarely alert us to the information black holes created by the Internet, from the sprawling surveillance apparatus facilitated by the public nature of social networking to the persistence of myth making and propaganda, which is much easier to produce and distribute in a world where every fringe movement blogs, tweets, and Facebooks. The very existence of such black holes suggests that we may not always be able to shape the effects of the Internet as we would like.
The political philosopher Langdon Winner was right when he observed in 1986 that “the sheer dynamism of technical and economic
activity in the computer industry evidently leaves its members little time to ponder the historical significance of their own activity.” Winner could not foresee that the situation would only get worse in the era of the Internet, now that the perpetual revolution it has unleashed has shortened the time and space left for analytical thinking. Nevertheless, Winner’s conclusion—that “don’t ask; don’t tell” is “the unspoken motto for today’s technological visionaries”—still rings true today. Their technological fetishism combined with a strong penchant for populism—perhaps just a way of making the “little guys” in their fan base, now armed with iPhones and iPads, feel important—prevents most Internet gurus from asking uncomfortable questions about the social and political effects of the Internet. And why would they ask those questions if they might reveal that they, too, have little control over the situation? It’s for this reason that the kind of future predicted by such gurus—and they do need to predict some plausible future to argue that their “fix” would actually work—is rarely reflective of the past.
The technologists, especially technology visionaries who invariably pop up to explain technology to the wider public, “largely extrapolate from today or tomorrow while showing painfully limited interest in the past,” as Howard Segal, another historian of technology, once mused. This, perhaps, explains the inevitable barrage of utopian claims every time a new invention comes along. After all, it’s not historians of technology but futurists—those who prefer to fantasize about the bright but unknowable future rather than confront the dark but knowable past—who make the most outrageous claims about the fundamental, world-transforming significance of any new technology, especially if it is already on its way to making the cover of Time magazine.
As a result, excessive optimism about what technology has to offer, bordering at times on irrational exuberance, overwhelms even those with superior knowledge of history, society, and politics. For better or worse, many such people don’t have the resources (and time) for studying how every new iPhone app contributes to the progress of civilization and are thus in desperate need of expert judgment on how technology really transforms the world. It’s thanks to their overblown
claims about yet another digital revolution that so many Internet gurus end up advising those in positions of power, compromising their own intellectual integrity and ensuring the presence of Internet-centrism in policy planning for decades to come.
Hannah Arendt, one of America’s most treasured public intellectuals, was aware of this problem back in the 1960s, when the “scientifically minded brain trusters”—Alvin Weinberg was just one of many; another whiz kid with a penchant for computer modeling, Robert McNamara, was put in charge of the Vietnam War—were beginning to penetrate the corridors of power and influence government policy. “The trouble [with such advisers] is not that they are cold-blooded enough to ‘think the unthinkable,’” cautioned Arendt in “On Violence,” “but that they do not ‘think.’” “Instead of indulging in such an old-fashioned, uncomputerizable activity,” she wrote, “they reckon with the consequences of certain hypothetically assumed constellations without, however, being able to test their hypothesis against actual occurrences.” A cursory glimpse at the overblown and completely unsubstantiated rhetoric that followed Iran’s Twitter Revolution is enough to assure us that not much has changed.
It was more than just the constant glorification of technical, largely quantitative expertise at the expense of erudition that bothered Arendt. She feared that increased reliance on half-baked predictions uttered by self-interested technological visionaries and the futuristic theories they churn out on an hourly basis would prevent policymakers from facing the highly political nature of the choices in front of them. Arendt worried that “because of their inner consistency ... [such theories] have a hypnotic effect; they put to sleep our common sense.” The ultimate irony of the modern world, which is more dependent on technology than ever, is that, as technology becomes ever more integrated into political and social life, less and less attention is paid to the social and political dimensions of technology itself. Policymakers should resist any effort to take politics out of technology; they simply cannot afford to surrender to the kind of apolitical hypnosis that Arendt feared. The Internet is too important a force to be treated lightly or to be outsourced to know-all consultants. One may not be able to predict its impact on
a particular country or social situation, but it would be foolish to deny that some impact is inevitable. Understanding how exactly various stakeholders—citizens, policymakers, foundations, journalists—can influence the way in which technology’s political future unfolds is a quintessential question facing any democracy.
More than just politics lies beyond the scope of technological analysis; human nature is also outside its grasp. Proclaiming that societies have entered a new age and embraced a new economy does not automatically make human nature any more malleable, nor does it necessarily lead to universal respect for humanist values. People still lust for power and recognition, regardless of whether they accumulate it by running for office or collecting Facebook friends. As James Carey, the Columbia University media scholar, put it: “The ‘new’ man and woman of the ‘new age’ strikes one as the same mixture of greed, pride, arrogance and hostility that we encounter in both history and experience.” Technology changes all the time; human nature hardly ever.
The fact that do-gooders usually mean well does not mitigate the disastrous consequences that follow from their inability (or just sheer lack of ambition) to engage with broader social and political dimensions of technology. As the German psychologist Dietrich Dörner observed in The Logic of Failure, his masterful account of how decision-makers’ ingrained psychological biases could aggravate existing problems and blind them to the far more detrimental consequences of proposed solutions, “it’s far from clear whether ‘good intentions plus stupidity’ or ‘evil intentions plus intelligence’ have wrought more harm in the world.” In reality, the fact that we mean well should only give us extra reasons for scrupulous self-examination, for, according to Dörner, “incompetent people with good intentions rarely suffer the qualms of conscience that sometimes inhibit the doings of competent people with bad intentions.”
After Utopia: The Cyber-Realist Manifesto
A few months after Hillary Clinton’s speech on Internet freedom, Ethan Zuckerman, a senior researcher at Harvard University’s Berkman Center for Internet and Society and a widely respected expert on Internet
censorship, penned a poignant essay titled “Internet Freedom: Beyond Circumvention,” one of the first serious attempts to grapple with the policy implications of Washington’s new favorite buzzword. In it, Zuckerman made an important argument that building tools to break through authoritarian firewalls wouldn’t be enough, because there are too many Internet users in China to make it affordable and too many nontechnological barriers to freedom of expression on the Web. “We can’t circumvent our way around censorship.... The danger in heeding Secretary Clinton’s call is that we increase our speed, marching in the wrong direction,” he wrote.
His own contribution to the debate was to elucidate several theories that may help policymakers better understand how the Internet can nudge authoritarian societies toward democratization. “To figure out how to promote internet freedom, I believe we need to start addressing the question: ‘How do we think the Internet changes closed societies?’” wrote Zuckerman. He listed three good potential answers. One such theory states that providing access to suppressed information may eventually push people to change their opinion of their governments, precipitating a revolution. Another one posits that if citizens have access to various social networking sites and communication tools like Skype, they are able to better plan and organize their antigovernment activity. A third theory predicts that by providing a rhetorical space where different ideas can be debated, the Internet will gradually empower a new generation of leaders with a more modern set of demands.
As Zuckerman correctly points out, all of these theories have some intellectual merit. The additional assumptions that he makes, either explicitly or implicitly, are that the American government has a separate pot of money to spend on Internet freedom issues; that most of this money would invariably go to fund technological rather than political solutions; and that the best thing to do is to prioritize which tools are needed the most. Zuckerman’s suggestion, then, is that policymakers first need to figure out which theory is to guide their efforts in online space and then rely on it to allocate their resources. Thus, if they expect to enact change by mobilizing citizens to rise up against their governments, they need to ensure that tools like Twitter and Facebook are
widely available and resistant to both attempts to block access to them and DDoS attacks. In contrast, if they stick to the “liberated by facts” theory, they would need to prioritize access to blogs of the opposition as well as websites like Wikipedia, BBC News, and so forth.
Instead of formulating a better theory to complement Zuckerman’s, one needs to ponder what breeds demand for such theories in the first place. While it is hard to disagree with his warning that, in their pursuit of Internet freedom nirvana, policymakers may be speeding up in the wrong direction, Zuckerman’s neo-Weinbergian philosophy of action seems much more ambiguous. It is founded on a belief that once policymakers understand the “logic” of the Internet, which, in Zuckerman’s interpretation, inherently favors those challenging autocracy and power, albeit in ways that we may not yet understand, they will be able to formulate smarter Internet policies and can then pursue a host of technological solutions to accomplish the objectives of those policies. Thus, from Zuckerman’s perspective, it’s important to articulate numerous theories by which the Internet may be transforming autocracies and then act on those that best match the empirical reality.
In the meantime, the mental gymnastics of proposing and evaluating theories may also add meaning to the term “Internet freedom,” which even Zuckerman acknowledges to be currently empty. It’s this last point that is most troubling: Even though Zuckerman agrees that Internet freedom offers a poor foundation for effective foreign policy, he is nevertheless eager to propose—somewhat cynically—all sorts of fixes to make this foundation last for a year or two longer than it might otherwise. Unfortunately, those rare intellectuals who do know a great deal about both the Internet and the rest of the world—Zuckerman is also an Africa expert—prefer to spend their time seeking marginal improvements to wrong-headed policies, unable or unwilling to see through the pernicious Internet-centrism that permeates them and to reject their very foundation. (The situation is certainly not helped by the fact that the State Department funds some of Zuckerman’s projects at Harvard, as he himself acknowledged in the essay.)
But an even greater problem with Zuckerman’s approach is that, should the “logic” of the Internet defy his expectations and prove elusive,
nonexistent, or inherently antidemocratic, the rest of the proposed course of action also falls apart and is at best irrelevant and at worst deceptive. That the Internet may also be strengthening rather than undermining authoritarian regimes; that making it a cornerstone of foreign policy helps Internet companies deflect the criticism they so justly deserve; that a dedication to the highly abstract goal of promoting Internet freedom complicates a thorough assessment of other parts of foreign and domestic policies—these are not the kind of insights one is likely to gain while groping for a theory to justify one’s own penchant for cyber-utopianism or Internet-centrism. As a result, many of these concerns barely register when future policies are being crafted.
The way forward is not to keep coming up with new theories until they match one’s existing biases about what the logic of the Internet is or should be like. Instead, one should seek to come up with a philosophy of action to help design policies that have no need for such logic as their inputs. But while it’s becoming apparent that policymakers need to abandon both cyber-utopianism and Internet-centrism, if only for their lack of accomplishments, it is not yet clear what can take their place. What would an alternative, more down-to-earth approach to policymaking in the digital age—let’s call it cyber-realism—look like? Here are some preliminary notes that future theorists may find useful.
Instead of trying to build a new shiny pillar to foreign policy, cyber-realists would struggle to find space for the Internet in existing pillars, not least on the desks of regional officers who are already highly sensitive to the political context in which they operate. Instead of centralizing decision making about the Internet in the hands of a select few digerati who know the world of Web 2.0 start-ups but are completely lost in the world of Chinese or Iranian politics, cyber-realists would defy any such attempts at centralization, placing as much responsibility for Internet policy on the shoulders of those who are tasked with crafting and executing regional policy.
Instead of asking the highly general, abstract, and timeless question of “How do we think the Internet changes closed societies?” they would ask “How do we think the Internet is affecting our existing policies on country X?” Instead of operating in the realm of the utopian and the
ahistorical, impervious to the ways in which developments in domestic and foreign policies intersect, cyber-realists would be constantly searching for highly sensitive points of interaction between the two. They would be able to articulate in concrete rather than abstract terms how specific domestic policies might impede objectives on the foreign policy front. Nor would they have much tolerance for a black-and-white color scheme. As such, while they would understand the limitations of doing politics online, they wouldn’t label all Internet activism as either useful or harmful based solely on its outputs, its inputs, or its objectives. Instead, they would evaluate the desirability of promoting such activism in accordance with their existing policy objectives.
Cyber-realists wouldn’t search for technological solutions to problems that are political in nature, and they wouldn’t pretend that such solutions are even possible. Nor would they give the false impression that on the Internet concerns over freedom of expression trump those over energy supplies, when this is clearly not the case. Such acknowledgments would only be factual rather than normative statements—it may well be that concerns over freedom of expression should be more important than concerns over energy supplies—but cyber-realists simply would not accept that any such radical shifts in the value system of the entire policy apparatus could or should happen under the pressure of the Internet alone.
Nor would cyber-realists search for a silver bullet that could destroy authoritarianism—or even the next-best thing to one—for the utopian dream that such a bullet can exist would have no place in their conception of politics. Instead, cyber-realists would focus on optimizing their own decision-making and learning processes, hoping that the right mix of bureaucratic checks and balances, combined with the appropriate incentive structure, would identify wicked problems before they are misdiagnosed as tame ones, as well as reveal how a particular solution to an Internet problem might disrupt solutions to other, non-Internet problems.
Most important, cyber-realists wouldn’t allow themselves to get dragged into the highly abstract and high-pitched debates about whether the Internet undermines or strengthens democracy. Instead,
they would accept that the Internet is poised to produce different policy outcomes in different environments and that a policymaker’s chief objective is not to produce a thorough philosophical account of the Internet’s impact on society at large but, rather, to make the Internet an ally in achieving specific policy objectives.
Cyber-realists would acknowledge that by continuing to flirt with Internet-centrism and cyber-utopianism, policymakers are playing a risky game. Not only do they squander plenty of small-scale opportunities for democratization that the Internet has to offer because they view it from too distant a perspective, but they also inadvertently embolden dictators and turn everyone who uses the Internet in authoritarian states into unwilling prisoners. Cyber-realists would argue that this is a terribly expensive and ineffective way to promote democracy; worse, it threatens to corrupt or crowd out cheaper and more effective alternatives. For them, the promotion of democracy would be too important an activity to be run out of a Silicon Valley lab with a reputation for exotic experiments. Above all, cyber-realists would believe that a world made of bytes may defy the law of gravity, but absolutely nothing dictates that it should also defy the law of reason.