“PEOPLE THINK of history in the long term, but history, in fact, is a very sudden thing.” This observation from one of Philip Roth’s novels applies with particular force to the contemporary cult of human rights. Most people today believe that the prominence of rights is the almost-inevitable conclusion of a long process of moral development. Originating in Greco-Roman philosophy and Judeo-Christian religion, so the story goes, the idea of human rights expressed a cosmopolitan vision of universal humanity, which went on to find expression in modern times in the English Civil War, the French and American Revolutions, various antislavery movements, the Second World War, and the struggles against colonialism and racism. The history of the West is a continuous unfolding of this majestic idea, and if contemporary Western societies are superior to others, past and present, it is because of their respect for personal liberties.

Cited at the beginning of The Last Utopia by Samuel Moyn, professor of history at Columbia University, Roth’s observation encapsulates the central theme of Moyn’s brilliantly illuminating book. For anyone who reached adulthood in the United States and other Western countries during the past ten or twenty years, human rights are an immemorial inheritance, only now properly developed, which provides the only possible framework for moral and political thought. Over the last few decades, Moyn writes:

a new field has crystallized and burgeoned. Almost unanimously, contemporary historians have adopted a celebratory attitude toward the emergence and progress of human rights, providing recent enthusiasms with uplifting backstories.

According to this now-deeply-entrenched view, the journey toward our present state of virtue and enlightenment was a long but straight road:

In recasting world history as raw material for the progressive ascent of international human rights, [many contemporary historians] have rarely conceded that earlier history left open diverse paths into the future, rather than paving a single road toward current ways of thinking and acting.

This was not just a way of reading (or misreading) history. The focus on human rights had large practical consequences. From Jimmy Carter onward, this tenet came to be invoked as “the guiding rationale of the foreign policy of states.” Almost never used in English before the 1940s, “human rights” were mentioned in the New York Times five times as often in 1977 as in any prior year of the newspaper’s history. By the nineties, human rights had become central to the thinking not only of liberals but also of neoconservatives, who urged military intervention and regime change in the faith that these freedoms would blossom once tyranny was toppled. From being almost peripheral, the human-rights agenda found itself at the heart of politics and international relations.

In fact, it has become entrenched in extremis: nowadays, anyone who is skeptical about human rights is angrily challenged to explain how they can condemn Nazism—as if the only options that exist in political thought are rights-based liberal universalism or out-and-out moral relativism. The fact that those who led the fight against Nazism understood the conflict in quite different terms, with Winston Churchill seeing it simply but not inaccurately as a life-and-death struggle between civilization and barbarism, is not considered relevant.

The contemporary human-rights movement is demonstrably not the product of a revulsion against the worst crimes of Nazism. For one thing, the Holocaust did not figure in the deliberations that led up to the Universal Declaration of Human Rights adopted by the UN in 1948. As Moyn notes, “In real time, across weeks of debate around the Universal Declaration in the UN General Assembly, the genocide of the Jews went unmentioned in spite of the frequent invocation of other dimensions of Nazi barbarity.” And Raphael Lemkin, the moving spirit in promoting the convention against genocide that was adopted a day before the Universal Declaration, was fully aware that at the time, genocide was by and large considered separate from (and perhaps less important than) human rights. Contrary to received history, the rise of human rights had very little to do with the worst crime against humanity ever committed.

INDEED, THE primacy of this ideal is very recent. It came about quite abruptly in the late 1970s, a full thirty years after the end of World War II. And the ascendancy of rights as we now understand them came as a response, in part, to developments in the academy. As Moyn astutely notes, “In a tiny bibliography on rights composed by political theorists in 1978, next to no authors treated ‘human rights’ as such.” My own experience confirms the accuracy of this observation. When I began teaching political philosophy in Britain in the early seventies, rights theory was only one among several traditions, and by no means the one most closely studied. There were versions of utilitarianism, some scornful of rights (with Jeremy Bentham describing them as “nonsense upon stilts”), others that accepted that rights have important social functions (as in John Stuart Mill), but none of them asserted that rights were fundamental in ethical and political thinking. There were various kinds of historicism—the English thinker Michael Oakeshott’s conservative traditionalism and the American scholar Richard Rorty’s postmodern liberalism, for example—that viewed human values as cultural creations, whose contents varied significantly from society to society. There was British theorist Isaiah Berlin’s value pluralism, which held that while some values are universally human, they conflict with one another in ways that do not always have a single rational solution. There were also varieties of Marxism which understood rights in explicitly historical terms.

In all of these perspectives, human rights were discussed—when they were mentioned at all—as demands made in particular times and places. Some of these demands might be universal in scope (the demand that torture be prohibited everywhere, for example, was frequently, though not always, formulated in terms of an all-encompassing necessity), but no one imagined that human rights comprised the only possible universal morality. “A universalism based on international rights,” as Moyn writes, “could count as only one among others in world history.” Until a few decades ago, anyone who was well educated understood that most of the varieties of universalism that have ever existed either lacked the very idea of rights (as in Aristotle and Thomas Aquinas) or else invoked them in order to reach authoritarian conclusions (as did Thomas Hobbes).

Undermining the narrative of a virtually inevitable human evolution, the notion that rights are the foundation of society came only with the publication of the Harvard philosopher John Rawls’s vastly influential A Theory of Justice (1971). In the years following, it slowly came to be accepted that human rights were the bottom line in political morality. Early modern political theorists like John Locke may have asserted the importance of rights in ways that helped shape the American Constitution; but for these thinkers, rights were dictates of natural law, which had to be obeyed because they emanated from God. Immanuel Kant’s view was essentially the same. The belief that rights are fundamental in political ethics is a late twentieth-century fancy. Interestingly, Rawls did not argue for any sort of global governance—as Moyn points out, Rawls accepted “the plurality of nations.” Also, unlike a later generation of philosophers, Rawls was conversant with other traditions of thinking and took them seriously. Moyn explains, “When John Rawls famously reclaimed individual rights . . . it had no apparent consequences for either the general or the philosophical ascent of human rights (an expression Rawls did not use).” Even so, it was Rawls’s work that was chiefly responsible for the triumph of the narrow type of liberalism that has since dominated Anglo-American political philosophy. The result was to promote a type of liberal legalism in which the rule of law was simply assumed, while politics was virtually ignored.

THE MOST damaging effect of Rawls’s work was the neglect of the state that it produced. The natural rights that were asserted in the early modern period by Hobbes and other thinkers were closely linked with the modern state that was emerging at the time. As Moyn notes, the “freestanding individual of natural rights . . . was explicitly modeled on the assertive new state of early modern international affairs.” Hobbes was insistent that the right to self-preservation can be protected by a state that accepts no limits on its authority to act—otherwise, there is only a “war of all against all” in which everyone must be on guard against everyone else. Other rights theorists such as Locke, more recognizable as liberals in a modern sense, wanted to impose substantive limits on what governments could legitimately do; but they too were clear that rights could only be respected in the context of an effective modern state. Human rights might in some sense exist prior to the state, but without the state they counted for nothing.

Consider interwar Central Europe, an example Moyn does not discuss. Most likely nothing could have prevented the dissolution of the Hapsburg monarchy, but the result of actively promoting its dismemberment, as Woodrow Wilson did, was a “war of all against all” among the fledgling nation-states, in which minorities were the losers (none more so than Jews, who had nowhere to go). This was not an unpredictable development, for as should be clear, human rights and the nation-state are inextricably joined. As Moyn puts it, “The alliance with state and nation was not some accident that tragically befell the rights of man: it was their very essence, for the vast bulk of their history.” The Universal Declaration of Human Rights, which seems to embody the modern-day utopia, encouraged visionaries to look to a time when rights would transcend sovereign states, but still the focus on individual countries remained.

Indeed, for many in the period from the 1950s up to the 1970s, when human rights acquired their present focus on the moral claims of individuals, rights had meaning only in the context of a sovereign state. The full identification of human rights with a fundamental entitlement to national self-determination—the collective right to rule oneself and not be controlled by others—emerged only with the anticolonial movement. When the UN endorsed the Declaration on the Granting of Independence to Colonial Countries and Peoples in 1960, human rights and national self-determination became practically equivalent. At this point, the question of whether human rights were better protected in postcolonial regimes lost much of its meaning. Since the most fundamental freedom had been secured by the fact of independence, anything else was secondary.

IT IS partly the loss of the insight that human rights can only be secured by an effective state that explains the failure of the regime-change policies promoted by neoconservatives and liberal hawks over the past decade. If rights are what humans possess in the absence of a repressive regime, all that needs to be done to secure human rights is to remove the despot in question. But if rights are empty without the state to protect them, then the nature of the government that can be reasonably expected to emerge when tyranny has been overthrown becomes of crucial importance. The political ideas that are taught in universities do not often shape political practice in any direct fashion. But there can be little doubt that those who promoted the Iraq War believed the removal of Saddam Hussein would allow something like liberal democracy to flourish in the country, and in believing this, they showed that their thinking had been molded by theories of rights that ignored the crucial role of the state.

A willed ignorance of history was also at work. If rights are universally human, embodying a kind of natural freedom that appears as the accretions of history are wiped away, the past has little significance. But if human rights are artifacts that have been constructed in specific circumstances, as I would argue, history is all-important; and history tells us that when authoritarian regimes are suddenly swept aside, the result is often anarchy or a new form of tyranny—and quite often a mix of the two.

Breaking up systems of colonial or despotic power may be a requirement of justice and humanity, but it often goes with violent conflict and ethnic cleansing. A scenario of this kind has been enacted in post-Saddam Iraq. Constructed from provinces of the Ottoman Empire and containing populations with a long history of enmity, Iraq could not be democratized without a high risk of intercommunal conflict and a near certainty of Kurdish secession. So much was obvious to Gertrude Bell, the British civil servant who more than anyone else created the state of Iraq, when she argued in the early twenties that power must be kept in the hands of the Sunnis, despite their smaller numbers, “otherwise we will have a theocratic state, which is the very devil.” The current regime in Iraq—an unstable combination of popular theocracy with anarchy—would not have surprised her. What Bell could not have anticipated, but what was clear by the time of the invasion in early 2003, was that the destruction of Saddam’s tyranny would empower Iran, which is rapidly emerging as the principal state builder in the country.

Unlike Iraq, which under Saddam’s rule was a variety of present-day dictatorship, Afghanistan has never been ruled by a modern state. Even the writ of the Soviets, who during their period of occupying the country exercised power with a degree of ruthlessness unimaginable for coalition forces today, did not run much beyond the capital. Now Afghanistan is not so much a failed state as a pseudostate, in which power rests with tribes, warlords and drug dealers. The belief that human rights can be secured in these conditions is even more delusional than in the case of Iraq. Whatever else happens after the bulk of allied troops withdraw, the Taliban will be a potent presence in any government that is formed. Even if the pretense of democratic institutions is maintained, vital freedoms—not least of all for women, whose freedoms have already been compromised under the Karzai regime—are likely to be extinguished altogether. Democracy cannot protect human rights when the most powerful political forces in the country reject them as illegitimate.

Of course, there were other reasons for intervention in Iraq apart from the defense of human rights, and the unhappy Afghan drift from an initially legitimate, limited and successful operation to destroy terrorist bases to the present inchoate mission is partly explained by geopolitical factors. But the fact remains that regime change in both countries was supported in Western states in part because it was believed intervention could promote human rights. If failure was predictable in each case, what accounts for Western elites supporting the use of force to achieve objectives that clearly could not be realized?

The answer, Moyn suggests, lies in the fact that the idea of rights has seized hold of the utopian imagination. Human rights provide “a moral alternative to bankrupt political utopias”—a replacement for the universal political projects that shaped much of the dark history of the twentieth century. The human-rights movement shared the vision that fueled utopian politics—not just the anticapitalist politics of old-fashioned Communist parties, but also internationalist and anticolonialist movements, liberation theology and vain attempts to forge “socialism with a human face.” Communist rule proved to be unprecedentedly tyrannical, postcolonial regimes were sometimes as repressive as their predecessors and even heroic dissidents against totalitarian rule (such as Aleksandr Solzhenitsyn) were not always the liberals that Western supporters imagined them to be. Real-world politics never delivered the utopian dream, so human-rights activists insulated themselves from these disillusioning facts by assuming a moral stance that affected to transcend politics. Not having to make the painful choices and shabby compromises that always go with active political engagement, they could enjoy an uplifting sense of moral purity along with the comforting conviction that if anything went wrong, it was not their fault.

MOYN’S ACCOUNT of the utopian origins of the contemporary human-rights movement is impressively worked out and largely convincing. But it fails to capture what is truly unrealistic in the human-rights project, and as a result underestimates the damaging effects of its ascendancy. Like many writers on utopian movements today, Moyn shrinks from attacking the impulse that inspires them. Instead, anxious to pay his respects to the movement, he writes:

To give up church history is not to celebrate a black mass instead. I wrote this book out of intense interest in—even admiration for—the contemporary human rights movement, the most inspiring mass utopianism Westerners have had before them in recent decades.

One reason for Moyn’s positive evaluation of the human-rights movement comes from his partial understanding of utopianism. Its agenda, he writes, “is a recognizably utopian program: for the political standards it champions and the emotional passion it inspires, this program draws on the image of a place that has not yet been called into being.”

The core of utopianism, however, is not in the fact that the world it envisions is as yet nonexistent. Antislavery movements are sometimes invoked with the aim of showing that utopianism can have positive results. But however difficult they may have been to achieve, the goals of abolitionists were in no sense unachievable. We know that a society without slavery is possible, if only because history (ancient and modern) contains numerous examples of such places. A project is utopian when it can be known in advance that its central objectives cannot be realized. This may be because these aims are impossible in any human society, or because they cannot be achieved in particular communities in any future that can reasonably be anticipated. Marx’s Communism belongs in the first category, along with Lenin’s Bolshevism. The attempt to install liberal democracy in Iraq, or a modern state in Afghanistan, belongs in the latter, as did the attempt to export a Western-style market economy into post-Communist Russia. Again, though I cannot argue the point here, I am confident that the belief that China will eventually adopt something like a Western mode of government is no less utopian.

It is worth noting that the distinction between two types of utopianism has nothing to do with the familiar polarity of reform and revolution. Sometimes—as in Communist Eastern Europe—revolution is necessary in order to achieve piecemeal change. Equally, a project can be utopian when its advocates believe it can be achieved incrementally. Debates about gradual improvement or total transformation are just a distraction. The question is whether the goals of the project are possible at all, and here the human-rights movement is utopian in both senses of the term.

If securing rights presupposes an effective state, as early modern thinkers acknowledged and contemporary liberals have forgotten, the human-rights agenda is plainly utopian in much of the world. Many of the nearly two hundred actually existing sovereign states are collapsed, corroded, criminalized or weak. Incapable of maintaining even a rudimentary peace, they find the task of sustaining a government, let alone rights, beyond their competence. Contemporary human-rights movements have followed recent liberal philosophy in focusing on states as the principal violators of personal liberties; but in many countries it is tribal militias, organized crime or violent fundamentalists that are the larger threat. Anarchy is as inimical to freedom as tyranny, sometimes more so. That is one reason why regimes of the kind that exist in post-Communist Russia and China have secured a certain popular legitimacy.

NO DOUBT human-rights advocates will argue that states in the rest of the world can gradually be raised to Western standards of government. At a time when wealth and power are moving quite rapidly from north and west to east and south (returning to a historical normalcy that prevailed until only a few centuries ago), the notion that Western governments can any longer act as moral tutors to the world has a delusional quality, but let that pass. The deeper truth is that the human-rights project is utopian everywhere—including the Western countries in which it is currently so deeply entrenched.

The core of the project is to place certain basic freedoms in a realm of constitutional law where they are beyond any possibility of political attack. But all the panoply of rights will not stop a government from violating the most basic protections when political elites—along with most people—support such encroachments, or simply do not care. Torture was prohibited in the Habsburg monarchy in 1776 by Maria Theresa, an absolute ruler. Over two hundred years later, it was rehabilitated for a time in the world’s preeminent constitutional democracy—the United States. If it is true that torture is once again prohibited, this is not because the barbaric practice has been legally challenged, but because of a political change.

The intrinsically utopian nature of the human-rights project lies in what its advocates most prize about their movement—its antipolitical orientation. Basic human freedoms do not form a harmonious system whose dictates can be decided by a court, as Rawls and the human-rights movement imagine; they are often at odds with one another—freedom of information with the freedom that goes with privacy, for example. The role of politics is to devise a modus vivendi among these rival liberties. At the back of the rights movement is a vision of an ideal constitution that could in principle be installed everywhere. But such a framework is impossible even in theory, for there are ethical and political conflicts that admit no single, right solution. If neo-Nazism could be countered in the countries where it is reemerging by curbs on free expression and political association, would it be wrong to impose such limits? The answer depends not on any imaginary “rights” but on the effectiveness of the restrictions. Where they have a decent chance of working, I for one would happily support them.

The human-rights project may have played a part in helping to frame a universal moral minimum—what Moyn, following the jurist Henry Steiner, describes as their “catastrophe prevention” role—despite the fact that, as Lemkin’s struggle to secure the convention against genocide shows, the two have not always gone together. The role of the rights movement as the vehicle for a covert type of utopian ideology and politics is more questionable. Of course, there can be no question of rolling back the ascendancy of rights. Though they cannot be relied on, the legal protections that go with these freedoms do impose some restraint on power. Where the human-rights project has become harmful is in nurturing the sickly dream of a time when the intractable dilemmas of ethics and politics will be overcome, transcended in an empire of law.

Human rights are not the last utopia—just the one we must presently live with. The pursuit of the impossible is too much a part of the modern Western tradition ever to be truly renounced. The idea that utopianism will disappear is itself a utopian dream. The most that can be hoped for is that the piety which surrounds human rights will be tempered from time to time with a little skeptical doubt. It is hard to think of a better start than Moyn’s seminal study.

John Gray is the author of Black Mass: Apocalyptic Religion and the Death of Utopia (Farrar, Straus and Giroux, 2007), which in 2008 received the Lannan Foundation Notable Book Award. His newest book, The Immortalization Commission, will be published in March 2011 by Farrar, Straus and Giroux.