Small Precautions

Thursday, January 08, 2015

Unfortunately, the answer to "who is to blame" is not simple. One perhaps useful way to think about responsibility for terrorism is as a series of concentric circles, each less causally immediate as you move out from the center, but each also more all-encompassing and therefore harder to address.

The inner circle is the immediate killers: their personal psychology, their experiences, and so on. The next circle comprises the (in this case jihadi) violent ideologists, as well as their tacit supporters. The next is the intolerance and lack of assimilative mechanisms in the (in this case) historically non-Muslim societies in which many Muslims find themselves implanted (often by birth rather than individual choice). The next is the failure of some Muslims to reconcile their religious faith to the exigencies of a global system which is predicated on information openness and which includes many liberals, and indeed many Islam-haters -- whom Muslims cannot avoid seeing and hearing. In the next circle out we find the unjust and indecent historical practices of the West against the non-West (including "the non-West at home"), which obviously includes but is not particular to the treatment of countries and peoples in the Islamic world.

No doubt one may adduce other layers, and certainly one can debate the order of the circles: they overlap in non-concentric orbits, so that different ones are prior in some places than in others; the "blame" for terrorism is very different in Borno than in the banlieues. The key point, however, is that, depending on which of these circles one privileges, one can come to very different conclusions about whom to blame for particular acts of terrorism.

An essentially historical method, this approach is almost certainly not amenable to quantification, but it does have the virtue of explaining why different people of decent intent and intellectual integrity can come up with highly divergent interpretations of responsibility for terrorist atrocities. This same mode of analysis can be used to assess responsibility for other kinds of terrorism too, from Breivik and McVeigh, to ETA and the IRA, to today's Naxalites and the anarchists of the turn of the 20th century.

Monday, April 21, 2014

I have just finished Thomas Piketty's Capital in the Twenty-first Century (Harvard UP), and it is magnificent. The Book We Need Now. Much thoughtful ink has been spilled on the technical and historical aspects of the book. What's most striking to me is the respectful hearing it is getting even on the right, which acknowledges (one might even say surrenders to) the central thrust of the arguments it makes about the shape and sources of galloping inequality and the dire social and thus political implications thereof.

While I have little to offer technically on the subject, I thought it might be interesting to try to situate the book historiographically, via some personal reflections on my experience of reading it.

Piketty is precisely my age, and has apparently been on precisely the same political trajectory. That trajectory is defined by two formative aspects of our youth: on the one hand, we're both children of post-68 leftist intellectuals, who passed to us in equal measure a respect for the values of socialist humanism and a distrust of the institutions of political power; on the other hand, the central political experiences of our childhoods were the belligerent revanchism of Reagan/Thatcher, the corrupt cynicism of Mitterrand/González, and the feckless foolishness of Gorbachev—capstoned by the collapse of Eastern European Communism in the very year we reached our majority.

Along with the impression left by post-Tiananmen China's capacity to generate (highly inegalitarian) wealth, this collapse produced two crucial psycho-political instincts in people of our specific age and political upbringing. First, it generated a deep disbelief in the utopian nostrums of so-called actually existing socialisms, which we were just old enough to have believed were a "permanent alternative" to liberal capitalism, but just young enough never to have personally committed to, despite our upbringings. (This is a very microgenerational experience: for those even four or five years younger or older than us, at least one of these does not apply.) Second, it led us to appreciate the economic importance of price mechanisms, innovation, and competitiveness, without generating any love for capitalism as a system or any respect for the self-regard of the rich, whom people with our background regard less as exemplars of meritocracy than as avaricious parasites. For us, TINA is the Big Lie of our times: the fact that socialism failed as a political project was no reason to believe the story (the one the Right in our countries told about the lesson of this failure) that capitalism was humane, and not still an ecologically rapacious form of social vampirism.

What I find beautiful about Piketty's book is that it crystallizes and speaks to and for this worldview — that is, to the sensibility of a "red diaper" GenXer. It is a book written by someone who watched the socialist-utopian eidolon of his elders implode, without ever buying into the liberal-utopian promises made (or the sense of political limits imposed) by the successor regime(s).

Saturday, April 19, 2014

The New York Times had a fascinating profile of Paul Kingsnorth, whose "Uncivilization" project looks squarely in the face of the environmental movement's failure to stop the wholesale ecological ransacking of the planet, and decides to throw in the towel: to adopt a stance of resignation and, instead of fighting the inevitable, to focus on preparing "to live through it with dignity and honor."

Now, this is a position that on one level I find quite congenial. As I've remarked many, many times, climate change is not a "problem" (which implies there are "solutions"), nor even a "wicked problem," but rather a "dilemma." And dilemmas require existential readjustments rather than a frantic search for solutions.

The long-term future is of a tremendously crowded, drastically climate-altered planet that has been strip-mined of all the easily accessible resources, and pockmarked with toxic dumps. A focus on creating a social world that sustains human and political decency in the face of that reality strikes me as at least as noble a calling as describing oneself as an environmentalist while trying to invent better batteries for coal-powered technorati-mobiles.

Where I get off the bus is with the Uncivilization team's counter-Enlightenment discourse, which as always tends to operate in some nether space between tedious and sinister:

The myth of progress is to us what the myth of god-given warrior prowess was to the Romans, or the myth of eternal salvation was to the conquistadors: without it, our efforts cannot be sustained. Onto the root stock of Western Christianity, the Enlightenment at its most optimistic grafted a vision of an Earthly paradise, towards which human effort guided by calculative reason could take us. Following this guidance, each generation will live a better life than the life of those that went before it. History becomes an escalator, and the only way is up. On the top floor is human perfection. It is important that this should remain just out of reach in order to sustain the sensation of motion.

Recent history, however, has given this mechanism something of a battering. The past century too often threatened a descent into hell, rather than the promised heaven on Earth. Even within the prosperous and liberal societies of the West progress has, in many ways, failed to deliver the goods. Today’s generation are demonstrably less content, and consequently less optimistic, than those that went before. They work longer hours, with less security, and less chance of leaving behind the social background into which they were born. They fear crime, social breakdown, overdevelopment, environmental collapse. They do not believe that the future will be better than the past. Individually, they are less constrained by class and convention than their parents or grandparents, but more constrained by law, surveillance, state proscription and personal debt. Their physical health is better, their mental health more fragile. Nobody knows what is coming. Nobody wants to look.

It’s not that any of this is empirically wrong, mind you; it’s just that the only morally non-monstrous approach is forward, not backward. At this point, with seven billion technorationality-dependent people on the planet, to reject technorationality is implicitly to accept or perhaps even to embrace a terrible triaging of humanity as the only possible future. We must continue to invest in the search for better technologies, if only to slow down the arrival of the inevitable. I'm with George Monbiot and Naomi Klein in their critique of Kingsnorth's suggestion that we should "step back":

Monbiot responded that “stepping back” from direct political action was equivalent to a near-criminal disavowal of one’s moral duty. “How many people do you believe the world could support without either fossil fuels or an equivalent investment in alternative energy?” he asked. “How many would survive without modern industrial civilization? Two billion? One billion? Under your vision, several billion perish. And you tell me we have nothing to fear.” Naomi Klein also sees a troubling abdication in Kingsnorth’s work. “I like Paul, but he’s said rather explicitly that he’s giving up,” she told me. “We have to be honest about what we can do. We have to keep the possibility of failure in our minds. But we don’t have to accept failure. There are degrees to how bad this thing can get. Literally, there are degrees.”

In the end, we need to do both: we need to both prepare ourselves existentially for a world of less, and to work without surcease to slow that coming of less. Above all, we need to fight against those who use the claim that the world of less is not coming as a way to justify their own seizure and arrogation of more and more for their own private selves.

Saturday, February 15, 2014

Since I find Storify a rather mystifying app, I'm going to take a series of tweets I made yesterday and turn them into a brief blogpost instead. The motivation for this was some reading I've been doing lately about the origins of academic advising of policy-makers and how economics came to assume its preeminent role in policymaking in the 20th-century United States.

Among his many world-historical sins, Woodrow Wilson was the first US president to formally assemble a team of academic advisors. Known as "The Inquiry," this group of men (and I believe it was all men) advised the President in preparation for the peace settlement that would follow World War I, particularly with respect to how territorial boundaries ought to be redrawn. Based out of the New York Public Library, the Inquiry included many of the nation's top geographers (Isaiah Bowman), journalists (Walter Lippmann, Walter Weyl), lawyers (Louis Brandeis, David Hunter Miller), and historians (James Truslow Adams [coiner of the term "American Dream"], James Shotwell, Paul Monroe, Frank Golder -- and, of course, not Charles Beard).

From our current historical conjuncture, what's most striking about the Inquiry, however, is the category of scholars who were almost completely absent from the Inquiry, namely economists. The only one of note was Simon Patten, whose voice on the Inquiry was relatively faint and whose role remained peripheral. Such marginalization of economists from a major policy intervention would be unthinkable in today's policy climate, in which economists wield unchallenged hegemony. It also reflects President Wilson's ideological reading of the moral purpose of World War I, namely to make national self-determination possible. Assessing the geographic contours of "a nation" required understanding the history (or, more accurately, the mythicized memories) of the peoples in question, and how this history could be legally mapped onto particular territories: a set of questions that evidently "belonged" to the disciplines of history, geography, and law. Whether these entities were economically viable or sensible was sidelined—by implication, something for the nations themselves to address, once their historically appropriate boundaries had been established. (That sense of priorities in itself would be unlikely today.) Indeed, Patten's marginalization within the Inquiry stemmed from his proposal that the new Europe be divided up not according to the principle of national self-determination, but in terms of natural economic units—a grubbily material suggestion ill-suited to Wilson's messianic quest to remake Europe.

After World War I, the core members of the Inquiry would go on to found the Council on Foreign Relations (you can read a celebratory in-house history of that transition here). The CFR's particular profile in the wonkish world today continues to reflect these origins: based in New York, it is one of the few think tanks that remains unbeholden to economics as a discipline, offering space in particular for social scientists and journalists to expound in non-technical language (for an idealized audience of engaged businessmen) on the appropriate bases of long-range strategy for American foreign policy.

What's perhaps even more surprising is that economists remained marginal to academic policy-advising even as late as the 1930s, when President Franklin Roosevelt assembled his "Brain Trust," which he modeled after the previous Democratic President's Inquiry. Unlike the Inquiry, which was addressing a challenge for which economics might plausibly seem a secondary matter, the urgent task facing the Brain Trust in 1933 was the crisis of the Great Depression, then well into its fourth devastating year. And yet even in this circumstance, in which the policy question at hand appeared to be of an essentially economic nature, the community of expertise that FDR called together was dominated by lawyers and historians rather than economists. (The original Brain Trust consisted of three Columbia professors, Raymond Moley, Rexford Tugwell, and Adolph Berle; and of the extended group of about 20 people, only 3 were economists.)

In fact, the rise to policy hegemony of economics would take place during World War II. Peter Galison has argued that economists ascended to the top position above all because of their decisive planning and assessment roles in the strategic bombing survey, which culminated in the nuclear annihilation of Hiroshima and Nagasaki. (Though as one reply to the tweet noted, economists were also central to the War Production Board, supplanting the predominance that businessmen had enjoyed in that task during the previous war.) Although academics from every discipline supported the war effort, the economists, along with the physicists, managed to get (or take) the lion's share of the political credit. What the physicists got in return for their effort was tens of billions of dollars worth of funding for the next 70 years, and what the economists got in return was a permanent seat at the policy-making table, institutionalized in 1946 with the establishment of the Council of Economic Advisers. As Angus Burgin has observed, "For better or for worse, we now live in an era in which economists have become our most influential philosophers, and when decisions made by economistic technocrats have broad and palpable influence on the practice of our everyday lives."

By contrast, historians' voice in policy-advising has been reduced to a somewhat hush-hush once-a-year dinner with the President. As to whether the economists' rise to policy-advising hegemony has been good for our governance, well, your mileage may vary...

Key readings on "The Inquiry":

Laurence Gelfand, The Inquiry: American Preparations for Peace, 1917-1919

Sunday, January 26, 2014

I just read Habits of the Heart for the first time since it came out, and whoa is it a period piece, sure to be at the center of reading lists about the 1980s. A quintessential examination of the mental space of middle class white America, in the late Cold War years, the book is a curiously normative document framed as a piece of positive sociology. Its immense popularity no doubt stemmed from this balancing act, as well as the great learning wrapped up within Bellah's mellifluous if curiously relaxed and at times repetitive prose.

Despite the nuances, at the end of the day the argument is quite simple: the narcissistic pursuit of material abundance (what Bellah in an earlier phase of his career had celebrated as "modernization") has revealed itself to Americans as quite empty (the book refuses the Marxist language of "alienation," though it could well be rewritten in that frame), and the choice over how to move forward is between what the authors refer to as the "therapeutic model," on the one hand, and a return to communitarian integration, focused around family and religion, on the other.

Certainly the critique of therapeutalism is sound. Basically, therapy is designed to make people accept the purely individualistic premises of American social life that are the primary target of HotH: "The problem with therapy is not that intimacy is tyrannically taking over too much of public life. It is that too much of the purely contractual structure of the economic and bureaucratic world is becoming an ideological model for personal life.... The prevalence of contractual intimacy and procedural cooperation, carried over from the boardroom to bedroom and back again, is what threatens to obscure the ideals of both personal virtue and public good." (127) "While the emphasis on connectedness and community would seem to be an advance over 'noncaring self-actualization,' one must ask whether the relentless emphasis on self-interest does not raise doubts as to whether there has really been a shift." (135).

These quotes show why HotH typifies one of the main directions people could go as the invested technocratic hopes for what is here referred to as "the Administered society" faded away, especially if they refused to embrace what was not yet being called neoliberalism (i.e., the contractual structure of the economic world as an ideological model for personal life) -- namely, communitarianism, of which the book is the great representative. Turning away from technocratic managerialism, the book also offers a deep critique of the enduring American cult of individualism, both in its "utilitarian" form (the rush to get ahead and/or keep up) and in its "expressive" form (the desire to "find oneself" by defining one's own private ethics and system of belief).

So what should Americans do once they give up on these materialistic and personalized conceptions of fulfillment? On every page, the book reinforces the notion that the proper cure for what ails the American soul (here called "heart") is a return to republican political values and communal integration, especially those offered by tolerant religious sects. Enlightenment universalism was a noble fiction, the book suggests, one that we are too sophisticated to believe in any longer. The book closes with a methodological call for sociology to reassert itself as "public philosophy," that is, as the profession of norms: the assertion of belief and moral advocacy, presumably in the cause of the norms the book has drummed into the reader for 300 pages.

In other words, to be slightly anachronistic (and a little mean), HotH is "1000 points of light" for liberals. The text is highly symptomatic of that worldview for all the things it doesn't do, and for all the things it doesn't acknowledge not doing. It barely acknowledges that it is not about all of America, but specifically about white, middle-class, suburban America. It remains completely uninterested in any broader transnational context for the struggles it discusses. Its critique of contemporary economic life focuses more on what corporate practices do to the interior lives of workers than on the social injustices perpetrated or reinforced by those structures. It shamelessly blends fact and value, claiming that all Americans yearn for the solutions it poses, whether or not they quite realize it (again, while the book studiously avoids Marxist jargon, the shadow of "false consciousness" shrouds much of the argument). There is no acknowledgement of the darker aspects of American history, not just the vicious inter-communal hatreds (these are treated as having faded, making the same sorts of assumptions about historical directionality that Bellah made in his modernization-advocating days), but also the intra-communal repressiveness that is essential to the integrating function communities serve. Bellah implicitly assumes a basic compatibility between community and individual, that is, that communal endeavor is the best way to achieve individual fulfillment rather than the abnegation of the same. To which one can only say: that really depends on what your community makes of your individual desires.

Finally, there is a curious note about the anxiety of influence: while Bellah returns obsessively to Tocqueville as the touchstone for the communitarianism he calls for, the book barely acknowledges (except via brief, largely dismissive footnotes) other sociological investigators who have plowed the same terrain with strikingly different results, notably the Lynds, David Riesman, and Christopher Lasch.

Sunday, December 01, 2013

Over at The New Inquiry Mike Konczal has an excellent review of Philip Mirowski's new book, Never Let a Serious Crisis Go to Waste. Mirowski's book consists of two distinct pieces. The first is an effort to define that most abused of contemporary academic buzzwords, neoliberalism; the second is an account of how and why the economics profession as a whole has failed to reform itself in the wake of the 2007-8 financial crisis and global depression. Konczal criticizes both halves of this argument, more fairly in the case of the first half than the second.

While appreciating what Mirowski is trying to do in offering a clear definition of "neoliberalism" - a term that, as he says, should be "added to the 'avoid jargon' list on graduate-school syllabi" - Konczal takes issue with Mirowski's heterodox reading of the term. In brief, Mirowski argues that neoliberalism is best seen not as an ideology that aims at "free markets" - that is, at getting government out of the regulatory game - but rather as a system in which the government sets up markets that favor capital over labor. By contrast, Konczal argues that neoliberalism is better seen as class warfare, tout court.

But is this really the choice? Can't it be both?

While I agree with Konczal that neoliberalism is a code word for class warfare, things get complicated depending on how literally we take the metaphor of war. Does this metaphor necessarily imply a systematic strategy handed down by the standing committee of the ruling class (a role Mirowski elsewhere has seemed to impute to the Mont Pelerin Society)? That seems implausible. A more reasonable reading of the metaphor is that there is a struggle for power going on between two different categories of interest groups, but that one side is much more effective in organizing its efforts (mainly because the collective action problems of capital holders are far fewer than those of laborers, and because the former have many more resources at their disposal than the latter). This more modest understanding of the "war" metaphor also permits us to understand that there are meaningful debates within the class of capital holders as to the best strategy for sustaining capital accumulation.

This latter point is the focus of Konczal's critique of Mirowski. As Konczal usefully points out, the central policy debate today is within neoliberalism. On the one hand, there are those who think that the best way to improve capital formation (marketed to the masses as "growth") is for the state to help construct and regulate markets in order to ensure socially optimal outcomes, which includes regularizing profits for capitalists and avoiding social outcomes that put political stability at risk within a democratic society. This is what Mirowski calls neoliberalism. On the other hand, there are those who call for the state to get out of the regulatory game altogether. Mirowski calls this latter group libertarianism, in contrast to (his definition of) neoliberalism, but Konczal argues (following David Harvey) that it is in fact not a completely different ideological formation, but rather one side of a debate among sympathizers with capital over political strategy.

Where I part ways with Konczal is over the practical politics involved. As politically sympathetic as I am to the idea that neoliberalism all amounts to class warfare in the final analysis, we must admit that these two broad strategies constitute more and less humane ways in which to prosecute that war. It is simply too crude to dismiss these differences as irrelevant. The interim analysis matters as much as the final analysis, in other words. It's all well and good to observe that the post-DLC Democratic Establishment are neoliberals who side with capital over labor in the class struggle, but it is politically obtuse not to see a difference between the Tea Party and Obama. The latter may be a class warrior, but he is in favor of a more humane way of conducting that war. What's certain is that the term "neoliberalism" is unhelpful for clarifying any of this.

The second part of Mirowski's book is about how and why the economics profession has refused to come to terms with its abject failures over the last decade. As many people have pointed out, one would think that the failure of the economics profession to foresee the global financial crisis would have caused some serious soul-searching about the scientificity of its claims. For economists, the failure to foresee 2007-8 should have been as much of a come-to-Jesus moment as the failure to foresee the collapse of the USSR was for Sovietologists and international relations scholars. But there's been no serious effort to reform the field. (One small but telling measure of the lack of accountability is that one of the head cheerleaders for the system that imploded in 2008 was rewarded with a Nobel Prize this year.) What Mirowski wants to do is to ask why this is so.

Rather than rehearse Mirowski's argument, I would just say the following: The problem with economics is not with its substantive accomplishments as a field. In terms of creating reliable predictions or offering useful assessments of real-world phenomena, economics probably does a bit better than most other kinds of social science. (No one else really anticipated the meltdown either, right?) No, the real problem with economics has to do with economists' overweening self-confidence and their (concomitant) intellectual hegemony. To put it another way, the problem is not that they're wrong more often than the rest of us, it's that they're not right that much more often and despite this fact, policymakers continue to defer to them far too much. It's the politics of it. What needs to be explained is how, despite their failures, economists have managed to establish and defend their warrant in the public sphere much more effectively than any other group of social scientists. (Possible exception: demographers.) Mirowski has spent his career assessing how economics as a profession pulled off this confidence trick. His answer is: by emulating the discursive structure of 19th century physics (that is: mathematical modeling of closed systems whose 'natural' propensity is assumed to be towards equilibrium), economics has made its claims about the proper social organization and distribution of scarce goods seem like an objective and scientific matter, rather than a fundamentally political question about norms and institutions. Konczal finds this part of the book weak, since Mirowski doesn't propose an alternative and better form of economics. But this seems to me unfair: the lack of a cure shouldn't distract us from the value of the diagnosis.

Tuesday, October 01, 2013

Founded by Christian patriots in a tax revolt, America has always resisted the tyranny of an out-of-touch and tax-happy government. From 1776 forward, Americans lived lives defined by respect for hard work, family, and God. People across the world looked to the United States as a model of consensual democratic capitalism. But with the coming of the Great Depression, America took a terrible turn. Via the New Deal, the Democrat Party leveraged the false socialist ideas of Keynes to justify the building of a giant tax-and-spend welfare state that created a huge number of unfunded entitlements. This in turn forged a widespread culture of dependency where expectations of free handouts replaced the virtues of self-reliance. The results were predictable: public institutions rotted, the economy stagnated, moral relativism proliferated, and US prestige in the world went into freefall. All of this culminated in the pathetic presidency of Jimmy Carter — a man who found even rabbits too scary to confront, to say nothing of Communists and Islamofascists.

It was at this point that Ronald Reagan emerged to save the country. While the craven GOP Establishment had long resisted his essential wisdom, Reagan stood up against the liberal socialists at home and against the Communists from Afghanistan to Central America. The result was a decisive turnaround in the US economy and unconditional surrender by the Soviet Union in the Cold War. Nonetheless, despite the obvious lessons of the Reagan years, liberals continued to betray the country by promoting socialism. From Clinton to Obama, their hatred for America's true working class of entrepreneurs and job creators has been unabated, with their overt class warfare serving as a meager distraction from their fundamental hatred of white people. America has no choice but to prevent any further destruction of the country's economy and values, by any means necessary, whether that means shutting down the government or preventing the raising of the debt limit. Only by holding the line against the socialists can we ever return the country to the Christian virtues in whose name it was founded.

Thursday, August 22, 2013

A lot of people have made a big deal out of the fact that Daniel Ellsberg has been a vigorous defender of Chelsea Manning (and also, a little more tentatively, of Edward Snowden). Ellsberg has repeatedly emphasized that "Manning, c'est moi."

For people of a certain age and set of political commitments, Ellsberg's defense of Manning has raised a specter: Does doubting the propriety of these contemporary leakers implicitly renege on one's long-held political commitments about the Vietnam War? Does thinking that these leakers were out of line mean that one has become some sort of Establishment-kowtowing moral midget?

The salient difference between Manning and Ellsberg, it seems to me, has to do less with the nature of their acts, than with the nature of the wars they were aiming to undermine.

Let us begin by observing plainly that, as a general matter, leaking is an ethical violation: a violation of the trust of the organization one belongs to. This violation may be justified, of course, if the organization is itself committing gross ethical violations that warrant the fundamental disruption of its mission, or the exposure of specific wrongdoers within it.

In this sense, the burden of proof must be on the whistleblower: to prove that the crimes he is exposing are so serious not only that the disruption of the organization is justified but also that the leak is the only way to achieve this disruption. Any other view, it seems to me, is tantamount to suggesting that anyone at any level of any organization has the unilateral right to expose anything he wants any time he sees fit—and plainly no large organization of any sort can be run on these terms, regardless of mission.

The distinction, to me, between Manning and Ellsberg has to do with the nature of the war and the government in each case. From my historical vantage point, the Vietnam War (by 1965 probably and by 1971 certainly) absolutely was such a criminal enterprise that it justified radical measures of institutional disruption of the sort that Ellsberg attempted. Crucially, it was a criminal enterprise not just in its origins, but in its daily implementation: Millions of Vietnamese were being killed, and 100+ American draftees per week were dying in the name of a cause which was not just hopeless but in itself a grave injustice. Under these circumstances, throwing one's body on the gears to stop the operation of the machine, to paraphrase Mario Savio, was more than justified, it was approaching a duty. Ellsberg, for this reason, remains a hero to me.

What I would ask is this: do you believe the same of our current war(s)? To defend Manning's actions, one must make the case that the general operations of the institutions he was attempting to disrupt — not just the Iraq War, let us be clear, but the State Department generally — are so corrupt and evil that they demand such disruption. We are not talking about the origins of the war, the lies that justified it to begin with, but its actual practice. Was it a systematically criminal affair, justifying wholesale leaking in order to disrupt the general functioning of the state (as opposed to narrowly targeted retail leaks about specific incidents)?

To put it in comparative terms: Is Obama's White House no different in its essentials from Nixon's? Is the way the current Long War is being prosecuted as criminally murderous as the way the Vietnam War was waged? Perhaps most provocatively: are the Islamic fundamentalist non-state actors, against which the US national security apparatus is primarily arrayed today, no different in their essentials from the national liberation movements of the Global South against which the US national security apparatus of 1971 was primarily arrayed? Believing that Manning is no different from Ellsberg requires answering all these questions mostly in the affirmative.

Friday, August 02, 2013

Among people who study illicit economies and trafficking it's become fashionable to suggest that illicit markets have always been with us, and that they always will be. Peter Andreas, for example, has recently written a very good book on how smuggling has been at the heart of virtually every major episode in the history of the American republic, from the Boston Tea Party to gun-running during the civil war to Prohibition to the War on Terror. Andreas argues, further, that efforts to stem or at least control smuggling have been at the heart of a variety of US state-building exercises, ranging from how the federal government's desire to cut down on excise duty evasion led to the creation of the professional civil service, to how efforts to crack down on the transportation of illicit substances helped provide a critical outlet for the Federal Bureau of Investigation to increase its reach in the 1920s. And at one level, it's certainly true that as long as there have been and continue to be different human communities with different values, there will always be people who attempt to prevent certain kinds of commerce between these communities (call them moralists and governments) and there will be others who will try to make money by arbitraging the price differences these regulations create (call them deviant entrepreneurs).

But before we accept that story as too pat, maybe it's worth historicizing this concept of the illicit. One place to start might be with the following interesting little ngram, on the rise of the term "black market":

What we can see here is that prior to the late 1930s, the concept of the black market did not exist -- or at least there was no term for it. Deviant entrepreneurship may be a venerable practice -- Prohibition was of course one of the major episodes of US history during the "long 1920s" -- but the concept of a black market is something that appeared relatively late.

Why was this? Some clues can be found by looking at the evolution of the usage of the term. If we go to JSTOR to search for the earliest uses of the term, we discover something quite interesting. The earliest articles using the term address a very specific sort of black market, namely, the black market for currency. The handful of articles that talk about black markets in the 1930s all refer to occasions where deviant entrepreneurs were creating markets to route around fixed exchange rate regimes. The historical context for the rise of these "black" markets (a clue that the idea was a new one is that these early articles all use scare quotes) was the monetary situation of the 1930s: the gold standard had collapsed, but most governments were refusing to allow their currencies to "float."

To simplify: imagine if the dollar and the sterling have an official "pegged" exchange rate of 1:1. Imagine, further, that because of differences in the productivity in the two economies, a dollar can buy two widgets but a pound only one. Given this price disparity for widgets, the relative price of these two currencies should float to 1:2. However, the British government refuses to see the pound decline -- as this will reduce the widget-purchasing power of its people. Such a scenario creates two effects: first, many British sellers of widgets will insist on being paid in dollars, not sterling, creating a shortage in the sterling market for widgets, while others will secret the widgets out of the country in order to sell them for dollars; second, deviant entrepreneurs will emerge who will provide liquidity to exchange dollars for pounds at the true "market" rate, thereby allowing sterling holders to get their hands on the widgets. Such unsanctioned currency markets are what are being referred to in these documents. (For contemporary examples, see Argentina and Egypt.) In sum, these earliest uses of the term "black market" reflect a particular historical moment: the effort of governments to impose capital controls in the wake of the collapse of the gold standard during the Great Depression. For a characteristic early example of this usage of the term "black market," see Paul Einzig, "The Unofficial Market in Sterling," The Economic Journal 49:196 (December 1939).
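The arithmetic of the toy example above can be sketched in a few lines of code. All the numbers here are the hypothetical ones from the example (a dollar buys two widgets, a pound buys one, official peg 1:1); none of this is real exchange-rate data, and the function name is my own invention for illustration.

```python
# Toy model of the pegged-currency arbitrage described above.
# Hypothetical numbers from the example in the text, not real data.
WIDGETS_PER_DOLLAR = 2          # a dollar buys two widgets
WIDGETS_PER_POUND = 1           # a pound buys one widget
OFFICIAL_POUNDS_PER_DOLLAR = 1.0  # the government's 1:1 peg

# The purchasing-power-implied ("true" market) rate: a dollar should
# be worth as many pounds as the ratio of their widget-buying power.
implied_pounds_per_dollar = WIDGETS_PER_DOLLAR / WIDGETS_PER_POUND  # 2.0

def arbitrage_widgets(dollars: float) -> float:
    """Extra widgets a dollar-holder gains by exchanging at the unofficial
    ("black") rate instead of the official peg, then buying widgets."""
    via_official_peg = dollars * OFFICIAL_POUNDS_PER_DOLLAR * WIDGETS_PER_POUND
    via_black_market = dollars * implied_pounds_per_dollar * WIDGETS_PER_POUND
    return via_black_market - via_official_peg

print(arbitrage_widgets(100))  # 100.0 -- the gap the deviant entrepreneur monetizes
```

The spread between the official and implied rates is exactly the profit opportunity that calls the unsanctioned currency market into being: the wider the government holds the peg from purchasing-power parity, the larger the reward for routing around it.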

By the early 1940s, however, a new meaning for the term "black market" begins to emerge. This usage refers to markets that route around government efforts to impose rationing in the context of the Second World War. The earliest example of this usage on JSTOR is by Miriam S. Farley: "Japan Experiments with Rationing," Far Eastern Survey 9:17 (14 August 1940). Farley explains how the Japanese, by then in year three of their war with China, having already imposed rations on gasoline, were now commencing to ration "commodities in general daily use," specifically sugar. (This is interesting because it underscores the extent to which Japan was already suffering from commodity supply issues, a year and a half before the American phase of the war in Asia officially began.) The article closes by noting drily that there have been "many abuses of the gasoline rationing system" and that the fear was that this black market (no scare quotes) would spread to the other markets as rationing was extended to these.

In a sense, this new usage was an extension of the original currency control-related concept. What governments are doing when they impose capital controls, after all, is rationing the currency -- the rationing of other goods was merely an extension of the same logic of government control over the economy's key resources.

As the Ngram above suggests, the concept of the black market soared throughout the war years, as virtually every country imposed rationing quotas on a growing list of commodities, and as unofficial markets arose in parallel to supply the well-to-do with goods that the government deemed should be distributed on a basis other than market purchasing power. This ration-breaking conceptualization of "black markets" reached its apex in the immediate postwar years, as the deprivation conditions in war-ravaged Europe and Asia meant that price controls and quotas were essential for keeping the desperate populations alive. But the very omnipresence of such rationing also created unprecedented opportunities for deviant entrepreneurs to make windfalls by constructing "black market" supply chains that circumvented the quotas. While size estimates of such markets in places like Berlin and Vienna in the first years after the war are inherently unreliable, they were undoubtedly huge. The black market was the defining feature of the immediate postwar economic landscapes in these locations. Indeed, one of the greatest movies of the twentieth century, "The Third Man" (1949, based on Graham Greene's screenplay), centers on the adventures of Harry Lime (played by Orson Welles), who runs a black market in (as it turns out, counterfeit) medicine in the ruins of postwar Vienna. This is how Lime famously justified himself in the film's Ferris-wheel scene.

As the Ngram above shows, it was just as "The Third Man" was released in the late 1940s that the term "black market" (still understood primarily as an unofficial market designed to route around rationing strictures) began to dip in popularity. Here again historical context explains why. The Marshall Plan began in April 1948, and with it the economies of Western Europe began to overcome the chronic shortages of goods that had led governments to feel the need to impose rationing (and thus created opportunities for deviant entrepreneurs like Harry Lime). With the end of shortages and rationing, the empirical phenomenon of "black markets" went into decline, and hence chatter about the phenomenon also declined a bit, though the concept, having been invented, never quite went away.

It is only in the 1960s that the current, modern sense of the term black market begins to arise, that is, as a concept primarily focused on goods that are not so much rationed as considered per se sinful (as it were, haram in the Islamic jurisprudence sense). While the older sense of the term black market -- referring to unlicensed markets for goods that are considered otherwise wholesome, such as food, medicine or fuel -- continues to be used in a limited way, from the 1960s forward the term black market increasingly refers to markets for things that on some fundamental, material level are considered inherently beyond the moral pale: above all narcotics, but also prostitution, the killing of endangered species, human trafficking, and so on. Of course, there had long been prohibitions on various sorts of goods in American society (notably on alcohol during capital-P Prohibition), but in the 1920s nobody at the FBI referred to what Al Capone was doing with the term "black market."

Fully exploring the nuanced post-1960 meanings of the term "black market" is beyond the scope of this blog post, but one thing worth underscoring here is the shift in the ethical constitution of black markets. In the 1930s and 1940s, black markets arose because rich people wanted to get around government-imposed regulations designed to enforce communal standards of egalitarian solidarity. Quotas were predicated on the notion that during depression and wartime, all members of the national community were supposed to suffer deprivation together. In other words, if there was a limited supply of sugar or gasoline, then the government would dictate that everyone (after the government's own set-aside, naturally) would get the exact same amount of sugar, or if not exactly the same amount, then some amount based on a moral economy of distributive justice. Of course, rich people have never been content with getting the same as what everyone else has of these desirable goods, and there have always been those who were willing to either embezzle the government's share or sell their own share (or act as a broker for either of these) in order to meet the desires of those with money. If the presence of black markets reveals the limits of such egalitarian-communitarian ideals during the 1940s, we should nonetheless pay a certain respect to the moral principles which the deviant entrepreneurs were attempting to subvert. Black markets, in this sense, are the tribute vice pays to virtue.

By contrast, the laws that are the object of today's black markets are rooted in a very different set of moral principles. Today's black markets are designed to route around moral prohibitions concerning certain types of consumption. In other words, if the government regulations that led to black markets in the 1930s and 1940s were about ensuring that everyone would have access to certain classes of commodities (medicine, food, fuel), the government regulations that lead to most of today's black markets are about ensuring that no one has access to certain classes of goods and services, be it prostitution or drugs or rhino horns.

Wednesday, May 29, 2013

The 1970s is generally acknowledged as the greatest decade for cinema, especially for Hollywood, as the shackles of the old censorship were thrown off and a brilliant new generation of filmmakers arrived, steeped in European filmmaking and committed to showing the seamier side of American life in its unvarnished glory. It was also the decade that essentially invented the industrial model for blockbuster production.

So many great ones: Jaws. Star Wars. The Conversation. The Godfathers. McCabe and Mrs. Miller. The Sting. Carrie. Nashville. Scenes from a Marriage. Apocalypse Now. The Exorcist. Badlands. Days of Heaven. Close Encounters of the Third Kind. The Spy Who Loved Me. Shaft. Deep Throat. A Clockwork Orange. Aguirre, the Wrath of God. Annie Hall. Manhattan. Blazing Saddles. Young Frankenstein. Animal House. Chinatown. One Flew Over the Cuckoo's Nest. Alien. The Texas Chainsaw Massacre. Barry Lyndon. Rocky. Etc.

Precisely because this new generation had such an unprecedented commitment to confronting the more louche and sordid aspects of American life (one reason why Steven Spielberg, though of that generation by age, has never really fit in with it), the movies of that era are also an incredible window into the social anxieties of the era, or at any rate the social anxieties of the white men who were making these movies. In no particular order, here are the dozen+ indispensable movies for capturing this "structure of feeling" of the 1970s.

Thursday, April 18, 2013

I've spent a good part of the last few weeks thinking about development assistance, especially development assistance in (subnational) conflict zones. For three years, I've supported a World Bank project researching the impact of development initiatives on such conflicts in Asia. It's been an absolutely amazing experience, which I will blog more about this summer, when the results are published.

Perhaps the most amazing finding, to me, is just how little donors know about where their money goes in these places, much less about what the impact of their projects may be. I've decided to coin a term to describe this: The Fog of Development.

The Fog of Development has many driving factors, but the underlying one is epistemic: we just don't know what we know. And this has at least three dimensions:

Development agencies don't, and apparently can't, follow the money. So we don't have any sort of precise sense of what effect the "development" spending is having. (You can look from the ground up at individual projects to make project assessments, but it's near-impossible to track a particular dollar from the donor down to the village.)

The developmental state, if it does collect development-related data, does so extremely badly, and it collects conflict-related data even more haphazardly. Or, if it does collect these things well, it usually refuses to share them, or actively distorts them, in ways you as an analyst cannot determine.

When foreign development agencies try to collect the impact data themselves, they usually fail, for two reasons. First, their enumerators often refuse to ask "sensitive" questions, for fear of being excluded (or for fear of their lives). Second, if they do ask the "sensitive" (i.e. crucial) questions, they are probably not going to get honest answers from respondents, who suffer from the same incentives and fears, but worse.

The drug business in Africa, which used to be controlled by South Americans and was mainly about smuggling toot to Europe, is undergoing a strategic metamorphosis.

First, the product strategy is changing: West Africa is becoming a producer-for-export of horse and tina, while continuing to run Peruvian flake to Europe. Domestically produced hash is also a growing export.

Second, the supply chain strategy is changing: Africa is now exporting to Asia (the supply chain used to run in reverse, with meth being brought in from Asia in exchange for wildlife and illicit minerals). Domestic consumption of harder stuff is growing. (I expect a crack/meth epidemic in Lagos in a couple of years.) Tactically, traffickers are also breaking shipments into smaller batches and using sea as much as air links.

Third, the personnel strategy is changing: West African (mainly Nigerian) gangs, instead of acting as essentially hired help, are taking over the management of business from Chinese and Latin Americans (aping the pattern whereby the Mexicans took over the North American cocaine supply chains from the Colombians).

Friday, February 15, 2013

Last week, Ben Alpers mused on the possibility of an intellectual history of “irrational” thought. His particular examples were the late writings of Philip K. Dick, which might (charitably) be described as mystical, and the writing of L. Ron Hubbard, which might (less charitably) be called commercial in nature. Whether either of these sets of texts is “irrational” is open to debate, but what I found most interesting about Alpers’s riff was his suggestion that virtually any text is a potential subject for intellectual history.

I have to confess, I had a pretty allergic reaction to this claim. It bothered me for much the same reason that Daniel Rodgers’s much-celebrated Age of Fracture rubbed me the wrong way: it didn’t seem to offer any basis for defining what sorts of texts were appropriate to include in a discussion of “the ideas of the age.” Alpers stated quite resolutely in the comments to his piece that he really felt that in fact, yes, any text could be given the intellectual history treatment. The discussion then migrated over to Twitter, where I started asking, rhetorically, whether it would be appropriate to write an intellectual history of, say, Glenn Beck’s half-baked rantings. When I got some affirmatives to that, I went yet further, and asked, How about an intellectual history of Honey Boo-Boo‘s utterances? Yes? Then how about an intellectual history of moans by porn stars? All verily texts, no?

What these admittedly snarky tweets were trying to get at, however, was actually something quite vital: my point was that, in fact, everyone has their standards for what qualifies as worthy of “intellectual” treatment — it’s just that some people’s standards are a lot lower than others’. What I’d like to do in this post is sketch out, in a more serious manner, what I take to be the appropriate limits of the enterprise of intellectual history.

I should begin by noting that I am definitely in favor of broadening the scope of topics that intellectual historians turn their sights on. For example, my own work (both scholarly and in industry) has dealt primarily with “policy intellectuals,” a topic/category that would likely have seemed bizarre to intellectual historians of Henry May or Perry Miller’s era. In the intervening years, however, we’ve seen many wonderful intellectual histories written on all manner of unlikely topics, ranging from punk music (Greil Marcus’s Lipstick Traces) to cooking (Nathan Myhrvold, Chris Young, and Maxime Bilet’s Modernist Cuisine, v. 1) to college football (Brian Ingrassia’s The Rise of the Gridiron University). While all of these books are arguably cultural histories as much as intellectual histories, in each case they demonstrated that these topics contain surprising intellectual depths and textual complexities.

The question that books like this raise, then, is precisely the one that Alpers and I took up: is there any topic at all that isn’t amenable to intellectual history?

To answer this question, we must begin by noting that the central methodological feature of intellectual history is *close textual reading* — that is, careful, sustained interpretation of a text in order to “unpack” meanings that might otherwise appear obscure, inscrutable, or difficult. (I use the word “text” very broadly here – it could include music, art, etc.) Within that broad rubric, there are of course many different approaches to this sort of careful reading: Exegesis, Talmudism, New Criticism, Deconstruction, and so on. But regardless of these particulars, what sets intellectual history off from other forms of historical inquiry is this sustained, close examination of texts.

To some extent, virtually all historians do this sort of reading in their work. In other words, most historiography contains intellectual-historical “moments,” in which the historian engages in a sustained close reading of some text, be it a personal letter, a political speech, or a private diary. But it’s also clear that close reading is not the only or even the primary sort of reading that most historians do. When historians examine, for example, bills of lading, or birth and death certificates, or bureaucratic forms, or financial accounts, they certainly scrutinize them for credibility, but they don’t usually analyze them in terms of complicated textuality: these documents are mainly assumed to be modestly functional in their communicative purpose. In sum, the sort of close reading I am referring to as the hallmark of the intellectual-historical method is in fact generally, and rightly, reserved for certain types of texts.

So, what are the types of texts that deserve this sort of sustained critical reading?

I would argue that to be worthy of the intellectual-historical approach, a text needs to have two essential features. First, the text should have been “carefully wrought” by its author(s); that is, it should be something that was self-consciously produced to be read with care. Texts produced for simple functional reasons really don’t merit this approach: there’s no such thing as a close reading of a telephone book (as even Avital Ronell would probably admit). Second, the text needs to be one that situates itself within a tradition of similar texts. Most such texts will usually signal this intertextuality in relatively overt ways, that is, by referencing or commenting on other, similar texts. When you think of what it is that intellectual historians mainly do when they engage in close reading, it is precisely unpacking these intertextualities.

Keeping these two defining features of “intellectual history-worthy” texts in mind, it becomes clear why it makes no sense to speak of an intellectual history of, say, Honey Boo-Boo, or porn movie grunts, or elevator music – these are simply not works of ideas that require exegesis. These texts are, fundamentally, not intellectual objects – they are objects that operate in terms other than intellectual ones. To treat them as intellectual objects is at best silly. To be harsh about it, there’s simply no reason to apply our pearly methodology to such swinish texts. Or, if you prefer a martial metaphor, applying intellectual history methods to such texts is like killing a fly with a bazooka.

I should hasten to add that this is not at all to say that such objects are without historical interest. They absolutely are. Historians of American life in the early years of the 21st century will certainly need to reckon with the significance of Reality TV and ubiquitous pornography. My point here, rather, is that such reckonings will be much better accomplished using tools other than close textual reading. For example, future historians may wish to contextualize such objects in terms of the political economy of their production, or examine the psychological (and physiological) nature of the reactions they produce in audiences, or any number of other techniques. As they say in Britain: horses for courses.
Before concluding, let me address one possible objection directly, which is whether the distinction I’m making is a fundamentally elitist, Arnoldian one. The answer is, yes it is. In practice, the distinction I’m making almost always ends up reproducing a high/low culture distinction. And I have no problem defending this distinction, for in fact some cultural objects are much more “rich,” “complex,” “textured,” “difficult,” or whatever other metaphor you wish to use to describe what separates great works from non-great ones.

Let me go further and say this: everyone knows this in their gut, even if twenty years of reading French Theory has caused them to somehow forget it. Honey Boo-Boo’s or Jenna Jameson’s utterances just aren’t as rich a vein to mine using the technique of close reading as, say, John Rawls or bell hooks. Anyone who claims otherwise is, I’m sorry, full of it. (And yes, in making this sort of absolute claim I fully realize that I am engaged in disciplinary boundary work.) My ultimate point is that we all have our standards, we all have our intellectual hierarchies, and the only question is how honest and transparent we are about them.

Postscript: I should note one thing I did change my mind about over the course of last week’s Twitter exchange (yes, I remain capable of occasionally changing my mind!): perhaps it is true, after all, that virtually all overtly political texts fall into the category of “texts worthy of close reading.” This is because, however clumsy or misconceived, virtually all political texts are not only self-consciously wrought, but also situate themselves in some sort of inter-textual tradition, if only implicitly. In which case, maybe Glenn Beck is worthy of the intellectual history treatment, after all, and maybe Daniel Rodgers wasn’t quite so unreasonable when he chose to start Age of Fracture with Peggy Noonan. My gut still tells me that including these sorts of writers defines the project of “intellectual history” awfully far down, but there’s at least a case to be made that their writings meet the criteria I’ve laid out above. So I’ll have to think more on this one.

Wednesday, October 24, 2012

In 2006 RAND staged a wargame to think through the implications of a nuclear terror incident. They created a specific scenario - a tactical nuclear device being detonated by a terrorist organization in the Long Beach harbor - and then staged a role-play to determine how key stakeholders would react and work together. The experience must have been incredible, because even the write-up is riveting.
When I revisited this text today, however, what struck me with particular force was RAND's assessment (this is in 2006, remember) of what the longer-term economic implications of such an event would be:

The attack is likely to have dramatic economic consequences well beyond the Los Angeles area:

Many loans and mortgages in Southern California might default.

Some of the nation’s largest insurance companies might go bankrupt.

Investors in some of the largest financial markets might be unable to meet contract obligations for futures and derivatives.

While exact outcomes are difficult to predict, these hypothetical consequences suggest alarming vulnerabilities. Restoring normalcy to economic relations would be daunting, as would meeting the sweeping demands to compensate all of the losses.

As some of you will no doubt observe, all of these consequences in fact did come to pass just two years after this report was issued - as a result of the Lehman Brothers default, the consequent collapse of AIG, and the cascade effects which are still creating malign reverberations throughout the global economy, above all in the Eurozone.

Usually when people say that something would be "like a nuclear bomb going off" they are exaggerating; but in the case of the Lehman default, it is accurate.

Saturday, October 20, 2012

A couple nights ago I had a little Twitterflurry with @Interdome and @ekstasis, that went a little something like this:

The point of departure for this debate was my tweeting the following, which linked to @CoreyRobin's review of Daniel Rodgers's Age of Fracture (an important recent book that I confess deeply disliking -- but that's for a separate blogpost):

Now @Interdome (AKA Adam Rothstein, whom I don't know) has responded with a thoughtful blogpost, in which he defends post-structuralist theory. The nub of his argument is that one should not conflate postmodernity, as an era, with the body of theory that became popular contemporaneously to that era (properly called post-structuralist theory, but which in popular writing is commonly referred to as postmodern theory). Rothstein argues, further, that any nostalgia for modernist certitudes is misguided, since the primary products of those certitudes were oppression, war, and overconsumption. Finally, he takes umbrage at the suggestion that today's GOP is somehow practicing a kind of "vulgar post-structuralism" in its attacks on science.

Let's take those arguments one at a time. (And let me also acknowledge that this debate is in some ways a rerun, with contemporary signifiers, of the Habermas-Lyotard debate from thirty years ago.)

Obviously it's illegitimate to conflate a critical theorist with the object of his or her theory. It would be ridiculous to blame Marx for nineteenth century capitalism's rapaciousness, say, or William F. Buckley for the inefficiencies of Great Society welfare programs. Likewise it's absurd to suggest that, say, Judith Butler is responsible for climate change denialism, or to suggest that someone who talks about racism is actually perpetuating that racism (one of the American right's dumbest recent tropes).

What I was suggesting, however, was not that post-structuralists were identical to or responsible for the revanchist politics of the late 20th century. Rather, I was referring to the political effect that post-structural theory has had, as well as the ugly ways it has been vulgarized.

To make this case, we need to begin by historicizing poststructuralist theory, that is, to recognize that post-structuralism was as much an artifact of the era of conservative ascendance as supply-side economics or Madonna music videos. Rothstein depicts post-structuralist theory as occupying some Archimedean point outside the historical epoch in which it arose. He suggests that it offers a timeless set of epistemic truths, rather than itself being embedded in (and dare I say a symptom of) that era's own pathologies. By contrast, I would suggest that, now that Minerva's Owl has alighted, it is evident that post-structuralist theory suffered in significant ways from what is known in the intelligence community as "mirror-imaging," that is, the tendency of analysts to take on the characteristics of the objects they are assessing.

In what sense does post-structuralist theory (covertly?) reflect the pathologies of postmodernity? To fully answer that question would take many volumes -- it's part of Daniel Rodgers's project in Age of Fracture -- but let me here put my finger on the two things that I think are most salient. Both of these matters speak directly to the question of whether post-structuralism, despite overtly being critical of how power has operated in the postmodern era, in fact has contributed to the dominant, reactionary politics of that era.

The first point derives from post-structuralist theory's profound epistemological skepticism -- the "hermeneutic of suspicion" that Rothstein refers to. This epistemological skepticism in its sophisticated form represents a compelling critique of the verities of positivistic modernism, but in the hands of inept practitioners (or cynics) can easily shade into vulgar anti-scientific discourse. This is why the infamous "Sokal hoax" was so embarrassing: it revealed that the language of post-structuralism was all too easily appropriated to render a specious anti-scientific argument in a manner that the leading practitioners of that body of theory did not find objectionable.

Second, the "incredulity toward metanarratives," which Lyotard described as the hallmark of the postmodern outlook, all too easily degenerates into a rejection of any sort of narrative, just as anti-authoritarianism can blur into a rejection of all authority. It's important to note that post-structuralists didn't just assert that the particular master narratives of high modernism were bunk, but rather that all master narratives were bunk, and usually pernicious bunk, at that. In this vein, I think it's altogether too easy to say that Lyotard was just describing something he saw out there in the culture: he was himself deeply engaged in the debunking of metanarratives -- very specifically, the metanarrative of the working class's subjective emancipation through collectivization. In other words, postmodernism's epistemic agenda was specifically aimed at undermining the grandest of the modernist collective action programs, namely socialism. In the French context, post-structuralism was thus "counter-revolutionary" in a very specific and explicit way.

But as post-structuralist theory migrated to the United States in the 1980s (there to be reimagined as a holistic thing called "French Theory") it was taken up by folks with a different political agenda from that of the post-1968 French anti-communists. In the United States, post-structuralism emerged as a toolkit for unmasking dominant narratives of race, class, and gender that privileged white, male, middle-class life in America. Over the course of the last two decades of the twentieth century, at the very moment when neoliberalism was running wild and "left" high modernism was collapsing, post-structuralist theory was sweeping through the American academy as a tool for "deconstructing" various dominant paradigms. Post-structuralist theory (at least in its pre-9/11 maximalist moment) came to be about the radical critique of all socially-received categories. There was no such thing as race, gender, class, nation, tribe, etc.; these were all just myths that the Man had put over on us in order to push his own domination. (I'm caricaturing, but not by much.) In the context of American culture war politics during the 1980s and 1990s, this political agenda seemed, on its face, to be broadly speaking "left," if by left we simply mean a generalized sympathy for the excluded and the poor and opposition to unbridled capitalism and the lingering white supremacist dimensions of American life.

Unfortunately, however, the story doesn't end there. Here it's important to make a basic point: any effective politics cannot end simply with the countering of oppressive cultural categories; ultimately, it must be about collective action. By definition, collective action requires that people act in unison, which in turn requires that people put aside their differences. You don't need to be a sociobiologist to observe that people tend to be selfish, so the question is, what is it that gets people to be willing to put aside those individual differences? In practice, it is almost always some narrative that gets an agglomeration of people to see themselves as forming some sort of unity of shared interest or identity that trumps their differences. Collective action cannot survive without such narratives.

This brings me to my point about how post-structuralism ended up being reactionary as a matter of praxis: in draining the amniotic fluid of socially-constructed collective categories, it was all but inevitable that the collective action baby would end up getting aborted. Specifically, post-structuralism's radical attack on all collective categories as "socially-constructed," while good for undermining narratives of oppression that relied on such categories, has had a reactionary political effect where it undermined narratives and categories that are necessary for progressive political action, notably the category of social class. Incredulity toward meta-narratives doesn't just undermine white supremacism, it doesn't just undermine socialism, it undermines the very possibility of collective action, because in practice collective action almost always depends on some meta-narrative which is capable of getting people to put aside their inevitable differences and pursue the collective goal. And without collective action, we're left without emancipatory hope. In sum, by undercutting the narrative bases for collective action, post-structuralism has been reactionary in effect if not in intent.

It's important to note here that I've slipped a crucial assumption into this argument, and I'd like to bring that out. I'm assuming that collective action is something we should actually wish for. Here, inevitably, we get into the biggest of all historical judgments, which is whether the collective actions undertaken in the name of modernism were, ultimately, a good thing or not. Rothstein is clearly a skeptic. He sees the symbolic high points of modernist collective action as (I quote) "imperialism, world wars, American-made cars, and boom-towns." Take just a small step further and you're with Adorno, declaring that Auschwitz is modernity's apotheosis.

I guess your mileage may vary on this, but I don't accept the wholesale rejection of the project of modernity. I can't sit and look back on the history of the collective actions undertaken over the last three hundred years and say humanity as a whole would be, or would have been, better off if none of that had been undertaken, and we instead had remained under the various anciens régimes. Sure, there have been terrible depredations during our long current historical epoch, but ultimately I am with Habermas at an ethical level: what we need to cure modernity's ills is more of it -- it's an incomplete project.

A big part of my antipathy to post-structuralism's anti-collective action effects has to do with my growing horror about global elites' failure to act against climate change. The simple, fundamental truth is that if we don't manage to create a broadly accepted metanarrative about the need to stop the runaway train of GHG emissions, then we will destroy civilization in ways that even the most hardened skeptic of modernity should blanch at.

And this is where we get back to my argument that the contemporary right is engaged in a kind of "vulgar post-structuralism." Indeed, the right's critique of climate science uses tropes (admittedly in crude form) that should be all too familiar to post-structuralist critics of categories like race or gender. The climate change deniers claim that the narrative about anthropogenic climate change as a threat to civilization is not "true" but rather is "socially constructed and politically biased balderdash" designed by elites who dress up their socialist, one-worldist agenda in the meretricious language of science.

Now, Rothstein may not like this, but this mode of argumentation derives directly from the post-structuralist efforts in the 1980s and 1990s to deconstruct categories like race and gender: these weren't "real" things, just categories that certain elites had invented in order to divide and oppress. Oh, but we were sincere, and they're just cynics! I can hear the defense. I'm not convinced. There might be some PR flaks for oil companies who are mere cynics, but there are many climate skeptics who sincerely believe that the whole of climate science is an elite ruse designed to allow for the imposition of socialism and one world government. The post-structuralist theory-derived cognitive and political tools serve their arguments all too well. We may not like what modernity brought us, but what humanity desperately needs, despite everything, is not no "schemes to improve the human condition," but rather better schemes to improve the human condition. If we are to avoid boiling the planet, we cannot resign ourselves simply to straying through the infinite nothing.

The bitter truth is that the only hope of confronting the runaway climate change problem is by galvanizing humanity-scale political action. It is only if humanity can find a way to see itself as sharing a single fate that the necessary collective political actions will be remotely possible. But prospects for such action have been profoundly undermined by a philosophical agenda whose most notable practitioner famously declared that the very category of Man was no more solid "than a face drawn in sand at the edge of the sea." So much for galvanizing political action.

So, to sum up, it seems to me that the big and important way to understand post-structuralist theory is to situate it as an artifact of the late 20th century era of post-Fordist neoliberalism (i.e. "postmodernity" in David Harvey and Fredric Jameson's terms). To put a bumper-sticker on it, I see post-structuralism as rooted in the same skepticism about state action, and indeed collective social action of all sorts, as the right-wingers on the other side of the political aisle. You read a "post-structuralist" like James Ferguson, and he's as skeptical of the state as anyone in the IMF ever was. This is the fundamental sense in which post-structuralism was a part of its era, rather than simply a critical voice describing that era.

But I can't leave it there. Because, much as I hate to admit this, in a profound way I accept the post-structuralist critique: I too am incredulous of metanarratives. I accept the post-structuralist credo that, in the desert of the real, ethical action of any sort may be impossible, and that negation and refusal may be all that's left. I accept that, now that the high modernist baby's been aborted, it's not going back into the womb.

But I can't say that any of this makes me feel very happy. Hence my position of "melancholic wistfulness": I feel acutely the loss of that moment when the modernist baby seemed so full of promise. Just because I recognize that the baby was never going to grow up to be the healthy child of high modernist dreams is no reason for me to celebrate the abortion with the post-structuralists. It's a tragedy.

There is a close parallel between the euro crisis and the international banking crisis of 1982. Then the IMF and the international banking authorities saved the international banking system by lending just enough money to the heavily indebted countries to enable them to avoid default but at the cost of pushing them into a lasting depression. Latin America suffered a lost decade.

Today Germany is playing the same role as the IMF did then. The details differ, but the effect is the same. The creditors are in effect shifting the whole burden of adjustment onto the debtor countries and avoiding their own responsibility for the imbalances. Interestingly, the terms “center,” or “core,” and “periphery” have crept into usage almost unnoticed, although it is obviously inappropriate to describe Italy and Spain as periphery countries. In effect, however, the introduction of the euro relegated some member states to the status of less developed countries without either the European authorities or the member countries realizing it. In retrospect, that is the root cause of the euro crisis.

Just as in the 1980s, all the blame and burden is falling on the “periphery” and the responsibility of the “center” has never been properly acknowledged. In this context the German word Schuld is revealing: it means both debt and guilt. German public opinion blames the heavily indebted countries for their misfortune.

This is precisely the argument I made two years ago. Back then I made the exact same comparison, and explained why this German approach to the resolution of the Eurocrisis put into question the very existential and moral premise of the European Union:

The question is whether the Germans can get away with imposing what amounts to a structural adjustment program (SAP) on their fellow euro-zone members. In other words, are the Germans going to be allowed to do to the PIIGS what the US did to Latin America in the aftermath of the 1982 debt crisis?

That story is worth remembering in some detail. What happened in that case was that US banks, flush with petrodollars from the Middle East, had gone on a huge lending spree in the 1970s to Latin American governments, which used the money on a mixture of corrupt payoffs for rich elites and promises of social welfare for the middle classes. By the early 1980s, as interest rates skyrocketed, these countries were no longer able to service their debts. Mexico declared in 1982 that it was not going to pay, several other Latin American countries followed suit, and for a few months that winter it looked possible that the entire global capitalist banking system might implode.

To make a very complicated story short, what happened next was that the U.S. and the IMF agreed to restructure the Latin Americans' debts, in exchange for the imposition of "structural adjustment." The SAPs contained a number of critical elements, which in principle were designed to ensure the fiscal health of the debtor governments, but which also entailed a form of national and transnational class warfare: the rolling back of state ownership of key industries; the lowering of tariff barriers; the restriction of the autonomy of unions; the curtailing of price controls on food, water and other life essentials; and the scaling back of social welfare promises.

This process of economic restructuring is most often remembered as having been responsible for producing a so-called "Lost Decade," in which economic growth rates plummeted across Latin America. But arguably what was lost was something much bigger than a mere decade of productivity. In fact, the SAPs ultimately involved the wholesale abandonment of an entire social-political vision, namely the promise of "development" as a process of building "social modernist" welfare states akin to those enjoyed in the Global North. In other words, it spelled the end of a certain kind of social dream, a certain kind of political ideal -- the dream that they would one day converge with the wealth and lifestyle of the North.

Now, the U.S. bankers and politicians could get away with destroying this dream in part because they themselves didn't really believe in that dream any longer (if indeed they ever had); in part because the U.S. people felt no political or social solidarity with the Latin Americans; and in part because Latin American elites were disunified in their response to the demands of Washington and New York.

By contrast, the whole point of the European Union is supposed to be about pan-continental political solidarity in the name of building social welfare states. Furthermore, the social democratic nature of all the European governments means that throwing the middle classes under the banking bus is anathema -- especially if it's "our" (Greek, Spanish, etc.) middle classes and "their" (German, French) banks.

So that's the key question: Is the European Union a fundamentally social democratic institution, a collection of social and political equals who will stand together in a time of hardship? If that's the case, then the Germans will have to pay. Or, alternatively, will the Germans succeed in getting the taxpayers and social service consumers in the PIIGS to pay? In which case the beautiful dream of pan-European solidarity will be revealed as a lie, and it's hard to see how the European Union survives as a political project.

Soros's analysis from two weeks ago makes precisely the same point about what the outcome of imposing such a structural adjustment program will be for the comity of European nations:

The European Union that will emerge from this process will be diametrically opposed to the idea of a European Union that is the embodiment of an open society. It will be a hierarchical system built on debt obligations instead of a voluntary association of equals. There will be two classes of states, creditors and debtors, and the creditors will be in charge. As the strongest creditor country, Germany will emerge as the hegemon. The class differentiation will become permanent because the debtor countries will have to pay significant risk premiums for access to capital and it will become impossible for them to catch up with the creditor countries.

The divergence in economic performance, instead of narrowing, will become wider. Both human and financial resources will be attracted to the center and the periphery will become permanently depressed. Germany will even enjoy some relief from its demographic problems by the immigration of well-educated people from the Iberian Peninsula and Italy instead of less qualified Gastarbeiter from Turkey or Ukraine. But the periphery will be seething with resentment.

Is that really the Europe the Germans want? Or maybe it's just the only Europe they're willing to pay for.