Friday, June 24, 2011

He doesn’t use e-mail—yet his name is inextricably linked with technological progress. An avid sailor who never strays far from shore, Robert Solow is one of the most adventurous minds in economics, yet he has worked in the same university office overlooking Boston’s Charles River for more than half a century.

A self-styled solver of puzzles who eschews grandiose ideas, Solow developed a landmark model that fundamentally changed research on how economies develop and grow. Now Professor Emeritus at the Massachusetts Institute of Technology (MIT), Solow won the Nobel Prize in economics in 1987 for his seminal contributions to growth theory. “Here is a scholar whose work has left an indelible imprint on his discipline,” said Princeton professor Alan Blinder. “Not just a model, mind you, but even a residual bears his name!” (Blinder, 1989).

Child of the Depression

We meet on one of those beautiful, crisp, sunny New England days that are the last gasp of fall before winter sets in. He is a lanky man, with a warm smile. Solow’s room in the MIT economics department also has a view of the Boston skyline; it is an office he has occupied for the better part of 60 years, and one he relinquished a few weeks after our meeting. “This is the only full-time academic job I’ve ever had. So I’m not a bird of passage; I settled here.”

As an assistant professor he would never have merited such a magnificent office, he hastens to inform me, but when the economics department moved into its new building in 1952, Solow, who had been on the faculty for only a couple of years, was already a close friend and colleague of the late Paul Samuelson, one of the most important economic theoreticians of the 20th century. It was understood that he had to have the office next to Samuelson—who, of course, had to have the best office in the department.

Born in New York in 1924, Solow has lived through both the Great Depression and the Great Recession. The son of a furrier who traded with the Soviet Union, he grew up in Brooklyn. The events of the Depression left an indelible imprint on the minds of many future pioneers in economics, and Solow was no exception. “I was very much aware, even as a kid, that something bad had happened and that it was called the Depression. And it meant that there were a lot of people out of work and a lot of people were poor and hungry, and that stuck with me. It was an important thing in my life and probably has a lot to do with attitudes I have, even now.”

After his arrival on a scholarship to Harvard at the age of 16, his interest in the underlying factors behind social upheaval led him to study sociology and anthropology, together with some elementary economics (and some not-so-elementary economic tomes, such as Wassily Leontief’s just-published Structure of the American Economy). But the attack on Pearl Harbor in December 1941 prompted him to drop his studies and sign up immediately as a private in the U.S. Army. Had he waited to graduate, he could have enlisted as an officer, but “defeating Nazism was simply the most important thing to do at that time,” he said. He joined a signals intelligence unit (he knew both Morse code and German) and saw active duty in North Africa and Italy.

As soon as he got back home, he married his sweetheart, economic historian Barbara Lewis, to whom Solow has been married for more than 65 years. On his return to Harvard in 1945, Solow decided—at Lewis’s suggestion—to study economics, becoming Leontief’s pupil, research assistant, and, eventually, lifelong friend.
He credits Leontief with his transformation from graduate student to professional economist. As his tutor, Leontief would assign Solow a paper to read each week for discussion during their next meeting. In those days, economics was not very mathematical, and Solow lacked college-level mathematics, but he got sick of being given only nontechnical papers—one can hear the indignation and determination in his voice: “I wasn’t going to allow that to happen, read the second-rate papers because I couldn’t read the first-rate articles.” So he enrolled in the necessary mathematics courses in calculus and linear algebra.

It was a fortuitous decision. Not only did it earn him an assistant professorship at MIT (to teach probability and statistics), it also meant that Solow was able to speak the same language as Samuelson and to keep up with him intellectually—a feat he likens to “running as hard as you can, all the time.” Samuelson, in turn, described Solow as the “consummate economist’s economist.”

They were colleagues and friends for the next 60 years, and whenever Solow was offered a position at another university, he would stipulate that he would move only if Samuelson’s office were moved alongside his. This never quite worked out, and was one of the reasons both men ended up spending their careers at MIT.

Reconstruction and decolonization

Post–World War II reconstruction in industrialized countries and economic development in newly independent colonies meant that growth theory was the topic for economists in the 1950s. Before Solow’s contribution, the field did exist, but it was a somber one. Seminal papers by Roy Harrod in 1939 and Evsey Domar from 1946 onward had postulated that steady long-run growth was a possible but exceedingly unlikely outcome, one that teetered on a knife edge in the standard macroeconomic models of the time. For steady growth to prevail, the economy’s saving rate had to match exactly the product of the capital-output ratio and the rate of growth of the labor force.

But in the Harrod-Domar growth model, these three variables—the saving rate, the capital-output ratio, and labor force growth—were fixed and exogenous, given by assumptions on preferences, technology, and demographics, respectively. There was no reason for the required equality to hold, and if it did not, the model predicted that the economy would be subject to ever-increasing fluctuations.

Solow came into this debate with two valuable insights. First, despite the recession of the 1890s, the Great Depression, and World War II, Solow thought it was historically untenable that the main characteristic of capitalist economies should be explosive volatility (either growing without bound or shrinking out of existence) rather than stable growth (with occasional crises). Nor did he accept predictions that a higher saving rate would lead to increased long-run growth.

Second, among the outside influences in the Harrod-Domar model, Solow’s attention was naturally drawn to his research specialty: the production side. This choice made his reputation. In his 1956 “A Contribution to the Theory of Economic Growth,” Solow showed that relaxing the production technology to allow a flexible capital-output ratio made steady-state growth not only possible but a natural outcome. Growth theory could rid itself of reliance on finely balanced configurations. And as all students of economics now know, the long-run growth rate in Solow’s model is independent of the saving rate.

He did not stop there.
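Before turning to that next step, it may help to set down the usual textbook algebra behind the knife-edge condition and Solow’s resolution of it (standard notation, not drawn from the article itself). With saving rate $s$, capital-output ratio $v = K/Y$, and labor force growth rate $n$, Harrod-Domar steady growth requires

\[ \frac{s}{v} = n, \qquad \text{that is,} \qquad s = v\,n , \]

a knife edge, since $s$, $v$, and $n$ are all fixed by assumption. In Solow’s 1956 model the capital-output ratio is free to adjust, and the capital-labor ratio $k = K/L$ converges to the steady state $k^{*}$ defined by

\[ s\,f(k^{*}) = (n + \delta)\,k^{*}, \]

where $f$ is the intensive-form production function and $\delta$ the depreciation rate; the saving rate then affects the long-run level of income per worker, but not its long-run growth rate.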
Not satisfied with the prospect of much spilling of ink by growth theorists following his 1956 article, Solow further shook up empiricists with his “Technical Change and the Aggregate Production Function” in 1957. He used his theoretical model to decompose the sources of growth among capital, labor, and technological progress. And he showed that technological change, rather than capital accumulation, was the main driver of long-run growth. This “technical change residual”—so called because it is the part of growth that cannot be explained by identifiable factors such as capital accumulation or labor force growth—would forever bear his name.
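For readers who want the arithmetic behind the residual, the standard growth-accounting sketch runs as follows (textbook notation, not taken from the article; Solow’s 1957 paper allows a general production function, but the Cobb-Douglas case is the usual illustration). With aggregate output $Y = A\,K^{\alpha}L^{1-\alpha}$, growth rates decompose as

\[ \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L}, \]

so the residual is measured as

\[ \frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\,\frac{\dot{K}}{K} - (1-\alpha)\,\frac{\dot{L}}{L}, \]

that is, whatever output growth remains after the contributions of capital and labor, weighted by their income shares, have been accounted for.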

Appropriately enough, a century after his birth in 1911, Marshall McLuhan has found a second life on the Internet. YouTube and other sites are a rich repository of McLuhan interviews, revealing that the late media sage still has the power to provoke and infuriate. Connoisseurs of Canadian television should track down a 1968 episode of a CBC program called The Summer Way, a highbrow cultural and political show that once featured a half-hour debate about technology between McLuhan and the novelist Norman Mailer.

Both freewheeling public intellectuals with a penchant for making wild statements, Mailer and McLuhan were well matched mentally, yet they displayed an appropriate stylistic contrast. Earthy, squat, and pugnacious, Mailer possessed all the hot qualities McLuhan attributed to print culture. Meanwhile, McLuhan adopted the cerebral and cavalier cool approach he credited to successful television politicians like John F. Kennedy and Pierre Trudeau, who responded to attacks with insouciant indifference.

Early on in the program, McLuhan and Mailer tackle the largest possible issue, the fate of nature:

McLuhan: We live in a time when we have put a man-made satellite environment around the planet. The planet is no longer nature. It’s no longer the external world. It’s now the content of an artwork. Nature has ceased to exist.

Mailer: Well, I think you’re anticipating a century, perhaps.

McLuhan: But when you put a man-made environment around the planet, you have in a sense abolished nature. Nature from now on has to be programmed.

Mailer: Marshall, I think you’re begging a few tremendously serious questions. One of them is that we have not yet put a man-made environment around this planet, totally. We have not abolished nature yet. We may be in the process of abolishing nature forever.

McLuhan: The environment is not visible. It’s information. It’s electronic.

Mailer: Well, nonetheless, nature still exhibits manifestations which defy all methods of collecting information and data. For example, an earthquake may occur, or a tidal wave may come in, or a hurricane may strike. And the information will lag critically behind our ability to control it.

McLuhan: The experience of that event, that disaster, is felt everywhere at once, under a single dateline.

Mailer: But that’s not the same thing as controlling nature, dominating nature, or superseding nature. It’s far from that. Nature still does exist as a protagonist on this planet.

McLuhan: Oh, yes, but it’s like our Victorian mechanical environment. It’s a rear-view mirror image. Every age creates as a utopian image a nostalgic rear-view mirror image of itself, which puts it thoroughly out of touch with the present. The present is the enemy.

It’s a measure of McLuhan’s ability to recalibrate the intellectual universe that in this debate, Mailer — a Charlie Sheen–style roughneck with a history of substance abuse, domestic violence, and public mental breakdowns — comes across as the voice of sobriety and sweet reason. Mailer once observed that McLuhan “had the fastest brain of anyone I have ever met, and I never knew whether what he was saying was profound or garbage.” Many others were similarly divided. It was easy to be overawed by McLuhan’s quick-wittedness, his startling erudition, and his ability to describe the familiar world in shockingly fresh language, yet still remain uncertain about the ultimate value of his ideas.

McLuhan has strong claims to being the most important thinker Canada has ever produced. In his first book, The Mechanical Bride, published in 1951, he established himself in the emerging field of cultural studies by offering a caustic survey of the dehumanizing impact of popular magazines, advertising, and comic strips. By the 1960s, he had widened his lens to examine the power of media as a whole. In The Gutenberg Galaxy, he offered a map of modern history by highlighting the hitherto-unexplored effect of print in shaping how we think. This was followed by Understanding Media, which prophesied that new electronic media would rewire human consciousness just as effectively as print once did, giving birth to a “global village” where people all over the world would be linked via communication technology.

McLuhan has also long been a fiercely polarizing figure, especially during the height of his fame in the 1960s and ’70s. For instance, the American novelist and social critic Tom Wolfe praised him in the most extravagant terms: “At the turn of the nineteenth century and in the early decades of the twentieth there was Darwin in biology, Marx in political science, Einstein in physics, and Freud in psychology. Since then there has been only McLuhan in communications studies.” Meanwhile, the German essayist and poet Hans Enzensberger denounced McLuhan as a “reactionary” and a “charlatan,” a shallow theorist who attempted to “dissolve all political problems in smoke” and promised “the salvation of man through the technology of television.”

One of the most contentious aspects of McLuhan’s life and work was his devout Catholicism, which some critics saw as antithetical to his academic pursuits. In 1971, the British intellectual Jonathan Miller published a short monograph on McLuhan as part of Fontana Books’ Modern Masters, a series of pocket guides on important thinkers. Unrelentingly hostile, Miller argued that McLuhan’s ideas were rooted in a reactionary Catholicism and had little basis in science. According to Miller, the “hidden bias” of McLuhan’s work was that it was “strongly animated by Catholic piety.” He claimed that “McLuhan found it necessary to elaborate a psychological theory which owes considerably more to the unacknowledged authority of St. Thomas Aquinas than it does to any of the scientific sources he openly refers to.” A running theme of Miller’s book is that McLuhan’s ideas were cloaked in the impartial language of science, but carried with them implicit moral values based on his Catholicism.

Since McLuhan’s death in 1980, there has been an outpouring of biographical and exegetical texts, ranging from a hefty collection of his letters, to a superb biography by Philip Marchand, to insightful explications of his work by writers like Douglas Coupland. Arguably, this thriving book industry is paradoxical for an author associated with the death of print culture. But the benefit of this ever-growing body of literature is that it allows us to revisit the debates about McLuhan’s work with a fresh batch of evidence. As it turns out, his relationship with Catholicism was more complicated and layered than his critics allowed, serving not as a hidden bias but rather as a spur toward creativity. His faith provided him with special insights that enabled him to become the Marx of the media age and the Darwin of the digital revolution.

CRITICS LIKE MILLER are dead accurate on one point: the absolute centrality of Catholicism to McLuhan’s intellectual life. McLuhan was born in Edmonton to a generically Protestant family. His father, a good-natured but unsuccessful businessman, was a Methodist, while his mother, a strong-willed public speaker and actress, was a Baptist. He grew up in Winnipeg and would later claim that much of his personal life was shaped by his horrified reaction to that industrial city, which led him to search for a more humane culture in Europe.

In a 1935 letter to his mother explaining his increasing interest in Catholicism, McLuhan noted that “I simply couldn’t believe that men had to live in the mean mechanical joyless rootless fashion that I saw in Winnipeg.” The young McLuhan was a romantic anti-industrialist who came to conclude that Protestantism was to blame for the ills of the modern world. His thinking was much influenced by the Catholic apologist G. K. Chesterton, who advocated “distributist” politics that sought to restore the guild ideals of the Middle Ages as a counterforce to both capitalism and socialism. In the same letter to his mother, McLuhan noted that “I need scarcely indicate that everything that is especially hateful and devilish and inhuman about the conditions and strain of modern industrial society is not only Protestant in origin, but it is their boast(!) to have originated it.”

In converting to Catholicism in 1937, McLuhan was joining a Church he saw as a refuge from the ills of modernity, a litany of evils that included everything from sexual promiscuity to wives bossing around their husbands. At the time, the Church was under the sway of Pius IX’s “Syllabus of Errors,” an 1864 proclamation condemning the idea that “the Roman Pontiff can, and ought to reconcile himself … with progress, liberalism, and modern civilization.” McLuhan admired the fascist Spanish dictator Francisco Franco as a necessary bulwark against godless communism and anarchism. He thought that feminism and the “homosexual cult” were working in tandem to undermine the natural authority of men over the family.

If he had remained so reactionary, his ideas would have been no more intellectually challenging than those of Michael Coren or Pat Buchanan, cartoon Catholics for whom Church doctrine is largely useful as a blunt instrument with which to attack political foes. McLuhan’s great saving grace, however, was his ceaseless curiosity, which led him to expand his intellectual framework. Even in the years before his conversion, he wrestled with theologians whose thinking challenged his own prejudices.

He made an extensive study of his contemporary Jacques Maritain, who was attempting to update the philosophy of St. Thomas Aquinas as a way of making a rapprochement between Catholicism and modernity. As a neo-Thomist, Maritain argued that Catholic social thought was compatible with pluralism and democracy (in the abstract) and contemporary North American society (in particular). These ideas were radical in the 1930s and ’40s, but they would eventually influence the direction of the Church in the great doctrinal revolution of the 1960s, Vatican II.

Maritain frequently lectured at St. Michael’s College, at the University of Toronto, whose faculty McLuhan joined in 1946. McLuhan was attracted to the “lucidity and order” with which Maritain expounded the ideas of Aquinas. If McLuhan had any critique, it was that Maritain did not go far enough to integrate Catholicism with developments in the social sciences. McLuhan also took inspiration from the avant-garde theology of Pierre Teilhard de Chardin, a Jesuit scientist who argued for a congruence between evolutionary theory and the doctrine of redemption. In a 1952 review of The Mechanical Bride, Father Walter Ong, a Jesuit intellectual who studied under McLuhan, drew connections between McLuhan’s theories and de Chardin’s concept of a “noosphere” where “the whole world [is] alerted simultaneously everyday to goings-on in Washington, Paris, London, Rio de Janeiro, Rome, and … Moscow.”

McLuhan’s pioneering studies of popular culture were part of a sea change in Catholic intellectualism, as the Church gave up the siege mentality of earlier decades and tried to offer a more nuanced and positive account of modern life. As well, the Church began to move away from its defence of authoritarianism to support pro-democracy political movements around the world. McLuhan underwent his own political evolution: the young man who admired Franco became the academic who engaged in a long correspondence with Pierre Trudeau. And while The Mechanical Bride condemns the comic strip Blondie for undermining the patriarchal ideal of the man as the natural head of the household, in later writings, such as Understanding Media, McLuhan deliberately eschewed traditionalist strictures, because he thought it was more important to understand the world than to condemn it. As he told an interviewer in 1967, “The mere moralistic expression of approval or disapproval, preference or detestation, is currently being used in our world as a substitute for observation and a substitute for study.”

On moral matters, he remained very conservative. He was adamantly anti-abortion, for example. But part of his achievement as a mature thinker was his ability to bracket off whatever moral objections to the modern world he might have had and to concentrate on exploring new developments — to be a probe. Indeed, although he joined the Church as a refuge, his faith gave him a framework for becoming more hopeful and engaged with modernity. This paradox might be explained by the simple fact that as he deepened in his faith he acquired an irenic confidence in God’s unfolding plan for humanity. In a 1971 letter to an admirer, McLuhan observed, “One of the advantages of being a Catholic is that it confers a complete intellectual freedom to examine any and all phenomena with the absolute assurance of their intelligibility.”

Indeed, his faith made him a more ambitious and far-reaching thinker. Belonging to a Church that gloried in cathedrals and stained glass windows made him responsive to the visual environment, and liberated him from the textual prison inhabited by most intellectuals of his era. The global reach and ancient lineage of the Church encouraged him to frame his theories as broadly as possible, to encompass the whole of human history and the fate of the planet. The Church had suffered a grievous blow in the Gutenberg era, with the rise of printed Bibles leading to the Protestant Reformation. This perhaps explains McLuhan’s interest in technology as a shaper of history. More deeply, the security he felt in the promise of redemption allowed him to look unflinchingly at trends others were too timid to notice.

A century after his birth, what is McLuhan’s status as a thinker? Much more robust than his critics would have expected. Consider again the statement that so shocked Mailer: “Nature from now on has to be programmed.” Living as we do in an age grappling with climate change and proposals to control the planet’s temperature through geoengineering, McLuhan’s observations seem like a sober recital of facts. His core insight was a simple one: technology isn’t just an external tool; it also changes how we think. “The medium is the message” means that each new technology humanity has invented, from the wheel to the alphabet to the Internet, creates new mental habits and new patterns of thought. Anyone addicted to Facebook understands what he meant: our tools aren’t separate from us but rather interact with us and alter, be it ever so slightly, who we are.

As a scholar, McLuhan had a multitude of flaws. He was often sloppy and made many factual errors. But to judge him simply in terms of whether all his quotations and citations are accurate is to misunderstand the role of a master thinker. Like Marx and Freud, he was an intellectual agitator, a conceptual mind expander, the yeast in the dough. After Marx, we can no longer ignore the reality of class difference; after Freud, we can’t pretend that our mental life isn’t saturated with sexual impulses; after McLuhan, we can’t imagine that technology is just a neutral tool. Moreover, like Darwin and Marx, McLuhan is no longer just one man but rather a living and evolving body of thought. The literary critic Guy Davenport once argued that McLuhan was a “half-mad genius” and “one of those strange figures whose brilliance can be articulated by others though not by themselves.”

Davenport may have gone too far: works like The Gutenberg Galaxy remain fertile reading. But it is true that to fully appreciate the profoundness of McLuhan’s thinking, you need to read books like Hugh Kenner’s The Mechanic Muse, Walter Ong’s Orality and Literacy: The Technologizing of the Word, and Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains. These sober, scholarly works about the interaction between technology and culture build on McLuhan’s work while avoiding his tendency toward blunt hyperbole. Kenner shows how modernist literature emerged out of industrial culture, and Ong demarcates how the shift from orality to literacy changed the way we think, a process Carr sees as being replicated as we move on to electronic communication. Taken together, they demonstrate the solidity of the intellectual framework McLuhan created. In this new century, countless other thinkers will find inspiration from his work; he has become an inescapable part of the world’s intellectual heritage.

Monday, June 20, 2011

According to Hegel, history is idea-driven. According to almost everyone else, this is foolish. What can “idea-driven” even mean when measured against the passion and anguish of a place like Libya?

But Hegel had his reasons. Ideas for him are public, rather than in our heads, and serve to coordinate behavior. They are, in short, pragmatically meaningful words. To say that history is “idea driven” is to say that, like all cooperation, nation building requires a common basic vocabulary.

One prominent component of America’s basic vocabulary is “individualism.” Our society accords unique rights and freedoms to individuals, and we are so proud of these that we recurrently seek to install them in other countries. But individualism, the desire to control one’s own life, has many variants. Tocqueville viewed it as selfishness and suspected it, while Emerson and Whitman viewed it as the moment-by-moment expression of one’s unique self and loved it.

After World War II, a third variant gained momentum in America. It defined individualism as the making of choices so as to maximize one’s preferences. This differed from “selfish individualism” in that the preferences were not specified: they could be altruistic as well as selfish. It differed from “expressive individualism” in having general algorithms by which choices were made. These made it rational.

This form of individualism did not arise by chance. Alex Abella’s “Soldiers of Reason” (2008) and S. M. Amadae’s “Rationalizing Capitalist Democracy” (2003) trace it to the RAND Corporation, the hyperinfluential Santa Monica, Calif., think tank, where it was born in 1951 as “rational choice theory.” Rational choice theory’s mathematical account of individual choice, originally formulated in terms of voting behavior, made it a point-for-point antidote to the collectivist dialectics of Marxism; and since, in the view of many cold warriors, Marxism was philosophically ascendant worldwide, such an antidote was sorely needed. Functionaries at RAND quickly expanded the theory from a tool of social analysis into a set of universal doctrines that we may call “rational choice philosophy.” Governmental seminars and fellowships spread it to universities across the country, aided by the fact that any alternative to it would by definition be collectivist. During the early Cold War, that was not exactly a good thing to be.

The overall operation was wildly successful. Once established in universities, rational choice philosophy moved smoothly on the backs of their pupils into the “real world” of business and government (aided in the crossing, to be sure, by the novels of another Rand—Ayn). Today, governments and businesses across the globe simply assume that social reality is merely a set of individuals freely making rational choices. Wars have been and are still being fought to bring such freedom to Koreans, Vietnamese, Iraqis, Grenadians, and now Libyans, with more nations surely to come.

At home, anti-regulation policies are crafted to appeal to the view that government must in no way interfere with Americans’ freedom of choice. Even religions compete in the marketplace of salvation, eager to be chosen by those who, understandably, prefer heaven to hell. Today’s most zealous advocates of individualism, be they on Wall Street or at Tea Parties, invariably forget their origins in a long ago program of government propaganda.

Rational choice philosophy, to its credit, made clear and distinct claims in philosophy’s three main areas. Ontologically, its emphasis on individual choice required that reality present a set of discrete alternatives among which one could choose: linear “causal chains” which intersected either minimally, trivially, or not at all. Epistemologically, that same emphasis on choice required that at least the early stages of such chains be knowable with something akin to certainty, for if our choice is to be rational we need to know what we are choosing. Knowledge thus became foundationalistic and incremental.

But the real significance of rational choice philosophy lay in ethics. Rational choice theory, being a branch of economics, does not question people’s preferences; it simply studies how they seek to maximize them. Rational choice philosophy seems to maintain this ethical neutrality (see Hans Reichenbach’s 1951 “The Rise of Scientific Philosophy,” an unwitting masterpiece of the genre); but it does not. Whatever my preferences are, I have a better chance of realizing them if I possess wealth and power. Rational choice philosophy thus promulgates a clear and compelling moral imperative: increase your wealth and power!

Today, institutions which help individuals do that (corporations, lobbyists) are flourishing; the others (public hospitals, schools) are basically left to rot. Business and law schools prosper; philosophy departments are threatened with closure.

Rational choice theory came under fire after the economic crisis of 2008, but remains central to economic analysis. Rational choice philosophy, by contrast, was always implausible. Hegel, for one, had denied all three of its central claims in his “Encyclopedia of the Philosophical Sciences” over a century before. In that work, as elsewhere in his writings, nature is not neatly causal, but shot through with randomness. Because of this chaos, we cannot know the significance of what we have done until our community tells us; and ethical life correspondingly consists, not in pursuing wealth and power, but in integrating ourselves into the right kinds of community.

Critical views soon arrived in postwar America as well. By 1953, W. V. O. Quine was exposing the flaws in rational choice epistemology. John Rawls, somewhat later, took on its sham ethical neutrality, arguing that rationality in choice includes moral constraints. The neat causality of rational choice ontology, always at odds with quantum physics, was further jumbled by the environmental crisis, exposed by Rachel Carson’s 1962 book “Silent Spring,” which revealed that the causal effects of human actions were much more complex, and so less predictable, than previously thought.

These efforts, however, have not so far confronted rational choice individualism as Hegel did: on its home ground, in philosophy itself. Quine’s “ontological relativity” means that at a sufficient level of generality, more than one theory fits the facts; we choose among the alternatives. Rawls’ social philosophy relies on a free choice among possible social structures. Even Richard Rorty, the most iconoclastic of recent American philosophers, phrased his proposals, as Robert Scharff has written, in the “self-confident, post-traditional language of choice.”

If philosophers cannot refrain from absolutizing choice within philosophy itself, they cannot critique it elsewhere. If they did refrain, they could begin formulating a comprehensive alternative to rational choice philosophy — and to the blank collectivism of Cold War Stalinism — as opposed to the specific criticisms advanced so far. The result might look quite a bit like Hegel in its view that individual freedom is of value only when communally guided. Though it would be couched, one must hope, in clearer prose.

John McCumber is Professor of Germanic Languages at UCLA. He is the author of “Time in the Ditch: American Philosophy and the McCarthy Era” (2001) and two forthcoming books, “On Philosophy: Notes From a Crisis” and “Time and Philosophy: A History of Continental Thought.”

From day one, immense challenges faced the coalition of international institutions that opted for a liquidity approach to address Greece’s debt solvency problems. Now that this coalition is stumbling and bickering publicly, the outlook for Greece has taken a significant turn for the worse. Even as George Papandreou, the Greek prime minister, prepares to reshuffle his cabinet, he must know his nation’s predicament is now extremely hard to reverse.

It is now commonly accepted that Greece’s predicament is due to two inter-related problems: the economy is unable to grow, and the debt burden is enormous. Yet neither has sufficiently influenced the approach adopted by the crisis management coalition, consisting of the Greek government, its European creditors (namely other eurozone governments, the European Commission and the European Central Bank) and the International Monetary Fund.

Instead, the focus has been on dramatic austerity for Greece and massive loans from the official creditors. Not surprisingly, every economic, financial and social indicator for the Greek economy has deteriorated. This has happened both in absolute terms and, more alarmingly, relative to the coalition’s already grim expectations. Such failure naturally encourages a blame game, and sadly that is exactly what is now happening.

Judging from other crisis management episodes around the world, it is normal for both the Greek government and its people to feel let down by European neighbours who, they feel, under-appreciate the sacrifices the Greek population has made, especially since these same creditors refused to lower the interest rate on new loans. Equally, it is normal for the creditors to complain that it is Greece that is not doing enough to counter what is, after all, a home-grown problem.

In principle, these gaps need not be fatal. Yet the current attempts to bridge them are nowhere near enough. They would do little beyond, at best, prolonging for a few months an already unsustainable situation. More likely, they would be undermined rapidly by two recent developments that suggest that the current approach to crisis management in Greece is coming to its end.

First, and most importantly, the Greek government is losing control of the streets. As protests turn increasingly ugly, the pursuit of a national political consensus becomes even more elusive. This is especially true if all Mr Papandreou, or another leader, can offer is a step back to a discredited approach that involves sacrifices with no evidence of lasting benefits.

Second, even if Greece can deliver, European creditors fundamentally disagree among themselves as to how best to support the country — other than to push the IMF to lend more. Some, led by Germany, want fairer burden-sharing with the private sector, rather than to continue to fund both the needs of the Greek economy and full repayments to private lenders that are now exiting the country. But the ECB strongly opposes this, especially now that its balance sheet is contaminated by large holdings of Greek bonds.

Responding properly to all this is an engineering nightmare and a political headache. Critically, it now requires giving up on at least one, and more likely at least two, of the three principles that have underpinned the coalition’s approach to Greece: avoiding a debt restructuring, a currency devaluation and a change in the fiscal set-up of the eurozone.

Europe faces a moment of truth. The sooner this is recognised, the greater the chance of shifting to a “plan B”. If not, the prospects are stark: the already-difficult outlook facing the three bail-out countries (Greece, Ireland and Portugal) will surely be compounded by a decade of internal economic implosion. The task must now be to limit fundamental contagion to countries that are yet to be bailed out (notably Spain), and to maintain the integrity of the euro. But the time for action is fast running out.

Mohamed El-Erian is chief executive and co-CIO of Pimco, the world’s leading bond manager.

In the high crisis just two years back, the cult of John Maynard Keynes saw a dramatic revival. Deficits were acceptable, stimulus plans became law, books entitled Return of the Master and The Keynes Solution rushed into print. Enthusiasts spoke of a “new New Deal.” Today, although the economy has not recovered, and although unemployment remains near 9 percent, none of this remains.

Barack Obama declined to become a third Roosevelt. His Bernard Baruch proved to be Robert Rubin. There is no Wagner in the Senate, no Eccles or Currie at the Federal Reserve. The agencies that harbored Leon Henderson and the young John Kenneth Galbraith do not exist. If Keynes were alive today and came to visit, one wonders who in official Washington would see him.

The new dawn of the Keynesian idea has gone dark.

That it was a false dawn goes without saying. People who had actually read and understood Keynes never came close to power. Those who did come to power under Obama were False Keynesians. They would support a “stimulus,” but only if it were limited and temporary. To Lawrence Summers, a two-year program met the definition of “sustained.” $800 billion spread over two years—about 3 percent of a GDP in free-fall—qualified as “substantial.” Ben Bernanke and Christina Romer, both of whom had reputations as experts on the Great Depression, were closer to Milton Friedman’s view of that matter—that the Fed did it—than to Keynes.

The False Keynesians also relied on forecasting models that were conceptually anti-Keynesian because they incorporated the notion of a “natural rate of unemployment.” The models assumed that economic recovery would occur, returning us to an unemployment rate near 5 percent after five years. This would happen—so said the models—no matter what the policies were. The models thus defied the commonsense perception that we were in a deep and systemic crisis. In 1930 Keynes wrote, “The world has been slow to realize that we are living this year in the shadow of one of the greatest economic catastrophes of modern history.” In 2009 we realized it. But our computers, and the technicians who ran them, overruled us.

As a result, policies were inadequate and the results fell short. In March 2009 I predicted in The Washington Monthly that a temporary program—rather than a strategic effort coupled with forceful financial reform—would not foster business investment and sustainable renewed growth. As the stimulus package wore off, the economic recovery would be slow. This prediction came true, with disastrous political effects for Obama. And so the False Keynesians went home—Romer back to Berkeley, Summers to Harvard. The reputation of Keynesianism is just part of their collateral damage.

After the midterm elections, all attention turned to the victors’ agenda: the federal budget deficit, the public debt, spending cuts, and the cause of “entitlement reform”—our Orwellian phrase for slashing Social Security and Medicare. How can we understand this march of budget-cutters and free-market fundamentalists? Where do their ideas come from? Unlike the Reagan revolutionaries of 30 years ago, they have no academic messiah, no newspaper apostles, and, so far as one can tell, no sacred text. “Monetarism” plays no role, nor does “supply-side economics.” They are not really “Austrians,” though some claim as much. If they are “slaves of some defunct economist”—then of whom?

The answers are not far to seek.
Adam Smith and David Ricardo—and also their acolytes, the late 19th-century Social Darwinists Herbert Spencer and William Graham Sumner—can be heard murmuring in the vapors of our present discourse. And far more than Marx or Keynes, Thurman Arnold and Thorstein Veblen can help us grasp what their message actually is.

Adam Smith, the most humane and optimistic of all economists, adapted his theory of value from the Physiocrats he’d encountered in France, who held that economic value arose on the land. Smith was not comfortable with that, so instead he wrote that value was vested by labor in physical products, which could then be exchanged. Those who made things were “productive” and those who did not were not. Government (including soldiers), alongside the arts and domestic service, fell into the unproductive category. These activities were necessary, even desirable, but only up to a point. They had to be supported out of “revenue” (economic rent) and did not accumulate as wealth. A country that allowed too much of those sorts of things would become poor.

This idea contradicts the accounts of national income that give us our modern definitions of economic activity and growth. Government purchases are indeed part of the GDP. So are the labors of ballet dancers and college professors. The accounts make no distinction between public and private spending or between tangible and intangible wealth.

Yet Smith’s idea appeals powerfully to instinct—even to common sense. Surely there must be “good” and “bad” spending. Just as we approve of factories, just as we dislike “planned obsolescence” in the private sector, so we consider much of what government does to be “wasteful” and “fundamentally unproductive.” Waste, of course, is a burden by definition, and unproductive activity is to be kept to a minimum. The issue, once framed in this way, becomes one not of whether to cut, but of “how much” and “what” and “on whom.” We forget entirely (until the victims remind us) that, as a matter of accounting, budget cuts will reduce income, cost jobs, and cause economic activity—and business profits—to fall.

Then there is the question of whether the fall in spending, profits, and jobs will be made up quickly and easily by some other sector. Leaving aside exports, there are two possibilities: private consumption and business investment.

On this issue David Ricardo championed another Frenchman, Jean-Baptiste Say, whose Law held that savings create investment or, equivalently, that supply creates demand. If there was ever an excess of production, then prices would fall, demand would increase, and that would take care of it. Thus it was impossible for there to be a general glut, meaning sustained mass unemployment. The system was self-correcting; crises did not happen. This powerful confidence now sustains the Tea Party; they have blotted the collapse of the private banking sector from their minds.

The rabbit in Ricardo’s hat was the nature of money in his time—mainly coins and paper backed by gold or silver. The quantity of money thus didn’t fall in a glut, and its purchasing power would rise as prices fell. Consumption and investment would take up the slack.

But we no longer live in that world. In our credit-money economy, purchasing power goes away when banks stop lending, and the money stock falls. This is why Milton Friedman and Anna Schwartz could blame the Depression on the Federal Reserve, and why Ron Paul favors abolishing the Fed and a return to the gold standard.

With gold-money unavailable, the Republican staff of the Joint Economic Committee has a new paper on how big budget cuts might support economic growth. They offer no theory, just citations to empirical papers that turn out to be highly implausible or else unsupportive. But this work, alongside the balanced-budget amendment cosponsored by all the Republican senators, presumes that some force will drive up business investment to a more-than-offsetting degree, creating a larger and more private economy than we have now. The Keynesian “multiplier” is negative in this view.

This argument dovetails with the line of the business lobbies, who whine on about regulations and “uncertainty,” as if we hadn’t spent the past 30 years deregulating everything in sight. In their version of the story, interference by government is a choke-leash on the animal forces of free-market dynamism. Lift regulation, they say, and business investment will rise to the challenge of replacing the demand and incomes lost to budget cuts. While the public-spending multiplier is negative, the private-investment multiplier is anything but.

How can this be? It’s an old theme, redolent of the Social Darwinists’ view of the divine right of the rich to rule. Thurman Arnold, in The Folklore of Capitalism, captured the spirit in this description of a 1936 meeting with bankers, businessmen, lawyers, and others up in arms because the Interstate Commerce Commission was proposing a cut in fares on the New Haven Railroad:

[One] gentleman present had the statistical data on why the railroad would suffer. In order to take care of the increased traffic, new trains would have to be added, new brakemen and conductors hired, more money put into permanent equipment. All such expenditures would, of course … remove persons from relief rolls, stimulate the heavy goods industries, and so on. This, however, was argued to be unsound. Since it was done in violation of sound principle it would damage business confidence, and actually result in less capital goods expenditures, in spite of the fact that it appeared to the superficial observer to be creating more…

And Thorstein Veblen, in The Theory of Business Enterprise, in 1904, explained what the sound principle underlying it all was. The bottom of the matter was emotional: “Depression is primarily a malady of the affections of the business men. … Any proposed remedy, therefore, must be of such a nature as to reach this emotional seat of the trouble. … What is required is a business coalition … loosely called a ‘trust’.”

There you have it: business people need to be in charge. And more than that: they need to feel in charge. Anything else is fundamentally unsound.

That is what made Keynes insufferable. It wasn’t that as a young man he liked boys. It wasn’t that he taught that thrift is a vice, or that savings are pathological, that deficits are helpful, that debt is necessary, that interest rates should be kept low, that the economy should be run at full employment for the good of all. It wasn’t even his reference at the end of The General Theory to the “euthanasia of the rentier.”

No, it was the fact that Keynesian policy required Keynes.
And if Keynes were in charge, then the captains of industry could not be. Larry Summers is not Keynes. But he did give the impression, for a while, of running the show. This was a fatal error. It was the impression of making policy that business and the Tea Party could not stand. A better policy would not have been better liked.

With Jeffrey Immelt, we now have a business face and no economic policy at all. The president has learned. Whether it will save him remains to be seen. A full government of business people would be much more authentic.

Meanwhile, in the halls of Congress, as well as at Westminster and in Frankfurt and Brussels and Berlin, the ghosts of Smith and Ricardo mutter on about unproductive government and how savings create investment. So they cut and cut, and when that doesn’t work they call for more cuts. And the penetrating voices of Arnold and Veblen can be heard too, explaining what is really behind it.

Madmen in authority, distilling their frenzies indeed. Keynes got that right.

James K. Galbraith is the author of The Predator State: How Conservatives Abandoned the Free Market and Why Liberals Should Too. He teaches Keynes at the University of Texas at Austin.