Ashley Furniture Industries Inc., already facing a possible $1.7 million fine for alleged safety violations at its massive factory in Arcadia, was accused Tuesday of new infractions and of failing to report worker injuries.

The Occupational Safety and Health Administration said a 56-year-old employee of the giant furniture maker lost his right ring finger after a March 11 accident on a machine that the agency had cited as unsafe just one month earlier.

Ashley also failed to report the injury as required, OSHA said. The agency learned of it from a family member of the victim, an OSHA spokeswoman said.

Another Ashley employee was similarly injured in January on the same type of machine, OSHA said. The company also failed to report that injury as required, the agency spokeswoman said.

The latest citations carry proposed fines totaling $83,200. The bulk of that stems from two alleged violations that OSHA deems “willful,” meaning that they were committed “with intentional, knowing or voluntary disregard for the law’s requirement, or with plain indifference to employee safety and health.”

“Workers at Ashley Furniture cannot count on their company to do what’s right when it comes to safety,” Mark Hysell, OSHA’s area director in Eau Claire, said in a statement. “These workers are at risk because this company is intentionally and willfully disregarding OSHA standards and requirements.”

In February, OSHA accused Ashley of 38 safety violations and said the firm was emphasizing profit over worker safety.

This is what a Walker presidency would look like on a national scale. Gutted workplace safety laws and a whole mess of corporate giveaways in exchange for campaign money. That’s Republican economic ideology in a nutshell.

One of the most compelling points Rick Perlstein makes in his excellent The Invisible Bridge is that Ronald Reagan was consistently and radically underestimated as a potential political force by the national media, public intellectuals, DC insiders, etc., until practically up to the moment he was on the edge of winning the GOP nomination in 1976.

This makes me at least begin to wonder if something similar might not be happening with Donald Trump. Now obviously there are enormous differences between the backgrounds, the careers, and the personalities of the two men, but there are also some striking similarities:

(1) Both mastered the art of manipulating their contemporary media environments.

(2) Both manifested a fine understanding of how to make outrageous statements in a way that ingratiated them with their political bases, precisely because the national media reaction to those statements allowed them to pose as victims of supposed media and/or elite bias.

(3) Both spent a good part of their lives as at least putatively wishy-washy Democrats, before discovering that selling racial demagoguery to the contemporary Republican party base was about as hard as selling beer at a baseball game on a 90-degree day.

(4) Both spent most of their careers being dismissed as clownish lightweights.

In a GOP presidential field that isn’t exactly stacked with political talent, the notion that Trump can’t win the nomination is at least premature. As is the idea that he can’t be elected president.

However things play out from here — I find it hard to see a path other than Grexit — the troika’s program for Greece represents one of history’s epic policy failures. Even if you ignore the economic and human toll, it was an utter failure in terms of restoring solvency. In 2009, before the program, Greek debt was 126 percent of GDP. After five years, debt was … 177 percent of GDP.

How did that happen? Did the Greeks continue massive borrowing? As the chart shows, the answer is a definite no. Greek debt at the end of 2014 was only 6 percent higher than it was at the end of 2009. Admittedly, that number reflects a significant haircut on private debt along the way, but it was still nothing like the continued borrowing binge some imagine.

What happened instead was, of course, the collapse of GDP — itself largely the result of the austerity program.
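The arithmetic here can be made explicit. If debt rose only about 6 percent while the ratio climbed from 126 to 177 percent of GDP, the implied collapse in GDP follows directly. A minimal back-of-the-envelope sketch, using only the figures quoted above:

```python
# Back-of-the-envelope check of the Greek debt figures quoted above.
ratio_2009 = 1.26   # debt/GDP in 2009
ratio_2014 = 1.77   # debt/GDP at the end of 2014
debt_growth = 1.06  # nominal debt in 2014 relative to 2009 (~6% higher)

# Since ratio = debt / GDP, the implied change in GDP is:
#   GDP_2014 / GDP_2009 = debt_growth * ratio_2009 / ratio_2014
gdp_ratio = debt_growth * ratio_2009 / ratio_2014
print(f"Implied 2014 GDP as a share of 2009 GDP: {gdp_ratio:.2f}")
```

That works out to roughly 0.75, i.e. a contraction of about a quarter, which is in fact the order of magnitude of the Greek slump.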

What this suggests is that the troika program was simply infeasible, and would have been infeasible no matter how willing the Greeks had been to make sacrifices. The more they cut, the worse things got, because of Fisherian debt deflation.

I suppose you can argue that structural reforms might have delivered a boost in competitiveness, but the truth is that there’s very little evidence supporting the conventional faith in such reforms.

Some of my more conventional contacts like to insist that Greek austerity was unavoidable, and it’s true that one way or another Greece was going to have to achieve a primary surplus. If currency devaluation had been an option, this would have required much less austerity, because of the boost from easier monetary policy; but within the euro a lot of austerity was indeed something that had to happen. But the key point is that the austerity ended up being not just incredibly painful but completely futile, because it wasn’t accompanied by massive debt relief.

Is this kind of futility always the case? Not necessarily; if you try to do the arithmetic here, it becomes clear that a lot depends on the initial level of debt. If Greece had received major debt forgiveness, it would still have gone through hell, but with at least some hint of an eventual exit. Instead it was pushed into a cycle of ever-worse pain without hope.
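The dependence on the initial debt level can be sketched with the standard debt-dynamics formula (my illustration, not the author's own calculation): the primary surplus needed just to hold the debt ratio constant is roughly (r - g) / (1 + g) times the debt ratio, where r is the interest rate and g is nominal GDP growth. The values of r and g below are assumptions chosen only for illustration:

```python
# Illustrative debt-dynamics sketch (not the author's own calculation).
# Primary surplus (share of GDP) needed to keep debt/GDP constant:
#   s = (r - g) / (1 + g) * d
# The default r and g are assumed values, chosen only for illustration.
def stabilizing_surplus(debt_ratio, r=0.04, g=0.01):
    """Primary surplus needed to hold the debt ratio steady."""
    return (r - g) / (1 + g) * debt_ratio

for d in (0.60, 1.26, 1.77):
    print(f"debt at {d:.0%} of GDP -> surplus needed: {stabilizing_surplus(d):.1%}")
```

The higher the starting debt, the larger the perpetual surplus required; and if austerity itself depresses g, the required surplus grows further, which is the vicious cycle described above.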

OK, this is real: Greek banks closed, capital controls imposed. Grexit isn’t a hard stretch from here — the much feared mother of all bank runs has already happened, which means that the cost-benefit analysis starting from here is much more favorable to euro exit than it ever was before.

Clearly, though, some decisions now have to wait on the referendum.

I would vote no, for two reasons. First, much as the prospect of euro exit frightens everyone — me included — the troika is now effectively demanding that the policy regime of the past five years be continued indefinitely. Where is the hope in that? Maybe, just maybe, the willingness to leave will inspire a rethink, although probably not. But even so, devaluation couldn’t create that much more chaos than already exists, and would pave the way for eventual recovery, just as it has in many other times and places. Greece is not that different.

Second, the political implications of a yes vote would be deeply troubling. The troika clearly did a reverse Corleone — they made Tsipras an offer he can’t accept, and presumably did this knowingly. So the ultimatum was, in effect, a move to replace the Greek government. And even if you don’t like Syriza, that has to be disturbing for anyone who believes in European ideals.

A strange logistical note: I’m on semi-vacation this week, doing a bicycle trip in an undisclosed location. It’s only a semi-vacation because I didn’t negotiate any days off the column; I’ll be in tomorrow’s paper (hmm, I wonder what the subject is) and have worked the logistics so as to make Friday’s column doable too. I was planning to do little if any blogging, and will in any case do less than I might have otherwise given the events.

First they ignore you. Then they laugh at you. Then they attack you. Then you win.

I remember one of the first TV debates I had on the then-strange question of civil marriage for gay couples. It was Crossfire, as I recall, and Gary Bauer’s response to my rather earnest argument after my TNR cover-story on the matter was laughter. “This is the loopiest idea ever to come down the pike,” he joked. “Why are we even discussing it?”

Those were isolating days. A young fellow named Evan Wolfson who had written a dissertation on the subject in 1983 got in touch, and the world immediately felt less lonely. Then a breakthrough in Hawaii, where the state supreme court ruled for marriage equality on gender equality grounds. No gay group had agreed to support the case, which was regarded at best as hopeless and at worst as a recipe for a massive backlash. A local straight attorney from the ACLU, Dan Foley, took it up instead, one of many straight men and women who helped make this happen. And when we won, and got our first fact on the ground, we indeed faced exactly that backlash and all the major gay rights groups refused to spend a dime on protecting the breakthrough … and we lost.

In fact, we lost and lost and lost again. Much of the gay left was deeply suspicious of this conservative-sounding reform; two-thirds of the country were opposed; the religious right saw in the issue a unique opportunity for political leverage – and over time, they put state constitutional amendments against marriage equality on the ballot in countless states, and won every time. Our allies deserted us. The Clintons embraced the Defense of Marriage Act, and their Justice Department declared that DOMA was in no way unconstitutional the morning some of us were testifying against it on Capitol Hill. For his part, President George W. Bush subsequently went even further and embraced the Federal Marriage Amendment to permanently ensure second-class citizenship for gay people in America. Those were dark, dark days.

I recall all this now simply to rebut the entire line of being “on the right side of history.” History does not have such straight lines. Movements do not move relentlessly forward; progress comes and, just as swiftly, goes. For many years, it felt like one step forward, two steps back. History is a miasma of contingency, and courage, and conviction, and chance.

But some things you know deep in your heart: that all human beings are made in the image of God; that their loves and lives are equally precious; that the pursuit of happiness promised in the Declaration of Independence has no meaning if it does not include the right to marry the person you love; and has no force if it denies that fundamental human freedom to a portion of its citizens. In the words of Hannah Arendt:

“The right to marry whoever one wishes is an elementary human right compared to which ‘the right to attend an integrated school, the right to sit where one pleases on a bus, the right to go into any hotel or recreation area or place of amusement, regardless of one’s skin or color or race’ are minor indeed. Even political rights, like the right to vote, and nearly all other rights enumerated in the Constitution, are secondary to the inalienable human rights to ‘life, liberty and the pursuit of happiness’ proclaimed in the Declaration of Independence; and to this category the right to home and marriage unquestionably belongs.”

This core truth is what Justice Kennedy affirmed today, for the majority: that gay people are human. I wrote the following in 1996:

Homosexuality, at its core, is about the emotional connection between two adult human beings. And what public institution is more central—more definitive—of that connection than marriage? The denial of marriage to gay people is therefore not a minor issue. It is the entire issue. It is the most profound statement our society can make that homosexual love is simply not as good as heterosexual love; that gay lives and commitments and hopes are simply worth less. It cuts gay people off not merely from civic respect, but from the rituals and history of their own families and friends. It erases them not merely as citizens, but as human beings.

We are not disordered or sick or defective or evil – at least no more than our fellow humans in this vale of tears. We are born into family; we love; we marry; we take care of our children; we die. No civil institution is related to these deep human experiences more than civil marriage and the exclusion of gay people from this institution was a statement of our core inferiority not just as citizens but as human beings. It took courage to embrace this fact the way the Supreme Court did today. In that 1996 essay, I analogized to the slow end to the state bans on inter-racial marriage:

The process of integration—like today’s process of “coming out”—introduced the minority to the majority, and humanized them. Slowly, white people came to look at interracial couples and see love rather than sex, stability rather than breakdown. And black people came to see interracial couples not as a threat to their identity, but as a symbol of their humanity behind the falsifying carapace of race.

It could happen again. But it is not inevitable; and it won’t happen by itself. And, maybe sooner rather than later, the people who insist upon the centrality of gay marriage to every American’s equality will come to seem less marginal, or troublemaking, or “cultural,” or bent on ghettoizing themselves. They will seem merely like people who have been allowed to see the possibility of a larger human dignity and who cannot wait to achieve it.

I think of the gay kids in the future who, when they figure out they are different, will never know the deep psychic wound my generation – and every one before mine – lived through: the pain of knowing they could never be fully part of their own family, never be fully a citizen of their own country. I think, more acutely, of the decades and centuries of human shame and darkness and waste and terror that defined gay people’s lives for so long. And I think of all those who supported this movement who never lived to see this day, who died in the ashes from which this phoenix of a movement emerged. This momentous achievement is their victory too – for marriage, as Kennedy argued, endures past death.

I never believed this would happen in my lifetime when I wrote my first several TNR essays and then my book, Virtually Normal, and then the anthology and the hundreds and hundreds of talks and lectures and talk-shows and call-ins and blog-posts and articles in the 1990s and 2000s. I thought the book, at least, would be something I would have to leave behind me – secure in the knowledge that its arguments were, in fact, logically irrefutable, and would endure past my own death, at least somewhere. I never for a millisecond thought I would live to be married myself. Or that it would be possible for everyone, everyone in America.

For six months now the Greek government has been waging a battle in conditions of unprecedented economic suffocation to implement the mandate you gave us on January 25.

The mandate we were negotiating with our partners was to end the austerity and to allow prosperity and social justice to return to our country.

It was a mandate for a sustainable agreement that would respect both democracy and common European rules and lead to the final exit from the crisis.

Throughout this period of negotiations, we were asked to implement the agreements concluded by the previous governments with the Memoranda, although they were categorically condemned by the Greek people in the recent elections.

However, not for a moment did we think of surrendering, that is, of betraying your trust.

After five months of hard bargaining, our partners, unfortunately, issued at the Eurogroup the day before yesterday an ultimatum to Greek democracy and to the Greek people. An ultimatum that is contrary to the founding principles and values of Europe, the values of our common European project.

They asked the Greek government to accept a proposal that piles a new, unsustainable burden on the Greek people and undermines the recovery of the Greek economy and society, a proposal that not only perpetuates the state of uncertainty but accentuates social inequalities even more.

The institutions’ proposal includes measures leading to further deregulation of the labor market, pension cuts, further reductions in public sector wages and an increase in VAT on food, dining and tourism, while eliminating tax breaks for the Greek islands.

These proposals directly violate European social and fundamental rights: they show that, concerning work, equality and dignity, the aim of some of the partners and institutions is not a viable and beneficial agreement for all parties but the humiliation of the entire Greek people.

These proposals mainly highlight the IMF’s insistence on harsh and punitive austerity, and make more timely than ever the need for the leading European powers to seize the opportunity and take initiatives that will finally bring the Greek sovereign debt crisis to a definitive end, a crisis affecting other European countries and threatening the very future of European integration.

Fellow Greeks, the historic responsibility towards the struggles and sacrifices of the Greek people for the consolidation of democracy and national sovereignty now weighs on our shoulders. Our responsibility for the future of our country.

And this responsibility requires us to answer the ultimatum on the basis of the sovereign will of the Greek people.

A short while ago at the cabinet meeting I suggested the organization of a referendum, so that the Greek people are able to decide in a sovereign way. The suggestion was unanimously accepted.

Tomorrow the House of Representatives will be urgently convened to ratify the cabinet’s proposal for a referendum next Sunday, July 5, on the question of accepting or rejecting the institutions’ proposal.

I have already informed the president of France, the chancellor of Germany, and the president of the ECB of my decision, and tomorrow my letter will formally ask the EU leaders and institutions to extend the current program for a few days, so that the Greek people can decide free from any pressure and blackmail, as required by the constitution of our country and the democratic tradition of Europe.

Fellow Greeks, to the blackmail of this ultimatum, which asks us to accept severe and degrading austerity without end and without any prospect of social and economic recovery, I ask you to respond in a sovereign and proud way, as the history of the Greek people commands.

To authoritarianism and harsh austerity, we will respond with democracy, calmly and decisively.

Greece, the birthplace of democracy, will send a resounding democratic response to Europe and the world.

I am personally committed to respecting the outcome of your democratic choice, whatever it is. And I am absolutely confident that your choice will honor the history of our country and send a message of dignity to the world.

In these critical moments, we all have to remember that Europe is the common home of peoples. That in Europe there are no owners and guests. Greece is and will remain an integral part of Europe and Europe is an integral part of Greece. But without democracy, Europe will be a Europe without identity and without a compass.

I invite you all to display national unity and calm in order to take the right decisions. For us, for future generations, for the history of the Greeks. For the sovereignty and dignity of our people.

OK, I didn’t see that coming: even though I have come out as a lukewarm opponent of TPP, I assumed that it would happen anyway — the way trade deals (or in this case, dispute settlement and intellectual property deals that pretend to be about trade) always do. But no, or not so far.

A brief aside: I don’t think it’s right to call this a case of Washington “dysfunction”. Dysfunction is when we get outcomes nobody wants, or fail to do things everyone wants done, because there doesn’t seem to be any way to package the politics. In this case, however, people who oppose TPP voted down key enabling measures — that is, they got what they wanted. Calling this “dysfunction” presumes that this deal is a good idea — and that kind of presumption is precisely what got successfully challenged yesterday.

Or to put it another way, one way to see this is as the last stand of the Davos Democrats.

If you talk to administration officials — or at least if I talk to them (they may be telling me what they think I want to hear) — they offer a fairly sophisticated defense of this deal. It’s about geopolitics, they say — America has to be in the game here lest others (obviously including China) supplant our influence; meanwhile, they argue that the troubling aspects of the deal aren’t as troubling as they sound (they make a decent case on dispute settlement, less so on intellectual property). And they argue that the deal would actually improve labor protections in poor countries.

I’m not fully convinced, but this is a reasonable discussion.

But the overall selling of TPP, to some extent by the administration and much more so by its business allies, has been nothing like this. Instead, it has been all lectures from Those Who Know How the Global Economy Works — the kind of people who go to Davos and participate in earnest panels on the skills gap and the case for putting Alan Simpson in charge of everything — to the ignorant hippies who don’t. You know, ignorant hippies like Joseph Stiglitz and Elizabeth Warren.

This kind of thing worked in the 1990s, when Davos Man actually did seem to know how the world works. But now Davos Democrats are known as the people who told us to trust unregulated finance and fear invisible bond vigilantes. They just don’t have the credibility to pull off arguments from authority any more. And it doesn’t say much for their perspicacity that they apparently had no idea that the world has changed.

TPP’s Democratic supporters thought they could dictate to their party like it’s 1999. They can’t.

David Runciman wrote a brief essay in the LRB (http://www.lrb.co.uk/v37/n10/david-runciman/notes-on-the-election) about the results of the British election. I want to focus on one peculiar passage. Runciman observes:

The two countries that have seen the greatest rise in inequality over the past couple of decades are Britain and the United States. Both have a first-past-the-post system designed to offer a clear choice between two main parties. Yet whichever of the two parties wins, the drift towards inequality has been inexorable.

This is, well, nuts or maybe just inexplicable coming from a political thinker of Runciman’s reputation. It tells us nothing about why inequality has accelerated or what might be done to mitigate it. Runciman conflates the British and US political systems because they both have “first past the post” voting—but he somehow neglects to then distinguish them because the US has a presidential model with separation of powers across three branches of government and a widely dispersed federalism, and the UK has a parliamentary model. Which means, of course, as nearly every knowledgeable political writer has been screaming during this time of divided US government, that the US system does not at all offer a “clear choice between two main parties.” In fact, as Juan Linz famously pointed out, in a presidential system two major parties or coalitions can both claim legitimacy by controlling a respective branch of government. (And thus the US can have, simultaneously, two warring “Prime Ministers”, eg, President Obama and Speaker Boehner.)

The American system offers a decidedly murky choice: because the congressional party (whose election is spread over three cycles) does not merely oppose, but also obstructs, the presidential party, the US way of democracy provides the electorate with no logical party accountability—presidential “failures” can be caused by minority legislative parties because the presidential party only appears to voters—and to Runciman, apparently—to be the governing party, but is not. The US system is really enormously different from the UK system. If Runciman had wished to argue that the Congress, whether controlled by Republicans or Democrats, has, in recent decades, abdicated the making and execution of foreign policy to the president, he’d have a point. But he writes as if clear party control of the levers of American politics was built into the system.

And there’s no need for the whole history lesson here, but that’s exactly how it wasn’t designed in the first place. It was, in fact, designed by people who did not anticipate the development of coherent political parties at all and, in fact, loathed the very idea (even if many of them then proceeded to become rather shrewd party politicians in the next phase of their careers). The whole point, as imagined by men who, with certain important exceptions, were very much determined not to replicate the powers of a monarchy in their fledgling nation, was to create conditions that would force elites to compromise and to limit the power of the propertyless (let alone the slaves) to even enter into the discussion. Compromise between powerful interests, not the clarity of unitary authority, was supposed to occur not only between the branches of government, but also between the national government and those of the states (and between the North and the slaveholding sub-nation of the South). There is absolutely nothing structurally about the American system of government, either in its inception or in its current dissipated condition, that offers voters a “clear choice” regarding domestic politics. (Even the rare historical circumstances that have seemingly given one party or the other effective control, eg, FDR’s already balkanized Democrats for, at most, four years in the mid-1930s, in fact allowed a cross-party coalition of reactionaries to make the New Deal for “whites only.” See http://www.amazon.com/When-Affirmative-Action-White-Twentieth-Century/dp/0393328511.)

Later in the essay, Runciman expresses shock that the purportedly smooth running American political structure has crashed into a ditch like the regional trains that its warring parties of equal legitimacy refuse to fund. He writes contemptuously, comparing the squalid Brits with the squalid Yanks, “It is blackmail and veto power, with small groups clamouring to get what they want from the people in charge. This is the current model of American politics, which for all its premium on clarity and executive power is also extremely messy, with all sorts of minor players holding the big boys to ransom.”

But writers and scholars like Norm Ornstein, Thomas Mann, Jacob Hacker and Paul Pierson (and many many others) have written copiously about how and why divided government does not engender clarity in the current iteration of the American presidential system. Runciman seems wholly unaware of this literature.

Sorry to be so sour, but has Runciman ever read The Federalist? Or Madison, in particular? Or just a good history of the ratification of the American constitution (http://www.amazon.com/Plain-Honest-Men-American-Constitution/dp/0812976843)? Framing his essay with this spurious comparison made it impossible for me to take the rest of his argument seriously.

The new edition of Jacobin, focusing on technology and politics, is out now. Four-issue subscriptions start at only $19.

Modern, fast, processed food is a disaster. That, at least, is the message conveyed by newspapers and magazines, on television cooking programs, and in prizewinning cookbooks.

It is a mark of sophistication to bemoan the steel roller mill and supermarket bread while yearning for stone-ground flour and brick ovens; to seek out heirloom apples and pumpkins while despising modern tomatoes and hybrid corn; to be hostile to agronomists who develop high-yielding modern crops and to home economists who invent new recipes for General Mills.

We hover between ridicule and shame when we remember how our mothers and grandmothers enthusiastically embraced canned and frozen foods. We nod in agreement when the waiter proclaims that the restaurant showcases the freshest local produce. We shun Wonder Bread and Coca-Cola. Above all, we loathe the great culminating symbol of Culinary Modernism, McDonald’s — modern, fast, homogenous, and international.

Like so many of my generation, my culinary style was created by those who scorned industrialized food; Culinary Luddites, we may call them, after the English hand workers of the nineteenth century who abhorred the machines that were destroying their traditional way of life. I learned to cook from the books of Elizabeth David, who urged us to sweep our store cupboards “clean for ever of the cluttering debris of commercial sauce bottles and all synthetic flavorings.”

I progressed to the Time-Life Good Cook series and to Simple French Cooking, in which Richard Olney hoped against hope that “the reins of stubborn habit are strong enough to frustrate the famous industrial revolution for some time to come.” I turned to Paula Wolfert to learn more about Mediterranean cooking and was assured that I wouldn’t “find a dishonest dish in this book . . . The food here is real food . . . the real food of real people.” Today I rush to the newsstand to pick up Saveur with its promise to teach me to “Savor a world of authentic cuisine.”

Culinary Luddism involves more than just taste. Since the days of the counterculture, it has also presented itself as a moral and political crusade. Now in Boston, the Oldways Preservation and Exchange Trust works to provide “a scientific basis for the preservation and revitalization of traditional diets.”

Meanwhile Slow Food, founded in 1989 to protest the opening of a McDonald’s in Rome, is a self-described Greenpeace for Food; its manifesto begins, “We are enslaved by speed and have all succumbed to the same insidious virus: Fast Life, which disrupts our habits, pervades the privacy of our homes and forces us to eat Fast Foods . . . Slow Food is now the only truly progressive answer.” As one of its spokesmen was reported as saying in the New York Times, “Our real enemy is the obtuse consumer.”

At this point I begin to back off. I want to cry, “Enough!” But why? Why would I, who learned to cook from Culinary Luddites, who grew up in a family that, in Elizabeth David’s words, produced their “own home-cured bacon, ham and sausages . . . churned their own butter, fed their chickens and geese, cherished their fruit trees, skinned and cleaned their own hares” (well, to be honest, not the geese and sausages), not rejoice at the growth of Culinary Luddism? Why would I (or anyone else) want to be thought “an obtuse consumer”? Or admit to preferring unreal food for unreal people? Or to savoring inauthentic cuisine?

The answer is not far to seek: because I am an historian.

As an historian I cannot accept the account of the past implied by Culinary Luddism, a past sharply divided between good and bad, between the sunny rural days of yore and the gray industrial present. My enthusiasm for Luddite kitchen wisdom does not carry over to their history, any more than my response to a stirring political speech inclines me to accept the orator as scholar.

The Luddites’ fable of disaster, of a fall from grace, smacks more of wishful thinking than of digging through archives. It gains credence not from scholarship but from evocative dichotomies: fresh and natural versus processed and preserved; local versus global; slow versus fast; artisanal and traditional versus urban and industrial; healthful versus contaminated and fatty. History shows, I believe, that the Luddites have things back to front.

That food should be fresh and natural has become an article of faith. It comes as something of a shock to realize that this is a latter-day creed. For our ancestors, natural was something quite nasty. Natural often tasted bad.

Fresh meat was rank and tough; fresh milk warm and unmistakably a bodily excretion; fresh fruits (dates and grapes being rare exceptions outside the tropics) were inedibly sour, fresh vegetables bitter. Even today, natural can be a shock when we actually encounter it. When Jacques Pepin offered free-range chickens to friends, they found “the flesh tough and the flavor too strong,” prompting him to wonder whether they would really like things the way they naturally used to be. Natural was unreliable. Fresh fish began to stink. Fresh milk soured, eggs went rotten.

Everywhere seasons of plenty were followed by seasons of hunger when the days were short, the weather turned cold, or the rain did not fall. Hens stopped laying eggs, cows went dry, fruits and vegetables were not to be found, fish could not be caught in the stormy seas.

Natural was usually indigestible. Grains, which supplied from fifty to ninety percent of the calories in most societies, have to be threshed, ground, and cooked to make them edible. Other plants, including the roots and fibers that were the life support of the societies that did not eat grains, are often downright poisonous. Without careful processing green potatoes, stinging taro, and cassava bitter with prussic acid are not just indigestible, but toxic.

Nor did our ancestors’ physiological theories dispose them to the natural. Until about two hundred years ago, from China to Europe, and in Mesoamerica, too, everyone believed that the fires in the belly cooked foodstuffs and turned them into nutrients. That was what digestion was. Cooking foods in effect pre-digested them and made them easier to assimilate. Given a choice, no one would burden the stomach with raw, unprocessed foods.

So to make food tasty, safe, digestible and healthy, our forebears bred, ground, soaked, leached, curdled, fermented, and cooked naturally occurring plants and animals until they were literally beaten into submission.

To lower toxin levels, they cooked plants, treated them with clay (the Kaopectate effect), leached them with water, acid fruits and vinegars, and alkaline lye. They intensively bred maize to the point that it could not reproduce without human help. They created sweet oranges and juicy apples and non-bitter legumes, happily abandoning their more natural but less tasty ancestors.

They built granaries for their grain, dried their meat and their fruit, salted and smoked their fish, curdled and fermented their dairy products, and cheerfully used whatever additives and preservatives they could — sugar, salt, oil, vinegar, lye — to make edible foodstuffs.

In the twelfth century, the Chinese sage Wu Tzu-mu listed the six foodstuffs essential to life: rice, salt, vinegar, soy sauce, oil, and tea. Four had been unrecognizably transformed from their naturally occurring state.

Who could have imagined vinegar as rice that had been fermented to ale and then soured? Or soy sauce as cooked and fermented beans? Or oil as the extract of crushed cabbage seeds? Or bricks of tea as leaves that had been killed by heat, powdered, and compressed? Only salt and rice had any claim to fresh or natural, and even then the latter had been stored for months or years, threshed, and husked.

Eating fresh, natural food was regarded with suspicion verging on horror, something to which only the uncivilized, the poor, and the starving resorted. When the compiler of the Confucian classic, the Book of Rites (ca. 200 BC), distinguished the first humans — people who had no alternative to wild, uncooked foods — from civilized peoples who took “advantage of the benefits of fire . . . [who] toasted, grilled, boiled, and roasted,” he was only repeating a commonplace.

When the ancient Greeks took it as a sign of bad times if people were driven to eat greens and root vegetables, they too were rehearsing common wisdom. Happiness was not a verdant Garden of Eden abounding in fresh fruits, but a securely locked storehouse jammed with preserved, processed foods.

Local food was greeted with about as much enthusiasm as fresh and natural. Local foods were the lot of the poor, who could escape neither the tyranny of local climate and biology nor the monotonous, often precarious, diet it afforded. Meanwhile, the rich, in search of a more varied diet, bought, stole, wheedled, robbed, taxed, and ran off with appealing plants and animals, foodstuffs, and culinary techniques from wherever they could find them.

By the fifth century BC, Celtic princes in the region of France now known as Burgundy were enjoying a glass or two of Greek wine, drunk from silver copies of Greek drinking vessels. The Greeks themselves looked to the Persians, acclimatizing their peaches and apricots and citrons and emulating their rich sauces, while the Romans in turn hired Greek cooks. From around the time of the birth of Christ, the wealthy in China, India, and the Roman Empire paid vast sums for spices brought from the distant and mysterious Spice Islands.

From the seventh century AD, Islamic caliphs and sultans transplanted sugar, rice, citrus, and a host of other Indian and Southeast Asian plants to Persia and the Mediterranean, transforming the diets of West Asia and the shores of the Mediterranean. In the thirteenth century, the Japanese had naturalized the tea plant of China and were importing sugar from Southeast Asia.

In the seventeenth century, the European rich drank sweetened coffee, tea, and cocoa in Chinese porcelain, imported or imitation, proffered by servants in Turkish or other foreign dress. To ensure their own supply, the French, Dutch, and English embarked on imperial ventures and moved millions of Africans and Asians around the globe. The Swedes, who had no empire, had a hard time getting these exotic foodstuffs, so the eighteenth-century botanist Linnaeus set afoot plans to naturalize the tea plant in Sweden.

We may laugh at the climatic hopelessness of his proposal. Yet it was no more ridiculous than other, more successful, proposals to naturalize Southeast Asian sugarcane throughout the tropics, apples in Australia, grapes in Chile, Hereford cattle in Colorado and Argentina, and Caucasian wheat on the Canadian prairie. Without our aggressively global ancestors, we would all still be subject to the tyranny of the local.

As for slow food, it is easy to wax nostalgic about a time when families and friends met to relax over delicious food, and to forget that, far from being an invention of the late twentieth century, fast food has been a mainstay of every society.

Hunters tracking their prey, fishermen at sea, shepherds tending their flocks, soldiers on campaign, and farmers rushing to get in the harvest all needed food that could be eaten quickly and away from home. The Greeks roasted barley and ground it into a meal to eat straight or mixed with water, milk, or butter (as the Tibetans still do), while the Aztecs ground roasted maize and mixed it with water to make an instant beverage (as the Mexicans still do).

City dwellers, above all, relied on fast food. When fuel cost as much as the food itself, when huddled dwellings lacked cooking facilities, and when cooking fires might easily set entire neighborhoods ablaze, it made sense to purchase your bread or noodles, and a little meat or fish to liven them up.

Before the birth of Christ, Romans were picking up honey cakes and sausages in the Forum. In twelfth-century Hangchow, the Chinese downed noodles, stuffed buns, bowls of soup, and deep-fried confections. In Baghdad of the same period, the townspeople bought ready-cooked meats, salt fish, bread, and a broth of dried chick peas. In the sixteenth century, when the Spanish arrived in Mexico, Mexicans had been enjoying tacos from the market for generations. In the eighteenth century, the French purchased cocoa, apple turnovers, and wine in the boulevards of Paris, while the Japanese savored tea, noodles, and stewed fish.

Deep-fried foods, expensive and dangerous to prepare at home, have always had their place on the street: doughnuts in Europe, churros in Mexico, andagi in Okinawa, and sev in India. Bread, also expensive to bake at home, is one of the oldest convenience foods. For many people in West Asia and Europe, a loaf fresh from the baker was the only warm food of the day.

To these venerable traditions of fast food, Americans have simply added the electric deep fryer, the heavy iron griddle of the Low Countries, and the franchise. The McDonald’s in Rome was, in fact, just one more in a long tradition of fast food joints reaching back to the days of the Caesars.

What about the idea that the best food was country food, handmade by artisans? That food came from the country goes without saying. The presumed corollary — that country people ate better than city dwellers — does not.

Few who worked the land were independent peasants baking their own bread, brewing their own wine or beer, and salting down their own pig. Most were burdened with heavy taxes and rents paid in kind (that is, food); or worse, they were indentured, serfs, or slaves.

Barely part of the cash economy, they subsisted on what was left over. “The city dwellers,” remarked the great Roman doctor Galen in the second century AD, “collected and stored enough grain for all the coming year immediately after the harvest. They carried off all the wheat, the barley, the beans and the lentils and left what remained to the countryfolk.”

What remained was pitiful. North of the Alps, those who worked the land all too often got by on thin gruels and gritty flatbreads; French peasants prayed that chestnuts would be sufficient to sustain them from the time their grain ran out to the harvest still three months away. South of the Alps, Italian peasants suffered skin eruptions, went mad, and in the worst cases died of pellagra brought on by a diet of maize polenta and water.

The dishes we call ethnic and assume to be of peasant origin were invented for the urban, or at least urbane, aristocrats who collected the surplus. This is as true of the lasagne of northern Italy as it is of the chicken korma of Mughal Delhi, the mooshu pork of imperial China, the pilafs, stuffed vegetables, and baklava of the great Ottoman palace in Istanbul, or the mee krob of nineteenth-century Bangkok. Cities have always enjoyed the best food and have invariably been the focal points of culinary innovation.

Nor are most “traditional foods” very old. For every prized dish that goes back two thousand years, a dozen have been invented in the last two hundred. The French baguette? A twentieth-century phenomenon, adopted nationwide only after World War II. English fish and chips? Dates from the late nineteenth century, when the working class took up the fried fish of Sephardic Jewish immigrants in East London. Fish and chips, though, will soon be a thing of the past.

It’s a Balti and lager now, Balti being a kind of stir-fried curry dreamed up by Pakistanis living in Birmingham. Greek moussaka? Created in the early twentieth century in an attempt to Frenchify Greek food. The bubbling Russian samovar? Late eighteenth century. The Indonesian rijsttafel? Dutch colonial food. Indonesian padang food? Invented for the tourist market in the past fifty years.

Tequila? Promoted as the national drink of Mexico during the 1930s by the Mexican film industry. Indian tandoori chicken? The brainchild of Hindu Punjabis who survived by selling chicken cooked in a Muslim-style tandoor oven when they fled Pakistan for Delhi during the Partition of India. The soy sauce, steamed white rice, sushi, and tempura of Japan? Commonly eaten only after the middle of the nineteenth century.

The lomilomi salmon, salted salmon rubbed with chopped tomatoes and spring onions that is a fixture in every Hawaiian luau? Not a salmon is to be found within two thousand miles of the islands, and onions and tomatoes were unknown in Hawaii until the nineteenth century. These are indisputable facts of history, though if you point them out you will be met with stares of disbelief.

Not only were many “traditional” foods created after industrialization and urbanization, but a lot of them depended on those very processes. The Swedish smorgasbord came into its own at the beginning of the twentieth century when canned out-of-season fish, roe, and liver paste made it possible to set out a lavish table. Hungarian goulash was unknown before the nineteenth century, and not widely accepted until after the invention of a paprika-grinding mill in 1859.

When lands were conquered, peoples migrated, populations converted to different religions or accepted new dietary theories, and dishes — even whole cuisines — were forgotten and new ones invented. Where now is the cuisine of Renaissance Spain and Italy, or of the Indian Raj, or of Tsarist Russia, or of medieval Japan? Instead we have Nonya food in Singapore, Cape Malay food in South Africa, Creole food in the Mississippi Delta, and Local Food in Hawaii. How long does it take to create a cuisine? Not long: less than fifty years, judging by past experience.

Were old foods more healthful than ours? Inherent in this vague notion are several different claims, among them that foods were less dangerous and that diets were better balanced.

Yet while we fret about pesticides on apples, mercury in tuna, and mad cow disease, we should remember that ingesting food is, and always has been, inherently dangerous. Many plants contain both toxins and carcinogens, often at levels much higher than any pesticide residues. Grilling and frying add more.

Some historians argue that bread made from moldy, verminous flour, or adulterated with mash, leaves, or bark to make it go further, or contaminated with hemp or poppy seeds to drown out sorrows, meant that for five hundred years Europe’s poor staggered around in a drugged haze subject to hallucinations.

Certainly, many of our forebears were drunk much of the time, given that beer or wine was preferred to water, and with good reason. In the cities, polluted water supplies brought intestinal diseases in their wake. In France, for example, no piped water was available until the 1860s.

Bread was likely to be stretched with chalk, pepper adulterated with the sweepings of warehouse floors, and sausage stuffed with all the horrors famously exposed by Upton Sinclair in The Jungle. Even the most reputable cookbooks recommended using concentrated sulphuric acid to intensify the color of jams.

Milk, suspected of spreading scarlet fever, typhoid, and diphtheria as well as tuberculosis, was sensibly avoided well into the twentieth century when the United States and many parts of Europe introduced stringent regulations. My mother sifted weevils from the flour bin; my aunt reckoned that if the maggots could eat her home-cured ham and survive, so could the family.

As to dietary balance, once again we have to distinguish between rich and poor. The rich, whose bountiful tables and ample girths were visible evidence of their station in life, suffered many of the diseases of excess.

In the seventeenth century, the Mughal Emperor, Jahangir, died of overindulgence in food, opium, and alcohol. In Georgian England, George Cheyne, the leading doctor, had to be wedged in and out of his carriage by his servants when he soared to four hundred pounds, while a little later Erasmus Darwin, grandfather of Charles and another important physician, had a semicircle cut out of his dining table to accommodate his paunch.

In the nineteenth century, the fourteenth shogun of Japan died at age twenty-one, probably of beriberi induced by eating the white rice available only to the privileged. In the Islamic countries, India, and Europe, the well-to-do took sugar as a medicine; in India they used butter; and in much of the world people avoided fresh vegetables, all on medical advice.

Whether the peasants really starved, and if so how often, particularly outside of Europe, is the subject of ongoing research. What is clear is that the food supply was always precarious: if the weather was bad or war broke out, there might not be enough to go around. The end of winter or the dry season saw everyone suffering from the lack of fresh fruits and vegetables, scurvy occurring on land as well as at sea.

By our standards, the diet was scanty for people who were engaged in heavy physical toil. Estimates suggest that in France on the eve of the Revolution one in three adult men got by on no more than 1,800 calories a day, while a century later in Japan daily intake was perhaps 1,850 calories. Historians believe that in times of scarcity peasants essentially hibernated during the winter. It is not surprising, therefore, that in France the proudest of boasts was “there is always bread in the house,” while the Japanese adage advised that “all that matters is a full stomach.”

By the standard measures of health and nutrition — life expectancy and height — our ancestors were far worse off than we are. Much of the blame lay with diet, exacerbated by living conditions and by infections, which affected the body’s ability to use the food that was ingested. No amount of nostalgia for the pastoral foods of the distant past can wish away the fact that our ancestors lived mean, short lives, constantly afflicted with diseases, many of which can be directly attributed to what they did and did not eat.

Historical myths, though, can mislead as much by what they don’t say as by what they do. Culinary Luddites typically gloss over the moral problems intrinsic to the labor of producing and preparing food. In 1800, 95 percent of the Russian population and 80 percent of the French lived in the country; in other words, they spent their days getting food on the table for themselves and other people.

A century later, 88 percent of Russians, 85 percent of Greeks, and over 50 percent of the French were still on the land. Traditional societies were aristocratic, made up of the many who toiled to produce, process, preserve, and prepare food, and the few who, supported by the limited surplus, could do other things.

In the great kitchens of the few — royalty, aristocracy, and rich merchants — cooks created elaborate cuisines. The cuisines drove home the power of the mighty few with a symbol that everyone understood: ostentatious shows of more food than the powerful could possibly consume. Feasts were public occasions for the display of power, not private occasions for celebration, for enjoying food for food’s sake. The poor were invited to watch, groveling as the rich gorged themselves.

Louis XIV was exploiting a tradition going back to the Roman Empire when he encouraged spectators at his feasts. Sometimes, to hammer home the point while amusing the court, the spectators were let loose on the leftovers. “The destruction of so handsome an arrangement served to give another agreeable entertainment to the court,” observed a commentator, “by the alacrity and disorder of those who demolished these castles of marzipan, and these mountains of preserved fruit.”

Meanwhile, most men were born to a life of labor in the fields, most women to a life of grinding, chopping, and cooking. “Servitude,” said my mother as she prepared home-cooked breakfast, dinner, and tea for eight to ten people three hundred and sixty-five days a year.

She was right. Churning butter and skinning and cleaning hares, without the option of picking up the phone for a pizza if something goes wrong, is unremitting, unforgiving toil. Perhaps, though, my mother did not realize how much worse her lot might have been.

She could at least buy our bread from the bakery. In Mexico, at the same time, women without servants could expect to spend five hours a day — one third of their waking hours — kneeling at the grindstone preparing the dough for the family’s tortillas. Not until the 1950s did the invention of the tortilla machine release them from the drudgery.

In the eighteenth and early nineteenth centuries, it looked as if the distinction between gorgers and grovelers would worsen. Between 1557 and 1825 world population had doubled from 500 million to a billion, and it was to double again by 1925.

Malthus sounded his dire predictions. The poor, driven by necessity or government mandate, resorted to basic foods that produced bountifully even if they were disliked: maize and sweet potatoes in China and Japan, maize in Italy, Spain and Romania, potatoes in northern Europe.

They eked out an existence on porridges or polentas of oats or maize, on coarse breads of rye or barley bulked out with chaff or even clay and ground bark, and on boiled potatoes; they saw meat only on rare occasions. The privation continued. In Europe, 1840 was a year of hunger, best remembered now as the time of the devastating potato famine of Ireland.

Meanwhile, the rich continued to indulge, feasting on white bread, meats, rich fatty sauces, sweet desserts, exotic hothouse-grown pineapples, wine, and tea, coffee, and chocolate drunk from fine china. In 1845, a few years before revolutions rocked Europe, the future British Prime Minister Benjamin Disraeli described “two nations, between whom there is no intercourse and no sympathy . . . who are formed by a different breeding, are fed by a different food, are ordered by different manners, and are not governed by the same laws . . . THE RICH AND THE POOR.”

In the nick of time, the industrialization of food got under way in the 1880s, long after the production of other common items of consumption, such as textiles and clothing, had been mechanized. Farmers brought new land into production, utilized reapers and later tractors and combines, spread more fertilizer, and by the 1930s began growing hybrid maize. Steamships and trains brought fresh and canned meats, fruits, vegetables, and milk to the growing towns. Instead of starving, the poor of the industrialized world survived and thrived.

In Britain the retail price of food in a typical workman’s budget fell by a third between 1877 and 1887 (though he would still spend seventy-one percent of his income on food and drink). In 1898 in the United States a dollar bought forty-two percent more milk, fifty-one percent more coffee, a third more beef, twice as much sugar, and twice as much flour as in 1872. By the beginning of the twentieth century, the British working class were drinking sugary tea from china teacups and eating white bread spread with jam and margarine, canned meats, canned pineapple, and an orange from the Christmas stocking.

To us, the cheap jam, the margarine, and the starchy diet look pathetic. Yet white bread did not cause the “weakness, indigestion, or nausea” that coarse whole wheat bread did when it supplied most of the calories (not a problem for us since we never consume it in such quantities). Besides, it was easier to detect stretchers such as sawdust in white bread. Margarine and jam made the bread more attractive and easier to swallow. Sugar tasted good, and hot tea in an unheated house in mid-winter provided good cheer.

For those for whom fruit had been available, if at all, only from June to October, canned pineapple and a Christmas orange were treats to be relished. For the diners, therefore, the meals were a dream come true, a first step away from a coarse, monotonous diet and the constant threat of hunger, even starvation.

Nor should we think it was only the British, not famed for their cuisine, who were delighted with industrialized foods. Everyone was, whether American, Asian, African, or European.

In the first half of the twentieth century, Italians embraced factory-made pasta and canned tomatoes. In the second half of the century, Japanese women welcomed factory-made bread because they could sleep in a little longer instead of having to get up to make rice. Similarly, Mexicans seized on bread as a good food to have on hand when there was no time to prepare tortillas.

Working women in India are happy to serve commercially made bread during the week, saving the time-consuming business of making chapatis for the weekend. As supermarkets appeared in Eastern Europe and Russia, housewives rejoiced at the choice and convenience of ready-made goods.

For all, Culinary Modernism had provided what was wanted: food that was processed, preservable, industrial, novel, and fast, the food of the elite at a price everyone could afford. Where modern food became available, populations grew taller, stronger, had fewer diseases, and lived longer. Men had choices other than hard agricultural labor, women other than kneeling at the metate five hours a day.

So the sunlit past of the Culinary Luddites never existed. So their ethos is based not on history but on a fairy tale. So what? Perhaps we now need this culinary philosophy. Certainly no one would deny that an industrialized food supply has its own problems, problems we hear about every day. Perhaps we should eat more fresh, natural, local, artisanal, slow food. Why not create a historical myth to further that end? The past is over and gone. Does it matter if the history is not quite right?

It matters quite a bit, I believe. If we do not understand that most people had no choice but to devote their lives to growing and cooking food, we are incapable of comprehending that the foods of Culinary Modernism — egalitarian, available more or less equally to all, without demanding the disproportionate amount of the resources of time or money that traditional foodstuffs did — allow unparalleled choices not just of diet but of what to do with our lives.

If we urge the Mexican to stay at her metate, the farmer to stay at his olive press, the housewife to stay at her stove instead of going to McDonald’s, all so that we may eat handmade tortillas, traditionally pressed olive oil, and home-cooked meals, we are assuming the mantle of the aristocrats of old. We are reducing the options of others as we attempt to impose our elite culinary preferences on the rest of the population.

If we fail to understand how scant and monotonous most traditional diets were, we can misunderstand the “ethnic foods” we encounter in cookbooks, restaurants, or on our travels. We let our eyes glide over the occasional references to servants, to travel and education abroad in so-called ethnic cookbooks, references that otherwise would clue us in to the fact that the recipes are those of monied Italians, Indians, or Chinese with maids to do the donkey work of preparing elaborate dishes.

We may mistake the meals of today’s European, Asian, or Mexican middle class (many of them benefiting from industrialization and contemporary tourism) for peasant food or for the daily fare of our ancestors. We can represent the peoples of the Mediterranean, Southeast Asia, India, or Mexico as pawns at the mercy of multinational corporations bent on selling trashy modern products — failing to appreciate that, like us, they enjoy a choice of goods in the market, foreign restaurants to eat at, and new recipes to try.

A Mexican friend, suffering from one too many foreign visitors who chided her because she offered Italian, not Mexican food, complained, “Why can’t we eat spaghetti, too?” If we unthinkingly assume that good food maps neatly onto old or slow or homemade food (even though we’ve all had lousy traditional cooking), we miss the fact that lots of industrial foodstuffs are better. Certainly no one with a grindstone will ever produce chocolate as suave as that produced by conching in a machine for seventy-two hours. Nor is the housewife likely to turn out fine soy sauce or miso.

And let us not forget that the current popularity of Italian food owes much to the availability and long shelf life of two convenience foods that even purists love, high-quality factory pasta and canned tomatoes. Far from fleeing them, we should be clamoring for more high-quality industrial foods.

If we romanticize the past, we may miss the fact that it is the modern, global, industrial economy (not the local resources of the wintry country around New York, Boston, or Chicago) that allows us to savor traditional, peasant, fresh, and natural foods.

Virgin olive oil, Thai fish sauce, and udon noodles come to us thanks to international marketing. Fresh and natural loom so large because we can take for granted the preserved and processed staples — salt, flour, sugar, chocolate, oils, coffee, tea — produced by agribusiness and food corporations. Asparagus and strawberries in winter come to us on trucks trundling up from Mexico and planes flying in from Chile.

Visits to charming little restaurants and colorful markets in Morocco or Vietnam would be impossible without international tourism. The ethnic foods we seek out when we travel are being preserved, indeed often created, by a hotel and restaurant industry determined to cater to our dream of India or Indonesia, Turkey, Hawaii, or Mexico. Culinary Luddism, far from escaping the modern global food economy, is parasitic upon it.

Culinary Luddites are right, though, about two important things. We need to know how to prepare good food, and we need a culinary ethos. As far as good food goes, they’ve done us all a service by teaching us how to use the bounty delivered to us (ironically) by the global economy.

Their culinary ethos, though, is another matter. Were we able to turn back the clock, as they urge, most of us would be toiling all day in the fields or the kitchen; many of us would be starving. Nostalgia is not what we need.

What we need is an ethos that comes to terms with contemporary, industrialized food, not one that dismisses it, an ethos that opens choices for everyone, not one that closes them for many so that a few may enjoy their labor, and an ethos that does not prejudge, but decides case by case when natural is preferable to processed, fresh to preserved, old to new, slow to fast, artisanal to industrial.

Such an ethos, and not a timorous Luddism, is what will impel us to create the matchless modern cuisines appropriate to our time.

Contrary to many analysts’ assumption that putting Democrats into office is the best way to substantially increase the minimum wage, workplace actions and protests targeting low-wage employers could be the best strategy. These actions focus public attention on low wages and help pave the way for local and state ballot referenda to raise the minimum wage.

More importantly, direct pressure — through boycotts, protests, labor strikes, or supply chain interruptions — on McDonald’s, Walmart, and other powerful firms can “adversely affect” their bottom line, especially given “increasing public focus on matters of income inequality,” as McDonald’s company documents recently warned. This pressure can simultaneously yield direct concessions: some fast-food and retail chains have reacted to recent protests by granting raises to unruly workers, and a few have promised company-wide increases.

But beyond this immediate impact, the changes wrought by direct protest can also neutralize the affected firms’ opposition to raising the minimum wage to the level they are (now) paying their workers. Some may even lobby the government for such an increase to reduce their competitive disadvantage. This logic motivated certain US businesses to support the 1891 Meat Inspection Act, the 1906 Pure Food and Drug Act, and other landmark regulatory laws, because they saw the laws as forcing their competitors to honor standards they were already being forced to meet.

…

Targeting corporations can even make sense when corporations aren’t the most visible enemies of reform, as in the immigrant rights struggle. In March 2011, dozens of Arizona-based corporate executives wrote a letter to state legislators asking that they refrain from passing further anti-immigrant bills like the infamous SB 1070, which passed in 2010.

The problem, they explained, was that “boycotts were called against [the] state’s business community” in response to the law. The boycotts were so “harmful to [their] image” that “Arizona-based businesses saw contracts cancelled or were turned away from bidding,” and “sales outside of the state declined” (the boycotts also led many Mexican companies to stop trading with Arizona businesses).

The threat to their profits led them to insist on a change in public policy. The result? Within a week, the Republican-controlled legislature rejected five bills designed to further criminalize immigrants.

This is all why it really doesn’t matter if Hillary Clinton supports the Trans-Pacific Partnership or the Keystone XL pipeline. What matters is whether she is scared to support them because doing so would cost her real political capital. Ultimately, putting Democrats into office makes the process of change much, much easier, but it isn’t enough and is certainly not an end point. Elections are merely the consolidation of power over the past election cycle, not the end of the game. Those who were disappointed with Obama should largely be disappointed with themselves, because they misunderstood how politics works in the United States. Hopefully, they learn the right lessons from that disappointment.

You may remember Joseph Epstein as the purveyor of right-wing identity politics for people who consider Roger Kimball too nuanced and insufficiently repetitive. You may also be aware of the conservative idea that there is only one objective standpoint, that of the white heterosexual male. So it may not surprise you that Epstein is the man to distill the latter idea into 180-proof self-parody:

Now have we come to the point where we elect presidents of the United States not on their intrinsic qualities but because of the accidents of their birth: because they are black, or women, or, one day doubtless, gay, or disabled—not, in other words, for themselves but for the causes they seem to embody or represent, for their status as members of a victim group?

This is the kind of thing that doesn’t really require refutation. Ditto his whining about the fact that people have the temerity to criticize an essay in which he wrote that “I have said that I think homosexuals curse, and I am afraid I mean this quite literally, in the medieval sense of having been struck by an unexplained injury, an extreme piece of evil luck, whose origin is so unclear as to be, finally, a mystery.” (It should go without saying that the essay is also larded with sub-Allan Bloom complaints about relativism on college campuses that Epstein, like so many others, has already written innumerable times.) But that doesn’t mean I don’t appreciate Chait stepping up to the piñata:

Yes, that’s right. America used to elect presidents on “intrinsic qualities” rather than “accidents of their birth.” And this process resulted in the election of forty-three consecutive white men, an outcome Epstein must regard as an extreme coincidence. The last president to be elected on the basis of intrinsic qualities rather than accidents of birth was George W. Bush, whose birth circumstances, Epstein apparently believes, had no bearing upon his career trajectory.

[…]

In a larger sense, of course, the very existence of Epstein’s piece serves to disprove its thesis. If it is still possible for a white man to write an incoherent farrago of self-pity whose only shred of evidence directly undercuts its thesis, and have such drivel thrown onto the cover of a national magazine, then white men are probably still doing okay.

When Bernie Sanders announced his presidential bid, I saw several comments from people who may have supported the third party campaigns of people like Ralph Nader in the past respond by asking when the people who supposedly prefer primary challenges to third parties would start criticizing Sanders for challenging Hillary Clinton. The upshot of these statements is that those who oppose third party bids are actually Democratic Party hacks who just want to protect their beloved Al Gore or John Kerry or Barack Obama or Hillary Clinton or whoever the party centrists select.

I can’t speak for anyone else, but as a very strong critic of anything to do with third parties on the left, I can say this is absolutely not true. I think Sanders’ run is great. Here’s the thing about Bernie Sanders as opposed to, say, Ralph Nader: he is neither consumed with his own ego nor an idiot who doesn’t understand American politics. Rather, he is challenging Hillary Clinton in a way that is going to force her to move to the left on real issues, through the primary process. That matters a lot.

Relatedly, I strongly endorse almost everything in this Bhaskar Sunkara piece on Bernie Sanders.* Sanders is no revolutionary socialist. He’s really pretty comparable to a good Great Society liberal like Hubert Humphrey or Ed Muskie. But by running for president within the Democratic Party instead of mounting a pointless, quixotic third-party campaign, he gives voice to the Democratic Party base that may be OK with Hillary Clinton as the nominee but would sure like her to be significantly farther to the left. By making socialism not a dirty word but an appealing alternative to DLC corporatism, Sanders represents a threat. It’s not that I think Hillary Clinton believes in her soul that her husband’s policies of mass incarceration need to be reversed or that the Trans-Pacific Partnership is really flawed. I don’t care what she feels. I don’t look to politicians for sincerity. I care what she feels she has to do in order to motivate the base to vote for her and support her in her presidency. Bernie Sanders is making her do more work there. And she won’t be able to completely repudiate those positions once in office.

The Bernie Sanders campaign has its limitations. It’s not bringing socialism to the United States. But if we recognize what it is doing, it has real significance and should be wholly supported by the entire Democratic base. In an increasingly polarized nation, the centrist voters the Clintons were made to appeal to are almost nonexistent, Beltway writers notwithstanding. She should have to work to win over a Democratic base that is moving to the left. The more work she has to do, the more likely she will govern to the left.

In the end, politics is not about which candidate you want to have a beer with or the left-wing version of this, which is about which candidate seems to hold your feelings deepest in their heart. It’s about the expression of power. A legitimate run by Sanders finally shows the Democratic Party that the party left not only wants change but actually understands how politics work and how to enforce discipline. It shows that the left has organized enough to pull the party left. And that would be a big win for progressive forces.

* I don’t agree with the whole “transcend the Democratic Party” point, because replacing it with a Socialist Party isn’t ever going to happen, and trying to do so would suck out the energy needed to create concrete gains for working people. But understanding that right now the effective way to create those gains is to infiltrate the Democratic Party nomination process is good enough. Plus, he’s the socialist editor of a major leftist journal, so what is he supposed to say?

When you boil them down, defenses of the NCAA cartel amount to a “if things were different, they wouldn’t be the same” argument. Allegedly, the mystique of the NCAA depends on players being forbidden from receiving anything but scrip as direct compensation, and on extraordinary, unique bans on third-party compensation that apply to no other students. People are not offended by everyone else in the NCAA raking in as much cash as they possibly can, but end the exploitation of players in high-revenue sports, the argument goes, and the edifice would crumble.

The most important response to this argument, of course, is “who cares?” If the popularity of NCAA sports depends on gross exploitation and egregious double standards, then it’s not worth saving. Sentimentality and trivial aesthetic preferences are pathetically weak justifications for denying the people taking the most risk and generating the most value fair compensation.

But here’s the thing: I don’t believe that the argument is correct on its own terms. Owners asserted, after all, that free agency would destroy the popularity of pro sports, when in fact the popularity of pro sports exploded after free agency. What fans will rant about to talk radio hosts has little connection with their future behavior. In comments in the last thread, I think djw put the point brilliantly:

What’s particularly absurd about the first complaint is that at big-time sports schools, Football and Basketball resemble a professional team already in all the relevant ways: some of the best athletes in the world who treat athletics like more than a full time job, extremely high level of competition and performance, tons of money, marketing, and TV contracts, lots of people making obscene amounts of money, world class facilities, etc. The only real difference is that the people who do the most important and risky labor don’t get paid/get paid in dubious company script. It’s enormously popular.

On the other hand, there are hundreds of DII and DIII schools where the same sports teams resemble the amateur ideal a great deal more–no compensation, HS+ level facilities, part-time coaches, practice and travel schedules that let athletes be students in a meaningful sense, etc. Nobody cared. I attended one of those schools, I only heard my team was playing for a national title by watching sportscenter. (But I did watch UW on TV every week).

Bitter scribe’s assumption is that even though every single step toward professionalism so far has made college sports more popular, this one last step will somehow ruin everything. Let’s just say he’s got a substantial unmet burden of proof here.

The fact that the popularity of college sports is inversely correlated with how closely they embody the Noble Ideals of Amateurism makes claims that compensating players fairly will destroy college sports implausible in the extreme.

Some musings inspired by the Indiana backlash and the backlash to the backlash:

Some “meta” preliminaries: obviously, freedom of religion at its core is a non-negotiable requirement for any society wishing to plausibly call itself a liberal democracy. What is the core? The right to join and form religious organizations, worship freely, and speak openly about one’s religion in the larger society, regardless of the degree of overlap between the content of these religious views and the mainstream of official state ideologies. Of course, there’s a lot more to how states support freedom of religion as a matter of practice: from tax-exempt status to cooperative educational and charitable projects to the possibility of religious exemptions from generally applicable laws. By saying such things are not the ‘core’ of freedom of religion, I don’t mean to suggest they are inappropriate or wrong, or even unnecessary. But unlike religious freedom’s core, they should be understood as negotiable—that is, they’re the proper subject for democratic deliberation and contestation, and there ought to be no particular expectation that there’s a universal proper liberal-democratic answer to these kinds of questions. Which arrangements are most appropriate for a particular political society depends, to a significant degree, on local circumstances. Any democratic society that finds itself debating whether to honor the core of religious freedom has gone badly off the rails, but a democratic society debating the non-core scope of religious freedom is just doing what democracies do.

On a less meta level, my own views on religious exemptions are quite fluid. I find myself shifting between being mostly (but never entirely) comfortable with the pre-Smith status quo of balancing tests and the original RFRA framework, and mostly (but never entirely) resigned to a Smith-like restrictive approach. I teach a seminar on multicultural policy every spring, so it’s not as if I haven’t thought about this a great deal; I just seem prone to dramatically changing the relative weighting of different goals and values.

If there’s a pattern to shifts in my uncertainty, though, it’s probably that I find myself drifting toward a more restrictive approach. In watching the politics of the Indiana law and its backlash, I think I’m getting a better sense of why that’s the case. What’s currently underway is what I’ll call the weaponization of religious exemptions. To explain what I mean by this, here are some classic examples of requests for religious exemptions: permission to use otherwise illegal substances in religious ceremonies (the Smith plaintiffs and peyote, Catholics and sacramental wine during Prohibition, Rastafari and marijuana); exemptions from zoning laws for the construction of sukkahs, and from rules on the religious use of public property for the construction of eruvs; exemptions from mandatory military service, schooling requirements, or vaccinations; exemptions from incest laws (regarding uncle/niece marriages in some communities of Moroccan Jews); Native American religious groups seeking privileged access to sacred spaces on federally owned land; and exemptions from Sunday closing laws for seventh-day Sabbatarians. I find some of these easy to support and others profoundly problematic, but they share a common feature: they are fundamentally defensive in character. Their primary objective is to protect a practice or tradition or community, and little more. These exemptions are political, but not in the sense that their exercise is directed at the larger community in any concrete, meaningful way. In these cases, the end sought in pursuing the exemption is, more or less, the exemption itself.

The requested accommodation in City of Boerne is a kind of transitional case. The exemption sought was to modify a church in a historic district where such modifications were not permitted. While the exemption was clearly sought for the purpose of religious activity, it wasn’t really a religious exemption per se—the church wanted a bigger, more modern facility for more or less the same general reasons a private business or homeowner might want an exemption: to accommodate more people, to have better amenities, and so on. There was no connection between the group’s status as a religious organization and the nature of the particular exemption it was seeking; in essence, it was arguing that RFRA gave it license to avoid a law it found inconvenient. (Hypothetically, if a religious organization sought an exemption from historic zoning on the grounds that its religion prohibited holding ceremonies in buildings over a certain age, the case would have more merit.) Turning religious exemptions into a license for religious groups to evade general laws they find inconvenient seems entirely deserving of pushback.

But this is only a partially weaponized use of religious exemptions: the exemption is being used as a weapon to advance the Church’s goals, but not to strike at its political enemies. The quintessential case of a weaponized religious exemption is, of course, Hobby Lobby: Obamacare was to be the subject of a blitzkrieg, hit with any and every weapon imaginable, and that’s what RFRA provided. The plaintiffs’ efforts to make the claim appear credible could hardly have been lazier or more half-assed. One possible check on weaponization, in a better and more decent society, could conceivably be a sense of embarrassment or shame: exposing one’s religious convictions as a cynical political tool to be wielded against one’s political enemies might provoke enough embarrassment to be avoided, but we are well past that point. A remarkable document of this trend is this post from Patrick Deneen—fully, openly aware of the fundamental absurdity of Hobby Lobby’s case, yet cheering the company on nonetheless. I mean, you’d think they’d at least have found a company owned by Catholics.

In light of that case, the transparent push for a super-RFRA deployable in private torts is not quite as egregious. The bill is by no means guaranteed to get them the results they want (my understanding is that no attempt to defend discriminatory behavior under any RFRA has yet succeeded), and it has plenty of other potential applications, some of which may be salutary. But the politics of it are undeniable; as in Kansas, Arizona, and elsewhere, this is plainly the latest effort in the longstanding war on full social equality for gay and lesbian people. (If not having an RFRA on the books at the state level is such a grave threat to religious liberty, why haven’t we been hearing more about this since 1997, given that most states have no such law?) That this is a considerably less ambitious project in denying social equality than most previous battles fought in this war merely reflects the ground its partisans have lost recently.

As I mentioned earlier, I don’t have confident or strongly held views about the ideal and proper scope of religious exemptions, although I’ve probably been drifting further from the RFRA framework and closer to Smith. The backlash against the Indiana bill—a bill that, private torts provision aside, isn’t that different from one that once passed the House unanimously and the Senate with 97 votes—not to mention conservative Republicans vetoing similar legislation in Arizona and Arkansas—suggests something very real has changed. The assumption on the right is that it’s liberals who’ve changed; we don’t support religious freedom like we did back in the ’90s. They’re not entirely wrong about that, but it’s an incomplete view of what has changed. Insofar as liberals changed their minds about the proper scope of religious exemptions, they didn’t do so in a vacuum; they changed their minds because the context we’re now in—facing an utterly shameless political movement that treats any conceivable political tool as fair game for achieving its ends—is simply not the kind of environment that fits well with an expansive approach to religious exemptions. The personal, faith-based nature of religious conviction makes it clearly inappropriate for the state to question the sincerity of a professed belief, even when the insincerity is obvious and barely concealed, which in turn makes exemptions easier to support in an environment where there’s some degree of trust that the process won’t be routinely abused. As noted earlier, which approach to exemptions best serves the interests of justice and freedom depends to a significant degree on the details of the society in question. We may have been something closer to a society suited for expansive religious exemptions in the past, and we may be such a society again in the future, but it’s becoming difficult to deny that we’re not one now.

Branko Milanovic notes Lee Kuan Yew’s explanation of the success of Singapore and other Asian economies: partly Confucian culture, partly air conditioning. If you’ve ever tried to walk around Singapore, you know whereof he speaks.

The same factor plays a major role in explaining differential US regional growth, and thereby hangs a tale.

The rise of the US sunbelt can be understood largely as a response to the emergence of widespread air conditioning, which made places that are warm in the winter attractive despite humid, muggy summers. It’s a gradual, long-drawn-out response, because location decisions have a lot of inertia; few people would choose de novo to live in the old industrial towns of upstate New York, but the existing housing stock and the fact that people have family and social networks prevent quick abandonment. So to this day temperature is a good predictor of state population growth. I’ve taken the NOAA data and divided states into three groups by average temperature: Group I is colder than Rhode Island, Group III warmer than California:

These are places where summer would be really oppressive without air conditioning. (Actually, I find it oppressive with — in Texas, in particular, indoor spaces are freezing. But that’s another story.)

Now, these states have several things in common besides high temperatures. They’re all very conservative. And all of them that were states before the Civil War were slave states. These commonalities are, of course, interrelated: hot states had slaves because they were suitable for plantation agriculture, and today’s red states are, pretty much, the slave states of 150 years ago.

Now, all of this raises some interesting problems for the assessment of economic policy. Because they’re politically conservative, hot states tend to have low minimum wages and low taxes on rich people. And someone who is careless, cynical, or both, could easily take the faster growth of these states as evidence that conservative economic policies work. That is, charlatans and cranks can, all too easily, end up claiming credit for economic and demographic trends that are actually the result of air conditioning.


I made this observation in comments on Chris’ ideal theory post, and got some pushback, so I thought I’d take a look back at the data.

Both the number and the percentage of families in poverty dropped sharply during the 1960s when the “War on Poverty” was being waged actively, and remained near their all-time lows through the Nixon and Carter years until 1979, when the Volcker recession hit, followed by the election of Ronald Reagan. These events can reasonably be said to mark the point at which the government unequivocally changed sides.

The number of households in poverty has risen steadily since then and is now higher than in 1959, the year for which the poverty level was first defined by Mollie Orshansky. The poverty rate has remained consistently higher than in the 1970s, except for a brief dip at the peak of the late-1990s boom.

A quick note on the data: unlike most other countries, the US uses an absolute poverty line rather than a measure set relative to median income. Orshansky estimated a food budget that was adequate but austere by the standards of 1960, then multiplied its cost by three on the basis that food should be one-third of a poor family’s budget. The line has been adjusted for inflation since then, but not increased in real terms. It might be argued that the CPI overstates inflation somewhat. But much the same point applies to measures of GDP per person, which has increased dramatically over the 35 years since the US government stopped fighting poverty and started fighting the poor.
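The arithmetic behind the Orshansky line can be sketched in a few lines. This is only an illustration of the method described above; the dollar amounts and CPI values below are hypothetical stand-ins, not official figures.

```python
# Sketch of the Orshansky poverty-threshold arithmetic.
# All numeric inputs here are illustrative assumptions, not official data.

def poverty_threshold(food_budget: float, multiplier: float = 3.0) -> float:
    """Orshansky's rule: an austere annual food budget times three,
    on the premise that food is about one-third of a poor family's spending."""
    return food_budget * multiplier

def inflation_adjust(base_threshold: float, cpi_base: float, cpi_now: float) -> float:
    """The line is updated only for CPI inflation, never raised in real terms."""
    return base_threshold * (cpi_now / cpi_base)

# Hypothetical: a $1,033 annual food budget yields a $3,099 threshold
base = poverty_threshold(1033.0)  # 1033 * 3 = 3099.0
# Hypothetical CPI index values for the base year and today
today = inflation_adjust(base, cpi_base=30.6, cpi_now=236.7)
print(round(base), round(today))
```

The key point the code makes concrete: because the multiplier and the base food budget are frozen, the threshold only tracks prices, so growth in real incomes since 1960 never feeds into the official poverty line.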

The measure includes cash transfers, but not some non-cash benefits, notably the Supplemental Nutrition Assistance Program (food stamps) and the Earned Income Tax Credit, introduced in 1975. The result is to understate the progress made on poverty reduction since the 1960s, but not to change the basic story. Food stamps have been under attack from the right since the early years of the Reagan administration. The EITC had bipartisan support until the 1990s, but is also now under attack.

The first main point is that, unlike many smart observers, I’m not really much more optimistic than I was yesterday. And second, it’s like, how much more hackish could Scalia be? And the answer is none. None more hackish:

Particularly remarkable, however, was this exchange:

SCALIA: What about Congress? You really think Congress is just going to sit there while all of these disastrous consequences ensue? I mean, how often have we come out with a decision such as the – you know, the bankruptcy court decision? Congress adjusts, enacts a statute that takes care of the problem. It happens all the time. Why is that not going to happen here?

VERRILLI: Well, this Congress? [laughter]

VERRILLI: You know, I mean, of course, theoretically — of course, theoretically they could.

SCALIA: I don’t care what Congress you’re talking about. If the consequences are as disastrous as you say, so many million people – without insurance and whatnot – yes, I think this Congress would act.

Scalia’s argument, of course, came straight from a land of willful fantasy. It’s tempting to dismiss Scalia’s comments as politically naïve, but I think it’s more pernicious than that. Scalia has long shown an affinity for the most witless Fox News talking points. Republicans have been making a conscious effort to reassure the court that they have a plan should the court gut the ACA. Needless to say, they don’t actually have any plan — pretending to have a plan is their only plan. Indeed, Republicans in Congress are so dysfunctional that they can barely even pretend to have a serious alternative, and any attempt to fix the law would assuredly be stillborn.

The Republican alternative should the court willfully misread the law and ruin the federally established exchanges is a con somewhat less sophisticated than selling oceanfront property in Wyoming — but it’s good enough for Scalia! That tells you all you need to know about the extent of his fidelity to judicial ideals.

There are two additional examples of hackery and cynicism I didn’t have space to get to but are also relevant:

There is an additional element of disingenuousness in Carvin citing the relatively low number of states that have undergone the “thankless task” of creating their own exchanges after the IRS ruled that subsidies would be universally available. One reason so many states declined to establish exchanges is that Michael Cannon spent a great deal of time flying around the country urging states not to do so. The architects of the suit obviously thought that even with subsidies offered through the federal exchanges many states would establish their own – and, indeed, even with this organized campaign, 17 did. States had a reasonable opportunity to create their own exchanges, and that’s all Congress wanted. But it’s yet another new level of bad faith for the troofers to actively thwart the creation of state exchanges and then use this as a reason to wreck the federal exchanges, after projecting their own views about federal power onto legislators who don’t share them.

Let’s consider another of Scalia’s talk-radio soundbites: “This is not the most elegantly drafted statute. It was – it was pushed through on expedited procedures and didn’t have the kind of consideration by a conference committee, for example, that – that statutes usually do.” The “expedited procedures” claim is just erroneous; both the Senate and then the House passed the ACA using ordinary procedures, and then a set of amendments was passed through reconciliation. The implicit claim that the ACA was passed in unseemly haste is a joke to anyone who actually remembers the interminable process. It is true that the bill did not have the usual benefit of being harmonized through a conference committee. But the reason this didn’t happen is that the Republican minority in the Senate would not have permitted a vote on a new bill. It’s a neat scam: a Republican minority prevents Congress from functioning properly, and then their political allies on the Supreme Court use this as an excuse to willfully misread the resulting statute, with disastrous consequences for many people. For the same Supreme Court justice to then assert that congressional Republicans would never, ever dream of seeing large numbers of people go without health insurance just completes the shameless hack cycle.

The grand theory of Republican politics and constitutionalism in 2015 would seem to be “stop hitting yourself.” Stripping health insurance from millions of people based on a legal theory that would be laughed out of any courtroom not dominated by partisan Republicans is a logical endpoint.

Henry Farrell has a truly brilliant essay on how the evolution of Silk Road, the dark-web trading platform for forbidden transactions, can be viewed as an experiment in political philosophy. I can’t do better than his own blurb:

The Silk Road might have started as a libertarian experiment, but it was doomed to end as a fiefdom run by pirate kings.

It’s an awesome read.


But there are also a lot of good comments, and I think this point by Pseudonym is particularly important:

The real question at issue isn’t whether they should be compensated but whether they should be barred from being compensated, and by a national cartel with a monopoly on the path to professional status.

Apologists for the NCAA cartel tend to assume that they’re advocating for athletes being treated like other students. But this is completely untrue. What they’re defending is in fact a set of unique and extraordinary burdens placed on athletes. Virtually no other students are banned from receiving compensation from voluntary third parties, because such a ban wouldn’t make a lick of sense. Why on earth shouldn’t a music student be able to take a paying gig, or a journalism student sell a story? Similarly, we don’t claim that scholarship students working as RAs or in the bookstore can’t be compensated, or that staff and faculty who get tuition vouchers for family members don’t need to be additionally compensated for their work. These rules aren’t about ensuring that athletes are “really” students or whatever; they’re about attempting to preserve competitive balance. And that isn’t a good reason to allow athletes to be exploited, even before we get to the fact that the NCAA doesn’t have anything remotely resembling competitive balance even with these rules.

Like most NCAA critics, I’m not arguing that student-athletes are employees subject to minimum wage laws solely for being members of teams. I’m saying that if either colleges or third parties want to pay them market value for their services, they should not be forbidden from making those deals. This allows us to quickly dispense with non sequiturs about the cross-country team or (even sillier) the Dungeons & Dragons club. Most athletic events and intramural activities don’t produce any revenue, so there’s not going to be any money for the participants beyond scholarship money, and that’s fine with me. There might be some cases in which a rich donor really wants an alma mater to have a great cross-country team and offers recruits cash on the barrelhead. And that’s fine with me too — I don’t see why donors can give money to universities that enables them to hire a new Associate Vice Provost and Assistant Under Dean For Proactive Strategic Dynamism but should be prohibited from giving money to athletes directly.

Finally, defenses of the NCAA tend to be rife with a rhetorical technique we’ve discussed recently: someone with an indefensible position changing the subject to an allegedly superior alternative that isn’t actually on offer. The obvious problem for NCAA apologists that Paul’s post raises is why athletes should be forbidden cash compensation — not only by universities but by third parties — because of the Noble Ideals of Amateurism and the Sanctity of the Groves of Academe while everybody else involved with the NCAA is allowed to fill up wheelbarrows full of cash and deposit them in university-provided cars and drive off to get a university-provided oil change. One answer is to say that all of the other NCAA-related profit-taking should be stopped. The obvious problem is that it’s not going to be, and in the meantime we have to treat athletes based on the system as it is. If coaches start getting paid like associate professors of English and the NCAA gives its games to networks for free while banning advertising and ticket prices are capped at $10, we can talk about whether scholarships are adequate compensation. (We still don’t need to talk about bans on third party compensation, because these are just terrible policy under any possible system of college athletics.) Until then, players should not be forbidden from getting any compensation they’re able to negotiate.

When Chicago Teachers Union President Karen Lewis announced in October that she wasn’t entering the mayoral race, incumbent Rahm Emanuel must have breathed a sigh of relief. Lewis, the leader of the 2012 CTU strike, was widely expected to challenge Emanuel in the February 24 election as an anti-austerity, pro-labor Democrat. Some polls even showed her with more support among likely voters than the deep-pocketed mayor.

From privatizing city services to closing mental health facilities that served the city’s most vulnerable residents to shuttering forty-nine public schools in predominantly poor black communities in favor of charter schools, Emanuel’s record has provided plenty of grist for left candidates of varying stripes. While his most formidable opponent has stepped aside, a number of insurgent candidates are still challenging Emanuel and other neoliberal incumbents.

With Lewis out of the race, the CTU endorsed Jesus “Chuy” Garcia. A member of the predominantly Latino community of Pilsen on the city’s Lower West Side, Garcia first gained prominence by winning a seat as alderman in 1987 during the administration of Harold Washington, an anti-machine politician and Chicago’s first black mayor. Despite his experience as a community and political leader, Garcia hasn’t attracted an outpouring of support: many still view him as an insider, even if he is decidedly more progressive than Emanuel.

But opposition to Emanuel is also coming from more militant quarters. In the 25th Ward, an area that covers much of the Lower West Side, including Pilsen and parts of Chinatown, Jorge Mújica, a Mexican-born immigrant and labor rights activist, is challenging alderman Danny Solis as an open socialist.

His bid comes out of the Chicago Socialist Campaign, an umbrella coalition of independent left-wing parties and organizations, including the International Socialist Organization, Democratic Socialists of America, the Socialist Party, Solidarity, and others. Although Mújica isn’t the favorite, he has picked up endorsements from several unions, including AFSCME Local 31.

Chicago-based activist Alec Hudson recently spoke with Mújica about the role of aldermen in improving the lives of workers, socialist strategies for electoral politics, and the future of the immigrant rights movement. The interview has been edited and condensed for clarity.

You moved to Chicago and the United States for the first time in 1987. How would you say the city has changed politically since you arrived?

Well, I came to Chicago right after the death of Harold Washington. There were four Latino wards at that time — the first four new aldermen, all of whom were supportive of Harold Washington. They went through the Council Wars, and then, after Washington, the election of Richard Daley.

So the city has changed a lot, neighborhood by neighborhood they’re not the same as they were when I first came. The Loop’s transformation — it was skid row, and nobody wanted to go to the Loop or the South Loop after 5 PM. Now it’s a thriving deluxe condo hub.

How did you come to get involved in the labor movement? And if you’re elected as an alderman, how can you help workers?

Well, it’s been my life. I organized my first labor union at a print shop, the publishing house where I was working in Mexico City when I was about twenty-two. It was the largest publishing house in Latin America, Fondo de Cultura Económica, and I organized it because we were paid less than minimum wage.

I organized workers there and then participated in several labor union efforts in Mexico, and when I came here to the US my first job was translating collective-bargaining agreements into Spanish for Latino members of what was at that moment the International Ladies’ Garment Workers Union, which became UNITE, and then UNITE HERE.

So I’ve been involved with labor for a long time. I’ve been the president of the general press unit of the Chicago Newspaper Guild, I organized Univision with NABET Local 41. And now with Arise Chicago, my particular job is to work with groups and try to organize them, maybe into associations but if possible into labor unions.

I have been an organizer of two sections, two groups of labor unions — one with Workers United and one with Teamsters Local 705. The latter group, after seven months of striking, was able to get its first collective bargaining agreement — a big success, because less than 50 percent of all workers who organize and vote in favor of a labor union ever get a first contract. So that’s a double success.

But it’s what I do, and that’s what we propose to do with the aldermanic office. I have sometimes put it like this: if you can imagine a Department of Labor for the City of Chicago, that’s what we would like the aldermanic office to be — an active office that helps workers and works with them to organize and get better working conditions. Not a service office in the sense that you become a client and I do the work for you, but an organizing office that works with workers, organizes them, and supports them. That’s what we would like to do with that office.

With so much opposition to Rahm from various social and labor organizations around the city, and with even some of your opponents coming from some of those social and labor organizations as well, why did you decide to run in this election cycle? What about your socialist campaign is different from these other anti-Rahm progressive candidates?

Well, first of all I was drafted into the campaign, because there was a decision that there should be a socialist campaign. A lot of research was put into where to run a socialist campaign, and when the decision was made that the 25th Ward was a possibility, the Chicago Socialist Campaign asked me to run, and I said, “Yeah, why not?”

I said, “Okay, let’s do it, let’s openly talk about socialism” and the different platforms and ideas that socialism can bring — let’s open up a space for socialism in the public electoral movement. But we need a third party. I’m tired of people talking about the lesser evil, there is no lesser evil in this case. Now we have a real choice, now we can vote for a socialist.

What was the background to forming the Chicago Socialist Campaign?

Kshama Sawant. The victory she won in Seattle surprised everybody and prompted everybody to try to do similar campaigns. There was a good deal of discussion to see if it was reasonable, if it was doable, and if it was worth it to do in Chicago. And the conclusion was, “Yes, let’s do it, as long as we have a chance.”

Some of the people in the Chicago Socialist Campaign are not participating in this particular campaign because they live all over Chicago and they are participating in local elections, but it was a collective effort supported by all of these various organizations.

What role do you think electoral campaigns like yours play in revitalizing an independent socialist left? Why an electoral campaign, and do you think the Chicago Socialist Campaign will go beyond this election?

I think it should. The US hasn’t seen a strong Left participating publicly in an electoral campaign since the 1940s, and it’s hard because it’s been so many years since we’ve seen such a candidate. But I think we have to do it. Having seen a third party of sorts coming from the Right — with the Tea Party becoming its own party and with the libertarians forming their own parties — we need to see a leftist party coming as a third option.

This is not just for this election. This campaign is meant to give more spirit to many people in many places trying to do the same thing. A win in the city of Chicago is going to be a huge victory. We’re the third largest city in the US, and a socialist alderman would be a huge success. So that’s what we feel we’ve got to do. Let’s get together afterwards, analyze, review, and discuss — we love discussing — but let’s see what we can do in other places so it won’t just be Sawant, it will be Sawant, Mújica, and other people.

What made you decide to run with an independent organization when there is still so much power in the two-party system on a national level?

That’s one of the biggest discussions we have to have. We have a situation in Chicago right now where you have Chuy Garcia, who’s supposed to be an independent Democrat, running against the Democratic Party machine. So there is a Democratic Party “machine,” and there’s the rest of the Democratic Party, which to me is just an umbrella name for many people.

If you wanted to do politics in the city of Chicago, for the past fifty years you had to be a member of the Democratic Party. I am familiar with that because in Mexico, if you wanted to do something in politics you had to be a member of the PRI. There wasn’t any other way to do politics. If you were in the opposition or in another party, you went nowhere, you didn’t get elected.

So what we live under here is the dictatorship of one party. The mayor, all aldermen, all commissioners — well, there may be a few Republican Cook County commissioners — but basically you have to be part of that umbrella with the Democratic Party.

What happens with Garcia? At one point he was very close to the Communist Party back in the 1970s with Harold Washington and all of that stuff. Since he is in the Democratic Party, do we have to criticize him for it? I say we skip it — it’s not a problem, it’s not the Democratic Party as a well-oiled machine.

There are contradictions between different groups within the Democratic Party, we stay away from it, let’s not criticize people who honestly believe they are to the left of the machine. We won’t support them but let’s not pound on them, let’s pound on Rahm Emanuel. That’s the point.

So it’s a big discussion, I think, because we have our differences. We aren’t supporting Chuy, and we aren’t being endorsed by him. I think he’s actually afraid of us in some way — he doesn’t want to be identified with any socialists. But let’s have that discussion.

Do you think the CTU’s role in this election has been diminished because Karen Lewis is not running?

They are playing a major role of course: they are bankrolling several candidates, they think they can win several aldermanic offices. Good, excellent! But it would have been the same discussion because Karen Lewis would have said she is a Democrat anyway. We had a discussion there in the campaign; where would we be if it was Karen Lewis instead of Chuy Garcia, with Lewis saying she is a Democrat? What would we have done?

That’s part of the discussion we need to have still. Are some parts of the Democratic Party okay because they’re backed by a very powerful union, in the sense that it’s a fighting union? Is that good enough? But if she says, “I’m a Democrat,” does it mean total rejection on our part?

Those are some of the discussions we have to have, and I think that for further reference, whenever we have campaigns we have to discuss that first and launch the campaign later, because in the middle of the campaign you start discussing and debating those things and you debate a lot and you don’t do enough on the ground. If you are trying to win a campaign, you have to be doing things on the ground.

So we had a contradiction here. It was well worth the discussion, we all learned a lot, and I think those are things we have to have clear before we participate in the bourgeois electoral process.

You were one of the lead organizers behind the 2006 immigrant rights march in Chicago. What are some of your reflections on the state of immigration in Chicago, and how do you see yourself utilizing your position as alderman to help protect immigrant rights in the city?

One of the obvious measures for the city would be to have a municipal ID. New York City has one, San Francisco has one, and Chicago should have one. Going beyond that, I think we should try to push for the right to vote for everyone who pays taxes. I don’t care about “citizenship” in that sense. Up until the 1920s, in several states you could vote simply because you paid taxes.

What happened with this so-called immigration movement was that it wasn’t a movement. It was a broad coalition of a lot of organizations, each representing some particular interest, and in the end these organizations sold out to the Democratic Party instead of marching, instead of using public pressure, instead of civil disobedience — all of which were part of the line they pushed before these organizations (you know, nonprofit corporations) decided to go the electoral way: elect enough Democrats, and we will have a majority in both houses, and then we will have immigration reform.

Of course that didn’t happen, and nine years later we know it. We knew it from the beginning, that’s why we opposed it. But many people didn’t know it, so it was an experience for them. Many of them are disenchanted right now and saying, “We supported Democrats a whole lot and see how they pay us!” Half a million people deported, families divided, etc.

I think the Chicago Office for New Americans that Rahm Emanuel instituted has to be transformed into an office for all immigrants, not just “new citizens.” His plan is only a plan of integration — learn English, become “American” — still the melting-pot theory that says when you become a citizen you have to forget everything else and become “American.” I don’t like that, so I think that office should be dedicated to all immigrants, not just the undocumented or new citizens.

And again, just going back to the first question, undocumented workers are very vulnerable, but they do have rights, and they have to know that they have rights. So the aldermanic office and the potential office for all immigrants in Chicago should help them to enforce those rights. It should help them whether they are discriminated against or their wages are stolen — that’s what the aldermanic office should be.

Do you have any final thoughts?

Support a socialist! It sounds very weird, but we need money, we need volunteers, we have thousands of pieces of propaganda, and putting pieces in the mail is very expensive, but we have to do it and we have done it. And help us on February 24, help us nationwide. It’s not just a tiny ward in the city of Chicago — this is a socialist campaign on a national level.

Paul Krugman points out how arguments that claim not enough Americans have college degrees work as smokescreens to obscure the real drivers of social and economic inequality:

[M]y sense is that there’s a new form of issue-dodging packaged as seriousness on the rise. This time, the evasion involves trying to divert our national discourse about inequality into a discussion of alleged problems with education.

And the reason this is an evasion is that whatever serious people may want to believe, soaring inequality isn’t about education; it’s about power. . .

The education-centric story of our problems runs like this: We live in a period of unprecedented technological change, and too many American workers lack the skills to cope with that change. This “skills gap” is holding back growth, because businesses can’t find the workers they need. It also feeds inequality, as wages soar for workers with the right skills but stagnate or decline for the less educated. So what we need is more and better education.

My guess is that this sounds familiar — it’s what you hear from the talking heads on Sunday morning TV, in opinion articles from business leaders like Jamie Dimon of JPMorgan Chase, in “framing papers” from the Brookings Institution’s centrist Hamilton Project. It’s repeated so widely that many people probably assume it’s unquestionably true. But it isn’t. . .

[T]here’s no evidence that a skills gap is holding back employment. After all, if businesses were desperate for workers with certain skills, they would presumably be offering premium wages to attract such workers. So where are these fortunate professions? . . .

While the education/inequality story may once have seemed plausible, it hasn’t tracked reality for a long time. “The wages of the highest-skilled and highest-paid individuals have continued to increase steadily,” the Hamilton Project says. Actually, the inflation-adjusted earnings of highly educated Americans have gone nowhere since the late 1990s.

So what is really going on? Corporate profits have soared as a share of national income, but there is no sign of a rise in the rate of return on investment. How is that possible? Well, it’s what you would expect if rising profits reflect monopoly power rather than returns to capital.

As for wages and salaries, never mind college degrees — all the big gains are going to a tiny group of individuals holding strategic positions in corporate suites or astride the crossroads of finance. Rising inequality isn’t about who has the knowledge; it’s about who has the power.

It’s always suspicious when “everyone” is in favor of something. For a couple of generations now, almost all opinion-makers across the ideological spectrum have held the view that more formal education is an almost magical panacea for fundamental social and economic problems. On the glibertarian/corporatist right, this view dovetails nicely with a commitment to individual achievement as opposed to structural changes: as long as there’s a poor black kid going to Princeton (there probably is at least one) then Land of Opportunity, Shining City on a Hill, Bootstraps — you know the drill.

In other words, as long as the educational system helps make the class structure something less than completely rigid, then it’s A-OK for the top 0.01 percenters to pay a lower effective tax rate than the average American, while unions are wrecked and median wages fall, corporate profits soar, etc., because after all this poor black kid got a full ride to Princeton, got into HBS, and now he’s got Jamie Dimon’s job. (OK, this didn’t actually happen, but the point is that it could happen, which is all that counts in glibertarian land.)

On the liberal left, the commitment to higher ed as a magic bullet is based on a less morally obnoxious but even more economically dubious belief, to wit the theory that sending more people to college ameliorates structural unemployment via enhancement of human capital. As Krugman points out, the problem with this theory is that it doesn’t appear to be true, or at least not any more.

Who benefits from the ubiquity of these beliefs among all right-thinking people? One obvious group of beneficiaries consists, as Krugman notes, of the current Lords of Capital. There’s another group he doesn’t mention, which includes those atop our ever-growing Educational Industrial Complex, who benefit from a system that has quasi-socialized the cost of higher ed (in the form of more than $1.2 trillion in educational loans, only 37% of which are currently in timely repayment) and quasi-privatized the immense profits it generates.

On Thursday, Germany refused any negotiations with Greece, and the European Central Bank (ECB) refused to accept Greek bonds as collateral (since there are no guarantees that the Greek government will carry out the “adjustment” plan). Although this does not amount to an immediate push to kick Greece out of the eurozone, it is certainly a threat in that direction.

In order to understand the motivations behind this recalcitrance, and the competing interests at work, Germany’s special relationship with the euro must be understood. The eurozone’s stated goal was to create a currency strong enough to build a unified European financial bloc that could compete with the US and China.

However, this was never the full truth. This “unified” bloc has always been composed of competing nation states, and the big, industrialized countries at the center have been keen on making the peripheral economies dependent on the core.

With the introduction of the single currency, Germany’s deutsche mark was effectively devalued relative to the other national currencies. This meant not only that German labor became cheaper, but also that the country’s manufactured products became cheaper and more competitive in the world market.

The resulting overvaluation of the southern countries’ national currencies solidified them as peripheral economies and established export markets for German products. Their productive sectors destroyed, the peripheral economies became dependent on imports, especially from Germany.

Germany, then, clearly benefits from Greece’s presence in the eurozone; a Grexit is not in its economic interest. Nonetheless, German Chancellor Angela Merkel is sending a veiled threat that this is what might happen. Why?

Syriza’s victory and Podemos’s rise have presented Germany with a conundrum. If Germany accepts the anti-austerity demands, writes off a significant part of the Greek debt, and reformulates the way in which the loans and interest rates are set, it would not only risk the stability of the German financial system, but would also be admitting that its austerity policies have been a complete failure.

This could cause a domino effect in Europe, bolstering left forces that seek to profoundly reshape the structures and dynamics of the European Union (EU). A Grexit in these conditions would likely mean a more severe economic crisis in Germany and, therefore, the polarization of society. Situated in this position, Merkel knows that the only way to stop a rising left is to augment the Right. In France, England, and Germany, conservative forces have the upper hand, while in Greece, the Left hasn’t won hegemony in what’s still a heavily polarized society.

Across the Atlantic, President Obama has indicated some support for the Greek government, saying that the time for austerity in Europe is over and the time for growth must begin. But what is Obama’s motivation, if any, in backing a government of the Left?

The US has no interest in a European financial crisis triggered by the failure of Greece’s banking system. This would imperil the aims of the TTIP trade agreement, which is intended to strengthen the economic position of the US and the EU.

Obama also knows that Russian President Vladimir Putin is available to finance Greece if all other financial options fail. Such a move would come not out of benevolence, but realpolitik. After Ukraine, Obama does not want Russia to have another ally on the European continent, much less one belonging to the EU (exiting the euro, of course, does not mean exiting the EU).

Greek Prime Minister Alexis Tsipras has accepted Putin’s invitation to meet in May. This is leverage that the Greek government can use to try to force Germany to accept its demands. At the same time, Merkel has been summoned to the White House for an emergency meeting on February 9.

It would be politically naive to suppose that Obama would accept a government of the Left that has the ability to provoke a political hurricane across Europe. Even if Obama is prioritizing US domination, he is aware of the dangers that Syriza poses to the political and economic hegemony of European capitalism.

The question that remains open is whether Obama and Merkel can find a way to prevent the growth and success of the Left while at the same time keeping Russia out of the picture. This is where the political dispute starts: how to promote the rise of the (semi-moderate) right in Europe, which would be a much better solution for all the parties involved — with the exception, of course, of the people of Greece and Europe.

As I have argued before, Syriza’s decision not to build its electoral program around the currency question and exiting the eurozone is not a weakness, but a strength: if Greece is forced back to its national currency — and wages drop as a result — the blame will be placed on European elites, not on Syriza.

On Thursday, 15,000 rallied in Athens to back the government and its position in negotiations. Syriza will need this kind of strong, mobilized support — and more — from the Greek people to prevent it from backtracking. Greek society remains very split, and this victory of the Left could easily be turned upside down if Syriza fails to fulfill its electoral program.

There are more pro-government demonstrations in the works. Some have floated the idea of a referendum on whether the government should continue the negotiations. Other actions of support and solidarity must occur in the upcoming weeks and months to revitalize the movement and strengthen the government’s hand against its creditors. Concrete acts of international solidarity are also necessary, including demonstrations, strikes, and donations to solidarity organizations.

Finally, there is a very clear challenge for the German left. Active resistance to Merkel’s impoverishing policies is urgently needed, as is avoiding the trap of defending the national economy against Greece’s demands out of a fear of Grexit-induced crisis in Germany. This resistance must occur at all levels: the political and economic, across social movements, in both the trade unions and Die Linke. Only an organized left can change the public narrative around the euro and beat back an ascendant far right.

Relations between Greece and its creditors are not improving. Was this bad diplomacy on the part of Tsipras/Varoufakis? Maybe, but my guess is that there was nothing they could do to avoid a bitter confrontation short of immediate betrayal of the voters who put them in office. And creditor-country officials are acting as if they still expect that to happen, just as it has repeatedly over the past five years.

But they’re almost surely wrong. The dynamics are very different this time, and failing to understand them could all too easily lead to unnecessary disaster.

Actually, let me stress the “unnecessary” aspect. What Greece is asking for — although German voters probably don’t know this — is not a fresh infusion of money. All that’s on the table is a reduction in the primary surplus — that is, a reduction in Greek payments on existing debt. And we have often been told that everyone understands that the official target surplus, 4.5 percent of GDP, is unreasonable and unattainable. So Greece is, in effect, only asking that it get to recognize the reality everyone supposedly already understands.

Why, then, are things boiling over? Partly because what “everyone knows” has never been explained to northern European electorates, so that the time to recognize reality is always at some future date. Partly also, I suspect, because creditors have come to expect the symbolism of debtor governments abjectly abandoning their campaign promises in the name of responsibility, and are waiting for the new Greek government to pay the usual tribute of humiliation.

But as I said, the dynamic is very different this time.

I’ve long believed that Matthew Yglesias hit on something really important when he noted that small-country politicians generally have personal incentives to go along with troika demands even if they are against their nation’s interests:

Normally you would think that a national prime minister’s best option is to try to do the stuff that’s likely to get him re-elected. No matter how bleak the outlook, this is your dominant strategy. But in the era of globalization and EU-ification, I think the leaders of small countries are actually in a somewhat different situation. If you leave office held in high esteem by the Davos set, there are any number of European Commission or IMF or whatnot gigs that you might be eligible for even if you’re absolutely despised by your fellow countrymen. Indeed, in some ways being absolutely despised would be a plus. The ultimate demonstration of solidarity to the “international community” would be to do what the international community wants even in the face of massive resistance from your domestic political constituency.

But a genuine government of the left, as opposed to the center-left, is very different — not because its policy ideas are wild and crazy, which they aren’t, but because its officials are never going to be held in high esteem by the Davos set. Alexis Tsipras is not going to be on bank boards of directors, president of the BIS, or, probably, an EU commissioner. Varoufakis doesn’t even like wearing ties — which, consciously or not, is a way of declaring visually that he is not going to play the usual game. The new Greek leaders will succeed or fail, personally, based on what happens to Greece; there will be no consolation prizes for failing conventionally.

Do Berlin and Brussels understand this? If not, they are operating under a dangerous misconception.


That Fiorina quote is crazy. “I mean, I got measles as a kid. We used to all get measles.” I assume the next sentence was “Yeah, sure, 1–2 percent of us died, but who cares, it’s just human life.”

Chris Christie’s endorsement of parental choice over public health while we have a measles epidemic strikes me as yet another disqualifying aspect of his judgment, character and personality in his bid for the presidency. Here’s some important context for his remarks – Christie:

Michael, what I said was that there has to be a balance and it depends on what the vaccine is, what the disease type is and all the rest. And so I didn’t say I’m leaving people the option. What I’m saying is that you have to have that balance in considering parental concerns because no parent cares about anything more than they care about protecting their own child’s health and so we have to have that conversation, but that has to move and shift in my view from disease type. Not every vaccine is created equal and not every disease type is as great a public health threat as others. So that’s what I mean by that so that I’m not misunderstood.

His office is now qualifying even more:

To be clear: The Governor believes vaccines are an important public health protection and with a disease like measles there is no question kids should be vaccinated. At the same time different states require different degrees of vaccination, which is why he was calling for balance in which ones government should mandate.

That’s a relief. And, of course, parents always have the ultimate say over their children. But a public official should not, in my view, be messing around with basic concepts of public health, and giving any credence to anti-vaxxers. So why the equivocation when we need more public support for childhood vaccination?

My best answer is that any potential GOP candidate has to cater to the Christianist right, and the critical HPV vaccine is not exactly popular with that section of the population. Lo and behold, Carly Fiorina is saying something similar as well:

I think there’s a big difference between — just in terms of the mountains of evidence we have — a vaccination for measles and a vaccination when a girl is 10 or 11 or 12 for cervical cancer just in case she’s sexually active at 11. So, I think it’s hard to make a blanket statement about it. I certainly can understand a mother’s concerns about vaccinating a 10-year-old … I think vaccinating for measles makes a lot of sense. But that’s me. I do think parents have to make those choices. I mean, I got measles as a kid. We used to all get measles … I got chicken pox, I got measles, I got mumps.

An alternative explanation may, of course, be that President Obama has strongly endorsed childhood vaccinations and therefore any GOP candidate has to disagree. I’m not sure which interpretation is accurate, but neither is exactly encouraging.

When protesters disrupted the Supreme Court on the anniversary of the Citizens United opinion last week, they trumpeted a familiar slogan: “money isn’t speech.” In the four years since the ruling, this phrase has been the linchpin of the liberal critique of American campaign-finance jurisprudence, the idiom of the movement to get “money out of politics.”

Yet there are profound limitations to a politics that attempts to distinguish between money and speech. While attempts to curtail corporate expenditures in elections are important, the Citizens United court inadvertently recognized a profound truth about the way wealth structures society. Money is speech, which is precisely why its distribution matters.

The Citizens United case, which struck down prohibitions on corporate (and union) campaign spending, was never so much a change in First Amendment case law as its logical endpoint. American courts have long conceptualized free speech as a wholly negative liberty, their sole concern the degree to which the government can explicitly intervene in the activities of citizens.

But this idea of free speech has always suffered from a contradiction, namely that the exercise of speech is always shaped by the economic situation of the speaker. As the Court put it in its landmark 1976 case Buckley v. Valeo:

Virtually every means of communicating ideas in today’s mass society requires the expenditure of money. The distribution of the humblest handbill or leaflet entails printing, paper, and circulation costs. Speeches and rallies generally necessitate hiring a hall and publicizing the event. The electorate’s increasing dependence on television, radio, and other mass media for news and information has made these expensive modes of communication indispensable instruments of effective political speech.

The Buckley opinion arose from a challenge to post-Watergate provisions in the Federal Election Campaign Act. Congress attempted to place strict limits on both individual contributions and candidate expenditures, but the Court rejected the latter, laying the foundation for Citizens United. Each verdict reasons that to see the First Amendment as solely protecting speaking is an absurdity. In the Court’s view, since one’s ability to speak is so tied to one’s wallet, the argument that money isn’t speech collapses.

And they’re right. Liberals — content to tinker at the edges of the economic base, simultaneously supportive of democracy and capitalism — must resist such conclusions. Radicals face no such strictures. Rather, this realization should lead us to a far more sweeping and critical verdict: when there are class differences and maldistributed wealth, democracy can only exist in a stunted form.

The possession of money determines one’s positive ability to act. To invert an aphorism from Anatole France, rich and poor alike have the equal right to hire high-priced lobbyists.

In this way, money not only buys political power, it is political power. Its possession confers godlike capability, and its deprivation creates servitude. With money one can manipulate public taste, ruin one’s enemies, and build, destroy, and conquer. Without it, one cannot eat, create, or even choose one’s everyday movements.

It’s the reason the film industry can secure massive, budget-depleting tax refunds and Walmart can single-handedly block attempts at wage regulation; where the flight of capital is a sufficient threat to people’s lives, direct corruption of the political process is unnecessary.

All of this is rather obvious. Yet as economists Samuel Bowles and Herbert Gintis note, while the observation that capital confers power “may evoke the same degree of astonishment as the observation that dogs bark,” this truism is utterly unaccounted for by liberal political philosophy’s negative-liberty framework. Adding campaign expenditure restrictions, as American liberals propose, does little to alter that fundamentally naive framework. Capitalism’s predation of democracy won’t let up because of a well-placed restriction on campaign giving.

If money shapes the contours of our life choices, and is the prime determinant of our possible acts, then one person possessing more money than another is no different from his having more votes. And if rights are only meaningful to the extent they can be exercised, granting an equal right to free speech would demand a massive redistribution of wealth.

Naturally, the Citizens United decision, though the inevitable progeny of a long line of cases, is an affront to democracy. It helped entrench corporate power, and the resulting tidal wave of new election spending shouldn’t be trivialized. Yet this is far more a product of economics than law. The excessive influence of the wealthy on government did not begin with Citizens United, but with the founding of the country. It is a function of an inegalitarian economic system, not anticorruption statutes.

The theory behind progressive opposition to Citizens United thus clouds our understanding of freedom. In September, Justice Ruth Bader Ginsburg suggested Citizens United was one of the Court’s worst mistakes, saying that “the notion that we have all the democracy that money can buy strays so far from what our democracy is supposed to be.”

But just two weeks later, Ginsburg signed onto the Court’s judgment in Integrity Staffing Solutions, Inc. v. Busk, which limited the reach of the Fair Labor Standards Act. In that case, the Court ruled that employers do not need to compensate employees for the time they spend standing in line for a mandatory security screening.

The Court held that because the workers had been hired to pack boxes rather than stand in line, they didn’t need to be paid for the parts of their job that did not involve packing boxes. All four liberal justices concurred. (So did the Obama administration, which had filed a brief in support of the temp agency.)

For Ginsburg, there is no contradiction in opposing Citizens United and supporting Integrity Staffing Solutions, even if both ultimately reduce ordinary people’s agency. Jesse Busk, the temp worker who sued over the denial of wages, reported that he wished to be paid the $6.25 for his screening because “the job was exhausting — we would sometimes walk twenty miles a night — and I was eager to go home and get some sleep.”

But when Ginsburg speaks of “what our democracy is supposed to be,” she thinks only of an individual’s relationship with the state; economic democracy is a foreign concept, and Jesse Busk’s twelve-hour shift has no relevance. An employer’s power over an employee may decimate the usefulness of the rights Ginsburg values, but apparently because money isn’t speech, one has no right to it.

Liberal election reform efforts have taken many shapes, from matching-funds schemes to proposals to distribute $50 vouchers to voters, each attempting to overlay a fair political process onto an unequal economy. Many of these policies would make a substantive difference in the degree to which government power is purchasable, even if nearly all of them are likely to be declared unconstitutional by a conservative-majority Supreme Court.

But all progressive plans suffer from the same core weakness: they address only a tiny fraction of the ways in which wealth is politically important. When US Steel unilaterally laid off 545 workers last week, none of them had any input into the decision; when 95 percent of the “post-recession” economic gains went to the top 1 percent, the political power of the non-wealthy was eroded even further.

By attempting to forge a superficially fair political sphere while leaving the inegalitarian core of capitalism untouched, liberals are therefore ensuring that the most pernicious antidemocratic forces in American life go unchallenged.

Even if Citizens United eroded American democracy, the ruling’s central proposition was correct. And as soon as money is recognized as speech, the incompatibility of political equality and capitalism is revealed. There can never be such a thing as free speech until economic resources are distributed equally.

A political process that limits corporate influence is to be striven for. But a politics in which capital doesn’t dominate requires an economy without a class system.

This is a guest post by Paul Adler, lecturer in the Harvard History and Literature program. He received his PhD in history from Georgetown University in 2014. Paul’s dissertation, Planetary Citizens: U.S. NGOs and the Politics of International Development in the Late Twentieth Century, examines efforts by U.S. groups like INFACT and the Sierra Club to influence institutions like Nestlé and the World Bank during the 1970s and 1980s. Before graduate school, Paul worked for several years on global justice issues at Public Citizen’s Global Trade Watch.

On January 25, 1984, William Thompson, a leader with the International Nestle Boycott Committee (INBC), met with Nestlé executive Carl Angst in New York City. There, the two men announced a surprise: after seven years of a global boycott of Nestlé, U.S. organizers were suspending this effort in light of new Nestlé initiatives intended to address activists’ critiques. The boycott, which ended for good ten months later, set important precedents for liberal and left-wing activists in challenging multinational corporate power. However, the memory of the campaign as a great success does not hold up well under close scrutiny.

The controversy that prompted the campaign concerned the marketing practices employed by multinational companies selling breast milk substitutes throughout the Global South. Given living conditions often characterized by lack of access to clean water, the use of products such as infant formula heightened the possibility of newborns contracting any number of dire, even deadly diseases.

Multinational companies advertised breast milk substitutes as embodying a “modern” lifestyle. To spread this message, they used an array of aggressive marketing practices. Among other techniques, companies produced booklets on infant feeding that accentuated the difficulties of breastfeeding and hired nurses to serve as salespeople in newborn wards.

Example of Nestlé advertising, Malaysia, 1978

During the 1960s and early 1970s public health experts labored to publicize the dangers associated with breast milk substitutes. They met with little success, however, causing one doctor to muse in 1974 that some “group may have to take a more aggressive, Nader-like stance.” Fortunately for him, that same year, activists in the United Kingdom released a pamphlet on the crisis called The Baby Killer, followed soon after by activists in Switzerland becoming embroiled in a lengthy lawsuit with Nestlé.

In the United States, the key figure who transformed the breast feeding controversy into an activist campaign was Leah Margulies. The daughter of a staffer at the International Ladies Garment Workers Union (her parents met through the Young People’s Socialist League), Margulies was, by the early 1970s, a veteran of the civil rights and radical feminist movements. In 1974, working as an organizer for the Interfaith Center on Corporate Responsibility, Margulies began devising ways to make the breast milk substitutes scandal into a campaign.

To Margulies, this controversy appeared a perfect issue to use in energizing activists to engage with questions of economic inequality and multinational corporate power. As she explained to Mother Jones in 1977, “it is very difficult to make graphic that the world is starving, not because of drought or floods, but because of economic dependency.” From 1974 to 1977, Margulies worked with church groups to spread awareness, launch several shareholder resolutions, and mount a lawsuit against Bristol-Myers. However, these efforts produced few tangible results. Looking to escalate her efforts, Margulies reached out to fellow anti-poverty activists with the intention of starting a boycott of Nestlé. The Swiss multinational offered a promising target: not only was it the world’s largest purveyor of breast milk substitutes, but it also sold household products (such as coffee) around which a consumer boycott could easily be organized.

Teaming with activists in Minneapolis, in early 1977 Margulies helped to found the Infant Formula Action Coalition (INFACT). On July 4, 1977, INFACT commenced a nationwide boycott of Nestlé. Organizing through a broad array of organizations (from public health associations to churches to left-wing solidarity groups), INFACT rapidly assembled local boycotts in towns and cities across the country.

A Nestlé Boycott Picket Line

One constituency the boycott’s organizers sought out was organized labor. Activists tried to enlist labor in part by portraying the boycott as an experiment in corporate campaigning. Writing to a number of union presidents in 1982, Americans for Democratic Action president Robert Drinan illuminated this point, describing the boycott as “an act of international solidarity with working people in the Third World” and arguing that “organized labor has long recognized the need to develop an international capability to deal with the problems presented by multinational corporations. The leaders of the infant formula campaign have shown that it is not only necessary, but possible.”

Even as they built the boycott coalition, the leaders of INFACT searched for other avenues of influence. After months of organizing focused on the U.S. Senate, on May 23, 1978 activists descended on Washington, D.C. to participate in a hearing chaired by Ted Kennedy. While activists effectively presented their case, the representative sent by Nestlé delivered a calamitous performance. He accused church groups of being part of a “world-wide church organization” conspiring to “undermin[e] the free enterprise system,” while also arguing that Nestlé bore no responsibility for ensuring that consumers safely used its products.

Excerpt from the Kennedy hearing

Feeling humiliated after the hearing, Nestlé and the other multinationals searched for a way to end the boycott. Mediating between the activists and the companies, Kennedy helped steer both sides toward a solution under the auspices of the World Health Organization (WHO). In October 1979, a meeting cosponsored by the WHO and UNICEF in Geneva ended with the WHO agreeing to draft a global code of conduct for the marketing and promotion of breast milk substitutes. For the next year and a half, lobbyists from activist groups and multinationals each tried to influence the code’s language, while activists also intensified and internationalized the boycott.

In the end, companies (backed by the U.S. government) succeeded in ensuring that the code would take the form of a voluntary “recommendation,” as opposed to a legally binding regulation. However, the code’s strictures significantly constricted corporate advertising, causing the companies to condemn the code (while activists offered critical support). When the code came to a vote at the WHO in May 1981, the only nation to oppose it was the United States, acting at the behest of the Reagan administration. Activists and Nestlé then spent two and a half years battling over the company’s implementation of the code, leading to the January 1984 suspension and then the October announcement by Nestlé that it would fully abide by the WHO code.

The Nestlé boycott was an early example of a coordinated, international effort targeting a multinational industry. During the early 1980s INFACT coordinated closely with boycott efforts in Western Europe, as well as in Australia. Even more significantly, NGO activists from the Global North and Global South came together to work under the auspices of a single organization, International Baby Food Action Network. The connections forged in this era continued through the 1990s anti-WTO fights and remain significant to the present. While the boycott did terminate with a seemingly monumental victory in October 1984, subsequent events have been more dispiriting. Four years after this triumph, activists relaunched the Nestlé boycott, accusing the company of not abiding by its commitments to the code. The boycott, while mostly dormant in the U.S., is active abroad to this day, in part reflecting the difficulty of monitoring the code (given the ease with which improper advertising can occur) and in part the vast power of multinationals like Nestlé.