Lolz

April 18, 2012

The truest will always be Weber on the "strong and slow boring of hard boards" but this from Barney Frank comes very close:

I believe very strongly that people on the left are too prone to do things that are emotionally satisfying and not politically useful. I have a rule, and it’s true of Occupy, it’s true of the gay-rights movement: If you care deeply about a cause, and you are engaged in an activity on behalf of that cause that is great fun and makes you feel good and warm and enthusiastic, you’re probably not helping, because you’re out there with your friends, and political work is much tougher and harder.

February 13, 2012

I agree with a lot of what Anita Joseph has to say here, but I feel obliged to respond to this somewhat fallacious dispatching of an argument of mine:

The amazing thing is that Occupy Harvard accomplished all of this without the support of the student body…the most relevant and important cause in a generation was met with widespread scorn.

Why is this? In Slate[sic] Dylan R. Matthews ’12, who is also a Crimson editorial columnist, suggested that it is because Harvard is made up of the one percent. In the Harvard Political Review, Josh B. Lipson ’14 suggested it is because “Harvard is not the bastion of radical leftism that second-rate social commentators describe.” I disagree with these views. The Harvard College Office of Admissions and Financial Aid says that around sixty percent of the student body receives financial aid, while that figure is around fifty percent at UC Berkeley, a school that embraced the Occupy movement in a much greater way.

This would make sense as a retort if getting financial aid at Harvard indicated the same thing about one's economic class as getting financial aid at Berkeley does. But it doesn't. Harvard has much more generous financial aid than Berkeley. To use an example from an old column of mine, a family making $150,000 a year with $600,000 in assets (including a $500,000 house) gets almost $13,000 a year from Harvard. According to Berkeley's financial aid calculator, that family gets no help from Berkeley financial aid.

As Joseph's own links explain, Berkeley has only recently committed to providing financial aid to families making up to $140,000, whereas Harvard is talking about expanding aid to families making well over that. So if "people on financial aid at Harvard" includes families making upwards of $150,000, and "people on financial aid at Berkeley" excludes them, then it could simultaneously be the case that more people are on financial aid at Harvard and that Harvard has a more affluent student body overall.

I'd imagine this is the case, especially because, for in-state residents, attending Berkeley without financial aid is much, much more affordable than attending Harvard without financial aid. A year at Berkeley for a Californian costs $32,635 while a year at Harvard costs $56,000. So it makes sense, even if Berkeley attendees make less on average than Harvard attendees, that fewer would be on financial aid, as you don't need to make as much to afford Berkeley without any assistance as you do to afford Harvard without any assistance.

Joseph's larger point is that the "Organization Kid" mentality of Harvard students explains their resistance to Occupy, and I agree with that to some extent. But I think people consistently underestimate the extent to which being an Organization Kid is a result of being in and/or identifying with a certain economic class, and thus leads one to want to advance that class's interests. Harvard students are rich, so they look out for other rich people.

January 18, 2012

This, again, is only a record of what I was listening to at this point in time, posted for my own archiving as much as anything. All things listed are things I like a lot; "The Words That Maketh Murder" is a fantastic song. Also I just like making lists. Overall I thought it was a terrible year for albums and a great one for singles.

December 04, 2011

Corey Spaley points to Nick Smyth's post making a pithy argument for an anti-theoretical approach to ethics:

We are human beings. We are embedded in a massively complex social context, and within this context there exist many powerful sources of reasons. We are social creatures: The roles we play, the relations we enter into, the norms we accept, and the ideals with which we identify are constantly interacting with our practical thought, our judgment and our action. We are also organisms with basic needs: the mere fact that we are embodied generates an enormous number of important practical imperatives for us, individually and collectively. We are also citizens: we live under laws and institutions that shape our prerogatives and which can provide reasons for action on their own. We are also goal-driven: a human life is probably not complete without the active pursuit of coherently organized ends...

Gee, look at that! No moral theory, yet somehow we can still speak of acting for good reasons, we can still praise and blame one another, and we can still hold each other responsible for what we do according to various standards.

But what if some of those standards aren't actually standards? Take the assertion that "a human life is probably not complete without the active pursuit of coherently organized ends." That is profoundly arguable! I'm a hedonist. I think, when push comes to shove, that someone who has no coherent goals to speak of and plugs herself into an experience machine for her entire life will have acted in her true self-interest. She's not goal-driven at all but she's living a better life than I'm living, despite all my goals. Smyth, I assume, will contest this claim. But to do that he'd have to give an account of why there's more of value in life than hedonic well-being. And many people have! But such an account, I think, is likely to be an ethical theory of the kind Smyth finds largely useless.

Or what if these standards conflict with each other? Jean Valjean had reasons to steal bread deriving from loving his nieces and nephews and wanting them to not starve. He also had reasons to not steal bread deriving from being a French citizen. Are the former reasons stronger than the latter ones? I think so, and if it came time to defend that position I'd note that the good caused by saving children's lives outweighs the harm done to the producer of the bread. But what does an anti-theorist say? I suppose he could say that it's just the case that Valjean has more reason to steal the bread. What kind of monster could say otherwise?

But this kind of intuitionism is (a) itself a kind of theorizing and (b) only takes you so far. Peter Singer and Peter Unger have argued very convincingly that if you share intuitions like this one about Valjean, you are also committed to giving away most of your income to UNICEF or Oxfam. Even if Valjean's rectitude is obvious, the Singer/Unger position is not at all obvious, and if it's the case (as I think it is) that you have to support both or neither, then ethical theorizing is necessary for us to think clearly and consistently about these sorts of problems.

I suppose my point is that the best kind of ethical theory draws on precisely those resources that Smyth presents as alternatives to ethical theorizing. It doesn't deny that there's such a thing as being a good uncle or being a good citizen but instead tries to piece together when being a good uncle is more important than being a good citizen, or what other beliefs we are committed to by virtue of our beliefs about what being a good uncle entails, or how the kinds of "good" we're talking about in these cases are connected to one another.

I also think it's worth distinguishing the kind of "realism" about ethical theory's limitations that Bernard Williams argued for (and Smyth is echoing here) from the skepticism of people like me about the role of this kind of academic theorizing in real-world political debates. I doubt that recapitulating arguments from Living High and Letting Die is going to convince anyone in the House to vote for more foreign aid or more open borders, and it might in fact alienate potential allies. But I also think that's a shame: the world would be a better place if more politicians and people in general acted on the basis of well thought-out ethical beliefs, while also accepting that people with different beliefs (like, say, Rawlsians and utilitarians) can and should work together when those beliefs intersect (such as on increasing government aid to poor people).

The philosopher Theodor W. Adorno, who maintained that nothing radical could come of common sense, wrote sentences that made his readers pause and reflect on the power of language to shape the world. A sentence of his such as "Man is the ideology of dehumanization" is hardly transparent in its meaning. His point was that the way the word "man" was used by some of his contemporaries was dehumanizing.

Taken out of context, the sentence may seem vainly paradoxical. But it becomes clear when we recognize that in Adorno's time the word "man" was used by humanists to regard the individual in isolation from his or her social context. For Adorno, to be deprived of one's social context was precisely to suffer dehumanization. Thus, "man" is the ideology of dehumanization.

So why do we have to write sentences like "man is the ideology of dehumanization" when ones like "Early 20th century humanists used the word 'man' when considering individuals in isolation from their social contexts, a usage which ignores that precisely what makes us human are our social contexts" (a) by Butler's own account, can express the same ideas and (b) express those ideas in a clearer manner that's intelligible to more people?

May 16, 2011

As an old-school hedonic utilitarian who believes quite strongly that you need a single criterion to settle public policy disputes, or any disputes with ethical content, there's a lot I disagree with in Will Wilkinson's latest takedown of Richard Layard (see here for an earlier go-around). But I'm not going to convince anyone of my whole moral framework in a blog post, and smarter people than me are doing that anyway, so I'll just focus on what strikes me as the cheapest of the shots in Wilkinson's post:

Happiness isn't the only thing we care about and it's not the only consideration worth according weight to in our deliberations. To make it the one and only consideration that counts—to use happiness the way Mr Layard wants to use it—would require the abolition of democracy. But the happiness data clearly show that the happiest places on earth are democracies. Thus it would seem that Mr Layard is bound by reason to abandon either his dreams of "rational public policy" determined by "a single criterion" or his allegiance to happiness as the single criterion.

Well, no. Even if one accepts that democratic governance is a good idea, someone still has to be right and someone has to be wrong. It could be the case – and I think it is the case – that attempting to institute a dictatorship of utilitarian technocrats who issue laws based on the hedonic calculus will actually produce less happiness than our current system of government does. But it could simultaneously be the case – and again, I think it is – that the best policies that our democratic system can adopt are those supported by a utilitarian calculus. It then follows that utilitarians acting within the context of a democratic system should push for policies that utilitarian reasoning supports and oppose those it doesn't.

This, I think, is what Layard is saying. Declaring that non-utilitarian thinking doesn't make for "rational public policy" is an inflammatory piece of rhetoric, but it's basically a more testy way of saying that utilitarianism is going to produce the right answers and non-utilitarian theories are going to produce the wrong answers. And obviously, most people participating in a public policy debate think they have the right answers. That's how it should work: people throw out their arguments, elected representatives weigh those arguments and vote, and participants in the debate agree that it's better for that result to be binding than not. Similarly, I think moral absolutists who argue for an inviolable rule against torture are participating in a healthy debate about that issue, and aren't just crypto-autocrats who want to impose their civil libertarian views by any means necessary. It seems like only utilitarians get accused of latent dictatorial tendencies when they argue for what they believe in.

P.S. This is a side note, but rereading Wilkinson's last fight with Layard brought me here, where Wilkinson declares "Benthamese" to be "the vulgar dialect of the morally insensate (economists, Asperger’s cases, etc.)", which gave this Asperger's-diagnosed Benthamite a good chuckle. Pretty sure I'm not morally insensate, but I'm not one to take offense at such things.

April 27, 2011

The comment thread to Alyssa's very good post on the politics of Parks and Recreation reminded me that the belief that The Office should have ended after season three is (a) extremely prevalent and (b) wrong. For one thing, I tend to think that complaints that shows have gone on too long are, as a rule, incorrect. Unless the show's existence is precluding that of a better show, who cares if it keeps producing bad episodes? It doesn't negate the excellence of the earlier ones. I stopped watching The Simpsons about a decade ago, but a lot of people like the new episodes, and its survival doesn't stop me from enjoying seasons one through ten.

But more importantly, The Office has produced some of its best material in seasons four through seven. In fact, I'd say that season five is the strongest season so far. Let's tick through some reasons.

It got darker: My favorite episodes of The Office tend to be the bleakest and most despairing ones, and there's a lot more of that in season four and onward. Once the focus got off Jim and Pam, the show had a lot of fun by making its other characters miserable. Michael's dysfunctional relationship with Jan got way worse in season four, particularly in the one-two punch of "The Deposition" – where Michael is brutally interrogated about his personal life by Jan and Dunder Mifflin's lawyers – and my all-time favorite episode of the show, "Dinner Party", which reaches Who's Afraid of Virginia Woolf? levels of domestic turmoil. "Night Out" showed Ryan as having descended into drug addiction, and by the end of season four he was in handcuffs.

Season five upped the ante. The main intra-office romantic intrigue involved Dwight repeatedly cuckolding Andy. Dwight is hardly meant to be a good guy, per se, but it's still sort of amazing that a network sitcom had one of its main characters do something so unconscionable for so long. Michael found his soulmate in Holly, and then the company ripped her away from him just as they were falling in love. "Business Trip" is just brutal in showing how much hurt and rage built up in Michael over Dunder Mifflin's decision to transfer her. Even Jim and Pam's plotline was pretty depressing, which brings me to…

It let Pam and Jim stumble: This started subtly in season four, with subplots focusing on the ways the dream couple could screw people over. See Jim pissing everyone off by suggesting the office combine birthday celebrations in "Survivor Man", or the two of them locking the entire office in for the night in "Night Out". And then, in season five, it really let them fail. From the show's beginning, Pam's dream was to go to art school and become an artist, and in season five she went to the Pratt Institute to do just that. And she failed. She failed literally by not passing her classes, and more broadly she gave up on art as a career. Then she joined Michael's paper company. Which failed. Then she became a saleswoman at Dunder Mifflin. And sucked at it. Only in her very recent position as office manager is she having professional success.

Meanwhile, Jim, who once said he'd have to throw himself in front of a train if Dunder Mifflin became his career, buys a house in Scranton in season five and becomes co-manager in season six. As I wrote at Alyssa's last summer, this is a pretty astonishingly bleak trajectory. Jim and Pam, after all, originally bonded over hating Dunder Mifflin and wanting to get out and support each other as they pursued their dreams. The show let them pursue their dreams, and then made them fail and accept that they were stuck where they were. This, to me, is a much more unusual and compelling thread than the "will they won't they" stuff in season one through three.

It made Michael human: Alan Sepinwall has a good slideshow highlighting episodes where the show has humanized Michael, and it's no accident that the vast majority come after season three. While the show had made it clear before that Michael is a good salesman, seasons four and onward are where they make you empathize with him. You don't delight in his suffering when Jan controls him throughout "Dinner Party"; it's horrible. His relationship with Holly was genuinely endearing and it hurts when the company rips them apart. You want the two to be happy together.

But the best sub-season arc the show's ever done, the Michael Scott Paper Company, is where the show really lets him come into his own. It shows Michael standing up to the corporate overlords who had abused him and beating them. He learned he couldn't beat them in the marketplace, and then proceeded to beat them in negotiations. In "Broke", you keep expecting him to trip up, but he doesn't, and ends up getting him, Pam, and Ryan good jobs at Dunder Mifflin – and in Pam and Ryan's cases, promotions from their last positions there. Most impressively, you don't resent Michael's win. I, at least, was really thrilled to see him beat Stringer Bell Charles Miner. Michael went from a cartoon to a character you can root for after season four, and I think the show would have really missed something without that element.

It's still funny: I think people forget this because there's so much other good comedy on the air right now. And sure, it doesn't make me laugh out loud as much as Archer or Parks and Recreation do. But when it hits, it really, really hits. I recently rewatched "The Lover" and that's just hysterical straight through. I could rewatch the scene where Michael tells Jim he's sleeping with Pam's mom fifty times, and it'd still be funny. Same goes for "Scott's Tots", which features some of the best cringe humor the show's ever done. Jim and Pam's drunken escapades in "PDA", Dwight's fake fire in "Stress Relief" (complete with Angela throwing cats into the ceiling), Stanley's dream of living in a lighthouse-cum-spaceship: these rank as high in my mind as any of the comic setpieces in the first three seasons.

April 26, 2011

Via Ned, H. Allen Orr's review of Sam Harris' The Moral Landscape says most of what needs to be said. I haven't read the book, because I am a mortal creature and I have better ways to spend my time on Earth, but Orr's review does remind me of something incoherent in Harris' reasoning that's been nagging me since the book came out. Here's Orr quoting Harris:

The very idea of “objective” knowledge (i.e., knowledge acquired through honest observation and reasoning) has values built into it, as every effort we make to discuss facts depends upon principles that we must first value (e.g., logical consistency, reliance on evidence, parsimony, etc.).

While there are a number of different philosophies of science and epistemologies that can accommodate the scientific method, Harris is certainly correct that you have to accept one of them for the whole thing to work. Harris' choice appears to be scientific realism, which, in short, is the view that science describes a world that is really "out there", and that a scientific observation is true when it corresponds to this real world.

Which is funny to me, because Harris is a utilitarian. At least that's what I and Orr make of his conclusion that the good is the "well-being of conscious creatures". A quick scan of the book shows that Harris explicitly identifies as a consequentialist (see page 62; sadly there's no Google Books preview I can link to). Consequentialism + a hedonic conception of the good = good old fashioned utilitarianism.

Utilitarianism, unlike some other ethical theories, has philosophical implications outside of ethics. In particular, I think it commits you to some form of pragmatism. If the answer to "what should I do?" is "whatever action maximizes the general happiness" then the answer to "what should I believe?" is "whatever belief is conducive to maximizing the general happiness". That starts to look a lot like pragmatists' argument that what is true is what is most useful to believe.

As Richard Rorty pointed out in my favorite essay of his – "Religious faith, intellectual responsibility, and romance" – this affinity between utilitarianism and pragmatism is historical as well as logical. William James was a great admirer of John Stuart Mill, and The Will to Believe can be read, in Rorty's words, as an attempt at a "utilitarian ethics of belief". And James, of course, believed strongly that religious faith could be a force for good.

This makes sense. Under pragmatism, the statement "God exists" is true if it is useful to believe that God exists. This may seem a flimsy reason to believe, and Rorty lays out a number of ways in which it limits what one can believe, but it is no flimsier a foundation than pragmatism offers for believing in science, and most would consider that foundation pretty strong. No one would mistake Quine and Sellars and Dewey for arch opponents of science, after all.

So Harris has a problem. He can be a scientific realist, which rules out both pragmatism (which rejects the idea that there needs to be a real world "out there" which true statements reflect) and utilitarianism (because it implies pragmatism). Or he can be a utilitarian, and a pragmatist, and acknowledge that religion is often a source for good in the world and a source of joy for many privately. But you can't be a utilitarian and a scientific realist, and you certainly can't try to get to utilitarianism through scientific realism, which is what he's trying to do now.

March 22, 2011

When it comes to humanitarian interventions like the one taking place in Libya right now, I tend to favor an "airdropping cash" heuristic. Military intervention is expensive. Military intervention using the US armed services, the most costly security force in the history of the world, is really, really expensive. Chances are that, in most cases, calculating the amount of money a military intervention would cost and then airdropping that amount in cash over the relevant country (or, better still, a poorer country) would be better for human welfare than undertaking the intervention. It's possible to imagine cases where this isn't true; Bosnia and Kosovo were borderline cases, and an intervention in Rwanda very well could have passed this test. But Libya, which isn't experiencing mass slaughter or anything close to it, pretty clearly doesn't count.

Ezra made a very good argument along these lines, and Jonathan Chait offered up a not wholly unconvincing counterargument:

Why intervene in Libya and not elsewhere is a question that needs to be asked. But it's not a question that needs to be asked to determine the wisdom of intervening in Libya. Should we also spend more money to prevent malaria? Yes, we should. But I see zero reason to believe that not intervening in Libya would lead to an increase in American assistance to prevent malaria.

This is true so far as it goes. Political necessity really does constrain how we think about opportunity costs. It would, of course, be better for the world if the money currently allocated to implementing the Affordable Care Act were reallocated to buying anti-malarial bednets, clean water straws, and so forth for people in the developing world. But all other choices aside, a world with the Affordable Care Act is better than one without it, and so I supported it, fervently.

But the difference between that situation and the one in Libya is instructive. There were 535 Congressmen and Senators whose preferences were determinative of the outcome of the health care reform fight. It is not unduly hubristic on the part of policy writers to think that their writing could help sway at least one of them. Given that possibility, it feels irresponsible to look at the bill and say, "this is great, but more foreign aid would be better," when saying, "this bill is the biggest expansion of the American safety net since the Great Society" has the potential to affect things.

On foreign policy, however, there's only one actor whose preferences are determinative of outcomes, and that's Barack Obama. Blog posts aren't going to persuade him one way or the other, especially after the bombs are already falling. In this case, it makes less sense to write in order to influence real-time political debates than to write in order to change the way people in and out of government think about policy issues. The fact that people working in public policy don't think to compare the benefits of the Libya intervention to those of buying bednets for Africans is exactly the sort of thing political writers can help correct, by making that comparison themselves.

This is why Leon Wieseltier's defense of the intervention is so infuriating. He mocks Ezra's point on the relative benefits of spending money fighting Libya versus spending money fighting malaria, asking, "Did our inaction in Rwanda reduce the frequency of malaria in Africa?" The point seems to be that malaria eradication may be a better goal, but it's not politically tenable, and in light of that, intervening in Libya is a good second-best option in humanitarian terms.

But one reason that humanitarian intervention is so much more politically tenable than anti-malaria spending is that Leon Wieseltier, most everyone else at The New Republic, and a whole lot of other liberal hawks in DC have made it their mission for the past 20+ years to make it politically tenable. If he and his comrades thought anti-malaria spending was a better idea, then they should have spent time arguing that instead. But they didn't. And turning around when called on it and saying, "Well yes, this is a second best option" is really bizarre.

February 18, 2011

I'm horribly late on this, but I've been thinking about Freddie's post last month critiquing the "globalize-grow-give" model of social democracy, and there are a couple of problems with it which I thought were worth delving into.

The first has to do with the gap between the argument he makes about unions and the one he makes on trade. Freddie's basic gripe with the liberal blogosphere seems to be a lack of respect and support for the American union movement. A lot of this, I think, has to do with the union movement's rejection of free trade, which some liberal bloggers (myself very much included) see as a major moral failing. When hundreds of millions of people have escaped poverty over the past few decades due to Western countries importing their goods, it seems perverse to support a movement that pushes restrictions on those imports.

So Freddie disputes the underlying logic that free trade is leading to growth (and an ensuing decline in poverty) in developing countries:

There are often serious questions about the role of globalization in economic growth, although that free trade spurs growth is axiomatic in most political circles. In particular, some make the case that many of the strongest economic actors in the world, notably the Asian miracle economies of Japan and Korea but also most certainly the United States and Britain, grew through the protection of infant industries until those industries were capable of competing on the international stage, and only then was trade liberalized. This means that the dominant economies of the world are essentially asking third world economies to undertake trade policies that they themselves didn't when it was to their advantage not to.

But think about what an international trade regime under an infant industry protection framework would look like. If we're serious about giving developing countries' industries a leg up, then developed countries should unilaterally get rid of all trade restrictions with developing countries, and developing countries should either subsidize their industries, restrict imports, or both. Whatever can be said about this approach on the merits, it's the exact opposite of what unions in America want to have happen.

For the sake of illustration, imagine that Tanzania wants to get serious about exporting textiles. An infant industry approach would imply Tanzania should subsidize textile companies and block American textile imports, while the US should eliminate restrictions on importing Tanzanian textiles. This isn't what Workers United wants at all! They want a trade regime that benefits US textile workers, which would presumably entail restricting imports from Tanzania and/or fighting Tanzanian subsidies and restrictions at the WTO.

So whether or not infant industry protection is better than straight-up liberalized trade for poor countries is irrelevant to whether the union position on trade is defensible. Unions representing workers in export-intensive industries in the US of necessity have to fight to screw over workers in poor countries that are attempting export-based growth. They're unions; they have to look out for their members. That's their prerogative, of course, but it's also a perfectly good reason to not be so enthusiastic about the US union movement.

Of course, this could all change if the union movement accepts, as SEIU sometimes seems to, that the future of jobs in the US is going to be more in the service sector than manufacturing, and as a consequence stops fighting liberalized trade. I'd be thrilled if that happened. But I'm pretty sure Freddie wouldn't be.

My second point is about this:

I have no doubt that the well-meaning and enthusiastic bloggers that support the globalize-grow-give model want only the best for workers, but wanting what's best for others and allowing them to provide for their own needs are two separate things. The g/g/g model is inherently paternalistic…I do want to advance egalitarian ends; egalitarianism starts with equality of power.

To borrow a phrase from Amartya Sen's critique of Rawls' focus on "primary social goods", this argument strikes me as fetishistic. Unless you're Marlo Stanfield, pursuing power as an end in itself makes no sense. Power is only good if you use it for something. It's not a political victory for the AFL-CIO's members if its membership and budget were to quadruple and it then didn't spend anything on lobbying or political campaigns. It'd have more power, as it could (probably) change more legislation and win more elections if it wanted to, but what's the point if it doesn't use it? The equation of "worker power" with the power of unions is also problematic, given that individual members don't always play a huge role in union decision-making, but let's leave that aside for the time being.

Now, an instrumental case could be made that pursuing progressive domestic policies depends on the existence of a healthy union movement. Hacker and Pierson have a good deal of evidence for this, and I'm generally convinced. It's not an accident that Western Europe and New Deal/Great Society-era America have/had far greater union density than the US does now. But this hinges on whether a growing union movement today would lead to a much stricter trade regime, such that growth in developing countries would slow down significantly. If the harm to the world's poor abroad from such policies is large enough to overwhelm the benefits to America's poor and middle class of a Scandinavia-level domestic welfare state, then it's a bad deal overall.

As an empirical matter, I don't think that would be the result of a healthier union movement. Sure, unions hate bilateral trade deals, but a lot of smart trade economists hate bilateral trade deals too, and the long-run economic growth of developing countries isn't going to be threatened if the US negotiates and passes fewer of them. What would worry me more is if union-backed Democrats obstructed WTO rounds or even tried to pull out of the body, or if they got anti-Chinese tariffs passed, but really serious action on either of those strikes me as a stretch. Presidents want to make nice with other countries, generally, and so I don't think even a union-backed Democratic president who ran on an anti-trade platform would be too keen on obstructing the WTO or antagonizing the world's next superpower.

So I'll take the bargain. I think the wrong turn on trade that would be generated by a stronger union movement would be mild and inconsequential enough that getting universal pre-K and child care, paid parental leave, an expanded earned income tax credit, much-too-late (by the time any of this would realistically happen) action on climate change, and so forth would be well worth it. But it really depends on the empirics. If a stronger union movement means 50% tariffs on all Chinese goods and confining much of the Chinese countryside to decades of unnecessary poverty, that's bad no matter how many US workers are empowered in the process. So Freddie's embrace of worker power for the sake of worker power, consequences be damned, strikes me as hard to defend.