Headnote: I was scheduled to present at the American Comparative Literature Association meeting in Chicago on March 20th. Obviously, the meeting got cancelled. The session was on “Aesthetic Education” and the panel members were all asked to read Joseph North’s recent book Literary Criticism: A Concise Political History (Harvard UP, 2017) and an essay by Michael Clune entitled “Judgment and Equality” (Critical Inquiry, 2018). After reading the Clune essay, I was moved to write the response posted below. I think it is fairly self-explanatory, even if you haven’t read the Clune essay. After writing this response, I discovered that Clune had offered a shorter version of his plea for the authority of experts (and polemic against equality in matters of judgment) in a Chronicle of Higher Education piece that generated a fair amount of hostile response. (You can easily find these pieces online by googling Clune’s name.) In particular, the hostility came from the fact that the conservative New York Times pundit Ross Douthat wrote favorably about Clune’s position on the op-ed page of the Times. Doubtless, Clune was chagrined to see his argument, which he thought was radically leftist, embraced by a right-wing writer. But I don’t know that he should have been particularly surprised; to question–or to think about limiting–the claims of democratic equality is always going to play to the right’s fundamental commitment to reining in equality and democracy wherever they rear their dangerous heads. In any case, it is to the anti-democratic implications of Clune’s argument that my piece responds. I will post some thoughts on North’s book in the next few days.

In November 2008, a week after the election of Barack Obama to the presidency, I was in a New York City room full of bankers and hedge fund managers leading a discussion on the implications of that election. The financiers were horrified; they earnestly told the gathering that Obama and a Democratic Congress, led by Nancy Pelosi, were know-nothings who, through their ignorant meddling, were about to ruin American economic prosperity. These men—and of course they were all men—were completely unshaken in their conviction of their competence even following the financial collapse of the previous month. A portrait of expertise in action, offering a strong case for why the rule of experts must be tempered by the oversight of the demos. Every profession is a conspiracy against the laity, George Bernard Shaw famously warned us.

Democracy means many things, but one of its many entailments is that elites must subject themselves to the judgment of the masses. As experts we can deplore the ignorance of the non-initiated, but in a democracy authority is not to be had as a gift but must be earned. Democracy is a supremely rhetorical political form. Anyone, including the expert, who has a position she wants the polity to act upon must convince a majority of her fellow citizens to endorse that policy. Persuasion is the name of the game; and saying it again, just louder this time and standing on my credentials as an expert, is not a very effective rhetorical move. There is a deep anti-authoritarian bias in the demos—and we should celebrate that fact. Democracy, as Winston Churchill said, has some very obvious flaws, but it sure beats all the alternatives.

The right has eaten the left’s lunch for some forty years now. We people of the left can scream that it hasn’t been a fair fight, but that still doesn’t provide any justification for retreating from the democratic arena into a petulant insistence on our being correct and the misled masses being wrong. The technocracy of the EU may be somewhat preferable to the plutocracy of the US, but the “democratic deficit” is real in both cases. Maybe democracy is always a battle between elites for endorsement from the general populace. If that is the case, and if violence is not considered a viable or desirable alternative, then the rhetorical battle for the hearts and minds of the people is where all the action is. It makes no sense in such a battle to begin by maligning the judgment of those people. Depending on the capacity of the people to judge for themselves is the foundational moment of faith in a democratic society. Yes, as Clune reminds us, Karl Marx refuses to make that leap of faith. Do we really want to follow Marx down that anti-democratic path?

Marx, after all, also warns us that every ruling elite indulges itself with the sweet conviction that it acts in the interests of all. We, those businessmen I spent the evening with told themselves, are the “universal class” because we bring the blessings of economic plenty to all. In their utter belief in their own goodness, I saw a mirror image of myself and my leftist friends. If we would not for a moment allow bankers to escape accountability to the people they claim to serve, why would we think we deserve an exemption? Listen to your academic colleagues rant about the vocabulary of assessment and outcomes when applied to what happens in the classroom—and you will hear an echo of what I listened to that night in New York. Who dares to question the effectiveness of what transpires on our college campuses?

Kenneth Burke picked up the term “occupational psychosis” from John Dewey. He used it to highlight the blindness that accompanies immersion in a discipline. I think Clune is right to present judgment as emerging from the practices and institutions of a discipline. (“[T]o show someone the grounds of a given judgment is to educate them in the field’s characteristic practices,” he writes [918].) The oddity of his position, it seems to me, is that he takes this Kuhnian point as a reason to enhance our faith in the judgments of those encased in a paradigm. That strikes me as a very odd reading of Kuhn, taking his book as a celebration of “normal science” instead of a meditation on the difficulty of intellectual revolution because of the blinders normal science imposes. It is only a bit exaggerated, in my view, to see Kuhn as telling us that textbooks devour their readers and turn them into mindless conformists. Yes, Clune nods to the fact that communities of practitioners “can and do manifest bias and thus serve as sites of oppression” (918), but he seems to think acknowledgment of that fact is enough to render it harmless, appealing to an unspecified “broad range of measures” (919) that can compensate for the potential oppressions. But I read Kuhn as suggesting that it is precisely the young, the uninitiated, the outsiders (in other words, those who are least embedded in the community of practice, or even non-members of it), who are most likely to disturb its complacency, its confidence in its judgments and its blindness to its biases and oppressions. Let’s remember Foucault’s lessons about the power of disciplines. All concentrations of power are to be distrusted, which is another reason (besides a discipline’s in-built blind spots) to advocate for the subjection of expert judgments to external review—and not simply external review by other members of the community in question.
I am a firm believer in the 80/20 rule: spend 80% of your time mastering your discipline; spend the other 20% in wide-ranging reading and activities that are completely unrelated to that discipline. And then use that 20% to break open your discipline’s inbreeding.

I am fully sympathetic with Clune’s desire to find in aesthetics an alternative to the norms and values of commercial society. And that position does seem to entail a commitment to aesthetic education as the site where that alternative can be experienced and embraced. I also believe that the democratic commitment to the people’s right to judge the prescriptions and advice of the experts does make the need for an educated citizenry a priority for our schools and universities. The liberal arts curriculum should be aimed at making citizens more competent judges. It is a strong indication of the right wing’s rhetorical triumph with a section of the populace that, in a recent poll, a majority of Republicans agreed that universities do more harm than good. I don’t need to tell this audience that the liberal arts and the arts are under a sustained rhetorical attack.

What drives people like you and me crazy is that the attitudes adopted by the right are impervious to facts. Climate change denial has become the poster child for this despair over the ability of the demos to judge correctly or wisely. It is worth mentioning that the denigration of the liberal arts is equally fallacious, at least if the reasons to avoid humanities or arts classes are economic. All the evidence shows that humanities and arts majors, over a lifetime, do just as well economically as science and engineering and business majors. The sustained attack on the arts and humanities has more to do with a distaste for the values and capacities (for critical thinking, for sophisticated communication) they promote.

So what are we, the defenders of the aesthetic and the humanities (along with the world-view those disciplines entail), to do? Saying our piece, only louder this time, and with a statement of our credentials as experts, won’t do. Declaring our inequality, my superiority to you, should be a non-starter at a moment in history when increasing inequality is among our major problems. I, frankly, am surprised that Clune is even tempted to take that route. It comes across as pretty obvious petulance to me. Why isn’t anyone paying any attention to me? I know what’s what and they don’t. Listen up, people.

In short, I stand with those who realize that judgment needs to be reconceived in ways that render it compatible with equality. Clune is undoubtedly right that some writers have failed to face squarely the fact that judgment and equality are not easily reconcilable. The problem, to put it into a nutshell, is that judgment seems to entail right and wrong, correct and incorrect, true and false. To make all judgments equivalent is akin to (although it is not actually the same as) total relativism, the idea that every judgment is “right” within a specified context. Contrasted to that kind of relativism, the acceptance of the equivalence of all judgments can look even more fatuous, marked with a shrug and a “whatever.” No point arguing since there is no accounting for tastes, and no one gets to dictate your tastes to you even if they are weird, incomprehensible, obnoxious, disgusting. One man’s meat is another man’s poison.

Faced with such epistemological throwing in of the towel, it is not a surprise that folks keep coming back to Kant. Clune details how both Sianne Ngai and Richard Moran have recently tried to come to terms with Kant’s attempt to demonstrate that aesthetic judgments make a “demand” on others, thus raising our aesthetic preferences above a mere statement of personal taste and towards an intersubjective objectivity. Ngai, Moran, and Clune all use the term “demand” and the three translations of Kant’s Critique of Judgment I have consulted also use that term. But I will confess to preferring Hannah Arendt’s translation of Kant, even though I have never been able to find in Kant where she finds the phrase that she puts in quotation marks. For Arendt, those making an aesthetic judgment “woo the consent” of the other. Arendt, in other words, places us firmly back into the rhetorical space that I am arguing is central to democracy. Surprisingly, Clune never recognizes the affinity between his “community of practitioners” and Kant’s sensus communis. What Arendt calls our attention to—especially when she tells us that Kant’s Critique of Judgment is the “politics” critics claim he never got around to writing—is the fact that the sensus communis always needs to be created and its ongoing reconfiguration is the very stuff of politics. Yes, judgments are deeply indebted to and influenced by the community from which they are articulated, but that community and its practices are a moving target. Think of Otto Neurath’s image of language as a sea-going vessel that undergoes a slow, but complete, rebuild even as it never leaves the water for dry-dock. The democratic community—and its judgments on the practices of its various sub-cultures and its elites and its experts—is continually being refashioned through the public discourses that aim to sway the public in one direction or another.

How does this understanding of the scene of politics help? Clune, I think, provides a clue when he writes “For me to be convinced by the critic’s aesthetic judgment that James is interesting means not that I have evaluated the reasons for that judgment but that I’ve decided to undertake an education that promises to endow me with his or her cultural capacities” (926). What gets under-thought here is what would actually motivate such a decision. We need to invoke Aristotle in conjunction with Raymond Williams at this point. The expert—be she a climate scientist, a heterodox economist, or a Proust scholar—wants, at a minimum, to inspire trust, and, at a maximum, the auditor’s desire to join her community of practitioners, to make its common sense his own. It is not “reasons,” as Clune says, that are decisive here, but ethos. I would be willing to bet that almost everyone in this room could point toward a teacher who inspired them—and inspired them exactly as the kind of person they themselves wanted to become. What an aesthetic education offers is initiation into a particular “structure of feeling.” It is the attractiveness of that sensibility that our political and public rhetorics need to convey. Once again, Kant and Arendt help us here when they point to the crucial importance of the “example” to these attempts to “woo the other.” Modelling what a life lived within that structure of feeling looks like is far more potent than pronouncing from on high that Moby Dick is superior to Star Wars.

Look at this concretely. The rhetorical genius of the Republican party since Ronald Reagan has been to portray the professional, educated, upper-middle class left (who occupy the “helping professions” of doctor, lawyer, teacher, social worker) as joyless scolds, continually nagging you about how all the things you do are harmful to the environment, to social harmony, to your own well-being. They have made it a political statement to drive a gas-guzzling truck while smoking a cigarette in defiance of those pious kill-joys. That’s the rhetorical battle that the left has been losing since 1980. Yes, the populace scorns our expert judgments, but that’s because they have no desire at all to be part of the communities in which those judgments are common sense. Our problem, you might say, is not how to educate—aesthetically or otherwise—those who make the decision to undertake an education, but is how to make the prospect of an education appealing to those who see it as only a constant repudiation of their own sensibilities and capacities. In short, “structures of feeling” triumph over “interests” much of the time and the left has proved spectacularly inept at modelling positive examples of the sensibility we wish to see prevail in our society.

I shouldn’t be so overwhelmingly negative about the left. The sea-change in attitudes (and public policy) toward LGBTQ citizens over the past thirty years cannot be overstated. Of course, given that attitudes are, as I have argued, a moving target, changes in any one direction are never set in stone. Constant maintenance, rearticulation, and adjustments on the fly are necessary. The task of education, of initiation into a sensibility that has come to seem “common sense,” as both attractive and right, is always there in front of us. I am simply arguing that the right wing has been more attuned to that educative task than the left. Or as I am prone to say, the left goes out and marches in the street on the weekend before returning to work on Monday while the right gets itself elected to school boards.

As a teacher, I find Ngai’s focus on “the interesting” crucial and poignant. When we call something “interesting,” we are saying it is something worthy of attention, something worthy of pausing over and considering at more length. And that plea for attention is certainly at the very center of my practice as a teacher. When I declare in front of a class that this or that is “interesting,” I am inviting students into a sensibility that wants to ponder the significance of the thing in question. But I am also pleading with them to take that first step—knowing that for many of them I am just another professor who incomprehensibly gets excited about things to which they are supremely and irredeemably indifferent. You can’t win them all, of course. But the effort to win some of them over is endless, never fully successful, and in competition with lots of other demands on their attention.

There is, I am arguing, no other course of action open in a democratic society. We are, if you will, condemned to that rhetorical battle, attempting to woo our students, to woo the demos, to a particular sensibility, a particular vision of the good. That, I will state it nakedly, is politics. To dream of a world where expert opinion is accepted by the non-experts is to dream of salvation from politics, from its endless wrangling, its messy compromises, its inevitable mix of failures with successes. It is to desire a technocratic utopia, in which the “administration of things” replaces the conflicts of political contestation. No thank you.

Another way to say this is that politics is the inevitable result of living in a pluralistic universe. There will never be full consensus, there will never be a single vision of the good to which all subscribe, there will never be an all-encompassing and all-inclusive sensus communis. On the whole, I’d say that’s a good thing. I would hate to live in a world where everyone disagreed with me about everything. But I am convinced that a world in which everyone agreed with me about everything would be almost as bad.

But, but, but . . . climate change. Please recognize that climate change is just one in a long string of existential threats that democracy—slow, contentious, ruled by greed and passion—is deemed ill equipped to handle. Authoritarians of whatever political stripe are always going to identify a crisis that means democracy must be put on hold. The terrible attraction of war is that it negates the messy quotidian reality of pluralism. The dream is of a community united, yoked to a single overwhelming purpose, with politics suspended for the duration. Thus, that great champion of pluralism, William James, could also dream of a “moral equivalent of war.” Perhaps democracy truly is unequal to the challenge of climate change, but then the desire/need to jettison democracy should be stated openly. Otherwise, it is back to the frustrations of political wrangling, to the hard process of winning over the demos.

So, yes, I am in favor of an aesthetic education that aims to introduce students to a sensibility that finds commercial culture distasteful and (perhaps more importantly but perhaps not) unjust. And I want them to see that indifference to climate change is of a piece with the general casualness of our prevailing economic order to the sufferings of others. But I cannot endorse Clune’s picture of that educational process. “[T]he significant investment of time and energy that this education requires—both at its outset and for a long time afterwards—is channeled in submission to the expert’s judgment that these works make particularly rewarding objects of attention. The syllabi of an English department’s curriculum, for example, codify this submission” (926). I have been fighting against my English department’s curriculum for twenty-five years. The texts I want to teach in my classes are the ones I find good to think with—and I invite my students to join me in that thinking process. (More Arendt here: her notion that judgment involves “going visiting” and you can know a thinker’s ethos by considering the company she wants to visit—and to keep.) What I model is one person’s encounter with other minds—the minds represented by the books we read and by the people who are in the classroom with me. My colleagues should have similar freedom to construct their courses around the texts that speak to them—and in which they then try to interest their students.

Fuck submission. Maybe it’s because I teach in the South. But my students have been fed submission with mother’s milk. What they need to learn is to trust their own responses to things, to find what interests them, to find what moves them emotionally and intellectually. They need to learn the arrogance of democratic citizenship, which arrogates to itself the right to judge the pronouncements of the experts. Certainly, I push them to articulate their judgments, to undertake themselves to woo others to their view. They must accept that they too are joined in the rhetorical battle, and if they want allies they will have to learn how to be persuasive. But that’s very, very different from suggesting that anyone should ever take the passive position of submission.

Clune is scornful of Richard Moran’s “liberal” endorsement of freedom of choice. So I want to end with a question for all of you as teachers. Can I safely assume that you would deem it inappropriate, in fact unethical, to tell your students whether or not to believe in god, or what career path to follow, or for whom they should vote? If you do think, in your position as a teacher, that you have the right to tell your students what to do in such cases, I would like to hear your justification for such interference. Obviously, what I am suggesting here is that our sensus communis does endorse a kind of baseline autonomy in matters of singular importance to individuals. I certainly wouldn’t want to live in a society where my freedom to choose for myself about such matters were not respected. If some of you in the room feel differently, I am very interested in hearing an articulation and defense of such feelings.

Now we could say that our expertise as teachers does not extend to questions of career, religious faith, or politics. But where we are experts, there we are entitled to tell a student he is wrong. James really is interesting; Moby Dick really is better than Star Wars. But surely such bald assertions are worthless. How could they possibly gain the end we have in view? Via the path of submission? I can’t believe it. Yes, we stand up there in our classrooms and use every trick we can muster to woo our students, to get them interested, and even to endorse our judgments after careful consideration; one of our tasks is to teach (and model) what careful consideration looks like. And I certainly hope you are especially delighted when some student kicks against the pricks and makes an ardent case that Star Wars is every bit as good as Melville. Because that’s the sensibility I want aesthetic education to impart.

I have just returned to the US after four months in London. The British election was dispiriting, precisely because it seemed so dispirited. My on-the-ground sense (for what it is worth) is that the electorate was deeply tired and, thus, disengaged. There was little to no visible passion. The Brexit thing had exhausted everyone except the right wing and so the sense was “let’s fucking drive over this cliff; at least then it will be over. Better disaster than this endless wrangling.” I was not in the least surprised by Johnson’s victory–and it makes me think Trump will win in 2020 through a similar combination of cynicism, the opposition’s incompetence, an avalanche of lies, and the victory of a politics of fear and punishment (of the most vulnerable) over any kind of generous vision of society that cares for its members.

That said, I will take up blogging again now that I have returned.

I am having trouble disentangling the personal experience of decline that is old age from what I deem a more “objective” sense of decline in the world(s) I inhabit. For the record, I now, for the first time, feel old. Various capacities are slowly draining away. The decline is not precipitous, but it is relentless and certainly feels irreversible. There are no miracle cures or even roads to improvement out there. My responses to this fact range from impatience at my many new incompetencies to anger at my ineptitude to grief about my lost abilities. Old age is not pretty and how to suffer it gracefully so far eludes me.

But my grief and anger also focus on the current situation in my world(s). My mantra has become “I know I am old and cranky, but objectively things are worse.” Is that actually true? I can’t tell. I can only say that I look at the world and my guitar not so gently weeps. Was it really better in 1969 (when George Harrison wrote those words)? No. If you were gay, or a soldier in Vietnam, or living in many parts of the so-called third and second worlds, 2019 is likely better than 1969. The failure of American democracy, registered by the ability of the government to wage a senseless war in Vietnam for over ten years, was open to view then. The CIA’s shenanigans a few years later in Chile were evidence of a rogue state no less corrupt than Trump’s. Another danger of getting old: you end up saying I’ve seen all this before; there is nothing new under the sun.

So is something really different this time? I think so. What is different is the open cynicism, the complete unleashing of “I will take mine and death to all the others” without any shred of ideological cover. Hypocrisy is the tribute that vice pays to virtue—and that tribute has now become passé. It’s open season on the poor, the immigrant, the “losers.” No need to even pretend to feel compassion for their troubles, not to mention actually doing anything to alleviate them. Just pour it on: scorn, neglect, direct harm. And the aggression toward those least able to fend it off is met with howls of glee. I am constantly reminded of Yeats’s caustic poem of disillusionment, “Nineteen Hundred and Nineteen”: “we, who seven years ago/Talked of honour and of truth/Shriek with pleasure if we show/The weasel’s twist, the weasel’s tooth.” As they say, this is unfair to weasels, who are amateurs when we consider the violence humans can do—and the delight humans (and why mince words? it’s mostly men) can take in that violence.

More Yeats (has anyone ever traced the agonies and emotions that traverse aging better?): “My mind, because the minds that I have loved,/The sort of beauty that I have approved,/Prosper but little, has dried up of late,/Yet knows that to be choked with hate/May well be of all evil chances chief.” (From “A Prayer for My Daughter”)

There is such pleasure in hatred. The ritual conversations that I and my ilk have about Trump and his minions have come to annoy me now. But they were sustaining for quite some time. Now I just want to walk away. I want to occupy another province, not the lowlands of hate. But the alternative seems to be resignation since I, too, live in a world where the things I most approved, most loved, most held dear, prosper but little of late. I think of myself as living in a world where I am a stranger to the beliefs, emotions, and desires of most of my fellow humans. I will never understand them—but they also seem to hold all the cards. Let me state the fear directly: after the Boris Johnson victory in England (you can hardly call it Britain since Scotland and even Northern Ireland voted the other way), I think Trump will win reelection. I think his nihilism and cynicism play well with an astonishing number of white Americans. They revel in taking the view that everyone is out to get me so I am best off striking the first blow. Preemptive strikes: American orthodoxy since the Bush/Cheney years.

To be more parochial: the despair is not just about American society at large, but also about what is being done to higher education as a public institution and good. A combination of privatization and a relentless attack on critical thought and the production of knowledge. I guess we should be flattered that we are so hated and feared by the right-wing ideologues. But it is how ineffectual our responses are to these attacks that garners most of my attention. I feel on both the macro (society) and micro (university) level a helplessness as I watch the flood coming downriver, relentless and yet agonizingly slow. The disaster unfolds slowly (rather like global warming) and we do nothing to alter its course.

I will admit to the old age crankiness of, to some extent, blaming the victims. I find my colleagues’ attitudes and behavior in the current crisis ostrich-like. They keep acting like it is 1960. Hannah Arendt was on to a deep truth when she saw much of the behavior in Nazi Germany as motivated by career ambition, by the sheer need to have and hold a job, and to keep advancing up the ranks. Academics (the ones lucky enough to occupy one of the diminishing number of tenurable positions) are focused, as they have ever been, upon getting that next book published and on getting their partner a job at the same school. Those quests absorb all their energy—and much (most?) of their interest, aside from the ritual denunciations of the Trump administration and their own university’s administration. These soi-disant radicals scream loudly against even the mildest suggestions of reform/change in their received practices. That the university might have to change in order to remain pertinent in a changed world is heresy to them.

That said, however, my experience at UNC clearly demonstrates that there is no placating the enemies of the university—and all that it stands for. Reforming our teaching and research practices (much as I think such reforms are needed) will not call off these weasels. My despair, it is fair to say, stems from my belief that the relentlessness and aggression of our right-wing enemies echoes a widespread “structure of feeling” in white America—and, here is the corresponding source of despair, a conviction that (despite the laudable insistence of some of my left-wing friends otherwise) there is simply no equivalent structure of feeling underwriting the kind of politics I hold dear. I simply do not believe that Sanders or Warren could win a national election. I think the right has succeeded in planting a fear of “socialism” so deep in the electorate’s psyche that Warren and Sanders would suffer the same fate as Corbyn. The British miracle election of 1945 comes to seem more and more a “black swan” when we consider post-1945 politics in both the UK and the US. For once, the promise of socialism triumphed over Churchill’s fear-mongering about the coming police state. The only equivalent might be LBJ’s 1964 victory—when a fear of right-wing radicalism equivalent to the fear of socialism for once led to victory. Of course, in the aftermath of that election, the Republicans discovered white American resentment and have ridden that horse ever since with pretty good results. (Yes, the Republicans are a minority party, but they have combined the oddities of the American institutional structure [the electoral college; the make-up of the Senate] with an absolutely ruthless undermining of democracy to secure their hold on power.)

So I don’t see a pathway out of the full unleashing of right-wing nastiness in the US and the UK. I guess we can say that the taboos against violence so far are holding. We are seeing nothing like the street fights (and killings) that characterized 1920s and 1930s Germany in the lead up to Hitler. Yes, we have our right-wing militias, but politically motivated domestic terrorism has been confined, so far, to lone shooters. I do think (and certainly hope I am right) that more organized violence would prove counter-productive, would generate a strong negative reaction to those using such tactics.

But the right wing has not needed to resort to violence. Its aggressive shredding of institutional protections against the abuse of power has worked just fine. It has discovered that the electorate neither cares about nor pays much attention to power-grabbing maneuvers that are procedural. There is no accountability any longer—for corporations that engage in various illegal financial capers, for rich tax evaders, or for politicians who work to deprive citizens of votes or to deprive elected officials of the other party of their ability to function.

Among the things I hate is the wistfulness that accompanies my despair. Late nineteenth and early twentieth century literature (think Proust or Henry James, especially in the abominable The Princess Casamassima, or even Virginia Woolf) is replete with tales that witness (helplessly) to the ongoing disappearance of a class (call them aristocrats, though they are better described as the leisured classes who did not have to earn their bread by working) whose faults the writers can see, but whose virtues they also think are superior to those of the commercial classes. These writers know this leisured class is doomed—and they don’t even try very hard to defend its existence, even though they think the coming world is bound to be worse. (Yeats and Eliot, of course, attempt more full-throated defenses of aristocracy, which is why they are anti-democratic conservatives in a way Proust, Woolf, and even James are not.)

I don’t like standing in a similar place, wistfully defending a set of values and a group of people who have lost their social standing, have lost their ability to influence the direction their society takes. But the flood of words from people like me—who never lose our ability to pour out more verbiage—seems more pathetic by the day. We wallow in our own virtue in a world where the weasels reign and we have nothing else to offer.

I will, per usual, knock on doors next fall, and do whatever else the Democrats ask me to do. Inevitably, I will once again donate money, and even run (as I have the last two cycles) a fund-raiser or two. I hate (so many things to hate!) abetting the link between politics and money (corrupting in every possible way) in the US. I try to abide by my resolution to give my money to local charities that I respect instead of to local political candidates. But I do not stick to that resolution resolutely. And all of it—from the knocking on doors to the raising of money—feels like tokenism to me. I don’t believe it makes an iota of difference. The real levers are located elsewhere, far from any place I will ever enter. So why do I do it? To ease my conscience. And also because people I love, people whose commitment to the fight inspires me because it is so whole-hearted (even as I think it naïve), do believe such things matter and ask me to do my bit. I don’t want to let them down, but they can also see my heart is not really in it. Just another messy compromise—giving something, but not in a spirit that would make the gift truly welcome. But, then again, isn’t politics the art of compromise?

What does remain is the despair, the deep daily hurt of living in a society that is so cruel, and that revels in its cruelties. I don’t understand these people, yet I must not only live among them but also accept their dominance, their ability to shape what gets done and said and felt. I will never reconcile myself to that fact—and it is crazy-making and depressing and fuels dreams of flight.

I am about one-third of the way through Martin Hägglund’s This Life: Secular Faith and Spiritual Freedom (Pantheon Books, 2019), of which more anon.

But I have been carrying around in my head for over seven months now my own build-it-from-scratch notion of ethics without God. The impetus was a student pushing me in class last fall to sketch out the position—and then the book on Nietzsche’s “religion of life” that I discussed in my last post (way too long ago; here’s the link).

So here goes. The starting point is: it is better to be alive than dead. Ask one hundred people if they would rather live than die and 99 will choose life.

A fundamental value: to be alive.

First Objection:

Various writers have expressed the opinion that it is best not to have been born, since this life is just a constant tale of suffering and woe. Life’s a bitch and then you die.

Here’s Ecclesiastes, beginning of Chapter 4:

“Next, I turned to look at all the acts of oppression that make people suffer under the sun. Look at the tears of those who suffer! No one can comfort them. Their oppressors have all the power. No one can comfort those who suffer. 2 I congratulate the dead, who have already died, rather than the living, who still have to carry on. 3 But the person who hasn’t been born yet is better off than both of them. He hasn’t seen the evil that is done under the sun.”

Here’s Sophocles’ version of that thought, from Oedipus at Colonus:

“Not to be born is, beyond all estimation, best; but when a man has seen the light of day, this is next best by far, that with utmost speed he should go back from where he came. For when he has seen youth go by, with its easy merry-making, [1230] what hard affliction is foreign to him, what suffering does he not know? Envy, factions, strife, battles, [1235] and murders. Last of all falls to his lot old age, blamed, weak, unsociable, friendless, wherein dwells every misery among miseries.”

And here is Nietzsche’s version, which he calls the “wisdom of Silenus” in The Birth of Tragedy:

“The best of all things is something entirely outside your grasp: not to be born, not to be, to be nothing. But the second best thing for you is to die soon.”

Second Objection:

As Hägglund argues, many religions are committed to the notion that being alive on earth is not the most fundamental good. There is a better life elsewhere—a different thought than the claim that non-existence (not to have been born) would be preferable to life.

Response to Objections:

The rejoinder to the first two objections is that few people actually live in such a way that their conduct demonstrates an actual belief that non-existence, or an alternative existence, is preferable to life on this earth. Never say never. I would not argue that no one has ever preferred an alternative to this life. But the widespread commitment to life and its continuance on the part of the vast majority seems to me enough to go on. I certainly don’t see how that commitment can appear a weaker starting plank than belief in a divine prescriber of moral rules. I would venture to guess that the number of people who do not believe in such a god is greater than the number who would happily give up this life for some other state.

Third Objection:

There are obvious—and manifold—reasons to choose death over life under a variety of circumstances. I think there are two different paths to follow in thinking about this objection.

Path #1:

People (all the time) have things that they value more than life. They are willing (literally—it is crucial that it is literally) to die for those things. Hence the problem of establishing “life” as the supreme value. Rather, what seems to be the case is that life is an understood and fundamental value—and that we demonstrate the truly serious value of other things precisely by being willing to sacrifice life for them. To put one’s life on the line is the ultimate way of showing where one’s basic commitments reside. This is my basic take-away from Peter Woodford’s The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (U of Chicago P, 2018; the book discussed in my last post). To use Agamben’s terms, “bare life” is not enough; it will always be judged in relation to other values. A standard will be applied to any life; its worth will be judged. And in some cases, some value will be deemed of more worth than life—and life will be sacrificed in the name of that higher value. In other words, “life” cannot be the sole value.

I am resolutely pluralist about what those higher values might be that people are willing to sacrifice life for. My only point is that an assumed value of life provides the mechanism (if you will) for demonstrating the value placed on that “other” and “higher” thing. In other words, the fact (gift?) of life—and the fact of its vulnerability and inevitable demise (a big point for Hägglund, to be discussed in next post)—establishes a fundamental value against which other values can be measured and displayed. Without life, no value. (A solecism in one sense. Of course, if no one was alive, there would be no values. But the point is also that there would be no values if life itself was not valued, at least to some extent.) Placing life in the balance enables the assertion of a hierarchy of values, a reckoning of what matters most.

Path #2:

It is possible not only to imagine, but also to put into effect, conditions that make death preferable to life. As Hannah Arendt put it, chillingly, in The Origins of Totalitarianism, the Nazis, in the concentration camps and elsewhere, were experts in making life worse than death. Better to be dead than to suffer various forms of torture and deprivation.

I want to give this fact a positive spin. If the first plank of a secular ethics is “it is better to be alive than dead,” then the second to twentieth planks attend to the actual conditions on the ground required to make the first plank true. We can begin to flesh out what “makes a life worth living,” starting with material needs like sufficient food, water, and shelter, and moving on from there to things like security, love, education, health care, etc. We have various versions of the full list, from the UN Universal Declaration of Human Rights to Martha Nussbaum’s list of “capabilities.”

“Bare life” is not sufficient; attending to life leads quickly to a consideration of “quality” of life. A secular ethics is committed, it seems to me, to bringing about a world in which the conditions for a life worth living are available to all. The work of ethics is the articulation of those conditions. That articulation becomes fairly complex once some kind of base-line autonomy—i.e. the freedom of individuals to decide for themselves what a life worth living looks like—is made a basic condition of a life worth living. [Autonomy is where the plurality of “higher values” for which people are willing to sacrifice life comes in. My argument would be 1) no one should be able to compel you to sacrifice life for their “higher value” and 2) you are not allowed to compel anyone to sacrifice life for your “higher value.” But what about sacrificing your goods—through taxes, for example? That’s much trickier and raises thorny issues of legitimate coercion.]

It seems to me that a secular ethics requires one further plank. Call it the equality principle. Simply stated: no one is more entitled to the basic conditions of a life worth living than anyone else. This is the minimalist position I have discussed at other times on this blog. Setting a floor to which all are entitled is required for this secular ethics to proceed.

What can be the justification for the equality principle? Some kind of Kantian universalism seems required at this juncture. To state it negatively: nothing in nature justifies the differentiation of access to the basic enabling conditions of a life worth living. To state it positively: to be alive is to possess an equal claim to the means for a life worth living.

Two complications immediately arise: 1. Is there any way to justify inequalities above the floor? After every one has the minimal conditions met, must there be full equality from there? 2. Can there be any justification for depriving some people, in certain cases, of the minimum? (The obvious example would be imprisonment or other deprivations meted out as punishments.)

Both of these complications raise the issue of responsibility and accountability. To what extent is the life that people have, including the quality of that life, a product of their prior choices and actions? Once we grant that people have the freedom to make consequential choices, how do we respond to those consequences? And when is society justified in imposing consequences that agents themselves would strive to evade?

No one said ethics was going to be easy. Laws and punishments are not going to disappear. Democracy is meant to provide a deliberative process for the creation of laws and sanctions—and to provide the results of those deliberations with legitimacy.

All I have tried to do in this post is to show where a secular ethics might begin its deliberations—without appealing to a divine source for our ethical intuitions or for our ethical reasonings.

“[Sister Helen] Prejean’s logic rests on the hope that shame, guilt, and even simple embarrassment are still operative principles in American cultural and political life—and that such principles can fairly trump the forces of desensitization and self-justification. Such a presumption is sorely challenged by the seeming unembarrassability of the military, the government, corporate CEOs, and others repetitively caught in monstrous acts of irresponsibility and malfeasance. This unembarrassability has proved difficult to contend with, as it has had a literally stunning effect on the citizenry. They ought to be ashamed of themselves! we cry over and over again, to no avail. But they are not ashamed, and they are not going to become so” (32).

I don’t have much to say to this statement—beyond noting how completely it echoes my own experience and sentiments. The administration at my university is just about completely non-accountable at this point. Which made me think that “public shaming” (as I tried to do in the newspaper editorials I wrote about their actions) was the only recourse left. But they have proved immune to shaming, might even take it as proof that they are doing their “tough jobs” of protecting the university’s interests.

It does not make me feel like a sap. I realize more and more that a certain self-image of integrity is central to my own serenity. Of course, complacency about one’s self is an ever-present danger. Pharisaism afflicts us all. But I do abide by the rule of “never say no to a student.” Whatever they ask for, they shall receive—just as the same all-inclusive indulgence is extended to my children. I have no right, given my job and my salary, to turn students down. And abiding by that rule is one way I maintain my self-respect.

So the question about the shameless is: where does their self-respect reside? Where is the line they would not cross, the action they would not permit themselves? I have always liked what I call “Kant’s rule of publicity”: basically, Kant argues in one of his political essays that any action is morally dubious if its agent would prefer that it be kept secret. We reveal our awareness of an action’s non-morality when we strive to keep it unknown. (Yes, there is the tradition of keeping benevolent actions secret—a tradition mostly honored in the breach these days by our publicity-seeking philanthropists—but the existence of this sub-set of good actions needn’t detract from Kant’s larger point.) The attempt to keep things secret is an acknowledgement of shame and guilt. But it does seem Nelson is right: when malfeasance is “outed” these days, the impulse is to brave it out, to never show the weakness of admitting guilt or manifesting shame.

And there is the even more gob-smacking pride in offensive behavior, as politicians compete to see who can most vociferously endorse torture and taking food stamps away from the hungry, and CEOs boast about how far they can drive down wages and take away benefits for their workers. Oh, brave new world!

I said, perhaps, far too little about Hardt and Negri’s Assembly when I finished reading it a few months back. Since then, I have read Todd Gitlin on Occupy, della Porta on Social Movements in Times of Austerity (Polity, 2015), and Judith Butler’s Notes Toward a Performative Theory of Assembly.

One theme is the performative nature of assembly: how it can create the collective that proposes to make a political statement/intervention, and (even more) how it can create the kind of community to which those who assemble aspire. The assembly is “prefigurative.” That is the term that is used. It is the change it wishes to see in the world.

My skeptical objection has been consonant with most responses to anarchism: a) how does the assembly propose to produce/gather the resources that would make it sustainable?; b) is the assembly scalable? If it proposes itself as a model of the desired polis, then how does it grow to accommodate much larger numbers of members/participants?; and c) what kinds of structures, organization, leadership, communications, and other infrastructure must be created in order to maintain the assembly over the long haul? Occupy was completely parasitic on the “larger society” it was trying to secede from—or was it overthrow? Occupy was dependent on goods made in that larger society, monetary donations coming from that society, as well as on the expertise (medical, technical) that larger society, through its educational system, imparted to certain individuals.

An assembly, in other words, is not a society—and to claim that somehow it represents an alternative society seems to me disingenuous, extremely naïve. It is one thing to say that Occupy modeled modes of relationship that we wish could be more prevalent in our lives. It is quite another to claim Occupy modeled an alternative to mainstream society. To use Judith Butler’s terms, Occupy did not provide the grounds for a “livable life.”

But Gitlin and Butler both point us toward what seems to me a much more productive way to think about assembly. They both stress that democracy as a political form is deeply dependent upon assembly—and that the current assault on democracy from the right includes a serious impairment of rights to assembly. Vote suppression has gotten most of the press when it comes to attending to the ways that our plutocrats are trying to hold out against the popular will. But the anti-democratic forces are also determined to limit opportunities for assembly.

Let’s do the theory first. Democracy rests on the notion of popular sovereignty. In the last instance, political decisions in a democracy gain their legitimacy from being products of the people legislating its own laws for itself (Kant). The fact that such things as a ban on assault rifles and increased taxes on the rich are (if the polls can be believed) supported by a large majority of Americans, but impossible to enact in our current political system, seems a good indicator that we do not live in a democracy—a fact most of the Republican party seems not only very comfortable with, but determined to sustain.

Because the final arbiter is supposed to be the popular will, there will always be a tension in democracy between the representative bodies of organized government and the people. That tension leads to repeated critiques of representative government and calls for “direct” or “participatory” democracy (dating all the way back to Rousseau). It also leads to the oft-repeated worry/claim that democracy only works on a small scale. A large-scale democracy (and what is the number here? probably anything over 100,000 citizens or so) will inevitably depend on representatives to carry out its political business—and thus, in the eyes of direct democracy advocates, inevitably fail to be truly democratic. Elections are too infrequent—and not fine-grained enough (what, exactly, are the voters saying?)—to provide sufficient popular input into specific decisions. Add the many ways in which elections are manipulated and you quickly get politicians who are only minimally accountable to the populations they supposedly represent. The electoral system is gamed to ensure that position (office) and all its privileges and powers are retained by incumbents—or by the party currently in power.

How to make politicians accountable? One device is plebiscites, which have some kind of appeal. Let the people vote directly on matters of interest to them. The problem with plebiscites is that they are a favored tool of the right—and produce (in many cases at least) what is best called “illiberal democracy” or (to use Stuart Hall’s term) “authoritarian populism.” The blunt way to say this: never put rights to a vote.

Liberal democracy (or constitutional democracy) actually tries to place certain things (usually called “rights”) outside of normal democratic decision making, out of the give and take of ordinary political conflict/wrangling/compromise. Some things are held apart from the fray, guaranteed as the rules of the game (basic procedures), as the lasting institutions (the court system, the legislature, the executive), and as the basic rights enjoyed by all citizens (civil liberties). The constitution also establishes the “checks and balances” of a liberal order—such that no particular person, office, or governmental institution possesses absolute power. Power is distributed among various sites of government in an attempt to forestall the ever-present danger of its (power’s) abuse. The use of plebiscites is, thus, authoritarian precisely because it bypasses this constitutional distribution of power through the appeal to the direct voice of the people—thus authorizing the executive to act irrespective of what the courts or legislature have to say.

SO: if one is committed to a liberal polity as well as to a democratic one, the notion of “direct democracy” is not very appealing. The “tyranny of the majority” is a serious concern—as my home state of North Carolina has repeatedly demonstrated throughout its history of Jim Crow and in its recent 61% vote in favor of an amendment to the state constitution against extending the legal protections of marriage to same-sex couples. In a liberal society, the popular will is to be checked, to be balanced by other sites of power, just as any other form of power is.

Of course, in any system, there comes to be a place where the buck stops. As critics of the Constitution—the anti-Federalists of the 1787–88 debates over ratification—pointed out from the start, the structure of the US government lodges that final power in the Supreme Court. That is why we are such a litigious society; the final arbiter is the Court—a fact that is deeply problematic, and which has led, in our current deeply polarized moment, to the Republicans resting their best hope for defeating the popular will on controlling the Court through the appointment of right-wing judges.

Some theorists of sovereignty insist that it can never be distributed, that it can only be exercised when it emanates from one site. Despite the outsized power of the Supreme Court in our system, I think it vastly overstates the case to say the Court is the sole site of power in our polity. Justifying that claim would lead me in another direction, one I won’t take up here. Suffice it to say that I favor a constitutional amendment that would limit Supreme Court justices (and probably all federal judges) to one 25-year term. That way we would be spared having our fates in the hands of 80-year-olds (a true absurdity), and the timing of a Court vacancy would be randomized in relation to which party controlled the Senate at that moment. The amendment would also state that the Senate must make its decision about the President’s nominee within six months—or forfeit its “advise and consent” powers if it fails to act in a timely fashion.

But: back to assembly. Butler, I think very usefully, suggests that assembly in a democracy is an incredibly important supplement to the legislature. Here’s the basic idea: the people’s representatives can, because of their relative freedom from direct accountability, do things that various segments of the population disagree with. The first amendment ties “assembly” to a right to petition the government. The text reads: “Congress shall make no law . . . abridging . . . the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” Thus, assembly is tied to the notion of “the people” having another means, apart from the actions of its representatives, of expressing its opinions, desires, and will—and that alternative means is expressly imagined as a way of expressing displeasure with the actions of the government. It is in the context of “grievances” that we can expect the people to assemble. In this way, the right to assemble can be seen as a partial remedy to the recognized ills of representation. The people, by assembling, embody (the terms here are Butler’s) democratic sovereignty and make it “appear” (utilizing Arendt’s notion of politics as the “space of appearances”). After all, the will of the people (the ultimate ground of democracy) is invisible unless it takes the corporate form of assembly, since, as Benedict Anderson’s notion of “imagined community” makes clear, even an election is a virtual, not a visible and actual, manifestation of the popular will.

Assembly, then, is democracy in action (note the Arendtian stress on action)—and would thus seem to be as essential (perhaps even more essential) to democracy than voting. It is when and where the demos comes into existence. It is democracy visible—and hence its deep appeal to contemporary writers from Hardt/Negri through to Butler, writers who are all appalled by democracy’s retreat in the face of technocratic, plutocratic neoliberalism.

Gitlin documents, in the last chapter of his book, all the various ways that the right to assembly is currently under assault—starting with union busting, moving through the use of “permits” for demonstrations to keep demonstrators far away from the people they are demonstrating against, to the criminalizing of assembly itself (as not “permitted” in the double sense of that word), to a closing down of public spaces to certain political uses—an assault that parallels the various efforts to curtail voting rights. Our overlords fear the assembled people—and are doing their best to erect obstacles to such assembly.

Thinking of the Chartists, I am also sorry that the nineteenth century connection of assembly to the presentation of petitions (a connection the first amendment also makes) seems to have been lost. All those virtual petitions each of us is asked to sign on line every day are pretty demonstrably useless. But what about 100,000 (or more) people marching to the Capitol and calling on the Senate Majority Leader to come out and take from the hands of the people a petition? Great political theater if nothing else—and a vivid demonstration of the collapse of democracy if that politician refuses (as I suspect would happen) to engage with the people he claims to serve. Face-to-face is much harder to ignore than what comes to you across the computer screen.

Still, and obviously, I don’t think assembly is the be-all and end-all of politics, for all the reasons I keep on banging on about. But Gitlin and Butler have made me much more attuned to the possibilities and resources that assembly can—and does—possess for a left wing politics.

Foucault introduces the notion of “biopower” as a supplement to his theory of “disciplinary power.” He argues, convincingly in my view, that what we might call the “welfare state” slowly emerges from about 1750 on. That state takes ensuring the welfare of its citizens, promoting and even providing the means toward sustaining life, as one of its primary missions—or even its fundamental reason to exist, the very basis of its legitimacy. The state that can protect, preserve, and even enhance the life of its citizens is a state worthy of their allegiance and obedience. It seems plausible to claim that the Roman empire did not value citizens’ lives in this way, or that medieval kingdoms did not treat each citizen’s welfare as a central value the polity was pledged to honor.

Typical of Foucault is his desire to focus on the way that something often celebrated as “progress” in fact carries significant costs that a Whiggish history ignores. We can use the term “liberalism” to designate the traditional story (even though, as I have argued vehemently over the years, it makes no sense to accuse 20th century liberals of buying this story; we must distinguish, at the very least, “classical” from “modern”—or 20th century—liberalism). The liberal story has several parts: a) consent of the governed to the state’s power in return for protection, for the preservation of life; b) the rise of the individual, which is why every life is equally entitled to that protection; and c) the establishment of “rights” that aim to protect citizens from the potential abuses of power by the state itself. Liberty, in the understanding of the world that liberalism establishes, is meaningless without security. Only someone who is confident that his life will continue will be able to carry out the kinds of long-term plans and undertake the kinds of initiatives that make liberty a reality. This notion of the necessary preconditions of liberty gets expanded as the 19th century moves into the 20th to include what sometimes get called “social rights” (to contrast them with “political rights”). Social rights are claims upon the polity to provide the “means” to life: food, shelter, education, health care, clean air and water—the list can go on. Political rights, on the other hand, are direct protections against undue interference in a citizen’s behavior: freedom of speech, religion, and assembly, along with legal protections against preventive detention and arbitrary imprisonment, and rights of participation, including the right to vote, to run for office, and to form or join political parties.

Foucault had, with his work on disciplinary power, made a compelling case that the advent of individualism, usually seen as a progressive step toward valuing all lives (if not equally, at least in ways that proclaimed that no life could be legitimately sacrificed), offered pathways to the intensification of power. Namely, each individual becomes a target for power’s intervention. (Strictly speaking, of course, we should say each body becomes a site for power’s intervention—and that power produces individuals out of bodies.) Liberal political orders exist hand-in-hand with an economic order (one Foucault resists calling capitalism) that is determined to make each person as productive as possible. A whole series of disciplinary techniques are applied at a multiplicity of sites throughout a society to ensure that individuals are up to the mark, that they are, as the phrase goes, “productive members of society.” And all kinds of punishments are devised for those who prove deviant, where deviance comes in an astounding variety of forms. Disciplinary power “articulates” the social field with finer and finer gradations of acceptable behavior, with every citizen constantly being measured (through endless processes of examination) against the various norms.

Disciplinary power, then, works upon each individual. Compulsory education is one of its innovations; the highly organized factory is another, the creation and training of the mass citizen army another. In each case, every body in the ranks must be made to conform, to play its part.

Biopower, by way of contrast, works on populations. The nation that takes “life” as its raison d’être will focus attention on individual life, but it will also be concerned with the general preservation of the nation as a whole. That is, it will become interested in birth and death rates, working to raise life expectancy, to lessen infant mortality, to encourage pregnancy, and to attend to the health of pregnant women. The statistical (general) knowledge that can be generated about such things will suggest various large-scale interventions by state power. The most obvious are public health measures: laws (regulations) to protect air and water quality, but also the outlawing of “dangerous” drugs and the interdiction of suicide.

At some points, Foucault appears to be simply describing something that is so familiar to us, so taken for granted, that it is practically invisible. The state’s power increases when we, as citizens, grant it the right to enforce various public health measures. We could say, in a similar fashion, that state power increases if we make it one of the state’s responsibilities to provide public transport. The gathering of money and the granting of jobs involved in creating and running a public transport system must entail the state having more power. After all, power is not just power over (any employer has power over employees, and the state is no different in that regard) but also power to. The state would not have the power to (ability to) run a transportation system unless it had power. So the more duties we assign to the state, the more power it, necessarily, accumulates (unless it is totally ineffectual).

However, as many readers of Foucault have noted, his discussions of power quite often come with the distinct flavor of “critique,” in a dual sense: first, as a revelation of power’s presence where either ideology (semi-deliberate masking of the reality) or taken-for-grantedness hides that presence, and second, as a strongly implied normative criticism of power as illegitimate, evil, or pernicious. Some commentators have even started to wonder if Foucault has affinities with neoliberals insofar as he associates state power with tyranny. I think that is going too far because Foucault (especially with disciplinary power) was very attuned to the ways in which power is exercised in non-state venues (like the factory) and certainly never thought of the economic sphere, of private enterprise, as a site of liberty unrestrained by power. But his temperamental anarchism does make his approach resemble certain libertarian positions in troubling ways—even though, in my view, the libertarian is absurdly naïve, blind to power’s presence in ways that Foucault has taught us to mistrust. Power is everywhere—and always with us. (Hence other readers of Foucault have taken “power” to be the “god-term” in his work.) Instead of the anarchist dream of a world without power, my view is that we have to think about ways to rein in power, to limit its abuse, and that means distributing power so that neither the state nor employers have enough power to leave their citizens or their employees without effective recourse against abuses. Foucault, however, never goes in that direction. After identifying the many sites where power is exercised, and implying that such exercises are not good things, he has nothing more to say about how we might or should respond to that situation.

Foucault has a particular reason for thinking biopower pernicious: his argument that it leads to racism. I will take up that argument tomorrow—since it is the direct claim that a “politics of life” leads to the infliction of large-scale death. For now, one last point: biopower is not biopolitics. There are lots of ways of understanding “politics,” but one fairly basic definition of the term would be “pertaining to the collective arrangement of ways of living together with others.” That is, we don’t have politics until more than one party is involved in the creation (through negotiation, or legislation, or other means) of the arrangements—and where the goal is to establish a modus vivendi that enables sustainable co-existence (which means at least semi-peaceful and semi-stable ways of muddling along). “Biopower” only identifies where and how power, focused on issues/questions of “life,” intervenes, is exercised. “Biopolitics” attends to the ways that placing the question of “life” prominently among the issues a society must address leads to certain political debates/decisions/conflicts in the ongoing collective effort to forge the terms of sociality. We might say that “biopower” suggests a passivity on the part of power’s subjects—a passivity Foucault always claimed he never intended to convey, yet one that nonetheless follows from a vision as “apolitical” as his. An odd charge, I know, since Foucault seems intensely political. But his work rarely attends to the collective processes through which power is created and its specific techniques are forged. Instead, power appears out of the whirlwind like the God in the Book of Job. And it proves just about as unaccountable as that God as well. You can resist it the way you might kick your broken-down car, but you can’t get under the hood and actually tinker with its workings.
It takes a political vision to imagine that kind of transformative work, a work that would involve negotiation and compromise with others, and the eventual creation of legal and institutional frameworks (invariably imperfect). It would require, in other words, a belief in the power of people to intervene in history, in place of the kind of transcendent power Foucault presents us with.

To summarize: a movement needs to generate mass disobedience to an objectionable governmental practice or law—and win the approval of non-movement members in the process. For the civil rights movement, that meant refusing to abide by the practices and legal statutes that constituted segregation, both de facto and de jure. The mass disobedience found that sweet spot where, finally, the government lost its will to uphold those practices and laws. Yes, it took some time. But, finally, the spectacle of arresting people who were just trying to be treated equally was no longer supportable.

For the anti-war movement, it was draft resistance. It is not as clear that public opinion was won over to the side of the resisters, but the draft law came close to being unenforceable and the easy way out was to create the “all volunteer” army. That move, of course, was the government’s way of sidestepping the larger issue of the anti-war movement: citizens’ ability to stop the government from waging war. That ability has not been gained, while ending the draft took away a crucial leverage point and made anti-war movements much more difficult to sustain.

Pretty obviously, protesting—and rectifying—discrimination is harder. In the cases of segregation and the draft, there is a law to disobey. But in the case of discrimination, you are trying to get the government to enforce the law against your opponents. Now the government and the legal system need to be your allies, not your antagonists. That greatly limits the stage; it doesn’t provide for dramatic confrontations or mass disobedience. Prodding the government to action is a tough one—and, I am starting to think, the real source of my perplexity about what forms effective action today could take.

That would seem to go in spades for a constitutional crisis. Since the 2000 election, with the follow-ups of the illegal Iraq War and torture, and now the shenanigans of the Trump administration, we have seemingly discovered that it is very difficult, if not impossible, to call the government to account. If the “system” worked in calling the Nixon administration to account for its crimes, that still suggested that only the government could successfully curb the government. And since 2000 there is no evidence of the government having the wherewithal to call itself to account.

I read the other day someone talking about how the people would take to the streets if Trump fired the special prosecutor or pardoned himself and his family. But it is unclear how taking to the streets would have any impact. The pessimist in me says that as long as daily life was not disrupted, the republic would tolerate massive malfeasance. One, because the issues–the rule of law etc.–are so arcane, and two, because it doesn’t feel like it hits people where they live.

Oddly enough, Trump’s crimes are sort of victimless; they damage our democracy, perhaps irreparably, but they don’t seem to harm anyone in particular. I was wondering about this in terms of “standing.” Could I sue (and whom would I sue?) for damages because my vote was rendered meaningless through election fraud? Would I be granted “standing” to bring such a suit? And what would be the remedy if I won such a case? It is unimaginable that there would be a “do-over” of the election. And yet, what else could be suitable recompense?

I wish I had something better to offer. A successful movement has to get a large number of people to consider themselves as members of a wronged collective. Post-2008, the unemployed and the defrauded quite conspicuously failed to make that leap. Somehow losing your job or losing your home was experienced as an individual misfortune, not something that tied you to many others with whom you should unite to protest against your lot. And, again, that would have been a case of trying to get the government to do something, rather than protesting against or disobeying a government action.

As long as normal life is mostly left in peace, we seem to be left with the ballot box. But not only have Republicans worked hard to shelter themselves from democracy (through gerrymandering, voter suppression and the like), but politicians have more reasons than ever to listen to the powerful few as opposed to the powerless many.

North Carolina’s Moral Mondays seem to prove this point. They have been sustained over an admirably long time–and seem to have had no impact at all except to harden the hearts of our Scrooge-like state legislators.

All of this might mean that party politics is really the only game in town. Leftists need to engineer a take-over of the Democratic party akin to the take-over of the Republican party by its right wing. Only the primary threat makes politicians answerable to voters when the general election districts are gerrymandered.