Friday, January 30, 2015

Last Thursday, my Dorf on Law post discussed the emergence of New York Governor Andrew Cuomo as a loud voice blaming teachers for the problems in the schools. Cuomo's actions and words have made it abundantly clear that he blames tenure and the teachers' unions for making it too difficult to fire as many people as Cuomo thinks should be fired. In that post, I again made light of the bizarre statistical illogic of comparing the percentage of students whose test scores fall below some cutoff level with the percentage of teachers who are evaluated as "ineffective": "91 percent of teachers around the state of New York are rated either effective or highly effective, and yet 31 percent of our kids are reading, writing or doing math at grade level." How could that be so?!

I would have left it at that, but within minutes after publishing my post, I came upon that day's editorial page of The New York Times. There, the newspaper's editorial board repeated the very same statistical nonsense: "Fewer than 1 percent of the state’s teachers were rated ineffective in the most recent evaluations, while only about a third of the state’s students in grades 3 through 8 were proficient in math and language arts." Thus was born my Verdict column for this week, which was published yesterday. There, I went into further detail, explaining the odd underlying assumptions that are necessary to make such a statistical comparison meaningful. The more one thinks about it, the less sense it makes. (Consider an illustrative example: Suppose that a large percentage of students are ill-equipped for school, and they are evenly distributed throughout the school system. If so, even if every teacher is a good teacher, every teacher will "fail" a large percentage of her students. In other words, it might not be the teachers' fault.)

That is all good, nerdy fun, I admit. In yesterday's Verdict column, however, I devoted the bulk of my attention to a more important issue: Why are school reformers so convinced that the only way to improve the quality of teaching in the schools is to make every teacher fear every day for her job, and as a related matter, why would it make sense to lionize a small fraction of "superstar" teachers? This is classic carrot-and-stick thinking, but it is truly nonsense, especially in the educational context. In an odd way, it actually harkens back to the Soviet system, where the government rewarded "Stakhanovites," who were the superstar "industrial worker[s] awarded recognition and special privileges for output beyond production norms," while treating everyone else as disposable cogs.

Or, for a less loaded example, consider the utter failure of "profit sharing" systems (such as Employee Stock Ownership Plans) to increase workers' productivity in the U.S. Rewarding superstars, it turns out, is quickly viewed as a cynical game, with workers understandably refusing to be jerked around for an employee-of-the-month plaque, even if they have been promised a lottery-like chance of a big prize. Meanwhile, as I have argued often, if we are worried about "workers' incentives," we might want at least to stop for a moment to think about what potential teachers think when they consider entering a profession with decreasing job security (and declining social respect). That is hardly a great recruiting strategy. The problem, of course, is that improving the schools would be expensive and difficult, and people like Governor Cuomo are looking for cheap and easy answers. Even so, it is not only cynics who have bought into the blame-the-teachers campaign. The Times editorial that I noted above described the governor's proposal, in part, as "mak[ing] it harder for teachers to get tenure and easier to fire ineffective and bad teachers," and concluded: "Many of his proposals are likely to ignite the ire of teachers’ unions that did not endorse him in the last election, and he can expect considerable resistance from them and their friends in Albany. On the whole, these provisions make good sense." True, the editorial did call for more money for schools, too, but it was completely on board with the governor's plan to hold school funding hostage to his demands to be able to fire teachers more easily. (Again, remember that teachers can be fired now. The issue is whether it will be possible to fire them with less -- or no -- cause.)

What I find especially perplexing about all of this is the "long game" politically that Democrats are playing. Perhaps, as some commenters and I have discussed in previous posts (e.g., here), people like Cuomo and most other Democrats and liberals simply have no long game. They flail around, thinking that they can co-opt the "safe" position and win by appealing to an ever-shifting middle, only to find that they are (correctly) viewed as having no principles and no faith in their own ideas.

There is surely a lot of truth to that, but even so, we still need to know why the Democrats would choose the particular path that they have chosen, which very prominently includes abandoning their staunchest supporters. In the 1980's, the emergence of the Democratic Leadership Council (DLC), which succeeded in pulling the party to the right, was explicitly a response to the fear that Democrats were captives to "special interests." Labor leaders were easy to caricature, and the "smart" political move at the time was to attack unions. But teachers? As I noted in a Dorf on Law post this past Fall, it was the Clintons -- the embodiment of DLC triangulation -- who cynically decided early in their careers to pick a fight specifically with teachers' unions.

At some point, maybe someone in that crowd will realize that they are destroying their future. There has been much concern about a lack of success by Democrats at the local and state levels. Guess who used to be the Democrats' most reliable workers in those venues? Yet the atmosphere has now become so poisoned that even a reliably left-leaning (and massively influential) source like The New York Times editorial page blithely talks about "the ire of the teachers' unions," as if they are the enemy.

Imagine, however, that there were evidence supporting the idea that adopting anti-teachers' union policies had improved educational outcomes. Even then, the political calculation would have to be: "Well, these are my core supporters, and without them, I will lose a competitive advantage. How much am I willing to give them to keep them happy, even if what they want is a bad idea?" That is certainly what the Wall Street wing of the Republican Party seems to do vis-a-vis the Tea Party and the Religious Right. Sometimes, crude political calculations actually require the adoption of less than optimal choices about policies.

That is all well and good, but there is no such conflict here. The evidence defies all of the claims that making it easier to fire teachers improves schools. Liberals and Democrats thus do not have to balance political advantage against "good policy." Yet at this point, these people continue to act as if "taking on the teachers' unions" is both good policy and good politics. Neither is true, and if Democrats do not figure that out quickly, they will have willingly destroyed one of their most important sources of support.

Thursday, January 29, 2015

by Michael Dorf
Last Wednesday, on the fifth anniversary of Citizens United v. FEC, about half a dozen protesters briefly disrupted the Supreme Court's proceedings. The next day, Sheldon Silver, the long-time Speaker of the New York State Assembly, was arrested on corruption charges. The timing was coincidental but the events are nonetheless closely related. (Silver is being replaced as Speaker but for now he says he intends to keep his seat.)

Let's start with Citizens United. According to one well-known criticism of the Supreme Court's campaign finance jurisprudence, the Court makes two errors. First, the Court says that the only interest that justifies campaign finance limits is the interest in avoiding corruption or the appearance of corruption, thereby ruling out of bounds the possibility that campaign finance limits might be adopted in the interest of political equality--to ensure that inequalities in the distribution of material resources do not spill over into our politics to undermine the principle of one-person-one-vote. Second, the Court then employs a too-narrow definition of corruption and its appearance, in which only a quid pro quo of the sort that could lead to a bribery conviction counts as corruption.

On the surface, the Silver indictment might be thought to provide partial vindication for the narrow definition of corruption in Citizens United and related cases. After all, the indictment shows that the campaign finance jurisprudence does not protect so much "speech" by wealthy parties seeking to influence the government as to render a corruption prosecution impossible. Put differently, the fact that Silver had to break the law in order to provide political services in exchange for cash seems to show that the law--even after Citizens United--has bite.

But the argument I have just offered won't fly. Silver is not charged with exchanging political favors for campaign contributions. He may well have done that, but if so, at least the Justice Dep't doesn't (currently) think that he did so illegally. The indictment alleges that Silver supplemented his state pay ($121,000 annually as Speaker plus various perks such as a chauffeured car and a per diem) with millions of dollars in bribes that were presented as legal fees for work he never actually did. At least as of Saturday, Jan. 24, Silver was still listed as of counsel with the personal injury law firm of Weitz & Luxenberg, with a "focus" on mesothelioma and asbestos cases, but Silver's firm profile page describes no actual legal work, and the indictment alleges that he did none. It also alleges that he had additional mechanisms for receiving bribes from parties with business in the state legislature.

If the allegations in the indictment are true, then Silver is an old-fashioned corrupt politician--someone who abused his public office in order to make a buck for himself--rather than someone driven to corruption by a political system in which running for office is extremely expensive, so that to succeed, politicians need to grant access and appeal to the interests of persons and firms willing to pay to put or keep them in office. The Sheldon Silvers of the world (again, if the indictment is accurate) would exist even in a world in which the courts upheld much more rigorous campaign finance regulation.

So why do I say that the Silver case is related to campaign finance? Because there is available a tool that would work as a partial solution to both problems: public finance.

Public finance of electoral campaigns--if funded sufficiently generously--substantially reduces the need of candidates to rely on the support of well-heeled donors or the "indirect" "uncoordinated" efforts of "independent" (equally well-heeled) individuals and groups. Unfortunately, public finance is unavailable for most races, and where it is available--as in Presidential elections--it has not kept pace with inflation. Since candidate Obama's 2008 decision to forgo federal matching funds in order to escape federal spending limits, the Presidential system has been effectively dead.

A similar problem exists on the compensation side. Under-compensated public servants will find it tempting to supplement their income by selling special favors. The very low pay of police officers in New Orleans (and often elsewhere) has historically operated as an invitation to corruption. The same can be true for state legislators. States do not pay their legislators salaries commensurate with their responsibilities. In New York, the annual salary of a member of the Assembly (other than the Speaker) is just under $80,000, even though the New York legislature makes laws governing a state that would have the world's 16th largest economy if it were a country. That is, of course, a more-than-decent middle-class wage. The median household income in New York State is about $60,000, and so the average voter--who does not get a per diem or a chauffeured limo on top of his salary--is unlikely to be sympathetic to the claim that legislators are underpaid. Thus, NY voters tend to oppose a proposal for the first increase in state legislator salaries in over a decade and a half.

Corruption cases like Silver's provide part of the reason why. A typical voter thinks "I get by on less money; I work harder; and on top of it all, these guys are corrupt. No way am I giving them a raise."

I cannot deny the logic of that line of thinking, but it tends to be self-defeating. Higher pay for legislators should not be conceived as a reward for good performance but as a way of reducing the incentives for corruption. That's why Zephyr Teachout's proposal to strictly limit outside income for legislators is not enough; you need to attack the incentive to seek outside income.

As with legislator pay, so with public financing of campaigns. Extremely wealthy self-funding politicians (like former NYC Mayor Mike Bloomberg) sometimes argue that they are incorruptible because they do not need to raise money from wealthy donors and they will not be tempted to use their office for personal gain. These are fair points, but even if some billionaire politicians are entirely public spirited, they necessarily see the world through their own highly privileged eyes. Excluding all but the extremely wealthy from public office is too high a price to pay for combating corruption.

Finally, to be clear, I do not think that public financing of campaigns or paying public officials wages within shouting distance of comparable private sector jobs would cure all political corruption. Some people will be corrupt under any system--and maybe Sheldon Silver is such a person. The best we can do is design rules that reduce the temptations of the greedy and the venal.

Wednesday, January 28, 2015

by Eric Segall
On March 4, the Supreme Court will hear oral arguments in King v. Burwell, yet another challenge to the Affordable Care Act ("ACA"). This time around, the plaintiffs are claiming that the IRS acted illegally by providing federal subsidies on health insurance exchanges created by the Secretary of HHS because the ACA only authorizes such subsidies on an “exchange established by the state.” The government’s response (on the textual issue) is that a different section of the ACA provides that if the states do not create their own insurance exchanges, HHS will set up “such exchange.”

The government clearly has the better of the textual argument because under the well-established Chevron doctrine, if the law is ambiguous (and here it is), the agency’s interpretation only has to be reasonable. Much has been and will be written on that question, but that debate is not the focus of this blog post.

Instead, I want to focus on the retelling of history by the architects of this lawsuit, Jonathan Adler, a law professor at Case Western Reserve, and Michael Cannon of the Cato Institute. It is not an overstatement to say that without the dedication of these two men to the destruction of the ACA, this lawsuit would never have gotten off the ground.

On social media, and in their amicus brief in the Supreme Court, Adler and Cannon support their textual interpretation of the ACA (that subsidies are not available on federal exchanges) with the assertion that Congress used its spending power to threaten the states with the withholding of the subsidies unless the states agreed to establish their own exchanges. This claim is repeated in a separate amicus brief submitted officially by the Cato Institute, and in the brief for the plaintiffs.

The claim that Congress intentionally and knowingly used its spending power to coerce states to create insurance exchanges by threatening to withhold subsidies is simply false.

There is not a single word in the entire law telling the states that, if they decide not to open their own exchange, they will lose federal subsidies. Moreover, no member of Congress or of the Obama Administration ever communicated such a threat to the states once the law was passed.

The reason there is no evidence is that such an understanding would have been at the time, and is today, completely inconsistent with the commonly understood and fundamental assumptions underlying the ACA. If insurance companies must cover people with preexisting conditions, and if the government is going to force healthy people to buy insurance, then the government must also provide premium subsidies; otherwise there will be a “death spiral” of increased premiums for everyone. This structure represents the now iconic three-legged stool and is at the heart (and running through the blood) of the ACA.

Lawyers representing two non-profits in Missouri have also filed an amicus brief devoted in large part to the idea that numerous states knew that subsidies might not be allowed on federal exchanges when those states debated whether to create a state exchange. According to the brief, in light of that history, such a reading of the law is not unreasonable.

This argument is itself unreasonable and misleading because all of the evidence cited for that claim comes after Adler and Cannon publicized their mistaken interpretation of the statute. States came up with the crazy idea that subsidies might not be available on federal exchanges only after Adler and Cannon’s campaign started. None of that is relevant to what people understood the law to mean when it was enacted in 2010. Back then, it was common knowledge that subsidies had to be available everywhere there was an ACA health insurance exchange.

The fact that some states debated creating exchanges in the context of possibly losing federal subsidies because Adler and Cannon convinced them that was a possibility does not come close to outweighing the evidence that no government official ever told the states that subsidies might not be available on federal exchanges, that such a threat is nowhere in the ACA, and that no one who wanted to see the law succeed would ever have given the states the tool to unilaterally destroy the law.

The amicus brief filed by a number of conservative Senators and Members of the House of Representatives also claims that the ACA was not meant to provide subsidies on federal exchanges, but their brief does not point to one member of Congress who ever made that point during the debate on the law or afterward (at least not until the Adler/Cannon theory became public). That absence is understandable because no one thought that to be the case until well after the law was passed and this litigation was anticipated.

Proving a negative is difficult. But as of this moment, there is not a shred of evidence that, in 2010, when the law was passed, any member of Congress or the Administration believed federal subsidies would be unavailable on federal exchanges. If anyone can demonstrate otherwise, then we can have the argument. So far, no one has come close.

If the Supreme Court rules for the plaintiffs in this case, it is likely that over 8,000,000 people will lose their health insurance, that markets in those states could crumble, and that serious physical and economic harm will result. The Court should not take such a drastic step on the basis of the self-fulfilling prophecies of two men who have relentlessly tried to kill the ACA. If you don’t believe me about their passion, just look at Cannon’s own Twitter profile: “The Man Who Could Bring Down ObamaCare… Obamacare's Single Most Relentless Antagonist… Anti-Universal Coverage Club founder.”

The lack of evidence to support the plaintiffs' theory (and its utter inconsistency with the entire concept of the three-legged stool) does not mean that the government must win. It just means that for the plaintiffs to prevail they must do so on the basis of the four corners of the statute. If law matters, they will lose there too but that is a topic for another day.

Tuesday, January 27, 2015

Three years ago, in a Dorf on Law post titled (in what is easily my most cringe-worthy play on words) "Owed on a Grecian Urge," I described how Greece had become the reflexive cautionary tale for those in the United States and Europe who view themselves as fiscally "responsible." In the time since then, the misuse of the Greek story has become utterly commonplace, with Republicans routinely saying that President Obama and the Democrats are going to turn the United States into Greece any day now (even though our debt-to-GDP ratio remains slightly below that of Germany, which is supposedly the paragon of fiscal probity).

As I noted, Greece does seem to be the one and only case of a European country whose economic troubles were significantly attributable to pre-crisis fiscal mismanagement -- but I should emphasize that their budgetary problems arise mostly from a chronic failure to collect taxes owed, not from "out of control spending," which is the false analogy that the Republicans who invoke Greece are trying to draw to U.S. economic policy.

And even if Greece's problems were caused by fiscal imbalances, that most definitely does not mean that the way to solve the problem now is through fiscal contraction: "The answer to having driven off a cliff is often not to try to drive back up the side of the cliff." I do not know why I qualified that sentence with the word "often," but the point is that there are better ways to recover from a crisis than to imagine that doing the exact opposite will achieve the exact opposite results, no matter how much circumstances have changed.

At the time that I wrote that post, I would have imagined that depression-level unemployment and continued austerity in any country would have led to a near-term political explosion. Yet Greece, Spain, Portugal, Italy, and some other European countries have experienced years of economic disaster that are every bit as bad as the Great Depression of the 1930's was in the U.S. and Europe. That is not an exaggeration: Measured unemployment in the U.S. topped out at 25% at the worst of the Great Depression, while Greece's unemployment rate has hit 28% (60% among young people). Similarly shocking suffering is being seen in Spain (which did not run fiscal deficits before the crisis) and in the other countries that I mentioned.

What would the "political explosion" that I imagined have looked like? I was not optimistic enough to think that it would definitely be peaceful. Years of pain and hopelessness can cause people to turn to extreme measures. And even if there were no outright revolutions, certainly the results at the ballot box would not necessarily be pleasant. Extremist parties (a few on the left, but mostly on the right) have been proliferating across Europe, with the usual anti-foreigner ugliness that one sees whenever people feel desperate.

And this week, elections in Greece swept into power a party whose name, Syriza, translates to "coalition of the radical left." As an analysis by Neil Irwin in yesterday's New York Times put it, "the real surprise is not that Greek leftists have been elected. The surprise is that it took this long." Again, I do think that it is surprising that it was a leftist party that won, rather than a crypto-fascist party (or even a not-so-crypto one), given the economic devastation, but I agree with Irwin that something like this was a long time coming.

What is perhaps more surprising is that this coalition of the radical left is anything but radical. In Greece, during the worst economy seen in almost anyone's lifetime, it was apparently politically prudent to oversell the radicalism of the party, in order to gain political support. But the new Prime Minister, Alexis Tsipras, is not saying anything remotely extreme. He is, in fact, simply saying that austerity policies during a Depression are a terrible idea, which is what the vast majority of economists would have said pre-2008, and what the majority of economists still say (the difference between the "vast majority" and simple majority being those who decided to abandon the evidence in order to side with Republicans and European financial elites).

This is not, moreover, a theoretical matter. We have seen the supposedly prudent German government impose austerity on the rest of the continent for five years, and we have seen the results. The response from the pro-austerity crowd? Keep bleeding the patient, and he'll get better any day now. Mr. Tsipras is saying that Greeks have had enough, and that there is a better way. And he is right.

What makes this supposedly radical left party even less radical is its stance on the euro. Apparently, Mr. Tsipras has promised that he will not have Greece withdraw from the common currency. Again, this is a fascinating political dynamic. The Greek people, who surely know that the ability of richer countries to impose oppressive conditions on Greece stems largely from controlling the currency -- which, by the way, acts as an effective subsidy to the German economy -- are nonetheless willing to elect a man who is insistent that Greece not leave the Euro Zone. I can imagine arguments that the costs of dropping the euro would be high, and maybe (but not necessarily) even higher than the costs of staying, but I am surprised that such a rational cost-benefit analysis could be going on inside a country that has experienced years of pain, all while being lectured by their tormenters for being weak-willed and irresponsible.

In any event, the financial markets are calm, and there is apparently little if any concern that the "radical left" takeover of Greece will be a major event in the modern history of Europe. I would imagine that the Syriza example will be copied across Europe. Certainly, the mock-left nothingness of Francois Hollande in France has done nothing but allow the cruel austerity to continue. Europeans should, in fact, hope that the Greeks are showing the way forward, because there are many bad alternatives out there that are truly radical.

Monday, January 26, 2015

by Michael Dorf
In my latest Verdict column, I argue that the SCOTUS cert grant in the Sixth Circuit same-sex marriage (SSM) cases makes it all but a foregone conclusion that the Court will recognize a right to SSM by the end of the current term. I say that the important question now is how the Court goes about finding a right to SSM: Will the Justices apply nominally rational basis scrutiny while finding the "accidental procreation" argument irrational? Will they say that laws denying SSM are rooted in constitutionally impermissible "animus"? Or will they say--as I propose they should--that laws discriminating on the basis of sexual orientation must be subject to heightened scrutiny?

Readers interested in why I would prefer an express holding that sexual orientation is a suspect or semi-suspect classification are invited to check out the column. Here I want to address what may strike all but the most dedicated Supreme Court junkies as a non sequitur: Did the cert grant in the SSM cases increase the odds that the Obama Administration will lose the challenge to the Affordable Care Act subsidies on the federal exchanges in King v. Burwell?

Obviously, there is no doctrinal connection between the issues. So why might one think that the odds of a government defeat in King just went up a bit? The answer lies in the realm of human psychology.

Recall that one leading theory that aimed to explain the vote of CJ Roberts to uphold the ACA's so-called individual mandate under the taxing power in NFIB v. Sebelius went like this: The Chief Justice cares a great deal about the Court's reputation as an institution; he believes that public perceptions of the Court as a partisan body undermine that reputation; he foresaw that a 5-Republican-to-4-Democrat split on the Court to invalidate the signature legislative achievement of a first-term Democratic President during an election year would be widely perceived as partisan; and so he was inclined to want to find some way to uphold the ACA.

There is a crass and a less-crass version of the foregoing theory. The crass version had the Chief Justice making the calculations just described consciously. In the less crass version, the calculations were subconscious.

As I've said before, I don't know of any evidence that the institutional integrity considerations influenced the Chief Justice at all in NFIB v. Sebelius. I certainly disagree with the conservative critics who argue that his reasoning with respect to the tax power was so weak that he must have been deciding based on other factors; in my view, that was a perfectly plausible purely legal basis for the ruling.

Nonetheless, I acknowledge the possibility that the Chief Justice (and one or more other Justices) might occasionally give conscious or subconscious consideration to how the Court's rulings will be perceived. If so, then having a same-sex marriage case on the docket--in which the Court will almost certainly produce a "liberal" ruling--gives the Chief (and other conservative Justices) the latitude to rule against the Obama Administration in King without substantially contributing to the perception of the Court as a partisan body. The average relatively-low-information observer will see a liberal and a conservative decision on big issues and think that the Court is not deciding based on politics but based on law.

So far I have merely articulated a worry that my informal methods lead me to think is fairly widespread among SCOTUS cognoscenti. I have not offered any concrete evidence for it, and I do not know of any efforts to test for such evidence. I'm confident that I lack the statistical skills to tease out from the long-term pattern of SCOTUS decisions whether a high-profile liberal decision in any Term increases the odds that an unrelated case will be decided in a conservative way (or vice-versa). But let me suggest that this is a sufficiently interesting hypothesis that the kinds of scholars who do have the right skill set might want to test it.

Friday, January 23, 2015

By Michael Dorf
Last October, I received a summons for jury duty. Because it was the middle of the semester, I postponed my service to what should have been winter break, but as it worked out, I ended up with a new summons to appear on the first day of second-semester classes. I had mixed feelings about the prospect of serving on a jury for any substantial length of time. True, it would be disruptive, but not more disruptive for me than for anyone else with a job and other responsibilities. I figured out that in the event that I was chosen for a jury, I could teach some partial classes during the lunch break and make up the others later in the semester. And I thought it would be educational to serve on a jury.

No such luck. I have now been called for jury duty about half a dozen times but each time I am excused--presumably because one or the other side uses a peremptory challenge on me.

That makes some sense, I suppose. If I were a lawyer picking a jury, I would worry about a lawyer or law professor serving on the jury for two reasons. First, I would be concerned that she would hesitate to follow the judge's instructions if she thought that they misstated the law. Second, I would worry that other jurors would defer too much to the ostensible authority figure.

Indeed, every time I have been subject to voir dire, the judge and/or lawyers explore just these issues with me. And every time I say (honestly) that I would accept the judge's instructions and that I would deliberate with my fellow jurors as one of twelve equals. That second part isn't sufficient to allay all doubts, of course. The lawyers and judges might worry that even if the lawyer/law professor-juror did not seek deference from fellow jurors, the fellow jurors might accord such deference anyway. Still, that worry should not rise to the level of cause for excusing me, and so I conclude that this time, as before, one of the lawyers used a peremptory challenge to zap me from the jury.

Oh well. Occasionally lawyers and, less commonly, law professors are actually chosen to serve on juries, but it is a sufficiently infrequent occurrence that I don't expect it to happen to me. Nevertheless, I do have a couple of observations based on my latest bout of jury duty. They concern pre-trial publicity.

I was part of a venire that was assembled to try a locally high-profile criminal case--a former Cornell undergraduate charged with committing rape nearly two years ago, when he was a senior. A majority of the prospective jurors knew something about the case based on pre-trial publicity, and thus much of the voir dire focused on whether people had followed the pre-trial publicity, whether they had formed an opinion based on it, and if so, whether they could set that opinion aside and base their verdict solely on the evidence. As anyone who has seen, conducted, or experienced voir dire would expect, most of the prospective jurors said that they could judge the case based solely on the evidence presented in court, and a few said they had doubts whether they could. Of the doubters, some were probably being truthful, while others may have been seizing an opportunity to say something that would get them out of jury duty.

The pre-trial publicity itself was peculiar in two respects. First, the defense attorney seemed much more concerned about the potential impact of pre-trial publicity than the prosecutor did. In most cases that would make sense. The presumption of innocence and rules of evidence do not apply to journalists, so news coverage can lead people to believe a defendant guilty when court procedures might not. That is in most cases, however. In this case, the particulars of the recent news coverage probably favored the defendant.

A few days before jury selection, a local newspaper ran a story indicating that the defendant had turned down a plea deal for a lesser charge that would have resulted in six months behind bars. If convicted at trial, he faces up to 25 years. A similar story appeared in another local paper. The first story refers to a "document," while the second does not name a source.

I have no idea how news of the rejected plea deal leaked to the press, but it does seem to me that, on balance, this aspect of the pre-trial publicity favors the defense. Jurors often expect a defendant to take the stand and insist on his innocence (notwithstanding his right not to, under the Fifth Amendment), but they may not give that much credence to a defendant's claim of innocence. After all, a person who would commit rape or any other serious offense would surely commit perjury to avoid prison--so a protestation of innocence would not really distinguish a guilty from an innocent defendant. But if jurors know that a defendant turned down a seemingly very good deal, that could tell them that the defendant is so convinced of his innocence that he is willing to risk a very large prison sentence on it. In addition, the plea offer itself tells jurors that the prosecution thinks its own case is pretty weak. Why else offer the defendant such a steep discount on sentencing for giving up his trial right?

In fact, there could be reasons besides the weakness of the case. Perhaps the alleged victim would very much prefer not to have to testify. Even testifying truthfully could be embarrassing and traumatic. So the fact that a defendant turned down six months in jail to face the possibility of 25 years doesn't prove that the defendant is innocent--but it does tend to suggest that the defendant believes either that he is innocent or that for some other reason he has a good chance of an acquittal. Hence, to the extent that a juror learned about the rejected plea deal and thought through its implications, that juror would be more likely to come away thinking the defendant is innocent than she would if she didn't read that story--or if she only read the more common kind of news coverage.

Of course, defense attorneys are so accustomed to thinking of pre-trial publicity as harmful to their clients that it's quite possible that the defense attorney in this case worried about it simply out of habit. Or perhaps he thought that whatever small benefit his client received from the pre-trial publicity regarding the rejected plea deal was outweighed by other pre-trial publicity of the more conventional sort. Both of the stories linked above state that the defense planned to argue that the defendant was so drunk that he lacked the requisite mens rea for the offense, but that is not in fact the defense that is being presented. (The trial started on Wednesday and continues today.) In any event, the voir dire with respect to pre-trial publicity went more or less as it usually does--except in one respect.

That brings me to the second peculiarity of the pre-trial publicity. A good deal of it was just barely pre-trial. When we prospective jurors entered the courtroom, we could see the name of the case--PEOPLE v. MESKO--in big bold letters on a bulletin board in the front of the room. Prior to the judge's arrival on the bench, no one told us to put away our electronic devices. I used my iPad to read an academic paper but it emerged in voir dire that a large number of prospective jurors used their phones and tablets to search for news stories about the case. Many of them said that prior to reading about the case that very morning on their phones or tablets, they did not know anything about it. This was credible. Although the alleged rape was big news in Ithaca in 2013, the jury pool was drawn from the county as a whole, including communities where there was considerably less news coverage. And to be honest, although the case had been big local news in 2013, it was not that big a story overall, especially for those of us (i.e., academics and professionals) who tend to focus on national and international news more than we focus on local news. I myself only vaguely remembered anything I had read about the case, and much of what I've written here is drawn from stories I looked up after I was excused from jury service. Accordingly, it appeared to me that about half of the people who had a potential bias as a result of pre-trial publicity developed that bias the very morning of the trial, as a consequence of the court's own flawed procedures.

The remedy for this last problem seems so obvious that it is hard to believe it hasn't already been implemented universally: As soon as prospective jurors learn what case (or in busier courthouses, cases) they will be examined for, they should be forbidden from looking at any external material about the case (or cases). This measure won't address the bias from media coverage to which potential jurors are exposed before they realize they are potential jurors, but it would address a big chunk of a totally unnecessary problem.

Thursday, January 22, 2015

It is fair to conclude that Governor Andrew Cuomo of New York wants to be President. If Hillary Clinton chooses not to run in 2016, Cuomo would immediately be cast as the favored candidate of the Democratic "centrist" establishment. He certainly has spent a great deal of time and effort trying to prove that he is not a liberal, at least not on economic issues. And he seems especially keen to provoke a confrontation with New York State's teachers and their unions, apparently in the belief that this will make him appear not to be "captured by special interests," or something like that.

At this point, it is becoming rather tiresome to read supposedly non-editorial news reports (like this one) saying that Cuomo's proposals, "atypically for a Democrat, will put him in direct conflict with teachers’ unions." Atypically? It is surely true that more Democrats than not support the positions favored by teachers and their representatives, but the implication in such language (stated explicitly elsewhere) is that there is something courageous and rare going on when a Democrat "defies" the teachers' unions. Plenty of Democrats, including President Obama and his Secretary of Education, have taken positions against the teachers, and there is all kinds of "liberal money" (especially from Silicon Valley) that will not only back anti-teacher Democrats, but that is committed to attacking public education directly. (From the same news article: "Charter school advocates have also spent heavily on lobbying [in New York], with one group, Families for Excellent Schools, spending close to $9 million last year, according to state filings.")

Cuomo, for his part, is happy to take their money: "In the most recent election, Mr. Cuomo raised more than $2 million from supporters of charter schools and school choice, from their companies or from their families. (His campaign raised $47 million over all.) Several gave the maximum allowable contribution, $60,800." This is not even a situation in which it is necessary to figure out the direction of causality between Cuomo's positions and the money that he receives. That is, it could be that the anti-union money is backing Cuomo because he is already a like-minded soul, or he could be shading his position their way in order to capture their money and support. Either way, it is simply not credible to suggest that Cuomo is being politically bold in opposing a core constituency of his party. He can win the nomination without them, and he knows that they will fall in line in a general election.

In short, this is classic Clintonian triangulation: Announce that you are a "different kind of Democrat" who is willing to confront the "powerful teachers' unions" for the good of America at large (and, of course, "for the children"), and then count on gullible journalists and pundits to make it all sound principled.

So much for the politics. What about the substance? In his State of the State speech earlier this week, Cuomo said that he wanted to change the system that New York State uses to evaluate its public school teachers. He was hardly subtle: "They are baloney. Who are we kidding, my friends?" His complaint, such as it is, is that too many teachers received high ratings. Why is that a problem? Cuomo apparently believes that the answer to that question is obvious, but in any event, he does not say more. Even so, it is worth examining what he is complaining about.

With the cooperation of the teachers' unions -- yes, those supposedly intractable blocs that, if we are to believe the hype, oppose all efforts at reform -- New York State recently changed its evaluation system. "The system, enacted into state law in 2010, was created, in part, to make it easier to identify which teachers performed the best so their methods could be replicated, and which performed the worst, so they could be fired." Sounds like the kind of reform that people who bash teachers have been talking about for years, and that supposedly cannot happen in a unionized environment.

So what do the results tell us? "Nine out of 10 New York City teachers received one of the top two rankings in the first year of a new evaluation system that was hailed as a better way of assessing how they perform, according to figures released on Tuesday." This might appear to be good news, but no. Now that the first set of ratings is in, the claim is that they are bogus. The tone is obvious in this strange comparison: "Although very few teachers in the city were deemed not to be up to standards, state officials and education experts said the city appeared to be doing a better job of evaluating its teachers than the rest of New York State."

How do we know that the city is doing a "better job" than the rest of the state? "In the city, only 9 percent of teachers received the highest rating, 'highly effective,' compared with 58 percent in the rest of the state. Seven percent of teachers in the city received the second-lowest rating — 'developing' — while 1.2 percent received the lowest rating, 'ineffective.' In the rest of the state, the comparable figures were 2 percent and 0.4 percent."

Get it? The whole point was to make it easier to fire teachers, but too few of them are being rated as fire-able. If ever there were a result in search of a justification, this is it. The core assumption by people like Cuomo is that there are bad teachers who are being coddled by the system, and they must be found and dealt with. If they are not being found, then the system that was just adopted is "baloney."

For the sake of argument, let us imagine that the new system is not identifying all of the teachers who should not keep their jobs. One possible explanation for this, I suppose, is that the new system somehow allows bad teachers to be protected from reality. Who protects them? "Teachers in the city tended to do best in the more subjective portions of their evaluations, which included principals’ observations of their work. On that portion, principals gave 30.8 percent of teachers the highest rating." So, the logic goes, the problem must be that the principals are not being honest, and they are refusing to tell it like it is.

Why might this happen? The anti-teacher explanation would be that the principals are afraid of the teachers (and the hovering specter of the unions), so that the principals are unwilling to take the heat by giving a low evaluation to a teacher. That might (or might not) have a grain of truth to it. To the extent that it is true, however, it raises two further issues. First, the principals themselves are subject to evaluation, including by higher-level administrators. And since the current atmosphere is very much oriented toward finding "bad apples," with all kinds of political pressure coming from above, it is hardly the case that the principals' incentives are all aligned with giving every teacher a pass.

Second, and much more fundamentally, if the problem is that a system of personal evaluations by principals cannot be trusted, it must be because we think that some significant number of school principals are unwilling to do what they know to be right, because they knuckle under to pressure. If that is true, however, then what are we to imagine would happen if tenure for teachers is abolished and the unions are disbanded? Now, with no pressure from the teachers' side, and principals' incentives all aligned in the same fire-the-teachers direction, we are to believe that the principals will suddenly discover their better angels, and never fire a good teacher without due process?

In a Dorf on Law post a few months ago, I mocked a New York Times op-ed by Frank Bruni, who wrote glowingly of a Colorado school principal's dedication to the "team-building" that is possible in a no-tenure system. The principal said: "Do you have people who all share the same vision and are willing to walk through the fire together?" Bruni then wrote: "Principals with control over that coax better outcomes from students, he [Bruni's source] said." Setting aside the complete absence of logic in that conclusion, the question is why we are to believe that too many principals are patsies to the teachers, but that they suddenly will become paragons of integrity who inspire people to walk through the fire together, without bending to political pressures.

At its most elemental level, the argument against teachers' job protections (for which their unions fought, and which they safeguard, even as they cooperate in trying to improve the system) is based on the simple idea that bad outcomes in schools must be teachers' fault. As I noted in another Dorf on Law post last August, the spokeswoman for an anti-tenure group put it this way: "91 percent of teachers around the state of New York are rated either effective or highly effective, and yet 31 percent of our kids are reading, writing or doing math at grade level." If the children are not succeeding, then the only conclusion that the anti-tenure/anti-union side considers is that the teachers must be blamed and fired.

Consider this comment by the NYS Education Commissioner, in response to the new rankings of teachers: "I’m concerned that in some districts, there’s a tendency to blanket everyone with the same rating. That defeats the purpose of the observations and the evaluations, and we have to work to fix that." Revealingly, the state's top education official tells us that the purpose of the system is to differentiate people. But what if it were true that teaching is a profession that draws in sufficiently dedicated teachers, who do their jobs well? What if the vast majority of them really are as effective as they can be, under the often difficult circumstances that they face, and the ones that are ineffective are already being identified and moved out of the profession?

To be particularly blunt, why is the commissioner so sure that everyone should not have the same rating? And if people like Governor Cuomo are so certain that there is no alternative explanation, where is the evidence? Certainly, there is no evidence showing that states and districts without tenure achieve better outcomes than those that retain job protections for teachers. Yet that glaring lack of evidence does not deter those who are looking for easy scapegoats.

I hope it should be clear, but I will say it anyway: There are surely some bad teachers out there. (There are bad professors. There are bad baristas. There are bad insurance agents. There are bad cops. There are bad ministers. There are ...) And the systems that we use to evaluate teachers should always be scrutinized and revised. This must happen, however, in a way that is not merely a response to political pressure to blame teachers, or that burnishes the presidential credentials of a particularly craven and ethically challenged Democratic politician.

Wednesday, January 21, 2015

In my Verdict column for this week, I discuss a newly-signed New York State bill that will criminalize the tattooing and piercing of one's companion animals (with some exceptions). In the column, I suggest that although the law appears to be well-motivated, it exposes the deep contradiction between the intention to protect nonhuman animals from unnecessary violence, on one hand, and the practices in which most of the population engages (and which the law thoroughly supports and endorses), on the other. The question for this post is what one ought to do, given a legal regime that arbitrarily singles out a small proportion of cruelty against animals to criminalize. Should a conscientious prosecutor simply refuse to pursue animal cruelty at all, or should she prosecute offenders, notwithstanding the fact that they are--in their conduct--doing nothing worse than what the overwhelming majority of the population does when it lawfully participates in utterly unnecessary cruelty to animals through individual, daily decisions to consume animal products?

I am quite torn about this question. Hypocrisy, to my mind, is a serious problem. If the law endorses animal cruelty, as it does, in so many zones, and if most of the population funds animal cruelty, as it does, then who are we, "the people," to be prosecuting and locking up those individuals who happen to violate a law that identifies and stigmatizes some small sphere of animal cruelty which society has arbitrarily decided it will not tolerate? As Gary Francione eloquently said in an editorial at the time, "We're all Michael Vick," and it was accordingly problematic to send Vick to prison and to condemn him, as many have, for engaging in a morally indistinguishable version of what everyone else is doing.

Indeed, it may even be racist to single out Michael Vick for condemnation, because minority communities are more likely to participate in dog-fighting (or cock-fighting), while white communities are content to participate in socially acceptable animal cruelty against pigs (barbecues, bacon, ham) and chickens (slaughtered at 7 weeks old for "chicken" or, in the case of male chicks from laying hens, ground alive or gassed to death at one day old) in the poultry and egg industries. I made an argument along these lines about a different minority practice, Kaporos using chickens among Ultra-Orthodox Jewish communities.

At the same time, I have a competing impulse. First, the people who engage in the animal cruelty prohibited by law may be (and probably are) doing so in addition to rather than instead of engaging in the cruelty in which the rest of the population engages. That is, a person who organizes or attends a dog-fight is almost certainly not otherwise a vegan, so he participates in all of the same animal abuse in which the majority of the population participates as well as in dog fighting. For this reason, it is perhaps appropriate that he receive harsher treatment (by the law and by social stigma) than others.

One problem with this argument, however, is that in any individual case, we might have someone who is withdrawing his participation from other forms of animal abuse, but the law that singles out what he happens to be doing would not take that fact into account. The law, not surprisingly, ignores its own arbitrariness and hypocrisy and would thus ignore the fact that in a particular case, the person who goes to dog fights is also consuming a strictly plant-based diet and might therefore be responsible for far less violence against animals than the non-vegan who prosecutes him (or the society that urges his prosecution).

A second argument for prosecuting the dog-fighter or other participant in illegal animal cruelty, notwithstanding the arbitrariness of the law, is that when a particular kind of violence and injustice is socially accepted, it makes it more difficult for people to fully absorb (and act upon) the moral imperative to stop engaging in that violence and injustice. This is why, for example, we would undoubtedly judge a person who today kept a human slave in his home more harshly than we judge people who lived in the United States in the late Eighteenth and early Nineteenth Centuries, such as Thomas Jefferson, who owned slaves. If, for example, it turned out that Bill Clinton or George W. Bush secretly purchased slaves while serving as President of the United States, it would be difficult to say nice things about their respective presidencies, given this conduct. So long as a form of violent injustice is legally (and socially) accepted, by contrast, the individual practitioners of the injustice may perhaps bear somewhat less personal responsibility for engaging in the practice, because they are simply following the human herd. Once a particular kind of animal abuse has become illegal, then at least that act is arguably no longer a product of moral blindness, because society has made its wishes known. That arguably makes the conduct worse or, at least, more culpable for being illegal.

A third argument for prosecuting people who violate criminal laws against animal cruelty, even as their fellow citizens engage in equally horrific (and worse) cruelty on a daily basis, is that the criminal law is in part about identifying bad characters. People who engage in daily cruelty against animals by consuming the flesh and secretions of tortured living beings, given the realities of today, are not necessarily "bad" people, any more than all of the people who participated in human slavery, during the many years in which slavery was thought a normal and proper institution to embrace in one's life, were necessarily "bad" people. Once an injustice becomes criminal, however, even if what is criminal is only a tiny segment of a far larger injustice, it takes a particular sort of person to commit that injustice. The person who burns his dog with a torch, then, is likely to be a bad person, in a way that a different person who consumes the results of the equally torturous treatment of pigs is not as likely to be a bad person. If, as I have suggested here, the purpose of the criminal law and punishment is to identify the "bad" sorts of people to remove from society, then the violation of an express statute prohibiting an act as "animal cruelty" might help society identify people who really are bad in addition to having done something very wrong to an animal (the latter of which would not distinguish him from 98% of the population).

In response to these two arguments for prosecution, I would note that people are complicated characters and that someone can be very kind and generous in one domain while being very cruel and heartless in another (this is the banality of evil). It is also the case that someone can go from being cruel to being kinder, and the notion that some people (who commit legally prohibited cruelty against animals) are, beyond redemption, evil characters runs contrary to my optimism about the possibility of change. I would therefore be reluctant to say, for example, that Michael Vick is permanently and necessarily a "bad" person in a way that his teammate, who consumes animal corpses and secretions every day and thereby funds hideous violence against animals, is simply not.

A final argument for prosecuting animal cruelty is that the criminal law is a way of affirming that our society continues to hold certain values sacred, even if we do not remotely live up to those values. To decide to refrain from prosecuting all animal cruelty cases--even if the grounds are the utter arbitrariness of the criminal law in this regard--would be, perhaps inadvertently, to send the message that there is no animal cruelty at all that triggers society's outrage at this time. This message may be too depressing to tolerate, and it could have the harmful effect of further entrenching society's existing willingness to tolerate all manner of violence against animals. In a sense, then, hypocrisy here--though sickening and worthy of serious critique--is the (tiny) homage that vice pays to virtue, and it may be important to support that homage, however inadequate and morally arbitrary.

This last argument is, I think, what keeps me from wholeheartedly endorsing the withdrawal of the criminal justice system from issues of violence against animals, though I tend not to support single-issue anti-cruelty initiatives. I continue to believe that the violence that is legally prohibited is morally equivalent to (and no worse than) the violence that is legally tolerated, endorsed, and funded by the vast majority of people. Yet I want people to hold onto the small (and inadequately developed) instinct they have that one should not be cruel to animals, an instinct that I hope will flower with exposure to the truth about the animals whose flesh and secretions most of us unthinkingly consume. I want, in other words, to be able to say "Remember how you supported that law against animal cruelty and were glad that XYZ was prosecuted for torturing a cat? Well, here's some food for thought: your justifiable view of XYZ and animal cruelty has other implications for how we live..." If there are no laws against animal cruelty and no criminal prosecutions of violence against animals, it might be considerably more difficult to begin that important conversation.

Tuesday, January 20, 2015

My latest Verdict column picks up on a point that Professor Dorf made in his Verdict column last week, which is that the recent police slowdown in New York City (which, thankfully, appears to be ending) exposes how vulnerable our civilian leaders might be to lawless actions by the people who have taken on the responsibility of enforcing the laws. After making that initial point, Professor Dorf's column mostly focused on the underlying dispute and the free speech issues surrounding the "tacit strike." My concern was in thinking in more detail about the consequences of what could amount to organized extortion: "You (Mayor de Blasio and any other civilians who are saying and doing things that we don't like) had better change your tune, or else bad things could happen to your city!"

I think that today's column says all that I wanted to say about the importance of civilian control of the police and military. Here, therefore, I will pick up on a related point, which ultimately ties into my discussion of "us versus them" mindsets in other professions beyond the police. Two Sundays ago, in what was overall an excellent op-ed column discussing the blue-versus-de Blasio dispute, Times columnist Nicholas Kristof used an analogy that, I suspect, enraged a fair number of people. (I do not read the comments boards on sites other than Dorf on Law -- even Verdict -- so I have not verified the outrage.) I want to think a bit more about that analogy here.

Former NYC Mayor Rudolph Giuliani has made his usual number of jaw-droppingly dishonest arguments during this dispute. Among the lesser of those statements was this: "I find it very disappointing that you’re not discussing the fact that 93 percent of blacks in America are killed by other blacks. We’re talking about the exception here." Kristof responds: "How would we feel if we were told: When Americans are killed by Muslim terrorists, it’s an exception. Get over it" (emphasis in original). One can almost hear the angry screams: You're comparing cops to terrorists!?!?!? Of course, using the terrorism example against Giuliani is telling, given that he has made an entire post-mayoral career out of his response to a tragedy that, as a statistical matter, still (thank goodness) ranks very low among the causes of death of Americans.

We do not, after all, simply look at the top cause of death, address it until it goes away, and then move on to the next item on the list. We tolerate unbelievable numbers of auto-related fatalities, along with thousands of preventable deaths each year from obesity- and heart-related illnesses, to say nothing of deaths by bullets. The idea that it is not acceptable to be concerned about a statistically less likely problem is the worst kind of sophistry. (But again, Giuliani is saying things like: "We’ve had four months of propaganda, starting with the president, that everybody should hate the police." At this point, what else should we expect from him?)

But Kristof's point is important in a deeper way. There is something about terrorism that makes it important beyond its numbers. Put simply, the reason that we label some brands of violence terrorism in the first place is that it is designed to terrorize people. All you need is one horrible event in a major city (let's say Paris) with what is in other contexts (a day in Baghdad) a relatively low fatality rate, and the whole world takes notice. What makes acts of terror so disturbing is that they are designed to make it impossible for a person to feel safe. This, I think, is the same phenomenon that makes very low-probability events like earthquakes so scary for people. Knowing that the earth under one's feet can literally fall away is no small matter.

The point, therefore, is that police officers who violate the law -- and especially those who appear to target particular groups for harsh and often violent treatment -- undermine people's right to feel safe. In the 1970s and 1980s, the Philadelphia Police Department came under scrutiny (and ultimately was the subject of federal action) for widespread lawless behavior. I recall at the time that my sister, who worked in the city, told me at one point that if she saw someone walking toward her at night on the sidewalk, she felt unsafe -- but if she saw that it was a police officer, she felt even less safe.

What makes this so important is that we know that bad people can do bad things, and that there is only so much that we can do to minimize our likelihood of being harmed by criminals. But the one thing we ought to be able to know is that, if a police officer arrives on the scene, we will not be victimized. Even if we are doing something wrong (like selling loose cigarettes in an outer borough of New York City), we have a right to expect that the police who respond will not make matters worse.

This is also why, I think, we uniquely care about false imprisonment by the state, as opposed to the same thing being done by criminals. If one is being held against one's will by criminals, at least one can think: "I hope the police find me. Then I'll be safe." But if it is the police and other agents of the state who are the wrongdoers, then where is the hope?

Which brings me back to a point that I made in today's Verdict column. I noted there that professional insularity is hardly limited to law enforcement agencies. Judges, legislators, and even football players often act as if the rules of society do not apply to them. I did not mention medical doctors in the column, but the stories that I have heard suggest that many doctors talk openly among themselves about patients being "the enemy." The sense of grievance among doctors about being sued for malpractice -- "How dare you question my competence, when you couldn't even pass a Freshman science class!" -- is similar to complaints that we have heard recently about people supposedly not understanding how difficult it is to be a police officer, which then apparently means that we have no right to punish them when they violate the law.

The most telling comparison, however, is between abusive police officers and abusive priests. Again, the problem arises from the degree of trust that people place in the particular profession. A young boy (or, in some cases, girl) who was being sexually abused by a priest must have been thinking, "Who can I talk to about this to make it stop? This is God's assistant!" No one would believe the child, because of the social esteem in which the clergy is held. (When I was growing up as a minister's kid, people young and old told me that they assumed I would not do bad things. And I was not even the authority figure! Piety by association.)

The larger point, therefore, is that it is legitimate to expect more from people in whom great trust has been placed. As a member of a profession myself, I certainly know what it is like when people outside the profession say ignorant things, and I would resist efforts to impose what I view as unwise rules on me and my colleagues. The trust that has been placed in professors is profoundly important, but it is nothing compared to what we need to be able to expect from doctors, clergy, and especially law enforcement officers. It must be difficult to feel scrutinized all the time, but that is necessarily part of the job. Without it, power can be too easily abused.

Monday, January 19, 2015

In Spike Lee's gripping 1989 film Do the Right Thing (spoiler alert!), Smiley, an intellectually disabled man, periodically appears on screen attempting to sell pictures of Dr. Martin Luther King, Jr. and Malcolm X. The film ends with a scroll of two quotations: one from Dr. King decrying violence as necessarily counterproductive for justice movements; and another from Malcolm X, endorsing violence in "self-defense" against bad people in power.

The film portrayed the choice between their respective philosophies as a difficult one, but for white America, of course it was a no-brainer. White Americans looking for an African American to canonize naturally chose Dr. King, seeing his message of non-violence as much more acceptable than Malcolm X's "by any means necessary." And that explains why the juxtaposed quotes and closing scene--in which Lee's character Mookie starts a riot in response to a police killing of a friend--caused such consternation among white audiences (described astutely here). If widely viewed today, it still would. We are now farther in time from the release of Do the Right Thing than the release was from the assassinations of Malcolm X and Dr. King, but as recent blue-on-Black killings tragically illustrate, its themes remain highly salient.

For today's commemoration of the life and work of Dr. King, I'd like to ask a question about the framing of the choice between him and Malcolm X. If white America was going to canonize a civil rights saint, the choice between Dr. King and Malcolm X was indeed easy. But why were those the only two choices? There was another possibility, one that, at least on the surface, would have seemed more logical still: namely, Thurgood Marshall, aka "Mr. Civil Rights." I'll make the case for Marshall as a more fitting choice, and then offer a few hypotheses about why we settled on Dr. King instead.

I'll begin with a digression into another film, the recent Selma. The film does not include actual speeches by Dr. King (because of copyright issues), so director Ava DuVernay and her team created simulacra of them. When interviewed by Terry Gross on Fresh Air recently, DuVernay explained that she boiled down Dr. King's message in his Selma speech to the idea

that racism is a lie that's been told to white people to divert their attention from the challenges in their own life by the powers that be, that rich white men indoctrinate racism into poor white men to make them look at black people and not at the powerful white men, who might not be helping them as they should.

And indeed, that idea plays a central role in Dr. King's actual speech at the conclusion of the Selma march. He said:

the segregation of the races was really a political stratagem employed by the emerging Bourbon interests in the South to keep the southern masses divided and southern labor the cheapest in the land. You see, it was a simple thing to keep the poor white masses working for near-starvation wages in the years that followed the Civil War. Why, if the poor white plantation or mill worker became dissatisfied with his low wages, the plantation or mill owner would merely threaten to fire him and hire former Negro slaves and pay him even less. Thus, the southern wage level was kept almost unbearably low.

Toward the end of the Reconstruction era, something very significant happened. That is what was known as the Populist Movement. The leaders of this movement began awakening the poor white masses and the former Negro slaves to the fact that they were being fleeced by the emerging Bourbon interests. Not only that, but they began uniting the Negro and white masses into a voting bloc that threatened to drive the Bourbon interests from the command posts of political power in the South.

To meet this threat, the southern aristocracy began immediately to engineer this development of a segregated society. I want you to follow me through here because this is very important to see the roots of racism and the denial of the right to vote. Through their control of mass media, they revised the doctrine of white supremacy. They saturated the thinking of the poor white masses with it, thus clouding their minds to the real issue involved in the Populist Movement. They then directed the placement on the books of the South of laws that made it a crime for Negroes and whites to come together as equals at any level. And that did it. That crippled and eventually destroyed the Populist Movement of the nineteenth century.

Is it true that rich powerful white people inculcated racism in poor whites to blind them to their own economic interests? Yes, to some degree. But it's also true that poor and working-poor whites often took racism well beyond the interests of rich powerful white people. The events surrounding the Lake County, Florida trials for the alleged 1949 rape of a white woman--as recounted in Gilbert King's terrific 2012 book Devil in the Grove--offer an interesting counterpoint. The chief villain in the story is the virulently racist white sheriff Willis McCall, but McCall is largely a symptom of the broader society. As the book explains, the white owners of the citrus groves depended on cheap African American labor. To the extent that Jim Crow deprived African Americans and poor whites of the means to resist economic exploitation, the grove owners benefited from racism. But when the white mob rampaged in the African American community, the wealthy grove owners were upset, because they feared an exodus of African Americans that would leave them with a shortage of cheap labor. The white economic elites wanted enough racism to permit exploitation but not so much as to result in murder and flight.

Enter Thurgood Marshall, then at the height of his power as a lawyer, to defend the African American men who were falsely accused, while simultaneously litigating the cases that would ultimately become Brown v. Board of Education. For the most part, Devil in the Grove tells the story of the "Groveland Boys," which is more or less a mid-twentieth century reprise of the Scottsboro Boys case. But the book also describes the career and views of Marshall, including the distance that Marshall deliberately placed between the NAACP and more left-leaning supporters of civil rights. As portrayed in the book, Marshall acted partly strategically in order to avoid antagonizing the strongly anti-communist FBI under J. Edgar Hoover, but it is not just that. Marshall was fundamentally a liberal. Dr. King appealed to liberals and was not at all illiberal, but his vision of social justice was, to a greater extent than Marshall's, redistributive.

Indeed, it is by now a well-worn criticism of American post-civil-rights-era culture that we have sanitized Dr. King's vision by selectively focusing on a few lines from his "I Have a Dream" speech, thereby enabling even white conservatives to embrace him as an opponent of affirmative action--ignoring his views about economics, war, and much more. It would also require considerable amnesia to make such a figure out of Marshall, to be sure, but Marshall's faith in the rule of law and his anti-communism ought to have made him a more natural candidate for canonization than Dr. King.

And yet it didn't work out that way. We have an airport, an architecturally uninteresting government building in D.C., and some scholarships named after Marshall, but Dr. King gets an entire day, the equal of Washington and Lincoln combined. Why?

No single factor explains it all, but I'll point to three. First, Marshall was a great lawyer but Dr. King was a transcendent rhetorician. In the American canon of great political speaking, Dr. King stands alone; only Lincoln, FDR, and JFK even warrant mention in the same conversation.

Second, Dr. King died young, and so he could be invoked for almost any position, regardless of where he actually would have come down on that issue. The law establishing Dr. King's birthday as a national holiday was signed in 1983, when Marshall was still an active Justice on the Supreme Court and, as a consistently liberal vote, still a target of attacks from the right. In the 1980s, it would not have been possible to treat Marshall as a trans-partisan hero, whereas Dr. King's absence permitted the appropriation of his legacy for that purpose.

Third, although Malcolm X was killed more than three years earlier than Dr. King, King's assassination in 1968, together with RFK's assassination a couple of months later and with the growing urban unrest of the mid to late 1960s, led many Americans to wonder whether the social fabric was coming undone. The violent crime spike of the late 1960s did not seriously begin to subside until the early 1990s, and thus the crucial frame for canonization during the relevant period was violence. Of course, Thurgood Marshall opposed violence too, but non-violence was central to the message of Dr. King. He, Thoreau, and Gandhi are more closely associated with non-violent politics for social change than anyone else.

Put cynically, the decision by white America to canonize Dr. King was driven as much by fear of the alternative--Black nationalism and street crime--as by agreement with his message. That's not all that was at stake in the decision by President Reagan to sign the King holiday bill. But it was a big piece of it. Understanding what was really at stake in the decision to canonize Dr. King is perhaps a useful step towards really understanding his actual message.

Friday, January 16, 2015

The Supreme Court cert grant in the SSM cases from the 6th Circuit included two rephrased questions presented: "1) Does the Fourteenth Amendment require a state to license a marriage between two people of the same sex? 2) Does the Fourteenth Amendment require a state to recognize a marriage between two people of the same sex when their marriage was lawfully licensed and performed out-of-state?"

An astute observer emailed me asking whether this is not a bit odd. After all, one might think that the answer to both questions is no, so long as the state doesn't license or recognize any marriages, same-sex or opposite-sex.

But in fact, the states all do license and recognize opposite-sex marriages, so the objection is academic. Moreover, under the Court's fundamental rights jurisprudence, states probably cannot simply deny marriage to everyone.

Accordingly, I don't read much significance into the Court's rephrasing of the cert questions. It seems to me that the Court rephrased in such a way as to make clear that in addressing both questions, lawyers are free to (and expected to) address both equal protection and substantive due process issues.

It will come as no surprise to regular readers of this blog that I am not optimistic about the legislation likely to emerge from the new Congress. However, I do see one possible salutary outcome: Perhaps Republicans in the Senate will "go nuclear" and abolish the filibuster for ordinary legislation.

When the Democrats abolished the filibuster for executive appointments and lower court judges in 2013, Republicans cried foul. Senators Alexander and McConnell warned, in essence, that what goes around comes around. Now that the Republicans have their Senate majority but fewer than 60 seats, it will be tempting for them to follow Harry Reid's lead and finish off the filibuster for ordinary legislation. (They have no incentive to eliminate it for Supreme Court nominees during a Democratic Presidency; more about the Supreme Court in a postscript below.) Democrats should be sanguine about this possibility.

The filibuster is bad for small-d democracy for the obvious reasons. The point is not that the 60-votes-for-cloture rule gives rights to a minority. Constitutional democracy is not simple majoritarianism. It is consistent with, and indeed often requires, respect for minority rights. But there is no reason to think that this particular protection for minority rights--allowing a numerical minority to block legislation in a body that already overwhelmingly overrepresents small-state and rural interests--is needed. I wouldn't necessarily say that the current cloture rule is unconstitutional: Article I makes each house the arbiter of its own procedures, after all, and a supermajority requirement for cloture has been with us for a very long time. But the fact (if it is a fact) that the current cloture rule is constitutional does not mean it's a good idea.

Granting that allowing a simple majority to end debate (except perhaps for a conventional "talking" filibuster) would be good for small-d democracy, might it nonetheless be bad for Big-D Democrats? The short answer is no. Allowing the Senate to pass bills with only 51 votes would still leave President Obama with a veto, which can only be overridden by 67 votes in the Senate. So as a practical matter, little changes for the next two years.

To be sure, presidents don't like to have to use their veto power. They think it makes them look weak. Accordingly, since the 2010 midterms, Senate Democrats have protected President Obama from needing to veto more than a couple of bills. Republican abolition of the filibuster for ordinary legislation would necessitate more vetoes, but a second-term president in his last two years in office has little to lose on that score. Obama's threatened veto of a bill approving the Keystone pipeline indicates that he has reached that same conclusion.

What about the long run? Presumably some day there will be a Republican president and Republican majorities in both the House and Senate, but with fewer than 60 Republican Senators. Do Democrats have more to lose from being unable to block legislation in that scenario than they have to gain from the ability to enact legislation in a future when there is a Democratic President with Democratic majorities in both the House and Senate? That question is in some sense unanswerable, of course, but other things being equal, Republicans benefit more from gridlock than do Democrats because Republicans are generally more hostile to regulation.

Here too there are subtleties. Democrats stand to lose when Congress repeals existing laws, not just when it fails to enact new laws, and so making it easier for Congress to legislate also creates risks of excessive deregulation. But on the whole I think those risks are outweighed by the risks of gridlock. Even without repealing existing laws, a blocking minority can gut those laws by denying funding for enforcement. So over the long run, it seems to me that Democrats benefit more than Republicans from abolition of the filibuster for ordinary legislation. Accordingly, if Harry Reid is a good long-term poker player, he will obstruct Republicans at every turn, thus goading the Republicans into abolishing the filibuster in a fit of pique.

Postscript: One potential consequence of abolishing the filibuster for ordinary legislation is that political barriers will be lowered for abolishing the filibuster for Supreme Court nominees the next time that a President nominates a Justice for a Senate with a sub-60-vote majority of his party. It seems to me that this would be more or less a wash. It would make it easier for Republicans to nominate conservatives and for Democrats to nominate liberals, at least when they control the Senate. In his 2007 book The Next Justice, Chris Eisgruber (now President of Princeton) argued that retaining the filibuster made the most sense for judicial (especially Supreme Court) nominees because it pushed presidents to name moderates, which is desirable in an ostensibly apolitical branch. That may be true in theory, but the last decade or so suggests that the possibility of filibustering Supreme Court nominees will eventually lead to gridlock. Indeed, quite apart from the cloture rule, the trend line for recent nominations suggests that it may be impossible for a president to get anybody confirmed by a Senate controlled by the other party, and so abolishing the filibuster for the Supreme Court might be needed just to maintain nine Justices on the Court.

Thursday, January 15, 2015

Remember Mitt Romney? He was the guy who dismissed the 47% of Americans who, per Romney, "believe that they are victims, that they are entitled to health care," and who will never "take personal responsibility and care for their lives." Well, he might be back, if the political rumors of the week are to be believed. Given that Romney's comments about "the 47%" were probably the biggest gaffe in his gaffe-prone 2012 presidential campaign, it is a mild coincidence that this week also saw the publication of a study that completely undermines the conservative mythology about the people with no "skin in the game," that is, who supposedly pay no taxes.

Romney is by no means the only conservative who has tried to misuse that statistic, and he will surely not be the last. Here, therefore, I will briefly summarize that politically explosive distortion, and then I will describe the new study of state-level taxes that was released yesterday.

The infamous 47% statistic actually emerged in early 2010, when conservatives discovered that only 53% of the population had a positive federal income tax liability in 2009. As I (and many others) wrote at the time, there were multiple problems with jumping from that statistic to the conclusion that "almost half of the people pay no taxes." The year 2009 was the first and worst year of the Great Recession, meaning that a lot of people who would have been paying federal income taxes were instead unemployed and thus had no income to tax. Those non-paying 47% also included retirees, who generally would not be expected to be paying income taxes in any case.

More to the point of this post, the statement that "x% of taxpayers pay zero federal income tax in a given year" may be true, but it ignores the other taxes that people pay. Even at the federal level, the personal income tax constitutes less than half of government revenues. In the most recent year available, 2013, the personal income tax constituted (purely coincidentally) 47% of federal tax revenues. Everyone who earns even a dollar pays federal payroll taxes. And if, as some conservative economists assert, the corporate income tax is passed on to workers in the form of lower wages, then workers paid the $280 billion collected from that ever-shrinking source of revenues.

In my initial Dorf on Law post in 2010, responding to the distorted claims about the 47% who were supposedly not paying taxes, I noted that most people "do pay mostly-regressive state and local taxes." And that is where yesterday's report comes in. The Institute on Taxation and Economic Policy (ITEP) is a liberal, nonpartisan group that (along with its sibling organization, Citizens for Tax Justice) provides extremely high-quality numerical analyses of tax policy. This is the fifth year that ITEP has issued "Who Pays?," a report that summarizes the tax systems of all 50 states and D.C. The report makes for depressing reading for anyone who believes in progressive taxation, and it raises some interesting questions about some conservative talking points.

The bottom line of the report is that state taxes are, indeed, "mostly regressive." Indeed, if you take each state's tax system as a whole, there is not a single state in the country that is running a progressive tax system. According to the study, this year the bottom 20% of income-earners nationwide will pay an average of 10.9% of their pretax incomes in state and local taxes, while the top 1% will pay 5.4% on average. As I noted in my 2010 Dorf on Law post, the combined impact of federal and state taxes adds up to a proportional system, in which the poorest and richest all pay the same rates of taxes. (That is bad enough, but there are further reasons beyond the scope of this post to believe that the measured tax rates for upper-income people are seriously overstated.)
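The combined-proportionality point is, at bottom, simple arithmetic: a progressive federal schedule stacked on top of a regressive state and local schedule can net out to a nearly flat overall rate. A minimal sketch makes the mechanism concrete; every number below is an invented, purely illustrative assumption, not a figure from the ITEP report or my 2010 post.

```python
# Hypothetical illustration (NOT ITEP's methodology or data): when the
# federal effective rate rises with income while the state/local rate
# falls with income, the combined burden can come out roughly flat,
# i.e., proportional. All rates are invented for the arithmetic only.

# (federal effective rate, state/local effective rate) as fractions of income
groups = {
    "bottom 20%": (0.16, 0.109),  # lighter federal burden, heavier state/local burden
    "middle 20%": (0.18, 0.094),
    "top 1%":     (0.22, 0.054),  # heavier federal burden, lighter state/local burden
}

for name, (federal, state_local) in groups.items():
    # The two schedules offset each other almost exactly in this example.
    print(f"{name}: combined effective rate = {federal + state_local:.1%}")
```

With these made-up rates, every group ends up paying roughly 27% of income overall, even though each layer of the system, viewed alone, looks sharply tilted.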

The least regressive state is Delaware, where the bottom 20% pay 5.5% and the top 1% pay 4.8% -- still regressive, of course. The most regressive state is Washington, where the tax rates are 16.8% for the bottom fifth and 2.4% for the top 1% of earners. To Washington's credit, the Democrats there are at least trying to make their state's taxes less regressive, but the best way to do so is by adopting a state income tax, which voters there rejected in a ballot initiative a few years ago.

Interestingly, there is a state that imposes an even lower tax rate on its top 1% than Washington does. Florida's aggregate tax rate on the top percentile is 1.9% (versus a 12.9% rate on the bottom fifth). This is so close to zero that I wondered whether Florida's affluent residents have enough "skin in the game" to be good citizens. For those readers who have been spared this particular bit of sophistry, there is a claim among many conservatives that everyone should have to pay taxes, because otherwise they will not be vigilant in making sure that their elected representatives are spending tax revenues wisely. The further implication is that people with low or no tax liabilities will simply ignore the government, because it is ignoring them.

I find this argument laughable, as I have explained here and here. Still, these data provide an opening for some empirical testing. Do rich people try to influence state governments more in Delaware than in Washington or Florida? If anyone can find a correlation, please let me know. Color me skeptical.